RunsOn is now handling more than 400k jobs per day

New achievement unlocked: RunsOn is now handling more than 400k jobs per day across all users (at least those with telemetry enabled)! 🎉
In recent discussions about GitHub Actions runners, there’s been some debate around the true cost and complexity of self-hosted solutions. With blog posts like “Self-hosted GitHub Actions runners aren’t free” and various companies raising millions to build high-performance CI clouds, it’s important to separate fact from fiction.
It’s true that traditional self-hosted GitHub Actions runner approaches come with real challenges.
However, these challenges aren’t inherent to self-hosted runners themselves. They’re symptoms of inadequate tooling for deploying and managing them.
At RunsOn, we’ve specifically designed our solution to deliver the benefits of self-hosted GitHub Actions runners without the traditional downsides.
While some providers claim to eliminate maintenance, they’re actually just moving your workloads to their infrastructure, creating new dependencies and security concerns. RunsOn takes a fundamentally different approach: your runners stay under your control, in your own AWS account.
When third-party providers advertise “2x cheaper” services, they’re comparing themselves to GitHub-hosted runners, not to true self-hosted solutions. With RunsOn, you pay AWS directly for the underlying instances, in your own account.
Many third-party solutions gloss over a critical fact: your code and secrets are processed on their infrastructure. RunsOn keeps your code and secrets within your own AWS account.
High-performance CI doesn’t require VC-funded cloud platforms.
The often-cited “human cost” of self-hosted runners assumes significant ongoing maintenance. With RunsOn, there is no infrastructure to babysit.
Let’s address some specific claims from recent competitor blog posts:
Reality: RunsOn handles all AMI maintenance for you, with regularly updated images that are 100% compatible with GitHub’s official runners. If you want full control, we also provide templates for building custom images.
Reality: RunsOn uses fully managed AWS services and ephemeral runners that are automatically recycled after each job. There’s no infrastructure to babysit.
Reality: With RunsOn, you only need to change one line in your workflow files, replacing `runs-on: ubuntu-latest` with your custom labels. No GitHub Actions expertise required.
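For illustration, the change could look like this. A minimal sketch: `2cpu-linux-x64` is just one possible runner spec, the build step is a placeholder, and the label format matches the examples in the release notes below:

```yaml
jobs:
  build:
    # before: runs-on: ubuntu-latest
    # after, pointing the job at your RunsOn runners:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make test
```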
Reality: RunsOn provides high-performance CI within your own AWS account, with benchmarks showing 30% faster builds for x64 workloads than GitHub-hosted runners and full compatibility with the latest instance types and architectures.
For arm64 workloads, AWS is currently the leader in CPU performance.
Self-hosted GitHub Actions runners can be complex and costly if you’re using the wrong approach. But with RunsOn, you get all the benefits of self-hosting (cost savings, performance, security) without the traditional drawbacks.
Before making assumptions about the “true cost” of self-hosted runners, evaluate solutions like RunsOn that have specifically solved these challenges. Your developers, security team, and finance department will all thank you.
Check out the new documentation pages for more details.
Now for the full release notes:
Support for EFS, tmpfs, and an ECR ephemeral registry for fast Docker builds. Also some bug fixes.
A shared EFS volume is now mounted at `/mnt/efs` if the `extras` label includes `efs`. Useful to share artefacts across job runs, with classic filesystem primitives.

```yaml
jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - run: df -ah /mnt/efs
        # 127.0.0.1:/  8.0E   35G  8.0E   1% /mnt/efs
```
For example, you can maintain git mirrors on the EFS volume and use them to speed up checkouts across job runs:

```yaml
env:
  MIRRORS: "https://github.com/PostHog/posthog.git"
  # can be ${{ github.ref }} if same repo as the workflow
  REF: main

jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - name: Setup / Refresh mirrors
        run: |
          for MIRROR in ${{ env.MIRRORS }}; do
            full_repo_name=$(echo $MIRROR | cut -d/ -f4-)
            MIRROR_DIR=/mnt/efs/mirrors/$full_repo_name
            mkdir -p "$(dirname $MIRROR_DIR)"
            test -d "${MIRROR_DIR}" || git clone --mirror ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} "${MIRROR_DIR}"
            ( cd "$MIRROR_DIR" && \
              git remote set-url origin ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} && \
              git fetch origin ${{ env.REF }} )
          done
      - name: Checkout from mirror
        run: |
          git clone file:///mnt/efs/mirrors/PostHog/posthog.git --branch ${{ env.REF }} --single-branch --depth 1 upstream
```
The runners can also provision an ephemeral ECR registry, which you can use with the `type=registry` BuildKit cache instruction for fast Docker builds. If the `extras` label includes `ecr-cache`, the runners will automatically set up Docker credentials for that registry at the start of the job, and the cache registry reference is exposed in the `RUNS_ON_ECR_CACHE` environment variable, as used below.

```yaml
jobs:
  ecr-cache:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=ecr-cache
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v4
        env:
          TAG: ${{ env.RUNS_ON_ECR_CACHE }}:my-app-latest
        with:
          context: .
          push: true
          tags: ${{ env.TAG }}
          cache-from: type=registry,ref=${{ env.TAG }}
          cache-to: type=registry,ref=${{ env.TAG }},mode=max,compression=zstd,compression-level=22
```
Support for setting up a `tmpfs` volume (size: 100% of available RAM, so only to be used on high-memory instances), and binding the `/tmp`, `/home/runner`, and `/var/lib/docker` folders on it. `/tmp` and `/home/runner` are mounted as overlays, preserving their existing content. This can speed up some IO-intensive workflows. Note that if `tmpfs` is active, instances with ephemeral disks won't have those mounted, since it would conflict with the `tmpfs` volume.

```yaml
jobs:
  with-tmpfs:
    runs-on: runs-on=${{ github.run_id }},family=r7,ram=16,extras=tmpfs
    steps:
      - run: df -ah /mnt/tmpfs
        # tmpfs    16G  724K   16G   1% /mnt/tmpfs
      - run: df -ah /home/runner
        # overlay  16G  724K   16G   1% /home/runner
      - run: df -ah /tmp
        # overlay  16G  724K   16G   1% /tmp
      - run: df -ah /var/lib/docker
        # tmpfs    16G  724K   16G   1% /var/lib/docker
```
You can obviously combine options, e.g. `extras=efs+tmpfs+ecr-cache+s3-cache` is a valid label 😄
Until now, when an instance had locally attached NVMe SSDs available, they would be automatically formatted and mounted so that the `/var/lib/docker` and `/home/runner/_work` directories would end up on the local disks. Since a lot of stuff (caches, etc.) seems to end up within the `/home/runner` folder itself, the agent now uses the same strategy as for the new `tmpfs` mounts above: the whole `/home/runner` folder is mounted as an overlay on the local disk volume, as well as the `/tmp` folder, while `/var/lib/docker` remains mounted as a normal filesystem on the local disk volume. Fixes #284.
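To see the effect, you can inspect the mounts from within a job. A minimal sketch, assuming an instance family with local NVMe storage (here `m6id`, chosen purely as an example); exact sizes and device names will vary:

```yaml
jobs:
  with-local-nvme:
    runs-on: runs-on=${{ github.run_id }},family=m6id,ram=16
    steps:
      # /home/runner and /tmp should appear as overlay mounts backed by the local NVMe volume
      - run: df -ah /home/runner /tmp
      # /var/lib/docker should be a regular filesystem on the local NVMe volume
      - run: df -ah /var/lib/docker
```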
Other changes:

- RunsOn now installs its files under the `/runs-on` folder on Linux. More coherent with Windows (`C:\runs-on`), and avoids polluting the `/opt` folder.
- Fix `app_version` in logs (was previously an empty string due to an incorrect env variable being used in v2.8.1).
- When `Private` mode is set to `only`, no longer enable public IP auto-assignment in the launch templates. Thanks @temap!

I’m excited to announce that RunsOn is now a partner of StepSecurity to address a critical gap in the software supply chain security landscape: protecting CI/CD runners from supply chain attacks.
While corporate laptops and production servers typically have robust security monitoring in place, CI/CD runners often lack equivalent protection despite handling sensitive information like secrets for package registries and cloud environments. This oversight has contributed to significant supply chain attacks in recent years, including the SolarWinds and Codecov breaches.
Traditional security monitoring solutions aren’t effective for CI/CD runners due to their ephemeral nature and lack of context for correlating events with specific workflow runs.
StepSecurity addresses this gap with security monitoring specifically designed for CI/CD environments. Their Harden-Runner is a runtime security agent that monitors outbound network traffic and detects file tampering during workflow runs.
The integration is now available to all RunsOn users. To get started:
1. Obtain a StepSecurity API key (enterprise license required, or start a free trial at stepsecurity.io)
2. Configure RunsOn with your StepSecurity API key using the `IntegrationStepSecurityApiKey` stack parameter
3. Use the StepSecurity images in your workflows: `ubuntu24-stepsecurity-x64` or `ubuntu24-stepsecurity-arm64` (see the example below)
Learn more in the dedicated documentation.
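As an illustration, a job using one of these images could look like the following. This is a minimal sketch that assumes the image is selected via the `image=` label, consistent with the label format used in the examples above, and uses a placeholder build command:

```yaml
jobs:
  hardened-build:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,image=ubuntu24-stepsecurity-x64
    steps:
      - uses: actions/checkout@v4
      # placeholder build command; with the StepSecurity image, the job runs under Harden-Runner monitoring
      - run: make build
```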
👋 v2.6.4 and v2.6.5 have been released over the last few weeks, with the following changes.
Note: v2.6.6 has been released to fix an issue with the `VpcEndpoints` stack parameter.
Optimized GPU images, a new `VpcEndpoints` stack parameter, and the ability to specify custom instance tags for custom runners.
Note: there appear to be some issues with the new VPC endpoints. I'm on it! If you need that feature, please hold on to your current version of RunsOn.
- `ubuntu22-gpu-x64` and `ubuntu24-gpu-x64`: 1-1 compatibility with GitHub base images + NVIDIA GPU drivers, CUDA toolkit, and container toolkit.
- New `VpcEndpoints` stack parameter (fixes #213), and reorganized template params. Note that the EC2 VPC endpoint was previously automatically created when `Private` mode was enabled. This is no longer the case, so make sure you select the VPC endpoints that you need when you update your CloudFormation stack.
- Ability to specify custom instance tags for custom runners: tags get the `runs-on-` prefix, and keys and values will be sanitized according to AWS rules.

CLI 0.0.1 released, fix for Magic Cache, fleet objects deletion.