
The true cost of self-hosted GitHub Actions - Separating fact from fiction

In recent discussions about GitHub Actions runners, there’s been some debate around the true cost and complexity of self-hosted solutions. With blog posts like “Self-hosted GitHub Actions runners aren’t free” ↗ and various companies raising millions to build high-performance CI clouds, it’s important to separate fact from fiction.

The Complexity Narrative

It’s true that traditional self-hosted GitHub Actions runner approaches come with challenges:

  • Operational overhead: Maintaining AMIs, monitoring infrastructure, and debugging API issues
  • Hidden costs: Infrastructure expenses, egress charges, and wasted capacity
  • Human costs: Engineering time spent on maintenance rather than product development

However, these challenges aren’t inherent to self-hosted runners themselves. They’re symptoms of inadequate tooling for deploying and managing them.

RunsOn: Self-Hosted Without the Headaches

At RunsOn, we’ve specifically designed our solution to deliver the benefits of self-hosted GitHub Actions runners without the traditional downsides:

1. Truly Maintenance-Free

While some providers claim to eliminate maintenance, they’re actually just moving your workloads to their infrastructure—creating new dependencies and security concerns. RunsOn takes a fundamentally different approach:

  • Battle-tested CloudFormation stack: Deploy in 10 minutes with a simple template URL.
  • Zero Kubernetes complexity: Unlike Actions Runner Controller (ARC), no complex cluster management.
  • Scales to zero: No jobs in queue? No cost. When a job comes in, RunsOn spins up a new runner and starts the job in less than 30 seconds.
  • Automatic updates: Easy, non-disruptive upgrade process.
  • No manual AMI maintenance: Regularly updated runner images.

2. Cost Transparency

When third-party providers advertise “2x cheaper” services, they’re comparing themselves to GitHub-hosted runners—not to true self-hosted solutions. With RunsOn:

  • Up to 90% cost reduction compared to GitHub-hosted runners.
  • AWS Spot instances provide maximum savings (up to 75% cheaper than on-demand).
  • Use your existing AWS credits and committed spend.
  • No middleman markup on compute resources.
  • Transparent licensing model, with a low fee irrespective of the number of runners or job minutes you use.

3. Security Without Compromise

Many third-party solutions gloss over a critical fact: your code and secrets are processed on their infrastructure. RunsOn:

  • 100% self-hosted in your AWS account—no code or secrets leave your infrastructure.
  • Ephemeral VM isolation with one clean runner per job.
  • Full audit capabilities through your AWS account.
  • No attack vectors from persistent runners.

4. Enterprise-Grade Performance

High-performance CI doesn’t require VC-funded cloud platforms:

  • 30% faster builds than GitHub-hosted runners.
  • Flexible instance selection with x64, ARM64, GPUs, and Windows support.
  • Unlimited concurrency (only limited by your AWS quotas).
  • Supercharged caching with VPC-local S3 cache backend (5x faster transfers).
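To make the caching point concrete, here is a minimal sketch of what a cache-backed job can look like. It assumes the s3-cache extra mentioned later in this post together with the standard actions/cache action; adjust the labels to your own configuration:
jobs:
  cached-build:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      # standard actions/cache usage; with the s3-cache extra, transfers are
      # served from an S3 bucket that lives inside your own VPC
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      - run: npm ci && npm test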

The “Human Cost” Reality

The often-cited “human cost” of self-hosted runners assumes significant ongoing maintenance. With RunsOn:

  • 10-minute setup with close to zero AWS knowledge required.
  • No ongoing maintenance burden for your DevOps team. Upgrades are one click away, and can be performed at your own pace.
  • No infrastructure to babysit or weekend emergency calls.
  • No complex debugging of runner API issues.

Breaking Down the Claims

Let’s address some specific claims from recent competitor blog posts:

Claim: “Maintaining AMIs is time-consuming and error-prone”

Reality: RunsOn handles all AMI maintenance for you, with regularly updated images that are 100% compatible with GitHub’s official runners. If you want full control, we also provide templates for building custom images.

Claim: “Self-hosting means babysitting infrastructure”

Reality: RunsOn uses fully managed AWS services and ephemeral runners that are automatically recycled after each job. There’s no infrastructure to babysit.

Claim: “You’ll need to become an expert in GitHub Actions”

Reality: With RunsOn, you only need to change one line in your workflow files—replacing runs-on: ubuntu-latest with your custom labels. No GitHub Actions expertise required.
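For illustration, a minimal before/after sketch (the runner labels below are just an example; use whatever runner spec you configured):
jobs:
  build:
    # before: runs-on: ubuntu-latest
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make test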

Claim: “High-performance CI requires third-party infrastructure”

Reality: RunsOn provides high-performance CI within your own AWS account, with benchmarks showing 30% faster builds for x64 workloads than GitHub-hosted runners and full compatibility with the latest instance types and architectures.

For arm64 workloads, AWS (with its Graviton instances) is currently the leader in CPU performance.

Conclusion

Self-hosted GitHub Actions runners can be complex and costly if you're using the wrong approach. But with RunsOn, you get all the benefits of self-hosting (cost savings, performance, security) without the traditional drawbacks.

Before making assumptions about the “true cost” of self-hosted runners, evaluate solutions like RunsOn that have specifically solved these challenges. Your developers, security team, and finance department will all thank you.

Get started with RunsOn today!

🚀 v2.8.2 is out, with EFS, Ephemeral Registry support, and YOLO mode (tmpfs)!

Check out the new documentation pages for EFS, the ephemeral registry, and tmpfs support.

Now for the full release notes:

📝 v2.8.2

Support for EFS, TMPFS, and ECR ephemeral registry for fast docker builds. Also some bug fixes.

What's changed

EFS

  • The embedded networking stack can now create an Elastic File System (EFS), and runners will auto-mount it at /mnt/efs if the extras label includes efs. Useful for sharing artefacts across job runs using classic filesystem primitives.
jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - run: df -ah /mnt/efs
      # 127.0.0.1:/      8.0E   35G  8.0E   1% /mnt/efs
📝 Example use case: maintaining mirrors. For instance, this can be used to maintain local mirrors of very large GitHub repositories and avoid long checkout times on every job:
env:
  MIRRORS: "https://github.com/PostHog/posthog.git"
  # can be ${{ github.ref }} if same repo as the workflow
  REF: main

jobs:
  with-efs:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=efs
    steps:
      - name: Setup / Refresh mirrors
        run: |
          for MIRROR in ${{ env.MIRRORS }}; do
            full_repo_name=$(echo $MIRROR | cut -d/ -f4-)
            MIRROR_DIR=/mnt/efs/mirrors/$full_repo_name
            mkdir -p "$(dirname $MIRROR_DIR)"
            test -d "${MIRROR_DIR}" || git clone --mirror ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} "${MIRROR_DIR}"
            ( cd "$MIRROR_DIR" && \
              git remote set-url origin ${MIRROR/https:\/\//https:\/\/x-access-token:${{ secrets.GITHUB_TOKEN }}@} && \
              git fetch origin ${{ env.REF }} )
          done
      - name: Checkout from mirror
        run: |
          git clone file:///mnt/efs/mirrors/PostHog/posthog.git --branch ${{ env.REF }} --single-branch --depth 1 upstream

Ephemeral registry

  • Support for an ephemeral ECR registry: RunsOn can now automatically create an ECR repository that acts as an ephemeral registry for pulling/pushing images and cache layers from your runners. Especially useful with the type=registry BuildKit cache instruction. If the extras label includes ecr-cache, the runners will automatically set up Docker credentials for that registry at the start of the job.
jobs:
  ecr-cache:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,extras=ecr-cache
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v4
        env:
          TAG: ${{ env.RUNS_ON_ECR_CACHE }}:my-app-latest
        with:
          context: .
          push: true
          tags: ${{ env.TAG }}
          cache-from: type=registry,ref=${{ env.TAG }}
          cache-to: type=registry,ref=${{ env.TAG }},mode=max,compression=zstd,compression-level=22

Tmpfs

Support for setting up a tmpfs volume (sized at 100% of available RAM, so only to be used on high-memory instances) and binding the /tmp, /home/runner, and /var/lib/docker folders onto it. /tmp and /home/runner are mounted as overlays, preserving their existing content.

This can speed up some IO-intensive workflows. Note that when tmpfs is active, instances with ephemeral disks won't have those disks mounted, since that would conflict with the tmpfs volume.

jobs:
  with-tmpfs:
    runs-on: runs-on=${{ github.run_id }},family=r7,ram=16,extras=tmpfs
    steps:
      - run: df -ah /mnt/tmpfs
      # tmpfs            16G  724K   16G   1% /mnt/tmpfs
      - run: df -ah /home/runner
      # overlay          16G  724K   16G   1% /home/runner
      - run: df -ah /tmp
      # overlay          16G  724K   16G   1% /tmp
      - run: df -ah /var/lib/docker
      # tmpfs            16G  724K   16G   1% /var/lib/docker

You can of course combine options, e.g. extras=efs+tmpfs+ecr-cache+s3-cache is a valid label 😄

Instance-storage mounting changes

Until now, when an instance had locally attached NVMe SSDs available, they were automatically formatted and mounted so that the /var/lib/docker and /home/runner/_work directories ended up on the local disks. Since a lot of content (caches, etc.) seems to end up in the /home/runner folder itself, the agent now uses the same strategy as for the new tmpfs mounts above: the whole /home/runner folder is mounted as an overlay on the local disk volume, as is the /tmp folder, while /var/lib/docker remains mounted as a normal filesystem on the local disk volume. Fixes #284.
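As an illustration only (not part of the release notes), a quick way to see where those folders land on an instance with local NVMe storage; the family value below is just an example of an instance family that ships with instance-store disks:
jobs:
  check-mounts:
    runs-on: runs-on=${{ github.run_id }},family=m6id,ram=8
    steps:
      # /home/runner and /tmp should appear as overlay mounts backed by the local
      # disk volume, while /var/lib/docker is a regular filesystem on that volume
      - run: df -h /home/runner /tmp /var/lib/docker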

Misc

  • Move all RunsOn-specific config files into the /runs-on folder on Linux. More consistent with Windows (C:\runs-on), and avoids polluting the /opt folder.
  • Fix app_version in logs (was previously empty string due to incorrect env variable being used in v2.8.1).
  • Fix "Require any Amazon EC2 launch template not to auto-assign public IP addresses to network interfaces" from AWS Control Tower. When the Private mode is set to only, no longer enable public ip auto-assignment in the launch templates. Thanks @temap!


StepSecurity Partnership

I’m excited to announce that RunsOn is now a partner of StepSecurity ↗ to address a critical gap in the software supply chain security landscape: protecting CI/CD runners from supply chain attacks.

The Security Gap in CI/CD Infrastructure

While corporate laptops and production servers typically have robust security monitoring in place, CI/CD runners often lack equivalent protection despite handling sensitive information like secrets for package registries and cloud environments. This oversight has contributed to significant supply chain attacks in recent years, including the SolarWinds and Codecov breaches.

Traditional security monitoring solutions aren’t effective for CI/CD runners due to their ephemeral nature and lack of context for correlating events with specific workflow runs.

Introducing StepSecurity Integration with RunsOn

StepSecurity addresses this gap with security monitoring specifically designed for CI/CD environments. Their Harden-Runner is a runtime security agent that provides:

  • Egress network monitoring and filtering: Monitors outbound traffic to build baselines, detect anomalies, and enforce allow lists
  • GitHub API call monitoring: Provides visibility into API usage and helps determine minimum required permissions
  • File integrity and tampering detection: Detects unauthorized modifications to source code during builds
  • Granular event correlation: Maps each network connection, file operation, and process execution to specific steps, jobs, and workflows
  • GitHub Checks integration: Automatically monitors network activity and fails checks when anomalous activity is detected

Getting Started with StepSecurity on RunsOn

The integration is now available to all RunsOn users. To get started:

  1. Obtain a StepSecurity API key (enterprise license required, or start a free trial at stepsecurity.io ↗)

  2. Configure RunsOn with your StepSecurity API key using the IntegrationStepSecurityApiKey stack parameter

  3. Use StepSecurity images in your workflows:

    • ubuntu24-stepsecurity-x64
    • ubuntu24-stepsecurity-arm64
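For example, a minimal job using one of these images might look like the sketch below (this assumes the image label is how runner images are selected in your setup; adjust the runner spec as needed):
jobs:
  hardened-build:
    runs-on: runs-on=${{ github.run_id }},runner=2cpu-linux-x64,image=ubuntu24-stepsecurity-x64
    steps:
      - uses: actions/checkout@v4
      - run: make build
      # outbound network calls, GitHub API usage, and file changes in this job are
      # monitored by the StepSecurity agent included in the image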

Learn more in the dedicated documentation.

v2.6.5 - Optimized GPU images, VpcEndpoint stack parameter, tags for custom runners

👋 v2.6.4 and v2.6.5 have been released over the last few weeks, with the following changes.

Note: v2.6.6 ↗ has been released to fix an issue with the VpcEndpoints stack parameter.

📝 v2.6.5

Optimized GPU images, new VpcEndpoints stack parameter, ability to specify custom instance tags for custom runners.

Note: there appear to be some issues with the new VPC endpoints. I'm on it! If you need that feature, please hold on to your current version of RunsOn.

What's Changed

  • New GPU images ubuntu22-gpu-x64 and ubuntu24-gpu-x64: 1:1 compatibility with GitHub base images, plus NVIDIA GPU drivers, CUDA toolkit, and container toolkit.
  • Add new VpcEndpoints stack parameter (fixes #213), and reorganize template params. Note that the EC2 VPC endpoint was previously automatically created when Private mode was enabled. This is no longer the case, so make sure you select the VPC endpoints that you need when you update your CloudFormation stack.
  • Suspend versioning for cache bucket (fixes #191).
  • Allow specifying instance tags for runners (fixes #205). Tag keys can't start with the runs-on- prefix, and keys and values will be sanitized according to AWS rules.


📝 v2.6.4

CLI 0.0.1 released, fix for Magic Cache, fleet objects deletion.

What's changed

  • CLI released: https://github.com/runs-on/cli. Lets you easily view logs (both server logs and cloud-init logs) for a workflow job by simply pasting its GitHub URL or ID, and connect to a runner through SSM.
  • Fix race-condition in Magic Cache (fixes #209).
  • Delete the fleet instead of just the instance (fixes #217).
