
Shockingly cheaper CI

  • 10x cheaper, up to 2x faster
  • Self-hosted, in your own AWS account
  • 1-1 compatibility with GitHub Actions
  • x64, arm64 & custom images supported
  • Unlimited concurrency, unlimited cache
  • 💰 Return on investment measured in DAYS

Join companies that get faster CI, at lower cost


Note: these prices use the most recent EC2 instance families (m7a*, m7i*). You can save even more by switching to previous-generation instance types.

Runner            RunsOn    GitHub    RunsOn vs GitHub
2cpu-linux-x64    $0.0011   $0.0080   7.2x cheaper
4cpu-linux-x64    $0.0018   $0.0160   9.0x cheaper
8cpu-linux-x64    $0.0026   $0.0320   12.4x cheaper
16cpu-linux-x64   $0.0055   $0.0640   11.5x cheaper
32cpu-linux-x64   $0.0091   $0.1280   14.1x cheaper
64cpu-linux-x64   $0.0155   $0.2560   16.5x cheaper
$/min, us-west-2 prices.
Includes compute + storage costs.
Savings are even higher if you include the speed gains.
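Per the note above, pinning a previous-generation instance family can push costs lower still. The `family=` label is shown in the examples later on this page; `m6a` as a previous-generation choice is an assumption, so check availability in your region:

```yaml
# Hypothetical: pin a previous-generation family (m6a assumed available)
# instead of the default m7a/m7i, for extra savings.
runs-on: runs-on,runner=4cpu-linux-x64,family=m6a
```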

RunsOn helped reduce our monthly CI costs from $1100 (third-party SaaS) to $400. And because it is cheaper, we increased the instance size, which made CI faster and improved our iteration times by 2x.

Alec Mocatta
Founder at Tably

~$563 with GH hosted runners, ~$136 for compute on RunsOn.

Our main repository saw an average decrease in CI runtime of 8 minutes (23 -> 15). Another repository could fully utilize 16 threads, which dropped its runtime by 15 minutes (20 -> 5).

Read full case study.

Timo Schäfer
Software Engineer at Wikando

Why RunsOn?

RunsOn

  • 400MB/s, local S3-backed cache
  • Unlimited concurrency
  • x64 + ARM64
  • Personal + Organization

GitHub hosted runners

  • Slow, even for larger runners
  • $$$ Pricey
  • 10GB cache per repo
  • 100MB/s cache bandwidth
  • 20 concurrent jobs (Free plan), 60 (Team plan)
  • No native Docker cache
  • No custom images
  • x64 + macOS + Windows
  • Personal + Organization

Third-party SaaS

  • Up to 2x faster
  • 2x cheaper
  • 10-25GB cache per repo
  • 100-400MB/s cache bandwidth
  • 64 vCPUs max, or you pay more
  • No native Docker cache
  • No custom images
  • x64 + ARM + ~macOS
  • Third-party, no SOC2 certification
  • Organization only

RunsOn comes as a CloudFormation stack that you install in your own AWS account.

It is built using modern tooling (Go) and follows the KISS principle, while blowing away most of the VC-funded competition in terms of price, speed, hardware availability (even GPUs!), and concurrency.

No need to manage a k8s cluster with ARC, or deploy complicated, half-maintained stacks.

The RunsOn server deployed in your account runs on a $1.5/month AWS App Runner container, receives workflow events from GitHub Actions, and launches ephemeral EC2 instances to fulfill the workflow job requirements. No dangling EC2 instances, no unused capacity.


It can be deployed in 4 different regions, and spreads across multiple availability zones to maximize the chance of finding a spot instance with the lowest probability of interruption. If no spot capacity is available, it automatically falls back to on-demand pricing so that the job is always scheduled (still 5x cheaper than GitHub).

It is not a SaaS, and not an agent pinging external services. Users are usually up and running in 10 minutes tops.

It comes with a flat license price of 300€/year (25€/month), or free if you are a non-profit. I assure you, you will be blown away by the insane ROI.

Part of the code is available on GitHub. Full source code is available under the Sponsorship license.

Oh, and some of our users are running many thousands of jobs EVERY DAY without any issues.

Job stats

10x cheaper, up to 2x faster

Latest generation of AWS instances, fast volumes, and 10+ Gbps network. Your builds will be faster.

Use the calculator to see how much you can save compared to GitHub runners. Look at the benchmarks to see how much faster your runners can be.
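As a quick back-of-the-envelope calculation, here is the 4cpu-linux-x64 comparison from the pricing table above; the 10,000 minutes/month workload is an assumed example, adjust it to your own usage:

```python
# Rough monthly savings estimate using the per-minute prices from
# the pricing table above (4cpu-linux-x64, us-west-2).
RUNSON_PER_MIN = 0.0018   # $/min on RunsOn
GITHUB_PER_MIN = 0.0160   # $/min on GitHub hosted runners

minutes_per_month = 10_000  # assumed example workload

runson_cost = RUNSON_PER_MIN * minutes_per_month
github_cost = GITHUB_PER_MIN * minutes_per_month
savings = github_cost - runson_cost

print(f"RunsOn: ${runson_cost:.2f}, GitHub: ${github_cost:.2f}, "
      f"savings: ${savings:.2f}/month")
```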

Unlimited concurrency

Tested with hundreds of concurrent jobs at once.


One-line change

1-1 workflow compatibility with official GitHub runners. In fact, RunsOn maintains the AWS images for GitHub Actions, for both x64 and ARM64.

Get the runners your team deserves

Forget statically assigned runner types. Specify RAM, CPU, and disk size at runtime. Make your runners fit your workflow needs, not the other way around.

Not a SaaS

Cut the middleman. Keep your code and workflow secrets private. RunsOn is installed in your own AWS account.


4 CPUs, x64, 16 or 32GB ram, default instance family (m7a, m7i):

- runs-on: ubuntu-latest
+ runs-on: runs-on,runner=4cpu-linux-x64,ram=16+32,disk=100

16 CPUs, ARM64, explicit instance family:

- runs-on: ubuntu-latest
+ runs-on: runs-on,runner=16cpu-linux-arm64,family=c7g

32 CPUs, on-demand only, custom AMI:

- runs-on: ubuntu-latest
+ runs-on: runs-on,runner=32cpu-linux-x64,ami=my-custom-ami,spot=false
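In a full workflow file, that label is the only line that differs from a stock GitHub-hosted setup. A sketch, with illustrative job name and steps:

```yaml
# Example workflow using a RunsOn runner label.
# Everything except the `runs-on:` line is a standard GitHub Actions workflow.
name: ci
on: [push]

jobs:
  test:
    runs-on: runs-on,runner=4cpu-linux-x64,ram=16+32,disk=100
    steps:
      - uses: actions/checkout@v4
      - run: make test   # illustrative build step
```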

I've been testing Kubernetes with ARC: it's more complex, and each runner takes ~25 seconds to boot (on an existing node). Far less attractive than the simplicity of RunsOn.

Tim Petricola
Software Engineer at Alan

CI times are down from ~30 mins to 12 mins, thanks to the larger instances. S3 being faster also helped our one not-very-parallel job.

Alec Mocatta
Founder at Tably

You are here: 🤔. You could be here: 🥳

Try RunsOn now. You are 10 minutes away from massive savings.

Free trial for 15 days

But but but… who made this?