GitHub Actions runners benchmark
This page offers a detailed comparison of CPU speeds, queue times, and disk performance across various providers of GitHub Actions self-hosted runners, to help you identify the best GitHub Actions self-hosted runner provider for your projects based on your specific needs.
An analysis is provided below the benchmark results. Note that this benchmark is published by RunsOn, so you might consider me biased, but I try my best to provide a fair comparison with the details available.
Providers included in the benchmark
In addition to GitHub's official runners, AWS CodeBuild, and RunsOn, the benchmark covers these third-party alternatives:
- Buildjet (Hetzner)
- Blacksmith (Hetzner / Secured Servers LLC)
- Namespace (Hetzner)
- Ubicloud (Hetzner)
- Cirrus (Hetzner)
Note: Warpbuild and Depot are third-party providers that resell AWS instances, but they forbid benchmarking their platform 🤷.
CPU speed and queuing times
Last updated:
Benchmarks are performed with the Passmark benchmarking tool ↗, using its CPU Single Threaded metric. The table displays data from the 60 days preceding the last-updated date.
The comparison covers key metrics such as processor model, single-thread CPU speed, queue time, pricing, and the underlying infrastructure provider. The CPU single-thread rating is the crucial metric: unless your job is massively parallel, it is the most significant factor in how fast any of your workflows run.
x64 runners
| Provider | CPU speed p50 | CPU speed p95 | Queue time p50 (s) | Queue time p95 (s) | Processor | Infra | Samples |
|---|---|---|---|---|---|---|---|
| Blacksmith (2x cheaper) | 3918 | 4184 | 22 | 34 | AMD EPYC (x86_64) | Hetzner Online | 18 |
| Namespace (2x cheaper) | 3886 | 4022 | 20 | 23 | AMD EPYC (x86_64) | Flow Swiss | 20 |
| Buildjet (2x cheaper) | 3317 | 3475 | 23 | 38 | AMD Ryzen 9 5950X 16-Core Processor (x86_64) | Hetzner Online | 10 |
| Cirrus (fixed price per runner) | 3218 | 3228 | 23 | 31 | Intel Xeon Gold 5412U (x86_64) | Hetzner Online | 20 |
| RunsOn (10x cheaper) | 3079 | 3084 | 28 | 61 | Intel Xeon Gold 6455B (x86_64) | Amazon.com | 20 |
| RunsOn (10x cheaper) | 2882 | 2884 | 27 | 44 | AMD EPYC 9R14 (x86_64) | Amazon.com | 20 |
| RunsOn (10x cheaper) | 2771 | 2935 | 28 | 44 | Intel Xeon Platinum 8488C (x86_64) | Amazon.com | 20 |
| Ubicloud (10x cheaper) | 2576 | 2596 | 18 | 39 | AMD EPYC 9454P 48-Core Processor (x86_64) | Hetzner Online | 20 |
| GitHub | 2398 | 2401 | 9 | 98 | AMD EPYC 7763 64-Core Processor (x86_64) (custom) | Microsoft Azure | 20 |
| GitHub | 2290 | 2296 | 10 | 12 | AMD EPYC 7763 64-Core Processor (x86_64) | Microsoft Azure | 20 |
| AWS CodeBuild | 2138 | 2153 | 42 | 75 | Intel Xeon Platinum 8275CL CPU @ 3.00GHz (x86_64) | Amazon.com | 36 |
| AWS CodeBuild | 2013 | 2020 | 42 | 75 | Intel Xeon Platinum 8124M CPU @ 3.00GHz (x86_64) | Amazon.com | 21 |
| Warpbuild (2x cheaper) | N/A | N/A | N/A | N/A | N/A | Amazon.com | 0 |
| Depot (2x cheaper) | N/A | N/A | N/A | N/A | N/A | Amazon.com | 0 |
arm64 runners
| Provider | CPU speed p50 | CPU speed p95 | Queue time p50 (s) | Queue time p95 (s) | Processor | Infra | Samples |
|---|---|---|---|---|---|---|---|
| RunsOn (10x cheaper) | 1933 | 1935 | 26 | 33 | Neoverse-V2 (aarch64) | Amazon.com | 20 |
| RunsOn (10x cheaper) | 1543 | 1548 | 28 | 30 | (aarch64) | Amazon.com | 20 |
| Cirrus (fixed price per runner) | 1325 | 1326 | 23 | 29 | Neoverse-N1 (aarch64) | Hetzner Online GmbH | 20 |
| GitHub | 1321 | 1323 | 9 | 89 | Neoverse-N1 (aarch64) | Microsoft Azure | 20 |
| Blacksmith (2x cheaper) | 1318 | 1321 | 24 | 27 | Neoverse-N1 (aarch64) | Hetzner Online | 20 |
| Ubicloud (10x cheaper) | 1316 | 1322 | 17 | 18 | Neoverse-N1 (aarch64) | Hetzner Online | 20 |
| Namespace (2x cheaper) | 1304 | 1308 | 24 | 26 | Neoverse-N1 (aarch64) | Hetzner Online | 20 |
| Buildjet (2x cheaper) | 1323 | 1324 | 39 | 59 | Neoverse-N1 (aarch64) | Hetzner Online | 19 |
| Warpbuild (2x cheaper) | N/A | N/A | N/A | N/A | N/A | Amazon.com | 0 |
| Depot (2x cheaper) | N/A | N/A | N/A | N/A | N/A | Amazon.com | 0 |
Observations
- Namespace and Blacksmith have the best single-threaded performance for x64 (gaming CPUs instead of server CPUs). It would be great if AWS offered similar performance for x64. Buildjet used to have the best x64 performance, but it doesn’t guarantee the same CPU model for an identical runner type, so you might get a slower runner at random (not great for reproducibility).
- RunsOn has the best performance for arm64. Hetzner-based providers don’t have access to the latest ARM CPUs (yet).
- GitHub uses outdated and slow CPUs for x64. arm64 is better, but not widely available yet. Queue times are very good, although larger runners are reported to vary wildly in queuing time (sometimes multiple minutes).
- The cheapest providers are RunsOn and Ubicloud.
- Default AWS CodeBuild instances are even slower than GitHub’s, and pricing is not competitive. Queueing time is also high.
Missing from the benchmark:
- Testing concurrency and scaling limits of each provider. Hetzner is not yet at the scale of AWS, and raising concurrency limits at Hetzner-based providers is a manual request, and sometimes costly (e.g. Buildjet). Companies with many thousands of jobs per day might prefer the scalability of AWS.
Disk performance
While CPU speed is the most important factor for most workflows, disk performance can be the limiting factor in specific scenarios that require a high number of IOPS. That’s why it is important to be able to pick runner types that match your needs.
The table below compares the disk performance of various providers: sequential read and write performance, as well as random read and write performance. Sequential performance matters more when dealing with large files, while random performance matters more for many small files. The benchmark is run as described in this Google Cloud article ↗ and measures performance on the disk where the GITHUB_WORKSPACE resides. I performed 3 runs and picked the best result for each provider, although I didn’t notice much variance across runs.
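For reference, the fio invocations look roughly like the following sketch. The exact parameters come from the Google Cloud article; the directory, file sizes, and runtimes here are illustrative assumptions, not the values used for the table.

```shell
# Illustrative fio runs against the workspace disk (parameters are
# assumptions, not the exact ones behind the results table).
DIR="${GITHUB_WORKSPACE:-/tmp}/fio-bench"
mkdir -p "$DIR"
STATUS="skipped (fio not installed)"
if command -v fio >/dev/null 2>&1; then
  # Sequential writes: large blocks, deep queue depth
  fio --name=seq_write --directory="$DIR" --size=256M --bs=1M \
      --iodepth=64 --ioengine=libaio --direct=1 --rw=write \
      --time_based --runtime=15s >/dev/null &&
  # Random writes: 4 KiB blocks expose the IOPS limits of the disk
  fio --name=rand_write --directory="$DIR" --size=256M --bs=4K \
      --iodepth=64 --ioengine=libaio --direct=1 --rw=randwrite \
      --time_based --runtime=15s >/dev/null &&
  STATUS="complete" || STATUS="failed"
fi
echo "benchmark: $STATUS"
rm -rf "$DIR"
```

Swapping `--rw=write`/`--rw=randwrite` for `--rw=read`/`--rw=randread` gives the corresponding read numbers.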
For RunsOn, multiple instance types are compared, to highlight the performance difference between:
- instance types with locally attached NVMe volumes vs EBS-only instances. When an instance has local storage in addition to EBS, the results are for the local storage only (EBS results stay the same);
- storage bandwidth scaling of locally attached NVMe volumes with instance size, since on AWS larger instances are allocated a higher percentage of the storage bandwidth ↗ of the underlying host.
Note that RunsOn automatically mounts and formats the local instance storage volumes for you. If multiple volumes are available, they are automatically mounted in a RAID-0 configuration, to maximize disk space and performance.
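Assembling two local volumes into RAID-0 amounts to something like the sketch below. The device names and mount point are hypothetical, and the steps require root and real NVMe instance-store devices, so the commands are only printed here unless you opt in with DRY_RUN=0.

```shell
# Sketch of striping local NVMe volumes into RAID-0 (device names are
# hypothetical). run() prints each command and only executes it when
# DRY_RUN=0, since these steps need root and real devices.
DRY_RUN="${DRY_RUN:-1}"
run() {
  echo "+ $*"
  [ "$DRY_RUN" = "1" ] || "$@"
}
DEVICES="/dev/nvme1n1 /dev/nvme2n1"   # assumed instance-store volumes
run mdadm --create /dev/md0 --level=0 --raid-devices=2 $DEVICES
run mkfs.ext4 -q /dev/md0                 # one filesystem across the stripe
run mount /dev/md0 /opt/runner-workspace  # hypothetical mount point
```

RAID-0 stripes writes across all member devices, which is why both capacity and throughput add up (at the cost of redundancy, which is irrelevant for ephemeral CI disks).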
The default sorting of the table is by random writes, because this is where the network-attached EBS volumes show their main limitation. You can click on a column header to sort by a different metric.
Results
Provider | Type | Configuration | Seq Writes (MiB/s) | Rand Writes (MiB/s) | Seq Reads (MiB/s) | Rand Reads (MiB/s) | Infrastructure |
---|---|---|---|---|---|---|---|
RunsOn | c6id.24xlarge | 5312 GiB (4 * 1328 GiB NVMe SSD) | 3435 | 1495 | 7332 | 1564 | AWS |
RunsOn | c5ad.24xlarge | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2966 | 1465 | 6353 | 1285 | AWS |
RunsOn | c7gd.12xlarge | 2656 GiB (2 * 1328 GiB NVMe SSD) | 1699 | 1402 | 3664 | 1648 | AWS |
RunsOn | c5ad.12xlarge | 1676 GiB (2 * 838 GiB NVMe SSD) | 1479 | 1315 | 3177 | 1313 | AWS |
RunsOn | c7gd.metal (equivalent to c7gd.16xlarge) | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2285 | 1286 | 4899 | 1483 | AWS |
Cirrus | ghcr.io/cirruslabs/ubuntu-runner-arm64:22.04-md | 47 GiB SSD | 2680 | 1162 | 34500 | 1140 | Hetzner |
RunsOn | c7gd.8xlarge | 1770 GiB NVMe SSD | 1135 | 1038 | 2448 | 1688 | AWS |
Namespace | nscloud-ubuntu-22.04-arm64-2x8 | 98 GiB SSD | 2501 | 638 | 4960 | 831 | Hetzner |
RunsOn | c7gd.4xlarge | 884 GiB NVMe SSD | 574 | 508 | 1205 | 1051 | AWS |
Ubicloud | ubicloud-standard-2-arm | 86 GiB SSD | 1079 | 316 | 1149 | 443 | Hetzner |
RunsOn | c5ad.4xlarge | 560 GiB (2 * 280 GiB NVMe SSD) | 242 | 219 | 604 | 511 | AWS |
Buildjet | buildjet-2vcpu-ubuntu-2204-arm | 118 GiB SSD | 1819 | 165 | 8746 | 320 | Hetzner |
RunsOn | c6id.xlarge | 220 GiB NVMe SSD | 146 | 131 | 303 | 262 | AWS |
RunsOn | c7gd.xlarge | 220 GiB NVMe SSD | 147 | 131 | 303 | 262 | AWS |
Blacksmith | blacksmith-2vcpu-ubuntu-2204-arm | 130 GiB SSD | 143 | 93 | 44 | 1271 | Hetzner |
RunsOn | c7gd.large | 110 GiB NVMe SSD | 75 | 66 | 153 | 131 | AWS |
RunsOn | c7gd.medium | 55 GiB NVMe SSD | 41 | 33 | 80 | 66 | AWS |
GitHub | 2cpu-arm64 | 75 GiB, Network volume | 199 | 17 | 203 | 28 | Azure |
RunsOn | c7g.medium | 40 GiB, EBS only, 400MiB provisioned throughput | 404 | 16 | 406 | 16 | AWS |
RunsOn | c7g.medium | 80 GiB, EBS only, 750MiB provisioned throughput | 730 | 16 | 755 | 16 | AWS |
Observations
Network-attached volumes
- Sequential read/write of network-attached volumes is actually pretty good, and you get what you pay for in terms of provisioned volume throughput. Interestingly, it can be higher than some providers with local SSDs (gp3 volumes can provision up to 1000 MiB/s of throughput).
- At the same time, random read/write performance of network-attached volumes is… not great, reaching only up to 20 MiB/s. This is the case for default runner types at GitHub, RunsOn, and other providers on AWS.
The good news is that poor random read/write performance has a surprisingly low impact in the context of most CI workflows, where sequential performance is often more relevant: you’ll typically be pulling or writing large cache files, tarballs, artefacts, or Docker images from the network, and then executing mostly CPU-intensive tasks.
However, some workflows require very large amounts of space and/or much better random performance, and that’s where it’s important to also be able to pick a runner with local SSDs when you need it.
Local SSDs
- If you want the fastest sequential AND random write performance, AWS with locally-attached NVMe volumes is your friend, although you will need to choose larger instance types due to the linear scaling of the storage bandwidth with the instance size. You also get access to terabytes of local storage if you need it.
- Performance of AWS locally-attached NVMe volumes is identical whether you select an x64 or arm64 instance.
- Cirrus has really high sequential read performance, and good performance overall; Namespace as well. I suppose (reach out if I’m wrong!) that, contrary to AWS, they are not throttling bandwidth, so performance could suffer from noisy neighbours on the same host. Probably the same story with Buildjet and Ubicloud, which have lower performance as well. Blacksmith is probably suffering from this issue, since its disk performance is much lower than that of the other providers at Hetzner (except for random reads, for some reason).
- Among Hetzner providers, max disk space is currently limited to 130 GiB (Blacksmith). AWS allows up to 64 TiB for EBS volumes, and offers instance types with up to 305 TiB of local storage.
RunsOn lets you select instances with local SSDs (of variable sizes and performance) and automatically mounts them.
Make your choice!
Note: this analysis is valid as of 2024/11/04. Things change quickly in this space, so make sure you do your own research as well.
Best alternatives for self-hosted GitHub Actions runners
The best alternatives for self-hosted GitHub Actions runners will heavily depend on your primary requirement:
- Cheapest pricing: RunsOn or Ubicloud. Cirrus can also be a good option, if the fixed price-per-runner pricing model makes sense for you.
- Fastest machines: Buildjet, Namespace, and Cirrus have the fastest CPUs for x64 jobs. Be aware that they are hosted at Hetzner, with variable network speeds. RunsOn has the fastest performance for arm64, and good performance for x64.
- GPU support: RunsOn is the only alternative provider that currently offers GPU support. GitHub restricts custom GPU runners to higher-tier plans, and they are expensive.
- Windows support: RunsOn is the only alternative provider that currently offers Windows support.
- Low queue times: GitHub has the best queue times for standard runners, and all alternatives are ~ok on that front.
- Self-hosted deployment: RunsOn is the only solution that is entirely hosted in your AWS infrastructure, with no third-party or centralized service. You can also choose the actions-runner-controller ↗ (ARC) project to self-host in Kubernetes, although it is more complex and expensive to set up, maintain, and operate.
- macOS support: many third-party providers offer macOS runners. On AWS, Apple currently forces a 24h reservation for macOS hosts, which is not ideal for short-lived CI jobs. As such, this can only work when a centralized service pools jobs for many clients, which is not the case e.g. for RunsOn.
When should I use GitHub’s official actions runners?
GitHub’s official actions runners are best used for short-lived jobs or those that do not require fast CPUs. They boot quickly, making them ideal for smaller, less intensive tasks. However, for larger or more CPU-intensive jobs, consider alternative runners due to the slower performance and higher costs of GitHub’s larger runners. Be aware that GitHub bills by the minute, even if your job only runs for a few seconds.