GitHub Actions runners benchmark
This page offers a detailed comparison of CPU speeds, queuing times, and disk performance for various providers of GitHub Actions self-hosted runners. The aim is to help you identify the best GitHub Actions self-hosted runner provider for your projects, based on your specific needs.
An analysis is provided below the benchmark results. Note that this benchmark is published by RunsOn, so you might consider me biased, but I try my best to provide a fair comparison with the details available.
CPU speed and queuing times
Benchmarks are performed with the Passmark benchmarking tool ↗, using its CPU Single Threaded metric. The tables below cover the most recent 60 days of data.
The comparison covers the processor model, single-thread CPU speed, queue time, pricing, and the underlying infrastructure provider. The single-threaded CPU rating is the crucial metric: unless your job is massively parallel, it is the most significant factor in how fast your workflows run.
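For reference, the p50 and p95 columns in the tables are plain percentiles over the collected samples. Here is a minimal sketch of that aggregation; the scores below are made up, and numpy is just one convenient way to compute it:

```python
import numpy as np

# Hypothetical PassMark "CPU Single Threaded" scores collected over the
# benchmark window for a single provider / runner type (made-up values).
samples = [3151, 3149, 3158, 3150, 3155, 3152, 3148, 3157]

p50 = np.percentile(samples, 50)  # median value shown in the tables
p95 = np.percentile(samples, 95)  # 95th percentile value shown in the tables
print(f"CPU speed: p50={p50:.0f} | p95={p95:.0f}")
```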
x64 runners
Provider | CPU speed p50 | CPU speed p95 | Queue time p50 (s) | Queue time p95 (s) | Processor | Infra | Samples |
---|---|---|---|---|---|---|---|
Namespace (2x cheaper) | 3885 | 4030 | 21 | 24 | AMD EPYC (x86_64) | Hetzner Online | 54 |
Buildjet (2x cheaper) | 3724 | 4046 | 23 | 25 | AMD Ryzen 9 7950X3D 16-Core Processor (x86_64) | Hetzner Online | 30 |
Buildjet (2x cheaper) | 3256 | 3497 | 25 | 30 | AMD Ryzen 9 5950X 16-Core Processor (x86_64) | Hetzner Online | 24 |
Blacksmith (2x cheaper) | 3354 | 3575 | 22 | 38 | AMD EPYC (x86_64) | Hetzner Online | 54 |
Cirrus (fixed price per runner) | 3223 | 3231 | 23 | 30 | Intel Xeon Gold 5412U (x86_64) | Hetzner Online GmbH | 54 |
RunsOn (10x cheaper) | 3151 | 3158 | 29 | 34 | Intel Xeon Gold 6455B (x86_64) | Amazon.com | 54 |
RunsOn (10x cheaper) | 2902 | 2904 | 30 | 43 | AMD EPYC 9R14 (x86_64) | Amazon.com | 54 |
RunsOn (10x cheaper) | 2785 | 2938 | 31 | 37 | Intel Xeon Platinum 8488C (x86_64) | Amazon.com | 54 |
Warpbuild (2x cheaper) | 2903 | 2904 | 56 | 65 | AMD EPYC 9R14 (x86_64) (custom) | Amazon.com | 40 |
Warpbuild (2x cheaper) | 2901 | 2904 | 52 | 62 | AMD EPYC 9R14 (x86_64) | Amazon.com | 54 |
Ubicloud (10x cheaper) | 2559 | 2592 | 17 | 40 | AMD EPYC 9454P 48-Core Processor (x86_64) | Hetzner Online | 54 |
GitHub | 2397 | 2403 | 13 | 19 | AMD EPYC 7763 64-Core Processor (x86_64) (custom) | Microsoft Azure | 53 |
GitHub | 2291 | 2303 | 10 | 12 | AMD EPYC 7763 64-Core Processor (x86_64) | Microsoft Azure | 54 |
AWS CodeBuild | 2014 | 2021 | 50 | 92 | Intel Xeon Platinum 8124M CPU @ 3.00GHz (x86_64) | Amazon.com | 31 |
AWS CodeBuild | 2139 | 2152 | 49 | 63 | Intel Xeon Platinum 8275CL CPU @ 3.00GHz (x86_64) | Amazon.com | 70 |
arm64 runners
Provider | CPU speed p50 | CPU speed p95 | Queue time p50 (s) | Queue time p95 (s) | Processor | Infra | Samples |
---|---|---|---|---|---|---|---|
RunsOn (10x cheaper) | 1936 | 1937 | 30 | 48 | Neoverse-V2 (aarch64) | Amazon.com | 36 |
RunsOn (10x cheaper) | 1546 | 1552 | 30 | 42 | (aarch64) | Amazon.com | 55 |
Warpbuild (2x cheaper) | 1546 | 1552 | 59 | 70 | (aarch64) (custom) | Amazon.com | 40 |
Warpbuild (2x cheaper) | 1546 | 1552 | 18 | 62 | (aarch64) | Amazon.com | 55 |
Cirrus (fixed price per runner) | 1325 | 1327 | 25 | 28 | Neoverse-N1 (aarch64) | Hetzner Online GmbH | 55 |
Buildjet (2x cheaper) | 1322 | 1324 | 39 | 41 | Neoverse-N1 (aarch64) | Hetzner Online | 55 |
GitHub | 1321 | 1323 | 13 | 95 | Neoverse-N1 (aarch64) | Microsoft Azure | 53 |
Ubicloud (10x cheaper) | 1318 | 1323 | 18 | 53 | Neoverse-N1 (aarch64) | Hetzner Online | 55 |
Namespace (2x cheaper) | 1312 | 1317 | 25 | 27 | Neoverse-N1 (aarch64) | Hetzner Online | 55 |
Blacksmith (2x cheaper) | 1319 | 1324 | 29 | 58 | Neoverse-N1 (aarch64) | Hetzner Online | 55 |
Observations
- Namespace has the best single-threaded performance for x64. AWS needs to catch up! Interestingly, other Hetzner-based providers are not able to replicate this performance.
- RunsOn has the best performance for arm64. Hetzner-based providers don’t have access to the latest ARM CPUs (yet).
- GitHub uses outdated and slow CPUs for x64. arm64 is better, but not widely available yet. Queue times are very good, although queue times for larger runners are reported to vary wildly (sometimes multiple minutes).
- Warpbuild used to have very low queue times, but switched to a mix of pooled and on-demand runners, which increased queue times. The BYOC offering and custom runners have high queue times as well.
- The cheapest providers are RunsOn and Ubicloud.
- Default AWS CodeBuild instances are even slower than GitHub's, and the pricing is not competitive. Queue times are also high.
Missing from the benchmark:
- Testing concurrency and scaling limits of each provider. Hetzner is not yet at the scale of AWS, and bumping concurrency limits at Hetzner-based providers requires a manual request and is sometimes costly (e.g. Buildjet). Companies with many thousands of jobs per day might prefer the scalability of AWS.
Disk performance
While CPU speed is the most important factor for most workflows, disk performance can be the limiting factor in specific scenarios that require a high number of IOPS. That’s why it is important to be able to pick runner types that match your needs.
The table below compares the disk performance of various providers: sequential read and write performance, as well as random read and write performance. Sequential performance matters more when dealing with large files, while random performance matters more for many small files. The benchmark is run as described in this Google Cloud article ↗ and measures performance on the disk where the GITHUB_WORKSPACE resides. I performed 3 runs and picked the best result for each provider, although I didn't notice much variance across runs.
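For a rough idea of what such a run looks like, here is a minimal sketch in the spirit of the fio-based invocations from that article. The block size, file size, runtime, and job count below are assumptions for illustration, not the exact parameters used for this benchmark:

```python
import os
import subprocess

# Run fio against the disk backing GITHUB_WORKSPACE (falls back to /tmp when
# executed outside of GitHub Actions). Parameters are illustrative only.
workdir = os.environ.get("GITHUB_WORKSPACE", "/tmp")

subprocess.run([
    "fio",
    "--name=rand_write",
    f"--directory={workdir}",
    "--rw=randwrite",     # random writes; use write/read/randread for the other metrics
    "--bs=4k",            # small blocks stress IOPS rather than raw throughput
    "--size=2G",
    "--ioengine=libaio",
    "--iodepth=64",
    "--direct=1",         # bypass the page cache
    "--numjobs=4",
    "--time_based",
    "--runtime=60",
    "--group_reporting",
], check=True)
```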
For RunsOn, multiple instance types are compared, to highlight the performance difference between:
- instance types with locally attached NVMe volumes vs EBS-only instances (when an instance has local storage, the results are for the local storage only);
- storage bandwidth scaling of locally attached NVMe volumes with instance size, since on AWS larger instances are allocated a higher percentage of the storage bandwidth ↗ of the underlying host.
Note that RunsOn automatically mounts and formats the local instance storage volumes for you. If multiple volumes are available, they are automatically mounted in a RAID-0 configuration, to maximize disk space and performance.
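This is handled for you, but to make the idea concrete, here is a rough sketch of that kind of setup. It is not RunsOn's actual implementation: the device names, filesystem, and mount point are assumptions, and error handling is omitted.

```python
import subprocess

def stripe_local_disks(devices, mountpoint="/mnt/local"):
    """Assemble local NVMe devices into a RAID-0 array, then format and mount it."""
    # RAID-0 stripes data across all devices: capacity and throughput add up,
    # but there is no redundancy (acceptable for ephemeral CI workspaces).
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=0",
                    f"--raid-devices={len(devices)}", *devices], check=True)
    subprocess.run(["mkfs.ext4", "-F", "/dev/md0"], check=True)
    subprocess.run(["mkdir", "-p", mountpoint], check=True)
    subprocess.run(["mount", "/dev/md0", mountpoint], check=True)

# Example for an instance exposing two local NVMe volumes (hypothetical names):
# stripe_local_disks(["/dev/nvme1n1", "/dev/nvme2n1"])
```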
The default sorting of the table is by Random writes, because this is where network-attached EBS volumes show their main limitation. You can click on the column header to sort by a different metric.
Results
Provider | Type | Configuration | Seq Writes (MiB/s) | Rand Writes (MiB/s) | Seq Reads (MiB/s) | Rand Reads (MiB/s) | Infrastructure |
---|---|---|---|---|---|---|---|
RunsOn | c6id.24xlarge | 5312 GiB (4 * 1328 GiB NVMe SSD) | 3435 | 1495 | 7332 | 1564 | AWS |
RunsOn | c5ad.24xlarge | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2966 | 1465 | 6353 | 1285 | AWS |
RunsOn | c7gd.12xlarge | 2656 GiB (2 * 1328 GiB NVMe SSD) | 1699 | 1402 | 3664 | 1648 | AWS |
RunsOn | c5ad.12xlarge | 1676 GiB (2 * 838 GiB NVMe SSD) | 1479 | 1315 | 3177 | 1313 | AWS |
RunsOn | c7gd.metal (equivalent to c7gd.16xlarge) | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2285 | 1286 | 4899 | 1483 | AWS |
Cirrus | ghcr.io/cirruslabs/ubuntu-runner-arm64:22.04-md | 47 GiB SSD | 2680 | 1162 | 34500 | 1140 | Hetzner |
RunsOn | c7gd.8xlarge | 1770 GiB NVMe SSD | 1135 | 1038 | 2448 | 1688 | AWS |
Namespace | nscloud-ubuntu-22.04-arm64-2x8 | 98 GiB SSD | 2501 | 638 | 4960 | 831 | Hetzner |
RunsOn | c7gd.4xlarge | 884 GiB NVMe SSD | 574 | 508 | 1205 | 1051 | AWS |
Ubicloud | ubicloud-standard-2-arm | 86 GiB SSD | 1079 | 316 | 1149 | 443 | Hetzner |
RunsOn | c5ad.4xlarge | 560 GiB (2 * 280 GiB NVMe SSD) | 242 | 219 | 604 | 511 | AWS |
Buildjet | buildjet-2vcpu-ubuntu-2204-arm | 118 GiB SSD | 1819 | 165 | 8746 | 320 | Hetzner |
RunsOn | c6id.xlarge | 220 GiB NVMe SSD | 146 | 131 | 303 | 262 | AWS |
RunsOn | c7gd.xlarge | 220 GiB NVMe SSD | 147 | 131 | 303 | 262 | AWS |
Blacksmith | blacksmith-2vcpu-ubuntu-2204-arm | 130 GiB SSD | 143 | 93 | 44 | 1271 | Hetzner |
RunsOn | c7gd.large | 110 GiB NVMe SSD | 75 | 66 | 153 | 131 | AWS |
RunsOn | c7gd.medium | 55 GiB NVMe SSD | 41 | 33 | 80 | 66 | AWS |
GitHub | 2cpu-arm64 | 75 GiB, Network volume | 199 | 17 | 203 | 28 | Azure |
RunsOn | c7g.medium | 40 GiB, EBS only, 400 MiB/s provisioned throughput | 404 | 16 | 406 | 16 | AWS |
RunsOn | c7g.medium | 80 GiB, EBS only, 750 MiB/s provisioned throughput | 730 | 16 | 755 | 16 | AWS |
Warpbuild | warp-ubuntu-latest-arm64-2x | 150 GiB, EBS only, 300 MiB/s provisioned throughput? | 305 | 16 | 307 | 16 | AWS |
Observations
Network-attached volumes
- Sequential read/write performance of network-attached volumes is actually pretty good, and you get what you pay for in terms of provisioned volume throughput. Interestingly, it can be higher than on some providers with local SSDs (gp3 volumes can provision up to 1,000 MiB/s of throughput; see the sketch below).
- At the same time, random read/write performance of network-attached volumes is… not great, reaching only up to 20 MiB/s. This is the case for the default runner types at GitHub, Warpbuild, and RunsOn.
The good news is that poor random read/write performance has a surprisingly low impact on most CI workflows: sequential performance is usually what matters, since you pull or write large cache files, tarballs, artefacts, or Docker images over the network, and then run mostly CPU-intensive tasks.
However, some workflows need very large amounts of disk space and/or much better random performance, and that's where it's important to also be able to pick a runner with local SSDs when you need it.
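To make the provisioning point concrete, here is a minimal boto3 sketch that creates a gp3 volume with its throughput ceiling provisioned. The region, availability zone, size, and IOPS values are arbitrary assumptions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region chosen for illustration

# gp3 decouples size from performance: throughput (MiB/s) and IOPS are
# provisioned explicitly, up to 1,000 MiB/s and 16,000 IOPS per volume.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=80,              # GiB
    VolumeType="gp3",
    Throughput=1000,      # MiB/s, the gp3 ceiling
    Iops=16000,
)
print(volume["VolumeId"])
```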
Local SSDs
- If you want the fastest sequential AND random write performance, AWS with locally attached NVMe volumes is your friend, although you will need to choose larger instance types since storage bandwidth scales linearly with instance size. You also get access to terabytes of local storage if you need it.
- Performance of AWS locally-attached NVMe volumes is identical whether you select an x64 or arm64 instance.
- Cirrus has very high sequential read performance and good performance overall, and so does Namespace. I suppose (reach out if I'm wrong!) that, unlike AWS, they do not throttle bandwidth, so performance could suffer from noisy neighbours on the same host. The same probably applies to Buildjet and Ubicloud, which show lower performance. Blacksmith is probably suffering from this issue, since its disk performance is much lower than that of the other Hetzner-based providers (except for random reads, for some reason).
- Among Hetzner-based providers, the maximum disk space is currently limited to 130 GiB (Blacksmith). AWS allows up to 64 TiB for EBS volumes, and offers instance types with up to 305 TiB of local storage.
RunsOn lets you select instances with local SSDs (of varying size and performance), and mounts them automatically.
Make your choice!
Best alternatives for self-hosted GitHub Actions runners
The best alternatives for self-hosted GitHub Actions runners will heavily depend on your primary requirement:
- Fastest machines: Buildjet, Namespace, Cirrus (x64, arm64, macOS), and Blacksmith are good choices. Be aware that they are hosted at Hetzner, where network speeds can vary. Buildjet also doesn't guarantee that you'll land on one of their fastest runners.
- Cheapest pricing: RunsOn, Ubicloud, or Cirrus (if you're OK with limited concurrency) are the best options. Be aware that Ubicloud has slightly lower CPU speed and somewhat variable queue times for x64 (but improving).
- Low queue times: GitHub has the best queue times for standard runners, and all third parties are roughly OK on that front.
- Self-hosted deployment: RunsOn is the only solution that is entirely hosted in your AWS infrastructure, with no centralized third-party service. Warpbuild has an option to spawn runners from your AWS account, but you are still reliant on their managed service to orchestrate and register runners (queue times are also higher with that option). You can also use the actions-runner-controller (ARC) project to self-host in Kubernetes, although it is more complex to set up, maintain, and operate.
- All-rounder: If you use AWS, RunsOn is a good option, thanks to top-of-the-line CPU speeds (especially arm64), the cheapest pricing, stable queue times across x64 and arm64, GPU and Windows support, and the fact that it is entirely self-hosted in your own AWS infrastructure. If you need macOS machines, Cirrus or Warpbuild are good choices. If you cannot use AWS and want the cheapest prices, try Ubicloud.
When should I use GitHub's official Actions runners?
GitHub's official Actions runners are best used for short-lived jobs or jobs that do not require fast CPUs. They boot quickly, making them ideal for smaller, less intensive tasks. For larger or more CPU-intensive jobs, however, consider alternative runners, given the slower performance and higher cost of GitHub's larger runners. Be aware that GitHub bills by the minute, even if your job only runs for a few seconds.
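As a quick back-of-the-envelope illustration of per-minute billing: the $0.008/minute figure below is the published rate for standard 2-core Linux runners on private repositories at the time of writing, so adjust it for your own plan.

```python
import math

def billed_cost(job_seconds: int, per_minute_rate: float = 0.008) -> float:
    """Cost of one job when durations are rounded up to whole minutes."""
    return math.ceil(job_seconds / 60) * per_minute_rate

print(billed_cost(10))   # a 10-second job is billed as a full minute: $0.008
print(billed_cost(605))  # a ~10-minute job is billed as 11 minutes: $0.088
```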