
GitHub Actions I/O performance

This page offers a detailed comparison of disk I/O performance for various providers of GitHub Actions self-hosted runners. The aim is to assist in identifying the best GitHub Actions self-hosted runner provider for your projects, based on your specific needs.

Providers included in the benchmark

Official:

Self-hosted alternatives:

Third-party alternatives:

Note: Warpbuild and Depot are third-party providers that resell AWS instances, but they forbid benchmarking their platform 🤷.

Disk performance

While CPU speed is the most important factor for most workflows, disk performance can be the limiting factor in specific scenarios that require a high number of IOPS. That’s why it is important to be able to pick runner types that match your needs.

The table below compares the disk performance of various providers: sequential read and write performance, as well as random read and write performance. Sequential performance matters most when dealing with large files, while random performance matters most when dealing with many small files. The benchmark is run as described in this Google Cloud article ↗ and measures the performance of the disk where the GITHUB_WORKSPACE resides. I performed 3 runs and picked the best result for each provider, although I didn’t notice much variance across runs.
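For reference, a minimal fio invocation along these lines reproduces the four metrics. The job parameters below (block sizes, I/O depth, file sizes, runtime) are illustrative assumptions, not necessarily the exact ones used in the linked article:

```sh
# Sequential write throughput: large blocks, deep queue (parameters are illustrative).
fio --name=seq-write --directory="$GITHUB_WORKSPACE" --rw=write \
    --bs=1M --iodepth=64 --ioengine=libaio --direct=1 \
    --size=4G --runtime=60 --time_based --group_reporting

# Random write throughput: small blocks, several parallel jobs.
fio --name=rand-write --directory="$GITHUB_WORKSPACE" --rw=randwrite \
    --bs=4K --iodepth=64 --ioengine=libaio --direct=1 --numjobs=8 \
    --size=1G --runtime=60 --time_based --group_reporting

# Swap --rw for read / randread to get the corresponding read numbers.
```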

For RunsOn, multiple instance types are compared to highlight the performance difference between:

Note that RunsOn automatically formats and mounts the local instance storage volumes for you. If multiple volumes are available, they are assembled in a RAID-0 configuration to maximize disk space and performance.
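For illustration, assembling two local NVMe volumes into a RAID-0 array typically looks like the following. This is a generic sketch of the technique, not RunsOn's actual provisioning code; device names and the mount point are assumptions:

```sh
# Assemble two local NVMe devices (names vary by instance type) into a RAID-0 array.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# Format the array and mount it where the runner workspace will live.
sudo mkfs.ext4 -F /dev/md0
sudo mount /dev/md0 /mnt/workspace
```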

The table's default sorting is by random writes, because this is where network-attached EBS volumes show their main limitation. You can click on a column header to sort by a different metric.

Results

| Provider | Type | Configuration | Seq Writes (MiB/s) | Rand Writes (MiB/s) | Seq Reads (MiB/s) | Rand Reads (MiB/s) | Infrastructure |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RunsOn | c6id.24xlarge | 5312 GiB (4 * 1328 GiB NVMe SSD) | 3435 | 1495 | 7332 | 1564 | AWS |
| RunsOn | c5ad.24xlarge | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2966 | 1465 | 6353 | 1285 | AWS |
| RunsOn | c7gd.12xlarge | 2656 GiB (2 * 1328 GiB NVMe SSD) | 1699 | 1402 | 3664 | 1648 | AWS |
| RunsOn | c5ad.12xlarge | 1676 GiB (2 * 838 GiB NVMe SSD) | 1479 | 1315 | 3177 | 1313 | AWS |
| RunsOn | c7gd.metal (equivalent to c7gd.16xlarge) | 3540 GiB (2 * 1770 GiB NVMe SSD) | 2285 | 1286 | 4899 | 1483 | AWS |
| Cirrus | ghcr.io/cirruslabs/ubuntu-runner-arm64:22.04-md | 47 GiB SSD | 2680 | 1162 | 34500 | 1140 | Hetzner |
| RunsOn | c7gd.8xlarge | 1770 GiB NVMe SSD | 1135 | 1038 | 2448 | 1688 | AWS |
| Namespace | nscloud-ubuntu-22.04-arm64-2x8 | 98 GiB SSD | 2501 | 638 | 4960 | 831 | Hetzner |
| RunsOn | c7gd.4xlarge | 884 GiB NVMe SSD | 574 | 508 | 1205 | 1051 | AWS |
| Ubicloud | ubicloud-standard-2-arm | 86 GiB SSD | 1079 | 316 | 1149 | 443 | Hetzner |
| RunsOn | c5ad.4xlarge | 560 GiB (2 * 280 GiB NVMe SSD) | 242 | 219 | 604 | 511 | AWS |
| Buildjet | buildjet-2vcpu-ubuntu-2204-arm | 118 GiB SSD | 1819 | 165 | 8746 | 320 | Hetzner |
| RunsOn | c6id.xlarge | 220 GiB NVMe SSD | 146 | 131 | 303 | 262 | AWS |
| RunsOn | c7gd.xlarge | 220 GiB NVMe SSD | 147 | 131 | 303 | 262 | AWS |
| Blacksmith | blacksmith-2vcpu-ubuntu-2204-arm | 130 GiB SSD | 143 | 93 | 441 | 271 | Hetzner |
| RunsOn | c7gd.large | 110 GiB NVMe SSD | 75 | 66 | 153 | 131 | AWS |
| RunsOn | c7gd.medium | 55 GiB NVMe SSD | 41 | 33 | 80 | 66 | AWS |
| GitHub | 2cpu-arm64 | 75 GiB, network volume | 199 | 17 | 203 | 28 | Azure |
| RunsOn | c7g.medium | 40 GiB, EBS only, 400 MiB/s provisioned throughput | 404 | 16 | 406 | 16 | AWS |
| RunsOn | c7g.medium | 80 GiB, EBS only, 750 MiB/s provisioned throughput | 730 | 16 | 755 | 16 | AWS |

Observations

Network-attached volumes

  • Sequential read/write of network-attached volumes is actually pretty good, and you get what you pay for in terms of volume throughput provisioning. Interestingly, it can be higher than some providers with local SSDs (gp3 volumes can provision up to 1000 MiB/s of throughput; see the sketch after this list).
  • At the same time, random read/write performance of network-attached volumes is… not great, reaching only up to 20 MiB/s. This is the case for the default runner types at GitHub, at RunsOn, and at other AWS-based providers.
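As an illustration of the throughput-provisioning point above, here is roughly how a gp3 volume with extra throughput can be requested when launching an EC2 instance with the AWS CLI. The AMI ID, device name, and sizes are placeholder assumptions for the example, not what RunsOn configures:

```sh
# Launch an instance whose gp3 root volume is provisioned for 1000 MiB/s of
# throughput and 4000 IOPS (values are illustrative).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c7g.medium \
  --block-device-mappings '[
    {
      "DeviceName": "/dev/xvda",
      "Ebs": { "VolumeSize": 80, "VolumeType": "gp3", "Throughput": 1000, "Iops": 4000 }
    }
  ]'
```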

The good news is that poor random read/write performance has a surprisingly low impact on most CI workflows, where sequential performance is often more relevant: you are typically pulling or writing large cache files, tarballs, artefacts, or Docker images over the network, and then executing mostly CPU-intensive tasks.

However, some workflows require very large amounts of disk space and/or much better random performance, and that’s where it’s important to be able to pick a runner with local SSDs when you need it.

Local SSDs

  • If you want the fastest sequential AND random write performance, AWS with locally-attached NVMe volumes is your friend, although you will need to choose larger instance types since storage bandwidth scales linearly with instance size. You also get access to terabytes of local storage if you need it.
  • Performance of AWS locally-attached NVMe volumes is identical whether you select an x64 or arm64 instance.
  • Cirrus has really high sequential read performance, and good performance overall. Namespace as well. I suppose (reach out if I’m wrong!) that unlike AWS they are not throttling bandwidth, so performance could suffer from noisy neighbours on the same host. Probably the same story with Buildjet and Ubicloud, which have lower performance as well. Blacksmith is probably suffering from this issue, since its disk performance is much lower than that of the other providers on Hetzner (except for random reads, for some reason).
  • Among Hetzner providers, max disk space is currently limited to 130 GiB (Blacksmith). AWS allows up to 64 TiB for EBS volumes, and offers instance types with up to 305 TiB of local storage.

RunsOn lets you select instances with local SSDs (of variable sizes and performance) and automatically mounts them.
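If you want to check which disk actually backs the workspace on a given runner, a simple debugging step like the following can help. These are standard Linux tools, nothing provider-specific:

```sh
# Show the block devices and where GITHUB_WORKSPACE is mounted.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
df -h "$GITHUB_WORKSPACE"

# Quick sanity check: is the workspace backed by a RAID-0 array of local NVMe devices?
cat /proc/mdstat
```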