Job labels
RunsOn supports flexible runner configuration using runs-on: labels, allowing you to set your runner’s CPU, RAM, instance type family, and disk size at runtime, depending on your workflow requirements. This flexibility lets you optimize your runners for each job and ensures you do not pay for unused resources.
It is also very useful if you ever find yourself splitting tests across many different jobs, because with RunsOn you now have access to far larger runners, up to hundreds of CPUs if you like! So stop wasting engineering resources working around your CI provider’s self-imposed constraints, and just ask for a far beefier runner than what GitHub provides (AWS instances also have faster CPUs than GitHub Actions runners).
How it works
You define your requirements by specifying custom labels for the runs-on: parameter in your workflow.
For instance, if you want a runner with 4 CPUs and 16GB RAM, using either m7a or m7i-flex instance types:
jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/cpu=4/ram=16/family=m7a+m7i-flex/image=ubuntu22-full-x64
    # can also be written with comma-separated values instead of slash-separated values:
    # runs-on: runs-on=${{ github.run_id }},cpu=4,ram=16,family=m7a+m7i-flex,image=ubuntu22-full-x64
Note that the array syntax is also supported, but not recommended because other jobs within the same workflow run that target a subset of those labels could “steal” the runner originally launched for this job, leaving it without a runner:
jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - ram=16
      - family=m7a+m7i-flex
      - image=ubuntu22-full-x64
Custom runner definitions
If you want to avoid declaring the labels inline over and over across workflows, RunsOn supports defining custom runner names at the repository or organization level using the special .github/runs-on.yml configuration file. You can then reference them by using the runner label with your runner name (e.g. runner=my-custom-runner).
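For illustration, here is a minimal sketch of what this could look like, assuming a top-level runners map whose entries reuse the label names described below (the runner name and values are illustrative):

# .github/runs-on.yml (sketch; see the Repository configuration section for the authoritative schema)
runners:
  my-custom-runner:
    cpu: 4
    ram: 16
    family: ["m7a", "m7i-flex"]
    image: ubuntu22-full-x64

A workflow could then reference it like this:

jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/runner=my-custom-runner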
More details about custom runner configurations can be found in the Repository configuration section.
Available labels
family
Instance type family. Can either be:
- an instance type full name, e.g. family=c7a.large,
- a partial name, e.g. family=c7 (this will automatically get expanded to the c7* wildcard),
- a wildcard name, e.g. family=c7a.*, particularly useful when multiple instance types share the same prefix but you want a specific one (e.g. m7i vs m7i-flex, c7g vs c7gd, etc.),
- multiple values, separated by +, e.g. family=c7+c6, family=m7i.*+m7a, etc.
Partial names and wildcards are useful when you want to specify a range of instance types, but don’t want to specify each one individually.
If the family definition matches multiple instance types, AWS will select the instance type that matches the requirements and is ranked best according to the selected spot allocation strategy at the time of launch. It can happen that a beefier instance is cheaper than a smaller one on the spot market.
E.g.
- family=c7a+c6 will ensure that the runner is scheduled on an instance type in the c7a* or c6* instance type families.
- family=c7a.2xlarge will ensure that the runner always runs on a c7a.2xlarge instance type (however, if AWS has no capacity left, the runner could fail to launch; it is always recommended to use a range of instance types rather than a single one).
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/family=m6+c6
cpu
Number of vCPUs to request (default: 2). If you set multiple values, RunsOn will request any instance whose vCPU count falls between the lowest and the highest value.
E.g.
- cpu=4 will ensure that the runner has 4 vCPUs (min=4, max=4).
- cpu=4+16 will ensure that the runner has at least 4 vCPUs, but also considers instances with up to 16 vCPUs.
Setting a variable number of vCPUs is useful for expanding the pool of available spot instances, if your workflow is not too sensitive to the exact number of vCPUs.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7+r7/cpu=2+8/image=ubuntu22-full-x64
ram
Amount of memory to request, in GB (default: 0). If you set multiple values, RunsOn will request any instance whose amount of RAM falls between the lowest and the highest value.
E.g.
- ram=16 will ensure that the runner has 16GB of RAM (min=16, max=16).
- ram=16+64 will ensure that the runner has at least 16GB of RAM, but also considers instances with up to 64GB of RAM.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/ram=16/image=ubuntu22-full-x64
image
Runner image to use (see Runner images). Especially useful when you want to use a custom image, or don’t want to specify a runner label (in this case, family is required).
E.g.
- image=ubuntu22-full-x64 will ensure that the runner is launched with the ubuntu22-full-x64 image.
- image=ubuntu22-full-arm64 will ensure that the runner is launched with the ubuntu22-full-arm64 image.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/cpu=2/image=ubuntu22-full-arm64
ami
AMI to use for the runner. Can be used if you don’t want to declare a custom image (see above), or for quick testing. For long-term use, declaring a custom image is recommended, because it can match AMIs based on a wildcard.
The AMI must be a valid AMI ID for the region where the runner is launched, and must either be a public image, or be accessible to the stack’s IAM role (by default the AMIs within the same account are accessible).
E.g. ami=ami-0123456789abcdef0 will ensure that the runner is launched with the ami-0123456789abcdef0 AMI.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/ami=ami-0123456789abcdef0
disk
Added in v2.5.7.
Disk configuration. One of default or large (default: default). Corresponds to the disk configurations defined in the CloudFormation template. By default, the default configuration uses a 40GB EBS volume, and the large configuration uses an 80GB EBS volume.
If you require more flexible disk sizes, I strongly recommend using instance types that come with locally-attached NVMe disks (see here ↗). RunsOn will automatically mount them (in a RAID-0 array for better performance if multiple disks are detected) and point the runner workspace and Docker lib folder to that storage.
Personal recommendation: if you have access to it, the i7ie family is very good (a mix of great CPU performance and local instance storage).
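For illustration, a job targeting that family could look like the sketch below (the family, vCPU count, and image are illustrative; any instance family that ships with local instance storage behaves the same way, since RunsOn mounts the NVMe disks automatically):

jobs:
  test:
    runs-on:
      # i7ie instances come with locally-attached NVMe disks, which RunsOn
      # mounts and uses for the workspace and Docker data (illustrative labels)
      - runs-on=${{ github.run_id }}/family=i7ie/cpu=8/image=ubuntu22-full-x64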
E.g. disk=large will ensure that the runner is launched with the large disk configuration.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/disk=large
hdd
Deprecated since v2.5.7, and removed in v2.7.0. Use the disk label instead.
retry
Added in v2.6.0.
Retry behaviour. Currently only supported for spot instances.
- retry=when-interrupted: default for spot instances. Will retry the interrupted job at most once, using an on-demand instance.
- retry=false: opt out of the retry mechanism.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/retry=false
spot
Whether to attempt to use spot pricing (default: true, equivalent to price-capacity-optimized). Can be set to an explicit spot allocation strategy.
E.g. spot=false will ensure that the runner is launched with regular on-demand pricing.
Supported allocation strategies on RunsOn include:
- spot=price-capacity-optimized or spot=pco: this strategy balances price and capacity to optimize cost while minimizing the risk of interruption.
- spot=lowest-price or spot=lp: this strategy focuses on obtaining the lowest possible price, which may increase the risk of interruption.
- spot=capacity-optimized or spot=co: this strategy prioritizes the allocation of instances from the pools with the most available capacity, reducing the likelihood of interruption.
For more details on each strategy, refer to the official AWS documentation on Spot Instance allocation strategies ↗.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/spot=lowest-price
ssh
Whether to enable SSH access (default: true).
E.g. ssh=false will ensure that the runner is launched with SSH access fully disabled.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/ssh=false
private
Whether to launch a runner in a private subnet, and benefit from a static egress IP.
The default for this label depends on your Stack configuration for the Private parameter:
- If the stack parameter Private is set to true, private subnets will be enabled but runners will be public by default. You need to set the job label private=true to launch a runner in the private subnet.
- If the stack parameter Private is set to always, runners will be private by default and you must set the job label private=false to launch a runner in the public subnet.
- If the stack parameter Private is set to only, runners can only be launched in private subnets and you will get an error if you try to specify the job label private=false.
- If the stack parameter Private is set to false, runners can only be launched in public subnets and you will get an error if you try to specify the job label private=true.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/private=true
extras
Extra configuration for the runner.
Currently supports:
- s3-cache (since v2.6.3): enables the magic cache feature (available for Linux and Windows runners).
- ecr-cache (since v2.8.2): enables the ephemeral registry feature, if enabled at the stack level (available for Linux runners only).
- efs (since v2.8.2): enables the EFS feature, if enabled at the stack level (available for Linux runners only).
- tmpfs (since v2.8.2): enables the tmpfs feature (available for Linux runners only).
E.g. extras=s3-cache will enable the magic cache.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/extras=s3-cache
You can also combine multiple extras:
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/extras=s3-cache+ecr-cache+tmpfs+efs
Special labels
runner
Using the previous labels, you can configure every aspect of a runner right from the runs-on: workflow job definition.
However, if you want to reuse a runner configuration across multiple workflows, you can define a custom runner type in a .github/runs-on.yml configuration file in the repository where you want those runners to be available, and reference that runner type with the runner label.
E.g. runner=16cpu-linux-x64 will ensure that the runner is launched with the 16cpu-linux-x64 runner type. Learn more about default and custom runner configurations for Linux and Windows.
Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=16cpu-linux-x64
env
The RunsOn Environment to target. Defaults to production.
E.g. env=staging will ensure that only a runner from the RunsOn staging stack is used to execute this workflow. This allows you to isolate different workflows in different environments, with different IAM permissions or stack configurations, etc.
See Environments for more details.
Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/env=staging
region
This label is only useful if you have set up multiple RunsOn stacks in different AWS regions. If so, you can use this label to specify which region to launch the runner in.
If you have multiple stacks in different regions listening on the same repositories, make sure that all your workflows use the region label, to ensure that only one stack launches a runner for a given job.
E.g. region=eu-west-1 will ensure that the runner is launched in the eu-west-1 region (assuming a RunsOn stack has been set up in that region).
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/region=eu-west-1
debug
Whether to enable debug mode for the job (default: false). Note: this is only available on Linux runners for now.
E.g. debug=true will ensure that the runner is launched with debug mode enabled.
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/debug=true
When enabled, the runner will pause before executing the first step of the job. You can then connect to the runner using SSH or SSM to debug the job. When you are ready to resume the job, you simply need to remove the debug lock file:
sudo rm /opt/runs-on/hooks/debug.lock
At that point the runner will resume the job execution.