
Job labels

RunsOn supports flexible runner configuration using runs-on: labels, allowing you to set your runner’s CPU, RAM, instance type family and disk size at runtime, depending on your workflow requirements. This flexibility lets you optimize your runners for each job and ensure you do not pay for unused resources.

It is also very useful if you ever find yourself splitting tests across many different jobs, because with RunsOn you now have access to far larger runners, up to hundreds of CPUs if you like! So stop wasting engineering resources working around your CI provider’s self-imposed constraints, and just ask for a far beefier runner than what GitHub provides (AWS instances also have faster CPUs than GitHub Actions runners).

How it works

You define your requirements by specifying custom labels for the runs-on: parameter in your workflow job.

For instance, if you want a runner with 4 CPUs, 16GB RAM, using either m7a or m7i-flex instance types:

.github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on # required so that RunsOn knows it needs to process the workflow
      - cpu=4
      - ram=16
      - family=m7a+m7i-flex
      - image=ubuntu22-full-x64

It is highly recommended to include the run ID as part of the job labels. In case of a failure, this allows you to easily find the logs for that runner:

.github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - ...

Finally, since v2.5.4, if the array syntax for labels is not convenient, you can use a single label with / as the separator. Example:

.github/workflows/my-workflow.yml
jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/cpu=4/ram=16/family=m7a+m7i-flex/image=ubuntu22-full-x64

Available labels

cpu

Number of vCPUs to request (default: 2). If you set multiple values, RunsOn will consider any instance whose vCPU count falls between the lowest and the highest value.

E.g.

  • cpu=4 will ensure that the runner has 4 vCPUs (min=4, max=4).
  • cpu=4+16 will ensure that the runner has at least 4 vCPUs, but will also consider instances with up to 16 vCPUs.

Requesting a range of vCPUs is useful for expanding the pool of available spot instances, if your workflow is not too sensitive to the exact number of vCPUs.
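
For example, a minimal sketch (the job name and step are illustrative) of a job that accepts anything from 4 to 16 vCPUs:

jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4+16 # at least 4 vCPUs, up to 16 vCPUs
      - image=ubuntu22-full-x64
    steps:
      - run: nproc # prints the number of vCPUs actually allocated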

ram

Amount of memory to request, in GB (default: 0). If you set multiple values, RunsOn will consider any instance whose memory falls between the lowest and the highest value.

E.g.

  • ram=16 will ensure that the runner has 16GB of RAM (min=16, max=16).
  • ram=16+64 will ensure that the runner has at least 16GB of RAM, but will also consider instances with up to 64GB of RAM.
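
As with cpu, a sketch (values, job name and step are illustrative) of a job requesting a fixed vCPU count together with a RAM range:

jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=8
      - ram=16+64 # at least 16GB of RAM, up to 64GB
      - image=ubuntu22-full-x64
    steps:
      - run: free -h # prints the memory actually available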

family

Instance type family. Can either be the full name (family=c7a.large), or a partial name (family=c7). Can accept multiple values (family=c7+c6).

If you specify multiple values, AWS will launch the instance that matches the requirements and is ranked best according to the selected spot allocation strategy at the time of launch. It can happen that a beefier instance is cheaper than a smaller one on the spot market.

E.g.

  • family=c7a+c6 will ensure that the runner is launched on an instance type from the c7a* or c6* instance type families.
  • family=c7a.2xlarge will ensure that the runner always runs on a c7a.2xlarge instance type (however, if AWS has no capacity left, the runner could fail to launch; it is always recommended to use a range of instance types instead of a single one).
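
A sketch (values, job name and step are illustrative) of a job restricted to the m7a* and m7i* families, with cpu and ram left to select the size:

jobs:
  build:
    runs-on:
      - runs-on=${{ github.run_id }}
      - family=m7a+m7i # any instance type whose name starts with m7a or m7i
      - cpu=8
      - ram=32
      - image=ubuntu22-full-x64
    steps:
      - run: lscpu # illustrative step, shows the CPU model of the chosen instance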

image

Runner image to use (see Runner images).

E.g.

  • image=ubuntu22-full-x64 will ensure that the runner is launched with the ubuntu22-full-x64 image.

disk

Added in v2.5.7

Disk configuration. One of default or large. Corresponds to the disk configurations defined in the CloudFormation template.
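
A sketch (the job name and step are illustrative) of a disk-heavy job selecting the large disk configuration:

jobs:
  build:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - disk=large # use the large disk configuration from the CloudFormation template
      - image=ubuntu22-full-x64
    steps:
      - run: df -h / # shows the root volume size actually provisioned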

hdd (deprecated)

Deprecated since v2.5.7

Disk size, in GB (default: 40).

Since v2.2 this label is a bit misleading, because you can actually only get one of two disk configurations, which are defined in the CloudFormation template (by default, the small runner has 40GB and the large runner has 80GB).

For instance, if you set hdd=41 and your small runner is configured to use 40GB, the runner will be launched with the large runner configuration.

The disk size and volume throughput for each runner size can be configured when you install or update RunsOn. High volume throughput is useful when booting large custom AMIs, as it reduces the time it takes to download the AMI, at the cost of a more expensive instance.

Also note that RunsOn will not take this label into account if the value is lower than the size of the root volume of the selected AMI (e.g. the default Linux AMI is 30GB, so you can’t set the small runner disk size to 20GB if you plan on using that image). In that case it will auto-select the runner configuration whose disk size is at least equal to the AMI root volume size, or fail if it can’t find one.
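
If you are still on this label, a sketch (values and job name are illustrative) of a job that, with the default 40GB/80GB configurations, would be mapped to the large disk configuration:

jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - hdd=80 # deprecated: prefer disk=large on v2.5.7 and later
      - image=ubuntu22-full-x64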

spot

Whether to attempt to use spot pricing (default: true, equivalent to price-capacity-optimized). Can be set to an explicit spot allocation strategy.

E.g. spot=false will ensure that the runner is launched with regular on-demand pricing.

Supported allocation strategies on RunsOn include:

  • spot=price-capacity-optimized or spot=pco: This strategy balances price and capacity to optimize cost while minimizing the risk of interruption.
  • spot=lowest-price or spot=lp: This strategy focuses on obtaining the lowest possible price, which may increase the risk of interruption.
  • spot=capacity-optimized or spot=co: This strategy prioritizes the allocation of instances from the pools with the most available capacity, reducing the likelihood of interruption.

For more details on each strategy, refer to the official AWS documentation on Spot Instance allocation strategies ↗.
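
A sketch (job names and steps are illustrative) showing one job forced onto on-demand pricing and another using the lowest-price spot strategy:

jobs:
  deploy:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - spot=false # on-demand pricing, no interruption risk
      - image=ubuntu22-full-x64
    steps:
      - run: echo "deploying" # illustrative
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - spot=lp # lowest-price spot allocation strategy
      - image=ubuntu22-full-x64
    steps:
      - run: echo "testing" # illustrative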

ssh

Whether to enable SSH access (default: true).

E.g.

  • ssh=false will ensure that the runner is launched with SSH access fully disabled.

private

Whether to launch a runner in a private subnet, and benefit from a static egress IP.

The default for this label depends on your Stack configuration for the Private parameter:

  • If the stack parameter Private is set to true, private subnets will be enabled but runners will be public by default. You need to set the job label private=true to launch a runner in the private subnet.

  • If the stack parameter Private is set to always, runners will be private by default and you must set the job label private=false to launch a runner in the public subnet.

  • If the stack parameter Private is set to only, runners can only be launched in private subnets and you will get an error if you try to specify the job label private=false.

  • If the stack parameter Private is set to false, runners can only be launched in public subnets and you will get an error if you try to specify the job label private=true.
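
Assuming the stack parameter Private is set to true, a sketch (the job name and step are illustrative) of a job that opts into a private runner to egress through the static IP:

jobs:
  integration:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - private=true # launch in a private subnet, egress via the static IP
      - image=ubuntu22-full-x64
    steps:
      - run: echo "reaching an IP-allowlisted service" # illustrative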

Special labels

runner

Using the previous labels, you can configure every aspect of a runner right from the runs-on: workflow job definition.

However, if you want to reuse a runner configuration across multiple workflows, you can define a custom runner type in a .github/runs-on.yml configuration file in the repository where you want those runners to be available, and reference that runner type with the runner label.

E.g.

  • runner=16cpu-linux-x64 will ensure that the runner is launched with the 16cpu-linux-x64 runner type. Learn more about default and custom runner configurations for Linux and Windows.

Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.
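
A sketch (the job name and step are illustrative) of a workflow job referencing a runner type instead of inline labels:

jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - runner=16cpu-linux-x64 # pulls the full configuration from this runner type
    steps:
      - run: make test # illustrative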

env

The RunsOn Environment to target. Defaults to production.

E.g.

  • env=staging will ensure that only a runner from the RunsOn staging stack is used to execute this workflow. This allows you to isolate different workflows in different environments, with different IAM permissions or stack configurations, etc.

See Environments for more details.

Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.
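
Assuming a RunsOn stack with the staging environment name exists, a sketch (the job name and step are illustrative) of a job pinned to it:

jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - env=staging # only the staging stack will pick up this job
      - cpu=2
      - image=ubuntu22-full-x64
    steps:
      - run: echo "running in staging" # illustrative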

Custom runner definitions

If you want to avoid declaring the labels inline over and over across workflows, RunsOn supports defining custom runner names at the repository or organization level using the special .github/runs-on.yml file. You can then reference them by using the runner label with your runner name (e.g. runner=my-custom-runner).

More details about custom runner configurations can be found in the Repository configuration section.
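
As a rough sketch only (the authoritative schema lives in the Repository configuration section; the field names below are assumptions mirroring the labels above), a custom runner definition and a workflow referencing it could look like:

.github/runs-on.yml
runners:
  my-custom-runner: # name referenced by the runner= label
    cpu: 8
    ram: 32
    family: ["m7a", "m7i-flex"]
    image: ubuntu22-full-x64

.github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - runner=my-custom-runner
    steps:
      - run: make test # illustrative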