RunsOn

Job labels

Learn how to configure your GitHub Actions runners with specific requirements in terms of CPU, GPU, RAM, volume size, and more.

RunsOn supports flexible runner configuration using runs-on: labels, allowing you to set your runner’s CPU, RAM, instance type family and volume size at runtime, depending on your workflow requirements.

Available labels

family

Instance type family. Can be one of:

  • a full instance type name, e.g. family=c7a.large,
  • a partial name, e.g. family=c7 (automatically expanded to the c7* wildcard),
  • a wildcard name, e.g. family=c7a.*, particularly useful when multiple instance types share the same prefix but you want a specific one (e.g. m7i vs m7i-flex, c7g vs c7gd, etc.),
  • multiple values, separated by +, e.g. family=c7+c6, family=m7i.*+m7a, etc.

Partial names and wildcards are useful when you want to specify a range of instance types, but don’t want to specify each one individually.
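The expansion and matching behaviour described above can be sketched with Python's fnmatch. Note that family_matches is a hypothetical illustration of the documented rules, not RunsOn's actual implementation:

```python
from fnmatch import fnmatch

def family_matches(label: str, instance_type: str) -> bool:
    """Illustrative sketch of how a family label could be matched:
    - a full instance type name matches exactly (c7a.large)
    - a partial name is expanded to a trailing wildcard (c7 -> c7*)
    - explicit wildcards are used as-is (c7a.*)
    - multiple values separated by + are tried in turn
    """
    for part in label.split("+"):
        # Full names contain a ".", explicit wildcards contain a "*";
        # anything else is a partial name and gets a trailing wildcard.
        pattern = part if ("*" in part or "." in part) else part + "*"
        if fnmatch(instance_type, pattern):
            return True
    return False
```

This also shows why the doc suggests family=m7i.* when you want to exclude m7i-flex: the partial name m7i expands to m7i*, which matches both.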

If the family definition matches multiple instance types, AWS will select the instance type that meets the requirements and is ranked best according to the selected spot allocation strategy at the time of launch. Occasionally a beefier instance can be cheaper than a smaller one on the spot market.

E.g.

  • family=c7a+c6 will ensure that the runner is scheduled on an instance type in the c7a* or c6* instance type family.
  • family=c7a.2xlarge will ensure that the runner always runs on a c7a.2xlarge instance type (however, if AWS has no capacity left, the runner may fail to launch; it is always recommended to use a range of instance types instead of a single one).
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/family=m6+c6

cpu

Number of vCPUs to request (default: 2). If you set multiple values, RunsOn will request any instance matching the lowest up to the highest value.

E.g.

  • cpu=4 will ensure that the runner has 4 vCPUs (min=4, max=4).
  • cpu=4+16 will ensure that the runner has at least 4 vCPUs but also consider instances with up to 16 vCPUs.

Setting a variable amount of vCPUs is useful for expanding the pool of available spot instances, if your workflow is not too sensitive to the exact number of vCPUs.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7+r7/cpu=2+8/image=ubuntu22-full-x64

ram

Amount of memory to request, in GB (default: 0). If you set multiple values, RunsOn will request any instance matching the lowest up to the highest value.

E.g.

  • ram=16 will ensure that the runner has 16GB of RAM (min=16, max=16).
  • ram=16+64 will ensure that the runner has at least 16GB of RAM but also consider instances with up to 64GB of RAM.
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/ram=16/image=ubuntu22-full-x64

image

Runner image to use (see Runner images). Especially useful when you want to use a custom image, or don’t want to specify a runner label (in this case, family is required).

E.g.

  • image=ubuntu22-full-x64 will ensure that the runner is launched with the ubuntu22-full-x64 image.
  • image=ubuntu22-full-arm64 will ensure that the runner is launched with the ubuntu22-full-arm64 image.
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/cpu=2/image=ubuntu22-full-arm64

ami

AMI to use for the runner. Can be used if you don’t want to declare a custom image (see above), or for quick testing. For long-term use, declaring a custom image is recommended, since a custom image can match AMIs based on a wildcard.

The AMI must be a valid AMI ID for the region where the runner is launched, and must either be a public image, or be accessible to the stack’s IAM role (by default the AMIs within the same account are accessible).

E.g.

  • ami=ami-0123456789abcdef0 will ensure that the runner is launched with the ami-0123456789abcdef0 AMI.
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/family=m7+c7/ami=ami-0123456789abcdef0

volume

Added in v2.9

Volume configuration with flexible size and performance options. Format: size:type:throughput:iops (e.g., volume=80gb:gp3:125mbs:3000iops).

All parts are optional and can be specified in any order:

  • Size: Volume size (e.g., 80gb, 500g, 1tb)
  • Type: EBS volume type - gp3, gp2, io1, io2, st1, sc1, standard (default: gp3)
  • Throughput: Throughput in MiB/s (e.g., 125mbs, 250mbps) - only for gp3 volumes (range: 125-1000).
  • IOPS: IOPS performance (e.g., 3000iops, 4000iops) - for gp3, io1, io2 volumes (range for gp3: 3000-4000)

For gp3 volumes, AWS requires the throughput/IOPS ratio to be ≤ 0.25 MiBps per IOPS. RunsOn will automatically adjust IOPS if needed to meet this requirement.

If you require more flexible disk sizes or maximum performance, consider using instance types that come with locally-attached NVMe disks (see here). RunsOn will automatically mount them (in a RAID-0 for better performance if multiple disks are detected) and point the runner workspace and docker lib folder to it.

Personal recommendation: if you have access to it, the i7ie family is very good (a mix of both great CPU performance, and local instance storage).

Examples:

  • volume=80gb - 80GB volume with default type (gp3)
  • volume=200gb:gp3 - 200GB gp3 volume
  • volume=100gb:gp3:500mbs:4000iops - 100GB gp3 volume with 500 MiB/s throughput and 4000 IOPS
  • volume=gp3:750mbs:4000iops - Use default size with custom throughput and IOPS
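The format rule above (colon-separated parts, each optional, in any order) can be sketched as a small parser. parse_volume is a hypothetical illustration, not RunsOn's actual parser; the default volume type follows the documented gp3:

```python
import re

def parse_volume(label: str) -> dict:
    """Parse a volume label such as '80gb:gp3:500mbs:4000iops'.
    Each colon-separated part is recognized by shape, so order
    does not matter and any part may be omitted."""
    spec = {"size": None, "type": "gp3", "throughput_mibps": None, "iops": None}
    for part in label.lower().split(":"):
        if re.fullmatch(r"\d+(g|gb|t|tb)", part):      # e.g. 80gb, 500g, 1tb
            spec["size"] = part
        elif m := re.fullmatch(r"(\d+)mbp?s", part):   # e.g. 125mbs, 250mbps
            spec["throughput_mibps"] = int(m[1])
        elif m := re.fullmatch(r"(\d+)iops", part):    # e.g. 3000iops
            spec["iops"] = int(m[1])
        elif part in {"gp3", "gp2", "io1", "io2", "st1", "sc1", "standard"}:
            spec["type"] = part
        else:
            raise ValueError(f"unrecognized volume part: {part!r}")
    return spec
```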
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/volume=80gb:gp3:500mbs:4000iops

disk

Added in v2.5.7. Deprecated since v2.9 - use volume instead.

Legacy disk configuration. One of default or large (default: default). This label is automatically converted to the equivalent volume configuration during processing.

Migration guide: Use the volume label for more flexible configuration:

  • disk=large → volume=80gb
  • For custom sizes and performance: volume=100gb:gp3:500mbs:4000iops
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/disk=large  # deprecated

spot

Whether to attempt to use spot pricing (default: true, equivalent to price-capacity-optimized). Can be set to an explicit spot allocation strategy.

E.g. spot=false will ensure that the runner is launched with regular on-demand pricing.

Supported allocation strategies on RunsOn include:

  • spot=price-capacity-optimized or spot=pco: This strategy balances between price and capacity to optimize cost while minimizing the risk of interruption.
  • spot=lowest-price or spot=lp: This strategy focuses on obtaining the lowest possible price, which may increase the risk of interruption.
  • spot=capacity-optimized or spot=co: This strategy prioritizes the allocation of instances from the pools with the most available capacity, reducing the likelihood of interruption.

For more details on each strategy, refer to the official AWS documentation on Spot Instance allocation strategies.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/spot=lowest-price

retry

Added in v2.6.0

Retry behaviour. Currently only supported for spot instances.

  • retry=when-interrupted: default for spot instances. Will retry the interrupted job at most once, using an on-demand instance.
  • retry=false: opt out of the retry mechanism.
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/retry=false

ssh

Whether to enable SSH access (default: false).

E.g.

  • ssh=false will ensure that the runner is launched with SSH access fully disabled.
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/ssh=false

private

Whether to launch a runner in a private subnet, and benefit from a static egress IP.

The default for this label depends on your Stack configuration for the Private parameter:

  • If the stack parameter Private is set to true, private subnets will be enabled but runners will be public by default. You need to set the job label private=true to launch a runner in the private subnet.

  • If the stack parameter Private is set to always, runners will be private by default and you must set the job label private=false to launch a runner in the public subnet.

  • If the stack parameter Private is set to only, runners can only be launched in private subnets and you will get an error if you try to specify the job label private=false.

  • If the stack parameter Private is set to false, runners can only be launched in public subnets and you will get an error if you try to specify the job label private=true.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/private=true

extras

Extra configuration for the runner.

Currently supported values:

  • s3-cache (since v2.6.3): enables the magic cache feature (available for Linux and Windows runners).
  • ecr-cache (since v2.8.2): enables the ephemeral registry feature, if enabled at the stack level (available for Linux runners only).
  • efs (since v2.8.2): enables the EFS feature, if enabled at the stack level (available for Linux runners only).
  • tmpfs (since v2.8.2): enables the tmpfs feature (available for Linux runners only).
  • otel (since v2.12.0, beta): starts a local OTEL collector on the runner and ships runner logs, traces, and host metrics to your configured OpenTelemetry backend.

E.g. extras=s3-cache will enable the magic cache.

otel uses the stack OTLP settings such as OtelExporterEndpoint and OtelExporterHeaders.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/extras=s3-cache

You can also combine multiple extras:

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/extras=s3-cache+ecr-cache+tmpfs+efs+otel

debug

Whether to enable debug mode for the job (default: false). Note: this is only available on Linux runners for now.

E.g. debug=true will ensure that the runner is launched with debug mode enabled.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/debug=true

When enabled, the runner will pause before executing the first step of the job. You can then connect to the runner using SSH or SSM to debug the job. When you are ready to resume the job, you simply need to remove the debug lock file:

sudo rm /runs-on/hooks/debug.lock

At that point the runner will resume the job execution.

Special labels

runner

Using the previous labels, you can configure every aspect of a runner right from the runs-on: workflow job definition.

However, if you want to reuse a runner configuration across multiple workflows, you can define a custom runner type in a .github/runs-on.yml configuration file in the repository where you want those runners to be available, and reference that runner type with the runner label.

E.g.

  • runner=16cpu-linux-x64 will ensure that the runner is launched with the 16cpu-linux-x64 runner type. Learn more about default and custom runner configurations for Linux and Windows.

Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=16cpu-linux-x64

env

The RunsOn Environment to target. Defaults to production.

E.g.

  • env=staging will ensure that only a runner from the RunsOn staging stack is used to execute this workflow. This allows you to isolate different workflows in different environments, with different IAM permissions or stack configurations, etc.

See Environments for more details.

Important: this label cannot be set as part of a custom runner configuration in the .github/runs-on.yml file.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/env=staging

region

This label is only useful if you have set up multiple RunsOn stacks in different AWS regions. If so, then you can use this label to specify which region to launch the runner in.

If you have multiple stacks in different regions listening on the same repositories, make sure that all your workflows use the region label, to ensure that only one stack launches a runner for a given job.

E.g.

  • region=eu-west-1 will ensure that the runner is launched in the eu-west-1 region (assuming a RunsOn stack has been set up in that region).
# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{github.run_id}}/runner=2cpu-linux-x64/region=eu-west-1

pool

Target a specific runner pool for this job. Pools are pre-provisioned runners that stay warmed up and ready, dramatically reducing queue times from ~25 seconds (cold-start) to under 6 seconds for hot instances.

When using the pool label, all other RunsOn labels (like cpu, ram, family) are ignored. The runner specification is determined entirely by the pool configuration defined in .github-private/.github/runs-on.yml.

E.g. pool=small-x64 will route the job to instances from the small-x64 pool.

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on: runs-on/pool=small-x64
    # or for more deterministic runner assignment:
    runs-on: runs-on=${{github.run_id}}/pool=small-x64

Automatic overflow: If the pool is exhausted (all instances in use), RunsOn automatically creates a cold-start instance to handle the job, ensuring jobs never fail due to lack of pool capacity.

Important notes:

  • Pool configurations must be defined in .github-private/.github/runs-on.yml
  • The .github-private repository must be accessible to the RunsOn GitHub App
  • This label cannot be set as part of a custom runner configuration

For comprehensive documentation about configuring and using pools, see the Runner pools guide and pool configuration reference.

Label syntax

How it works

The way you can define your requirements is by specifying custom labels for the runs-on: parameter in your workflow.

For instance, if you want a runner with 4 CPUs, 16GB RAM, using either m7a or m7i-flex instance types:

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/cpu=4/ram=16/family=m7a+m7i-flex/image=ubuntu22-full-x64
    # can also be written with comma-separated values instead of slash-separated values:
    runs-on: runs-on=${{ github.run_id }},cpu=4,ram=16,family=m7a+m7i-flex,image=ubuntu22-full-x64

RunsOn also supports the array syntax, but we recommend the single-string syntax for most workflows:

# .github/workflows/my-workflow.yml
jobs:
  test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - ram=16
      - family=m7a+m7i-flex
      - image=ubuntu22-full-x64

Single string vs array syntax

GitHub does not interpret these two forms the same way:

  • With the single-string syntax, GitHub sees one label. Another job can only reuse that runner if it requests the exact same string.
  • With the array syntax, GitHub sees multiple labels. Another job can match that runner if it requests a subset of those labels.

That means these two array-based jobs can overlap:

jobs:
  test1:
    runs-on:
      - runs-on=${{ github.run_id }}
      - cpu=4
      - family=m7a+m7i-flex

  test2:
    runs-on:
      - runs-on=${{ github.run_id }}
      - family=m7a+m7i-flex

test2 can steal the runner launched for test1, because GitHub matches all requested labels and test2 asks for a subset of test1’s labels.

The equivalent single-string syntax does not overlap:

jobs:
  test1:
    runs-on: runs-on=${{ github.run_id }}/cpu=4/family=m7a+m7i-flex

  test2:
    runs-on: runs-on=${{ github.run_id }}/family=m7a+m7i-flex

Those are two different labels, so GitHub will not consider them interchangeable.

Array syntax can still be safe if one of the labels is fully unique, for example runs-on=${{ github.run_id }}-${{ strategy.job-index }}. The reason we still recommend the single-string syntax is that it is much harder to accidentally create overlapping label sets, and easier to reason about when debugging runner stealing.
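The matching rule behind runner stealing can be sketched as a subset check (can_take_runner is a simplified illustration of GitHub's label matching, not its actual code; the run id 12345 stands in for ${{ github.run_id }}):

```python
def can_take_runner(job_labels: set[str], runner_labels: set[str]) -> bool:
    """A job can run on a runner if every label the job requests
    is present on the runner (subset matching)."""
    return job_labels <= runner_labels

# With the array syntax, the runner launched for test1 carries all of
# test1's labels, and test2's smaller label set is a subset of them:
runner_for_test1 = {"runs-on=12345", "cpu=4", "family=m7a+m7i-flex"}
test2_array = {"runs-on=12345", "family=m7a+m7i-flex"}

# With the single-string syntax, each job requests exactly one label,
# and the two strings differ, so the runners are not interchangeable:
test1_single = {"runs-on=12345/cpu=4/family=m7a+m7i-flex"}
test2_single = {"runs-on=12345/family=m7a+m7i-flex"}
```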

Custom runner definitions

Instead of repeating labels across workflows, you can define reusable runner configurations in a .github/runs-on.yml file at the repository or organization level. Reference them with the runner label (e.g. runner=my-custom-runner).

See Repository configuration for details.