
Blog

Changelog v1.6.2 - Restore launch queue size to sane limit

RunsOn v1.6.2 has just been released 🎉.

This is mostly a maintenance release, but an important one for users who launch a lot of runners in a short time.

By default, EC2 has pretty aggressive rate limits on the RunInstances API (2/s, with some burst allowed). If you go over that limit, your runner will fail to start and RunsOn will send you an email alert telling you about it (RequestLimitExceeded).

Until now the queue size was set to 8/s, but since most users install RunsOn in new AWS accounts, which come with the low default of max 2/s, this could cause issues.

So from now on RunsOn will also default to 2/s, and if your account has an increased quota for the RunInstances API, you can specify a higher number with the new AppEc2QueueSize CloudFormation template parameter:

EC2 queue size setting
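If you update the stack through the AWS CLI rather than the console, a minimal sketch looks like this (the stack name and the value 4 are placeholders; only the AppEc2QueueSize parameter name comes from the template, and any other non-default parameters of your stack should also be passed with UsePreviousValue=true):

Terminal window
# Raise the launch queue size on an existing RunsOn stack (sketch)
$ aws cloudformation update-stack \
    --stack-name runs-on \
    --use-previous-template \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=AppEc2QueueSize,ParameterValue=4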

Have a great day!

Changelog v1.6.1 - ARM64 full image, support for S3 cache for workflows

RunsOn v1.6.1 has just been released 🎉.

Availability of ARM64 full image

New image name available: ubuntu22-full-arm64.

This image is mostly compatible with the GitHub Actions ecosystem, and comes with a lot of development and CI tooling (docker, kubernetes, nodejs, various languages, etc.) preinstalled. As soon as GitHub releases an official ARM64 image, we will align the RunsOn image with it.

Boot time for the ARM64 full image is only 20s, vs 40s for the x64 full image.

Support for unlimited cache to S3

RunsOn will now create an S3 bucket dedicated to cache artefacts. This bucket is automatically accessible to the runners thanks to an IAM EC2 Instance Profile, so no credentials need to be set up.

Then, simply replace actions/cache@v4 with runs-on/cache@v4 and your workflows will now store their caches on the local S3 bucket, which allows for:

  • much faster download/restore speed (300MB/s+ vs 50-100MB/s on GitHub)
  • UNLIMITED cache storage size (GitHub only gives you 10GB).
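If you have many workflow files, one quick way to do the swap in a single pass is a one-liner like the following (GNU sed assumed; review the resulting diff before committing):

Terminal window
# Replace actions/cache@v4 with runs-on/cache@v4 in every workflow file
$ grep -rlF 'actions/cache@v4' .github/workflows \
    | xargs sed -i 's|actions/cache@v4|runs-on/cache@v4|g'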

Pretty excited about that one! You can read more about it here.

s3 action cache

New info in logs

Added the AMI ID, as well as the availability zone, to the log outputs.

Log outputs

How to verify that VPC traffic to S3 is going through your S3 gateway?

Gateway endpoints for Amazon S3 are a must-have whenever your EC2 instances send traffic to and receive traffic from S3, because they keep that traffic within the AWS network, which means better security, bandwidth, and throughput, at lower cost. They are easy to create and add to your VPC route tables.
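For example, with the AWS CLI (the VPC and route table IDs are placeholders, and the service name must match your region):

Terminal window
# Create an S3 gateway endpoint and attach it to a route table (us-east-1)
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0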

But how do you verify that traffic is indeed going through the S3 gateway, and not crossing the public internet?

Using traceroute, you can probe the route and see whether you are hitting the S3 servers directly (i.e. with no intermediate gateway). In this example, the instance is running in a VPC located in us-east-1:

Terminal window
$ traceroute -n -T -p 443 s3.us-east-1.amazonaws.com
traceroute to s3.us-east-1.amazonaws.com (52.216.215.72), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 52.216.215.72 0.890 ms 0.916 ms 0.892 ms
Terminal window
$ traceroute -n -T -p 443 s3.amazonaws.com
traceroute to s3.amazonaws.com (52.217.139.232), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 52.217.139.232 0.268 ms 0.275 ms 0.252 ms

Both outputs produce the expected result, i.e. no intermediate gateway. This is what you should see when accessing a bucket located in the us-east-1 region.

Let’s see what happens if we try to access an S3 endpoint located in another region:

Terminal window
$ traceroute -n -T -p 443 s3.eu-west-1.amazonaws.com
traceroute to s3.eu-west-1.amazonaws.com (52.218.25.211), 30 hops max, 60 byte packets
1 * * *
2 240.4.88.37 0.275 ms 240.0.52.64 0.265 ms 240.4.88.39 0.215 ms
3 240.4.88.49 0.205 ms 240.4.88.53 0.231 ms 240.4.88.51 0.206 ms
4 100.100.8.118 1.369 ms 100.100.6.96 0.648 ms 240.0.52.57 0.233 ms
5 240.0.228.5 0.326 ms * *
6 240.0.32.16 0.371 ms 240.0.48.30 0.362 ms *
7 * 240.0.228.31 0.251 ms *
8 * * *
9 * * 240.0.32.27 0.392 ms
10 * * *
11 * 242.0.154.49 1.321 ms *
12 * * 52.93.28.131 1.491 ms
13 * * 100.100.6.108 1.286 ms
14 100.92.212.7 67.909 ms 52.218.25.211 67.356 ms 67.929 ms

As you can see, the route is completely different and, as expected, does not go straight to the S3 endpoint.

TL;DR: make sure your route tables are correct, and only access S3 buckets located in the same region as your VPC.
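To inspect the route tables themselves, you can look for the route added by the gateway endpoint, which targets the S3 managed prefix list rather than a CIDR block (the route table ID below is a placeholder):

Terminal window
# Routes added by an S3 gateway endpoint use a prefix list (pl-...) destination
# and a vpce-... gateway target
$ aws ec2 describe-route-tables \
    --route-table-ids rtb-0123456789abcdef0 \
    --query 'RouteTables[].Routes[?DestinationPrefixListId!=`null`]'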

GitHub Action runner images (AMI) for AWS EC2

As part of the RunsOn service, we automatically maintain and publish replicas of the official GitHub runner images 🔗 as AWS-formatted images (AMIs).

New images are automatically released within 48h of the official upstream release, and are slightly trimmed to remove outdated software and (mostly useless) caches.

This means the disk usage is below 30GB, making it sufficiently small to boot in around 40s. The runner binary is also preinstalled in the images.

Supported regions:

  • us-east-1
  • eu-west-1

AMIs can be found using the following search:

  • name: runs-on-ubuntu22-full-x64-*
  • owner: 135269210855
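For example, the most recent matching AMI can be resolved with the AWS CLI (region and output fields are illustrative):

Terminal window
# Find the latest RunsOn ubuntu22 full x64 AMI in us-east-1
$ aws ec2 describe-images \
    --region us-east-1 \
    --owners 135269210855 \
    --filters "Name=name,Values=runs-on-ubuntu22-full-x64-*" \
    --query 'sort_by(Images, &CreationDate)[-1].{ImageId: ImageId, Name: Name}'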

You can find more details at https://github.com/runs-on/runner-images-for-aws 🔗.

Changelog v1.5.0

RunsOn v1.5.0 has just been released 🎉.

Faster boot for large images

ubuntu22-full-x64:

  • Before: between 140 and 160s from launch to runner ready
  • After: between 40 and 60s from launch to runner ready

Proper disk resizing

The volume is now properly resized if the given hdd size is greater than the AMI size.
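If you want to double-check on a runner, the usual disk tools will show the expanded volume and filesystem (the sizes reported depend on the hdd size you requested):

Terminal window
# On the runner, verify that the root volume and filesystem match the requested size
$ lsblk
$ df -h /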

New timings section in logs

Very useful to compare boot times between runner types / images.


New license types

  • standard
  • sponsorship