Magic Cache for GitHub Actions
RunsOn Magic Cache is a new feature that lets you transparently switch the GitHub Actions caching backend to a local, fast, and unlimited S3 cache backend.
This is a new beta feature in v2.6.3, and you can opt in to it by setting the extras=s3-cache job label.
How it works
When the magic cache is enabled, the RunsOn agent will start a sidecar cache backend on the runner, and direct the various caching actions (including the gha cache backend for Docker Buildx) to send caching requests to the sidecar.
The sidecar will then transparently forward the requests to the S3 cache backend, and return the cache hits back to the caching actions.
This means that you can use the same caching configuration for your builds, regardless of whether you're using the official GitHub Actions cache or the S3 cache.
How to use
Using the magic cache is simple:
- Set an additional job label: extras=s3-cache.
- Add the runs-on/action@v1 action to your job.
- Use the normal caching actions as before.
Note that when running on official GitHub Actions runners, the runs-on/action@v1 action will just be a no-op, so it's fine to keep it in your workflows even if you mix official and RunsOn runners. This action will also soon be used to configure more advanced aspects of the runner, like CloudWatch monitoring, SSM agent enablement, etc.
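For reference, a minimal job sketch is shown below. The runner labels, cache path, and key are illustrative and should be adapted to your own RunsOn configuration and toolchain:

```yaml
# Minimal sketch, assuming illustrative runner labels and an npm-based project.
name: build
on: push

jobs:
  build:
    # RunsOn job labels; the runner size label is illustrative
    runs-on: [runs-on, runner=2cpu-linux-x64, extras=s3-cache]
    steps:
      - uses: runs-on/action@v1        # no-op on official GitHub runners
      - uses: actions/checkout@v4
      - uses: actions/cache@v4         # unchanged caching configuration
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('**/package-lock.json') }}
      - run: npm ci
```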
Magic dependency caching on S3
As an example, the workflow below will compare the speed of the magic cache vs official runners for multiple cache sizes by generating a random file, saving it to the cache, and then restoring it from the cache. It uses the official actions/cache action to save and restore the cache, but it should also work with any language-specific caching actions (e.g. actions/setup-node, ruby/setup-ruby, etc.).
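A sketch of such a benchmark is shown below; the cache sizes, runner labels, and key scheme are illustrative rather than the exact workflow used for the comparison:

```yaml
# Illustrative benchmark sketch: generate a random file, save it to the cache,
# then restore it. Swap the runs-on labels for ubuntu-latest to compare with
# official runners.
name: cache-benchmark
on: workflow_dispatch

jobs:
  benchmark:
    strategy:
      matrix:
        size_mb: [64, 256, 1024]       # cache sizes to compare (illustrative)
    runs-on: [runs-on, runner=2cpu-linux-x64, extras=s3-cache]
    steps:
      - uses: runs-on/action@v1
      - name: Generate a random file of the given size
        run: |
          mkdir -p cache-test
          dd if=/dev/urandom of=cache-test/blob bs=1M count=${{ matrix.size_mb }}
      - name: Save to cache
        uses: actions/cache/save@v4
        with:
          path: cache-test
          key: benchmark-${{ matrix.size_mb }}mb-${{ github.run_id }}
      - name: Drop the local copy so the restore does real work
        run: rm -rf cache-test
      - name: Restore from cache
        uses: actions/cache/restore@v4
        with:
          path: cache-test
          key: benchmark-${{ matrix.size_mb }}mb-${{ github.run_id }}
```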
Magic docker layers caching on S3
Same as above: use extras=s3-cache and runs-on/action@v1, and use the gha cache backend for Docker Buildx, as you would with the official GitHub Actions runners. Your Docker layers will transparently be stored in your local S3 bucket.
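For example, a sketch along these lines (runner labels, image tag, and build context are placeholders):

```yaml
# Sketch of Docker layer caching through the gha backend; runner labels and
# the image tag are placeholders.
name: docker-build
on: push

jobs:
  docker:
    runs-on: [runs-on, runner=4cpu-linux-x64, extras=s3-cache]
    steps:
      - uses: runs-on/action@v1
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: my-app:ci
          cache-from: type=gha          # same configuration as on official runners
          cache-to: type=gha,mode=max
```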
FAQ
Speed
Speed depends on file size and instance type: the larger the files and the bigger the instance, the greater the speedup for saving and restoring, up to 5x compared to official runners.
Size
The cache is UNLIMITED; items are evicted based on the cache eviction policy defined in your RunsOn stack (default: 10 days).
Cost
The cache mechanism is free, and bandwidth (ingress/egress) is also free, since everything stays within your VPC and the VPC has a (free) S3 gateway endpoint attached. You only pay for S3 storage.
Limitations
- The magic cache is only available on Linux runners for now.
- Currently, the EC2 instance role assigned to the runners has full access to the entire cache bucket, so make sure you are fine with the lack of isolation across the repositories handled by RunsOn if you want to use this cache. While this might actually be a desirable feature for certain use cases, future versions will restrict cache access to only the current repository.
- The magic cache is opt-in only, and you need to add the runs-on/action@v1 action to your jobs.
- The magic cache should cohabit peacefully with actions/upload-artifact and actions/download-artifact, but let me know if you run into any issues.
- This is a beta feature, and some language-specific actions that use an outdated cache toolkit version might not work as expected. Please let me know if you run into any issues.