RunsOn gives you access to the full range of EC2 instances, with the ability to select specific GPU instance types for your workflows, at a much lower cost than the official GitHub Actions GPU runners. There is also no plan restriction.
To get started with GPU runners, we recommend defining a custom image configuration that references the latest Deep Learning AMI, and then a custom runner configuration that references that image and the GPU instance type you want to use.
Configuration file
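As a minimal sketch (the `dlami-x64` image name, `gpu-nvidia` runner name, AMI owner ID, and AMI name pattern below are illustrative assumptions; adjust them to the Deep Learning AMI variant and instance type you actually want):

```yaml
# .github/runs-on.yml (sketch)
images:
  dlami-x64:
    platform: "linux"
    arch: "x64"
    # Assumed AWS-owned account publishing the Deep Learning AMIs; verify for your region
    owner: "898082745236"
    # Assumed name pattern; pick the DLAMI variant you need
    name: "Deep Learning OSS Nvidia Driver AMI GPU PyTorch * (Ubuntu 22.04)*"

runners:
  gpu-nvidia:
    family: ["g4dn.xlarge"]
    image: dlami-x64
```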
Workflow job definition
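A matching job definition could then look like the following (the `gpu-nvidia` label ties back to the runner sketched above, and the two display steps produce the outputs referenced below):

```yaml
jobs:
  gpu-smoke-test:
    runs-on:
      - runs-on=${{ github.run_id }}
      - runner=gpu-nvidia
    steps:
      - name: Display NVIDIA SMI details
        run: nvidia-smi
      - name: Display block storage
        run: sudo lsblk -l
```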
Note that runners will take a bit longer than usual to start due to the base image being very large (multiple CUDA versions, etc.). If you know exactly what you require, you can create a more streamlined custom image with only what you need, using the Building custom AMI with packer guide.
Example output for the Display NVIDIA SMI details step:
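Abridged and illustrative only; driver and CUDA versions depend on the AMI you selected, and the exact table layout varies across nvidia-smi releases:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 5xx.xx        Driver Version: 5xx.xx      CUDA Version: 12.x     |
|-------------------------------+----------------------+----------------------+
|   0  Tesla T4           ...   |     0MiB / 15360MiB  |      0%      Default |
+-------------------------------+----------------------+----------------------+
```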
Using locally-attached NVMe disk(s)
GPU instances come with locally-attached NVMe disk(s) of varying sizes, which can be used to speed up your workflows. They come free with the instance, so you don't have to worry about the cost of the storage.
In our example with g4dn.xlarge, the NVMe disk is automatically mounted at /opt/dlami/nvme. You can use sudo lsblk -l to list the available block devices and their mount points.
Example output for the Display block storage step:
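Illustrative output, assuming a g4dn.xlarge with its single ~125 GB instance-store NVMe volume (device names and sizes will differ on other instance types):

```
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1   259:0    0    80G  0 disk
nvme0n1p1 259:1    0  79.9G  0 part /
nvme1n1   259:2    0 116.4G  0 disk /opt/dlami/nvme
```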
Cost
GitHub provides GPU runners (gpu-t4-4-core) with 4 vCPUs, 28 GB RAM, and a Tesla T4 GPU with 16 GB VRAM, for $0.07/min.
By comparison, even with on-demand pricing, the cost of running a GPU runner with the same Tesla T4 GPU card, 4 vCPUs, and 16 GB RAM (g4dn.xlarge) on AWS with RunsOn is $0.009/min, i.e. roughly 87% cheaper. If using spot pricing, the cost is even lower, at $0.004/min, i.e. more than 10x cheaper.
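(For reference, the per-minute figure follows from the hourly rate: g4dn.xlarge on-demand is about $0.526/hour in us-east-1, and $0.526 ÷ 60 ≈ $0.0088/minute, roughly one eighth of GitHub's per-minute price.)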