GitHub to charge $0.002/min for self-hosted runners starting March 2026

Update Dec 17th 2025: GitHub has suspended the fee for now.

GitHub recently announced that starting March 2026, they will begin charging $0.002 per minute for jobs running on self-hosted runners, including those managed by tools such as Actions Runner Controller (ARC) or other solutions like RunsOn.

Until now, self-hosted runners have been free to use on GitHub Actions - you only paid for your own infrastructure costs. Starting March 2026, GitHub will add a per-minute fee on top of your infrastructure costs for any job running on a self-hosted runner.

For context:

  • $0.002/min = $0.12/hour = $2.88/day for a runner running 24 hours
  • For 40,000 minutes/month: additional $80/month in GitHub fees
  • For 100,000 minutes/month: additional $200/month in GitHub fees
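The fee math above is easy to reproduce; a quick sketch using the minute counts from the list:

```shell
# Estimate the extra monthly GitHub fee for a given number of
# self-hosted runner minutes (figures match the examples above).
fee_per_min=0.002
for minutes in 40000 100000; do
  awk -v m="$minutes" -v f="$fee_per_min" \
    'BEGIN { printf "%d min/month -> $%.0f/month in GitHub fees\n", m, m * f }'
done
```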

RunsOn will continue to provide significant cost savings compared to GitHub-hosted runners, even with this additional fee. However, the savings margin will be reduced for some runner configurations.

To help you understand the impact, we’ve updated our pricing tools:

Our pricing page now includes a toggle to show prices with or without the GitHub self-hosted runner fee. This lets you compare:

  • Current pricing (without the fee)
  • Post-March 2026 pricing (with the $0.002/min fee included)

The pricing calculator has also been updated with the same toggle. You can now see exactly how much you’ll save with RunsOn both before and after the fee takes effect.

Even with the additional GitHub fee, RunsOn remains significantly cheaper than GitHub-hosted runners for most configurations:

  • Spot instances: Still deliver 60-90% savings depending on runner size
  • On-demand instances: Still deliver 30-60% savings for most configurations
  • Larger runners: The bigger the runner, the more you save (GitHub’s hosted runner pricing scales up faster than AWS EC2 pricing)

The fee has the largest relative impact on smaller runner sizes, where the base infrastructure cost is lowest: for 2-CPU runners, the flat $0.002/min fee represents a much larger share of the total cost.
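To make that concrete, here is a rough sketch of the fee's share of the per-minute cost of a 2-CPU runner; the ~$0.0416/hr t3.medium on-demand price is an assumed us-east-1 figure for illustration, not from the article:

```shell
# Rough share of the GitHub fee in the total per-minute cost of a
# 2-CPU runner. The $0.0416/hr instance price is an assumed figure.
awk 'BEGIN {
  infra = 0.0416 / 60   # assumed infra cost per minute
  fee   = 0.002         # GitHub self-hosted fee per minute
  printf "fee share of total cost: %.0f%%\n", 100 * fee / (infra + fee)
}'
```

Even so, the combined per-minute cost stays well below GitHub's hosted 2-core runner rate.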

Here is what you can do to prepare:

  1. Check your current usage: Review your GitHub Actions minutes to understand your monthly consumption
  2. Use our calculator: Try the updated calculator with the fee toggle enabled to see your projected costs
  3. Consider runner sizes: Larger runners provide better value as the fee is fixed per minute regardless of runner size
  4. Use spot instances: AWS spot instances remain the most cost-effective option

Cloud-init tips and tricks for EC2 instances

Working extensively with RunsOn, I’ve spent considerable time with cloud-init, the industry-standard tool that configures cloud instances at first boot. Installed by default on Ubuntu and most other Linux distributions available on AWS, cloud-init fetches metadata from the underlying cloud provider and applies the initial configuration. Here are some useful commands and techniques I’ve discovered for troubleshooting and inspecting EC2 instances.

When debugging instance startup issues, you often need to check what user-data was passed to your instance. Here are three ways to retrieve it:

1. Using the cloud-init query command

The most reliable method is the built-in cloud-init query command:

# View user data
sudo cloud-init query userdata
# View all instance data including user data
sudo cloud-init query --all

2. Reading the cached user-data file

Cloud-init stores user-data locally after fetching it:

# User data is stored in
sudo cat /var/lib/cloud/instance/user-data.txt

3. Using the instance metadata service (IMDS)

You can also query the EC2 metadata service directly:

# IMDSv2 (recommended)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data
# IMDSv1 (legacy)
curl http://169.254.169.254/latest/user-data

Accessing all EC2 metadata without additional API calls

Here’s a powerful tip: cloud-init automatically fetches all relevant data from the EC2 metadata API at startup and caches it locally. Instead of making multiple API calls, you can read everything from a single JSON file:

sudo cat /run/cloud-init/instance-data.json

This file contains a wealth of information about your instance. Let’s explore some useful queries:

Get comprehensive instance details:

sudo jq '.ds.dynamic["instance-identity"].document' /run/cloud-init/instance-data.json

Output:

{
  "accountId": "135269210855",
  "architecture": "x86_64",
  "availabilityZone": "us-east-1b",
  "billingProducts": null,
  "devpayProductCodes": null,
  "imageId": "ami-0db4eca8382e7fc27",
  "instanceId": "i-00a3d21a80694c44b",
  "instanceType": "m7a.large",
  "kernelId": null,
  "marketplaceProductCodes": null,
  "pendingTime": "2025-04-25T12:00:18Z",
  "privateIp": "10.0.1.93",
  "ramdiskId": null,
  "region": "us-east-1",
  "version": "2017-09-30"
}
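Individual fields of the identity document can be sliced out the same way. A sketch against a trimmed copy of the sample output above (querying the real file on an instance works identically):

```shell
# Extract specific fields from the identity document with jq.
# The sample file below is a trimmed copy of the output shown above.
cat > /tmp/identity-doc.json <<'EOF'
{ "accountId": "135269210855", "region": "us-east-1", "instanceType": "m7a.large" }
EOF
jq -r '"\(.instanceType) (\(.region), account \(.accountId))"' /tmp/identity-doc.json
```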

Access all EC2 metadata fields:

sudo jq '.ds["meta-data"]' /run/cloud-init/instance-data.json

This reveals extensive information including:

  • Network configuration (VPC, subnet, security groups)
  • IAM instance profile details
  • Block device mappings
  • Hostname and IP addresses
  • Maintenance events

Need just the public IP? Or the instance type? Use jq to extract specific fields:

# Get public IPv4 address
sudo jq -r '.ds["meta-data"]["public-ipv4"]' /run/cloud-init/instance-data.json
# Output: 3.93.38.69
# Get instance type
sudo jq -r '.ds["meta-data"]["instance-type"]' /run/cloud-init/instance-data.json
# Output: m7a.large
# Get availability zone
sudo jq -r '.ds["meta-data"]["placement"]["availability-zone"]' /run/cloud-init/instance-data.json
# Output: us-east-1b
# Get VPC ID
sudo jq -r '.ds["meta-data"]["network"]["interfaces"]["macs"][.ds["meta-data"]["mac"]]["vpc-id"]' /run/cloud-init/instance-data.json
# Output: vpc-03460bc2910d2b4e6
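The individual lookups above can also be combined into a single jq pass. A sketch, run here against a minimal stand-in for the cached file (on an instance, point jq at /run/cloud-init/instance-data.json instead):

```shell
# Combine several metadata lookups into one jq pass. The sample file
# stands in for /run/cloud-init/instance-data.json, reusing the
# values from the examples above.
cat > /tmp/instance-data-sample.json <<'EOF'
{"ds": {"meta-data": {"instance-type": "m7a.large",
                      "public-ipv4": "3.93.38.69",
                      "placement": {"availability-zone": "us-east-1b"}}}}
EOF
jq -r '.ds["meta-data"]
       | "\(.["instance-type"]) in \(.placement["availability-zone"]), public IP \(.["public-ipv4"])"' \
  /tmp/instance-data-sample.json
```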

Understanding cloud-init and how to query instance metadata is crucial for:

  1. Troubleshooting: When instances fail to start correctly, checking user-data and metadata helps identify configuration issues
  2. Automation: Scripts can use this cached data instead of making API calls, reducing latency and API throttling
  3. Security: Accessing cached data avoids exposing credentials to the metadata service repeatedly
  4. Performance: Reading from local files is faster than HTTP requests to the metadata service

Cloud-init does more than just run your user-data script. It provides a comprehensive interface to instance metadata that’s invaluable for debugging and automation. Next time you’re troubleshooting an EC2 instance or writing automation scripts, remember these commands - they’ll save you time and API calls.

RunsOn is now handling more than 600k jobs per day

[Image: RunsOn stats dashboard showing 606,674 total runners]

Another milestone reached: RunsOn is now processing over 600,000 jobs per day across all users! 🚀

Less than two months after hitting the 400k mark, we’ve grown by 50% to reach 606,674 daily jobs. This rapid growth demonstrates the increasing demand for reliable, cost-effective GitHub Actions runners. Organizations are discovering that self-hosted runners don’t have to be complex or expensive when using the right solution.

What’s driving this growth?

  1. Cost savings: Teams are saving up to 10x on their CI costs compared to GitHub-hosted runners
  2. Performance: Faster builds with dedicated resources and optimized configurations
  3. Flexibility: Choose your exact instance types and configurations
  4. Simplicity: Deploy in minutes, not days, with our streamlined setup

As we continue to scale, we’re focused on:

  • Improving performance monitoring and insights
  • Faster boot times for runners

Thank you to all our users who trust RunsOn for their critical CI/CD workflows. Here’s to the next milestone! 🎯

Want to join the thousands of developers already using RunsOn? Get started today and see why teams are switching to RunsOn for their GitHub Actions needs.

Why smart developers choose ephemeral runners (and you should too)

Here’s a question that separates senior engineers from the rest: Should your GitHub Actions runners live forever or die after each job?

If you answered “live forever,” you’re probably still debugging why your CI randomly fails on Tuesdays.

Long-lived runners feel intuitive. Spin up a VM, register it with GitHub, let it churn through jobs. No startup overhead, no provisioning delays. It’s the CI equivalent of keeping your laptop running 24/7 because “booting takes too long.”

But here’s what actually happens:

  • Week 1: “Our runners are blazing fast!”
  • Week 3: “Why do tests pass locally but fail in CI?”
  • Week 6: “Let’s restart all runners and see if that fixes it.”
  • Week 12: “We need a dedicated person to babysit our CI infrastructure.”

Sound familiar?

The four horsemen of the long-lived runner apocalypse

  1. State accumulation: Your runner accumulates garbage like a browser with 847 open tabs. Docker layers, npm caches, environment variables, temp files; each job leaves traces. Eventually, Job #847 fails because Job #23 left some Node modules lying around.
  2. Performance decay: Memory fragments. Disk fills up. CPU gets pinned by zombie processes. What started as a c5.large performing like a c5.large slowly becomes a c5.large performing like a t2.micro having an existential crisis.
  3. Security drift: That environment variable someone set for debugging last Tuesday? Still exported. That SSH key generated for a one-off deployment? Still in ~/.ssh. Your “clean” runner is basically a museum of security vulnerabilities.
  4. Heisenbugs: Bugs that only appear after the runner has processed exactly 47 jobs involving TypeScript compilation. Good luck reproducing that locally.

Ephemeral runners are the Marie Kondo approach to CI: if it doesn’t spark joy (i.e., it’s not your current job), thank it and throw it away.

Every job gets:

  • A pristine environment identical to your base image
  • Zero state from previous executions
  • Consistent resource allocation
  • Perfect isolation from other workloads

The math is simple:

  • Long-lived: Pay for 24/7 × N runners × mysterious overhead
  • Ephemeral: Pay for actual job runtime × spot pricing discount
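A back-of-the-envelope version of that math, with every figure assumed purely for illustration (c5.large at ~$0.085/hr, 20,000 job-minutes per month, and the ~75% spot discount mentioned below):

```shell
# Illustrative monthly cost comparison; all figures are assumptions
# (c5.large at ~$0.085/hr, 20,000 job-minutes/month, 75% spot
# discount), not measured RunsOn numbers.
awk 'BEGIN {
  long_lived = 3 * 0.085 * 24 * 30            # 3 runners, always on
  ephemeral  = (20000 / 60) * 0.085 * 0.25    # pay only for job time, on spot
  printf "long-lived: $%.2f/month, ephemeral: $%.2f/month\n", long_lived, ephemeral
}'
```

The exact numbers will vary with your workload; the point is that idle hours dominate the long-lived bill.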

The classic objection: “Ephemeral runners are slow because of boot time!”

This is 2025 thinking with 2015 assumptions. Modern ephemeral runners boot in under 30 seconds. Your Docker build probably takes longer to download base images.

Plus, what’s worse: 30 seconds of predictable startup time, or 3 hours debugging why your integration tests only fail on runner-07 when Mars is in retrograde?

We’ve processed millions of jobs with this approach. Here’s how we make it work:

  • 30-second boot times with optimized AMIs and provisioned network throughput
  • Spot instance compatibility for 75% cost savings
  • One runner per job ensures perfect isolation
  • Zero operational overhead because there’s no state to manage

When your job finishes, the runner gets terminated. No cleanup scripts, no monitoring dashboards, no 3 AM alerts about runner-14 being “unhealthy.”

The architecture your future self will thank you for

Long-lived runners are like global variables in code—they seem convenient until they’re not, and by then you’re too deep to refactor easily.

Ephemeral runners are like pure functions: predictable inputs, predictable outputs, no side effects. The kind of architecture that lets you sleep soundly knowing your CI isn’t a ticking time bomb.

Your security team gets perfect isolation. Your finance team gets usage-based costs. Your developers get consistent, reproducible builds. Everyone wins except the person who has to maintain the old system (which is no longer you).

If you’re still running long-lived CI infrastructure in 2025, you’re optimizing for the wrong metrics. You’re choosing theoretical performance over actual reliability, imaginary cost savings over real operational simplicity.

Smart money is on ephemeral. Smart developers choose tools that scale without accumulating technical debt.

Make the smart choice. Try RunsOn today.