Fly.io for Backend Engineers — Fast Global Deployments Without Kubernetes
By Sanjeev Sharma (@webcoderspeed1)
Introduction
Fly.io abstracts away Kubernetes complexity while delivering true global deployment. Write a Dockerfile, run fly launch, and your app can run in any of 30+ regions within minutes. Auto-scaling, persistent volumes, and managed databases ship out of the box.
- Fly.io vs Heroku vs Railway vs Render in 2026
- fly launch for Zero-Config Deployment
- fly.toml Configuration
- Multi-Region Deployment With Data Locality
- Fly Machines API for Dynamic Compute
- Persistent Volumes for Databases
- Private Networking Between Services
- Fly Postgres Managed Database
- Secrets Management
- Autoscaling Based on Load
- GitHub Actions CI/CD
- Cost Model
- When Fly.io Wins
- Checklist
- Conclusion
Fly.io vs Heroku vs Railway vs Render in 2026
Comparison matrix:
| Feature | Fly.io | Heroku | Railway | Render |
|---|---|---|---|---|
| Global deployment | 30+ regions | Limited US | Limited | Limited |
| Managed databases | Postgres, Redis | Yes (deprecated) | Postgres, Redis, MySQL | Postgres, MySQL |
| Private networking | Yes (Wireguard) | No | Yes | No |
| Persistent volumes | Yes | Ephemeral only | Mounts | Mounts |
| Starting cost | $1.94/month | $5/month minimum | $5/month | $7/month |
| Cold starts | None | ~4 seconds | ~1 second | ~2 seconds |
| CLI experience | Excellent | Deprecated | Good | Good |
Fly.io wins on global distribution and cost. Railway and Render suit smaller teams avoiding multi-region complexity.
fly launch for Zero-Config Deployment
fly launch detects your app runtime and generates configuration:
```shell
$ fly launch
? Would you like to set up a Postgresql database now? (y/N) y
? Would you like to set up an Upstash Redis cache now? (y/N) y
? Fly App Name [my-app-staging]: my-api-prod
? Region for Deployment [sfo] (use 'fly platform regions' to see options): lax
```
Fly generates fly.toml:

```toml
app = "my-api-prod"
primary_region = "lax"

[env]
LOG_LEVEL = "info"

[build]
image = "ghcr.io/my-org/my-api:latest"

[[services]]
internal_port = 3000
protocol = "tcp"

[[services.http_checks]]
enabled = true
grace_period = "5s"
interval = "10s"
timeout = "5s"
path = "/health"

[[services.ports]]
port = 80
handlers = ["http"]

[[services.ports]]
port = 443
handlers = ["tls", "http"]
```
No YAML boilerplate. No Helm charts. Ship it:

```shell
$ fly deploy
```

Your app is live globally in under two minutes.
fly.toml Configuration
Advanced configuration for production:
```toml
app = "my-api-prod"
primary_region = "sfo"
console_command = "/app/bin/rails console"

[build]
image = "ghcr.io/my-org/my-api:latest"
args = ["RAILS_ENV=production"]

[env]
DATABASE_URL = "postgresql://..."
LOG_LEVEL = "info"
RAILS_MASTER_KEY = "xxx" # Avoid: use `fly secrets`

[[services]]
internal_port = 3000
protocol = "tcp"
min_machines_running = 1 # guaranteed only in the primary region
auto_stop_machines = "stop"
auto_start_machines = true

[services.concurrency]
type = "connections"
hard_limit = 100
soft_limit = 80

[[services.http_checks]]
grace_period = "10s"
interval = "15s"
timeout = "5s"
path = "/health"
method = "GET"

[[vm]]
cpu_kind = "performance"
cpus = 2
memory_mb = 1024
```
Secrets stay encrypted in Fly's vault:
```shell
$ fly secrets set DATABASE_PASSWORD=secret123
```
Multi-Region Deployment With Data Locality
Deploy simultaneously to multiple regions and route based on geography:
```shell
$ fly scale count 3 --region sfo
$ fly scale count 2 --region lhr
$ fly scale count 1 --region syd
```
Fly's DNS automatically routes users to the nearest region. Add Postgres replicas for read-local consistency:
```shell
$ fly postgres create --name api-db-primary --region sfo --initial-cluster-size 3
$ fly postgres attach api-db-primary --app my-api-prod
```
Replicas synchronize asynchronously; read-heavy workloads benefit from local Postgres replicas per region.
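Routing reads to the nearest replica can be handled in application code. A minimal sketch, assuming Fly's documented convention that Postgres read replicas answer on port 5433 of the attached `DATABASE_URL` host while the primary answers on 5432, and that `FLY_REGION` and `PRIMARY_REGION` are available as environment variables (`connectionUrlFor` is a hypothetical helper name):

```typescript
// Pick a connection URL per query intent. Assumes Fly Postgres conventions:
// port 5432 -> primary, port 5433 -> nearest read replica.
function connectionUrlFor(kind: 'read' | 'write'): string {
  const base = process.env.DATABASE_URL ?? '';
  const region = process.env.FLY_REGION ?? '';
  const primary = process.env.PRIMARY_REGION ?? '';

  // Writes, and reads in the primary region, go straight to the primary.
  if (kind === 'write' || region === primary || !region) return base;

  // Reads in other regions target the local replica on port 5433.
  const url = new URL(base);
  url.port = '5433';
  return url.toString();
}
```

Pair this with Fly's `fly-replay` response header if a replica receives a write, so the request is replayed in the primary region.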
Fly Machines API for Dynamic Compute
Scale programmatically by issuing API calls from your application:
```typescript
import fetch from 'node-fetch';

const FLY_API_TOKEN = process.env.FLY_API_TOKEN;
const APP_NAME = 'my-api-prod';
const API_BASE = `https://api.machines.dev/v1/apps/${APP_NAME}/machines`;

interface Machine {
  id: string;
}

async function scaleMachines(count: number) {
  // List current machines for the app.
  const response = await fetch(API_BASE, {
    headers: { Authorization: `Bearer ${FLY_API_TOKEN}` },
  });
  const machines = (await response.json()) as Machine[];
  const currentCount = machines.length;

  if (count > currentCount) {
    // Scale up: create machines until we reach the target.
    for (let i = currentCount; i < count; i++) {
      await createMachine();
    }
  } else if (count < currentCount) {
    // Scale down: destroy the surplus machines.
    const toRemove = machines.slice(0, currentCount - count);
    for (const machine of toRemove) {
      await deleteMachine(machine.id);
    }
  }
}

async function createMachine() {
  return fetch(API_BASE, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${FLY_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      config: {
        image: 'ghcr.io/my-org/my-api:latest',
        services: [
          {
            internal_port: 3000,
            protocol: 'tcp',
            ports: [{ port: 3000, handlers: ['http'] }],
          },
        ],
      },
    }),
  });
}

async function deleteMachine(id: string) {
  return fetch(`${API_BASE}/${id}`, {
    method: 'DELETE',
    headers: { Authorization: `Bearer ${FLY_API_TOKEN}` },
  });
}
```
Machines API enables fine-grained autoscaling based on custom metrics (queue depth, memory, request rate).
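The scaling decision itself can stay a pure function that feeds `scaleMachines`. A sketch with assumed numbers (the 50-jobs-per-machine target and the policy shape are illustrative, not Fly defaults):

```typescript
// Hypothetical scaling policy: map queue depth to a desired machine count,
// bounded by a warm minimum and a cost-ceiling maximum.
interface ScalePolicy {
  min: number;            // machines to keep running at all times
  max: number;            // hard cap to bound spend
  jobsPerMachine: number; // target queue share per machine
}

function desiredMachineCount(queueDepth: number, policy: ScalePolicy): number {
  const wanted = Math.ceil(queueDepth / policy.jobsPerMachine);
  return Math.min(policy.max, Math.max(policy.min, wanted));
}
```

Run it on a timer: read queue depth from Redis or your job store, compute the target, then call `scaleMachines(desiredMachineCount(depth, policy))`.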
Persistent Volumes for Databases
Attach persistent storage to machines:
```shell
$ fly volume create pgdata --size 10 --region sfo
$ fly volume create pgdata --size 10 --region lhr
```
Configure mounting in fly.toml:
```toml
[[mounts]]
source = "pgdata"
destination = "/var/lib/postgresql/data"
# size is set when the volume is created (fly volume create --size 10)
```
Volumes survive machine stop/restart, enabling durable databases without managed Postgres.
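A cheap safety net is to verify the mount before accepting traffic: if the volume fails to attach, writes would silently land on the machine's ephemeral root disk. A minimal sketch (the `assertMounted` helper is hypothetical; the path matches the fly.toml above):

```typescript
import * as fs from 'node:fs';

// Startup guard: refuse to boot if the volume from [[mounts]] is absent.
function assertMounted(path: string): void {
  if (!fs.existsSync(path)) {
    throw new Error(`volume not mounted at ${path}; refusing to start`);
  }
}
```

Call `assertMounted('/var/lib/postgresql/data')` before binding the server port so a misconfigured machine fails health checks instead of losing data.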
Private Networking Between Services
Fly's Wireguard mesh connects services securely without public IPs:
```shell
$ fly apps create api-service
$ fly apps create worker-service
$ fly wireguard create
```
Services communicate via .internal addresses:
```typescript
// api-service talks to worker-service on the private network
const response = await fetch('http://worker-service.internal:8080/process', {
  method: 'POST',
  body: JSON.stringify(job),
});
```
No firewall rules. No VPN tunnels. Encrypted by default.
Fly Postgres Managed Database
Launch a managed Postgres cluster:
```shell
$ fly postgres create --name main-db --region sfo --initial-cluster-size 3 --volume-size 50
? Select VM size: shared-cpu-1x (256MB)
? Do you want to enable read replicas? Yes
? In which regions? lhr, syd
```
Fly manages backups, replication, and failover. Access via connection string:
```shell
$ fly postgres connect -a main-db --command "CREATE TABLE users (...)"
$ fly postgres users create main-db admin
```
Automatic SSL certificates for encrypted connections.
Secrets Management
Store sensitive values securely:
```shell
$ fly secrets set \
    DATABASE_PASSWORD=xyz \
    SLACK_TOKEN=abc \
    STRIPE_API_KEY=sk_live_...

$ fly secrets list
$ fly secrets unset DATABASE_PASSWORD
```
Secrets inject into environment at runtime, never logged or exposed in builds.
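Because secrets arrive as plain environment variables, a missing one surfaces only when the code first reads it. A fail-fast check at boot catches this earlier; a sketch using the variable names from the example above (`requireSecrets` is a hypothetical helper):

```typescript
// Validate that every expected secret is present before serving traffic.
function requireSecrets(names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n]);
  if (missing.length > 0) {
    throw new Error(`missing secrets: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]));
}
```

Run `requireSecrets(['DATABASE_PASSWORD', 'SLACK_TOKEN', 'STRIPE_API_KEY'])` at startup so a forgotten `fly secrets set` fails the deploy's health checks instead of a production request.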
Autoscaling Based on Load
Fly scales on load via concurrency limits plus machine auto-start/stop:

```toml
[[services]]
internal_port = 3000
protocol = "tcp"
auto_stop_machines = "stop"
auto_start_machines = true

[services.concurrency]
type = "connections"
hard_limit = 200
soft_limit = 150

[[services.http_checks]]
grace_period = "10s"
interval = "15s"
timeout = "5s"
path = "/health"
```

When a machine crosses its soft limit, Fly's proxy steers new traffic to other machines and starts stopped ones; at the hard limit, excess connections are rejected.
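The proxy's per-machine admission behavior can be sketched as a pure function over the soft and hard limits (a simplified model of the documented semantics, not Fly's actual implementation):

```typescript
// Below soft_limit a machine keeps taking traffic; between soft and hard it
// is deprioritized (the proxy prefers other machines or starts stopped ones);
// at hard_limit new connections are rejected.
type Admission = 'accept' | 'deprioritize' | 'reject';

function admit(current: number, soft: number, hard: number): Admission {
  if (current >= hard) return 'reject';
  if (current >= soft) return 'deprioritize';
  return 'accept';
}
```

This is why soft_limit should sit below the point where latency degrades: it is the signal that triggers spreading load, not an error threshold.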
GitHub Actions CI/CD
Deploy via GitHub Actions:
```yaml
name: Deploy to Fly.io

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```
Every push to main deploys automatically. No separate CI/CD platform.
Cost Model
Fly.io pricing (2026):

- Shared CPU: $1.94/month per machine (256MB RAM)
- Dedicated CPU: $18/month per vCPU
- Bandwidth: $0.02 per GB egress
- Volumes: $0.15 per GB per month
- Postgres: $5–100/month depending on size

A single shared-CPU machine in 3 regions costs ~$6/month; Heroku's $50/month dyno tier costs roughly 8x as much.
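A back-of-envelope calculator using the list prices above (the article's 2026 figures, not an official Fly rate card):

```typescript
// Estimate monthly spend from the per-unit prices listed above.
interface Usage {
  sharedMachines: number; // 256MB shared-CPU machines
  dedicatedVcpus: number;
  egressGb: number;
  volumeGb: number;
}

function monthlyCostUsd(u: Usage): number {
  const cost =
    u.sharedMachines * 1.94 + // shared CPU per machine
    u.dedicatedVcpus * 18 +   // dedicated per vCPU
    u.egressGb * 0.02 +       // egress per GB
    u.volumeGb * 0.15;        // volumes per GB-month
  return Math.round(cost * 100) / 100; // round to cents
}
```

Three shared machines with no extras come to $5.82/month, which is where the ~$6 figure above comes from.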
When Fly.io Wins
Fly.io excels at:
- Global latency-sensitive apps (APIs, real-time)
- Cost-conscious scaling (startup to scale-up)
- Simple deployment workflow (Dockerfile → `fly deploy`)
- Multi-region requirements without Kubernetes
Heroku still wins for teams that want everything managed for them (zero database setup). Railway suits teams that sit in between.
Checklist
- Install the Fly CLI and authenticate
- Run `fly launch` to generate initial configuration
- Test deployment to a staging region first
- Configure persistent volumes if using databases
- Set up secrets with `fly secrets`
- Enable health checks in `fly.toml`
- Configure auto-scaling concurrency limits
- Set up GitHub Actions deployment
- Add monitoring and logging (Sentry, Grafana)
- Document rollback procedures
Conclusion
Fly.io eliminates the Kubernetes-or-Heroku false choice. Global deployment, transparent pricing, and CLI-first experience make it ideal for backend engineers scaling from startup to scale-up without infrastructure overhead.