Supabase in Production — Beyond the Starter Tutorial

Authors
- Sanjeev Sharma (@webcoderspeed1)
Introduction
Supabase markets itself as "Firebase for PostgreSQL." That pitch undersells it. Firebase gives you auth and a database. Supabase gives you auth, a production-grade database, realtime subscriptions, scheduled jobs, and edge compute—all on Postgres, which has a 30-year battle-tested ecosystem.
The challenge: Supabase tutorials show you how to build a prototype, not a production system. This post covers patterns for real applications.
- Supabase Auth with Custom JWT Claims
- Row-Level Security Patterns for Multi-Tenant SaaS
- Edge Functions for Custom Logic
- Realtime Subscriptions with Filters
- pg_cron for Scheduled Jobs
- Connection Pooling with Supavisor
- pgvector for AI Similarity Search
- Managing Multiple Environments
- Migration Strategy with supabase db push
- Self-Hosting Supabase
- Checklist
- Conclusion
Supabase Auth with Custom JWT Claims
Supabase Auth issues JWTs. By default, a token carries standard claims like sub (the user ID) and aud (audience). For production, you need custom claims: organization ID, roles, permissions.
Supabase provides Auth Hooks for this. Register a custom access token hook—a Postgres function that runs every time a token is issued (sign-up, sign-in, refresh)—and mutate the claims before the JWT is signed.
-- This is a PostgreSQL function, not TypeScript.
-- The hook event contains user_id and the default claims.
CREATE OR REPLACE FUNCTION public.custom_access_token_hook(
  event jsonb
) RETURNS jsonb AS $$
DECLARE
  claims jsonb;
  org_id uuid;
BEGIN
  -- Look up the user's organization
  SELECT organization_id INTO org_id
  FROM public.user_metadata
  WHERE user_id = (event->>'user_id')::uuid;

  -- Merge org_id into the token's claims
  claims := event->'claims';
  claims := jsonb_set(claims, '{org_id}', to_jsonb(org_id));
  event := jsonb_set(event, '{claims}', claims);

  RETURN event;
END;
$$ LANGUAGE plpgsql STABLE;
Now every JWT includes org_id. Because the token is signed, your backend and your RLS policies can trust this claim.
import { createClient } from '@supabase/supabase-js';
import * as jwt from 'jsonwebtoken';

const supabase = createClient(url, key);

const { data, error } = await supabase.auth.signInWithPassword({
  email: 'user@example.com',
  password: 'password',
});

// data.session.access_token contains the org_id claim
const decoded = jwt.decode(data.session.access_token);
console.log(decoded.org_id); // Now available
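If you'd rather not pull in jsonwebtoken just to read a claim, note that a JWT payload is plain base64url-encoded JSON. A minimal sketch of reading it (decodeJwtPayload is an illustrative helper, and it does no signature verification—use it only for reading claims client-side, never for trust decisions):

```typescript
// Decode a JWT's payload segment without verifying the signature.
// A JWT is three base64url segments: header.payload.signature.
function decodeJwtPayload(token: string): any {
  const payload = token.split('.')[1];
  const json = Buffer.from(payload, 'base64url').toString('utf8');
  return JSON.parse(json);
}

// Example with a locally built token (not a real Supabase token):
const body = Buffer.from(
  JSON.stringify({ sub: 'user-1', org_id: 'org-123' }),
).toString('base64url');
const token = `header.${body}.signature`;

console.log(decodeJwtPayload(token).org_id); // prints: org-123
```

For anything security-relevant, verify the signature server-side with your project's JWT secret instead.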
Row-Level Security Patterns for Multi-Tenant SaaS
RLS (row-level security) is Postgres's built-in access control. Every query is filtered by policies you define. For SaaS, RLS is how you prevent customer A from seeing customer B's data.
-- Enable RLS on the posts table
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- Policy: users can only see posts from their organization
CREATE POLICY posts_org_isolation
  ON posts FOR SELECT
  USING (
    organization_id = (auth.jwt()->>'org_id')::uuid
  );

-- Policy: users can only insert posts for their org
CREATE POLICY posts_org_insert
  ON posts FOR INSERT
  WITH CHECK (
    organization_id = (auth.jwt()->>'org_id')::uuid
  );
Supabase sets auth.jwt() from the session token automatically. Every SELECT, INSERT, UPDATE, and DELETE is filtered. If a user tries to query another organization's posts, the query simply returns no rows.
This is defense in depth. Your application logic is the first line of defense; RLS is the second. If someone finds a SQL injection or bypasses your API, RLS still protects the data.
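A useful mental model: each policy is just a boolean predicate that Postgres evaluates per row against the caller's JWT. A sketch of what the SELECT policy above enforces (rowVisible is an illustrative function, not a Supabase API):

```typescript
interface JwtClaims {
  sub: string;
  org_id: string;
}

interface PostRow {
  id: string;
  organization_id: string;
}

// Mirrors USING (organization_id = (auth.jwt()->>'org_id')::uuid):
// a row is visible only when its organization matches the token's claim.
function rowVisible(row: PostRow, claims: JwtClaims): boolean {
  return row.organization_id === claims.org_id;
}

const claims: JwtClaims = { sub: 'user-1', org_id: 'org-a' };
const rows: PostRow[] = [
  { id: 'p1', organization_id: 'org-a' },
  { id: 'p2', organization_id: 'org-b' },
];

// Postgres applies the predicate to every candidate row.
const visible = rows.filter((r) => rowVisible(r, claims));
console.log(visible.map((r) => r.id)); // -> ["p1"]
```

The difference in production is that the filtering happens inside the database, so no code path can skip it.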
Edge Functions for Custom Logic
Supabase Edge Functions are TypeScript functions deployed at the edge (Deno runtime). Use them for webhooks, third-party integrations, custom business logic.
// supabase/functions/send-email/index.ts
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

Deno.serve(async (req: Request) => {
  const supabaseClient = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_ANON_KEY')!,
    {
      global: { headers: { Authorization: req.headers.get('Authorization')! } },
    },
  );

  const { userId } = await req.json();

  const { data: user } = await supabaseClient
    .from('users')
    .select('email')
    .eq('id', userId)
    .single();

  // Send email via Resend, SendGrid, etc.
  await fetch('https://api.resend.com/emails', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${Deno.env.get('RESEND_API_KEY')}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      from: 'hello@example.com',
      to: user.email,
      subject: 'Welcome',
      html: '<p>Welcome to our platform</p>',
    }),
  });

  return new Response(JSON.stringify({ success: true }), {
    headers: { 'Content-Type': 'application/json' },
  });
});
Deploy with supabase functions deploy send-email. Edge Functions scale automatically and run close to your users.
Realtime Subscriptions with Filters
Supabase Realtime pushes database changes (inserts, updates, deletes) to subscribed clients. For a chat app, you'd subscribe to new messages. For a dashboard, subscribe to metric updates.
const channel = supabase
  .channel('realtime:public:messages')
  .on(
    'postgres_changes',
    {
      event: 'INSERT',
      schema: 'public',
      table: 'messages',
      filter: `thread_id=eq.${threadId}`,
    },
    (payload) => {
      console.log('New message:', payload.new);
    },
  )
  .subscribe();
The filter clause is crucial. Without it, you'd receive every message inserted globally—a broadcast storm. With it, only messages for this thread trigger your callback.
For production, use Presence to track which clients are currently online:
const channel = supabase.channel('presence', {
  config: { presence: { key: userId } },
});

channel.on('presence', { event: 'sync' }, () => {
  const state = channel.presenceState();
  console.log('Online users:', state);
});

channel.subscribe(async (status) => {
  if (status === 'SUBSCRIBED') {
    await channel.track({
      user_id: userId,
      online_at: new Date().toISOString(),
    });
  }
});
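presenceState() returns a map keyed by presence key, where each key holds an array of tracked payloads—one per open connection. A sketch of flattening that into an online-user list (the PresenceState shape here is simplified to the fields tracked above, and onlineUserIds is an illustrative helper):

```typescript
// Simplified shape of channel.presenceState(): key -> tracked payloads
type PresenceState = Record<
  string,
  Array<{ user_id: string; online_at: string }>
>;

// One user can hold several entries (multiple tabs or devices),
// so dedupe by user_id to get the real online count.
function onlineUserIds(state: PresenceState): string[] {
  const ids = new Set<string>();
  for (const entries of Object.values(state)) {
    for (const entry of entries) ids.add(entry.user_id);
  }
  return [...ids].sort();
}

const state: PresenceState = {
  'user-1': [
    { user_id: 'user-1', online_at: '2024-01-01T00:00:00Z' },
    { user_id: 'user-1', online_at: '2024-01-01T00:01:00Z' }, // second tab
  ],
  'user-2': [{ user_id: 'user-2', online_at: '2024-01-01T00:02:00Z' }],
};

console.log(onlineUserIds(state)); // -> ["user-1", "user-2"]
```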
pg_cron for Scheduled Jobs
PostgreSQL's pg_cron extension runs SQL jobs on a schedule. For daily report generation, weekly cleanups, or hourly aggregations, use pg_cron instead of a separate job queue.
-- Generate daily reports at 2 AM UTC
SELECT cron.schedule(
  'generate-daily-reports',
  '0 2 * * *',
  $$
    INSERT INTO reports (date, summary)
    SELECT
      CURRENT_DATE - 1,  -- the day being summarized
      json_build_object(
        'total_users', COUNT(DISTINCT user_id),
        'total_revenue', SUM(amount)
      )
    FROM transactions
    WHERE created_at::date = CURRENT_DATE - 1
  $$
);
Jobs run in Postgres—no external orchestration. Logs and errors are visible in the Supabase dashboard. For simple scheduled SQL work, pg_cron can replace a Bull queue or a Temporal workflow; reach for those tools only when jobs need retries, fan-out, or non-SQL steps.
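Cron syntax is terse, and a wrong field silently schedules the wrong thing. A quick way to sanity-check a schedule is to split it into its five named fields (parseCron is a hypothetical helper, not part of pg_cron):

```typescript
// A standard cron expression has five fields:
// minute hour day-of-month month day-of-week
interface CronFields {
  minute: string;
  hour: string;
  dayOfMonth: string;
  month: string;
  dayOfWeek: string;
}

function parseCron(expr: string): CronFields {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) {
    throw new Error(`expected 5 cron fields, got ${parts.length}`);
  }
  const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}

// The schedule used above: minute 0, hour 2, every day.
const daily = parseCron('0 2 * * *');
console.log(daily.hour); // "2" — runs at 02:00 UTC every day
```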
Connection Pooling with Supavisor
Postgres connections are expensive. Each connection consumes RAM and CPU. With 10,000 concurrent users, you can't have 10,000 connections.
Supabase ships with Supavisor, a built-in connection pooler. It maintains a pool of physical connections to Postgres and multiplexes client connections across them. Two modes:
- Session mode: a client keeps one connection for its whole session (safer; supports prepared statements and session state)
- Transaction mode: the connection returns to the pool after each transaction (supports far more clients; queries must not rely on session state)
For most applications, transaction mode is fine. Note that supabase-js talks to your database over the PostgREST HTTP API, so it doesn't open Postgres connections itself—pooling matters when you connect directly with a Postgres driver or an ORM. The mode is selected by the port in the pooler connection string:

# Transaction mode (port 6543): serverless functions, many short-lived clients
postgres://postgres.<project-ref>:<password>@<region>.pooler.supabase.com:6543/postgres
# Session mode (port 5432): long-lived servers, prepared statements
postgres://postgres.<project-ref>:<password>@<region>.pooler.supabase.com:5432/postgres
Supavisor scales with your project. You don't hand-tune a pool per client—the pooler adapts to load, and the default pool size can be adjusted in the dashboard if needed.
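Since the only thing that changes between modes is the port, it can help to make that choice explicit in code. A sketch (poolerUrl is an illustrative helper; the host and user formats here are placeholders—copy the real connection string from your project's dashboard):

```typescript
type PoolMode = 'transaction' | 'session';

// Build a Supavisor connection string; the port selects the pooling mode.
// Host/user formats are illustrative—use the ones from your dashboard.
function poolerUrl(
  projectRef: string,
  password: string,
  regionHost: string,
  mode: PoolMode,
): string {
  const port = mode === 'transaction' ? 6543 : 5432;
  return `postgres://postgres.${projectRef}:${password}@${regionHost}:${port}/postgres`;
}

const url = poolerUrl(
  'abcd1234',
  'secret',
  'aws-0-us-east-1.pooler.supabase.com',
  'transaction',
);
console.log(url.includes(':6543/')); // true — transaction mode
```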
pgvector for AI Similarity Search
Searching by semantic meaning (not keywords) requires embeddings. pgvector is a Postgres extension that stores and queries vectors efficiently.
-- Create embeddings table
CREATE TABLE documents (
id uuid PRIMARY KEY,
content text,
embedding vector(1536) -- OpenAI embeddings are 1536-dim
);
-- Create index for fast search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops);
Query by semantic similarity:
const embedding = await generateEmbedding('search query');

const { data } = await supabase.rpc('match_documents', {
  query_embedding: embedding,
  match_threshold: 0.5,
  match_count: 10,
});
Define the function in Postgres:
CREATE OR REPLACE FUNCTION match_documents(
  query_embedding vector,
  match_threshold float,
  match_count int
)
RETURNS TABLE (id uuid, similarity float) AS $$
  SELECT
    id,
    1 - (embedding <=> query_embedding) AS similarity
  FROM documents
  WHERE 1 - (embedding <=> query_embedding) > match_threshold
  ORDER BY embedding <=> query_embedding
  LIMIT match_count;
$$ LANGUAGE sql STABLE;
This powers semantic search for docs, FAQs, and knowledge bases.
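The <=> operator computes cosine distance, and 1 minus that distance is the similarity the function returns. The same math in plain TypeScript—useful for unit tests or for re-ranking a small result set in the application layer:

```typescript
// Cosine similarity = dot(a, b) / (|a| * |b|).
// pgvector's <=> operator returns the cosine *distance*, 1 - similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 — identical direction
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 — orthogonal
```

Similarity ignores vector magnitude and looks only at direction, which is why it works well for comparing embeddings.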
Managing Multiple Environments
Production SaaS has dev, staging, and prod databases. Supabase models this with projects—each project is a separate Postgres instance with its own schema, auth, and storage. Create one project per environment.
The CLI targets one project at a time. Point it at the right environment by linking a project or passing a connection string:

# Link the CLI to the staging project, then push
supabase link --project-ref <staging-project-ref>
supabase db push

# Or push directly to a specific database
supabase db push --db-url "$STAGING_DB_URL"

Schema changes are tested in staging before rolling to production.
Migration Strategy with supabase db push
The Supabase CLI tracks schema changes as SQL migration files (and can generate them for you with supabase db diff). Workflow:
- Create a migration: supabase migration new your_migration, then edit supabase/migrations/xxxx_your_migration.sql
- Apply it locally: supabase db reset replays all migrations against your local database
- Test your application
- Commit the migration file
- Deploy to staging: link the staging project and run supabase db push
- Test in staging
- Deploy to prod: link the prod project and run supabase db push
Migrations live in the supabase/migrations/ folder. Each filename starts with a timestamp, which guarantees they apply in order.
Self-Hosting Supabase
Supabase is open source. You can self-host on Kubernetes or Docker. Benefits: compliance (data stays on your servers), cost control at scale, and independence from Supabase Cloud outages.
Tradeoffs: you manage Postgres, backups, replication, upgrades. For enterprise customers, self-hosting is often required.
# Minimal setup from the docker/ directory of the official supabase repo
cp .env.example .env   # configure your own secrets first
docker compose up -d
# Postgres on localhost:5432; Studio via the API gateway on localhost:8000
Self-hosting is complex but possible. Most startups use Supabase Cloud.
Checklist
- Set up Supabase project and initialize local development
- Configure custom JWT claims with Auth Extensions
- Enable RLS on all tables and write policies
- Create Edge Functions for external integrations
- Test Realtime subscriptions with filters
- Set up pg_cron jobs for scheduled tasks
- Create pgvector indexes for semantic search
- Test migrations across dev/staging/prod
- Monitor connection pool usage in dashboards
- Plan backup and disaster recovery strategy
Conclusion
Supabase handles the infrastructure, but production-grade applications still require thought. RLS policies prevent data leaks. Custom JWT claims enable authorization. Edge Functions scale custom logic. Connection pooling handles load.
Mastering these patterns turns Supabase from a rapid-prototyping tool into a full-featured backend for scalable SaaS. Most startups don't need a microservices architecture—Supabase, properly configured, will carry them a very long way.