Supabase in Production: What Works and What Breaks
Supabase gives you a PostgreSQL database, authentication, storage, and real-time subscriptions with a single project. The developer experience is excellent. The question is: what happens when real users arrive?
After running Supabase in production for over a year, the answer is nuanced. Some parts work flawlessly. Others have sharp edges that documentation does not cover. This is an honest look at both.
What Works Well
Authentication
Supabase Auth is the strongest part of the platform. It handles email/password, magic links, OAuth providers, and JWTs out of the box.
const { data, error } = await supabase.auth.signInWithPassword({
  email: "user@example.com",
  password: "secure-password",
});
Session management, token refresh, and provider linking work reliably. The auth hooks for custom claims are powerful once you understand them.
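A custom access token hook, for example, is just a Postgres function that receives the token event as JSON and returns it with modified claims. This sketch follows the hook signature Supabase documents; the `app_role` claim and its value are made-up examples:

```sql
-- Hypothetical hook adding an app_role claim to every access token.
-- Register it under Authentication > Hooks as the custom access token hook.
CREATE OR REPLACE FUNCTION public.custom_access_token_hook(event jsonb)
RETURNS jsonb
LANGUAGE plpgsql
AS $$
BEGIN
  -- event->'claims' holds the JWT claims about to be issued
  RETURN jsonb_set(event, '{claims,app_role}', '"member"');
END;
$$;
```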
Row Level Security
RLS is the right way to handle authorization in database-heavy applications. Policies run at the database level, so no application bug can bypass them. Well-established RLS policy patterns cover most production authorization needs.
CREATE POLICY "Users can only see their own data"
  ON profiles
  FOR SELECT
  USING (auth.uid() = user_id);
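Note that a policy like the one above has no effect until RLS is actually enabled on the table:

```sql
-- Without this, the policy is defined but never enforced
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
```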
PostgREST API
The auto-generated REST API is surprisingly capable:
const { data } = await supabase
  .from("projects")
  .select("id, name, tasks(id, title, status)")
  .eq("owner_id", userId)
  .order("created_at", { ascending: false })
  .limit(20);
Nested selects, filtering, pagination, and ordering all work through the client library. For most CRUD operations, you never need to write a custom API endpoint.
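For pagination beyond a bare `.limit()`, PostgREST exposes an inclusive `.range(from, to)`. A tiny helper (hypothetical, not part of supabase-js) avoids the off-by-one mistakes the inclusive bound invites:

```typescript
// Maps a zero-based page number to PostgREST's inclusive row range.
function pageToRange(page: number, pageSize: number): [number, number] {
  const from = page * pageSize;
  return [from, from + pageSize - 1]; // .range() bounds are inclusive
}

// Usage (network call, sketched for shape only):
// const [from, to] = pageToRange(2, 20); // rows 40-59
// const { data } = await supabase.from("projects").select("id, name").range(from, to);
```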
What Breaks
Connection Limits
Every Supabase project has a connection limit tied to its compute size. The smallest instances allow around 60 direct connections; larger compute tiers raise the ceiling. With serverless deployments (Vercel, Netlify), every function invocation can open a connection.
Problem:
50 concurrent users × 2 API calls each = 100 connections
+ Realtime subscriptions = 50 more
= 150 connections on a 60-connection plan
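The arithmetic above is worth scripting into a pre-launch sanity check; the workload numbers here simply mirror the scenario:

```typescript
// Rough connection budget: concurrent users x API calls, plus realtime listeners.
// The inputs are illustrative, not measured values.
function connectionsNeeded(users: number, callsPerUser: number, realtime: number): number {
  return users * callsPerUser + realtime;
}

const needed = connectionsNeeded(50, 2, 50); // 150
const freeTierLimit = 60;
console.log(needed > freeTierLimit); // true: this workload exhausts the free tier
```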
Fix: Use connection pooling via Supavisor (enabled by default on pooler URLs) and close connections promptly. Prefer the REST API (supabase.from()) over direct connections when possible.
Realtime at Scale
Supabase Realtime works well for small-scale applications. At hundreds of concurrent listeners on the same table, latency increases and messages can be delayed.
// Works fine with 10-50 listeners
supabase
  .channel("room:123")
  .on("postgres_changes", {
    event: "INSERT",
    schema: "public",
    table: "messages",
    filter: "room_id=eq.123",
  }, handleMessage)
  .subscribe();
Fix: Use Realtime for presence and notifications, not for high-throughput messaging. For chat-like features with hundreds of active users, consider a dedicated WebSocket service.
Edge Functions Cold Starts
Supabase Edge Functions run on Deno Deploy. Cold starts add 200-500ms to the first request after inactivity.
Fix: For latency-sensitive endpoints, use a keep-alive cron job or accept the cold start cost. For most SaaS applications, the occasional 500ms first request is acceptable.
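One way to implement the keep-alive is a tiny ping helper invoked on a schedule (a cron job, GitHub Actions, etc.). The function URL is a placeholder, and the injectable `fetchImpl` exists only so the helper can be exercised without a network:

```typescript
// Pings an Edge Function so its isolate stays warm.
// fetchImpl defaults to the global fetch; injectable for testing.
async function keepWarm(
  url: string,
  fetchImpl: (input: string) => Promise<{ status: number }> = fetch
): Promise<boolean> {
  try {
    const res = await fetchImpl(url);
    return res.status < 500; // any non-server-error response means the isolate is up
  } catch {
    return false; // network failure: report cold/unreachable
  }
}

// Example (hypothetical function URL):
// await keepWarm("https://<project-ref>.functions.supabase.co/health");
```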
Full-Text Search
PostgreSQL full-text search works, but the Supabase client library makes it awkward:
const { data } = await supabase
  .from("articles")
  .select()
  .textSearch("title", "react performance", {
    type: "websearch",
    config: "english",
  });
Complex queries with ranking, highlighting, and typo tolerance require raw SQL or a dedicated search engine.
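One escape hatch is to move the query into a Postgres function and call it through `supabase.rpc()`. This sketch (the `articles` table and `body` column are assumed names) adds ranking with `ts_rank` and highlighting with `ts_headline`:

```sql
-- Hypothetical search with ranking and highlighting, called from the client as:
--   supabase.rpc("search_articles", { q: "react performance" })
CREATE OR REPLACE FUNCTION search_articles(q text)
RETURNS TABLE (id bigint, title text, headline text, rank real)
LANGUAGE sql STABLE
AS $$
  SELECT a.id,
         a.title,
         ts_headline('english', a.body, websearch_to_tsquery('english', q)),
         ts_rank(to_tsvector('english', a.body), websearch_to_tsquery('english', q))
  FROM articles a
  WHERE to_tsvector('english', a.body) @@ websearch_to_tsquery('english', q)
  ORDER BY 4 DESC
  LIMIT 20;
$$;
```

In practice you would also add a generated `tsvector` column with a GIN index rather than computing `to_tsvector` per row at query time.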
Production Checklist
Before Launch
- Enable RLS on every table
- Set up database backups (automatic on Pro)
- Configure custom SMTP for auth emails
- Add rate limiting to Edge Functions
- Set up monitoring with pg_stat_statements
Monitoring
-- Slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
Security
- Use service role keys only on the server, never in client code
- Enable email confirmations in auth settings
- Add CAPTCHA to sign-up flows
- Audit RLS policies quarterly
Common Mistakes
Mistake 1: Using the service role key in the frontend. This bypasses all RLS policies. Use the anon key in the browser and the service role key only on the server.
Mistake 2: Not enabling RLS. Every table starts with RLS disabled. Forgetting to enable it means any client holding your public anon key can read every row the API exposes, not just authenticated users.
Mistake 3: Direct connections from serverless. Use the pooler URL (pooler.supabase.com) from serverless environments. Direct connections exhaust the limit quickly.
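In practice that means pointing `DATABASE_URL` at the pooler host. The project ref, region, and password below are placeholders:

```shell
# Direct connection: fine for long-lived servers, avoid in serverless:
#   postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres

# Pooled connection via Supavisor (transaction mode, port 6543):
DATABASE_URL="postgresql://postgres.<project-ref>:<password>@<region>.pooler.supabase.com:6543/postgres"
```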
Mistake 4: Treating Supabase as a full backend. Supabase is a database and auth layer. Complex business logic still needs a backend service.
Takeaways
Supabase is a strong choice for applications where PostgreSQL is the right database and you want auth, storage, and APIs without building them from scratch. Know its limits — connection pooling, realtime scaling, and cold starts — and design around them. For most SaaS applications under 10,000 users, Supabase handles the infrastructure so you can focus on the product.