Use shared tables with row-level security (RLS) for 5,000+ tenants or seed-stage products; the whole platform runs on one database at $15-$200/month total. Use database-per-tenant for regulated industries (fintech, healthcare) with fewer than 100 tenants. Use schema separation for 50-5,000 tenants needing moderate isolation. This decision shapes your cost structure, compliance story, and deployment pipeline for years.
You're building a SaaS product. Multiple customers will use it. Each customer expects their data to stay private, their configurations to stay separate, and their experience to feel like their own platform. The question isn't whether you need multi-tenancy. It's which model to pick.
This decision shapes your database schema, your deployment pipeline, your compliance story, and your cost structure for years. Get it wrong, and you'll spend six months migrating to a different model while your roadmap stalls.
There are three models worth considering. Each trades off cost, isolation, and operational complexity in different ways.
The three multi-tenant architecture models
1. Database-per-tenant
Each tenant gets their own database. The application layer routes queries to the correct database based on a tenant identifier, typically resolved from the subdomain, API key, or JWT claim.
This is the strongest isolation model. Tenant A's data lives in a separate database from Tenant B's data. There's no way for a query bug to leak rows across tenants, because the rows exist in different databases on different connections.
Advantages:
- Compliance-friendly. Auditors love hearing "each customer has their own database." SOC 2, HIPAA, and data residency requirements become straightforward conversations.
- Per-tenant backup and restore. When a tenant asks you to roll back to yesterday's state, you restore one database. No surgical extraction from shared tables.
- No noisy-neighbor risk. A tenant running expensive analytics queries can't degrade performance for other tenants.
- Per-tenant scaling. You can put high-value tenants on larger database instances.
Disadvantages:
- Expensive. Each database has a baseline cost: compute, storage, backups, monitoring. At 10 tenants, this is manageable. At 500, it's a line item that makes your CFO uncomfortable.
- Migration hell. Schema changes need to run across hundreds of databases. You need tooling to orchestrate migrations, track which databases are up to date, and handle failures partway through a rollout.
- Connection pooling gets complicated. Your application server needs connection pools to hundreds of databases. Connection limits become a constraint before CPU or memory does.
- Cross-tenant queries are painful. Aggregate reporting, platform-wide analytics, or admin dashboards that show data across tenants require federation queries or a separate analytics pipeline.
Best for: Fintech, healthcare, and enterprise SaaS with strict data residency requirements. If your contract says "customer data must reside in a specific AWS region" or "data must be deletable within 24 hours of contract termination," database-per-tenant makes both trivially achievable.
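The migration tooling mentioned above reduces, at its core, to a loop that records per-database outcomes instead of aborting on the first failure, so a rollout is resumable. A minimal sketch; migrateAllTenants and the injected migrate runner are illustrative names, not any specific tool's API:

```typescript
interface MigrationResult {
  succeeded: string[];
  failed: { tenantId: string; error: string }[];
}

// Run one migration across every tenant database. A failure on one tenant
// is recorded and the rollout continues, so you can retry only the failed
// set after fixing the root cause.
async function migrateAllTenants(
  tenantIds: string[],
  migrate: (tenantId: string) => Promise<void>,
): Promise<MigrationResult> {
  const result: MigrationResult = { succeeded: [], failed: [] };
  for (const tenantId of tenantIds) {
    try {
      await migrate(tenantId);
      result.succeeded.push(tenantId);
    } catch (err) {
      result.failed.push({ tenantId, error: String(err) });
    }
  }
  return result;
}
```

Production tooling adds versions-per-database tracking and concurrency limits on top of this loop, but the record-and-continue shape is the important part.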
2. Shared database, schema separation
One database, but each tenant gets their own schema (namespace). In PostgreSQL, this means each tenant has a separate set of tables under their own schema name: tenant_abc.users, tenant_xyz.users. The application sets the search_path on each connection to route queries to the correct schema.
This is the middle ground. You get better isolation than shared tables, at lower cost than separate databases.
Advantages:
- Cheaper than database-per-tenant. One database instance, one connection pool, one monitoring stack.
- Decent isolation. Schemas provide a namespace boundary. A misconfigured query in one schema can't access another schema's tables (assuming you set search_path correctly).
- Per-tenant backup is possible through pg_dump --schema.
- Migrations are easier than database-per-tenant. You run the migration once per schema, but all schemas live in the same database, so tooling is simpler.
Disadvantages:
- Schema drift. When migrations fail on some schemas but succeed on others, you end up with tenants running different versions of your schema. Debugging this is miserable.
- PostgreSQL doesn't handle thousands of schemas well. Past 5,000-10,000 schemas, you'll hit performance degradation in pg_catalog lookups, slower pg_dump times, and autovacuum contention.
- Same noisy-neighbor risk as a shared database. One tenant's expensive query still competes for the same CPU and I/O.
- Tooling support is inconsistent. ORMs and migration frameworks have varying levels of support for schema-per-tenant patterns.
Best for: Mid-market SaaS with 50-5,000 tenants where you need better isolation than row-level security but can't justify the cost of separate databases.
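The search_path routing at the heart of this model can be sketched as a small helper. The tenant_<id> naming convention is an assumption; the regex guard matters because the SET statement can't take bind parameters, so the schema name is interpolated into the SQL:

```typescript
// Build the statement that routes a connection to one tenant's schema.
// Assumes schemas are named tenant_<id>. The guard prevents SQL injection
// through the schema name; alternatively, use a parameterized
// set_config('search_path', ..., false) call.
function searchPathSql(tenantId: string): string {
  if (!/^[a-z0-9_]+$/.test(tenantId)) {
    throw new Error(`invalid tenant id: ${tenantId}`);
  }
  // Keep public in the path for extensions and shared reference tables.
  return `SET search_path TO tenant_${tenantId}, public`;
}
```

Run this on every connection checkout, not once per pool, or a recycled connection will carry the previous tenant's search_path.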
3. Shared everything with row-level security
All tenants share the same tables. A tenant_id column on each table identifies which tenant owns each row. Row-level security (RLS) policies in PostgreSQL enforce that queries can only see rows belonging to the current tenant.
This is the most common model for B2B SaaS products that aren't selling to regulated enterprises.
Advantages:
- Cheapest infrastructure cost. One database, one set of tables, one connection pool. Adding a tenant costs zero additional infrastructure.
- Simplest migrations. One schema, one migration run. You're not orchestrating anything across multiple databases or schemas.
- One codebase path. No conditional logic for "which database am I talking to" or "which schema should I use." The tenant_id column is part of the schema, and RLS handles the rest.
- Cross-tenant analytics are trivial. Admin dashboards and platform-wide reporting query the same tables with elevated permissions.
Disadvantages:
- Query bugs can leak data. If a developer writes a query that bypasses RLS (using a superuser connection, or forgetting to set the session variable), tenant data leaks. This is a code-level risk, not an infrastructure-level guarantee.
- Compliance conversations are harder. "All customer data lives in the same tables, separated by a column" is a tougher sell to enterprise security teams than "each customer has their own database."
- Noisy-neighbor risk is highest here. One tenant importing 10 million rows contends for the same tables, CPU, and I/O that every other tenant depends on.
- Per-tenant backup and restore requires surgical extraction. You can't restore a single tenant without writing custom tooling.
Best for: B2B SaaS below the enterprise tier. Products with hundreds or thousands of tenants where infrastructure cost matters more than compliance certifications.
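In code, this model means scoping every transaction to a tenant before queries run. A sketch, assuming a policy that reads current_setting('app.current_tenant') and a pg-style client; withTenant and the setting name are conventions used here, not built-ins:

```typescript
// Assumed policy (illustrative), applied to every tenant-owned table:
//   ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
//   CREATE POLICY tenant_isolation ON orders
//     USING (tenant_id = current_setting('app.current_tenant'));

interface QueryableClient {
  query(sql: string, params?: unknown[]): Promise<unknown>;
}

// Scope one transaction to one tenant. set_config with is_local = true
// means the setting resets automatically at COMMIT/ROLLBACK, so a recycled
// pooled connection can't leak the previous tenant's context.
async function withTenant<T>(
  client: QueryableClient,
  tenantId: string,
  fn: (client: QueryableClient) => Promise<T>,
): Promise<T> {
  await client.query("BEGIN");
  try {
    await client.query(
      "SELECT set_config('app.current_tenant', $1, true)",
      [tenantId],
    );
    const result = await fn(client);
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```

With a policy shaped like the one above, a query issued outside the helper tends to fail loudly (current_setting errors when the variable is unset) rather than silently returning all tenants' rows, which is the failure mode you want.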
Comparison table
| Factor | Database-per-tenant | Schema separation | Shared + RLS |
|---|---|---|---|
| Cost per tenant | High (dedicated compute + storage) | Medium (shared compute, separate schemas) | Low (one table, one row) |
| Data isolation | Strongest (separate databases) | Moderate (schema boundaries) | Weakest (column + policy) |
| Query complexity | Low per tenant, high cross-tenant | Low per tenant, moderate cross-tenant | Low (same tables, RLS handles it) |
| Migration difficulty | Hard (N databases to migrate) | Moderate (N schemas, one database) | Easy (one schema, one migration) |
| Compliance readiness | Excellent (auditors love it) | Good (defensible boundary) | Adequate (requires RLS audit trail) |
| Noisy-neighbor risk | None | Present | Highest |
| Tenant onboarding cost | Provisioning required | Schema creation + migration | Insert a row |
How we built DropTaxi on shared-database multi-tenancy
When we built DropTaxi, a multi-tenant taxi booking SaaS for Indian operators, we chose the shared database model with tenant_id columns.
The reasoning was straightforward. Taxi operators are small businesses. They don't have compliance requirements that demand database-level isolation. The tenant count needed to scale from 5 to 500+ without per-tenant infrastructure costs. And onboarding had to be instant: sign up, configure branding, point a domain, go live. No deployment, no provisioning, no waiting.
Here's what the architecture looks like in practice:
Zero-deploy tenant onboarding. Adding a new tenant means inserting a row into the tenants table with their brand name, colors, logo URL, domain, fare rates, and Telegram bot token. No CI pipeline runs. No container restarts. The next request to that domain resolves the new tenant and renders their branded site.
Branded subdomains via middleware. Every incoming HTTP request hits a Hono middleware layer that reads the Host header. The middleware queries the database (via Drizzle ORM on Turso) to find the tenant matching that domain. If it finds a match, the tenant's full configuration loads into the request context. If it doesn't, the request gets a 404. The Astro SSR layer then renders pages using the tenant's branding, so visitors see a standalone taxi booking site rather than a generic platform.
Single deployment for all tenants. One Fly.io machine runs the entire platform. One database. One codebase. The only per-tenant cost is DNS configuration and the rows in the database that store their settings.
This model works because the product doesn't serve regulated industries, the data sensitivity is low (booking details, not financial records), and the primary scaling concern is tenant count rather than per-tenant data volume.
Practical patterns for multi-tenant systems
Regardless of which model you choose, these patterns show up in production multi-tenant systems.
Tenant resolution middleware
Tenant resolution should happen once, at the edge of your request pipeline, and the resolved tenant should propagate through the entire request lifecycle. Common resolution strategies:
- Subdomain: acme.yourapp.com resolves to the acme tenant. Parse from the Host header.
- Custom domain: app.acme.com maps to a tenant via a domain lookup table.
- Path prefix: yourapp.com/acme/dashboard extracts the tenant from the URL. Less common in production, but useful during development.
- JWT/API key: For API-first products, the tenant identifier lives in the authentication token. The middleware validates the token and extracts the tenant claim.
Store the resolved tenant in request-scoped context (Hono's c.set(), Express's req.tenant, or a thread-local variable in Go). No downstream code should need to re-resolve the tenant.
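The host-based strategies above share one resolution step. A minimal sketch with an in-memory lookup standing in for the domain table query; Tenant, resolveTenant, and domainMap are illustrative names:

```typescript
interface Tenant {
  id: string;
  name: string;
}

// Stand-in for the domain lookup table; production code would query the
// database (and likely cache the result).
const domainMap = new Map<string, Tenant>([
  ["acme.yourapp.com", { id: "t_acme", name: "Acme" }],
  ["app.globex.com", { id: "t_globex", name: "Globex" }],
]);

// Resolve once at the edge; downstream code reads the result from
// request-scoped context instead of calling this again.
function resolveTenant(hostHeader: string): Tenant | null {
  // Normalize: strip any port and lowercase ("Acme.yourapp.com:443").
  const host = hostHeader.split(":")[0].toLowerCase();
  return domainMap.get(host) ?? null;
}
```

A null result should short-circuit to a 404 before any tenant-scoped code runs.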
Connection pooling
In the database-per-tenant model, connection pooling becomes the first bottleneck you'll hit. Each tenant database needs its own pool, and your application server has a finite number of connections it can hold open.
Solutions that work in production:
- PgBouncer per database instance with transaction-level pooling. This multiplexes many application connections over a smaller number of database connections.
- Lazy pool initialization. Don't create connection pools for tenants that haven't received a request in the last hour. Spin up pools on demand and evict idle pools with a TTL.
- Managed pooling services like Supabase's connection pooler or Neon's serverless driver, which handle pool management outside your application process.
For shared-database models, a single connection pool works fine. The tenant context gets set at the session level (SET app.current_tenant for RLS, SET search_path for schema separation) on each connection checkout.
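The lazy-initialization pattern can be sketched as a small manager, with createPool standing in for whatever your driver provides (pg.Pool, a Drizzle client factory, etc.); the class and its names are illustrative:

```typescript
interface PoolLike {
  end(): void;
}

// Create per-tenant pools on first use and evict pools that have been
// idle past a TTL, so hundreds of dormant tenants don't hold connections.
class TenantPoolManager<P extends PoolLike> {
  private pools = new Map<string, { pool: P; lastUsed: number }>();

  constructor(
    private createPool: (tenantId: string) => P,
    private ttlMs: number = 60 * 60 * 1000, // default: 1 hour idle
  ) {}

  get(tenantId: string): P {
    const entry = this.pools.get(tenantId);
    if (entry) {
      entry.lastUsed = Date.now();
      return entry.pool;
    }
    const pool = this.createPool(tenantId);
    this.pools.set(tenantId, { pool, lastUsed: Date.now() });
    return pool;
  }

  // Call periodically (e.g. from setInterval) to close idle pools.
  evictIdle(now: number = Date.now()): number {
    let evicted = 0;
    for (const [id, entry] of this.pools) {
      if (now - entry.lastUsed > this.ttlMs) {
        entry.pool.end();
        this.pools.delete(id);
        evicted++;
      }
    }
    return evicted;
  }

  get size(): number {
    return this.pools.size;
  }
}
```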
Tenant-scoped caching
Your cache keys need tenant prefixing. If you cache a "dashboard data" key without scoping it to a tenant, you'll serve Tenant A's dashboard data to Tenant B. This sounds obvious, but it's the most common multi-tenancy bug in production.
Use a key format like tenant:{tenant_id}:resource:{resource_type}:{resource_id}. If you're using Redis, consider separate Redis databases per tenant (0-15 are available by default) for smaller deployments, or key prefixing for larger ones.
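A sketch of that key builder; the guard exists because an empty tenant ID silently produces a key shared across tenants, which is exactly the bug described above:

```typescript
// Build a tenant-scoped cache key. Refuses empty IDs so missing tenant
// context fails loudly instead of producing a cross-tenant key.
function tenantCacheKey(
  tenantId: string,
  resourceType: string,
  resourceId: string,
): string {
  if (!tenantId) {
    throw new Error("tenantCacheKey called without a tenant ID");
  }
  return `tenant:${tenantId}:resource:${resourceType}:${resourceId}`;
}
```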
Background job isolation
Background jobs (email sends, report generation, data imports) need tenant context propagated from the enqueue site to the worker. When you enqueue a job, include the tenant_id in the job payload. The worker should set up the same tenant context that your HTTP middleware provides before processing the job.
For noisy-neighbor protection in job queues, use separate queues or queue priorities per tenant. A tenant importing 100,000 records shouldn't block another tenant's welcome emails. BullMQ, Sidekiq, and Celery all support named queues that let you route high-volume tenants to dedicated workers.
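The propagation step can be sketched with an in-memory queue standing in for BullMQ/Sidekiq/Celery; the Job shape and function names are illustrative:

```typescript
interface Job<T> {
  tenantId: string; // tenant context always travels in the payload
  name: string;
  data: T;
}

const queue: Job<unknown>[] = [];

// Enqueue side: capture the tenant from the current request context.
function enqueue<T>(tenantId: string, name: string, data: T): void {
  queue.push({ tenantId, name, data });
}

// Worker side: re-establish the same tenant context the HTTP middleware
// would have set (e.g. the RLS session variable) before the handler runs.
function processNext(
  handler: (job: Job<unknown>) => void,
  setTenantContext: (tenantId: string) => void,
): boolean {
  const job = queue.shift();
  if (!job) return false;
  setTenantContext(job.tenantId);
  handler(job);
  return true;
}
```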
Multi-portal as a multi-tenancy variant
Multi-tenancy doesn't only mean "same app, different customers." It can also mean "same platform, different user roles with separate portals." When we built ZestAMC, a role-based multi-portal platform, the architecture shared the same database and codebase, but exposed different interfaces to different user types: admins, managers, and field agents. Each portal had its own routing, permissions, and UI, but all drew from the same data layer with row-level scoping.
This is a useful pattern when your "tenants" aren't separate organizations but separate roles within one organization. The multi-tenancy primitives (scoped data access, per-role middleware, context propagation) remain the same.
Decision framework: how to pick your model
The right model depends on three variables: compliance requirements, expected tenant count, and infrastructure budget.
Start with compliance. If your customers are in regulated industries (finance, healthcare, government), or if your contracts include data residency clauses, database-per-tenant is the safest choice. The cost premium is a line item in enterprise contracts that your customers expect to pay for.
Factor in tenant count. If you expect fewer than 100 tenants and each tenant generates meaningful revenue, database-per-tenant is viable. Between 100 and 5,000, schema separation works if you're on PostgreSQL and can invest in migration tooling. Above 5,000, shared tables with RLS is the pragmatic choice. The infrastructure economics of the other models fall apart at high tenant counts.
Check your budget. If you're pre-revenue or seed-stage, shared tables with RLS lets you ship faster and spend less. You can migrate to a stronger isolation model later when you have enterprise customers asking for it (and paying for it). Most SaaS products never need to make that migration, because most SaaS products sell to SMBs who care about features, not database isolation models.
Consider the hybrid approach. Some products run shared tables for their standard tier and database-per-tenant for enterprise customers. This is more operational complexity, but it lets you serve both markets without forcing a single model. Stripe and Notion both use variations of this pattern.
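For concreteness, the framework above can be encoded as a function; the thresholds are this article's rough guidelines, not hard rules, and the names are illustrative:

```typescript
type TenantModel = "database-per-tenant" | "schema-separation" | "shared-rls";

// Compliance wins first, then budget, then tenant count.
function pickModel(opts: {
  regulated: boolean;
  expectedTenants: number;
  tightBudget: boolean;
}): TenantModel {
  if (opts.regulated && opts.expectedTenants < 100) {
    return "database-per-tenant";
  }
  if (opts.tightBudget || opts.expectedTenants > 5000) {
    return "shared-rls";
  }
  if (opts.expectedTenants >= 50) {
    return "schema-separation";
  }
  // Small, unregulated, funded: shared tables still ship fastest.
  return "shared-rls";
}
```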
Common mistakes to avoid
- Choosing database-per-tenant for a product that will have thousands of tenants. The operational cost of managing thousands of databases outweighs the isolation benefits. If your tenants are SMBs paying $50/month, you can't afford dedicated infrastructure per tenant.
- Forgetting to scope your test suite. Your integration tests should run with tenant context. If your tests pass without setting a tenant, they're testing a code path your production users won't hit.
- Not testing RLS policies with adversarial queries. Write tests that attempt to access Tenant B's data while authenticated as Tenant A. Run them in CI. A passing test suite without cross-tenant access tests gives false confidence.
- Building tenant management as an afterthought. Tenant provisioning, configuration, and deprovisioning are first-class product features. Build the admin tooling alongside the product, not after launch.
- Skipping tenant-aware logging. Every log line should include the tenant identifier. When you're debugging a production issue at 2 AM, "something broke" is useless. "Tenant acme_corp hit a foreign key constraint on the orders table" is actionable.
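The adversarial-test idea above, sketched against an in-memory stand-in for an RLS-protected table; in CI you would run the same assertions against a real Postgres with your actual policies:

```typescript
// Fake table that mimics an RLS policy: reads and writes only succeed for
// rows owned by the current tenant.
class FakeRlsTable {
  private rows: { tenantId: string; id: string }[] = [];
  private currentTenant: string | null = null;

  setTenant(tenantId: string): void {
    this.currentTenant = tenantId;
  }

  insert(row: { tenantId: string; id: string }): void {
    if (row.tenantId !== this.currentTenant) {
      throw new Error("row violates row-level security policy");
    }
    this.rows.push(row);
  }

  select(): { tenantId: string; id: string }[] {
    return this.rows.filter((r) => r.tenantId === this.currentTenant);
  }
}

// The adversarial assertion: authenticated as tenant B, tenant A's rows
// must be neither readable nor writable.
function assertNoCrossTenantAccess(table: FakeRlsTable): void {
  table.setTenant("tenant_a");
  table.insert({ tenantId: "tenant_a", id: "order_1" });

  table.setTenant("tenant_b");
  if (table.select().length !== 0) {
    throw new Error("tenant_b can read tenant_a's rows");
  }
  let blocked = false;
  try {
    table.insert({ tenantId: "tenant_a", id: "order_2" });
  } catch {
    blocked = true;
  }
  if (!blocked) {
    throw new Error("tenant_b can write rows owned by tenant_a");
  }
}
```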
The short version
Pick database-per-tenant if compliance drives your architecture. Pick schema separation if you need moderate isolation at moderate scale. Pick shared tables with RLS if you're optimizing for speed and cost. Build tenant resolution middleware on day one. Scope your caches, your logs, your background jobs, and your tests to tenant context. And don't over-engineer the isolation model for your current stage; you can tighten isolation when your customers demand it and your revenue supports it.
Frequently asked questions
What is the best multi-tenant database architecture for SaaS?
It depends on three factors. Use database-per-tenant for regulated industries (fintech, healthcare) with fewer than 100 tenants. Use schema separation for 50-5,000 tenants needing moderate isolation. Use shared tables with row-level security for 5,000+ tenants or seed-stage products optimizing for speed and cost.
What is row-level security in multi-tenant SaaS?
Row-level security (RLS) uses PostgreSQL policies to restrict each tenant's queries to their own rows. A tenant_id column on every table identifies ownership. RLS is the cheapest model (one database, zero per-tenant infrastructure cost) and handles 5,000+ tenants. The risk: a misconfigured query bypassing RLS can leak data across tenants.
How much does database-per-tenant cost vs shared database?
Database-per-tenant adds $15-$100+ per tenant per month in compute, storage, and backups. At 500 tenants, that is $7,500-$50,000/month in database costs alone. Shared tables with RLS runs one database at $15-$200/month total. Schema separation falls in between at one database instance but with per-schema migration overhead.
How do you handle tenant resolution in multi-tenant apps?
Resolve the tenant once at the edge of your request pipeline using subdomain parsing, custom domain lookup, path prefix, or JWT claims. Store the resolved tenant in request-scoped context (Hono c.set(), Express req.tenant). No downstream code should re-resolve the tenant. This pattern works across all three isolation models.
Can I mix multi-tenant isolation models for different customer tiers?
Yes. Run shared tables with RLS for your standard tier and database-per-tenant for enterprise customers who require compliance certifications. Stripe and Notion use variations of this hybrid approach. It adds operational complexity but lets you serve SMBs at low cost while meeting enterprise data isolation requirements.
Related reading
When to migrate from a monolith to microservices (and when not to)
Most startups adopt microservices too early. Most enterprises wait too long. Here's how to know when your monolith has outgrown itself, and how to migrate without a rewrite.
Supabase vs Firebase vs custom backend: which one for your startup
Supabase gives you Postgres for free until 500MB. Firebase scales to millions but locks you into Google's ecosystem. A custom backend costs $3,000-$8,000 upfront but gives you full control.
Serverless vs containers: which architecture fits your SaaS?
Serverless costs $0 at launch but gets expensive at scale. Containers cost more upfront but stay predictable. Here's how to pick the right architecture for your SaaS product.