
The real reason most SaaS apps run on shared databases isn't architecture. It's operational overhead.
Multi-tenancy comes down to one tradeoff: how much isolation you insist on versus how much operational overhead you can absorb.
On one end you tolerate zero data leakage and zero noisy-neighbor risk. Every tenant gets their own database, their own cache, their own everything. That's the cleanest answer for security questionnaires, regulated industries, and any customer who doesn't want their CPU stolen by the loudest neighbor on the box.
On the other end you put everything in one shared database with a tenant ID column on every table. Economies of scale win. Operational cost drops. You eat the noisy-neighbor risk and you trust your code to never leak a row.
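To picture the shared-schema end concretely, here's a minimal sketch (sqlite3 standing in for the shared database; the table and tenant names are made up): every query has to remember the tenant filter, and one forgotten WHERE clause is a cross-tenant leak.

```python
import sqlite3

# One shared database: every table carries a tenant_id column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, tenant_id TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices (tenant_id, amount) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 75.5), ("globex", 9001.0)],
)

def invoices_for(tenant_id: str):
    # Correct: every query must scope by tenant_id, every single time.
    return db.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

def invoices_buggy():
    # The failure mode: one missing WHERE clause returns every tenant's rows.
    return db.execute("SELECT id, amount FROM invoices").fetchall()

print(invoices_for("acme"))  # acme's two invoices
print(invoices_buggy())      # all three rows, including globex's -- the leak
```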
Most SaaS apps pick the second option. Not because it's architecturally correct, but because the first option is operationally maddening at any real scale.
Think about what you actually sign up for with per-tenant infrastructure:
- Provisioning a new database every time a customer signs up
- Running every schema migration across N tenants instead of once
- Upgrading database versions, cache layers, and storage engines N times
- Backing up and restoring N independent systems
- Troubleshooting issues that hit one tenant but not the others
- Doing all of this in a way that still scales to thousands of tenants
That's the wall. It's not a database problem; it's an operational-layer problem. And it's why most teams retreat to shared schemas even when they know it's the weaker isolation model.
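To make just the migration bullet concrete, here's roughly what the naive version looks like, assuming one PostgreSQL database per tenant and a hand-maintained registry of DSNs (all names are illustrative, and psycopg2 is just one possible driver):

```python
import psycopg2

# In a per-tenant world you maintain a registry like this yourself (illustrative DSNs).
TENANT_DSNS = {
    "acme":   "postgresql://app@db-acme/acme",
    "globex": "postgresql://app@db-globex/globex",
    # ... thousands more
}

MIGRATION = "ALTER TABLE invoices ADD COLUMN currency TEXT DEFAULT 'USD'"

def migrate_all() -> None:
    failures = []
    for tenant, dsn in TENANT_DSNS.items():
        try:
            conn = psycopg2.connect(dsn)
            with conn, conn.cursor() as cur:  # commits on success
                cur.execute(MIGRATION)        # same DDL, run N times
            conn.close()
        except Exception as exc:
            failures.append((tenant, exc))
    # You now own retries, drift detection, and rollback for everything in `failures`.
    print(f"migrated {len(TENANT_DSNS) - len(failures)} tenants, {len(failures)} failed")

if __name__ == "__main__":
    migrate_all()
```

Every schema change means rerunning that loop, and every partial failure leaves the fleet split across schema versions.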
We built TenantsDB to be that operational layer. One proxy across PostgreSQL, MySQL, MongoDB, and Redis. The model is workspace -> blueprint -> tenant. You design the schema in a workspace (a regular database you connect to with any client). The schema is versioned as a blueprint. Each tenant is a deployed instance of that blueprint: a real, isolated database. Add a column in the workspace, deploy it to every tenant with one command. Migrations stop being scripts and start being deploys.
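One way to picture the model (these are hypothetical types, not the TenantsDB API, just the shape of the idea): a blueprint is a versioned schema, each tenant records which version it's on, and a migration becomes "bring everyone up to the current version" rather than N hand-run scripts.

```python
from dataclasses import dataclass

# Hypothetical data model to picture workspace -> blueprint -> tenant.
# Not the TenantsDB API -- just the shape of the idea.

@dataclass
class Blueprint:
    version: int      # schema version captured from the workspace
    ddl: list[str]    # the statements that define that version

@dataclass
class Tenant:
    name: str
    deployed_version: int  # which blueprint version this tenant's database is on

def pending_deploys(blueprint: Blueprint, tenants: list[Tenant]) -> list[Tenant]:
    # Deploying a new blueprint means bringing every tenant that is behind
    # up to the current version -- one declarative step, not N scripts.
    return [t for t in tenants if t.deployed_version < blueprint.version]

bp = Blueprint(version=4, ddl=["ALTER TABLE invoices ADD COLUMN currency TEXT"])
fleet = [Tenant("acme", 4), Tenant("globex", 3), Tenant("initech", 2)]
print([t.name for t in pending_deploys(bp, fleet)])  # ['globex', 'initech']
```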
Two isolation levels: L1, a shared host (cheap, instant provisioning, still its own database, data never mixed), and L2, a dedicated host (no noisy neighbors, guaranteed CPU and memory). Move tenants between them with one command; cutover takes roughly 2 seconds via native replication (PG logical, MySQL binlog, Mongo change streams).
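For context on why a ~2-second cutover is plausible, this is the generic PostgreSQL logical-replication primitive (shown through psycopg2 with made-up DSNs; it's the underlying mechanism, not TenantsDB's actual implementation):

```python
import psycopg2

# Assumes wal_level=logical on the source host. DSNs are illustrative.
OLD_HOST = "postgresql://admin@shared-host/acme"
NEW_HOST = "postgresql://admin@dedicated-host/acme"

src = psycopg2.connect(OLD_HOST)
src.autocommit = True
src.cursor().execute("CREATE PUBLICATION acme_move FOR ALL TABLES")

dst = psycopg2.connect(NEW_HOST)
dst.autocommit = True  # CREATE SUBSCRIPTION cannot run inside a transaction block
# Copies the existing rows, then streams every subsequent change from the old host.
dst.cursor().execute(
    f"CREATE SUBSCRIPTION acme_move CONNECTION '{OLD_HOST}' PUBLICATION acme_move"
)

# Once replication lag is near zero, the cutover itself is short: pause writes on
# the old host, let the last changes apply, then point the proxy at NEW_HOST.
```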
Benchmark numbers (5-run medians, 80% reads, 20% writes, stock configs):
| DB | Direct p50 | Proxy p50 | Overhead | Single-tenant QPS | 100-tenant aggregate QPS | Errors |
|---|---|---|---|---|---|---|
| PostgreSQL | 0.82ms | 2.23ms | +1.41ms | 2,039 | 3,926 | 0 |
| MySQL | 1.01ms | 2.34ms | +1.33ms | 1,776 | 2,460 | 0 |
| MongoDB | 1.45ms | 3.32ms | +1.87ms | 1,467 | 1,467 | 0 |
| Redis | 0.66ms | 3.09ms | +2.43ms | 1,260 | 1,195 | 0 |
Zero errors across 2 million queries at 100 concurrent tenants. With 9 noisy-neighbor tenants hammering the shared host (45 concurrent writers each), every engine held sub-17ms latency. The dedicated tier removes neighbors entirely.
Free tier is real, no card required. Link: tenantsdb.com
Question: if you're on shared-schema today, is that actually the architecture you'd pick on a clean slate, or is it the choice you settled on because per-tenant looked operationally impossible?