The tool looks genuinely useful for teams that are committed to the shared-DB + RLS path. The failure modes you list are real, and "the same code is either correct or catastrophically wrong, depending entirely on database state your AI agent never sees" is the cleanest one-line indictment of the model I've read.

The thing that pushed us off RLS entirely (we hit all five of these in production at different points) was realizing the bug class doesn't exist if there are no other tenants in the database to leak (a couple of these sketched at the end of this comment):

- no shared rows to forget a filter on
- no policies to misconfigure or set to `using (true)`
- no SECURITY DEFINER bypass surface
- the service-role key only sees one tenant's DB
- transaction-mode pgbouncer + a missed `SET LOCAL` becomes a non-issue

We ended up building our own per-tenant database orchestration layer (TenantsDB) for exactly this reason. Each tenant gets a real isolated database, schema versioning is coordinated across all of them, and the audit question becomes trivial because there's nothing to audit.

Not saying RLS auditing is wrong. For teams that can't move off shared infra, your tool is the right call.
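For anyone skimming, here's roughly what two of those bullets look like in practice. A minimal sketch, table and policy names invented:

```sql
-- Minimal setup: a shared table with RLS enabled.
CREATE TABLE invoices (tenant_id uuid, amount numeric);
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

-- The correct policy: rows visible only when the row's tenant matches
-- the tenant bound to the current connection.
CREATE POLICY tenant_isolation ON invoices
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- The "using (true)" misconfiguration: RLS still shows as enabled in a
-- schema dump, but this policy passes every row to every tenant
-- (policies on a table are OR'd together, so this one wins).
CREATE POLICY tenant_isolation_broken ON invoices USING (true);

-- The pgbouncer pitfall: SET LOCAL only survives until COMMIT. Forget
-- the transaction wrapper, or let transaction-mode pooling hand the
-- session to another request, and the setting is missing or stale.
BEGIN;
SET LOCAL app.tenant_id = '11111111-1111-1111-1111-111111111111';
SELECT * FROM invoices;  -- correct only inside this transaction
COMMIT;
```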
Both fair concerns. The real answer to either flips a lot based on scale (50 tenants vs 5000) and stack. DM me and I can show how I got past it.
Happy to compare notes if useful, been deep in this for a while now...
Your honesty about not being technical is actually a strength. Most people fake it and end up worse off. You're asking the right questions, and that already puts you ahead.

On scaling and multi-tenancy from day one, here's the simple version. In your meditation app, every user is a row in your database. That's fine for a B2C app: low sensitivity, no compliance pressure. The "multi-tenancy" thing matters more if you ever pivot to selling to companies (and you may want to, for more revenue), like corporate wellness programs where each company has its own employees using your app. Then each company would want its data separated. Not something you need today, just something to keep in your back pocket.

Things you can actually ask your dev that will tell you a lot:

* Are the API keys only on the server, never inside the mobile app?
* Is RLS turned on for every single table? (there's a one-query check below)
* Where are user passwords stored, and are they hashed?
* Are backups running automatically?

You don't need to understand the answers deeply. You just need to ask. A good dev explains it patiently. A bad one gets defensive. That alone tells you what you need to know.

Keep going. You're closer than you think.
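One concrete thing: the "is RLS on for every table" question has a one-query answer you can hand your dev (assuming Postgres/Supabase; this checks the default public schema):

```sql
-- Lists every table in the public schema that does NOT have
-- row level security enabled. An empty result is the good answer.
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false;
```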
My honest advice is, and I am assuming you have some idea of what's going on: do everything yourself. You don't need to commission anything to anybody for a meditation app. Times have changed. I used to be in your position 5 years ago, where I had a team; not anymore. Secondly, think scaling and multi-tenancy from day one.
Read this with interest because I went the other direction. Ran self-hosted Supabase on a cx22 for a few months and the box dropped on me twice during launch traffic. cx22 inventory is also a coin flip in some regions; reboot at the wrong time and the VM doesn't come back up immediately. Fine for a hobby, not fine for production.

A few of your pain points are actually solved if you split the problem differently.

PITR you're cobbling together with WAL backups: that's the right idea, but the operational cost is real (the core of it is sketched at the end of this comment). Built-in PITR exists on managed DB platforms without paying Supabase prices.

Branching not being git-nice: the reason it's painful is that schema lives in one place and you're versioning it manually. If your DB layer treats schema as a versioned blueprint that deploys to multiple environments with one command, branching stops being a separate problem.

Single point of failure: the cx22 going down and taking your app with it is the real story here. Multi-tenant isolation matters here too, even for a single app. If your customer data is in one DB and that DB goes down, every customer is down. If each customer has their own DB on a fleet of backing instances, one node failure takes out a slice, not everyone.

Multi-tenant isolation is the bigger thing nobody mentions in self-host vs cloud threads. The moment you have customers with sensitive data, the question stops being "Supabase or self-host" and becomes "how do I prove tenant A's data is separated from tenant B's." RLS doesn't pass an audit. Self-host doesn't either. There's a layer that handles this: DB-per-tenant with a proxy that routes wire-protocol connections, schema deploys across all tenants in parallel, zero-downtime tier migrations. That's TenantsDB, which I build. Free up to 10 tenants, and isolation is built into the architecture, not bolted on. [docs.tenantsdb.com](http://docs.tenantsdb.com) if you want detail.

Your cost math is correct for an indie app though. If you're under 1000 MAU and don't have enterprise customers, self-hosting wins on price.
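For anyone else cobbling PITR on a single box, the core is WAL archiving plus periodic base backups. A rough sketch with placeholder paths, and this is exactly the part managed platforms do for you:

```sql
-- Continuous WAL archiving (the cp example from the Postgres docs;
-- in practice you'd ship to object storage, not local disk).
ALTER SYSTEM SET archive_mode = 'on';   -- needs a server restart
ALTER SYSTEM SET archive_command =
  'test ! -f /backups/wal/%f && cp %p /backups/wal/%f';

-- Pair the WAL stream with periodic base backups, e.g. from cron:
--   pg_basebackup -D /backups/base/$(date +%F) -Ft -z -X stream
-- A point-in-time restore then means: untar a base backup, set
-- restore_command and recovery_target_time, start Postgres, and
-- let it replay WAL up to the target.
```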
Separate databases per tenant, yes. Not separate servers for each one. One backing instance runs many tenant DBs. Each tenant gets their own logical database, so there's no row-level filter to mess up. The tenants who need more (compliance, noisy load) get moved to their own VM. A proxy in front makes the app side painless: you connect with a standard driver to one endpoint, and it routes to the right tenant DB. Schema changes deploy to all tenants in parallel.
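The "one instance, many tenant DBs" half is plain Postgres; the proxy and parallel schema deploys are the part you want tooling for. A sketch with invented tenant names:

```sql
-- Each tenant is a real database with its own login role on a shared
-- Postgres instance. No tenant_id column, no row filter to forget.
CREATE ROLE tenant_acme LOGIN PASSWORD 'changeme';
CREATE DATABASE tenant_acme OWNER tenant_acme;

-- Belt and braces: a tenant role can only connect to its own database.
REVOKE CONNECT ON DATABASE tenant_acme FROM PUBLIC;
GRANT CONNECT ON DATABASE tenant_acme TO tenant_acme;
```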
Multi-tenant with `tenant_id` works until someone forgets a scope somewhere. Usually a background job, sometimes a migration. The bug class exists until the data is actually separated. RLS covers the obvious read paths but misses migrations running as superuser, SECURITY DEFINER functions, replication streams, anything that doesn't go through the policy (concrete sketch below). You find out the hard way.

Branching per PR is great until you realize production has 50 tenants and your branch has 1. The branch doesn't represent what production looks like.

Cognitive load actually gets worse over time with the integrated-platform path. You end up learning Supabase's opinions on everything instead of picking tools you already know.

There's an architecture that solves all of this cleanly: DB-per-tenant with a proxy that routes wire-protocol connections. Your app code doesn't change, you just connect to the proxy and it handles the rest. Sounds expensive operationally but it isn't if the orchestration is done for you. Worth looking into before you lock in.
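The SECURITY DEFINER one deserves a concrete repro, since it's the one people don't believe until they see it. A minimal sketch, names invented:

```sql
-- RLS is on and the policy is correct...
CREATE TABLE orders (tenant_id uuid, amount numeric);
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY by_tenant ON orders
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- ...but this helper runs with its owner's privileges, not the
-- caller's. If the owner is the table owner, a superuser, or has
-- BYPASSRLS, the policy above never fires and every caller sees
-- every tenant's rows through it.
CREATE FUNCTION order_totals()
RETURNS TABLE (tenant uuid, total numeric)
LANGUAGE sql SECURITY DEFINER AS $$
  SELECT o.tenant_id, sum(o.amount) FROM orders o GROUP BY o.tenant_id;
$$;
```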
If you are truly building this for internal use you should not worry about scalability, and for an ERP system you should not use Supabase. But if you are going to sell this as a service and lean toward a multi-tenant architecture, consider physical database isolation from the beginning for every one of your customers.