few things you can do. first, never give them your supabase service_role key or your dashboard owner login. add them as a team member through the supabase dashboard with a limited role so they can access what they need without having full admin control. second, use a separate branch or a separate supabase project for development. let them build and test there, then you review and merge changes into your production project yourself. that way they never touch your live database directly. third, before they start, take a database backup. supabase does daily backups on paid plans, but you can also run pg_dump from your own machine against the database connection string to keep your own copy. if anything goes wrong you have a restore point. the biggest risk honestly isn't malicious intent, it's accidental damage. someone running a delete without a where clause or dropping a table they thought was a test table. limiting their access to a dev environment instead of production handles most of that.
supabase pauses free tier projects after they've been inactive for a while. when that happens all the services go down including auth and you get dns errors because the project endpoint is basically offline. go to your supabase dashboard and check if the project shows as paused. if it is there's a button to restore it and everything should come back. if it's not paused then try hitting your project url directly in the browser. if that times out too it's definitely a dns or network thing on your end. try a different network or run nslookup on your project url to see if it resolves.
the 401 is almost certainly because supabase webhooks send a request with an authorization header by default and your api is rejecting it. even if you don't have middleware explicitly checking auth, whatever framework you're using might have default behavior that rejects unrecognized auth headers. also host.docker.internal can be flaky depending on your setup. try checking if the request even reaches your api at all by logging at the very first line before any routing or middleware runs. if you see nothing in the logs the request isn't making it to your server and it's a networking issue not an auth issue. if it is reaching your server check what headers supabase is sending. you can configure the webhook in the supabase dashboard to include custom headers or remove the default auth one. that usually fixes it.
since both apps are on subdomains of the same domain the simplest approach is sharing the supabase session cookie across them. set the cookie domain to .mycompany.com instead of the full subdomain and both apps can read the same session token. one caveat: supabase-js keeps the session in localStorage by default, which doesn't cross subdomains — you need to configure cookie-based session storage (the @supabase/ssr helpers support this) and set the cookie domain when you initialize the client. for the redirect flow it's the standard oauth pattern. user hits app b, you check if there's a session, if not you redirect to sso.mycompany.com with a redirect parameter like ?redirect=b.mycompany.com/dashboard. sso page handles the google login through supabase auth, supabase sets the session cookie on .mycompany.com, then your sso app reads the redirect parameter and sends them back. when they land on app b the cookie is already there so they're logged in. the main gotcha on the free tier is that supabase only gives you one auth instance per project. that's actually fine here because you want both apps sharing the same user pool anyway. just make sure both apps are initializing the supabase client with the same project url and anon key. the session is the same, the users are the same, the cookie is shared across subdomains. no need to build anything fancy.
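rough sketch of the redirect handoff in typescript — the hostnames are just the examples from this comment, swap in your own:

```typescript
// app b sends an unauthenticated user to the sso page; the sso page reads
// the redirect param back out after supabase sets the session cookie.
// sso.mycompany.com and the fallback url are placeholder examples.
function buildSsoUrl(returnTo: string): string {
  return `https://sso.mycompany.com/login?redirect=${encodeURIComponent(returnTo)}`;
}

function readReturnTo(ssoUrl: string): string {
  const url = new URL(ssoUrl);
  // fall back to the apex domain if someone hits the sso page directly
  return url.searchParams.get("redirect") ?? "https://mycompany.com";
}
```

the actual login on the sso page is just normal supabase auth; these helpers only cover the bounce between subdomains.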
the policies themselves are usually pretty straightforward. most of the time it's just checking auth.uid() against a column. the actual hard part is that when something goes wrong postgres tells you absolutely nothing. no error, no log, no hint. your query just returns empty and you're left guessing whether it's the policy, the token, a type mismatch, or something else entirely. i think if postgres had something like a "policy denied this row because X" mode even just for development, most people would have zero issues with rls. the logic isn't complicated, it's just invisible. you can't debug what you can't see. so yeah it's not that rls is hard. it's that silent failure plus no tooling makes simple mistakes feel impossible to find.
yeah these are spot on. the rls one especially. i've looked at a bunch of supabase apps recently and the most common thing is rls enabled but the policies are either way too permissive or just missing entirely. people enable it thinking that's the security step and then never actually write proper policies. the silent failure thing makes it worse because nothing looks broken until someone actually tries to access data they shouldn't. for staging i've just been using a separate supabase project and running migrations there first. it's not fancy but it catches the dumb stuff before it hits prod. the free tier makes this basically zero cost. the migration drift thing is real too. i've seen setups where someone made a quick change in the dashboard on prod and now the schema doesn't match what's in the migration files at all. once that happens you're basically guessing what the actual source of truth is.
storing filenames in the db and building the url client side is the right approach, don't overthink it. definitely don't store blobs in postgres, that's going to destroy your query performance and your database size will explode for no reason. supabase storage exists for exactly this. one thing to watch, if the bucket is public anyone with the url can access the images. if that's fine for your use case great. if any of those images are premium content you're gating behind a flag, a public bucket means anyone can just guess the url and download them. in that case use a private bucket and generate signed urls server side with an expiry.
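for the public bucket case the url is predictable, so the client can build it from the stored filename. minimal sketch — the project url and bucket name are placeholders, and the path is assumed to already be url-safe:

```typescript
// public supabase storage objects live at a predictable path:
//   {project-url}/storage/v1/object/public/{bucket}/{path}
// for private buckets you'd instead generate a signed url server side, e.g.
//   supabase.storage.from("premium").createSignedUrl(path, 3600)
function publicImageUrl(projectUrl: string, bucket: string, path: string): string {
  return `${projectUrl}/storage/v1/object/public/${bucket}/${path}`;
}
```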
i've built something similar so here's the short version. use supabase auth directly, make a profiles table that references auth.users with a trigger to auto-create rows on signup. keep a points integer right on that profiles row and do all the math in postgres functions, never from the client or people will give themselves infinite points. for posts store a likes_count and comments_count directly on the post row so you're not running count queries on every feed load. put a unique constraint on your reactions table (user_id + post_id) to prevent double likes. then one postgres trigger on reaction insert can handle the counter increment, the points update, and the notification insert all in one transaction. media goes in supabase storage, separate buckets for avatars, post media, and chat media. compress videos client side before upload. for chat just subscribe to realtime on a messages table filtered by conversation_id. and please enable RLS on every single table, especially messages. i've seen so many apps where anyone with the supabase url can read everyone's private chats. that's the #1 thing people skip and it's the one that will actually get you in trouble.
props for actually building it instead of just talking about it. most "im going to build an alternative to X" posts never make it past the readme. one thing to think about early — auth is where supabase alternatives quietly become dangerous. if people are self-hosting this with real user data, the auth layer needs to be airtight from day one. token handling, session management, password hashing, oauth state verification, refresh token rotation — getting any of these slightly wrong creates vulnerabilities that users wont notice until someone exploits them. supabase has had years and a full security team iterating on gotrue. its the part that looks simple from the outside but has a thousand edge cases. same goes for the storage layer. the moment you let users upload files you need access policies that actually isolate files per user. ive seen so many self-hosted setups where the storage is technically working but every file is readable by anyone who can guess the path. worth thinking about how youre handling bucket permissions before people start putting sensitive documents in there. how are you handling RLS in this? is it postgres native RLS like supabase or a custom middleware layer? curious because that choice basically determines the whole security model
cool project. the cloudflare caching layer in front of vercel is smart for protecting execution limits but make sure your cache isnt accidentally serving authenticated responses to the wrong users. if a logged-in users page gets cached at the cloudflare level and served to a different user thats a data leak. worth setting cache-control headers carefully on any route that returns user-specific data and only caching truly public pages. for the settlement engine running as a cron job — is that edge function using the service_role_key to update all the user bets? if so just double check that function isnt callable from the client side. ive seen setups where edge functions meant for cron jobs are also exposed as regular endpoints and anyone can trigger them manually. in supabase you can set verify_jwt on the function to restrict access. the decoupled landing page approach is the right call for SEO on free tier. monolith next.js apps on vercel burn through serverless function invocations fast on the free plan even for pages that are basically static. keeping the marketing site as pure static and only loading the app shell after auth saves a ton of execution budget
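a minimal sketch of the header idea — the route list here is made up, classify routes however your app actually distinguishes public pages:

```typescript
// pick a Cache-Control value per route so the CDN never stores
// user-specific responses. publicPrefixes is a placeholder list.
function cacheControlFor(path: string, isAuthenticated: boolean): string {
  const publicPrefixes = ["/", "/pricing", "/blog"];
  const isPublic =
    !isAuthenticated &&
    publicPrefixes.some((p) => path === p || path.startsWith(p + "/"));
  // "private, no-store" tells shared caches (cloudflare) not to keep it
  return isPublic ? "public, max-age=300, s-maxage=3600" : "private, no-store";
}
```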
the self-hosting experience is definitely rougher than the cloud version, youre right about that. the auth UI, storage dashboard, and a bunch of the nice integrations are cloud-only features. its open source in the sense that the core components (postgres, gotrue, postgrest, realtime) are all open source but the glue that makes it feel like supabase is partly proprietary. that said i would seriously think twice before building a supabase alternative from scratch. the amount of work behind what supabase does is massive. auth alone with email, phone, oauth providers, MFA, session management, token refresh — thats a multi-year project if you want it production ready and secure. then you need RLS tooling, storage with access policies, realtime subscriptions, edge functions, migrations, and a dashboard tying it all together. its not that it cant be done but most "i will build an open source alternative" projects die after the auth module because people underestimate the scope. if the main issue is cost, a middle ground is self-hosting supabase with docker compose for your dev and staging environments and only paying for cloud on production. that cuts your bill significantly while you still get the nice dashboard and auth integrations where it actually matters. for the free project limit you can also just spin up local instances with the supabase CLI for experimentation and prototyping, those cost nothing
the main value is speed to mvp. with supabase you get auth, database, storage, edge functions, and realtime all from one dashboard with one sdk. no configuring RDS, no setting up clerk separately, no writing your own API layer. for a solo dev or small team trying to validate an idea its genuinely faster to go from zero to working app. the tradeoff is exactly what you said though — youre coupling everything to one platform. with your fastapi + RDS + clerk setup you own every layer and can swap any piece independently. with supabase if you outgrow it or hit a limitation youre migrating everything at once. for a serious production app with a backend team your stack is probably the better long term choice. where supabase really shines is RLS. having row level security enforced at the database layer means even if your API has a bug or someone finds an endpoint you forgot to protect, the database itself refuses to return data the user shouldnt see. with a traditional setup your security lives entirely in your API middleware and if you miss one route its game over. thats a meaningful architectural difference not just a convenience thing. but yeah if youre already comfortable building backends supabase is more of a speed boost than a necessity
honestly just ship it with a generous free tier and a small paid tier for heavy usage. nobody is going to judge you for charging for edge function compute costs, thats completely reasonable. free tier could be something like 10 policy analyses per month which is enough for most small projects, then charge a couple bucks for unlimited. another option if you really want to keep it fully free, run the sql parser server side but deploy it as a single docker container on railway or fly.io. both have free tiers that would easily handle a small community tool. that way people dont have to self host anything, they just use your hosted version, and you only pay if traffic actually gets significant. the tool itself sounds genuinely useful though. debugging RLS policies is one of the most painful parts of supabase because when a policy silently blocks a query you get zero feedback about which policy failed or why. if your analyzer can show "this select is being blocked because policy X expects auth.uid() to match column Y but the current user has a different id" that would save people hours. id focus on making the error explanations really clear because thats the part supabase itself doesnt do well
havent used flutterflow but this kind of thing is the risk with any no-code tool that controls your integration layer. they can change how they connect to third party services and suddenly your whole workflow breaks with zero warning. short term workaround if youre blocked, you can manage your schema entirely through the supabase dashboard or CLI and just manually update your flutterflow data types to match. not ideal but it unblocks you while you wait for them to respond. if youre using supabase CLI locally you can run supabase db diff to generate migration files and push schema changes without needing flutterflow to sync anything. longer term this might be worth rethinking the architecture. having a no-code frontend tool be the only way you sync your database schema is a single point of failure. if your self-hosted supabase is the backbone of production with auth, chat, product data and geo queries, your schema management should live in version controlled migration files not behind a button in flutterflow that can disappear overnight. that way even if flutterflow changes their integration again your database workflow is independent
used both. clerk is genuinely smoother for the initial setup, prebuilt components, user management dashboard, webhooks all just work out of the box. if you want auth done in an afternoon clerk wins on speed. but since youre already using supabase db and you have your own fastapi backend, supabase auth makes your life easier long term. the main reason is RLS. if you use supabase auth, auth.uid() works natively in your RLS policies which means your database is secured at the row level without writing any extra middleware. with clerk you have to manually pass the clerk user id into every supabase query and write RLS policies that check against a custom claim or a separate users table. its doable but its extra plumbing you maintain forever. the clerk DX advantage mostly disappears once you get past the login screen. the ongoing stuff like session handling across your fastapi backend, syncing user data between clerk and supabase, handling webhook failures when a user deletes their account in clerk but the row still exists in supabase, thats where it gets messy. keeping auth and db in the same system avoids an entire category of sync bugs
good writeup. one thing id add thats bitten me a few times with lovable specifically — the supabase client sometimes gets initialized before the auth session is restored on page load. so the first few queries fire with no session token attached, RLS treats them as anon, and you get empty results. then if you navigate to another page it suddenly works because by then the session is loaded. feels completely random to the user but its just a timing issue. easiest way to catch this is to log supabase.auth.getSession() right before your query runs. if session is null but you know the user is logged in, your client is firing too early. wrapping your data fetches in an onAuthStateChange listener fixes it. also the "pointing at a different supabase project" one is way more common than people think. if youve got multiple projects and youre using lovable, check your env vars carefully because lovable sometimes caches the old project URL even after you reconnect to a new one. seen people debug RLS for hours when the app was just talking to the wrong database the whole time
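a sketch of the timing fix — `auth` here stands in for supabase.auth, and only getSession and onAuthStateChange (both real supabase-js methods) are assumed; the interface is simplified for illustration:

```typescript
type Session = { access_token: string } | null;

// simplified stand-in for the shape of supabase.auth
interface AuthLike {
  getSession(): Promise<{ data: { session: Session } }>;
  onAuthStateChange(cb: (event: string, session: Session) => void): void;
}

// resolve with the session once restoration has actually finished,
// instead of firing the first query with no token attached.
// note: this waits forever if the user truly isn't logged in — add your
// own timeout/anon fallback for that case.
async function whenSessionReady(auth: AuthLike): Promise<Session> {
  const { data } = await auth.getSession();
  if (data.session) return data.session; // already restored
  return new Promise((resolve) => {
    auth.onAuthStateChange((_event, session) => {
      if (session) resolve(session);
    });
  });
}
```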
this isnt actually a supabase setting, its on the google side. go to console.cloud.google.com, open your OAuth consent screen settings, and change the app name there. thats what shows up when users see the google login prompt. right now its probably showing your supabase project name or URL because thats what got auto-filled when you set up the OAuth credentials. while youre in there also upload your app logo and set the homepage URL to your actual domain. google caches the consent screen info so it might take a few hours to update but once it does users will see your brand name instead of the database url. one thing to watch out for — if your app is still in "testing" mode in the google console, only users you manually add as test users can log in. once youre ready to go live you need to publish the app and go through googles verification process otherwise random users will get a scary "this app isnt verified" warning which kills signups
yeah this is exactly the problem. stripe webhooks hit your endpoint with no user session so any supabase query using the anon key gets blocked by RLS since theres no auth.uid() to match against. the standard fix is to use the supabase service_role key specifically for webhook handlers. create a separate supabase client in your webhook route initialized with the service_role_key instead of the anon key. the service role bypasses RLS which sounds scary but in this context its the correct approach because the webhook is server-side only, never exposed to the client, and you control exactly which queries it runs. the important part is scoping it tightly. dont pass that service role client around your app or export it from a shared lib. create it inline in your webhook handler, do the specific reads and writes you need for the sync, and thats it. keep your regular anon client for everything else. if you really dont want to bypass RLS at all you can create a specific postgres function with security definer that does the subscription update internally, then call it via rpc from your webhook. security definer functions run with the permissions of the function creator (usually postgres) so they bypass RLS but the logic is contained inside the function itself. feels cleaner than passing a service role client around. either way make sure your webhook route is verifying the stripe signature before doing anything. if someone hits that endpoint with a fake payload and youre using service_role access thats a bad combo
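if you want to see what the signature check actually involves, heres a sketch following stripes documented scheme (hmac-sha256 over `timestamp.rawBody` keyed with your webhook signing secret). in a real app just use stripes sdk — `stripe.webhooks.constructEvent` — which does this plus timestamp tolerance checks:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// the Stripe-Signature header looks like: t=<timestamp>,v1=<hex hmac>
// sketch only: assumes a well-formed header, no replay-window check
function verifyStripeSignature(rawBody: string, header: string, secret: string): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string]),
  );
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const given = parts.v1 ?? "";
  return (
    given.length === expected.length &&
    timingSafeEqual(Buffer.from(given), Buffer.from(expected))
  );
}
```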
yeah the migration workflow is confusing at first because supabase tracks which migrations have been applied in a table called supabase_migrations.schema_migrations on the remote db. when you run supabase db push it compares whats in that table against whats in your local supabase/migrations folder. if theres a mismatch you get those repair suggestions. the most common reason this happens is when both of you create migrations locally at the same time with different timestamps, or when someone manually changes something on the remote db through the dashboard instead of through a migration file. the remote db now has a state that doesnt match any migration and everything gets confused. the workflow that works cleanly: never touch the remote db through the dashboard once you start using migrations. all changes go through migration files only. when one of you creates a new migration locally, push it to git, the other person pulls and runs supabase db reset locally to apply it. when youre ready to deploy, one person runs supabase db push to apply all new migrations to remote in order. for the repair command, its not updating migration files, its updating that tracking table on the remote. if a migration was applied manually or got out of sync you use repair to tell supabase "hey this migration is actually already applied, skip it." its a fix for when things get out of sync, not part of the normal workflow. if youre already in a messy state the easiest fix is to run supabase db remote commit to pull the current remote state into a migration file, get everything in sync, and then start fresh with the clean workflow from there
the fact that youre even thinking about this puts you ahead of 99% of indie devs. most people ship with admin access to everything and never think twice about it. what you want is client-side encryption before anything touches supabase. the user encrypts the image in the browser using a key derived from their password (something like Web Crypto API with AES-GCM), then uploads the encrypted blob to supabase storage. you only ever store the ciphertext. that way even if you open the file from the dashboard its just random bytes. the tricky part is key management. if you derive the key from their password and they forget it, those images are gone forever. no recovery possible. thats the tradeoff with true zero-knowledge encryption. you can soften this by letting users export a recovery key on signup that they store somewhere safe, but youre shifting responsibility to the user. one thing people miss, make sure you encrypt the metadata and commentary too, not just the images. a photo you cant see but with a caption that says "our trip to paris 2024" still leaks a lot. encrypt everything before it hits supabase and store the encryption params (iv, salt) alongside the ciphertext. also your RLS is still important even with encryption. encryption protects against you and anyone who gets database access. RLS protects users from each other at the query level. you want both layers
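rough sketch of the encrypt-before-upload shape. in the browser youd do this with the Web Crypto API (crypto.subtle with PBKDF2 + AES-GCM); this node:crypto version shows the same idea synchronously so its easier to follow. the iteration count and sizes are reasonable defaults, not gospel:

```typescript
import { pbkdf2Sync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// derive a key from the password, encrypt with AES-256-GCM, and keep
// salt + iv + authTag alongside the ciphertext — those params arent
// secret, the password is. the ciphertext is what goes to storage.
function encryptBlob(plain: Buffer, password: string) {
  const salt = randomBytes(16);
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const key = pbkdf2Sync(password, salt, 200_000, 32, "sha256");
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plain), cipher.final()]);
  return { salt, iv, authTag: cipher.getAuthTag(), ciphertext };
}

function decryptBlob(enc: ReturnType<typeof encryptBlob>, password: string): Buffer {
  const key = pbkdf2Sync(password, enc.salt, 200_000, 32, "sha256");
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAuthTag(enc.authTag);
  // final() throws if the password (and therefore the key) is wrong —
  // GCM authenticates, it doesnt just silently return garbage
  return Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]);
}
```

same functions work for the metadata and captions, not just the image bytes.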
done something similar with 3 supabase projects and the centralized auth approach was the least painful. basically pick one project as your auth source of truth, then share the same JWT secret across all projects so tokens issued by the auth project are trusted everywhere. you set the jwt_secret in each projects config to match and RLS policies work as normal since auth.uid() resolves from the token regardless of which project issued it. the external IdP route with clerk or auth0 works too but adds a dependency and another bill for something supabase auth already handles. id only go that route if you need SAML or enterprise SSO that supabase doesnt support yet. for the AI context layer i wouldnt try to query across 3 supabase projects in real time. the latency stacks up fast and if one project is slow everything is slow. better to replicate the data you need into a single read-only project using pg_cron or a simple sync job. slight delay but way more reliable than fanning out queries at runtime. biggest "dont do this" warning: dont share the service_role_key across projects to shortcut the cross-project reads. ive seen people do this and it means a breach in any one project gives full access to all of them. keep the blast radius small, use the JWT approach for user-scoped access and the replication approach for AI reads
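to make the shared-secret idea concrete, heres a stripped-down hs256 sign/verify — any project holding the same secret can verify a token the auth project minted. deliberately minimal: a real service should use a maintained jwt library and also check exp, aud, etc:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// mint a token the way the auth project would (HS256 = hmac-sha256
// over base64url(header).base64url(payload))
function signHs256(payload: object, secret: string): string {
  const head = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${head}.${body}`).digest());
  return `${head}.${body}.${sig}`;
}

// any other project with the same secret can verify it; returns the
// claims on success, null on a bad signature. assumes a well-formed token.
function verifyHs256(token: string, secret: string): Record<string, unknown> | null {
  const [head, body, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", secret).update(`${head}.${body}`).digest());
  if (sig.length !== expected.length ||
      !timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```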
the manual env var copy-paste thing is genuinely one of the most common ways supabase apps break in production. ive seen people accidentally paste their service_role_key where the anon key should go and suddenly their frontend has full admin access to every table. nobody notices until someone checks the network tab. for managing it across environments i just use a .env.local per environment and a simple script that pulls the keys from supabase CLI using supabase status. that way the keys are always in sync with whichever project is linked and theres no manual copying. for CI/CD the keys go into github secrets or vercel env vars once and never get touched again. the scarier version of this problem is when people commit their .env file to git by accident. ive seen public repos with service_role_keys sitting right there in the commit history even after they delete the file. if anyones reading this and not sure, check your git history with git log --all --full-history -- .env and make sure nothing is in there
had a similar issue connecting a different frontend to supabase where the API showed public schema exposed but no tables showed up. two things fixed it for me. first check if RLS is enabled on those tables but you have no policies added yet. when RLS is on with zero policies it blocks everything including the API from listing the tables. either add a basic select policy for anon or temporarily disable RLS on one table to test if thats the issue. second thing, check that the tables you want are actually in the public schema and not accidentally created in a different one — in your supabase dashboard, settings > API shows which schemas are exposed. flutterflow and the REST API only see whats in the schemas listed under PGRST_DB_SCHEMAS which defaults to public. if your tables ended up in another schema they wont show up even though they have the green checkmark in the table editor. also double check that your anon key matches the project youre looking at. if you have multiple supabase projects its easy to grab the key from the wrong one and everything looks connected but returns nothing
email is the thing that breaks silently and you dont find out until someone complains they never got their invite. been there. biggest thing that helped me was stopping using supabase edge functions for email entirely and just using resend or loops with a webhook. you set up one webhook trigger on your users table insert and the email service handles deliverability, templates, open tracking, all of it. trying to manage SMTP config and deliverability from edge functions is a losing battle when youre not a backend person. for the trigger breaking after schema updates, thats a common one. if you rename a column or change the table structure the postgres trigger still references the old schema and just silently stops firing. worth checking your triggers in the SQL editor after any migration to make sure theyre still pointing at the right columns. also one thing to watch since you mentioned 300 users and youre a non-dev using lovable, make sure your edge functions arent exposing your SMTP credentials or email API keys on the client side. ive seen lovable generated code where the function env vars were accessible in ways you wouldnt expect. worth a quick check in your supabase dashboard under edge functions > settings to confirm your secrets are actually secret
this is a really well thought out breakdown of the options. the service user accounts approach is honestly the most practical one ive seen people use for this. the extra auth request per run sounds annoying but if your functions are already running 30-60 seconds an extra 50ms auth call is nothing in the grand scheme. one thing that makes the service user approach cleaner is to generate the session token once on cold start and reuse it across invocations until it expires. lambda keeps containers warm for a while so you wont be re-authing every single run. just cache the token in memory and refresh it when the jwt expires. that way your cost stays flat. the signed jwt approach is what i ended up doing for a similar setup. the single signing key limitation is annoying but you can scope the jwt claims per service by giving each one a different custom role in the payload. so service A gets a jwt with role image_generator and service B gets role data_processor, and your RLS policies check the role claim. if one service gets compromised you revoke its specific jwt without rotating the signing key itself — just add the compromised tokens kid or jti to a deny list. honestly supabase is missing a first class solution here. scoped api keys with per-key RLS roles would solve this cleanly but i dont think its on their roadmap anytime soon
yeah this is a real pain point. the DX for auth on the client side is great but the moment you need to validate tokens server side or in edge functions it falls off a cliff. feels like two different products honestly. i hit the same JOSENotSupported error when moving between environments and it turned out my JWKS endpoint was returning a key with an algorithm the jose library didnt expect. the fix was stupid simple but finding it took hours because the error gives you zero context about which key or which part of the validation failed. your point about the abstraction layer is spot on. if every developer ends up writing the same 100 lines of token validation code thats a sign it should be a built-in helper. something like supabase.auth.verifyToken() on the server side that just returns the user or throws a clear error with an actual error code would save everyone a ton of time. until they build that i ended up wrapping the whole jwks fetch and validation into a small utility that caches the keys and gives human readable errors when something fails. not ideal but at least i dont have to debug JWSInvalid with no context anymore
99% chance this is RLS. when you look at tables in the supabase dashboard youre using elevated credentials that bypass all RLS policies. but your app uses the anon key which is subject to RLS. so you can see everything in the dashboard but the app sees nothing. quickest way to confirm: go to the supabase SQL editor and run select * from your_table — the sql editor runs with elevated privileges that bypass RLS so it will return data. then go to table editor, click the little dropdown next to the table name and switch to "use API key" with anon role. if that returns empty, its RLS blocking you. if youre using auth and want logged in users to read their own rows, you need a policy like select for authenticated role where auth.uid() = user_id. the user_id column in your table has to match exactly what supabase auth assigns, lovable sometimes generates schemas where the user id column is named something different like owner_id or created_by and the RLS policy references the wrong one. also check that youre actually passing the session token with your queries. if the auth session isnt attached to the supabase client the request hits as anon even if the user is logged in. in lovable generated code this sometimes breaks when the supabase client gets initialized before the auth state is ready
been running supabase postgres in production for a write-heavy app for about 8 months. no real complaints on reliability, its been solid. the always-on aspect is real, you dont get cold start weirdness on the db side which matters when your api is already on render and you dont want two layers of wake-up latency stacking. neon's scale-to-zero sounds great on paper but for a write-heavy app with food logs and scans coming in constantly your db is basically never idle anyway. so youre paying for the autoscaling machinery without actually benefiting from it. and the cold start when it does scale back up adds a noticeable delay on the first query which is annoying if a user opens the app after a quiet period. one thing to watch with supabase though, even if you dont plan to use their auth or storage, their postgres instance comes with a bunch of extensions and schemas pre-installed (auth, storage, realtime, etc). not a problem functionally but if youre doing your own migrations or backups just be aware theres more in the database than what you put there. also make sure you disable realtime on tables you dont need it on, it adds overhead on writes. for write-heavy on render id go supabase. simpler mental model, predictable pricing, no cold starts
haven't checked the tax math, but since you're handling people's financial data on supabase i'd definitely double-check a few things before sharing this more widely. make sure your RLS policies are airtight; with tax data you really don't want a situation where one user can read another user's records by guessing an id. test it by creating two accounts and trying to fetch the other user's data directly through the supabase client. you'd be surprised how often this gets missed, especially when AI generates the initial schema. also, the receipt scanner with gemini: are you sending the receipt images to gemini's api directly from the client or routing through a server action? if it's client side, your api key is probably exposed in the network tab. with a tax app that's a bad look even if the key itself is scoped. and since you said no login is required for the calculator part, make sure there's no way to hit authenticated endpoints without a session. i've seen setups where the anon key gives more access than intended because the RLS policies were written assuming the user is always logged in. cool project tho, the 1040 waterfall modeling sounds like a ton of work
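if juggling two browser sessions is a pain, you can also impersonate a user straight from the SQL editor by setting the jwt claims that the policies read. very rough sketch, the uuids and table name are placeholders for your own:

```sql
-- run as one transaction so nothing sticks
begin;
  -- pretend to be user A hitting the API as the authenticated role
  select set_config(
    'request.jwt.claims',
    '{"sub": "<user-a-uuid>", "role": "authenticated"}',
    true
  );
  set local role authenticated;

  -- should return zero rows if the policy is right
  select * from public.tax_records
  where user_id = '<user-b-uuid>';
rollback;
```

auth.uid() reads the `sub` claim out of `request.jwt.claims`, so this exercises the same path your client requests go through.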
the RLS cascading ownership pattern is underrated honestly. most people either skip RLS entirely and do auth checks in middleware, or they write one policy per table and call it done. chaining it through parent relationships is the right way, but it gets gnarly fast once you have nested resources. one thing i'd watch out for with the supabase storage setup: make sure your storage bucket policies are locked down separately from your table RLS. i've seen projects where the database rows were properly protected but the storage bucket was public, so anyone with the file path could access other users' images directly. easy to miss since the dashboard shows them as separate sections. how are you handling the stripe webhook verification on cloudflare workers btw? last time i tried that, the crypto.subtle api on workers handled the signature check differently than node and it silently passed invalid signatures
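for the storage side, the policies live on `storage.objects`, not on your tables, which is exactly why they get missed. a hedged sketch, assuming a private bucket named "user-uploads" where files are stored under a per-user folder:

```sql
-- lock reads to the owner's folder; bucket name and path layout are assumptions
create policy "users read own files"
on storage.objects for select
to authenticated
using (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);
```

the `storage.foldername()` helper splits the object path, so storing files as `<user_id>/file.png` makes ownership checks one comparison. and double check the bucket itself isn't flagged public in the dashboard, because that bypasses these policies for reads.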
went with separate projects after trying branches for a while. branches are fine in theory, but i kept running into annoying edge cases where migrations behaved slightly differently on the branch vs prod, and debugging that was worse than just maintaining two projects. separate projects also mean you get completely isolated storage buckets, auth configs, and edge functions, which is nice when you're testing stuff you don't want accidentally hitting prod data. the extra cost of a second project on the free tier is zero, so there's not really a downside. only annoying part is keeping migrations in sync, but if you're using the supabase cli with db push it's pretty painless. i just have a deploy script that runs against staging first, then prod once i verify everything works
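the script is basically just this. the env var names and the confirm prompt are mine, nothing supabase-specific about them:

```bash
#!/usr/bin/env bash
# sketch: push local migrations to staging, then to prod after a manual check.
# STAGING_DB_URL / PROD_DB_URL are placeholders for your two connection strings.
set -euo pipefail

supabase db push --db-url "$STAGING_DB_URL"

read -rp "staging looks good? push to prod [y/N] " ok
if [[ "$ok" == "y" ]]; then
  supabase db push --db-url "$PROD_DB_URL"
fi
```

the nice part is both projects run the exact same migration files from your repo, so drift between staging and prod basically can't happen silently.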
this is really solid. the indexeddb queue with auto-replay is smart, most people just let the app crash and blame supabase lol. honestly i've never gone as far as a full ec2 hot standby, but i've been burned by the realtime connection dropping silently and writes just disappearing. ended up doing something similar with a local queue that retries on reconnect, but way less sophisticated than what you built here. curious about one thing: how are you handling conflicts when the sync replays? like if someone else modified the same row on supabase while your app was writing to the failover, does the replay just overwrite, or do you have some kind of last-write-wins logic?
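for context on what i mean by last-write-wins: the simplest version i've used is just comparing timestamps at replay time. a minimal sketch (obviously not your implementation, just illustrating the question):

```typescript
// each queued write remembers when it was queued; on replay we only apply it
// if the server row hasn't been modified more recently than that.
type Row = { id: string; value: string; updated_at: string };
type QueuedWrite = { row: Row; queued_at: string };

function resolveReplay(serverRow: Row | null, write: QueuedWrite): Row {
  // no server copy, or our queued write is newer: local wins
  if (!serverRow || Date.parse(write.queued_at) >= Date.parse(serverRow.updated_at)) {
    return { ...write.row, updated_at: write.queued_at };
  }
  // server row changed after we queued: keep the server's version
  return serverRow;
}
```

the obvious downside is it drops the losing write entirely, so it only works when rows are effectively owned by one user at a time.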
most of those "shipped in 24 hours" people are doing nextjs + vercel and nothing else. the second you add mobile, the auth story gets 10x harder and nobody talks about that. went through the same thing with expo + supabase. two things that fixed it for me: ditch asyncstorage for expo securestore for session persistence — asyncstorage randomly loses sessions on ios cold starts, and that's probably why it feels flaky. and for redirect urls you need a custom scheme in your app.json (like carrotcash://) and to add it as a separate entry in the supabase dashboard. trying to make one redirect url work for both web and mobile was half my debugging time. you're not overcomplicating it, mobile auth genuinely is that annoying. once you get past this part, the rest of expo + supabase is smooth tho
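the securestore wiring looks roughly like this. it follows the usual supabase + expo adapter pattern; the env var names are placeholders for your own:

```typescript
// swap asyncstorage for expo-secure-store as the session store
import * as SecureStore from "expo-secure-store";
import { createClient } from "@supabase/supabase-js";

// adapter matching the storage interface the supabase client expects
const ExpoSecureStoreAdapter = {
  getItem: (key: string) => SecureStore.getItemAsync(key),
  setItem: (key: string, value: string) => SecureStore.setItemAsync(key, value),
  removeItem: (key: string) => SecureStore.deleteItemAsync(key),
};

export const supabase = createClient(
  process.env.EXPO_PUBLIC_SUPABASE_URL!,
  process.env.EXPO_PUBLIC_SUPABASE_ANON_KEY!,
  {
    auth: {
      storage: ExpoSecureStoreAdapter,
      autoRefreshToken: true,
      persistSession: true,
      detectSessionInUrl: false, // no url-based session detection in native
    },
  }
);
```

one caveat: securestore has a size limit per value on some platforms, so if you see warnings about large values, that's the session token bumping against it.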
yeah, this is a classic gotcha with self-hosted supabase. the issue is almost certainly that your custom volume path for /etc/postgresql-custom doesn't have the right files in it, or the permissions are off. when the db container starts it expects to find config files at /etc/postgresql/postgresql.conf, but that file actually gets generated from what's in /etc/postgresql-custom. if that mounted directory is empty, or the files inside don't have the right ownership (postgres user, uid 26), the config generation fails and you get that fatal error. couple things to check: make sure the directory you're mounting to /etc/postgresql-custom actually has the files from the original volumes folder. if you just created a new empty directory at that path, that's your problem; it needs the contents copied over, not just the same folder structure. also check permissions on the host side. run ls -la on your custom path and make sure the files aren't owned by root with 600 perms or something. the postgres process inside the container runs as a specific uid, and if it can't read the config files you'll get exactly this error. one more thing: the :Z flag on your volume mounts is for selinux relabeling. if you're not on a selinux system (like if you're on ubuntu), that flag can occasionally cause weird permission issues. try removing the :Z and just use a plain mount for the db volumes and see if that fixes it. if none of that works, run docker compose logs supabase-db right after it fails; there's usually a more detailed error above the fatal line that tells you exactly which config key is broken
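quick way to check and fix the host side. the path is a placeholder, and the uid should be whatever your image actually runs postgres as:

```bash
# see who owns the mounted config files and whether they're readable
ls -la /path/to/volumes/db/

# if they're root-owned with tight perms, hand them to the container's
# postgres uid (26 here as an assumption, verify against your image)
sudo chown -R 26:26 /path/to/volumes/db/
sudo chmod -R u+rX /path/to/volumes/db/
```

the capital X in chmod only adds execute on directories, not files, which is what you want for a config tree.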
had this happen twice. both times it ended up being stuck in a weird limbo state where the restore process started but didn't fully complete on their backend. what worked for me: go to the project settings page and try pausing it again, wait like 5 minutes, then restore. that basically forces it to restart the whole restore cycle cleanly. second time around it came up in like 10 min. if that doesn't work, hit supabase support through the dashboard (the little chat bubble bottom right). they can manually kick the restore process on their end. when i did that they fixed it within a couple hours. also heads up: after it does come back, double-check your edge functions and any cron jobs you had running. mine came back with the db fine, but two of my edge functions were in a weird undeployed state and i didn't notice for like a day. fun times lol
oh man, yeah, the silent failure thing in supabase is brutal. got bitten by the same thing a while back. had an insert that was failing because of an RLS policy i misconfigured, and the response just came back looking totally fine. no error, no nothing. took me like 3 hours to figure out why data wasn't showing up. one thing that helped me beyond sentry is wrapping every supabase call in a helper that checks for both `error` and whether `data` is actually what you expect. something like: if the insert returns null data but no error, that's usually an RLS issue. saved me a ton of debugging since. also worth checking if you have `REPLICA` mode on for realtime; that one silently drops writes too if your RLS isn't set up for the realtime role. super annoying to track down. good call on sentry tho, the supabase + sentry combo catches most of the weird edge cases