Supabase has [this great resource](https://supabase.com/docs/guides/realtime/postgres-changes#database-instance-and-realtime-performance) to estimate when the Postgres Changes Realtime feature might start hitting bottlenecks on your database. If you're using that feature, I'd take a look to see if you've reached your theoretical limits.
I would also like that custom role to have access to the internal statistics tables in Postgres, for example, so the key covers all monitoring needs, not just the privileged endpoints they expose. Reason: our product [DBM](https://www.datadoghq.com/product/database-monitoring/) uses those statistics for our offering, which honestly is 100x more useful than metrics alone. Other products in the same category will hit the same issue, and even if you want to build something yourself, I think it's a net benefit for everyone if the scope of what the key can access were increased.
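To make the ask concrete, here's a rough sketch (plain node-postgres, with a made-up `monitoring_role`) of the kind of query a DBM-style agent wants to run, which needs read access to internal statistics views like `pg_stat_statements`:

```typescript
// Rough sketch, assuming a hypothetical monitoring_role and that the
// pg_stat_statements extension is enabled on the database.
import { Client } from "pg";

const client = new Client({
  connectionString:
    "postgresql://monitoring_role:<password>@<server>:5432/postgres",
});

async function sampleStatementStats() {
  await client.connect();
  // Top statements by total execution time: the kind of internal statistics
  // a database-monitoring product reads, beyond plain infra metrics.
  const { rows } = await client.query(`
    SELECT query, calls, total_exec_time, mean_exec_time
      FROM pg_stat_statements
     ORDER BY total_exec_time DESC
     LIMIT 10
  `);
  console.table(rows);
  await client.end();
}

sampleStatementStats().catch(console.error);
```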
I see, yeah that makes sense, it was probably taking more than 2 seconds to connect, which is kind of long but 🤷‍♂️. Glad you could fix the issue!
I'm assuming the "~" portion in your connection string follows the correct format `postgresql://{user}:{password}@{server}:6543/postgres?pgbouncer=true` (quick sketch of what I mean below). If that's the case, can you tell me the following:

Do you have apps already talking to the database? If so, provide:
* Pool size (Database -> Settings)
* Max client connections (Database -> Settings)
* Database connections (Observability -> Settings)

Do you have Network Restrictions (Observability -> Settings -> Network Restrictions) or Network Bans?
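In case it helps, a minimal sketch of that format with node-postgres; `<user>`, `<password>` and `<server>` are placeholders for your project's values:

```typescript
// Minimal sketch, assuming node-postgres; swap in your real credentials.
import { Pool } from "pg";

const pool = new Pool({
  // 6543 is the transaction-mode pooler port; the ?pgbouncer=true flag is
  // mainly for clients like Prisma that need prepared statements disabled,
  // other clients generally ignore it.
  connectionString:
    "postgresql://<user>:<password>@<server>:6543/postgres?pgbouncer=true",
  max: 10, // keep this below the pool size configured in the dashboard
});

async function ping() {
  const { rows } = await pool.query("SELECT 1 AS ok");
  console.log(rows[0]); // { ok: 1 }
}

ping().catch(console.error);
```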
Other thoughts, things you could take a look at on the DB level:
* Connection health: if your connections are saturated, maybe queries sporadically take more time to be acquired by the application? (quick check sketched below)
* CPU health: same idea, if CPU is saturated, connections might take longer to be released, and whatever work the engine has to do might take longer because there's no CPU available to process it
* IOPS budget: once you reach your limit, queries become _really_ slow
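For the connection health point, a quick sketch using plain node-postgres (the env var name is just an example):

```typescript
// Quick sketch: how saturated are connections right now?
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function connectionPressure() {
  // Connections grouped by state; lots of "idle in transaction" or a total
  // close to max_connections usually means acquiring a connection is the
  // slow part, not the query itself.
  const { rows } = await pool.query(`
    SELECT state, count(*) AS connections
      FROM pg_stat_activity
     GROUP BY state
     ORDER BY connections DESC
  `);
  console.table(rows);

  const { rows: [limit] } = await pool.query("SHOW max_connections");
  console.log("max_connections:", limit.max_connections);

  await pool.end();
}

connectionPressure().catch(console.error);
```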
np! sorry I couldn't help that much, my expertise lies mostly at the DB level :c don't have a lot of experience with Supabase <-> Networking issues (if that's even the case here)
well yeah, I don't work for Supabase, my suggestion is just a patch fix in case it's urgent haha
Yeah, it's kind of hard to tell like this. I don't think it's the SDK; most likely some network connectivity issue is making requests take longer? I'd keep an eye on the Query Performance report as Gary suggested and see if the actual query execution times spike. If the intermittent 15 second queries are causing you trouble and the underlying data isn't changing that much, a simple in-memory cache per user id might alleviate the pain and serve data faster (it only lasts until the server restarts, though), something like the sketch below.
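A minimal sketch of that cache idea, assuming a Node server; `fetchUserData()` and the other names here are made up:

```typescript
// Minimal sketch of a per-user in-memory cache with a TTL; fetchUserData()
// stands in for whatever currently runs the slow query.
type Entry<T> = { value: T; expiresAt: number };

const cache = new Map<string, Entry<unknown>>();
const TTL_MS = 60_000; // tune to how stale the data is allowed to get

async function cached<T>(key: string, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;

  const value = await load();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Only users whose entry has expired pay the occasional slow request; everyone
// else is served from memory. The cache is lost whenever the server restarts.
async function getDashboard(userId: string) {
  return cached(`dashboard:${userId}`, () => fetchUserData(userId));
}

// Hypothetical loader for whatever the endpoint currently fetches.
declare function fetchUserData(userId: string): Promise<unknown>;
```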
Is the endpoint taking 15 seconds to execute, or did you confirm it's the query itself? From the data you sent it seems the query itself takes ~100ms, which is what I'd expect.
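One way to tell the two apart is to time the query separately from the rest of the handler; a rough sketch with supabase-js, where `items`, the filter, and the env var names are all placeholders:

```typescript
// Rough sketch: log query time vs. total handler time to see where the 15 s goes.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

export async function handler(userId: string) {
  const t0 = Date.now();

  const tQuery = Date.now();
  // Placeholder query; swap in the real table and filters.
  const { data, error } = await supabase
    .from("items")
    .select("*")
    .eq("user_id", userId);
  console.log(`query: ${Date.now() - tQuery} ms`); // ~100 ms if the DB is fine

  if (error) throw error;

  // ... auth, serialization, other awaits the endpoint does ...
  console.log(`total handler: ${Date.now() - t0} ms`); // the gap is outside the DB
  return data;
}
```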
After you run that, paste the result here and I can help, but I'm thinking the issue might not be in the DB layer.
All queries seem slower than I'd expect. Have you tried running an EXPLAIN ANALYZE to see where the issue might be? Here's a [page](https://explain.datadoghq.com/) that shows you how to do that.
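If it's easier, you can also run it straight from code instead of the SQL editor; a rough sketch with node-postgres, where `your_table` and the filter stand in for your slow query:

```typescript
// Rough sketch: EXPLAIN ANALYZE the slow query and print the plan.
// Note: ANALYZE actually executes the query for real.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function explainSlowQuery() {
  const { rows } = await pool.query(`
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM your_table WHERE user_id = 'some-user-id'
  `);
  // Each row is one "QUERY PLAN" line; look for seq scans on big tables,
  // row-count misestimates, and where the actual time is spent.
  rows.forEach((r) => console.log(r["QUERY PLAN"]));

  await pool.end();
}

explainSlowQuery().catch(console.error);
```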
Also check your usage metrics, e.g. disk, connections, or CPU, to see if any of them spike during the hours when it stops working.
I'm assuming Lovable uses a version of [Supabase For Platforms](https://supabase.com/docs/guides/integrations/supabase-for-platforms), which IIRC wouldn't give you access to the underlying Supabase account, as it's all managed by them.
Are you talking to your database directly or using something like PostgREST? Also, who are the consumers of the tables? I would assume you could put some kind of rate limiter in your application code (rough sketch below), but I'd need more context on what you're trying to achieve.
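For the rate limiter part, a rough sketch of what I mean at the application layer; all the names are made up, and you'd want something shared (e.g. Redis) if you run multiple instances:

```typescript
// Rough sketch: fixed-window rate limiter per caller, in application code,
// so excess requests are rejected before they ever reach the database.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 100;

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(callerId: string): boolean {
  const now = Date.now();
  const w = windows.get(callerId);

  // New caller or the previous window expired: start a fresh window.
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(callerId, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_REQUESTS) return false;

  w.count += 1;
  return true;
}

// Usage in a handler, before running the query:
// if (!allowRequest(apiKeyOrUserId)) return res.status(429).send("slow down");
```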
Not a Supabase employee, but maybe I can help. Could you provide more details on your resource usage? Stuff like connections / disk / CPU. Are you also using the [pooler](https://supabase.com/docs/guides/database/connecting-to-postgres#poolers) to handle connections better?