Use the browser debugging tools to watch the network waterfall as the page loads. For the layout, are you reserving space for the images so the page can render before they load? See if you can find some tools that defer loading of the images below the fold so it appears to the user that the page renders quickly. Some sort of just-in-time scheme.
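If you'd rather not pull in a library, modern browsers can do most of this natively. Here's a minimal sketch; the selector and the markup in the comment are just assumptions about your page:

```ts
// Hedged sketch: native lazy loading, assuming evergreen-browser support.
for (const img of document.querySelectorAll<HTMLImageElement>("img")) {
  img.loading = "lazy"; // browser defers below-the-fold fetches until scroll
}
// In the markup itself, explicit dimensions reserve the space so the layout
// doesn't reflow while image bytes stream in:
// <img src="hero.jpg" width="800" height="450" loading="lazy">
```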
This is not “running your LLM in the database”, it is calling an API.
You can host on Cloudflare.
If it is just a personal side project, self-host the entire thing locally, preferably in a virtual machine or container, and open access to it using a Cloudflare tunnel.
Test and comment on that ticket and maybe someone will merge it.
See [https://github.com/supabase/setup-cli/issues/399#issuecomment-4184082349](https://github.com/supabase/setup-cli/issues/399#issuecomment-4184082349)
There already is work on this. It may have been released in the last day or so. I haven’t checked since last week.
You need to update your seed file to match the latest DB schema definition. You can still test your data migrations by running the `supabase migration up` command in local dev. I personally like to test them by running the file inside the Postgres CLI with the `\i` command, inside a transaction, rolling back as necessary. Once it is satisfactory, I run migration up.
I would at minimum give each white-labeled customer their own bucket. This makes for easier bulk deletion when they stop being a customer. The folder structure of Storage is just an illusion; it is all just part of the file name as far as the storage system is concerned. Also, as others have said, you should do some cost projections and decide if this is the right solution for your use case. Be sure to include the added complexity of managing the external storage in your comparison.
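Something like this with supabase-js is what I have in mind; the bucket naming and env vars are my own assumptions:

```ts
import { createClient } from "@supabase/supabase-js";

// service-role client, since bucket administration is a privileged operation
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const customerId = "acme"; // hypothetical white-label customer

// onboarding: one private bucket per customer
await supabase.storage.createBucket(`customer-${customerId}`, { public: false });

// offboarding: empty and drop the whole bucket instead of walking "folders"
await supabase.storage.emptyBucket(`customer-${customerId}`);
await supabase.storage.deleteBucket(`customer-${customerId}`);
```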
Does your client computer support IPv6?
See https://supabase.com/docs/guides/api/securing-your-api#examples for an example of how to build out rate limiting for write requests. There doesn’t seem to be a solution for read limiting.
The default API endpoint URL is already behind Cloudflare. You don’t control it, but it is there. You don’t need a custom URL or a paid plan for it.
For the outbox pattern the `pgmq` extension is great. It provides good primitives for reliably adding and removing events to process. Someone even built and shared a robust workflow-processing system on top of it called `pgflow`. You can also use Inngest to process event-driven workflows, but that is an external service you’d have to pay for (I use it because pgflow did not exist when I started my project).
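To give a flavor of the primitives, here's a rough sketch using the `postgres` driver from TypeScript; the queue name, table, and payload shape are all made up:

```ts
import postgres from "postgres";

const sql = postgres(process.env.DATABASE_URL!);

await sql`select pgmq.create('outbox')`; // one-time queue setup

// producer: the business write and the enqueue commit or roll back together
await sql.begin(async (tx) => {
  const [order] = await tx`insert into orders (total) values (42) returning id`;
  await tx`select pgmq.send('outbox',
    ${JSON.stringify({ type: "order.created", id: order.id })}::jsonb)`;
});

// consumer: read with a 30s visibility timeout, delete once processed
const [msg] = await sql`select * from pgmq.read('outbox', 30, 1)`;
if (msg) {
  // ... handle msg.message ...
  await sql`select pgmq.delete('outbox', ${msg.msg_id}::bigint)`;
}
```

Because the enqueue shares the business transaction, you never publish an event for a write that rolled back, which is the whole point of the outbox pattern.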
If your goal is to keep the messages secret from you, the site operator, you cannot have the keys. End of story.
If you have the encryption key, then it is not hidden from you. If you can render it in a view, then you have the key. The only solution is end-to-end encryption, where the message is encrypted and decrypted on the client and the customers are the only ones with keys. Look up how Signal or WhatsApp solve this.
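To make that concrete, here's a bare-bones WebCrypto sketch of the client side. Real E2E systems like Signal layer key exchange and ratcheting on top of this; the point is just that the server only ever sees ciphertext:

```ts
// Encrypt on the sender's device; the server only stores iv + ciphertext.
const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  true,
  ["encrypt", "decrypt"]
);
const iv = crypto.getRandomValues(new Uint8Array(12));

const ciphertext = await crypto.subtle.encrypt(
  { name: "AES-GCM", iv },
  key,
  new TextEncoder().encode("the secret message")
);

// Decrypt on the recipient's device with the same key, shared out-of-band.
const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
console.log(new TextDecoder().decode(plaintext));
```

Everything beyond this (multi-device, group chats, key rotation) is why the Signal protocol exists.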
You’re asking if your solution is reasonable, but we cannot tell you unless we know the problem you are trying to solve. What do you accomplish by encrypting that column when you can just see it from the view anyway? From whom are you protecting the column contents? As for using the Vault, you will have to put your query into a security definer function; there’s no way to get the REST API to reference it directly.
What exact problem do you want to solve here? Encryption at rest?
Can you show the result of running `nslookup` or `dig` on your .co hostname? There was an issue like this a couple of weeks ago, which has since been resolved.
My advice is to avoid enums in Postgres unless you really really need them. The only case I find them acceptable is if you need to sort by a column of those values. Even then it is limited in that you can only append to the list of values. You cannot remove or reorder them. Use a relation table instead. I agree with the advice others have given.
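For illustration, a sketch of the relation-table alternative, run here through the `postgres` driver; the table and column names are invented:

```ts
import postgres from "postgres";

const sql = postgres(process.env.DATABASE_URL!);

// A lookup table instead of `create type order_status as enum (...)`:
await sql`create table order_status (
  id int primary key,
  label text not null unique,
  sort_order int not null -- reorder or retire values freely, unlike an enum
)`;
await sql`create table orders (
  id bigint generated always as identity primary key,
  status_id int not null references order_status (id)
)`;

// Sorting via the lookup table covers the one case where enums tempt you:
await sql`select o.* from orders o
  join order_status s on s.id = o.status_id
  order by s.sort_order`;
```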
There is no limit; you just pay per project based on the size of the resources you select (disk, CPU, and RAM). The cost of the Pro plan includes $10 of credits toward your first project, which exactly equals the cost of running the second-smallest machine.
Curious why you would write code for production without any error checking?
Sounds like OP wanted the database API call to throw an exception on an FK error instead of having to check the success status of the call.
When they say “extreme” it really is that. Most of us mortals will never get to that scale; we can get by with a single node and lots of RAM and IOPS. About 10 years ago I was pushing hundreds of millions of events per day into tracking tables, while still running reporting and other regular application queries. I did this on a single 12-core server (and an identical replica for failover) with gobs of RAM and SSD cache drives for ZFS, running FreeBSD.
What’s your location? Did you check the status page?
Check this project out: https://github.com/psteinroe/postgres-conductor It basically wraps pgmq with some well-defined patterns for coordinating tasks.
They’ve had some issues recently, but they haven’t affected me much. The orchestrator runs on their infrastructure, but everything else runs within my own application and database. I recently came across this project and think it is a worthy contender: https://github.com/psteinroe/postgres-conductor My only issue with it is that it needs a long-running worker process, and I run everything on serverless architecture. I might work up a way to use it anyhow, because I like to avoid intertwined dependencies whenever I can.
I use Inngest for all my durable tasks. I implement webhooks in my front end and submit the work to Inngest. Once it is accepted, I return success from my endpoint.
What do you mean “persisting”?
They are all available as ARM images. That’s how the local development environment runs on modern Macs.
They have an S3 compatible API, but you shouldn’t consider it to actually be S3.
Folders are an illusion. The `/` separator is just part of the object’s name. It is only useful for you as the user to group them. You could just as well have used any other symbol.
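You can see it with supabase-js; both uploads below create one flat object key, the first just happens to contain slashes (bucket and file names are hypothetical):

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);
const file = new Blob(["..."], { type: "image/png" });

// key is "users/42/profile.png": one object whose name contains "/"
await supabase.storage.from("avatars").upload("users/42/profile.png", file);

// the same grouping with a different separator; storage treats both identically
await supabase.storage.from("avatars").upload("users__42__profile.png", file);
```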
I see some chatter about DNS among the Supabase community helpers who hang out in Discord. Maybe check there.
You can access Supabase Storage using the S3 protocol. You cannot bring your own S3 buckets to use with Supabase.
No. You cannot bring your own storage to Supabase. You will need to modify your code to use AWS S3 directly.
The scanner is following a POST form button? Maybe stick a CAPTCHA on that screen, or add a delay before the button becomes valid.
This was just asked the other day. https://www.reddit.com/r/Supabase/s/lLaU8m488S
There is discussion about this problem in the Supabase Auth documentation, so yes, others have experienced it. Short answer: use the GET request to display a form that does a POST to trigger the final action.
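A minimal sketch of that pattern with Node's built-in HTTP server; the routes and markup are my assumptions:

```ts
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method === "GET" && req.url?.startsWith("/confirm")) {
    // Link scanners issue GETs, so the GET only renders a form; nothing fires yet.
    res.writeHead(200, { "content-type": "text/html" });
    res.end(`<form method="post" action="${req.url}">
               <button type="submit">Confirm</button>
             </form>`);
  } else if (req.method === "POST" && req.url?.startsWith("/confirm")) {
    // Only a real click lands here; perform the final action (verify token, etc.).
    res.end("Confirmed.");
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```

Posting back to `req.url` carries the token query string through from the GET to the POST.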
Yes. Just use the Supabase SSR library to integrate authentication into your app and the Supabase JS library to register your users and query/update the database. Ignore all other features Supabase provides.
You do that once per migration file you are going to create. A migration file can have as much DDL in it as you want. Now that you mention declarative schemas, though, ignore this advice.
When you need a new migration, use the `supabase migration new xxxxx` CLI command. It will number the files to keep them in order.
This command from the CLI:

```
npx supabase vanity-subdomains --project-ref abcdefg activate --desired-subdomain mycustomname --experimental
```

It is documented _somewhere_, I just don't know off-hand where.
There’s an “experimental” setting you can do with the Supabase CLI program to set a semi custom name like `mycustomname.supabase.co`. This is free. I’ve been using it for a couple of years on my staging environments.
What did you change since yesterday?
The default JWT session is not 1 minute. Either something you are doing is stomping on the session object, or you are setting it really short.
The REST interface just uses the `fetch` API from your JavaScript runtime; you do not need to manage the socket for it. You are expected to refresh the JWT every so often, and if you are using the provided authentication library, it will do that for you. For your other questions, you need to describe exactly what you are doing and what symptom makes you need to reauthenticate.
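For reference, this is roughly all the client code has to do when the library manages the token (the table name is hypothetical):

```ts
import { createClient } from "@supabase/supabase-js";

// autoRefreshToken is on by default, so the library renews the JWT for you
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// returns the current session, refreshing it first if the token has expired
const { data: { session } } = await supabase.auth.getSession();

// each PostgREST call is just a fetch() with the current access token attached
const { data, error } = await supabase.from("notes").select("*");
```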
Your whole post is overly dramatic. I fully understand the Supabase architecture and don’t need an AI to explain it to me.
You can connect to it directly the same as any other hosted Postgres.
Are you referring to the shared proxy server? The DB itself runs on its own VPS in Amazon’s cloud. You get your very own REST server too.
So, like all virtual machines you get from any cloud provider.
There’s a line in your error message that starts with “Caused by”. What does that line mean to you? It seems pretty clear you have an issue connecting to your instance. There are basically two possibilities: either there are network connectivity problems between you and your Supabase instance, or your Supabase instance is down. Did you check the Supabase status page?