I personally only use the UI for account settings. Everything else about the database I manage via CLI tools (and migrations), mostly `psql`. I have a file with queries I copy/paste to give me various statistics and configurations, and I have a full `pg_tap` test suite to ensure nothing changes that is not expected to change.
Does your app have a concept of "invite link already used" and if so, what happens if the person clicks the invite link the second time? If that sends them to your landing page instead, then you might be running into the issue where mail scanners (such as Microsoft 365 mail) "consume" the code while scanning for viruses.
Just tried from my non-usual browser and had no issues logging in. Also east coast US, specifically near DC.
OAuth works great locally for users to log in using the various social logins such as Google and Microsoft. You will need to configure the `[auth.external.google]` section (one for each service) to provide the necessary credentials you would normally set via the dashboard on the cloud. Note that some providers don't allow you to use 127.0.0.1, so you have to configure your local service to use the name "localhost" and then update all your OAuth providers to use the same for the callback.
I know nothing of React Native. For my web app I use the PKCE login flow, and the callback route calls `supabase.auth.exchangeCodeForSession()` after the Supabase auth layer has done the OAuth2 exchange with Google.
For local development, the authentication service needs to see the values. Those are configured in the supabase config file:
```toml
[auth.external.google]
enabled = true
client_id = "env(SUPABASE_AUTH_EXTERNAL_GOOGLE_CLIENT_ID)"
secret = "env(SUPABASE_AUTH_EXTERNAL_GOOGLE_CLIENT_SECRET)"
skip_nonce_check = false
```
You can either put those values directly into the `config.toml` file, or you can set them in your `.env` file and reference them as I do. On production, you just set them using the Supabase console. Not sure what you do for self-hosting.
correct.
Good points. I don't use the dashboard UI at all to interact with the DB, but I do rely on backups. Good thing to check.
Here are my notes for setting up a credential to use with Supabase Authentication with Google social login. They are sparse because they are really just notes to remind myself how I did it, and I'm familiar with the Google Cloud platform. When you go to the overview screen, on the left side is a tab for "Clients". Here you should see the "app" you created for OAuth2 use in the platform. Select that and there should be a screen with a section called "Client secrets". Since you cannot reveal the one you already have, add another and delete the old one. The secret only needs to be put into the Supabase configuration; your client does not need it. --- Create an Auth app in the [cloud console](https://console.cloud.google.com/auth/overview?project=MYPROJECTNAME). First time through you need to enable the feature, then create a "client". Add the callback URLs from Supabase to the Authorized redirect URI list. Configure the privacy policy and AUP links and the logo. For testing, you need to whitelist the test users in the "Audience" screen (until the OAuth2 app is "approved"). Add the keys to the environment for dev only (to be picked up by the supabase config file). For production, the keys are configured in the dashboard.
I will definitely have to set up my permissions carefully.
Does your email provider have a "trash" folder to retrieve recently deleted email messages?
How is your project using Supabase?
I run my tests using the included `pg_tap` test framework and a utility function to simulate logging in as a user or service role as needed. This keeps the tests as close to the actual database as possible, so they will apply to any layer above it, such as the REST interface.
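For illustration, a minimal sketch of the pattern (the table name and claims are hypothetical; the two `set local` lines are what my login-simulation utility wraps):
```sql
begin;
select plan(1);

-- simulate a logged-in user the way PostgREST would
set local role authenticated;
set local request.jwt.claims = '{"sub": "00000000-0000-0000-0000-000000000000", "role": "authenticated"}';

-- RLS should only show this user their own row
select is(
  (select count(*)::int from public.user_metadata),
  1,
  'user sees only their own row'
);

select * from finish();
rollback;
```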
Not sure if you've succeeded yet, but another thing you can do is increase the amount of memory the server will use while creating the index. The default is pretty small unless your server instance has gobs and gobs of RAM. Right after you set the `statement_timeout`, also set the `maintenance_work_mem` to something like 25%-30% of the RAM on your instance. On the micro instances (with 1GB RAM) it defaults to 64MB, and on a medium instance (4GB RAM) the default is 256MB.
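Something like this, right before the index build (names and sizes are just examples for a 4GB instance):
```sql
-- allow the index build to run long and give it more memory to work with
set statement_timeout = '60min';
set maintenance_work_mem = '1GB';  -- roughly 25% of a 4GB instance

create index if not exists idx_orders_created_at
  on public.orders (created_at);
```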
That was me on the github issue feature request. 🙂
In your other post, you're getting help from one of the most helpful helpers known around these parts. He even offered to escalate your ticket for you, and here you are saying that you are feeling pressured into paying for support to get a resolution. To your main question here, "what is a good alternative", you need to articulate exactly what you need such an alternative to do for you. There are many services rolled into one with Supabase. I also doubt you'll get much advice here since we are users of Supabase.
This is really wonderful. I'm already liking it so much better than trying to click around DBeaver (or any full UI for that matter).
Historically, this is why the role of DBA existed. All changes to the database schema would be coordinated and approved by the person/team responsible for ensuring the stability of the database.
You have a lot of unknowns here. What stack is your front end built on? Does it need an app server or does it run out of static file hosting? On the pro plan, you can select what level of server you want. It just depends on how much IO bandwidth and storage you need. How big is the data you're storing, both per-form and in total?

What is the expected distribution of your actions? Usually one would model the "arrival time" of the users using either a Student's t or Poisson distribution over time. That is, it is pretty much never a constant rate. Do you expect surges, such as the morning when the doors open and everyone signs up all at once? You also need to specify what kind of response time you need. Does everything have to respond within 200ms? Is 1s good enough? 5 seconds?

After you figure that all out, you can make reasonable decisions on how big of a server you want to rent from Supabase. If you only need it for a few days during the event, it is not going to hurt so much if you over-provision it, as the difference would be only a few dollars. Final advice is to simulate your load and see if you get the performance you require.
The migrations don't run in the `psql` client program. They use their own interpreter, and I find things like setting `statement_timeout` generally do not work there. You also will probably need to convince GitHub Actions to allow your command to run longer. I don't know what their timeouts are.
There are no `A` records for the db.*.supabase.co hostnames unless you pay extra for the IPv4 address add-on. They will always have `AAAA` (IPv6 address) records.
I'm not a fan of ORMs in general, but I've been deep into SQL databases for 30+ years and I've only built my own SaaS applications which did not need automatic migration from arbitrary versions. I think adding ways to get various metadata from the supabase-js library is something they should build, so showing a real-world use case will help provide justification to build it.
For now I'd just tell TS to expect the error, then file a feature request on the github to allow asking for `xmin`, `xmax`, and `ctid` at least. I wouldn't think it would be right to monkey-patch the database types definition to include these fields in the record definition, but maybe that works for you.
It looks like what electric collection needs is the txid that inserted the row. You can select `xmin` from the table to see that. From the CLI it looks like this:
```
postgres=> select user_id,xmin from user_metadata ;
┌──────────────────────────────────────┬────────┐
│               user_id                │  xmin  │
├──────────────────────────────────────┼────────┤
│ 292b94c3-818d-444c-8013-bffa943fb3aa │   1264 │
│ a2d69425-b0be-4bf4-9bc8-16d86b1fbf58 │   2932 │
│ a08a8483-bcf9-4f35-ac79-1012923c0c0b │  27832 │
│ 83e856a0-efb4-4f5e-acce-f3c6e72d84d9 │  40512 │
│ c90df842-ebda-459c-8dde-1abf66435008 │  44567 │
│ ca0813b6-148e-4645-8133-61c4952976cf │ 538576 │
└──────────────────────────────────────┴────────┘
(6 rows)
```
The column `xmin` is basically a system-provided column that is otherwise hidden. Another useful one is `ctid`, which is the physical location of that row on disk. I'm not sure how you'd do that cleanly with the typescript client through PostgREST. If you try to do `.select('*,xmin')`, TypeScript complains that column `xmin` does not exist on the table. However, it does return the value:
```json
{"user_id":"ca0813b6-148e-4645-8133-61c4952976cf","xmin":"538576"}
```
There's a field called `key_id` in `vault.secrets`. That's the value it wants. It is the most craptastical design because you cannot easily automate this via migrations; you have to insert the secret into the vault, then query it to get the key ID that was assigned. This came about because one of the people who implemented it did not understand that the vault "name" was unique, so they made the wrapper use the assigned key_id instead. Aside from this, this FDW is extremely fragile. If you make too many queries too quickly that do API "write" calls, it just pauses and/or gets stuck and/or just fails. It is much, much better to use the Stripe API directly within your app.
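The dance looks something like this (the secret value and name here are made up):
```sql
-- 1. insert the secret into the vault
select vault.create_secret('sk_live_xxx', 'stripe_api_key');

-- 2. read back the key_id the vault assigned, which is what the FDW wants
select key_id from vault.secrets where name = 'stripe_api_key';
```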
How big is your table (number of rows) and how busy is your application querying that table? Altering a table requires an exclusive lock, and acquiring that lock can time out if something else is holding locks on records in that table.
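If that is the problem, one mitigation is to set a `lock_timeout` so the `ALTER` fails fast instead of queueing behind whatever holds the lock (table and column here are hypothetical):
```sql
set lock_timeout = '5s';
alter table public.orders add column notes text;
```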
nslookup is using `A` for the query. You need to tell it to use `AAAA`:
```
nslookup -query=aaaa db.xyhwxtuicgtormnxscac.supabase.co 8.8.8.8
```
If you're being paid to make the app, you are not allowed to use Vercel free hosting for it.
If you go to the "backups" tab on the database part of the dashboard and it now says "physical" instead of "logical" you are done with the maintenance. Here's what it looks like. It is not a big stretch to conclude that the timestamp of the first physical backup is when this project was converted.
Adding another data point: I have a project created in May 2024 for which I have no migration on storage.objects other than defining the policy. On that db, `rls_enabled` is true, which is what the default was upon creation of the project.
That's the text which only shows while editing fields.
Clearly I updated my prod instance before 30 was the new 2. 🙂
The rate limit link doesn't show up unless you alter a value in the SMTP server form.
On my other project it was 30. Don't recall ever setting those. These projects are like 2 years old.
Thanks!
Interesting... when I went to reset my SMTP settings, it showed me a link to that screen and was indeed still sitting at 2. SMH.
Updating `raw_user_meta_data` within a `BEFORE INSERT` trigger on `auth.users` does not in fact do what a reasonable person would want it to do. There are apparently several updates following the insert which will put back the original value of that field when a user is created. You have to wait until the full user create process is complete.
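For anyone searching later, a sketch of the pattern that does *not* stick (the function, trigger, and injected key are all hypothetical):
```sql
-- looks right, but later steps of the signup flow put the original value back
create or replace function public.seed_user_meta()
returns trigger
language plpgsql
as $$
begin
  new.raw_user_meta_data :=
    coalesce(new.raw_user_meta_data, '{}'::jsonb) || '{"plan": "free"}'::jsonb;
  return new;
end;
$$;

create trigger seed_user_meta
  before insert on auth.users
  for each row execute function public.seed_user_meta();
```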
OP please reply back if you get details from Supabase. This would be a good thing to know.
I back up some AWS S3 buckets to a local backup server daily using `rclone`. There are many tutorials on how to set that up. I don't see why you couldn't do that using the S3 interface Supabase provides to storage, but I haven't tried it yet. One thing rclone does is compare file timestamps to decide if it needs to fetch the file again. This is *usually* a sufficient test of a file changing on S3.
What's unruly about them? Over time, you will accumulate them. I had a service I ran for ~18 years which ended up with around 100 migrations by the time we shut it down. That wasn't Supabase but it was on Postgres. What I do is keep a copy of the current schema for easy reference:
```sh
npx supabase db dump --local --keep-comments > supabase/current_schema.sql
```
This has worked the entire 3+ years I've been using Supabase. It is not related to updating the CLI.
Are you consuming your link with a GET request? If so, your user's email service is causing it to become "used" when it scans that URL for anti-virus purposes. The solution is to make the GET display a screen with a button that does a POST to cause the actual action (which calls the Supabase function) to occur. My hunch is they are using Microsoft hosted email, which is notorious for this problem.
I just implemented OAuth2 and wanted to say thanks again for sharing this <@869319595784290395> . There is no need to put any public IP in front of the local supabase to make OAuth2 work, at least with Azure, Google, and LinkedIn. They all work just fine with localhost. The only requirement is your browser needs to be able to access it, and the URL string matches the allowed configuration. However, Azure *will not* let you configure the return URL as http://127.0.0.1:54321/ (the default) so you *must* set this undocumented `api.external_url` value to be http://localhost:54321 to test locally with Azure. I'm not sure if any other OAuth2 providers require that as well. Basically there is no way to do local dev testing with Azure OAuth2 login without this. I shared my comments in a ticket https://github.com/supabase/supabase/issues/41883 to improve the documentation.
The billing cycle on each of these receipts is different. It must be two different orgs.
The error message starts with `535 5.7.8 Error:` which is an error coming from your SMTP service. The code 535 is an SMTP protocol authentication failure. The rest of that error message looks like a base64 string (the `+` at the end was my biggest clue). Decoding that, it shows: `<3334311811280004.1766579779@p-pm-outboundg04a-aws-useast1a>` which to my eye looks like a message ID or some other ID your mail service is using to track this attempt. Go check the logs at your mail server on AWS (I'm assuming you're using SES based on the domain name) and search for this string and see what errors are there. My diagnosis is that Supabase is not able to use the credential you gave it to send mail via your SMTP provider. This is what you need to solve. Maybe you copy/pasted the credentials incorrectly by accident.
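If you want to reproduce the decode step, Postgres itself can do it (the token here is illustrative, not the one from your error):
```sql
select convert_from(decode('aGVsbG8gd29ybGQ=', 'base64'), 'utf8');  -- → hello world
```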
Just curious how you learned this if it is not documented. My next task is to enable oauth and this will be helpful for testing it.
Are you expecting the table size to shrink when running vacuum? It will not. It will accomplish nothing beyond what autovacuum will do, and that will be triggered pretty quickly after a large delete anyway. You won't be able to run these commands inside a function block since function bodies run implicitly inside a transaction. You will need to use some external cron job that connects to the database as the postgres user and runs those commands.
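If you go that route, the statement the cron job runs has to be a top-level command (table name is hypothetical):
```sql
-- must be a top-level statement: vacuum refuses to run inside a
-- transaction block, which is why it can't live in a function
vacuum (analyze, verbose) public.orders;
```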
What problem are you solving by running vacuum analyze daily? The system has maintained itself automatically for many versions of Postgres now.
In the dark ages, before we had cloud everything, this was solved by making everything redundant in different data centers. The layers upon layers of what you have to do is almost endless, including redundancy at each data center itself. At each data center, we had twin fail-over routers, twin database servers, twin (or more) load-balanced app and web servers. The DB servers would use streaming replication (at the time I used the Slony-1 extension to Postgres) to create a backup which could be activated on failure of the primary. The firewalls would auto-fail-over since they were mostly stateless, as were the web and app servers.

On top of that you need global redundancy on your DNS at multiple providers in multiple locations. You need to be able to load balance or route traffic to the different data centers. We had nothing like Cloudflare to provide anycast routing and DDoS protection. You just had to hope and pray your upstream could handle it.

Running all of that is extremely expensive in both physical hardware and in people resources. Not everything can be fully automated. So when you say backup and failover, you really are asking about every layer of your stack. At some point you just need to rely on the cloud vendors to do it for you. This is what I do now, and I take with it the risks of them failing from time to time.

That said, one thing I would really like from Supabase is a multi-region failover option using streaming replication, ideally using multiple clouds. I don't see how you get out from the need of a Cloudflare-like service for DDoS and bot protection, and that cannot realistically be self-hosted, so you will always be at the mercy of someone.
I've been using `mdxeditor` for rich text editing. It renders ok on mobile.