We previously had a slightly hidden usage summary in the "Upcoming Invoice" section. This section has been revamped and moved to the organization's usage page.
The improved usage summary features:
Per-project breakdown for usage
Displays costs for over-usage on usage-based plans (Pro with spend cap off, Team, Enterprise)
Displays usage as a percentage for usage-capped plans (Free/Pro with spend cap on)
Metrics with higher usage/costs will be sorted to the top
Insights into compute usage in summary
Usage can now be retrieved for a custom period and not just the current billing cycle
Usage summary can be filtered by project
Indicators when you're approaching or exceeding limits that could lead to restrictions
The new usage summary section (usage-capped plan):
New usage summary with a usage-based plan (Pro with spend cap off, Team, Enterprise):
When hovering over the circular progress bars, you get per-project breakdowns of usage and some further information:
We now also allow you to filter total usage by a single project or view a period other than the current billing cycle. Simply change the timeframe at the top of the usage page.
Usage filtered with a custom timeframe (not relative to billing cycle):
The organization's usage page shows daily stats for all sorts of usage-based metrics but was still missing insights into compute hours. Compute usage insights have now been added to the usage page.
New section on the usage page:
Sample usage with a single project:
When running multiple projects or projects on different compute sizes:
The "Upcoming invoice" section on the organization billing page has been vastly improved and now offers per-project breakdown of metrics and project add-ons. Additionally, there is a simple projection of your cost at the end of the month.
Here's an overview of the new section with all project breakdowns collapsed:
You can expand any usage-based item or project add-on to get a per-project breakdown:
The line items have also been improved to show included quotas and costs for over-usage:
On usage-capped plans (Free Plan or Pro Plan with Spend Cap toggled on), you will now also see a warning on the top of the subscription page, in case you're exceeding your plan's limits. A more detailed breakdown is available on the organization's usage page.
When you are about to upgrade your organization's subscription plan from free to paid or between paid plans, we show you a confirmation screen. That confirmation screen has been improved to show a per-project breakdown for compute costs. Additionally, some useful information about usage-billing for compute and links to related docs have been added.
New confirmation modal:
Break down add-ons on a per-project basis:
Education about usage-billing for compute, mixing paid/non-paid plans and links to related docs:
Table Editor: fixed stale values for boolean fields in the row edit side panel#
There was an issue in the Table Editor when editing rows in the side panel: for column types rendered with the Listbox component, the input field showed stale data from the previously opened row. This was caused by the Listbox component not re-rendering correctly when the value passed to it changed, and is now fixed.
Added recommendation to enable PITR when enabling branching#
We strongly recommend enabling point in time recovery (PITR) for your project if you're planning to enable branching. This ensures that you can always recover data if you make a "bad migration", for example if you accidentally delete a column or some of your production data.
Previously, it was possible to insert or update rows directly on the pg_cron extension's cron.job table. This bypassed security checks that are asserted when jobs are scheduled or modified via the pg_cron functions.
You can see how to schedule/modify cron jobs using the examples in our docs.
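For reference, here's a minimal sketch of managing jobs through the pg_cron functions rather than by writing to cron.job directly (the job name, schedule, and command are illustrative):

```sql
-- Schedule a job that runs every night at 03:00
select cron.schedule(
  'nightly-vacuum',        -- job name
  '0 3 * * *',             -- cron schedule
  $$ vacuum analyze; $$    -- command to run
);

-- Remove the job via the pg_cron functions, not by editing cron.job
select cron.unschedule('nightly-vacuum');
```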
LinkedIn has modified the required scopes for their API, and OAuth applications created prior to 1st Aug 2023 do not contain the appropriate scopes. This could cause errors when attempting to sign in with OAuth via LinkedIn. If you have the LinkedIn provider enabled on your project, a follow-up notification will be sent to your email, as you could potentially have a LinkedIn OAuth application created before 1st Aug 2023 and be affected. As we don't have access to your LinkedIn OAuth configuration, we cannot tell with certainty when your OAuth application was created, so we have to reach out to all users with LinkedIn enabled.
To adjust to this change, we have introduced a new LinkedIn (OIDC) provider which contains the new required scopes and we have deprecated the existing LinkedIn provider.
If you are using a LinkedIn OAuth Application created before 1st August 2023 we ask that you create a new LinkedIn application and migrate your Dashboard credentials from the deprecated LinkedIn provider to the new LinkedIn (OIDC) provider as shown in the screenshot below. Please do so before 4th Jan 2024 as we will be removing the provider from the dashboard then.
Edge Functions has some predefined secrets: SUPABASE_DB_URL, SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY. Previously, if you reset your DB password or JWT secret, these secrets would become stale. Now, these changes are propagated to Edge Functions secrets. This fixes https://github.com/supabase/supabase/issues/12415.
If you've previously had this issue, you can reset your DB password using the old value to avoid downtime for your app. If you're resetting the JWT secret, you need to update your app to use the new API keys, which incurs some downtime.
Realtime runs migrations for tables under the realtime schema when it connects to databases, and sometimes this fails. These changes make Realtime handle migration failures more gracefully.
Support for column encryption in the Table Editor has been removed. You can still use column encryption, but only via SQL. Your data is already encrypted at rest, so this is an advanced feature that should be used sparingly.
Previously, the Table Editor in the Supabase dashboard supported encrypting newly created columns using pgsodium’s Transparent Column Encryption (TCE).
While this made encryption easy to use, that ease led to a lot of misuse. We've decided to remove it from the UI for now because TCE has a few sharp edges, and the dashboard made it too easy to encrypt columns without considering the trade-offs.
This misuse led to multiple users running into unrecoverable issues with encryption. A non-exhaustive list of issues we observed when TCE was used through the dashboard:
TCE is prone to inappropriate usage: we've seen users encrypting all kinds of data that does not need to be encrypted (e.g. the email addresses of senders/receivers). This incurs a performance penalty and results in a bad experience.
TCE makes migrating between projects (or from local to hosted) a problem, as you also have to copy the root encryption key separately. This is by design, but developers should be aware that "just works" and "advanced encryption" are very difficult goals to align.
Triggers (which TCE relies on) are executed in alphabetical order. When users add their own triggers on encrypted tables, they are frequently unaware of whether they are dealing with encrypted or unencrypted content, which has been a source of confusion.
Upserting into an encrypted column could produce doubly encrypted content.
Since TCE uses a view over the encrypted table, RLS policies applied to the underlying table do not apply to the view, because views run with the permissions of their creator rather than the querying user, leading to another source of confusion. There is a fix for this, which is to add a security label for pgsodium to make the view a security invoker.
As of now, you can use TCE in SQL by following the pgsodium documentation, so users who are already using TCE can continue doing so via the SQL Editor on the dashboard, while new users will have to learn the nuts and bolts of what they are doing before trying to use the feature.
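For reference, a minimal sketch of what TCE via SQL looks like, assuming a hypothetical private.credentials.secret column (see the pgsodium documentation for the authoritative steps):

```sql
-- Create a key managed by pgsodium; note the returned key id
select id from pgsodium.create_key();

-- Label the column for Transparent Column Encryption with that key id
-- (table, column, and key id are illustrative placeholders)
security label for pgsodium
  on column private.credentials.secret
  is 'ENCRYPT WITH KEY ID <key-uuid-from-create_key>';
```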
If you use Deno Postgres or other Postgres clients to connect to your database instance from a Supabase Edge Function, those connections are now secured with SSL. You don't need to add any extra configuration to your client setup.
Databases larger than 100GB are being transitioned to using physical backups for their daily backups.
Physical backups are more performant, have lower impact on the db, and avoid holding locks for long periods of time. Restores continue to work as expected, but backups taken using this method can no longer be downloaded from the dashboard.
Over the next few months, we'll be introducing functionality to restore to a separate, new database, allowing for the perusal of the backed up data without disruption to the original project.
Postgres 12 is deprecated as of 14th October 2023 and support for it will be fully removed on 27th November 2023.
Postgres 15 comes with numerous features, bug fixes and performance improvements. Check out the announcement blog posts to find out what each version introduces.
15th October: All users are notified via email about Postgres 12 Deprecation.
27th October: Users can self serve upgrade to Postgres 15 from our dashboard. If you want to upgrade your database to Postgres 15 before 27th October, reach out to our support. A dashboard notification will be sent about this deprecation.
13th November: Users are notified via email.
27th November: All Postgres 12 databases are automatically upgraded to Postgres 15.
You will receive three notifications via email before 27th November notifying you about the deprecation of Postgres 12, as well as the deprecation of IPv4 and PgBouncer.
With IPv4 addresses becoming increasingly scarce and cloud providers starting to charge for them, we won't be assigning IPv4 addresses to Supabase projects from January 15th 2024. db.projectref.supabase.co will start resolving to an IPv6 address instead. If you plan on connecting to your database directly, you must ensure that your network can communicate over IPv6. Supavisor will continue to return IPv4 addresses, so you can update your applications to connect to Supavisor instead.
There will be a few minutes of downtime during this migration.
We recently announced Supavisor, our new connection pooler. Supavisor is a direct replacement for PgBouncer. Using our own pooler will let us do things like load balancing queries across read replicas, query results caching, and a lot more.
Supavisor is now enabled for all projects created on or after Wednesday September 27th 2023. All existing projects will have Supavisor enabled by October 15th 2023.
Supavisor does not currently support Network Restrictions. Network restrictions support will be enabled from 24th January 2024. If you are blocked on the migration because of this, please reach out to support and we will extend the deadline for your project.
You don’t need to change anything in your application, except for the URL. The pooler connection string is available in the database settings in your dashboard.
PgBouncer will be available to use alongside Supavisor until January 31st 2024.
The full timeline is:
27 September 2023: Supavisor is available for all new projects.
15 October 2023: Supavisor will be available for all projects, including existing projects. We will notify you via email when it is enabled for your project. PgBouncer is officially deprecated after this date.
26th January 2024 (previously 15th January 2024): You will need to start using Supavisor before then.
29th January 2024: Your Supabase database domain (db.projectref.supabase.co) will start resolving to IPv6 addresses. PgBouncer will be removed. Projects will be migrated over starting this day. No changes are required if your network supports communicating via IPv6. If it doesn't, update your applications to use Supavisor which will continue to return IPv4 addresses.
You will receive deprecation notices throughout November, December, and January.
Can I pay for an IPv4 address to directly access the database via IPv4 instead of going through Supavisor?#
You can purchase the IPv4 add-on for $4/project on the project add-on page here. PgBouncer will still be removed for users with the IPv4 add-on.
Can I use PgBouncer and Supavisor at the same time?#
While we are providing the ability to use PgBouncer or Supavisor during this migration, you cannot use both at the same time. With the default configuration, using both will exhaust your database connections, because each will try to spin up its own connection pool.
The solution is to temporarily increase your database's connection limit with a custom Postgres config to accommodate both connection pools.
If the URL you use to connect to your Supabase Database looks like this, you're using the API, and no changes are necessary:
https://[YOUR-PROJECT-ID].supabase.co
If the URL you use to connect looks like either of these options, you're already using Supavisor, and no further changes are necessary:
postgres://[db-user]:[db-password]@aws-0-[aws-region].pooler.supabase.com:6543/[db-name]?options=reference%3D[project-ref] or postgres://[db-user].[project-ref]:[db-password]@aws-0-[aws-region].pooler.supabase.com:6543/[db-name]
If the URL you use to connect looks like this, you are using PgBouncer, and you need to upgrade (notice port 6543):
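postgres://[db-user]:[db-password]@db.[project-ref].supabase.co:6543/[db-name]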
If the URL you use to connect looks like this, you are connecting directly, and will either need to be able to connect via IPv6, OR you will need to update to the Supavisor URL:
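postgres://[db-user]:[db-password]@db.[project-ref].supabase.co:5432/[db-name]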
How will I know if my project has been migrated to IPv6?#
In the database settings page, when connection pooling is disabled, the label reads "Will resolve to IPv6" if your project has not been migrated. If your project has been migrated to IPv6, it reads "Resolves to IPv6".
What are the errors that I might see when connecting to the database if my network doesn't support IPv6?#
The error thrown will depend on how you are connecting to the database. Here are some examples of error messages you might see:
(dial tcp [2001:db8:3333:4444:5555:6666:7777:8888]:5432: connect: no route to host)
connect to db.example.supabase.co (2001:db8:3333:4444:5555:6666:7777:8888) port 5432 (tcp) failed: Network is unreachable
could not translate host name "db.example.supabase.co" to address: nodename nor servname provided, or not known
Error: P1001: Can't reach database server at db.example.supabase.co:5432
(2001:db8:3333:4444:5555:6666:7777:8888), port 5432 failed: could not create socket: Address family not supported by protocol
Note that these errors may manifest in cases other than your client network not supporting IPv6, but if you run into these errors after your project was migrated, it is likely that it is due to IPv6 support.
How will I know if PgBouncer has been removed from my project?#
The database settings page does not show PgBouncer connection settings. If you see a warning label called PgBouncer pending removal, it means that PgBouncer has not been removed from your project. If you see no such label, PgBouncer has already been removed from your project.
Prepared statements are supported with session mode. You can change your pool mode to session in your dashboard.
You can also use a session mode pool with your Supavisor pooler URL and port 5432 (instead of 6543). If you need to run something using prepared statements while your production application uses transaction mode, you can use this port to do that.
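For example, using the same placeholder format as above with the session mode port:
postgres://[db-user].[project-ref]:[db-password]@aws-0-[aws-region].pooler.supabase.com:5432/[db-name]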
Initial support for prepared statements in transaction mode has landed, but some bugs were found and should be fixed shortly.
If you are using Prisma, please check out our updated Prisma Guide for instructions on how to configure your connections for both querying and migrations.
The environment variables POSTGRES_URL and POSTGRES_PRISMA_URL point to Supavisor and POSTGRES_URL_NON_POOLING points to Supavisor in session mode. Redeploy your Vercel application to pick up the latest environment variables. This is required since Vercel does not support IPv6.
How do I use direct database connections in my Vercel application instead of using the connection pooler?#
Enable the IPv4 add-on. Set the direct connection URL as an environment variable not managed by the Supabase integration. You can then use that environment variable in your application.
Do I need to make any changes if I am using the CLI?#
If you are using a version before 1.136.3, please upgrade to a later version of the CLI and run supabase link. If you haven't run supabase link since 1st January 2024, please run it again after upgrading. This enables the CLI to communicate with the database from IPv4-only environments, because the communication happens via Supavisor. This change is required if you are using the CLI from an environment without IPv6 support, like GitHub Actions, or possibly from your home network.
Special considerations for .NET users using Npgsql#
You will need to add Pooling=false to your Supavisor connection string.
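As a sketch (host, credentials, and database are placeholders, using the same pooler host format as the connection strings above), such a connection string might look like:
User Id=[db-user].[project-ref];Password=[db-password];Server=aws-0-[aws-region].pooler.supabase.com;Port=6543;Database=[db-name];Pooling=false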
We are in the midst of transitioning all projects to IPv6. As part of this process, if your project is still assigned an IPv4 address, pg_upgrade will be temporarily disabled for your project until the transition is completed.
We’re fixing the billing system at Supabase - moving from “project-based” to “organization-based”. We should have started with this model, but I wasn’t wise enough to know that when we started. We need to make these changes to roll out Preview Environments / Branching. It also includes:
First, and most importantly - there is only one change that affects the free plan, and it's a good one for you: you now get an extra 1GB of egress.
| Usage Item | Old plan (per project) | New plan (org-based) |
| --- | --- | --- |
| Egress | 4GB (2GB Database + 2GB Storage) | 5GB across Database + Storage |
| Database Space | 500MB | 500MB |
| Storage Space | 1GB | 1GB |
| Monthly Active Users | 50K | 50K |
| Edge Function Invocations | 500K | 500K |
| Edge Function Count | 10 | 10 |
| Realtime Message Count | 2 million | 2 million |
| Realtime Peak Connections | 200 | 200 |
| Free Projects | 2 free projects | 2 free orgs (1 free database per org) |
On top of the extra 1GB of free egress, egress is now unified across your org, which means that if you aren't using Supabase Storage, you get even more Database Egress (5GB instead of the previous 2GB).
If you are currently running 2 free projects, however, this does require some work from you. Because we are now working at the org level instead of the project level, you will need to:
Create a new “Free org”
Transfer one of your free projects into the newly-created org
This should be done before the end of October, but don’t worry - we’ll give you frequent comms and clear instructions once the change has been rolled out (4th Sept).
This is a major change, and we've tried to design it in a way that's cheaper for everyone. If your bill has increased as a result of this change, that's not our intention. Please submit a Support ticket on the dashboard and we'll figure out a solution.
We welcome any questions/feedback about this change, but please keep this discussion focused only on this change! It's important for those who want to learn more or are confused. If you have something off-topic, please open a new discussion or join an existing one.
To better secure your Supabase server instances, we will be removing superuser access from the dashboard SQL Editor over the next 30 days. Existing projects with tables, functions, or other Postgres entities created via the dashboard SQL Editor require a one time migration to be run. This migration should take less than 10 seconds to run but since it modifies your existing schema, we will be rolling out this change over a buffer period to minimise breakages.
During the opt-in period, a notification will be delivered to all affected Supabase projects. The notification contains instructions to manually apply the migration. If you have separate staging and production Supabase projects, apply it on the staging project first to verify everything is working as expected.
If you only have one Supabase project, try to avoid hours of high application traffic when applying the migration to minimise potential downtime. If you notice elevated error rates or other unusual activity after migrating, follow the rollback instructions to revert the change. Both the apply now and rollback actions are idempotent. If you encounter any problems during migration or rollback, please contact support@supabase.io for further assistance.
For paused projects, applying now will schedule the migration script to run the next time your project is restored. We suggest that you restore your project immediately to verify that everything works, or roll back if necessary. If your project is in any other state, please contact support@supabase.io to bring it to an active healthy state before continuing with the migration.
After successfully applying the migration, all entities you have created from the dashboard's SQL Editor will be owned by a temporary role. These entities are currently owned by supabase_admin role by default. You can check the current owner of all your schemas using the query below.
select *, nspowner::regrole::name from pg_namespace;
New entities created via the SQL Editor will also be owned by this temporary role. Since the temporary role is not a superuser, there are some restrictions with using the SQL Editor after migrating. If you are unsure whether those restrictions affect your project, please contact support@supabase.io for assistance.
After the opt-in period, you will receive another notification to drop the temporary role and reassign all entities owned by it to the postgres role. The SQL Editor will also default to using the postgres role. New projects created after 5 Nov will also default to using the postgres role. Since this change is irreversible, it is crucial that you run the migration during the opt-in period to verify that your project continues to work.
For any projects not migrated by the 5 Nov deadline, we will run the migration on your behalf to reassign all entities to the postgres role. No temporary role will be available for rollback. If you notice any breakages then, please do not hesitate to contact support@supabase.io.
You will no longer be able to create, alter, or drop event triggers directly through SQL statements.
Event triggers can only be created by superusers and you will not be able to manage them after the migration. One exception is Postgres extensions. When toggling extensions, they can still create or drop event triggers as needed.
If you are currently using custom event triggers, please contact support@supabase.io to explain your use case. We will try our best to figure out an alternative for your project. Note that regular triggers are unaffected by the migration.
You will no longer be able to create, alter, or drop tables, views, functions, triggers, sequences, and other entities in Supabase managed schemas, including extensions, graphql, realtime, and supabase_functions.
Supabase managed schemas are used to support platform features for all projects. Entities in these schemas are owned by supabase_admin role to prevent users from accidentally overriding them and breaking platform features. Unless explicitly granted, non-superuser roles cannot manage entities in Supabase managed schemas after the migration.
If you think modifying these schemas is necessary for your project, please contact support@supabase.io to explain your use case. We will try our best to accommodate your use case using alternative suggestions.
Entities in auth and storage schemas have been explicitly granted all permissions to postgres role. Therefore, you can still manage these schemas directly through SQL statements. If you have existing triggers created on these schemas, they will continue to work as well.
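For reference, a common pattern that continues to work is a trigger on auth.users that keeps a public table in sync. A minimal sketch (the profiles table and function name are illustrative):

```sql
-- Illustrative: create a matching public.profiles row for each new auth user
create or replace function public.handle_new_user()
returns trigger as $$
begin
  insert into public.profiles (id, email)
  values (new.id, new.email);
  return new;
end;
$$ language plpgsql security definer;

create trigger on_auth_user_created
  after insert on auth.users
  for each row execute function public.handle_new_user();
```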
All user defined schemas and the public schema will be owned by postgres role after the migration. Therefore, you should be able to manage entities in those schemas directly through SQL statements. One exception is if you have manually changed the owner of specific schemas before. In that case, you can either reassign their owner to postgres role manually or leave them untouched. Please reach out to support@supabase.io if you are unsure what to do.
You will no longer be able to create or drop RLS policies on entities in Supabase managed schemas.
RLS policies can only be created or dropped by entity owners or superusers. After the migration, you can’t manage RLS policies in Supabase managed schemas through the SQL Editor. If you need to expose certain tables in realtime schema to anon or authenticated users, one way is to create a view in the public schema using the postgres role.
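A minimal sketch of that approach (the realtime.example_table name is a placeholder for whichever table you want to expose, and the view name is illustrative):

```sql
-- Run as the postgres role: expose a realtime table through a public view
create view public.realtime_example as
  select * from realtime.example_table;

-- Grant read access to the roles that should see it
grant select on public.realtime_example to anon, authenticated;
```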
RLS policies in auth, storage, public, and all user defined schemas can still be managed directly through SQL statements. Unless you have policies that check for supabase_admin role, all existing RLS policies should be unaffected by the migration.
You will no longer be able to alter role attributes of replication, superuser, and reserved roles directly through the SQL Editor.
Only superuser roles can alter attributes of other superuser and replication roles. Reserved roles include anon, authenticated, postgres, service_role, etc. After the migration, you will not be able to change attributes of these roles directly through SQL statements. You can still alter attributes of other roles created by yourself, except to elevate those roles to superuser or replication.
Some common attributes that can’t be changed include password, login, and bypassrls. Here are some known workarounds:
To change your postgres role password, you can do so via the dashboard settings page.
If you need to run one-off scripts that bypass RLS, you can use the provided service key.
If you are pushing schema migrations from CLI, superuser privilege is no longer required as all entities are owned by postgres role after the migration.
A number of users reported the following error when accessing the dashboard while restoring a paused project.
Error: [500] failed to get pg.tables: password authentication failed for user "postgres_temporary_object_holder"
It is due to a bug in the restore script that we have since fixed. If you are still experiencing this issue, you may pause and restore the project again to fix it manually. If that fails, please don't hesitate to contact support@supabase.io.
The PostgREST release notes document some changes to the way GUC variables are handled here.
Supabase has created a config flag in the Dashboard to ensure that this will not be a breaking change. These changes are required before you can upgrade to PostgreSQL 14+, or use Realtime RLS.
Supabase has already updated all the default auth functions (auth.uid(), auth.role(), and auth.email()); however, we have no way of updating functions which we have not written ourselves.
Any project that has custom auth functions, or generally any function that uses the legacy GUC naming convention to access JWT claims (e.g. current_setting('request.jwt.claim.XXX', true)), is affected.
This change is required for PostgreSQL 14+.
This change is required for Realtime row level security.
You need to update all functions that use the legacy GUC naming convention (current_setting('request.jwt.claim.XXX', true)) to use the new convention (current_setting('request.jwt.claims', true)::json->>'XXX').
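For illustration, here is what that migration looks like for a hypothetical helper function that reads a tenant_id claim (the function and claim names are made up):

```sql
-- Before: legacy convention, one GUC per claim (no longer supported)
create or replace function public.current_tenant_id()
returns text
language sql stable
as $$
  select current_setting('request.jwt.claim.tenant_id', true);
$$;

-- After: new convention, all claims live in a single JSON GUC
create or replace function public.current_tenant_id()
returns text
language sql stable
as $$
  select current_setting('request.jwt.claims', true)::json->>'tenant_id';
$$;
```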
Three new Auth providers, multi-schema support, and we're gearing up for another Launch Week.
Let's dive into what's been happening at Supabase during the month of October.
We're warming up for another Launch Week! Last time was "Launch Week II: the SQL". We're going to need another month to come up with a good pun again, so we'll aim for November.
We raised our Series A.
We'll use the funds to do more of the same - ship features and hire open source developers.
We'll release more details soon. Read more on TechCrunch.
If you've been waiting for Row Level Security to land in Postgres subscriptions,
then you're going to love our new repo:
Write Ahead Log Realtime Unified Security (WALRUS).
The name might be a bit forced, but the security design is deliberate.
It's not in production yet, but we're making the repo public for comments using an
RFC process.
RLS can be a bit foreign for developers getting started with Postgres.
This video by @_dijonmusters demystifies it. If you find the video a useful medium for learning, consider subscribing to our channel.
Last December we moved from Alpha to Beta, with a focus on Security, Performance, and Reliability. After a couple of Launch Weeks pushing out new and sexy features, we have decided it's time to focus on these again.
By the time we're done, Supabase will be production-ready for all use cases.
Following the success of our first Launch Week in March, we finished July with "Launch Week II: The SQL".
The community has been sifting through a slew of bad puns and retro memes to discover the new feature announcements.
Your users can now log in with SMS based mobile auth! We have a Twilio integration (Guide Here) and will be adding more providers soon.
Other Auth updates include Twitch logins and the ability to generate invite, recovery, confirmation, and magic links via the API,
for people who want more control over the email templating flow. Read the blog post here.
We made some major new additions to the dashboard including usage statistics, a new project home, and tons of database insights.
Check the post here on what you get and how we built it.
You'll find us hanging out regularly in the #hangout channel.
We even "live-fixed" some production errors in there on Monday night (which occurred literally 1 hour before our first announcement of the week! Typical!).
We're fast approaching 1,500 members so come and join the action! discord.supabase.com
All new Supabase projects will be launched with PostgreSQL 13.3, and we're working on a migration path for old projects.
This gives you looooaads of new stuff out of the box.
We worked with our friends at PostgREST to make some huge improvements.
For those of you who don't know, every Supabase instance comes with a dedicated PostgREST server by default,
which provides the auto-generated CRUD API that we wrap with supabase-js.
We're running a week long hackathon starting NOW. There are some legit prizes, and you can win in a bunch of different categories.
Check the full instructions here on how to participate. Submissions close next Friday at midnight PST.
We made an announcement on the progress of Functions and even shipped a few preliminary components. Try them out and give us feedback as we continue to move towards this next major milestone.
Read the latest updates here.
Vercel just released their new integrations, which means you can now deploy a Postgres database on Supabase directly from your Vercel account.
Check it out! vercel.com/integrations/supabase
Building a community? There's almost no better tool than Discord (we're even trialling it ourselves).
If you're building a community product, Discord logins are the perfect option.
Want to share all your favourite memes? Now it's even easier with Public Storage Buckets. Simply mark a bucket as
"Public" and the content will be accessible without a login.
When things go wrong, sometimes the best thing you can do is reboot. We released a restart button in the Dashboard,
the first of many debugging tools we'll be releasing over the next few months.
This month was a "gardening" month for Supabase. The team focused on stability, security, and community support.
Check out what we were working on below, as well as some incredible Community contributions.
We're a developer tool, which means that Dark Mode is extremely popular.
While Dark Mode is great, for some people it's not an option. Dark Mode is difficult to use for developers with astigmatism,
or even for those working in brightly-lit environments.
So today we're shipping Light Mode. Access it in the settings of your Dashboard.
We open-sourced a server which keeps any Postgres database in sync with Stripe.
This is experimental only. We're evaluating other tools such as Singer,
which provide a more general solution (but are less "realtime"), and we're opening it up here to gather feedback.
It looks like @burggraf2 got tired of waiting for us to ship Functions, and decided to
build a whole JS ecosystem within his Supabase database. If you want to write PG functions in JS, import remote libraries
from the web, and console log to your browser, check out this SupaScript repo.
You might have noticed our Dashboard slowly changing (improving), as we migrate the components out to our open source UI Library. This progression is an important step towards offering a UI for Local Development and Self Hosting.
We're also working on our Workflows engine. This is quite a large task, but we're making progress and aiming to ship sometime in July.
Need to store images, audio, and video clips? Well now you can do it on Supabase Storage. It's backed by S3 and our new OSS storage API written in Fastify and TypeScript. Read the full blog post.
The Supabase API already handles Connection Pooling, but if you're connecting to your database directly (for example, with Prisma) we now bundle PgBouncer. Read the full blog post.
We open sourced our internal UI component library, so that anyone can use and contribute to the Supabase aesthetic. It lives at ui.supabase.io. It was also the #1 Product of the Day on Product Hunt.
Now you can run Supabase locally in the terminal with supabase start. We have done some preliminary work on diff-based schema migrations, and added some new tooling for self-hosting Supabase with Docker. Blog post here.
Thanks to a community contribution (@_mateomorris and @Beamanator), Supabase Auth now includes OAuth scopes. These allow you to request elevated access during login. For example, you may want to request access to a list of repositories when users log in with GitHub. Check out the Documentation.
New year, new features. We've been busy at Supabase during January, and our community has been even busier. Here are a few things you'll find interesting.
Anyone who has worked with Firebase long enough has become frustrated by the lack of count functionality. This isn't a problem with PostgreSQL! Our libraries now support PostgREST's exact, planned, and estimated counts. A massive thanks to @dshukertjr for adding support for this to our client library.
We enabled 2 new Auth providers - Facebook and Azure. Thanks to @Levet for the Azure plugin, and once again to Netlify's amazing work with GoTrue to implement Facebook.
In case our Auth endpoints aren't easy enough already, we've built a React Auth Widget for you to drop into your app and to get up-and-running in minutes.
Performance: We migrated all of our subdomains to Route53, implementing custom Let's Encrypt certs for your APIs. As a result, our read benchmarks are measuring 12% faster.
Performance: We upgraded your databases to the new GP3 storage for faster and more consistent throughput.
Last month we announced an improved SQL Editor, and this month we've taken it even further. The SQL Editor is now a full Monaco editor, like you'd find in VS Code. Build your database directly from the browser.
We're now 8 months into building Supabase. We're focused on performance, stability, and reliability but that hasn't prevented us from shipping some great features.
In the lead-up to our Beta launch, we've released supabase-js version 1.0, and it comes with some major Developer Experience improvements. We received a lot of feedback from the community and we've incorporated it into our client libraries for the 1.0 release.
Although it was only intended to be a temporary feature, the SQL Editor has become one of the most useful features of Supabase. This month we decided to give it some attention, adding Tabs and making it full-screen. This is the first of many updates; we've got some exciting things planned for the SQL Editor.
For the heavy table editor users, we've gone ahead and added a bunch of key commands and keyboard shortcuts so you can zip around and manipulate your tables faster than ever.
One of the most requested Auth features was the ability to send magic links that your users can use to log in. You can use this with new or existing users, alongside passwords or standalone.