Changelog

New updates and product improvements

We are changing how taxes are calculated on your Supabase invoices for your organizations.

What's changing

Within the next several weeks, applicable taxes (such as sales tax, VAT, or GST) will be included on your Supabase invoices based on your billing address. This change is part of our ongoing tax compliance efforts.

When will this be changing?

The rollout of these changes will be incremental, beginning on May 1 and completing by June 30.

What this means for you

Tax will be assessed using the billing address associated with your organization, in accordance with tax law in that location. Please review your organization's billing settings to make sure your billing address and Tax ID (e.g. VAT ID, GSTIN, EIN, ABN), if applicable, are accurate and up to date. A valid Tax ID ensures you're correctly classified for tax purposes and may affect how tax is applied to your invoices.

If you have any questions, please check our FAQ for more information.

Experimental: Declarative Schema Management with pg-delta#

We've been working on a new way to handle database schema changes in the Supabase CLI, and it's now at a point where we want people to try it and tell us what they think.

TL;DR#

The CLI now ships with pg-delta, our own Postgres schema diffing engine, and an experimental declarative schema workflow on top of it. Basically: you describe your schema as SQL files, run a command, and the CLI generates the migration for you. No more writing migrations by hand if you don't want to.

Available today behind an experimental flag. This is still very much alpha: things will break, and coverage is not complete yet. We want your feedback.

Why we built our own#

Schema diffing in the CLI has relied on third-party tools (migra, pgAdmin), and each one comes with its own set of issues, missing coverage, and maintenance problems. On top of that, the whole migration workflow has always been imperative: you write the migration yourself, or you diff against a live database and hope it catches everything.

We looked at existing diffing tools but none of them really fit what we needed. Most of them carry a lot of legacy support for older Postgres versions, and we didn't want that tech debt. pg-delta only targets Postgres 15+, which lets us start clean and take advantage of newer catalog features without workarounds. More importantly, we want a tight integration with all of Supabase's features long term: all the extensions we ship, auth/storage schemas, RLS policies, the whole platform. Building on top of someone else's diffing engine means we'd always be fighting upstream to get Supabase-specific stuff supported. Owning the diffing layer gives us full control over that.

A lot of developers, especially those coming from ORMs or tools like Terraform, expect something more declarative: you describe what the schema should look like, and the tooling figures out the diff. We wanted to offer that.

What's new#

pg-delta as a diffing backend#

@supabase/pg-delta is our own schema diff engine, written in TypeScript from scratch. It lives in the supabase/pg-toolbelt monorepo alongside pg-topo (topological sorting for DDL statements). You can already use it as the diffing backend for supabase db diff:


supabase db diff --use-pg-delta

It also supports explicit source/target references now:


supabase db diff --from migrations --to local --use-pg-delta

Declarative schema workflow#

Everything goes through a single command: supabase db schema declarative sync.

One thing we cared about: we wanted to stay pure SQL. No custom DSL, no YAML, no config files describing your tables. Just .sql files. But we also wanted the freedom you get with code-based declaration, so you don't need to worry about the order of statements in your declarative schema. You can split and organize your files however makes sense to you; pg-delta will figure out the right execution order. If your users table references a type defined in another file, that's fine. Just write your schema the way you want it organized.
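For example, a schema split across two files can reference objects across the file boundary. The file names and objects below are hypothetical, but illustrate the idea: pg-delta works out that the enum must be created before the table that uses it, regardless of file order.

```sql
-- supabase/database/types.sql (hypothetical layout)
create type public.project_status as enum ('active', 'archived');

-- supabase/database/tables.sql -- references the enum defined in the other
-- file; pg-delta resolves the cross-file dependency and orders the DDL
create table public.projects (
  id bigint generated always as identity primary key,
  name text not null,
  status public.project_status not null default 'active'
);
```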

If you don't have a declarative schema yet, the CLI will guide you through the setup. It asks where to generate from (local database or a custom URL), exports your schema into SQL files under supabase/database/, and you're ready to go:


$ supabase db schema declarative sync

No declarative schema found. Generate a new one? [Y/n]

Generate declarative schema from:

> 1. Local database [generate from local Postgres]
  2. Custom database URL [enter a connection string]

Reset local database to match migrations first? (local data will be lost) [y/N] y
Creating shadow database...
Applying declarative schemas via pg-delta...
Applied 21 statements in 1 round(s).
Declarative schema written to supabase/database
Creating shadow database...
Initialising schema...
Seeding globals from roles.sql...
Applying migration 20260320160742_init.sql...
Applying migration 20260410150534_disable_pg_net.sql...
No schema changes found

From there, the SQL files in supabase/database/ are your source of truth. You edit them directly, for example adding a column to a table, and run sync again:


$ supabase db schema declarative sync

Creating shadow database...
Applying declarative schemas via pg-delta...
Applied 21 statements in 1 round(s).
Generated migration SQL:
ALTER TABLE public.projects ADD COLUMN deleted_at timestamp without time zone;

Enter a name for this migration (press Enter to keep 'declarative_sync'): add_deleted_at_to_projects
Created new migration at supabase/migrations/20260413181801_add_deleted_at_to_projects.sql
Apply this migration to local database? [Y/n]
Connecting to local database...
Applying migration 20260413181801_add_deleted_at_to_projects.sql...
Migration applied successfully.

The full workflow is: edit your schema files and run sync. The CLI will create a migration and ask you to apply it.

If your changes include destructive statements, the CLI will warn you before applying:


$ supabase db schema declarative sync

Creating shadow database...
Initialising schema...
Seeding globals from roles.sql...
Applying migration 20260320160742_init.sql...
Applying migration 20260410150534_disable_pg_net.sql...
Applying migration 20260413181801_add_deleted_at_to_projects.sql...
Generated migration SQL:
ALTER TABLE public.projects DROP COLUMN deleted_at;

Enter a name for this migration (press Enter to keep 'declarative_sync'): drop_deleted_at
Created new migration at supabase/migrations/20260413182237_drop_deleted_at.sql
Found drop statements in schema diff. Please double check if these are expected:
ALTER TABLE public.projects DROP COLUMN deleted_at
Apply this migration to local database? [Y/n]
Connecting to local database...
Applying migration 20260413182237_drop_deleted_at.sql...
Migration applied successfully.

It also uses catalog caching so subsequent runs don't redo all the shadow-database work.

You can also skip all the interactive prompts with flags, which is useful for CI or agentic use:


supabase db schema declarative sync --apply --name add_back_deleted_at

How to enable it#

Add this to your config.toml:


[experimental.pgdelta]
enabled = true

Or just pass --experimental on any of the declarative commands.

The declarative schema path defaults to database/ in your project root but you can change it:


[experimental.pgdelta]
enabled = true
declarative_schema_path = "my-schema/"

What we want to know#

This is experimental and we want to shape it based on real usage. Some things we're curious about:

  • Does the edit > sync loop work for you? Or do you need something different?
  • What's missing? Any Postgres objects or patterns that pg-delta doesn't handle well?
  • How does it compare to whatever diffing tools you've been using (migra, pgAdmin, Atlas, pgroll, etc.)?
  • Debugging: when something goes wrong, the CLI generates a debug bundle you can attach to issues. Is that helpful? What else would make debugging easier?

What's next#

To be clear: this is alpha. Even though pg-delta covers a lot of Postgres object types, it isn't battle-tested yet, and you will probably run into cases where the diff is wrong or incomplete. That's expected at this stage and exactly why we're putting it out now: we need real-world usage to find the gaps.

We're actively improving pg-delta coverage and the declarative workflow. The goal is to make this the default diffing engine and eventually move the declarative commands out of experimental. What you tell us here directly affects what we work on next.


Give it a try and let us know what you think. File issues on pg-toolbelt for diffing bugs and on cli for workflow stuff. Or just drop your thoughts here.

Here’s everything that happened with Supabase in the last month:

Multigres Operator is now open source#

The Multigres Kubernetes operator is now open source, with direct pod management, zero-downtime rolling upgrades, pgBackRest PITR backups, and OTel tracing.

GitHub | Twitter

GitHub integration on all plans#

GitHub integration is now available on all plans. Connect your repo on the free tier to deploy migrations from your main branch via CI/CD, no branching required.

GitHub

Supabase joins the Stripe Projects developer preview#

Supabase is a co-design partner in Stripe Projects, a new CLI tool that provisions and connects services like Supabase, Vercel, and Clerk from your terminal, with credentials synced to your .env automatically.

Blog

Supabase docs over SSH#

Browse all Supabase docs with standard Unix tools, or pipe them directly into Claude Code: ssh supabase.sh setup | claude.

Blog

Supabase Security Newsletter#

Subscribe to the Supabase security newsletter, sent only when there are important security updates.

Sign Up

Quick Product Announcements#

  • Studio now has "Fix with Assistant" buttons across touchpoints, with a dropdown to send the prompt to Claude or ChatGPT. Twitter
  • Browser tabs now show your exact navigation path so you can tell your tabs apart at a glance. Twitter
  • Supabase secret keys now have Push Protection on GitHub, blocking accidental commits before they land. Twitter
  • Schema Visualiser: relation lines are now clickable, tables and columns have context actions, and popovers appear between connected tables. Link

Made with Supabase#

  • Menugo - AI-powered QR code menu generator for restaurants
  • Gasindex - AI voice agents call thousands of businesses to track real-time gas prices in America, with Vision AI and crowdsourced submissions keeping data fresh
  • Guinndex - AI voice agents call thousands of businesses to track real-time Guinness prices in Ireland with Vision AI and crowdsourced submissions keeping data fresh
  • Festie - The unofficial Coachella 2026 companion
  • burn0 - Track Supabase costs per-request in real time

Community Highlights#


This discussion was created from the release Developer Update - April 2026.

On Friday, March 06 2026, 08:00:00 UTC, we introduced a new rate limit on recursive/nested Edge Functions calls on the hosted platform.

What gets rate-limited?#

Rate limiting applies to outbound fetch() calls made by your Edge Functions to other Edge Functions within your project. This includes:

  • Direct recursion: a function calling itself
  • Function chaining: Function A calling Function B
  • Circular calls: Function A calling Function B, which calls Function A
  • Fan-out patterns: a function calling multiple other functions concurrently

NOTE: Inbound requests to your Edge Functions and requests to external APIs (e.g., Stripe, OpenAI) are not subject to this rate limit. Only outbound calls from one Edge Function to another Edge Function are counted.

Rate limit budget#

Each request chain has a minimum budget of 5,000 requests per minute. In busier regions, this budget may be higher. All function-to-function calls within the same request chain share this budget.
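As a sketch of staying under that shared budget, one option is to cap concurrency when fanning out to other functions. The helper below is illustrative only (it is not a Supabase API): it runs async tasks in fixed-size batches rather than firing them all at once, which also spreads calls over time.

```typescript
// Illustrative helper (not a Supabase API): run async tasks in fixed-size
// batches so a fan-out does not fire every function-to-function call at once.
async function inBatches<T>(
  tasks: Array<() => Promise<T>>,
  batchSize: number,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < tasks.length; i += batchSize) {
    // Start at most `batchSize` tasks, wait for all of them, then continue.
    const batch = tasks.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map((task) => task()))));
  }
  return results;
}

// Usage inside an Edge Function might look like:
//   const responses = await inBatches(urls.map((u) => () => fetch(u)), 10);
```

Note that `Promise.all` fails fast, so a single failed call rejects the whole batch; wrap individual tasks if you need partial results.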

Why was this introduced now?#

Over the last few weeks, we observed increased response times for Edge Functions across multiple regions. Upon analysing traffic, we noticed that a small number of projects making recursive/nested function calls were adding significant strain on our servers. In particular, the incident we had on Feb 28th was caused by recursive function calls. We put this safeguard in place to ensure reasonable performance for all projects hosted on the platform.

Based on our metrics, the rate limit has so far affected only 0.4% of projects. If you were affected, we apologize for the inconvenience.

How can you avoid being rate-limited?#

It's still possible to use recursive/nested patterns within the rate limit. We've published a guide with several examples of how you can avoid rate limits.

Please contact support if you have any further questions about these rate limits.

Here’s everything that happened with Supabase in the last month:

Webinar: Ship Fast, Stay Safe#


Learn how top agencies balance velocity with control when using AI coding tools to build production applications on Supabase.

Register

Logs Drains on Pro#


Log Drains are now available on Pro. Send your Postgres, Auth, Storage, Edge Functions, and Realtime logs to Datadog, Grafana Loki, Sentry, Axiom, S3, or your own endpoint.

Blog Post

Docs now export to Markdown for AI tools#


Every guide on docs.supabase.com now has a "Copy as Markdown" option, plus direct links to ask ChatGPT and Claude. Copy any page into your agent or tool of choice with one click.

Docs

Storage: major performance and security overhaul#


Object listing is up to 14.8x faster on 60M+ row datasets. The prefixes table and its 6 triggers are gone, replaced with a hybrid skip-scan algorithm and cursor-based pagination. Security fixes close a path traversal vulnerability and prevent orphan objects from direct SQL deletes.

Blog Post

Edge Functions dashboard for self-hosted and CLI#


List and search your functions, view details, test directly from the dashboard, and download as .zip. No longer cloud-only.

Twitter

Multigres Postgres parser: 2.5x faster than the cgo alternative#


Built in 8 weeks using Claude Code. A comparable MySQL parser took over a year.

Blog Post Twitter

Quick Product Announcements#

  • ⚠️ Action Required: OpenAPI spec access via anon key deprecated March 11. The /rest/v1/ schema endpoint will only be accessible via service role or secret API keys after this date. Existing data API usage is unaffected. GitHub
  • Observability Overview page is rolling out. Twitter
  • Table filters now use AI. Describe what you want to find and the dashboard applies the right Postgres filters. Available under Feature Previews. Twitter GitHub
  • Queue table operations in the Table Editor. Stage inserts, edits, and deletes, review in Diff View, then commit with cmd + s. Twitter
  • Supabase plugin for Cursor is live. Twitter
  • Copy AI prompts from the dashboard. The same prompts powering the Supabase AI Assistant are now exportable for use in your local agent or tool of choice. Twitter
  • Inline SQL Editor saves SQL Snippets. Create and update snippets from Studio. Share via git in the supabase/snippets folder. GitHub
  • Command Menu gets Create and Search shortcuts. Hit cmd + k to create tables, RLS policies, Edge Functions, and Storage buckets — or jump directly to an existing one. Twitter
  • Read replicas now managed from the database replication page. Rolling out gradually — if you manage read replicas, look in Database settings.
  • Receipt downloads now available. Download receipts from the Invoices section in your org billing page.

Made with Supabase#

  • A purpose-built tool for running powerful affiliate and referral campaigns. Website
  • Supabase x YCombinator Hackathon winner: An AI Agent Personal Trainer Website
  • AI video production for professionals Website
  • Generate APA citation-ready references from a URL or DOI in seconds Website
  • SupaClaw - A basic version of OpenClaw but built entirely on Supabase built-in features GitHub

Community Highlights#

  • Supabase is sponsoring Postgres Conference 2026. Deepthi and Sugu are speaking on Multigres: horizontal scalability and intelligent sharding for Postgres. April 21-23 in San Jose. Use code 2026_SUPABASE20 for 20% off. Register
  • Codepup AI launched Supabase integration. Build a complete web app with a real Supabase backend — auto-generated, tested, and fixed by AI in under 30 minutes. Blog Post
  • BKND joins Supabase. Dennis Senn, creator of BKND, is joining to build a Lite offering for agentic workloads. BKND stays open source. Blog Post
  • Hydra joins Supabase. Joe Sciarrino, co-creator of Hydra, is joining to build Supabase Warehouse: an open data warehouse architecture for developers. Hydra co-developed pg_duckdb, which accelerates analytics queries on Postgres by over 600x. Blog Post
  • Getting Started with Supabase - Official Guide YouTube
  • Supabase on Observable Flutter - Episode YouTube
  • Inside Supabase Edge Functions: How Serverless Magic Actually Works Blog
  • Unlocking Scalable Backend Development: Why Supabase and Node.js are Revolutionizing Modern Applications in 2026 Blog
  • Adding GitHub, Google, and X Login to Next.js 15 with Supabase Auth Blog

What’s Changing?#

The Data API returns the full OpenAPI spec for any schema exposed to the Data API at the root path: https://[projectref].supabase.co/rest/v1/

Starting March 11, we will begin deprecating support for accessing this endpoint via the anon key. You will see the following error message if the endpoint is accessed via the anon key:


{"message":"Access to schema is forbidden","hint":"Accessing the schema via the Data API is only allowed using a secret API key."}

The endpoint remains accessible and the behaviour doesn't change if you are using the service role keys or the new secret API keys.

This does not affect normal Data API usage. Accessing data via /rest/v1/your_table or any client library will continue to work exactly as it does today.

Why?#

Today, the endpoint returns schema details (tables, columns, and types of an exposed schema) to anyone with the anon key. While this does not expose actual row data, it provides more information about your schema than most production applications need.

As part of an ongoing effort to tighten default security across Supabase, we are removing this exposure. In practice, the schema spec is mostly useful during development, where you can use the service_role key. There are few cases where you would need it client-side in production (less than 0.1% of our projects have made a request to this endpoint using the anon key in the last 24 hours), and we do not think supporting those use cases is worth the security tradeoff.

Am I Affected?#

You are affected if your app currently uses the anon key to fetch the Swagger spec.

You can check by reviewing requests to the /rest/v1/ endpoint via this log query.

If you see requests:

  1. Click into the event.
  2. Check whether the request is coming from the anon role.

What Should I Do?#

  1. Check your logs. Use the log query above to see if any of your application traffic relies on this endpoint with the anon key.
  2. Move affected calls server-side. If your application fetches the schema spec, move that call to a server-side context like Edge Functions, where you can safely use the service_role or the new secret API keys.
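A minimal sketch of that server-side call follows. The project URL, key value, and function name are placeholders, and the injectable `fetchFn` parameter exists only for testability; never ship a secret key to the browser.

```typescript
// Hypothetical sketch: fetch the OpenAPI spec server-side with a secret key.
// Project URL and key are placeholders; keep secret keys out of client code.
async function fetchOpenApiSpec(
  projectUrl: string,
  secretKey: string,
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<unknown> {
  const res = await fetchFn(`${projectUrl}/rest/v1/`, {
    headers: { apikey: secretKey, Authorization: `Bearer ${secretKey}` },
  });
  if (!res.ok) throw new Error(`Spec request failed with status ${res.status}`);
  return res.json();
}
```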

Rollout and Communications Timeline#

  • 17 Feb: Changelog published
  • 4 March: Change announced in monthly newsletter
  • 6 March: Email notification to customers observed using this endpoint
  • 11 March: Newly created projects cannot access endpoint with anon key
  • 24 March: Final email notification to customers observed using this endpoint
  • 8 April: All existing projects cannot access endpoint with anon key

We may push these dates back based on customer feedback, but we will not move them forward.

What’s Next?#

This is the first in a series of changes we are making to tighten default security settings across Supabase. Stay tuned for improvements to RLS usability, default table grants, and additional security features.

Update 23.03.2026#

We now have a new Management API endpoint for the CLI and third-party integrations that only requires the "Read-only project database access" permission. Details are here.

Here’s everything that happened with Supabase in the last month:


Connect your database to AWS resources over private networks. No public internet exposure. Traffic stays within AWS infrastructure using VPC Lattice.

Blog Post

Postgres Best Practices for AI Agents#


30 rules across 8 categories teaching AI agents to write correct Postgres code. Works with Claude Code, Cursor, GitHub Copilot, and other tools.

Blog Post

Query Ethereum directly from Postgres#


Use SQL to query real-time Ethereum blockchain data with the Infura wrapper.

View docs

Supabase is now an official Claude connector#


Connect your Supabase projects to Claude and manage your database by telling Claude what you need.

Blog Post

Vibe coding, done right#


Join us for a 45-minute online workshop with Bolt, where we'll walk through several success stories and best practices for introducing vibe coding safely into your company.

Register now

Free eBook: Using Postgres to its full extent#


Manning Publications and Supabase created a free eBook on using Postgres to its full extent—contemporary SQL techniques, full-text search, data types, and avoiding design mistakes that cost performance.

Download

Quick Product Announcements#

  • Action Required: pg_graphql disabled by default on new projects. Ships mid-February. New projects won't have pg_graphql enabled automatically. Existing projects with zero GraphQL requests will also have it disabled. If you use GraphQL, manually enable the extension. GitHub
  • TRAE SOLO integration with Supabase. Manage your database, storage, and auth inside ByteDance's AI IDE. Blog Post
  • Edge Functions now support drag-and-drop zip files. Upload entire function bundles to migrate between projects. Docs
  • SQL snippets save locally in Studio. Share queries via git with your team in supabase/snippets folder. GitHub
  • Supabase Assistant helps with database query performance. Get optimization suggestions directly in the dashboard. Twitter
  • postgrest-js hits 9M weekly downloads. Twitter

Made with Supabase#

  • Fanakin - Organize movies, shows, books, games, and more in one place. Create lists, share your profile, and get AI-powered recommendations based on your taste. Website
  • PolicyCheck - Free client-side security analysis for your Supabase project. See what's exposed through your public API with just your anon key or user authenticated mode. Website
  • Renamify - AI-powered bulk file renaming with 99% accuracy. Rename hundreds of photos instantly with intelligent, descriptive names. Built to make the web more accessible. Website

Community Highlights#

  • Supabase becomes a Tailwind partner. Announcement
  • New contributor site launched at supabase.com/contribute. Search issues across GitHub, Reddit, and Discord filtered by technology. Visit Site
  • SupaSquad community program now open. Join as a Contributor, Content Creator, Trusted Host, or Event Speaker. Get early access to features, partner deals, and direct team access. Apply Now

Queue table operations#


Hi everyone, super excited to showcase this new feature in the Table Editor: instead of saving values right away, you can batch your edits, review them in a rich diff, and then save them in a single transaction. The idea is to avoid unneeded edits and give you more confidence when editing from our UI.

To enable it, go to "Feature previews" > "Queue table operations" > "Enable Feature".

We're looking for feedback and any strange bugs you find, and we'd also love to know the following:

  • Would you prefer this as the default behaviour, or the existing UX as the default?
  • If we were to make this mode a configurable setting that you can toggle, would you prefer it to be a user-level preference or a project-level preference (e.g. configuring all users on a project to use this behaviour)?

In a forthcoming release within approximately 3 weeks, pg_graphql will be disabled by default on new Supabase projects.

This change aligns pg_graphql with our security-first approach of minimizing exposed API surface area by default. Services and extensions that expose schema metadata are now opt-in rather than opt-out, reducing the default attack surface for new projects.

Who is affected:

  • New projects will no longer have pg_graphql enabled automatically.
  • Existing projects older than 30 days with zero GraphQL requests will also have the extension disabled (where previously it was enabled by default). Existing projects with GraphQL requests will be unaffected.

Action required: If your application relies on GraphQL, you can enable pg_graphql manually via the Database Extensions page in your dashboard. You can also add create extension pg_graphql to your migrations if you wish to keep using it.
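For example, a migration file could keep the extension enabled; the file name below is hypothetical, and `if not exists` makes the statement safe to re-run:

```sql
-- supabase/migrations/20260215000000_enable_pg_graphql.sql (hypothetical name)
create extension if not exists pg_graphql;
```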

We continue to fully support pg_graphql for projects that need it. This change simply ensures it's an intentional choice rather than a default.

We'll follow up on this thread with links to relevant documents for actions required.

Saving SQL snippets now works in the local Studio! This has been a top community request for a long time, and we’re happy to finally release it.

You can save SQL snippets directly while working in the local Studio via the CLI. Snippets are stored in supabase/snippets, making them easy to commit to Git and share with your team working in the same repo—or ignore entirely with .gitignore if you prefer.

Your saved snippets automatically appear in Studio, just like they do in the hosted Dashboard.

This feature is available as of CLI v2.72.7. You can check your version by running supabase -v.

Build in a weekend, scale to millions