Platform

Migrate from Postgres to Supabase

Migrate your existing Postgres database to Supabase.


This is a guide for migrating your Postgres database to Supabase. Supabase is a robust, open-source platform that provides all the backend features developers need to build a product: a Postgres database, authentication, instant APIs, edge functions, real-time subscriptions, and storage. Postgres is the core of Supabase: for example, you can use row-level security, and there are more than 40 Postgres extensions available.

This guide demonstrates how to migrate your Postgres database to Supabase to get the most out of Postgres while gaining access to all the features you need to build a project.

This guide provides three methods for migrating your Postgres database to Supabase:

  1. Google Colab - Guided notebook with copy-paste workflow
  2. Manual Dump/Restore - CLI approach, works for all versions
  3. Logical Replication - Minimal downtime, requires Postgres 10+

Connection modes

Supabase provides the following connection modes:

  • Direct connection
  • Supavisor session mode
  • Supavisor transaction mode

Use Supavisor session mode for the database migration tasks (pg_dump/restore and logical replication).

Method 1: Google Colab (easiest)

Supabase provides a Google Colab migration notebook for a guided migration experience: Supabase Migration Colab Notebook

This is ideal if you prefer a step-by-step, copy-paste workflow with minimal setup.

Method 2: Manual dump/restore

This method works for all Postgres versions using CLI tools.

Prerequisites

Source Postgres requirements

  • Connection string with rights to run pg_dump
  • No special settings required for dump/restore
  • Network access from migration VM

Migration environment

  • Cloud VM running Ubuntu in the same region as source or target database
  • Postgres client tools matching your source database version
  • tmux for session persistence
  • Sufficient disk space (usually ~50% of source database size is enough, but varies case by case)
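As a quick sanity check, you can compare free disk space on the VM against the source database size before dumping. A minimal sketch, assuming the dump will be written to the current directory and using placeholder connection details:

# Free space where the dump will be written
df -h .

# Source database size, for comparison
psql -h <source_host> -p <source_port> -U <source_username> -d <source_database> \
  -c "select pg_size_pretty(pg_database_size(current_database()));"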

Pre-Migration checklist

-- Check database size
select pg_size_pretty(pg_database_size(current_database())) as size;

-- Check Postgres version
select version();

-- List installed extensions
select * from pg_extension order by extname;

-- Check active connections
select count(*) from pg_stat_activity;

Check available extensions in Supabase

-- Connect to your Supabase database and check available extensions
SELECT name, comment FROM pg_available_extensions ORDER BY name;

-- Compare with source database extensions
SELECT extname FROM pg_extension ORDER BY extname;

-- Install needed extensions
CREATE EXTENSION IF NOT EXISTS extension_name;

Step 1: Set up migration VM

Set up Ubuntu VM

# Install Postgres client and tools
sudo apt update
sudo apt install software-properties-common
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt update
sudo apt install postgresql-client-17 tmux htop iotop moreutils

# Start or attach to tmux session
tmux a -t migration || tmux new -s migration

Step 2: Prepare Supabase project

  1. Create a Supabase project at supabase.com/dashboard
  2. Note your database password
  3. Install required extensions via SQL or Dashboard
  4. Get your connection string:
    • Go to Project → Settings → Database → Connection Pooling
    • Select Session pooler (port 5432) and copy the connection string
    • Connection format: postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres
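Before migrating, it is worth confirming that the migration VM can actually reach the Supabase database through the Session Pooler. A minimal sketch, assuming the connection string copied above is exported as SUPABASE_DB_URL (the same variable is reused in the restore step later):

# Export the Session Pooler connection string copied from the dashboard
export SUPABASE_DB_URL="postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres"

# Verify connectivity and check the server version
psql "$SUPABASE_DB_URL" -c "select version();"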

Important Notes:

  • Users/roles are not migrated - You'll need to recreate roles and privileges after import (Supabase Roles Guide)
  • Row Level Security (RLS) status on tables is not migrated - You'll need to enable RLS for tables after migration.
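Because neither of these is carried over by pg_dump/pg_restore, plan to re-apply them once the import has finished. A minimal sketch, assuming a hypothetical application role app_user and a hypothetical table public.orders (substitute your own roles, grants, and tables), run with the SUPABASE_DB_URL exported above:

# Run against the Supabase database after the restore completes
psql "$SUPABASE_DB_URL" <<'SQL'
-- Recreate an application role and re-grant privileges (hypothetical names)
create role app_user with login password 'change_me';
grant usage on schema public to app_user;
grant select, insert, update, delete on all tables in schema public to app_user;

-- Re-enable Row Level Security on tables that had it enabled on the source
alter table public.orders enable row level security;
SQL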

Resource Requirements:

| Database Size | Recommended Compute | Recommended VM | Action Required |
|---|---|---|---|
| < 10 GB | Default | 2 vCPUs, 4 GB RAM | None |
| 10-100 GB | Default-Small | 4 vCPUs, 8 GB RAM | Consider compute upgrade |
| 100-500 GB | Large compute | 8 vCPUs, 16 GB RAM, NVMe | Upgrade compute before restore |
| 500 GB - 1 TB | XL compute | 16 vCPUs, 32 GB RAM, NVMe | Upgrade compute before restore |
| > 1 TB | Custom | Custom | Contact support first |

You can also temporarily increase compute size and/or disk IOPS and throughput via Settings → Compute and Disk for a faster restore; with more resources you can use a larger -j value for pg_restore.

Step 3: Create database dump

Set source database to read-only mode for production migration

If doing a maintenance window migration, prevent data changes:

-- Connect to source database and run:
ALTER DATABASE your_database_name SET default_transaction_read_only = true;

For testing without a maintenance window, skip this step but use lower -j values.

Dump the database

# Determine number of parallel jobs based on:
# - Source database CPU cores (don't saturate production)
# - VM CPU cores
# - For testing without maintenance window: use lower values to be gentle
# - For production with maintenance window: can use higher values
DUMP_JOBS=4 # Adjust based on your setup

# Check available cores on VM
nproc

# Create dump with progress logging
pg_dump \
  --host=<source_host> \
  --port=<source_port> \
  --username=<source_username> \
  --dbname=<source_database> \
  --jobs=$DUMP_JOBS \
  --format=directory \
  --no-owner \
  --no-privileges \
  --no-subscriptions \
  --verbose \
  --file=./db_dump 2>&1 | ts | tee -a dump.log

Notes about dump flags:

  • --no-owner --no-privileges: Applied at dump time to prevent Supabase user management conflicts. While these could be used in pg_restore instead, applying them during dump keeps the dump file cleaner and more portable.
  • --no-subscriptions: Logical replication subscriptions won't work in the target
  • The dump captures all data and schema but excludes ownership/privileges that would conflict with Supabase's managed environment
  • To only migrate a single database schema, add the --schema=PATTERN parameter to your pg_dump command.
  • To exclude a schema: --exclude-schema=PATTERN.
  • To only migrate a single table: --table=PATTERN.
  • To exclude a table: --exclude-table=PATTERN.

Run pg_dump --help for a full list of options.
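For example, using the filter flags above, a partial dump that migrates only the public schema while skipping one large table might look like this (a sketch; the schema and table names are placeholders):

pg_dump \
  --host=<source_host> \
  --port=<source_port> \
  --username=<source_username> \
  --dbname=<source_database> \
  --schema=public \
  --exclude-table='public.archived_events' \
  --format=directory \
  --no-owner \
  --no-privileges \
  --no-subscriptions \
  --file=./db_dump_public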

| Database Size | Testing -j (no maintenance window) | Production -j (with maintenance window) | Limiting Factor |
|---|---|---|---|
| < 10 GB | 2 | 4 | Source CPU |
| 10-100 GB | 2-4 | 8 | Source CPU |
| 100-500 GB | 4 | 16 | Disk IOPS |
| 500 GB - 1 TB | 4-8 | 16-32 | Disk IOPS + CPU |

Note: For testing without a maintenance window, use lower -j values to avoid impacting production performance.

Step 4: Restore to Supabase

Set connection and restore

# Set Supabase connection (Session Pooler on port 5432 or direct connection)
export SUPABASE_DB_URL="postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres"

# Determine restore parallelization based on your Supabase compute size:
# Free tier: 2 cores → use -j 2
# Small compute: 2 cores → use -j 2
# Medium compute: 4 cores → use -j 4
# Large compute: 8 cores → use -j 8
# XL compute: 16 cores → use -j 16
RESTORE_JOBS=8 # Adjust based on your Supabase compute size

# Restore the dump (parallel mode)
# Note: -j cannot be used with --single-transaction
pg_restore \
  --dbname="$SUPABASE_DB_URL" \
  --jobs=$RESTORE_JOBS \
  --format=directory \
  --no-owner \
  --no-privileges \
  --verbose \
  ./db_dump 2>&1 | ts | tee -a restore.log

If the restore reports errors, check the log to confirm that they are only extension-related; any other errors need to be investigated before proceeding.
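One way to do this is to scan the timestamped restore log produced above (a sketch, assuming the restore.log file from the previous step):

# List all errors from the restore log
grep -i "error" restore.log

# Errors that do not mention extensions deserve a closer look
grep -i "error" restore.log | grep -vi "extension"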

Step 5: Post-Migration tasks

Update statistics (important)

psql "$SUPABASE_DB_URL" -c "VACUUM VERBOSE ANALYZE;"

Verify migration

-- Check row counts
select schemaname, tablename, n_live_tup
from pg_stat_user_tables
order by n_live_tup desc
limit 20;

-- Verify data with application-specific queries

Re-enable writes on source (if keeping it)

ALTER DATABASE your_database_name SET default_transaction_read_only = false;

Migration time estimates

| Database Size | Dump Time | Restore Time | Total Time |
|---|---|---|---|
| 10 GB | ~5 min | ~10 min | ~15 min |
| 100 GB | ~30 min | ~45 min | ~1.5 hours |
| 500 GB | ~2 hours | ~3 hours | ~5 hours |
| 1 TB | ~4 hours | ~6 hours | ~10 hours |

Times vary based on hardware, network, and parallelization settings

Important notes

  1. Region proximity matters: VM should be in the same region as the source or target for best performance
  2. Downgrade migrations: While technically possible in some cases, they are strongly discouraged
  3. Testing without downtime: Use lower -j values for pg_dump to avoid impacting production
  4. For pg_restore: Full parallelization can be used, since the restore target is not yet serving production traffic
  5. Monitor resources: Watch CPU and disk I/O with htop and iotop (see the sketch after this list)
  6. Disk I/O: Often the bottleneck before network bandwidth
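A minimal monitoring sketch for the migration VM while a dump or restore is running, using the tools installed in Step 1 and the dump directory and log file names used above:

# CPU and memory per process
htop

# Processes currently doing disk I/O (requires root)
sudo iotop -o

# Watch the dump directory grow and follow the timestamped log
watch -n 30 'du -sh ./db_dump'
tail -f dump.log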

Method 3: Logical replication

This method allows migration with minimal downtime using Postgres's logical replication feature. Requires Postgres 10+ on both source and target.

When to use logical replication

  • You need minimal downtime (minutes instead of hours)
  • Source database is Postgres 10 or higher
  • You can configure logical replication on the source
  • Database has high write activity that can't be paused for long

Source Postgres prerequisites

Access & privileges

  • Connection string with rights to CREATE PUBLICATION and read tables
  • Superuser or replication privileges recommended

Required settings for logical replication

  • wal_level = logical
  • max_wal_senders ≥ 1
  • max_replication_slots ≥ 1
  • Sufficient max_connections (current + 1 for subscription)

Replica identity

Every table receiving UPDATE/DELETE must have a replica identity (typically a PRIMARY KEY). For tables without one:

ALTER TABLE schema.table_name REPLICA IDENTITY FULL;

Non-Replicated items

  • DDL changes (schema modifications)
  • Sequences (need manual sync)
  • Large Objects (LOBs) (use dump/restore or store in regular bytea columns)

Plan a schema freeze, synchronize sequences before cutover, and handle LOBs separately.

Step 1: Configure source database

Edit Postgres configuration files:

postgresql.conf

# Set WAL level to logical
wal_level = logical

# Ensure sufficient replication slots
max_replication_slots = 10

# Ensure sufficient WAL senders
max_wal_senders = 10

# Set appropriate max_connections (current connections + 1 for subscription)
max_connections = 200 # Adjust based on your needs

# Optional: Enable SSL for secure replication
ssl = on

# Allow connections from Supabase
listen_addresses = '*' # Or specific IP addresses
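If you have superuser SQL access to the source, the same settings can also be applied with ALTER SYSTEM instead of editing postgresql.conf by hand. A minimal sketch; note that all three parameters still require a server restart to take effect:

# Run against the source database as a superuser
psql -h <source_host> -p <source_port> -U <source_user> -d <source_database> <<'SQL'
alter system set wal_level = 'logical';
alter system set max_wal_senders = 10;
alter system set max_replication_slots = 10;
SQL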

pg_hba.conf

# Allow replication connections from Supabase
# Replace <supabase_ip_range> with actual Supabase IP range
host     replication  all  <supabase_ip_range>  md5
host     all          all  <supabase_ip_range>  md5

# With SSL:
hostssl  replication  all  <supabase_ip_range>  md5
hostssl  all          all  <supabase_ip_range>  md5

Restart Postgres:

sudo systemctl restart postgresql
sudo systemctl status postgresql

Step 2: Verify configuration

-- Should return 'logical'
SHOW wal_level;

-- Check other parameters
SHOW max_replication_slots;
SHOW max_wal_senders;

-- Check current connections
SELECT count(*) FROM pg_stat_activity;

Step 3: Check and set replica identity

-- Find tables without primary keys
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint pk ON pk.conrelid = c.oid AND pk.contype = 'p'
WHERE c.relkind = 'r'
  AND pk.oid IS NULL
  AND n.nspname NOT IN ('pg_catalog','information_schema');

-- For tables without a primary key, set REPLICA IDENTITY FULL
ALTER TABLE my_schema.my_table REPLICA IDENTITY FULL;

Step 4: Export and restore schema only

# Set Supabase connection (Session Pooler on port 5432 or direct connection)
export SUPABASE_DB_URL="postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres"

# Export schema from source
pg_dump \
  -h <source_host> \
  -U <source_user> \
  -p <source_port> \
  -d <source_database> \
  --schema-only \
  --no-privileges \
  --no-subscriptions \
  --format=directory \
  -f ./schema_dump

# Restore schema to Supabase (use Session Pooler)
pg_restore \
  --dbname="$SUPABASE_DB_URL" \
  --format=directory \
  --schema-only \
  --no-privileges \
  --single-transaction \
  --verbose \
  ./schema_dump

Step 5: Create publication on source

-- Create publication for all tables
CREATE PUBLICATION supabase_migration FOR ALL TABLES;

-- Or for specific tables only (doesn't require superuser)
CREATE PUBLICATION supabase_migration
  FOR TABLE schema1.table1, schema1.table2, public.table3;

-- Verify publication was created
SELECT * FROM pg_publication;

Step 6: Create subscription on Supabase

Connect to your Supabase database:

-- Create subscription with SSL (recommended)
CREATE SUBSCRIPTION supabase_subscription
CONNECTION 'host=<source_host> port=<source_port> user=<source_user> password=<source_password> dbname=<source_database> sslmode=require'
PUBLICATION supabase_migration;

-- Or without SSL (if source doesn't support it)
CREATE SUBSCRIPTION supabase_subscription
CONNECTION 'host=<source_host> port=<source_port> user=<source_user> password=<source_password> dbname=<source_database> sslmode=disable'
PUBLICATION supabase_migration;

Step 7: Monitor replication status

-- On Supabase (subscriber) - check subscription status
select * from pg_subscription_rel;
-- srsubstate = 'r' means ready (synchronized)
-- srsubstate = 'i' means initializing
-- srsubstate = 'd' means data is being copied

-- Overall subscription status
select * from pg_stat_subscription;

-- On source database - check replication status
select * from pg_stat_replication;

-- Check replication lag
select slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as lag_size
from pg_replication_slots;

Wait until all tables show srsubstate = 'r' (ready) status.
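A compact way to watch overall progress is to group the per-table states (a sketch, assuming SUPABASE_DB_URL is set as in Step 4):

psql "$SUPABASE_DB_URL" -c \
  "select srsubstate, count(*) from pg_subscription_rel group by srsubstate order by srsubstate;"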

Step 8: Synchronize sequences

After initial data sync is complete, but BEFORE switching to Supabase:

# Set source to read-only
psql -h <source_host> -c "ALTER DATABASE <source_database> SET default_transaction_read_only = true;"

# Export sequences from source
pg_dump \
  -h <source_host> \
  -U <source_user> \
  -p <source_port> \
  -d <source_database> \
  --data-only \
  --table='*_seq' \
  --table='*_id_seq' > sequences.sql

# Import sequences to Supabase
psql "$SUPABASE_DB_URL" -f sequences.sql
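Alternatively, individual sequences can be advanced by hand. A minimal sketch, assuming a hypothetical table public.orders whose id column is backed by public.orders_id_seq:

# Run against the Supabase database after the table data is in sync
psql "$SUPABASE_DB_URL" -c \
  "select setval('public.orders_id_seq', (select coalesce(max(id), 1) from public.orders));"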

Step 9: Switch to Supabase

  1. Ensure replication lag is zero:

-- On Supabase
select * from pg_stat_subscription;
-- Check that latest_end_lsn is current

  2. Stop writes to the source database (if not already read-only)

  3. Drop subscription on Supabase:

DROP SUBSCRIPTION supabase_subscription;

  4. Update application connection strings to point to Supabase

  5. Verify application functionality
Step 10: Cleanup

On source database (after successful migration):

-- Remove publication
DROP PUBLICATION supabase_migration;

-- Check and remove any remaining replication slots
SELECT * FROM pg_replication_slots;
SELECT pg_drop_replication_slot('slot_name'); -- if any remain

-- The source database should remain read-only or be decommissioned
-- Do NOT re-enable writes to avoid a split-brain scenario!

Troubleshooting logical replication

| Issue | Solution |
|---|---|
| "could not connect to the publisher" | Check network connectivity, firewall rules, pg_hba.conf |
| "role does not exist" | Ensure the replication user exists on the source with the REPLICATION privilege |
| "publication does not exist" | Verify the publication name and that it was created successfully |
| Replication lag growing | Check network bandwidth and source database load; add more WAL senders |
| Tables stuck in 'i' state | Check for locks on source tables; verify table structures match |
| "out of replication slots" | Increase max_replication_slots in postgresql.conf |

Important limitations

  • DDL changes: Schema modifications are not replicated - freeze schema during migration
  • Sequences: Need manual synchronization before cutover
  • Large Objects (LOBs): Not replicated - use dump/restore or store in regular bytea columns
  • Custom types: May need special handling
  • Users and roles: Must be recreated manually on Supabase

For detailed restrictions, see Postgres Logical Replication Restrictions.

When to use which method

Use Dump/Restore when:

  • Downtime window is acceptable
  • Source is Postgres < 10
  • Simpler process preferred
  • Cannot configure logical replication on the source

Use Logical Replication when:

  • Minimal downtime required
  • Postgres 10+ on both sides
  • Can modify source configuration
  • Have replication privileges

Getting help