Copy Storage Objects from Platform

This guide walks you through copying storage objects from a managed Supabase platform project to a self-hosted instance using rclone's S3-to-S3 copy.

Before you begin

You need:

  • A working self-hosted Supabase instance with the S3 protocol endpoint enabled - see Configure S3 Storage
  • Your platform project's S3 credentials - generated from the S3 Configuration page
  • Matching buckets created on your self-hosted instance
  • rclone installed on the machine running the copy (a quick check follows this list)
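
To confirm rclone is installed and see which version you have:

rclone version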

Step 1: Get platform S3 credentials

In your managed Supabase project dashboard, go to Storage > S3 Configuration > Access keys. Generate a new access key pair and copy:

  • Endpoint: https://<project-ref>.supabase.co/storage/v1/s3
  • Region: your project's region (e.g., us-east-1)
  • Access Key ID and Secret access key
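
Before writing any configuration, you can sanity-check the key pair with rclone's connection-string syntax. This is a sketch using the placeholder values above; the single quotes around the endpoint keep its colons from confusing the parser:

rclone lsd ":s3,provider=Other,access_key_id=your-platform-access-key-id,secret_access_key=your-platform-secret-access-key,region=your-project-region,endpoint='https://your-project-ref.supabase.co/storage/v1/s3':"

A successful run lists your platform buckets.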

Step 2: Create buckets on self-hosted

Buckets must exist on the destination before you can copy objects into them. You can create them through the dashboard UI or with the SQL Editor.

To list your platform buckets, connect to your platform database and run:

select id, name, public from storage.buckets order by name;
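
For example, with psql and your project's direct connection string (a sketch; the password and project ref placeholders are yours to fill in):

psql "postgresql://postgres:your-db-password@db.your-project-ref.supabase.co:5432/postgres" \
  -c "select id, name, public from storage.buckets order by name;"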

Then create matching buckets on your self-hosted instance. Connect to your self-hosted database and run:

insert into storage.buckets (id, name, public)
values
('your-storage-bucket', 'your-storage-bucket', false)
on conflict (id) do nothing;

Repeat for each bucket, setting public to true or false as appropriate.
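
With many buckets, a small shell loop can mirror the listing instead. This is a sketch: $PLATFORM_DB_URL and $SELF_HOSTED_DB_URL are placeholder psql connection strings, and it assumes bucket ids contain no spaces or quotes:

# read "id public" pairs from the platform, insert each into the self-hosted instance
psql "$PLATFORM_DB_URL" -At -F' ' -c "select id, public from storage.buckets" |
while read -r id public; do
  psql "$SELF_HOSTED_DB_URL" -c \
    "insert into storage.buckets (id, name, public) values ('$id', '$id', '$public') on conflict (id) do nothing;"
done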

Step 3: Configure rclone

Create or edit your rclone configuration file (~/.config/rclone/rclone.conf):

[platform]
type = s3
provider = Other
access_key_id = your-platform-access-key-id
secret_access_key = your-platform-secret-access-key
endpoint = https://your-project-ref.supabase.co/storage/v1/s3
region = your-project-region

[self-hosted]
type = s3
provider = Other
access_key_id = your-self-hosted-access-key-id
secret_access_key = your-self-hosted-secret-access-key
endpoint = http://your-domain:8000/storage/v1/s3
region = your-self-hosted-region

Replace the credentials with your actual values. For the self-hosted remote, use the REGION, S3_PROTOCOL_ACCESS_KEY_ID, and S3_PROTOCOL_ACCESS_KEY_SECRET values you configured in Configure S3 Storage.
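
If you prefer the command line to editing the file, rclone config create can define the same remotes, using the same placeholder values as above. Note that secrets passed as arguments can end up in your shell history:

rclone config create platform s3 provider=Other \
  access_key_id=your-platform-access-key-id \
  secret_access_key=your-platform-secret-access-key \
  endpoint=https://your-project-ref.supabase.co/storage/v1/s3 \
  region=your-project-region

rclone config create self-hosted s3 provider=Other \
  access_key_id=your-self-hosted-access-key-id \
  secret_access_key=your-self-hosted-secret-access-key \
  endpoint=http://your-domain:8000/storage/v1/s3 \
  region=your-self-hosted-region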

Verify both remotes connect:

rclone lsd platform:
rclone lsd self-hosted:

Both commands should list your buckets.

Step 4: Copy objects

Copy a single bucket:

rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --progress
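
To preview the transfer without writing anything, do a first pass with --dry-run:

rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --dry-run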

To copy all buckets:

for bucket in $(rclone lsf platform: | tr -d '/'); do
  echo "Copying bucket: $bucket"
  rclone copy "platform:$bucket" "self-hosted:$bucket" --progress
done
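
For buckets with many objects, raising rclone's concurrency can speed up the copy: --transfers sets the number of parallel file transfers (default 4) and --checkers the number of parallel comparisons (default 8):

rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --progress --transfers 8 --checkers 16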

Verify

Compare object counts between source and destination:

rclone size platform:your-storage-bucket && \
rclone size self-hosted:your-storage-bucket
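
For a per-object comparison rather than totals, rclone check reports files that differ or are missing on either side. Hashes for multipart uploads may not be comparable across backends, so --size-only is a safe fallback:

rclone check platform:your-storage-bucket self-hosted:your-storage-bucket --size-only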

Open Studio on your self-hosted instance and browse the storage buckets to confirm files are accessible.

Troubleshooting

Signature errors

If you see SignatureDoesNotMatch when connecting to either remote:

  • Platform: Regenerate S3 access keys from Storage > S3 Configuration in your project dashboard. Ensure the endpoint URL includes /storage/v1/s3.
  • Self-hosted: Verify that REGION, S3_PROTOCOL_ACCESS_KEY_ID, and S3_PROTOCOL_ACCESS_KEY_SECRET in your .env file match your rclone config.

Bucket not found

If rclone reports that a bucket doesn't exist on the self-hosted side, create it first - see Step 2. The S3 protocol does not auto-create buckets on copy.

Timeouts on large files

For very large files, increase rclone's timeout:

rclone copy platform:your-storage-bucket self-hosted:your-storage-bucket --timeout 30m

Empty listing on platform

If rclone lsd platform: returns nothing, verify the endpoint URL ends with /storage/v1/s3 and that the S3 access keys have not expired. Regenerate them from the dashboard if needed.