# Configure S3 Storage

Enable the S3-compatible client endpoint and set up an S3 backend for self-hosted Supabase Storage.

Self-hosted Supabase Storage has two independent S3-related features:

- **S3 protocol endpoint** - an S3-compatible API that Storage exposes at `/storage/v1/s3`. This allows standard S3 tools like `rclone` and the AWS CLI to interact with your Storage instance.

- **S3 backend** - where Storage keeps data. By default, files are stored on the local filesystem. You can switch to an S3-compatible service (AWS S3, MinIO, etc.) for durability, scalability, or to use existing infrastructure.

You can configure either feature independently. For example, you can enable the S3 protocol endpoint to use `rclone` while keeping the default file-based storage, or switch to an S3 backend without enabling the S3 protocol endpoint.

## Enable the S3 protocol endpoint

The S3 protocol endpoint at `/storage/v1/s3` allows standard S3 clients to interact with your self-hosted Storage instance. It works with any storage backend, including the default file-based storage - you do not need to configure an S3 backend first. The Supabase REST API and SDK do not use the S3 protocol.

Check that `REGION`, `S3_PROTOCOL_ACCESS_KEY_ID`, and `S3_PROTOCOL_ACCESS_KEY_SECRET` are properly configured in your `.env` file. Read more about secrets and passwords in [Configuring and securing Supabase](/docs/guides/self-hosting/docker#configuring-and-securing-supabase).

```yaml name=docker-compose.yml
storage:
  environment:
    # ... existing variables ...
    REGION: ${REGION}
    S3_PROTOCOL_ACCESS_KEY_ID: ${S3_PROTOCOL_ACCESS_KEY_ID}
    S3_PROTOCOL_ACCESS_KEY_SECRET: ${S3_PROTOCOL_ACCESS_KEY_SECRET}
```
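
For reference, the corresponding `.env` entries look like the following. The values shown are placeholders - generate your own credentials:

```sh
# .env - placeholder values, generate your own credentials
REGION=stub
S3_PROTOCOL_ACCESS_KEY_ID=your-access-key-id
S3_PROTOCOL_ACCESS_KEY_SECRET=your-secret-access-key
```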

### Test with the AWS CLI

```sh
( set -a && \
source .env > /dev/null 2>&1 && \
echo "" && \
AWS_ACCESS_KEY_ID=$S3_PROTOCOL_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$S3_PROTOCOL_ACCESS_KEY_SECRET \
aws s3 ls \
--endpoint-url http://localhost:8000/storage/v1/s3 \
--region $REGION \
s3://your-storage-bucket )
```
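
To verify writes as well as reads, the same pattern works for uploads. The file `hello.txt` and the bucket name are placeholders:

```sh
( set -a && \
source .env > /dev/null 2>&1 && \
AWS_ACCESS_KEY_ID=$S3_PROTOCOL_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$S3_PROTOCOL_ACCESS_KEY_SECRET \
aws s3 cp ./hello.txt \
--endpoint-url http://localhost:8000/storage/v1/s3 \
--region $REGION \
s3://your-storage-bucket/hello.txt )
```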

### Test with rclone

```sh
( set -a && \
source .env > /dev/null 2>&1 && \
echo "" && \
rclone ls \
--s3-endpoint http://localhost:8000/storage/v1/s3 \
--s3-region $REGION \
--s3-provider Other \
--s3-access-key-id "$S3_PROTOCOL_ACCESS_KEY_ID" \
--s3-secret-access-key "$S3_PROTOCOL_ACCESS_KEY_SECRET" \
:s3:your-storage-bucket )
```

Use `aws configure` and `rclone config` for persistent configuration.
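
For example, a persistent rclone remote can be created non-interactively. The remote name `supabase` is arbitrary:

```sh
( set -a && \
source .env > /dev/null 2>&1 && \
rclone config create supabase s3 \
provider Other \
access_key_id "$S3_PROTOCOL_ACCESS_KEY_ID" \
secret_access_key "$S3_PROTOCOL_ACCESS_KEY_SECRET" \
region "$REGION" \
endpoint http://localhost:8000/storage/v1/s3 )

# Then list a bucket through the saved remote:
rclone ls supabase:your-storage-bucket
```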

## Configure an S3 backend

The following environment variables configure an S3 backend for Storage in `docker-compose.yml`:

```yaml name=docker-compose.yml
storage:
  environment:
    # ... existing variables ...
    STORAGE_BACKEND: s3
    GLOBAL_S3_BUCKET: your-s3-bucket-or-dirname
    GLOBAL_S3_ENDPOINT: https://your-s3-endpoint
    GLOBAL_S3_PROTOCOL: https
    GLOBAL_S3_FORCE_PATH_STYLE: 'true'
    AWS_ACCESS_KEY_ID: your-access-key-id
    AWS_SECRET_ACCESS_KEY: your-secret-access-key
    REGION: your-region
```

Depending on your setup, you may need to adjust these values - for example, to use a local S3-compatible service like RustFS or MinIO, or a cloud provider like AWS.
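
After changing these variables, recreate the storage container so they take effect:

```sh
# Recreate only the storage service with the updated environment
docker compose up -d storage
```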

### Using RustFS

An overlay configuration, `docker-compose.rustfs.yml`, can be added to run a RustFS container that provides an S3-compatible API for the Storage backend:

```sh
docker compose -f docker-compose.yml -f docker-compose.rustfs.yml up -d
```

Make sure to review the Storage section in your `.env` file for related configuration options.
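
To confirm the overlay started correctly, check the container status (assuming the overlay names the service `rustfs`):

```sh
docker compose -f docker-compose.yml -f docker-compose.rustfs.yml ps rustfs
```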

### Using MinIO

MinIO no longer publishes open source Docker images or maintains its open source repository. The MinIO configuration is provided for backward compatibility and uses images built by [Chainguard](https://images.chainguard.dev/directory/image/minio/overview) (`cgr.dev/chainguard/minio`). For new deployments, consider using [RustFS](#using-rustfs) instead.

An overlay configuration, `docker-compose.s3.yml`, can be added to run a MinIO container that provides an S3-compatible API for the Storage backend:

```sh
docker compose -f docker-compose.yml -f docker-compose.s3.yml up -d
```

Make sure to review the Storage section in your `.env` file for related configuration options.

### Using AWS S3

Create an S3 bucket and an IAM user with access to it. Then configure the storage service:

```yaml name=docker-compose.yml
storage:
  environment:
    # ... existing variables ...
    STORAGE_BACKEND: s3
    GLOBAL_S3_BUCKET: your-aws-bucket-name
    AWS_ACCESS_KEY_ID: your-aws-access-key
    AWS_SECRET_ACCESS_KEY: your-aws-secret-key
    REGION: your-aws-region
```

For AWS S3, you do not need `GLOBAL_S3_ENDPOINT` or `GLOBAL_S3_FORCE_PATH_STYLE` - the Storage S3 client automatically resolves the endpoint from the region and uses virtual-hosted-style URLs, which is what AWS S3 expects. These variables are only needed for non-AWS S3-compatible providers.
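
If you prefer the CLI, bucket creation might look like the following sketch. The bucket name and region are placeholders, and the IAM user still needs permissions on the bucket:

```sh
# Create the bucket (omit --create-bucket-configuration for us-east-1)
aws s3api create-bucket \
--bucket your-aws-bucket-name \
--region your-aws-region \
--create-bucket-configuration LocationConstraint=your-aws-region
```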

### S3-compatible providers

Use the same configuration as the general S3 backend example above, but point `GLOBAL_S3_ENDPOINT` at your provider. For example, with Cloudflare R2:

```yaml name=docker-compose.yml
storage:
  environment:
    # ... existing variables ...
    STORAGE_BACKEND: s3
    GLOBAL_S3_BUCKET: your-bucket-name
    GLOBAL_S3_ENDPOINT: https://your-account-id.r2.cloudflarestorage.com
```

## Verify

- Open Studio and upload a file to a bucket. List the file using the AWS CLI or `rclone` to confirm the S3 endpoint works.
- If using an S3 backend: confirm the file appears in your S3 provider's console, or list the backend bucket directly, as shown below.
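
A direct listing of the backend bucket with your provider credentials (all values below are placeholders) should show the uploaded object:

```sh
AWS_ACCESS_KEY_ID=your-aws-access-key \
AWS_SECRET_ACCESS_KEY=your-aws-secret-key \
aws s3 ls s3://your-aws-bucket-name --recursive --region your-aws-region
```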

## Session token

You can authenticate to Supabase's S3-compatible storage using a user’s JWT to enforce Row-Level Security (RLS) across S3 operations. This is useful when initializing the S3 client on the server for a specific user session, or when using the client directly from the frontend.

All operations performed with a session token are scoped to the authenticated user, and any RLS policies defined in the storage schema will be applied.

To authenticate with S3 using a session token, provide the following credentials:

- **region:** value from the `REGION` environment variable in your `.env` file
- **access_key_id:** value from the `STORAGE_TENANT_ID` environment variable in your `.env` file
- **secret_access_key:** value from the `ANON_KEY` environment variable in your `.env` file
- **session_token:** a valid user JWT

Example using the `@aws-sdk/client-s3` library:

```javascript
import { S3Client } from '@aws-sdk/client-s3'
import { createClient } from '@supabase/supabase-js'

// An initialized supabase-js client is assumed; replace the placeholders with your project values.
const supabase = createClient('http://<your-domain>', 'your-anon-key')

const {
  data: { session },
} = await supabase.auth.getSession()

const client = new S3Client({
  forcePathStyle: true,
  region: 'stub', // REGION in .env
  endpoint: 'http://<your-domain>/storage/v1/s3', // Edit <your-domain>
  credentials: {
    accessKeyId: 'stub', // STORAGE_TENANT_ID in .env
    secretAccessKey: 'your-anon-key', // ANON_KEY in .env
    sessionToken: session.access_token,
  },
})
```

## Troubleshooting

### Signature mismatch errors

S3 clients sign requests using the access key ID and secret. If you see `SignatureDoesNotMatch`, verify that the `REGION`, `S3_PROTOCOL_ACCESS_KEY_ID`, and `S3_PROTOCOL_ACCESS_KEY_SECRET` values in your `.env` file match what your S3 client is using.

**If you use a custom reverse proxy**: with the [new API keys and auth](/docs/guides/self-hosting/self-hosted-auth-keys) configuration, requests to Storage should be forwarded to the API gateway for proper handling. If you are still using legacy API keys and proxy directly to Storage, make sure your proxy sets the `X-Forwarded-Prefix` header to `/storage/v1` so that signed URLs are generated correctly. In both cases, `STORAGE_PUBLIC_URL` must be [set properly](https://github.com/supabase/supabase/blob/a5f4a59e0e262394b345600e8d8a2241d6ac3b64/docker/docker-compose.yml#L369) in `docker-compose.yml`.

### TUS upload errors on Cloudflare R2

If resumable (TUS) uploads fail with HTTP 500 and a message about `x-amz-tagging`, add `TUS_ALLOW_S3_TAGS: "false"` to the storage service environment. Cloudflare R2 does not implement this S3 feature.
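
To confirm this is the failure you are hitting, the storage container logs are the quickest check (the grep pattern is only a suggestion):

```sh
docker compose logs storage --since 10m | grep -i 'x-amz-tagging'
```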

### Permission denied on uploads

Setting a bucket to "Public" only allows unauthenticated **downloads**. Uploads are always blocked unless you create an RLS policy on the `storage.objects` table. Go to **Storage** > **Files** > **Policies** in Studio and create a policy that allows `INSERT` for the appropriate roles.

### Upload URLs point to localhost

If uploads from a browser fail (CORS or mixed content errors), check that `API_EXTERNAL_URL` and `SUPABASE_PUBLIC_URL` in your `.env` file match your actual domain and protocol - not `http://localhost:8000`.
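
A quick way to confirm both values:

```sh
grep -E '^(API_EXTERNAL_URL|SUPABASE_PUBLIC_URL)=' .env
```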

## Additional resources

- [Storage repository `.env.sample`](https://github.com/supabase/storage/blob/master/.env.sample)
- [S3 Authentication](/docs/guides/storage/s3/authentication)