Telemetry

Log Drains


Log drains send all logs from the Supabase stack to one or more destinations of your choice. They are available only to customers on the Team and Enterprise Plans, and can be configured in the dashboard under Project Settings > Log Drains.

You can read about the initial announcement here and vote for your preferred drains in this discussion.

Supported destinations

The following table lists the supported destinations and the required setup configuration:

| Destination           | Transport Method | Configuration                     |
| --------------------- | ---------------- | --------------------------------- |
| Generic HTTP endpoint | HTTP             | URL, HTTP Version, Gzip, Headers  |
| DataDog               | HTTP             | API Key, Region                   |
| Loki                  | HTTP             | URL, Headers                      |
| Sentry                | HTTP             | DSN                               |

HTTP requests are batched with a max of 250 logs or 1 second intervals, whichever happens first. Logs are compressed via Gzip if the destination supports it.
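The batching rule above can be sketched as a simple buffer that flushes on whichever threshold is hit first. This is an illustrative sketch, not Supabase's implementation; the class and method names are assumptions.

```python
import time

class LogBatcher:
    """Illustrative sketch: flush at 250 logs or after 1 second,
    whichever happens first (mirrors the batching rule described above)."""
    MAX_LOGS = 250
    MAX_INTERVAL = 1.0  # seconds

    def __init__(self, send):
        self.send = send  # callable that delivers one batch of events
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.MAX_LOGS:
            self.flush()  # size threshold reached

    def tick(self):
        # Called periodically; flushes if the interval has elapsed.
        if self.buffer and time.monotonic() - self.last_flush >= self.MAX_INTERVAL:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()
```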

Generic HTTP endpoint

Logs are sent as a POST request with a JSON body. Both HTTP/1 and HTTP/2 protocols are supported. Custom headers can optionally be configured for all requests.

Note that requests are unsigned.

DataDog logs

Logs sent to DataDog have the name of the log source set on the service field of the event and the source set to Supabase. Logs are gzipped before they are sent to DataDog.

The payload message is a JSON string of the raw log event, prefixed with the event timestamp.
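The message format described above can be sketched as follows; the `timestamp` field name is an assumption for illustration.

```python
import json

def datadog_message(event: dict) -> str:
    """Sketch of the payload message described above: the raw log event
    serialized as JSON, prefixed with the event timestamp.
    The 'timestamp' key is assumed, not confirmed by the docs."""
    return f"{event['timestamp']} {json.dumps(event)}"
```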

To set up the DataDog log drain, generate a DataDog API key here and note the region of your DataDog site.

If you are interested in other log drains, upvote them here.

Loki

Logs sent to Loki are formatted to match the Loki HTTP API requirements. See the official Loki HTTP API documentation for more details.

Events are batched with a maximum of 250 events per request.

The log source and product name will be used as stream labels.

The event_message and timestamp fields will be dropped from the events to avoid duplicate data.
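Put together, a push request body shaped for the Loki HTTP API might look like the sketch below. The label names follow the description above, but the exact timestamp unit and metadata keys are assumptions; check them against your actual log events.

```python
def loki_push_payload(source: str, product: str, events: list[dict]) -> dict:
    """Sketch of a Loki /loki/api/v1/push body: log source and product
    become stream labels, remaining fields ride along as structured
    metadata, and event_message / timestamp are dropped from the
    metadata, as described above. Assumes microsecond timestamps."""
    values = []
    for event in events:
        ts_ns = str(int(event["timestamp"]) * 1000)  # Loki expects nanoseconds
        line = event["event_message"]
        metadata = {k: str(v) for k, v in event.items()
                    if k not in ("event_message", "timestamp")}
        values.append([ts_ns, line, metadata])
    return {"streams": [{"stream": {"source": source, "product": product},
                         "values": values}]}
```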

Loki must be configured to accept structured metadata, and it is advised to increase the default maximum number of structured metadata fields to at least 500 to accommodate large log event payloads of different products.
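A Loki `limits_config` fragment matching the advice above might look like this; the key names are taken from the Loki 3.x configuration reference and should be verified against your Loki version.

```yaml
# limits_config fragment (Loki 3.x) -- verify key names for your version.
limits_config:
  allow_structured_metadata: true
  # Default is 128; raise it to accommodate large log event payloads.
  max_structured_metadata_entries_count: 500
```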

Sentry

Logs are sent to Sentry as part of Sentry's Logging Product. Ingesting Supabase logs as Sentry errors is currently not supported.

To set up the Sentry log drain, do the following:

  1. Grab your DSN from your Sentry project settings. It should be of the format {PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}{PATH}/{PROJECT_ID}.
  2. Create the log drain in the Supabase dashboard.
  3. Watch for events in the Sentry Logs page.
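For illustration, a DSN of the format shown in step 1 can be split into its components with a standard URL parser. This is purely a sketch; the dashboard only needs the DSN string itself, and newer DSNs often omit the secret key.

```python
from urllib.parse import urlsplit

def parse_dsn(dsn: str) -> dict:
    """Split a Sentry DSN of the form
    {PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}{PATH}/{PROJECT_ID}
    into its components. Illustrative only."""
    parts = urlsplit(dsn)
    path, _, project_id = parts.path.rpartition("/")
    return {
        "protocol": parts.scheme,
        "public_key": parts.username,
        "secret_key": parts.password,  # may be None in modern DSNs
        "host": parts.hostname,
        "path": path,
        "project_id": project_id,
    }
```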

All fields from the log event are attached as attributes to the Sentry log, which can be used for filtering and grouping in the Sentry UI. There are no limits to cardinality or the number of attributes that can be attached to a log.

If you are self-hosting Sentry, Sentry Logs are only supported in self-hosted version 25.9.0 and later.

Pricing

For a detailed breakdown of how charges are calculated, refer to Manage Log Drain usage.