How to Interpret and Explore the Postgres Logs

Last edited: 1/18/2025

A complementary guide is available for the API logs.

Debugging and monitoring Postgres with logs

Logs provide insights into Postgres operations. They help meet compliance requirements, detect suspicious activity, and troubleshoot problems.

Table of contents

  • Querying logs
    • postgres_logs table structure
  • Filtering logs
    • Excluding routine events
    • By timeframe
    • By error severity
    • By query
    • By APIs/roles
    • By Dashboard queries
    • Full example for finding errors
  • Logging for compliance and security
  • Reviewing log settings
  • Changing log settings
    • Severity levels
    • Configuring queries logged
    • Logging within database functions
  • Frequently Asked Questions
  • Other resources

Querying logs

The most practical way to explore and filter logs is through the Logs Explorer.

It uses a subset of BigQuery SQL syntax and preparses queries for optimization. This imposes three primary limitations:

  • No subqueries or WITH statements
  • No * wildcards for column names
  • No ILIKE statements

Although there are many strategies for filtering logs, such as like and in statements, the regexp_contains helper function provides the most flexibility and control.
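As a minimal sketch of that flexibility (the pattern and limit here are illustrative), regexp_contains can match several phrasings of an event in a single pass:

```sql
-- match several phrasings of an event, case-insensitively (illustrative pattern)
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message
from postgres_logs
where regexp_contains(event_message, '(?i)timeout|timed out')
order by timestamp desc
limit 10;
```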

The postgres_logs table contains Postgres events.

postgres_logs table structure

The table contains 3 fundamental columns:

| Column | Description |
| --- | --- |
| event_message | the log's message |
| timestamp | time the event was recorded |
| parsed metadata | metadata about the event |

The parsed metadata column is an array that contains relevant information about events. To access the information, it must be unnested. This is done with a cross join.

Unnesting example


```sql
select
  event_message,
  parsed.<column name>
from
  postgres_logs
  -- Unpack data stored in the 'metadata' field
  cross join unnest(metadata) as metadata
  -- After unpacking the 'metadata' field, extract the 'parsed' field from it
  cross join unnest(parsed) as parsed;
```

Parsed metadata fields

Query information

| Field | Description | Example |
| --- | --- | --- |
| parsed.query | The SQL query executed | SELECT * FROM table; |
| parsed.command_tag | Tag identifying the type of command (e.g., SELECT) | SELECT, INSERT, UPDATE... |
| parsed.internal_query | An internal query that is used to facilitate a primary query. Often used by Realtime for certain tasks | select to_jsonb() |

Suggested use cases:

  • Identifying slow queries
  • Identifying failing queries
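For instance, a sketch that tallies logged events per command type (the aggregation is illustrative and assumes events that carry a parsed query):

```sql
-- tally logged events per command type (SELECT, INSERT, ...)
select
  parsed.command_tag as command_tag,
  count(parsed.command_tag) as events
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.command_tag is not null
group by command_tag
order by events desc;
```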

Error/Warning information

FieldDescriptionExample
parsed.error_severityevent severityLOG, WARNING, ERROR...
parsed.detailExplanation of the event according to Postgres"Key (fk_table)=(553585367) already exists."
parsed.sql_state_codeAn error code that maps to Postgres's error table42501
parsed.hintHint on how to solve the error"No function matches the given name and argument types. You might need to add explicit type casts."
parsed.contextProvides insight into where an error may have occurred"PL/pgSQL function public.find_text(public.vector,integer) line 3 at IF"

Suggested use cases:

  • Filter by error severity or SQL state code
  • Get hints, details, and context about error events
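For example, a sketch that surfaces permission-denied errors (42501 comes from the table above; the rest of the query is illustrative):

```sql
-- surface permission-denied errors (SQLSTATE 42501) along with Postgres's hints
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.detail,
  parsed.hint
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.sql_state_code = '42501'
order by timestamp desc
limit 100;
```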

Connection/Identification information

| Field | Description | Example |
| --- | --- | --- |
| parsed.session_id | The session ID | 12345 |
| parsed.session_start_time | The start time of the session | 2024-05-08 15:30:00 |
| parsed.connection_from | The connection IP | 192.165.1.100 |
| parsed.user_name | The name of the connecting database user | postgres |
| parsed.application_name | The name of the application | Supavisor, PostgREST |
| parsed.database_name | The name of the database | postgres |
| parsed.process_id | The process ID, often used to identify extension workers | 1234 |
| parsed.backend_type | Indicates whether the event originated internally (e.g., from background workers like pg_net, timescale, or pg_cron) or externally from a client (client backend) | client backend |

Suggested use cases:

  • Identify events by server/API
  • Filter connections by IP
  • Identify connections to specific databases
  • Filter connections by sessions for debugging
  • Identify extension events
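As one sketch (the comparison value mirrors the backend_type description above), internal worker activity can be separated from client traffic:

```sql
-- list events that did not originate from client connections
-- (e.g., background workers such as pg_cron or pg_net)
select
  parsed.backend_type,
  parsed.process_id,
  event_message
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.backend_type != 'client backend'
order by postgres_logs.timestamp desc
limit 100;
```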

Filtering logs

Excluding routine events

During normal operation, most Postgres logs are routine events, such as connection authorizations and checkpoints. To see the default types of events that are logged, you can check this guide.

When exploring the logs for atypical behavior, it's often strategic to filter out expected values. This can be done by adding the following filter to your queries:


```sql
...query
where
  -- Excluding routine events related to cron, PgBouncer, checkpoints, and successful connections
  not regexp_contains(event_message, '^cron|PgBouncer|checkpoint|connection received|authenticated|authorized');
```

By timeframe

To investigate issues around a specific period:


```sql
-- filtering by time period
...query
where
  timestamp between '2024-05-06 04:44:00' and '2024-05-06 04:45:00'
```

By error severity

This filter finds all errors, fatals, and panics:

| Severity | Usage |
| --- | --- |
| ERROR | Reports an error that caused the current command to abort. |
| FATAL | Reports an error that caused the current session to abort. |
| PANIC | Reports an error that caused all database sessions to abort. |

```sql
-- find error events
...query
where
  parsed.error_severity in ('ERROR', 'FATAL', 'PANIC')
```

Failure events include an sql_state_code that can be referenced in the Postgres docs.

By query

NOTE: Unless pg_audit is configured, only failed queries are logged.


```sql
-- find queries that reference a specific table
...query
where
  regexp_contains(parsed.query, '(?i)select . <some table>')
```

Queries can use complex syntax, so it is often helpful to isolate them by the database objects they reference, such as functions, tables, and columns. Because query structures vary, it is advised to use regex to find matches. Some common regex patterns are:

  • (?i): ignore case sensitivity
  • .: wildcard matching a single character
  • ^: look for values at the start of the string
  • |: or operator
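Combining these patterns, a sketch that matches writes against a hypothetical orders table, regardless of casing:

```sql
-- match inserts or updates referencing a hypothetical "orders" table
...query
where
  regexp_contains(parsed.query, '(?i)^(insert|update).*orders')
```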

By APIs/roles

All failed queries, including those from PostgREST, Auth, and external libraries (e.g., Prisma), are logged with helpful error messages for debugging.

Server/Role mapping

API servers have assigned database roles for connecting to the database:

| Role | API/Tool |
| --- | --- |
| supabase_admin | Used by Supabase to configure projects and for monitoring |
| authenticator | PostgREST |
| supabase_auth_admin | Auth |
| supabase_storage_admin | Storage |
| supabase_realtime_admin | Realtime |
| supabase_replication_admin | Synchronizes Read Replicas |
| postgres | Supabase Dashboard and external tools (e.g., Prisma, SQLAlchemy, psql...) |
| Custom roles | External tools (e.g., Prisma, SQLAlchemy, psql...) |

Filter on parsed.user_name to retrieve only the logs generated by specific roles:


```sql
-- find events based on role/server
...query
where
  -- find events from the relevant role
  parsed.user_name = '<ROLE>'
...
```

By Dashboard queries

Queries from the Supabase Dashboard are executed under the postgres role and include the comment -- source: dashboard. To isolate or exclude Dashboard requests during debugging, you can filter by this comment.


```sql
-- find queries executed by the Dashboard
...query
where
  regexp_contains(parsed.query, '-- source: dashboard')
```

Full example for finding errors


```sql
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.error_severity,
  parsed.user_name,
  parsed.query,
  parsed.detail,
  parsed.hint,
  parsed.sql_state_code,
  parsed.backend_type
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where
  regexp_contains(parsed.error_severity, 'ERROR|FATAL|PANIC')
  and parsed.user_name = 'postgres'
  and regexp_contains(event_message, 'duration|operator')
  and not regexp_contains(parsed.query, '<key words>')
  and postgres_logs.timestamp between '2024-04-15 10:50:00' and '2024-04-15 10:50:27'
order by timestamp desc
limit 100;
```

Logging for compliance and security

Customized object and role activity logging

⚠️ NOTE: This section applies to those using the postgres role or custom roles to interact with their database. Those using the Database REST API should reference the Database API Logging Guide instead.

When recording what is accessed and by whom, logging based on database roles and objects is the most reliable way to ensure a proper trail of activity.

You can use the pg_audit extension to selectively log relevant queries (not just errors) from certain roles against specific database objects.

You should take care when using the extension to not log all database events, but only what is absolutely necessary. Overlogging can strain the database and create log noise that makes it difficult to filter for relevant events.
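As a minimal sketch of such a targeted setup (the role name and logging classes are illustrative, not a recommendation), pg_audit can be scoped to a single role:

```sql
-- enable pg_audit and record only reads/writes made by one role (illustrative role name)
create extension if not exists pgaudit;
alter role api_role set pgaudit.log = 'read, write';
```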

Filtering by pg_audit:


```sql
...query
where
  -- all pg_audit recorded events start with 'AUDIT'
  regexp_contains(event_message, '^AUDIT')
  -- Finding queries executed from the relevant role (e.g., 'API_role')
  and parsed.user_name = 'API_role'
```

Filtering by IP

If you are connecting from a known, limited range of IP addresses, you should enable network restrictions.

Monitoring IPs becomes tricky with dynamic addressing, such as connections from serverless or edge environments. The challenge is amplified when relying on certain poolers, such as Prisma Accelerate, Supavisor, or Cloudflare's Hyperdrive, because they record the pooler's IP, not the true origin.

IP tracking is most effective when consistently relying on direct database connections from servers with static IP addresses:


```sql
-- filter by IP
select
  event_message,
  connection_from as ip,
  count(connection_from) as ip_count
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(parsed) as parsed
where
  regexp_contains(user_name, '<ROLE>')
  and regexp_contains(backend_type, 'client backend') -- only search for connections from outside the database (excludes cron jobs)
  and regexp_contains(event_message, '^connection authenticated') -- only view successful authentication events
group by connection_from, event_message
order by ip_count desc
limit 100;
```

Reviewing log settings

The pg_settings table describes system and logging configurations.


```sql
-- view system variables
select * from pg_settings;
```

The settings that affect logs are categorized under:

| Category | Description |
| --- | --- |
| Reporting and Logging / What to Log | Specifies which system events are logged. |
| Reporting and Logging / When to Log | Specifies the conditions or rules for logging. |
| Customized Options | Configures extensions and loaded modules, including those enhancing logging, such as auto_explain and pg_audit. |

To view all log settings for your database, you can execute the following SQL:


```sql
-- view all log related settings
select *
from pg_settings
where
  (
    category like 'Reporting and Logging / What to Log'
    or category like 'Reporting and Logging / When to Log'
    or category = 'Customized Options'
  )
  and name like '%log%';
```

Changing log settings

WARNING: Lenient settings can lead to overlogging, which impacts database performance and creates noise in the logs.

Severity levels

The log_min_messages variable determines the minimum severity level that is logged. The severity thresholds, as described in the Postgres docs, are:

| Severity | Usage |
| --- | --- |
| DEBUG1 .. DEBUG5 | Provides successively-more-detailed information for use by developers. |
| INFO | Provides information implicitly requested by the user, e.g., output from VACUUM VERBOSE. |
| NOTICE | Provides information that might be helpful to users, e.g., notice of truncation of long identifiers. |
| WARNING | Provides warnings of likely problems, e.g., COMMIT outside a transaction block. |
| ERROR | Reports an error that caused the current command to abort. |
| LOG | Reports information of interest to administrators, e.g., checkpoint activity. |
| FATAL | Reports an error that caused the current session to abort. |
| PANIC | Reports an error that caused all database sessions to abort. |

In most cases, the default is adequate. However, if you must adjust the setting, you can do so with the following query:


```sql
alter role postgres set log_min_messages = '<NEW VALUE>';

-- view new setting
show log_min_messages; -- default WARNING
```

Configuring queries logged

By default, only failed queries are logged. The PGAudit extension extends Postgres's built-in logging capabilities and can be used to selectively track queries in your database (see the sketches after this list) by:

  • role
  • session
  • database object
  • entire database
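Hedged sketches of the session-level and object-level scopes (the role and table names are illustrative; see the PGAudit documentation for the full set of options):

```sql
-- session-level: log all DDL executed in the current session
set pgaudit.log = 'ddl';

-- object-level: statements touching objects granted to the audit role are logged
-- (the role and table names here are illustrative)
create role auditor;
set pgaudit.role = 'auditor';
grant select on public.orders to auditor; -- SELECTs on public.orders are now audited
```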

Logging within database functions

To track or debug functions, logging can be configured by following the function debugging guide.
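Beyond that guide, plain RAISE statements inside PL/pgSQL functions write directly to the Postgres logs; a minimal sketch (the function is illustrative):

```sql
-- emit log events from inside a function with RAISE (illustrative function)
create or replace function public.do_work()
returns void
language plpgsql
as $$
begin
  raise log 'do_work started';
  -- ... function body ...
  raise log 'do_work finished';
end;
$$;
```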

Frequently Asked Questions

Can I join together different log tables, such as edge_logs and postgres_logs?

No. Log tables are independent of each other and do not share any primary/foreign key relations for joining.

Can I download the logs?

At the moment, the only way to download logs is as a CSV through the Log Dashboard.

What is logged?

To see the default types of events that are logged, you can check this guide.

Other resources