How to Interpret and Explore the Postgres Logs

Last edited: 2/21/2025

A complementary guide is available for the API logs.

Debugging and monitoring Postgres with logs

Logs provide insights into Postgres operations. They help meet compliance requirements, detect suspicious activity, and troubleshoot problems.

Table of contents

  • Querying Logs
    • postgres_logs Table Structure
  • Filtering Logs
  • Excluding Routine Events
    • By Timeframe
    • By Error Severity
    • By Query
    • By APIs/Roles
  • By Dashboard Queries
    • Full Example For Finding Errors
  • Logging for Compliance and Security
  • Reviewing Log Settings
  • Changing Log Settings
    • Severity levels
    • Configuring queries logged
  • Logging within Database Functions
  • Frequently Asked Questions
  • Other resources

Querying logs

The most practical way to explore and filter logs is through the Logs Explorer.

It uses a subset of BigQuery SQL syntax and pre-parses queries for optimization. This imposes three primary limitations:

  • No subqueries or WITH statements
  • No * wildcards for column names
  • No ILIKE statements

Although there are many strategies to filter logs, such as like and in expressions, the helper function regexp_contains provides the most flexibility and control.
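As a minimal sketch of that flexibility (the message patterns are ordinary connection events, chosen only for illustration), one regexp_contains call can cover what would otherwise take several like clauses:

-- sketch: one regex matches several message variants in a single pass
select event_message, timestamp
from postgres_logs
where regexp_contains(event_message, '(?i)connection received|connection authorized')
limit 10;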

The postgres_logs table contains Postgres events.

postgres_logs table structure

The table contains 3 fundamental columns:

Column | Description
event_message | the log's message
timestamp | time the event was recorded
parsed metadata | metadata about the event

The parsed metadata column is an array that contains relevant information about events. To access the information, it must be unnested. This is done with a cross join.

Unnesting example

select
  event_message,
  parsed.<column name>
from postgres_logs
  -- Unpack data stored in the 'metadata' field
  cross join unnest(metadata) as metadata
  -- After unpacking the 'metadata' field, extract the 'parsed' field from it
  cross join unnest(metadata.parsed) as parsed;

Parsed metadata fields

Query information

Field | Description | Example
parsed.query | The SQL query executed | SELECT * FROM table;
parsed.command_tag | Tag identifying the type of command (e.g., SELECT) | SELECT, INSERT, UPDATE...
parsed.internal_query | An internal query that is used to facilitate a primary query; often used by Realtime for certain tasks | select to_jsonb()

Suggested use cases:

  • Identifying slow queries
  • Identifying failing queries
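For instance, a minimal sketch of the failing-query use case, pairing the query fields above with the error severity field covered in the next table:

-- sketch: recent failing queries with their command tags
select
  event_message,
  parsed.query,
  parsed.command_tag
from postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.error_severity in ('ERROR', 'FATAL', 'PANIC')
order by timestamp desc
limit 100;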

Error/Warning information

Field | Description | Example
parsed.error_severity | event severity | LOG, WARNING, ERROR...
parsed.detail | Explanation of the event according to Postgres | "Key (fk_table)=(553585367) already exists."
parsed.sql_state_code | An error code that maps to Postgres's error table | 42501
parsed.hint | Hint on how to solve the error | "No function matches the given name and argument types. You might need to add explicit type casts."
parsed.context | Provides insight into where an error may have occurred | "PL/pgSQL function public.find_text(public.vector,integer) line 3 at IF"

Suggested use cases:

  • Filter by error severity or SQL code
  • Get hints, details, and context about error events
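As a sketch of both use cases at once, this pulls the hint, detail, and context fields for a single SQL state code (42501, the insufficient-privilege code used as the example above):

-- sketch: surface permission-denied events and their hints
select
  event_message,
  parsed.detail,
  parsed.hint,
  parsed.context
from postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.sql_state_code = '42501'
limit 100;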

Connection/Identification information

Field | Description | Example
parsed.session_id | The session ID | 12345
parsed.session_start_time | The start time of the session | 2024-05-08 15:30:00
parsed.connection_from | The connection IP | 192.165.1.100
parsed.user_name | The name of the connecting database user | postgres
parsed.application_name | The name of the application | Supavisor, PostgREST
parsed.database_name | The name of the database | postgres
parsed.process_id | The process ID, often used to identify extension workers | 1234
parsed.backend_type | Determines whether the event originated internally (e.g., from background workers like pg_net, timescale, or pg_cron) or externally from a client (client backend) | client backend

Suggested use cases:

  • Identify events by server/API
  • Filter connections by IP
  • Identify connections to specific databases
  • Filter connections by sessions for debugging
  • Identify extension events
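For example, a minimal sketch of the session-debugging use case (the session ID is the placeholder value from the table above):

-- sketch: trace every event recorded for a single session
select event_message, parsed.backend_type, timestamp
from postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where parsed.session_id = '12345'
order by timestamp
limit 100;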

Filtering logs

Excluding routine events

Most Postgres logs during normal periods are routine events, such as connection authorizations and checkpoints. To see the default types of events that are logged, you can check this guide.

When exploring the logs for atypical behavior, it's often strategic to filter out expected values. This can be done by adding the following filter to your queries:

... query
where
  -- Excluding routine events related to cron, PgBouncer, checkpoints, and successful connections
  not regexp_contains(event_message, '^cron|PgBouncer|checkpoint|connection received|authenticated|authorized');

By timeframe

To investigate issues around a specific period:

-- filtering by time period
... query
where timestamp between '2024-05-06 04:44:00' and '2024-05-06 04:45:00'

By error severity

This filter finds all errors, fatals, and panics:

Severity | Usage
ERROR | Reports an error that caused the current command to abort.
FATAL | Reports an error that caused the current session to abort.
PANIC | Reports an error that caused all database sessions to abort.

-- find error events
... query
where parsed.error_severity in ('ERROR', 'FATAL', 'PANIC')

Failure events include an sql_state_code that can be referenced in the Postgres Docs.

By query

NOTE: Unless pg_audit is configured, only failed queries are logged.

-- find queries referencing a specific table
... query
where regexp_contains(parsed.query, '(?i)select . <some table>')

Queries can use complex syntax, so it is often helpful to isolate them by the database objects they reference, such as functions, tables, and columns. Regex is well suited to finding these matches. Some common regex patterns are:

  • (?i): ignore case sensitivity
  • .: wildcard that matches any single character
  • ^: match values at the start of the string
  • |: or operator
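Putting these together, a sketch that combines the four patterns (the orders table name is hypothetical):

-- sketch: '^' anchors to the start, '|' separates alternatives, '(?i)' ignores case, '.' is the wildcard
... query
where regexp_contains(parsed.query, '(?i)^(insert|update).*orders')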

By APIs/roles

All failed queries, including those from PostgREST, Auth, and external libraries (e.g., Prisma), are logged with helpful error messages for debugging.

Server/Role mapping

API servers have assigned database roles for connecting to the database:

Role | API/Tool
supabase_admin | Used by Supabase to configure projects and for monitoring
authenticator | PostgREST
supabase_auth_admin | Auth
supabase_storage_admin | Storage
supabase_realtime_admin | Realtime
supabase_replication_admin | Synchronizes Read Replicas
postgres | Supabase Dashboard and external tools (e.g., Prisma, SQLAlchemy, PSQL...)
Custom roles | External tools (e.g., Prisma, SQLAlchemy, PSQL...)

Filter by the parsed.user_name field to retrieve only logs generated by specific roles:

-- find events based on role/server
... query
where
  -- find events from the relevant role
  parsed.user_name = '<ROLE>'
...

By Dashboard queries

Queries from the Supabase Dashboard are executed under the postgres role and include the comment -- source: dashboard. To isolate or exclude Dashboard requests during debugging, you can filter by this comment.

-- find queries executed by the Dashboard
... query
where regexp_contains(parsed.query, '-- source: dashboard')

Full example for finding errors

select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.error_severity,
  parsed.user_name,
  parsed.query,
  parsed.detail,
  parsed.hint,
  parsed.sql_state_code,
  parsed.backend_type
from postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where
  regexp_contains(parsed.error_severity, 'ERROR|FATAL|PANIC')
  and parsed.user_name = 'postgres'
  and regexp_contains(event_message, 'duration|operator')
  and not regexp_contains(parsed.query, '<key words>')
  and postgres_logs.timestamp between '2024-04-15 10:50:00' and '2024-04-15 10:50:27'
order by timestamp desc
limit 100;

Logging for compliance and security

Customized object and role activity logging

⚠️ NOTE: This is specifically designated for those using the postgres role or custom roles to interact with their database. Those utilizing the Database REST API should reference the Database API Logging Guide instead.

When recording what is accessed and by whom, logging based on database roles and objects is the most reliable way to ensure a proper trail of activity.

You can use the pg_audit extension to selectively log relevant queries (not just errors) by certain roles, against specific database objects.

Take care not to log all database events with the extension, but only what is absolutely necessary. Over-logging can strain the database and create log noise that makes it difficult to filter for relevant events.
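As a minimal sketch of that selectivity (the role name api_role is hypothetical, and it assumes the pg_audit extension is already enabled), auditing can be scoped to a single role and statement class rather than the whole database:

-- sketch: audit only write statements from one role instead of every event
alter role api_role set pgaudit.log = 'write';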

Filtering by pg_audit:

... query
where
  -- all pg_audit recorded events start with 'AUDIT'
  regexp_contains(event_message, '^AUDIT')
  -- find queries executed from the relevant role (e.g., 'API_role')
  and parsed.user_name = 'API_role'

Filtering by IP

If you are connecting from a known, limited range of IP addresses, you should enable network restrictions.

Monitoring IPs becomes tricky with dynamic addressing, such as connections from serverless or edge environments. The challenge is amplified when relying on certain poolers, such as Prisma Accelerate, Supavisor, or Cloudflare's Hyperdrive, because they record the pooler's IP, not the true origin.

IP tracking is most effective when consistently relying on direct database connections from servers with static IP addresses:

-- filter by IP
select
  event_message,
  parsed.connection_from as ip,
  count(parsed.connection_from) as ip_count
from postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where
  regexp_contains(parsed.user_name, '<ROLE>')
  -- only search for connections from outside the database (excludes cron jobs)
  and regexp_contains(parsed.backend_type, 'client backend')
  -- only view successful authentication events
  and regexp_contains(event_message, '^connection authenticated')
group by parsed.connection_from, event_message
order by ip_count desc
limit 100;

Reviewing log settings

The pg_settings table describes system and logging configurations.

-- view system variables
select * from pg_settings;

The settings that affect logs are categorized under:

Category | Description
Reporting and Logging / What to Log | Specifies system events worth logging.
Reporting and Logging / When to Log | Specifies certain conditions or rules for logging.
Customized Options | Configures extensions and loaded modules, including those enhancing logging, like auto_explain and pg_audit.

To view all log settings for your database, you can execute the following SQL:

-- view all log related settings
select *
from pg_settings
where
  (
    category like 'Reporting and Logging / What to Log'
    or category like 'Reporting and Logging / When to Log'
    or category = 'Customized Options'
  )
  and name like '%log%';

Changing log settings

WARNING: Lenient settings can lead to over-logging, impacting database performance and creating noise in the logs.

Severity levels

The log_min_messages variable determines what is severe enough to log. Here are the severity thresholds from the Postgres docs.

Severity | Usage
DEBUG1 .. DEBUG5 | Provides successively-more-detailed information for use by developers.
INFO | Provides information implicitly requested by the user, e.g., output from VACUUM VERBOSE.
NOTICE | Provides information that might be helpful to users, e.g., notice of truncation of long identifiers.
WARNING | Provides warnings of likely problems, e.g., COMMIT outside a transaction block.
ERROR | Reports an error that caused the current command to abort.
LOG | Reports information of interest to administrators, e.g., checkpoint activity.
FATAL | Reports an error that caused the current session to abort.
PANIC | Reports an error that caused all database sessions to abort.

In most cases, the default is adequate. However, if you must adjust the setting, you can do so with the following query:

alter role postgres set log_min_messages = '<NEW VALUE>';

-- view new setting
show log_min_messages; -- default WARNING

Configuring queries logged

By default, only failed queries are logged. The PGAudit extension extends Postgres's built-in logging abilities and can be used to selectively track queries in your database by:

  • role
  • session
  • database object
  • entire database
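A minimal sketch of the object-level option (the auditor role and orders table are hypothetical): pg_audit logs any statement that exercises a privilege granted to the configured audit role.

-- sketch: object-level auditing with pg_audit
create role auditor nologin;
alter database postgres set pgaudit.role = 'auditor';
-- selects and writes against public.orders will now be logged
grant select, insert, update, delete on public.orders to auditor;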

Logging within database functions

To track or debug functions, logging can be configured by following the function debugging guide.

Frequently Asked Questions

Can different log tables be joined?

No, log tables are independent from each other and do not share any primary/foreign key relations for joining.

How to download logs

At the moment, logs can be downloaded as a CSV through the Log Dashboard.

What is logged?

To see the default types of events that are logged, you can check this guide.

Other resources: