How to Interpret and Explore the Postgres Logs
Last edited: 1/18/2025
A complementary guide is available for the API logs.
Debugging and monitoring Postgres with logs
Logs provide insights into Postgres operations. They help meet compliance requirements, detect suspicious activity, and troubleshoot problems.
Table of contents
- Querying Logs
  - postgres_logs Table Structure
- Filtering Logs
  - Routine Events
  - By Timeframe
  - By Error Severity
  - By Query
  - By APIs/Roles
  - By Supabase Dashboard Queries
  - Full Example For Finding Errors
- Logging for Compliance and Security
- Reviewing Log Settings
- Changing Log Settings
  - Severity levels
  - Configuring queries logged
- Logging within functions
- Frequently Asked Questions
- Other resources
Querying logs
The most practical way to explore and filter logs is through the Logs Explorer.
It uses a subset of BigQuery SQL syntax and pre-parses queries for optimization. This imposes three primary limitations:
- No subqueries or `WITH` statements
- No `*` wildcards for column names
- No `ILIKE` statements
Although there are many ways to filter logs, such as `like` and `in` statements, the helper function `regexp_contains` provides the most flexibility and control.
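For example, because `ILIKE` is unavailable, a case-insensitive match can be written with the `(?i)` regex flag instead. A minimal sketch (the search pattern is illustrative):

```sql
-- instead of: event_message ilike '%connection%'
select event_message
from postgres_logs
where regexp_contains(event_message, '(?i)connection')
limit 10;
```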
The `postgres_logs` table contains Postgres events.
`postgres_logs` table structure
The table contains 3 fundamental columns:
| Column | Description |
| --- | --- |
| `event_message` | The log's message |
| `timestamp` | The time the event was recorded |
| `parsed metadata` | Metadata about the event |
The parsed metadata column is an array that contains relevant information about events. To access the information, it must be unnested, which is done with a `cross join`.
Unnesting example
```sql
select
  event_message,
  parsed.<column name>
from
  postgres_logs
  -- Unpack data stored in the 'metadata' field
  cross join unnest(metadata) as metadata
  -- After unpacking the 'metadata' field, extract the 'parsed' field from it
  cross join unnest(parsed) as parsed;
```
Parsed metadata fields
Query information
| Field | Description | Example |
| --- | --- | --- |
| `parsed.query` | The SQL query executed | `SELECT * FROM table;` |
| `parsed.command_tag` | Tag identifying the type of command | `SELECT`, `INSERT`, `UPDATE`... |
| `parsed.internal_query` | An internal query that is used to facilitate a primary query. Often used by Realtime for certain tasks | `select to_jsonb()` |
Suggested use cases (see the sketch below):
- Identifying slow queries
- Identifying failing queries
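A minimal sketch for the first use case, assuming statement durations are being logged (e.g., via `log_min_duration_statement`); the `...query` scaffold stands in for the select and cross-join boilerplate shown above:

```sql
-- a sketch: surface statements logged with their execution duration
...query
where
  regexp_contains(event_message, '^duration:')
```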
Error/Warning information
| Field | Description | Example |
| --- | --- | --- |
| `parsed.error_severity` | Event severity | `LOG`, `WARNING`, `ERROR`... |
| `parsed.detail` | Explanation of the event according to Postgres | "Key (fk_table)=(553585367) already exists." |
| `parsed.sql_state_code` | An error code that maps to Postgres's error table | `42501` |
| `parsed.hint` | Hint on how to solve the error | "No function matches the given name and argument types. You might need to add explicit type casts." |
| `parsed.context` | Provides insight into where an error may have occurred | "PL/pgSQL function public.find_text(public.vector,integer) line 3 at IF" |
Suggested use cases (see the sketch below):
- Filter by error severity or SQL state code
- Get hints, details, and context about error events
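As an illustration, a sketch that isolates permission errors by SQL state code (`42501` is `insufficient_privilege` in the Postgres error table; the `...query` scaffold stands in for the boilerplate above):

```sql
-- a sketch: find insufficient_privilege errors by SQL state code
...query
where
  parsed.sql_state_code = '42501'
```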
Connection/Identification information
| Field | Description | Example |
| --- | --- | --- |
| `parsed.session_id` | The session ID | `12345` |
| `parsed.session_start_time` | The start time of the session | `2024-05-08 15:30:00` |
| `parsed.connection_from` | The connection IP | `192.165.1.100` |
| `parsed.user_name` | The name of the connecting database user | `postgres` |
| `parsed.application_name` | The name of the application | `Supavisor`, `PostgREST` |
| `parsed.database_name` | The name of the database | `postgres` |
| `parsed.process_id` | The process ID, often used to identify extension workers | `1234` |
| `parsed.backend_type` | Determines whether the event originated internally (e.g., from background workers like pg_net, timescale, or pg_cron) or externally from a client (`client backend`) | `client backend` |
Suggested use cases (see the sketch below):
- Identify events by server/API
- Filter connections by IP
- Identify connections to specific databases
- Filter connections by sessions for debugging
- Identify extension events
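For instance, a sketch that separates internal activity from client traffic using `parsed.backend_type` (the `...query` scaffold stands in for the boilerplate above):

```sql
-- a sketch: find events generated by background workers rather than clients
...query
where
  parsed.backend_type != 'client backend'
```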
Filtering logs
Excluding routine events
Most Postgres logs during normal periods are routine events, such as connection authorizations and checkpoints. To see the default types of events that are logged, you can check this guide.
When exploring the logs for atypical behavior, it's often strategic to filter out expected values. This can be done by adding the following filter to your queries:
```sql
...query
where
  -- Excluding routine events related to cron, PgBouncer, checkpoints, and successful connections
  not regexp_contains(event_message, '^cron|PgBouncer|checkpoint|connection received|authenticated|authorized');
```
By timeframe
To investigate issues around a specific period:
```sql
-- filtering by time period
...query
where
  timestamp between '2024-05-06 04:44:00' and '2024-05-06 04:45:00'
```
By error severity
This filter finds all errors, fatals, and panics:
| Severity | Usage |
| --- | --- |
| ERROR | Reports an error that caused the current command to abort. |
| FATAL | Reports an error that caused the current session to abort. |
| PANIC | Reports an error that caused all database sessions to abort. |
```sql
-- find error events
...query
where
  parsed.error_severity in ('ERROR', 'FATAL', 'PANIC')
```
Failure events include a `sql_state_code` that can be referenced in the Postgres docs.
By query
NOTE: Unless pg_audit is configured, only failed queries are logged.

```sql
-- find queries that reference a specific table
...query
where
  regexp_contains(parsed.query, '(?i)select.*<some table>')
```
Queries can use complex syntax, so it is often helpful to isolate by referenced database objects, such as functions, tables, and columns. Because query structures can be complex, it is advised to use regex to find matches. Some common regex patterns are:

- `(?i)`: ignore case sensitivity
- `.`: wildcard that matches any single character
- `^`: look for values at the start of the string
- `|`: or operator
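Putting these patterns together, a sketch that matches any write statement touching a hypothetical `orders` table, case-insensitively (the table name and pattern are illustrative; the `...query` scaffold stands in for the boilerplate above):

```sql
-- a sketch: find write statements against the orders table (name is illustrative)
...query
where
  regexp_contains(parsed.query, '(?i)(insert|update|delete).*orders')
```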
By APIs/roles
All failed queries, including those from PostgREST, Auth, and external libraries (e.g., Prisma) are logged with helpful error messages for debugging.
Server/Role mapping
API servers have assigned database roles for connecting to the database:
| Role | API/Tool |
| --- | --- |
| `supabase_admin` | Used by Supabase to configure projects and for monitoring |
| `authenticator` | PostgREST |
| `supabase_auth_admin` | Auth |
| `supabase_storage_admin` | Storage |
| `supabase_realtime_admin` | Realtime |
| `supabase_replication_admin` | Synchronizes Read Replicas |
| `postgres` | Supabase Dashboard and external tools (e.g., Prisma, SQLAlchemy, PSQL...) |
| Custom roles | External tools (e.g., Prisma, SQLAlchemy, PSQL...) |
Filter by the `parsed.user_name` field to retrieve only the logs made by specific roles:
```sql
-- find events based on role/server
...query
where
  -- find events from the relevant role
  parsed.user_name = '<ROLE>'
...
```
By Dashboard queries
Queries from the Supabase Dashboard are executed under the `postgres` role and include the comment `-- source: dashboard`. To isolate or exclude Dashboard requests during debugging, you can filter by this comment.
```sql
-- find queries executed by the Dashboard
...query
where
  regexp_contains(parsed.query, '-- source: dashboard')
```
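Conversely, to filter Dashboard activity out while debugging your own clients, negate the match (same scaffold as above):

```sql
-- a sketch: exclude Dashboard-issued queries
...query
where
  not regexp_contains(parsed.query, '-- source: dashboard')
```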
Full example for finding errors
```sql
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.error_severity,
  parsed.user_name,
  parsed.query,
  parsed.detail,
  parsed.hint,
  parsed.sql_state_code,
  parsed.backend_type
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(metadata.parsed) as parsed
where
  regexp_contains(parsed.error_severity, 'ERROR|FATAL|PANIC')
  and parsed.user_name = 'postgres'
  and regexp_contains(event_message, 'duration|operator')
  and not regexp_contains(parsed.query, '<key words>')
  and postgres_logs.timestamp between '2024-04-15 10:50:00' and '2024-04-15 10:50:27'
order by timestamp desc
limit 100;
```
Logging for compliance and security
Customized object and role activity logging
⚠️ NOTE: This is specifically designated for those using the `postgres` role or custom roles to interact with their database. Those utilizing the Database REST API should reference the Database API Logging Guide instead.
When recording what is accessed and by whom, logging based on database roles and objects is the most reliable way to ensure a proper trail of activity.
You can use the pg_audit extension to selectively log relevant queries (not just errors) by certain roles, against specific database objects.
You should take care when using the extension to not log all database events, but only what is absolutely necessary. Overlogging can strain the database and create log noise that makes it difficult to filter for relevant events.
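As a hedged sketch of that selective approach, object-level auditing with PGAudit: create a dedicated audit role, grant it only the privileges worth recording, and point `pgaudit.role` at it (the role and table names here are illustrative):

```sql
-- a sketch of object-level auditing; role and table names are illustrative
create extension if not exists pgaudit;
create role auditor;
grant select, update on public.sensitive_table to auditor;
-- statements exercising the granted privileges are now written to the logs
alter role postgres set pgaudit.role = 'auditor';
```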
Filtering by pg_audit:
```sql
...query
where
  -- all pg_audit recorded events start with 'AUDIT'
  regexp_contains(event_message, '^AUDIT')
  and
  -- Finding queries executed from the relevant role (e.g., 'API_role')
  parsed.user_name = 'API_role'
```
Filtering by IP
If you are connecting from a known, limited range of IP addresses, you should enable network restrictions.
Monitoring IPs becomes tricky when dealing with dynamic addressing, such as addresses assigned in serverless or edge environments. The challenge is amplified when relying on certain poolers, such as Prisma Accelerate, Supavisor, or Cloudflare's Hyperdrive, as they record the pooler's IP, not the true origin.
IP tracking is most effective when consistently relying on direct database connections from servers with static IP addresses:
```sql
-- filter by IP
select
  event_message,
  connection_from as ip,
  count(connection_from) as ip_count
from
  postgres_logs
  cross join unnest(metadata) as metadata
  cross join unnest(parsed) as parsed
where
  regexp_contains(user_name, '<ROLE>')
  -- only search for connections from outside the database (excludes cron jobs)
  and regexp_contains(backend_type, 'client backend')
  -- only view successful authentication events
  and regexp_contains(event_message, '^connection authenticated')
group by connection_from, event_message
order by ip_count desc
limit 100;
```
Reviewing log settings
The `pg_settings` table describes system and logging configurations.

```sql
-- view system variables
select * from pg_settings;
```
The settings that affect logs are categorized under:
| Category | Description |
| --- | --- |
| Reporting and Logging / What to Log | Specifies system events worth logging. |
| Reporting and Logging / When to Log | Specifies certain conditions or rules for logging. |
| Customized Options | Configures extensions and loaded modules, including those enhancing logging like auto_explain and pg_audit. |
To view all log settings for your database, you can execute the following SQL:
```sql
-- view all log related settings
select *
from pg_settings
where
  (
    category like 'Reporting and Logging / What to Log'
    or category like 'Reporting and Logging / When to Log'
    or category = 'Customized Options'
  )
  and name like '%log%';
```
Changing log settings
WARNING: lenient settings can lead to overlogging, impacting database performance while creating noise in the logs.
Severity levels
The `log_min_messages` variable determines what is severe enough to log. Here are the severity thresholds from the Postgres docs:
| Severity | Usage |
| --- | --- |
| DEBUG1 .. DEBUG5 | Provides successively-more-detailed information for use by developers. |
| INFO | Provides information implicitly requested by the user, e.g., output from VACUUM VERBOSE. |
| NOTICE | Provides information that might be helpful to users, e.g., notice of truncation of long identifiers. |
| WARNING | Provides warnings of likely problems, e.g., COMMIT outside a transaction block. |
| ERROR | Reports an error that caused the current command to abort. |
| LOG | Reports information of interest to administrators, e.g., checkpoint activity. |
| FATAL | Reports an error that caused the current session to abort. |
| PANIC | Reports an error that caused all database sessions to abort. |
In most cases, the default is adequate. However, if you must adjust the setting, you can do so with the following query:
```sql
alter role postgres set log_min_messages = '<NEW VALUE>';

-- view new setting
show log_min_messages; -- default WARNING
```
Configuring queries logged
By default, only failed queries are logged. The PGAudit extension extends Postgres's built-in logging abilities. It can be used to selectively track all queries in your database by (a minimal configuration sketch follows the list):
- role
- session
- database object
- entire database
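For example, a minimal sketch of session-mode auditing that records every write statement issued by the `postgres` role (the value passed to `pgaudit.log` is one of PGAudit's standard statement classes: READ, WRITE, FUNCTION, ROLE, DDL, MISC, ALL):

```sql
-- a sketch: log all write statements executed by the postgres role
create extension if not exists pgaudit;
alter role postgres set pgaudit.log = 'write';
```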
Logging within database functions
To track or debug functions, logging can be configured by following the function debugging guide.
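As a quick illustration of the idea, a minimal sketch of emitting a log line from inside a PL/pgSQL function with `raise log` (the function name and message are illustrative):

```sql
-- a sketch: write to the Postgres logs from inside a function
create or replace function public.log_example()
returns void
language plpgsql
as $$
begin
  -- emitted at LOG severity, so it appears in postgres_logs
  raise log 'log_example was called';
end;
$$;
```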
Frequently Asked Questions
Can I join together different log tables, such as `edge_logs` and `postgres_logs`?
No, log tables are independent from each other and do not share any primary/foreign key relations for joining.
Can I download the logs?
At the moment, the way to download logs is through the Log Dashboard as a CSV.
What is logged?
To see the default types of events that are logged, you can check this guide.