Add an AI-assisted "pre-flight check" that runs before supabase db push (CLI) or "Apply migration" (dashboard). It reads the pending migration, analyzes it against a ruleset plus AI review, and surfaces data-loss risks, lock-duration risks, and structural red flags before they hit production — not after.
```sql
-- a migration like this lands on a 50M-row table:
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;
ALTER TABLE orders ADD COLUMN region text DEFAULT 'us-east' NOT NULL;
ALTER TABLE orders DROP COLUMN legacy_code;
```
In production this is three separate foot-guns:
- `SET NOT NULL` on a non-empty column does a full table scan under `AccessExclusiveLock`: every row must be validated, and every other query on `orders` blocks during it.
- `NOT NULL` + `DEFAULT` on a large table rewrites the whole table in Postgres < 11, and on modern versions still takes an exclusive lock.
- `DROP COLUMN legacy_code` is destructive and irreversible once replication completes: no confirmation, no dry-run.

The CLI and dashboard currently accept all of this without warning. `supabase db push` just applies it. Splinter catches runtime problems after deploy; there's nothing that catches deploy-time risk.
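For the first foot-gun there is a well-known phased alternative the checker could suggest instead of a bare `SET NOT NULL`. A minimal sketch of that sequence as a helper that emits the statements (function and constraint names are illustrative; the final step skips its re-scan only on Postgres 12+, where a validated `CHECK` constraint proves the column is already non-null):

```python
def non_blocking_not_null_steps(table: str, column: str) -> list[str]:
    """Emit the phased, non-blocking alternative to a bare SET NOT NULL.
    Each statement is intended to run as its own short migration."""
    chk = f"{column}_not_null_chk"  # hypothetical constraint name
    return [
        # Fast: records the constraint without scanning existing rows.
        f"ALTER TABLE {table} ADD CONSTRAINT {chk} CHECK ({column} IS NOT NULL) NOT VALID;",
        # Scans the table, but without blocking reads/writes the whole time.
        f"ALTER TABLE {table} VALIDATE CONSTRAINT {chk};",
        # Postgres 12+ sees the validated CHECK and skips the full-table scan.
        f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL;",
        # The CHECK constraint is now redundant.
        f"ALTER TABLE {table} DROP CONSTRAINT {chk};",
    ]

for stmt in non_blocking_not_null_steps("orders", "status"):
    print(stmt)
```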
A pre-flight check that runs automatically in three places:
- `supabase db push` runs the check first, shows findings, and requires `--force` (or an interactive y/N) to apply anything flagged High.
- `supabase db push --check-only` exits non-zero on High findings, so it drops into any GitHub Actions workflow.
- The dashboard's "Apply migration" flow surfaces the same findings before applying.

Rule-based (fast, local, no LLM call):
| Severity | Pattern | Rule |
|---|---|---|
| High | DROP TABLE / DROP COLUMN without IF EXISTS-guarded rename-first pattern | data loss, irreversible |
| High | ALTER TABLE ... ALTER COLUMN ... SET NOT NULL on a table with rows | full-table scan under AccessExclusiveLock |
| High | ALTER TABLE ... ADD COLUMN ... NOT NULL without default on a non-empty table | fails at apply time |
| High | TRUNCATE in a migration | data loss |
| Medium | ALTER TABLE ... ALTER COLUMN TYPE requiring rewrite (e.g. text → uuid) | rewrite under exclusive lock |
| Medium | CREATE INDEX without CONCURRENTLY on large table | write-blocking |
| Medium | UPDATE / DELETE affecting > N% of rows | long lock, bloat |
| Low | Adding public table without RLS enabled | reuses existing Splinter rule 0013 |
| Low | FK column without a matching index | reuses existing Splinter rule 0003 |
Rule-based findings show row-count estimates from pg_class.reltuples so the user sees "SET NOT NULL on a 50M-row table" rather than an abstract warning.
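A sketch of how that estimate could be folded into a finding (the SQL and the helper names are illustrative, not an existing Supabase API; `reltuples` is Postgres's planner estimate, so it is approximate but free to read):

```python
# Query the planner's row estimate for a table (parameterized: schema, table).
ROW_ESTIMATE_SQL = """
SELECT c.reltuples::bigint AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = %s AND c.relname = %s;
"""

def humanize_rows(n: int) -> str:
    """Render an approximate row count the way the warning text would say it."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.0f}M"
    if n >= 1_000:
        return f"{n / 1_000:.0f}K"
    return str(n)

def format_finding(table: str, approx_rows: int) -> str:
    """Turn a raw estimate into the concrete, non-abstract warning."""
    return (f"High: SET NOT NULL on a {humanize_rows(approx_rows)}-row table "
            f"('{table}') scans every row under AccessExclusiveLock.")

print(format_finding("orders", 50_000_000))
```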
AI-assisted (one LLM call, only on request or when rules miss intent):
- "This migration renames `users.email` to `email_address`, then drops `email`; any code still referencing `users.email` will break. Grep of your project found 14 references." (Uses AI Assistant + repo context already available in Studio.)
- "This migration adds a unique constraint on `(email)`, but existing rows have 3 duplicates; `CREATE UNIQUE INDEX` will fail." (Quick pre-check query.)
- "The `ALTER TABLE` here holds an exclusive lock for an estimated 4-7 minutes based on row count. A non-blocking sequence would be: add column nullable → backfill in batches → add `NOT NULL`."

This feature cannot exist as a third-party tool; it needs things only Supabase has: the Studio AI Assistant with repo context, and row-count statistics from the project's actual database.
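The duplicate pre-check is cheap to generate mechanically before spending an LLM call. A sketch of that query builder (a hypothetical helper; identifiers are interpolated unescaped here, so it is illustrative only):

```python
def duplicate_precheck_sql(table: str, column: str) -> str:
    """Build the quick pre-check query: will a UNIQUE constraint on
    `column` succeed, or will CREATE UNIQUE INDEX fail on existing rows?"""
    return (
        f"SELECT {column}, count(*) AS dupes "
        f"FROM {table} GROUP BY {column} "
        f"HAVING count(*) > 1 LIMIT 10;"
    )

print(duplicate_precheck_sql("users", "email"))
```

Any rows returned mean the constraint cannot be applied as-is, and the finding can include the offending values.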
Phase 1 (MVP, ships in days not weeks):
- `supabase db push --dry-run` that prints findings and doesn't apply

Phase 2:

- Row-count estimates from `pg_class.reltuples`
- `--check-only` exits non-zero on High findings

Phase 3:

- AI-assisted review (the LLM-backed checks above)

Alternatives considered:

- You can run `squawk` today, but it can't see row counts or Supabase conventions, and it's not in the default workflow.
- `supabase db lint`: already exists in intent (#29341, Splinter in CLI). This proposal is an applied-at-migration-time variant focused on destructive/blocking operations, complementary to advisory linting.

Open question: should the CLI block `db push` on High findings by default, or warn-only?

Abhijeet Rane proposes an AI-assisted pre-flight check for Supabase migrations to identify data-loss and lock risks before applying changes. The feature would analyze pending migrations against a ruleset and AI review, highlighting potential issues. Biswajeet Arukha supports the idea and suggests an architectural framework for implementation, emphasizing the importance of proactive prevention for large production tables.
Proposal: Architecture for AI-Assisted "Zero-Downtime" Migration Guardrails

Implementing a "pre-flight check" is critical for Supabase, especially as projects scale to large production tables where a single `ALTER TABLE` can lock out the entire application.
Here is a proposed architectural framework for how this engine should work:
1. Lock-Safety Checks

- Mandatory Concurrent Checks: Flag any `CREATE INDEX` that lacks the `CONCURRENTLY` keyword.
- Phased Constraint Validation: Flag `ADD CONSTRAINT` (Foreign Keys / Check) that isn't marked `NOT VALID`. The engine should suggest a two-step migration:
  1. `ADD CONSTRAINT ... NOT VALID` (fast lock)
  2. `VALIDATE CONSTRAINT` (scan without blocking writes)

2. Automated "Safety Wrappers"

The AI assistant shouldn't just "detect"; it should "remediate" by injecting safety headers into the migration files. Every migration analyzed by the pre-flight check should be suggested to include:
```sql
-- AI Suggested Safety Header
SET statement_timeout = '30s';
SET lock_timeout = '5s'; -- Prevents the migration from queuing and blocking the app
```

3. LLM "Semantic Linting" Strategy

Traditional linters (like Squawk) are great for syntax, but an AI-assisted check can understand intent.
The AI Agent should be prompted with a "Danger Checklist":
- "Is this migration rewriting a 10M+ row table?"
- "Is there a default value being added to an existing column that triggers a full table rewrite in this PG version?"
- "Are we dropping a column that is still being referenced in the edge function code?" (Requires cross-repo analysis.)

4. CLI Integration (`supabase migration check`)

The workflow should be baked into the CLI. Before a `supabase db push`, the user runs a check:
```bash
supabase migration check --ai --target production
```

The output would provide a Risk Score (1-10) and a Lock Analysis Table, showing exactly which tables will be locked and for how long, based on metadata statistics from the production branch.
```python
import re

def pre_flight_migration_check(sql_content):
    issues = []

    # Check for non-concurrent index creation
    if re.search(r"CREATE INDEX", sql_content, re.I) and not re.search(r"CONCURRENTLY", sql_content, re.I):
        issues.append({
            "risk": "CRITICAL",
            "msg": "Found 'CREATE INDEX' without 'CONCURRENTLY'. This will lock the table for writes.",
            "fix": "Use 'CREATE INDEX CONCURRENTLY' instead."
        })

    # Check for Access Exclusive lock triggers
    exclusive_triggers = ["ALTER TABLE", "DROP TABLE", "TRUNCATE"]
    for trigger in exclusive_triggers:
        if re.search(rf"{trigger}", sql_content, re.I):
            issues.append({
                "risk": "HIGH",
                "msg": f"Detected {trigger} which requires an Access Exclusive lock.",
                "fix": "Ensure 'lock_timeout' is set before running this migration."
            })

    return issues


migration_sql = "CREATE INDEX idx_user_email ON users(email);"
report = pre_flight_migration_check(migration_sql)
print(report)
```
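The same pattern extends to the "Phased Constraint Validation" rule from section 1. A standalone sketch (function name and messages are my own, mirroring the style of the check above):

```python
import re

def check_constraint_validation(sql_content):
    """Flag ADD CONSTRAINT that is not marked NOT VALID: without it,
    Postgres validates every existing row while holding the lock."""
    issues = []
    if re.search(r"ADD\s+CONSTRAINT", sql_content, re.I) and not re.search(r"NOT\s+VALID", sql_content, re.I):
        issues.append({
            "risk": "HIGH",
            "msg": "ADD CONSTRAINT without NOT VALID validates every existing row under lock.",
            "fix": "Split into ADD CONSTRAINT ... NOT VALID, then VALIDATE CONSTRAINT."
        })
    return issues

print(check_constraint_validation(
    "ALTER TABLE orders ADD CONSTRAINT fk_region FOREIGN KEY (region) REFERENCES regions(id);"
))
```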