I scanned 48 public GitHub repos built with Lovable, Bolt, and Replit. 58% use Supabase. Here's what I found in that Supabase subset:
The Supabase-specific findings:
Security Definer RPC — 33% of apps
When an AI tool writes a Postgres function and adds `SECURITY DEFINER`, that function runs with the privileges of its owner (on Supabase, typically the `postgres` role, which bypasses RLS) — not the caller's. Your RLS policies don't apply inside it. Any client that can call the function can read whatever it queries, regardless of your policies.
AI tools generate these to "fix" permission errors without understanding *why* the error exists. It works. It also silently guts your entire security model.
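Here's a minimal sketch of the failure mode. Table and function names are illustrative, not taken from the scanned repos:

```sql
-- RLS restricts profiles to their owner:
alter table profiles enable row level security;

create policy "own profile only" on profiles
  for select using (auth.uid() = user_id);

-- An AI tool "fixes" a permission error with SECURITY DEFINER.
-- The function now runs as its owner (the postgres role on Supabase),
-- bypassing RLS, and any client can call it via supabase.rpc():
create function get_profile(profile_id uuid)
returns setof profiles
language sql
security definer
as $$
  select * from profiles where id = profile_id;  -- no ownership check
$$;

-- Safer: keep the caller's privileges so RLS still applies.
create function get_profile_safe(profile_id uuid)
returns setof profiles
language sql
security invoker
as $$
  select * from profiles where id = profile_id;
$$;
```

Note that `SECURITY INVOKER` is Postgres's default; the point is that the AI-added `SECURITY DEFINER` is what opts you out of RLS.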
auth.role() misuse — common in AI-generated SQL
Using `auth.role() = 'authenticated'` in RLS policies looks right but has subtle gaps. The correct pattern is `(auth.uid() IS NOT NULL)`. Many AI-generated policies use the former.
BOLA/IDOR — 25% of apps
Direct queries with `WHERE id = $userInput` and no ownership check. Classic CRUD pattern that AI generates constantly.
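The vulnerable shape and the fix, on a hypothetical `orders` table:

```sql
-- Vulnerable pattern: any authenticated user can fetch any order
-- just by guessing or incrementing the id.
select * from orders where id = $1;

-- Ownership check at the query level:
select * from orders where id = $1 and user_id = auth.uid();

-- Better: enforce it once with RLS so every query path is covered,
-- including ones the AI generates later.
alter table orders enable row level security;

create policy "own orders" on orders
  for select to authenticated
  using (user_id = auth.uid());
```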
Missing RLS entirely — 6% of apps
Often on utility or join tables that still contain sensitive user data.
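A hypothetical example: a `team_members` join table that "only holds ids" still leaks who belongs to which team, because Supabase exposes every table through its auto-generated API unless RLS says otherwise.

```sql
create table team_members (
  team_id uuid references teams (id),
  user_id uuid references auth.users (id),
  primary key (team_id, user_id)
);

-- Without this line, anon and authenticated clients can read every row:
alter table team_members enable row level security;

create policy "see own memberships" on team_members
  for select to authenticated
  using (user_id = auth.uid());
```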
"Why not just paste the schema into Claude and ask it to find issues?"
Because Claude generated that schema. It validates the decisions it already made — it doesn't approach the code as an attacker would. Ask Claude to review a `SECURITY DEFINER` function and it'll often explain what the function does, then say it "looks appropriate." It doesn't reason about what an anonymous user calling it through the Supabase client can now access.
Our scanner doesn't reason — it pattern-matches. If `SECURITY DEFINER` is in your SQL, it gets flagged. No hallucinations, no reassurance, no misses dressed up as nuance.
What we're building:
VibeCheck: a security scanner built specifically for Supabase and AI-generated apps. It reads your actual source code and SQL, not just the deployed URL, and catches Security Definer functions, auth.role() misuse, and RLS logic errors at the SQL level, with specific file and line references.
Launching in the next few weeks. The waitlist is open — join here. Happy to share the full raw dataset with anyone who wants to dig deeper.
can you list some of these github projects?
idk about Claude, but ChatGPT 5.5 can most definitely spot security definer issues during routine audits, unprompted