This looks really useful — syncing Salesforce data into Postgres without dealing with APIs is a huge time saver. I like that it supports incremental and bi-directional sync too. I’ll definitely check it out. Nice work building something from real-world pain points
It’s a strong, modern stack if you treat Postgres as a central system of record and leverage edge caching + batching.
The stack is solid, modern, and scalable, though slightly overkill for a simple studio site. Next.js + Vercel is an excellent combination. Supabase is useful if future features are planned, but optional for a basic site. For domains, Porkbun is recommended over Hostinger due to better pricing transparency, fewer upsells, and more developer-friendly DNS management. The setup is a good choice: either simplify it if you want minimalism, or keep it as-is for future growth.
Your dashboard currently triggers a new Supabase query every time a user navigates, causing redundant requests and loading screens. SSR isn’t needed since SEO isn’t required for a dashboard; client-side rendering is faster. Solution: use TanStack Query (React Query) to fetch and cache data (leads, analytics, org details). Cache data globally per resource to avoid repeated queries on navigation. Use separate hooks for each resource (useLeads, useAnalytics, etc.). Store auth/user info in context or global state to prevent repeated session checks. Result: smooth UX, faster navigation, and no unnecessary queries.
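To make the caching idea concrete, here’s a minimal, synchronous sketch of what TanStack Query does for you under the hood: one cache entry per resource key, with a stale time, so navigation re-renders don’t refire queries. The class, keys, and data here are all hypothetical, not your actual hooks.

```typescript
// Hand-rolled sketch of per-resource caching (what TanStack Query
// provides out of the box). Simplified to synchronous fetchers.
class ResourceCache {
  private store = new Map<string, { data: unknown; fetchedAt: number }>();
  constructor(private staleMs: number) {}

  get<T>(key: string, fetcher: () => T): T {
    const hit = this.store.get(key);
    if (hit && Date.now() - hit.fetchedAt < this.staleMs) {
      return hit.data as T; // a navigation re-render hits this path: no new query
    }
    const data = fetcher();
    this.store.set(key, { data, fetchedAt: Date.now() });
    return data;
  }
}

// One key per resource, mirroring separate useLeads / useAnalytics hooks.
const cache = new ResourceCache(30_000); // 30s stale time
let supabaseQueries = 0;
const fetchLeads = () => { supabaseQueries++; return [{ id: 1, name: "Acme" }]; };

const first = cache.get("leads", fetchLeads);  // triggers the query
const second = cache.get("leads", fetchLeads); // served from cache
```

With React Query you’d get this plus background refetching, invalidation, and loading states for free; the point is just that repeated navigations read from the cache instead of hitting Supabase again.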
You’re not stuck as badly as you think — this is a very recoverable situation, and yes, your idea can work, but it needs to be framed correctly so it makes technical sense and so you can explain it confidently to your teacher.
Auth merge: Reconcile providers; pick a canonical user for duplicates; copy metadata; some re-authentication may be needed. Users table & FKs: Update IDs and related references; use migration scripts in SQL or code. RLS policies: Adapt BC’s RLS for imported users; assign correct roles/claims. Storage: Copy files to BC bucket, preserve paths, update DB references, ensure access policies. Approach: Use Supabase API/scripts; back up first; do a phased migration (auth → users → tables → storage → testing).
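The “update IDs and related references” step above can be sketched as a pure remapping pass: given a map from duplicate user IDs to their canonical user, rewrite the foreign keys on dependent rows before import. The table shape and column names below are assumptions for illustration, not from either project.

```typescript
// Hypothetical FK-remapping step for the users-table migration:
// rows that referenced a duplicate user get repointed to the
// canonical user chosen during the auth merge.
interface Row { id: string; user_id: string }

function remapUserIds(rows: Row[], idMap: Map<string, string>): Row[] {
  return rows.map((row) => ({
    ...row,
    // Unmapped IDs (already-canonical users) pass through unchanged.
    user_id: idMap.get(row.user_id) ?? row.user_id,
  }));
}

// Example: duplicate "dup-1" was merged into canonical user "canon-1".
const idMap = new Map([["dup-1", "canon-1"]]);
const migrated = remapUserIds(
  [{ id: "r1", user_id: "dup-1" }, { id: "r2", user_id: "other" }],
  idMap,
);
```

In practice you’d run this per table (or as a SQL UPDATE with a join against a mapping table), and only after backing up, per the phased approach above.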
This error usually isn’t coming from your functions code; it’s almost always a local Supabase auth/JWT mismatch after the containers were rebuilt. The message “Key for the ES256 algorithm must be of type CryptoKey. Received Uint8Array” happens when the JWT signing setup inside the local stack is out of sync (the auth service, functions runtime, and env keys don’t agree on the key format). When you deleted the containers, Supabase regenerated its secrets, and now your local env and functions runtime are misaligned.
This is actually really useful. Seeding data is one of those things everyone needs, but nobody enjoys setting up every time. Respecting RLS and foreign keys is a big deal; that’s where most fake-data tools fall apart, so that alone makes this interesting. Being able to quickly spin up tens of thousands of realistic rows to test pagination, loading states, and edge cases is exactly the kind of thing that exposes real UI/UX problems early.
Here’s what I’ve gathered from experience and testing with Supabase branching and Lovable: Edge functions are branch-specific – Supabase treats each branch like a full project copy, so changes to dev edge functions don’t affect prod. Each branch has its own DB schema, functions, and configs. Lovable + Supabase branching – Lovable itself doesn’t fully “know” about Supabase branches. It’ll let you edit your git dev branch, but you need to make sure you connect it to the correct Supabase branch when deploying. Otherwise, you might accidentally deploy dev changes to prod. Merging dev → staging / prod – key things to watch: Schema changes: check for migration conflicts. Supabase migrations can break if a table/column already exists or was modified differently in staging/prod. Edge functions: make sure you test them in staging before merging; a small typo can break the live version. Data considerations: if you’re moving the dev schema to prod, don’t overwrite production data unless that’s intentional. Some teams keep a migration-first approach: all schema changes are written as SQL migration files in Git, then applied per branch. That way, you can safely promote changes from dev → staging → prod without accidental overwrites.
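The migration-first idea boils down to: each branch tracks which timestamp-prefixed migration files it has already applied, and only runs the pending ones, in order. Here’s a tiny sketch of that bookkeeping (filenames are made up; in real Supabase projects the CLI and a migrations table handle this for you).

```typescript
// Sketch of migration-first promotion: given the migration files in Git
// and the set a branch has already applied, return what still needs to
// run on that branch, in chronological order.
function pendingMigrations(files: string[], applied: Set<string>): string[] {
  return files
    .filter((f) => !applied.has(f)) // skip migrations this branch already ran
    .sort();                        // timestamp prefixes sort chronologically
}

// Example: dev has run everything; prod is one migration behind.
const files = [
  "20240101T0900_create_leads.sql",
  "20240105T1200_add_leads_status.sql",
];
const prodApplied = new Set(["20240101T0900_create_leads.sql"]);
const toRunOnProd = pendingMigrations(files, prodApplied);
```

Because only the delta is applied, promoting dev → staging → prod never re-runs (or overwrites) what a branch already has, which is exactly how you avoid the “table already exists” conflicts mentioned above.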
Approach: Incremental migration is safer than full rewrite; import existing React components into Next.js pages. Routing: Convert react-router routes to Next.js file-based routing; use [param].js for dynamic routes. Environment variables: Client-side vars need NEXT_PUBLIC_ prefix; server-only vars should not. API calls & SSR: Can fetch from client components or use Next.js API routes / getServerSideProps / getStaticProps. State management: Redux/RTK works mostly as-is, but adjust for SSR if needed. Tools: Use next-codemod and follow migration guides; ESLint + Prettier help fix path/import issues.
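The routing conversion above is mechanical enough to sketch: react-router `:param` segments become Next.js `[param]` files. This hypothetical helper just shows the mapping; a real migration would use the pages (or app) directory conventions directly.

```typescript
// Illustrative mapping from react-router path patterns to Next.js
// file-based routes: ":id" segments become "[id]" file names.
function toNextRoute(reactRouterPath: string): string {
  const segments = reactRouterPath
    .split("/")
    .filter(Boolean) // drop empty segments from leading/trailing slashes
    .map((s) => (s.startsWith(":") ? `[${s.slice(1)}]` : s));
  // The root path maps to the index page.
  return segments.length === 0 ? "pages/index.js" : `pages/${segments.join("/")}.js`;
}

const userRoute = toNextRoute("/users/:id"); // dynamic segment -> [id]
const homeRoute = toNextRoute("/");          // root -> index page
```

So a `<Route path="/users/:id">` becomes `pages/users/[id].js`, and the param is read from the router (`useRouter().query.id`) instead of `useParams()`.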
“Yes, if RLS is properly set up, exposing the anon key and project URL is safe. The service role key should never go in the client. By extra rate-limiting, I mean adding controls to prevent abuse or spammy requests, which can be done either in your Swift app (e.g., limiting login attempts, debouncing API calls) or via a lightweight backend that checks request frequency before passing them to Supabase. Supabase doesn’t provide full request throttling for anon calls, so this is usually where a custom layer helps. You’d move to a client → backend → Supabase setup when you need to do operations that should stay private (like using the service role key), enforce complex business logic, handle payments, or aggregate data before returning it to the client. Otherwise, direct client calls with proper RLS are common and safe.”
Using Supabase client-side with anon key + RLS is standard and secure if your policies are correct. Always keep service_role keys server-side, and use extra rate-limiting layers if abuse is a concern.
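The “extra rate-limiting layer” mentioned above can be as simple as a fixed-window counter in front of your Supabase calls: count requests per client per time window and reject over-limit calls before they go out. This is a minimal sketch; the threshold, window, and client IDs are made-up values, and a production version would live server-side (or use an edge middleware) rather than trusting the client.

```typescript
// Minimal fixed-window rate limiter: at most N requests per client
// per window. Over-limit calls are rejected before reaching Supabase.
class RateLimiter {
  private windows = new Map<string, { windowStart: number; count: number }>();
  constructor(private maxPerWindow: number, private windowMs: number) {}

  allow(clientId: string, now: number = Date.now()): boolean {
    const w = this.windows.get(clientId);
    if (!w || now - w.windowStart >= this.windowMs) {
      // New window for this client: reset the counter.
      this.windows.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    if (w.count >= this.maxPerWindow) return false; // over the limit: reject
    w.count++;
    return true;
  }
}

// Example policy: 3 requests per 60-second window per client.
const limiter = new RateLimiter(3, 60_000);
const results = [1, 2, 3, 4].map(() => limiter.allow("client-a", 1_000));
```

A sliding-window or token-bucket variant smooths out bursts at window boundaries, but the fixed window is usually enough to stop the “spammy requests” case.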
Do’s ✅
- Enable RLS (Row-Level Security) for user-level data protection.
- Use Supabase Auth with RLS for secure access.
- Take advantage of PostgREST, functions, and triggers for API and business logic.
- Use Supabase Storage for files with signed URLs.
- Implement TypeScript or strong typing for safer queries.
- Use upserts (onConflict) to simplify insert/update operations.
- Regularly back up your database.
- Monitor API usage and performance, especially for real-time subscriptions.

Don’ts ❌
- Don’t expose admin keys in frontend code.
- Don’t overuse realtime on large tables.
- Don’t ignore indexes on frequently queried columns.
- Don’t skip data validation – enforce it server-side or with DB constraints.
- Don’t rely solely on realtime for critical updates.
- Don’t write RLS policies that leak sensitive data.
- Don’t leave storage permissions open – use signed URLs for private files.
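For the upsert item: the semantics of insert-or-update on a conflict can be modelled as a pure function keyed on a unique column. This sketch only illustrates the behaviour (the table and column names are invented); in Supabase you’d call `.upsert()` on the client and let Postgres handle the conflict.

```typescript
// What an upsert (insert-or-update on conflict) does, modelled as a
// pure function with "email" as the hypothetical unique column.
interface Profile { email: string; name: string }

function upsert(table: Profile[], incoming: Profile[]): Profile[] {
  const byEmail = new Map(table.map((r) => [r.email, r]));
  for (const row of incoming) {
    // Conflict on the unique "email" column -> update; otherwise insert.
    byEmail.set(row.email, row);
  }
  return [...byEmail.values()];
}

const result = upsert(
  [{ email: "a@x.com", name: "Old" }],
  [{ email: "a@x.com", name: "New" }, { email: "b@x.com", name: "Fresh" }],
);
```

One call replaces the usual select-then-insert-or-update dance, which also avoids the race condition between the select and the write.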
With properly configured RLS, you can safely let client or server components interact directly with Supabase without a separate API layer. Direct access is simpler and works well for small to medium apps, prototypes, or apps where security is enforced by RLS. A separate API layer is recommended for larger or production apps to centralize business logic, validations, and improve maintainability. Many real-world apps colocate Supabase calls in server components, which is considered idiomatic with the Next.js App Router. For portfolio or team projects, an API layer often looks cleaner and more professional.
Hey! I’m seeing some issues with Supabase Auth too — not sure if it’s on their side or just my project. You can check their status here: https://status.supabase.com to see if there’s a reported outage.
If you want to ship quickly and stay focused on frontend: Supabase is perfect. You can even start small and migrate later if needed. If cost efficiency with massive storage or full control over backend is critical, and you’re willing to learn AWS, then go with AWS.
The user is comparing pricing/scaling models of all-in-one no-code platforms (Bubble, Glide, AppSheet) versus split-stack setups (like WeWeb + Supabase or FlutterFlow + Supabase). They explain how Bubble uses Workload Units (WU), while Glide uses “updates” (each CRUD operation, workflow, integration, etc.), and how these costs can become unpredictable and expensive as apps scale. They are asking: what is the equivalent of “updates” or WU in split-stack tools like Supabase setups? How are costs measured there (database reads/writes, storage, auth, functions, bandwidth, etc.)? And whether split stacks generally have lower and more predictable scaling costs due to less platform overhead. They want guidance and opinions on how the scaling economics compare between these two approaches.
Yes, you can replicate Supabase’s default “no session until verified” behaviour almost exactly, even when you fully control the email via an Edge Function. The key is understanding how Supabase enforces verification and then mirroring that flow. I can help you do it, though.
Totally understand the frustration: Supabase’s default confirmation emails can be annoying when you want full control. What you want is possible, but you’ll need to either: 1) use the built-in email templates and make sure your edits are saved in the “Auth Settings → Templates” section (sometimes changes don’t take effect if the project isn’t redeployed), or 2) route all confirmations through your Edge Function: disable the default email, and have your Edge Function send a fully custom email while still verifying the user. Basically, you can either fix the template in Supabase or fully take over with your function. Most devs I know go with the Edge Function route for full control.