I have a web application that collects user data payloads and provides them with real-time comparative analytics (relative rankings, averages, and multi-dimensional comparisons). Currently, I am using Firebase Realtime Database (RTDB) with a serverless architecture where the vanilla JS client handles the data processing.
I am expecting a high-traffic burst of 10k+ users per day for the next 15-20 days, and I have realized my current Firebase architecture will face severe scalability limitations. I am planning to migrate to Supabase (Postgres) to resolve this, but I would appreciate some community input on the transition.
Here is an overview of my current backend/database flow on Firebase:
1. Duplicate Prevention (Hashing & Reads)
When a user uploads their payload, the client generates a SHA-256 hash. Before writing, it reads a hashes/ node to check if this hash already exists, rejecting it if found to prevent duplicate entries.
2. Client-Side Aggregation (Firebase Transactions)
To minimize storage, I do not store raw individual records; everything is aggregated on write. The client runs a Firebase Transaction on a stats/ node specific to the user's cohort. This transaction updates a frequency map of values for that cohort (e.g., { "150": 12, "145": 8 }).
3. Global Counters (Multi-path Updates)
After the transaction succeeds, the client performs an atomic multi-path update to increment global platform counters and record the payload hash with a server timestamp.
4. Fetching Analytics (Reads)
The client fetches the aggregated stats/ node, iterates over the frequency map locally, and calculates the user's relative rank, segment averages, and builds distribution histograms and radar charts entirely in the browser.
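The in-browser math for rank and averages is a single pass over the frequency map. A simplified sketch (function name and return shape are illustrative):

```javascript
// Given a cohort frequency map (value -> count) and the user's own value,
// compute the cohort average and the user's percentile rank in one pass.
function cohortStats(freq, userValue) {
  let total = 0, below = 0, sum = 0;
  for (const [value, count] of Object.entries(freq)) {
    const v = Number(value);
    total += count;
    sum += v * count;
    if (v < userValue) below += count; // strictly below the user
  }
  return {
    average: sum / total,
    percentile: (below / total) * 100,
  };
}
```

This is exactly the work I want to push into Postgres: the whole map has to travel to the browser just to produce two numbers per user.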
5. Client-Side Caching
To optimize Firebase bandwidth, I implemented a custom 15-minute in-memory cache on the frontend so users navigating between dashboard tabs do not trigger redundant RTDB reads.
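The cache is nothing fancy, roughly along these lines. Sketch only: `createCache` is an illustrative name, and the injectable `now` clock exists purely to make expiry testable.

```javascript
// Minimal TTL cache keyed by dashboard query; entries expire after ttlMs.
function createCache(ttlMs = 15 * 60 * 1000, now = Date.now) {
  const entries = new Map(); // key -> { value, expires }
  return {
    get(key) {
      const hit = entries.get(key);
      if (!hit || hit.expires <= now()) return undefined; // miss or stale
      return hit.value;
    },
    set(key, value) {
      entries.set(key, { value, expires: now() + ttlMs });
    },
  };
}
```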
Why I am migrating to Supabase:
With an expected load of 10k+ daily users, relying on Firebase RTDB for this setup presents significant risks. My primary concern is transaction contention: multiple users attempting to write to the same aggregated stats/ node simultaneously will cause severe client-side retry loops and potential write failures. Additionally, downloading large frequency maps to the client just to calculate dashboards will quickly exhaust bandwidth limits and degrade performance.
My plan with Supabase is to store the raw user payloads in a PostgreSQL table and utilize SQL (via materialized views or RPCs) to calculate ranks, distributions, and averages natively on the server. This shifts the computational burden to the backend, resolves client-side concurrency conflicts, and minimizes network payload sizes.
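To make the plan concrete, here is the kind of query I have in mind. This is a sketch under assumptions, not a final schema: it presumes a hypothetical payloads(cohort, score) table of raw records, and uses standard Postgres window functions.

```sql
-- Sketch: server-side rank and cohort average in one pass,
-- refreshed periodically instead of recomputed per page load.
CREATE MATERIALIZED VIEW cohort_stats AS
SELECT
  cohort,
  score,
  PERCENT_RANK() OVER (PARTITION BY cohort ORDER BY score) AS pct_rank,
  AVG(score)     OVER (PARTITION BY cohort)                AS cohort_avg
FROM payloads;
```

The client would then fetch only its own row (or call an RPC), so the payload shrinks from an entire frequency map to a handful of numbers.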
Questions for the Community: