I have a web application that collects user data payloads and provides users with real-time comparative analytics (relative rankings, averages, and multi-dimensional comparisons). Currently, I am using Firebase Realtime Database (RTDB) with a serverless architecture in which a vanilla JS client handles all of the data processing.
I am expecting a high-traffic burst of 10k+ users per day for the next 15-20 days, and I have realized my current Firebase architecture will face severe scalability limitations. I am planning to migrate to Supabase (Postgres) to resolve this, but I would appreciate some community input on the transition.
Here is an overview of my current backend/database flow on Firebase:
1. Duplicate Prevention (Hashing & Reads)
When a user uploads their payload, the client generates a SHA-256 hash of it. Before writing, the client reads the hashes/ node to check whether that hash already exists, and rejects the upload if it does, preventing duplicate entries.
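Roughly, the check looks like this; a simplified sketch using the Web Crypto API and the modular Firebase SDK (helper names are illustrative):

```js
import { getDatabase, ref, get } from "firebase/database";

// Hex-encoded SHA-256 of the payload via the Web Crypto API.
async function hashPayload(payload) {
  const bytes = new TextEncoder().encode(JSON.stringify(payload));
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Read the hashes/ node first; an existing entry means a duplicate upload.
async function isDuplicate(hash) {
  const snapshot = await get(ref(getDatabase(), `hashes/${hash}`));
  return snapshot.exists();
}
```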
2. Client-Side Aggregation (Firebase Transactions)
To minimize storage, I do not store raw individual records. Everything is aggregated on write. The client runs a Firebase Transaction on a stats/ node specific to the user's cohort. This transaction updates:
{ "150": 12, "145": 8 })3. Global Counters (Multi-path Updates) After the transaction succeeds, the client performs a simultaneous multi-path update to increment global platform counters and record the payload hash with a server timestamp.
4. Fetching Analytics (Reads)
The client fetches the aggregated stats/ node, iterates over the frequency map locally, calculates the user's relative rank and segment averages, and builds distribution histograms and radar charts entirely in the browser.
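The rank math itself is simple; a sketch of the client-side calculation (the exact percentile convention here is illustrative):

```js
// Relative rank = share of cohort entries strictly below the user's value,
// computed from the aggregated frequency map (e.g. { "150": 12, "145": 8 }).
function relativeRank(freqMap, userValue) {
  let below = 0;
  let total = 0;
  for (const [value, count] of Object.entries(freqMap)) {
    total += count;
    if (Number(value) < userValue) below += count;
  }
  return total > 0 ? below / total : 0;
}
```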
"10k+ users" doesn't mean much, what's the actual expected amount of submissions and processing, how much data, how much time per calculations is required...
Given how fast computers are at making calculations there's a fair chance a single server could handle the whole thing.
Honestly sounds like the right move. RTDB starts getting painful once too many users are hammering the same aggregation nodes at once. I ran into similar retry storms on hot paths before, and Postgres handled it way better once the aggregation logic moved server-side.
I’d probably watch materialized view refresh frequency pretty closely though. At higher write volumes they can become their own bottleneck if you refresh too aggressively.
Including potential concurrent users
5. Client-Side Caching
To optimize Firebase bandwidth, I implemented a custom 15-minute in-memory cache on the frontend so users navigating between dashboard tabs do not trigger redundant RTDB reads.
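The cache is nothing fancy; a minimal sketch of the idea:

```js
const TTL_MS = 15 * 60 * 1000; // 15-minute time-to-live
const cache = new Map(); // key -> { value, expiresAt }

// Wrap any RTDB read; repeat calls within the TTL skip the network entirely.
async function cachedFetch(key, fetcher) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;
  const value = await fetcher();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```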
Why I am migrating to Supabase:
With an expected load of 10k+ users per day, many of them active concurrently, relying on Firebase RTDB for this setup presents significant risks. I am primarily concerned about transaction bottlenecks: multiple users attempting to write to the same aggregated stats/ node simultaneously will cause severe client-side retry loops and potential write failures. Additionally, downloading massive frequency maps to the client just to calculate dashboards will quickly exhaust bandwidth limits and degrade performance.
My plan with Supabase is to store the raw user payloads in a PostgreSQL table and utilize SQL (via materialized views or RPCs) to calculate ranks, distributions, and averages natively on the server. This shifts the computational burden to the backend, resolves client-side concurrency conflicts, and minimizes network payload sizes.
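On the client, the whole flow would collapse to an insert plus one RPC call. A sketch with supabase-js follows; the table name, RPC name, and parameters are placeholders, and the actual rank/distribution queries would live in a SQL function or materialized view:

```js
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon key>");

async function submitAndFetchStats(payload, cohort) {
  // Raw payloads go into a table; a UNIQUE constraint on a hash column
  // can replace the client-side duplicate check (Postgres unique_violation
  // surfaces as error code 23505).
  const { error } = await supabase
    .from("payloads")
    .insert({ cohort, data: payload });
  if (error && error.code !== "23505") throw error;

  // One RPC (hypothetical name) returns rank, averages, and distribution,
  // all computed server-side in SQL.
  const { data, error: rpcError } = await supabase.rpc("cohort_stats", {
    p_cohort: cohort,
  });
  if (rpcError) throw rpcError;
  return data;
}
```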
Questions for the Community:
"10k+ users" doesn't mean much on its own: what's the actual expected number of submissions and how much processing is involved? How much data, and how much time per calculation is required, including potential concurrent users? Given how fast computers are at calculations, there's a fair chance a single server could handle the whole thing.

Honestly, this sounds like the right move. RTDB starts getting painful once too many users are hammering the same aggregation nodes at once. I ran into similar retry storms on hot paths before, and Postgres handled it much better once the aggregation logic moved server-side.

I'd watch the materialized view refresh frequency pretty closely, though. At higher write volumes, refreshes can become their own bottleneck if you run them too aggressively.