**Edge Runtime vs self-hosted: how do you deal with worker lifecycle and heavy initialization?**

by Diz_Eliel
Hi everyone,
I’m trying to better understand the *intended* way to handle the worker lifecycle in Supabase Edge Runtime, compared to a more traditional self-hosted setup.
In my case, I have:
- a few self-hosted Edge functions
- a main dispatcher that routes requests to individual functions
What I’m running into is that each request *appears* to spin up a fresh worker / isolate, which becomes costly once functions have:
- non-trivial initialization
- schema loading
- external integrations
This is mostly fine for lightweight APIs, but for heavier workloads it adds noticeable overhead.
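For illustration, this is the rough shape I have in mind (a made-up sketch, not my real code: the file path, the placeholder URL, and the schema check are all hypothetical). The module-scope setup is cheap if the worker gets reused, but it gets paid again every time a request lands on a fresh isolate:

```ts
// Illustrative only: module-scope initialization runs once per worker/isolate,
// not once per request.
const schema: { required?: string[] } = JSON.parse(
  await Deno.readTextFile("./schema/orders.json"),
);

// Stand-in for external integration setup (API client handshake, config fetch, ...).
const externalReady = await fetch("https://example.com/").then((r) => r.ok);

Deno.serve(async (req: Request) => {
  // The per-request work itself is cheap; the problem is re-running the
  // setup above when every request gets its own worker.
  const body = await req.json().catch(() => null);
  const ok = externalReady &&
    typeof body === "object" && body !== null &&
    (schema.required ?? []).every((key) => key in body);
  return new Response(JSON.stringify({ ok }), {
    headers: { "content-type": "application/json" },
  });
});
```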
I couldn’t find clear documentation on controlling worker reuse or lifecycle in Edge Runtime, so I experimented with an approach where:
- workers are isolated per function
- reused while the main process stays alive
- recreated only if the worker becomes unavailable
I’m **not advocating this as “the right solution”**, just trying to understand trade-offs and best practices.
So I’d love to hear from others:
- Is this a known limitation of Edge Runtime compared to classic self-hosted servers?
- How do you usually mitigate worker-per-request overhead?
- Is explicit worker reuse considered an anti-pattern, or just an undocumented compromise?
- Are there recommended patterns for heavier workloads on the edge?
To make it concrete, here’s a small example of how I’m experimenting with manual worker reuse.
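This is a minimal sketch, assuming the `EdgeRuntime.userWorkers.create()` / `worker.fetch()` API that the self-hosted main-service examples use; the `./functions/<name>` layout, the limits, and the “retry once on failure” behaviour are placeholders for my setup, not anything official:

```ts
// Main-service sketch: keep one user worker per function and reuse it across
// requests, instead of creating a fresh worker on every call.
// Assumption: the main service exposes `EdgeRuntime.userWorkers.create()` and
// the returned worker has a `fetch(req)` method, as in the self-hosted
// edge-runtime examples. Typed loosely here since I don't have official types.
interface UserWorker {
  fetch(req: Request): Promise<Response>;
}

declare const EdgeRuntime: {
  userWorkers: {
    create(opts: Record<string, unknown>): Promise<UserWorker>;
  };
};

// One cached worker (promise) per function name, kept for the lifetime of the
// main process. Caching the promise avoids racing duplicate creations.
const workers = new Map<string, Promise<UserWorker>>();

function getWorker(name: string): Promise<UserWorker> {
  let worker = workers.get(name);
  if (!worker) {
    worker = EdgeRuntime.userWorkers.create({
      // Hypothetical layout: each function lives under ./functions/<name>.
      servicePath: `./functions/${name}`,
      memoryLimitMb: 150,
      workerTimeoutMs: 5 * 60 * 1000,
      noModuleCache: false,
      envVars: [],
    });
    workers.set(name, worker);
  }
  return worker;
}

Deno.serve(async (req: Request) => {
  const name = new URL(req.url).pathname.split("/")[1];
  if (!name) {
    return new Response("missing function name", { status: 400 });
  }

  try {
    const worker = await getWorker(name);
    return await worker.fetch(req);
  } catch {
    // Treat any failure as "the cached worker is gone" (timed out, crashed,
    // was never created): drop it and retry once with a fresh worker.
    // Caveat: if the request body was already consumed, the retry will fail.
    workers.delete(name);
    const worker = await getWorker(name);
    return await worker.fetch(req);
  }
});
```

The obvious caveat is the retry path: a request whose body was already read can’t simply be replayed, so treat this as a proof of concept rather than something production-ready.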
Thanks!