@ankitjaitly Check yesterday's [CHANGELOG](https://github.com/supabase/supabase/blob/master/docker/CHANGELOG.md) :) @kallebysantos has recently added some initial UI for edge functions in [self-hosted](https://supabase.com/docs/guides/self-hosting). You will need to update your `docker-compose.yml` with a new volume, though. There will be more added to the edge functions UI at some point in the next couple of months, I believe. Not 100% sure yet if CLI will also work with it for deploys, etc. - we'll see.
No, it's not available on AWS Aurora. I think I put a note about the limitations somewhere in that branch :)
Yeah, I went through the same.
It probably makes sense to mention a Windows environment here? On a *nix system (e.g., Linux), you could use `reset.sh` and the scripts in `utils`. Maybe also on Windows with WSL.
This was indeed an oversight for too long - apologies. Addressed via PR #42857, and I'll also consider adding a how-to on re-enabling external access to Logflare in case there's a need to ingest logs from outside of a [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting) setup. Separately - I don't think DigitalOcean's Supabase droplet, or even the Terraform to deploy self-hosted Supabase, has been maintained on DO's side for the past couple of years or more. The official upstream configuration is [here](https://github.com/supabase/supabase/tree/master/docker).
@Dudeonyx Hm.. just checked with 17.6.1.063 - a clean new install launched seemingly just fine, all logs visible.
@Dudeonyx I'll check.
For an upgrade path on top of an existing Postgres 15 self-hosted setup - not yet, working on it. Upgrading a data directory from Postgres 15 to Postgres 17 requires quite a few additional steps.
If you don't have a legacy self-hosted setup, you can try to launch a **clean new self-hosted** instance with the Postgres 17 version you see on platform. Images for Postgres 17 are also public. (Basically, change the image for `db` in docker-compose.yml before starting for the first time.) For an existing Postgres 15 self-hosted database the upgrade is more complex (working on scripts/docs now).
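As a sketch, the change in docker-compose.yml would look something like this - the exact image tag below is just an illustration (17.6.1.063 is one I've tested a clean install with), so check the Postgres version you see on the platform and the published `supabase/postgres` tags for the right one:

```yaml
# docker-compose.yml - only for a CLEAN instance, before the first start
db:
  # illustrative tag - pick the Postgres 17 tag matching what you see on platform
  image: supabase/postgres:17.6.1.063
```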
This is fair. With the self-hosted Supabase configuration as described [here](https://supabase.com/docs/guides/self-hosting) the UI is functional (e.g., SQL Editor works). That said, there are certain differences between what's available on the platform and via self-hosting. Multiple organizations and projects would be one of the limitations. Regarding the Coolify [template](https://github.com/coollabsio/coolify/blob/v4.x/templates/compose/supabase.yaml) - it is technically a 3rd-party installation option, independent of Supabase. It sometimes doesn't reflect the current upstream self-hosted configuration, and I'm not entirely sure how exactly it is maintained. Running your own server(s) - via Coolify or some other admin tools - is definitely more straightforward when you are [comfortable](https://supabase.com/docs/guides/self-hosting/docker#before-you-begin) with certain engineering tasks.
You can also add an issue, @xapple :)
Thanks for the feedback! The current self-hosted configuration is both an "interactive" (i.e. manual) install option and the upstream for any 3rd-party deployment options such as the ones you've mentioned. You can definitely generate all keys and secrets your own way following the notes in the guide. I'd certainly be interested in any PRs that might improve the current configuration. Regarding the shell script (specifically, it's POSIX shell, not Bash, btw :) - I'd like to avoid any additional [language] dependencies. A POSIX shell script might not be ideal these days because many people are less familiar with it, but it still seems to be the most portable option. I do need to add something like a `-y` flag for non-interactive installs - and thanks for spotting the missing S3 configuration variables. They aren't mandatory for the default install, but they are needed if you access Storage via the S3 protocol.
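For example, generating a secret manually is a one-liner per value - a hedged sketch, assuming `openssl` is available (the variable name below is illustrative; map it to the actual variables and length/format notes in the guide):

```shell
# Generate a random hex secret for a value like JWT_SECRET.
# 20 random bytes -> 40 hex characters; adjust per the guide's requirements.
jwt_secret=$(openssl rand -hex 20)
echo "JWT_SECRET=${jwt_secret}"
```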
@kallebysantos
@sweatybridge
@br0kenpixel Current [CLI](https://supabase.com/docs/guides/local-development) is a bit ahead of current [self-hosted](https://supabase.com/docs/guides/self-hosting). One big difference is Postgres 17 in the CLI-started development stack vs Postgres 15 in self-hosted (I'm planning to add an upgrade option / docs). You also can't "link" to a self-hosted instance the way you link to your remote projects on managed. Regarding this:

```
ERROR: must be able to SET ROLE "supabase_admin" (SQLSTATE 42501)
At statement: 21
ALTER TABLE "public"."todos" OWNER TO "supabase_admin"
```

I'm not entirely sure how you are using your self-hosted instance, or whether you also have a deployment on managed Supabase, but another difference between current self-hosted and the managed platform is that Studio uses `supabase_admin` and not `postgres` - so if you happened to create that `todos` table via the self-hosted UI, it might have a different owner in self-hosted. I have to check, but I think Studio started by CLI actually still uses `supabase_admin` - not entirely sure atm. In short, the latest CLI probably isn't 100% aligned with the state of self-hosted (or rather - vice versa). There's an internal re-alignment project / effort currently ongoing.
The default configuration for [self-hosted](https://supabase.com/docs/guides/self-hosting) Supabase puts Supavisor on port 5432, not Postgres - this is why this placeholder tenant id is required, even though it's not of much use in a single-server / single-project environment.
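As an illustration - the tenant id goes into the connection username as `postgres.<tenant-id>`, since Supavisor routes by tenant (all values below are placeholders, not real defaults):

```shell
# Supavisor routes by tenant, so the username carries the tenant id.
# Host, password, and tenant id are placeholders for illustration.
TENANT_ID='your-tenant-id'
DB_URL="postgresql://postgres.${TENANT_ID}:your-password@your-host:5432/postgres"
echo "$DB_URL"
```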
Really appreciate these details :)
A quick interim update on the state of [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting) :)

First of all, the main goal so far has been to bring the self-hosted configuration up to speed, make it more predictable as the "upstream" repo, collect feedback and ideas, identify & validate the gaps, and plan for enhancements.

What's happened lately:

- Many bugfixes and updates (see [CHANGELOG](https://github.com/supabase/supabase/blob/master/docker/CHANGELOG.md))
- A couple of feature gaps addressed (e.g., SQL snippets management in Studio UI), with more being planned
- Logs should work fine since about 2-3 months ago
- A new maintainer started working on [supabase-kubernetes](https://github.com/supabase-community/supabase-kubernetes) with great results - the repo is back at work after a long pause
- Still figuring out the proper update cadence, but I'm hoping it has stabilized a bit over the last few months

Next steps would be along the lines of major component updates (Postgres 17, Envoy as API gateway, further enhancements for Studio, support for new API keys, more work on the Helm chart, etc.)

Appreciate everyone's feedback here, and thanks so much for being a Supabase user :)
Thanks, useful! > Supabase UI doesn't work Would love to know more :) Do you mean you had issues using Studio overall? Some specific parts of Studio? Expectations from self-hosted Studio functionality not matching your Supabase platform experience? Something else? :)
Thanks for the detailed feedback! This is really helpful. > The lack of a dashboard is annoying Can you elaborate on this one? Did you mean lack of certain configuration UI components (e.g. for Auth), or something else?
@kallebysantos fyi :)
I'm not on the Auth team, but this is being planned and could even be available quite soon. Please be patient - appreciate your tolerance. And please - I understand this might seem harmless, but it would be great to keep our GitHub discussions civil :)
As an update - two changes in the most recent [update](https://github.com/supabase/supabase/blob/master/docker/CHANGELOG.md) of [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting):

- Realtime healthcheck logging is off by default (a new env var in docker-compose.yml)
- If you turn it on, only responses to healthchecks are logged

This should reduce the stored logs volume quite a bit. Hope this helps
Two changes in the most recent [update](https://github.com/supabase/supabase/blob/master/docker/CHANGELOG.md) of [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting):

- Realtime healthcheck logging is off by default (a new env var in docker-compose.yml)
- If you turn it on, only responses to healthchecks are logged

Hope this helps
> we chose Caddy security module cause it does an incredible work when it comes to authorization policies, to be honest i'm not sure if that's possible with Gotrue Good question :)
@krishparmar22242 Appreciate you trying to help, but the above looks quite a bit like what users could very well obtain themselves via Claude Code or ChatGPT. It would be best to first understand what kind of use case we're dealing with here.
@Adarsh-Kumar-Gupta re: "This not works, I'm using docker postgres, and after adding all this into the .config file, nothing works" What is your environment if I may? Are you using [local development & CLI](https://supabase.com/docs/guides/local-development), or [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting) configuration?
This is nice! Curious if you're planning to actively maintain it by continuing to pull changes from the "upstream" [Docker Compose](https://github.com/supabase/supabase/tree/master/docker) configuration. Also, regarding additional auth - what would be your thoughts on Studio being able to authenticate against the locally running Auth/Gotrue (vs. yet another external dependency)?
@maprohu I'm not sure what "Supabase v1.26.01" is :) However, if you're using [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting/docker), then yes - that's a bit of an annoying detail about current Realtime logging. There are a couple of workarounds:

1. You can set `LOG_LEVEL: warning` for Realtime in docker-compose.yml, along the lines of:

```
environment:
  LOG_LEVEL: warning
  PORT: 4000
  DB_HOST: ${POSTGRES_HOST}
```

Unfortunately this will also disable a lot of other info-level events - but if you don't need those, it could be a temporary fix.

2. I've been meaning to add a filter to vector.yml and reduce the frequency of healthchecks - see here: https://github.com/supabase/supabase/blob/self-hosted/filter-realtime-logs/docker/volumes/logs/vector.yml and https://github.com/supabase/supabase/blob/f34574eb08bd28856061e3465e02597b1e9f98e5/docker/docker-compose.yml#L215 This will still leave HTTP responses logged, but less frequently.

I've also asked the Realtime team to have a look at any possible logging enhancements there.
@kallebysantos
@itslenny
@mattrossman @Rodriguespn
I don't think it's going to be easy, @uday770202 :( For one, it's defined here: https://github.com/supabase/cli/blob/b3449760723eddbe32281f804dd13212a57b8062/pkg/config/config.go#L344 and then it's populated in the env vars across all containers started by the CLI (you can check via `docker inspect`). If you need a unique password, I guess the easiest would be to clone the CLI repo, change the default password in the code, and rebuild for local use (via `go build` & `go install`). If you need to change the password for an existing setup, it's probably possible to use something like we have for self-hosted: https://github.com/supabase/supabase/blob/7013388c0c6204061948a9d2129289031d03fb59/docker/utils/db-passwd.sh#L111 You could connect to the Postgres container and run the same. Still, the CLI would use the hardcoded password when starting the containers. Bear in mind, the [CLI](https://supabase.com/docs/guides/local-development) is **not meant to be used in a production or open dev/staging/test environment** - it should never be exposed to any public traffic. A lot of credentials in the CLI are default values and hardcoded. The CLI is meant to be used in a local, isolated, tightly controlled setup only.
Might be related - [auth#2334](https://github.com/supabase/auth/pull/2334)
@Aloukat I'm so sorry for overlooking this one! Any updates on your side? Didn't work in the end no matter what? It's a bit of a puzzling situation that you've described. Feel free to create an issue - I'll try to have a look and help. Please include the same kind of details, including if you have any customizations to the docker-compose configuration or else.
While #40686 isn't exactly the same discussion, I'm still leaving it here for visibility. There were a lot of details discussed in that issue regarding Kong configuration, ports different from the default 8000, etc.
@kallebysantos jfyi
@kallebysantos jfyi
@luizfelmach Jfyi, re https://github.com/supabase-community/supabase-kubernetes/
> Would it be a good idea to remove current docker image and start supabase with a specific version (not containing asymmetric keys as default)? If you'd like to keep your application as-is for now (using symmetric JWT, etc.) - then I'd suggest this downgrade experiment, yes.
Um, yeah, the new token isn't a symmetric JWT - it's a new asymmetric one:

```
% echo eyJhbGciOiJFUzI1NiIsImtp | base64 -d
{"alg":"ES256","ki%
```

But again, it might be because of the issue with the updated CLI and how it (both updated and) generated new docker configs. Do you think you could try it on a clean repo clone with an older version of CLI that worked for you before?
While I'm also asking the CLI and Auth teams internally, I guess you might try to set up a clean cloned repo, install an older CLI (do you remember the last one that worked for you?), and try it out.
I think it probably has less to do with Auth 2.185.0 and more with the updated CLI. The latest versions of CLI transitioned to supporting asymmetric keys by default (basically - generating a new type of env-var and other configuration items when starting containers), but the legacy JWT api keys, and the symmetric JWT tokens should still work. Do you mean your CLI-started Auth doesn't accept previously minted session JWT tokens anymore? Or that you can't obtain a new token while trying to log in? One way of verifying the keys configuration would be via `docker inspect` for the running containers and checking what kind of keys are passed via env-vars to every component.
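To illustrate the token check itself - a minimal sketch for decoding the JOSE header of a JWT to see which algorithm signed it (the sample segment below is a hand-made `{"alg":"ES256","typ":"JWT"}`, not a real token):

```shell
# A JWT is three base64url segments separated by dots; the first segment
# is the header and tells you the signing algorithm (HS256 = symmetric,
# ES256/RS256 = asymmetric). Sample segment below is hand-crafted.
header='eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9'
echo "$header" | base64 -d
```

(For real tokens you may need to add `=` padding before `base64 -d` accepts the segment, and base64url's `-`/`_` characters would need translating to `+`/`/` first.)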
Can you describe it a bit more regarding what exactly broke? (Also, I assume you're referring to [CLI/local-dev](https://supabase.com/docs/guides/local-development).)
@sweatybridge
@sweatybridge
@brunocek Could you possibly explain what you're trying to achieve via the nginx configuration above? Is the goal to set up your own locally running Supabase stack for testing / staging? Something else? In the original post you mentioned you were trying to use the CLI to pull your project from managed Supabase and re-set up your development environment so that you develop locally first (possibly with declarative schema, migrations, etc.) and then push to test/staging/prod on the managed Supabase platform. The issue with the missing Auth image over the last few days was an unfortunate coincidence, and I believe it was resolved (see [cli#4696](https://github.com/supabase/cli/issues/4696) and [auth#2319](https://github.com/supabase/auth/issues/2319)). If you're facing difficulties pulling from your existing project on Supabase to set up a development environment, it's best to describe the exact problem in a new [CLI repository issue](https://github.com/supabase/cli/issues) - this way the CLI team will see it and will hopefully help. If you're working on something other than my understanding above, could you possibly elaborate? 🙏
Likewise, HNY and best wishes for 2026!
Ok, as a disclaimer - this is going to be largely an AI-originated suggestion (Claude Code looking into the codebase) - but, it looks like `waitUntil` isn't added for "main" workers, it's only added for "user" workers. Quoting verbatim: "The `--main-service` mode is typically used for routing/orchestration code that spawns user workers. The `waitUntil` mechanism is designed for user workers, not the main orchestrator." My understanding is - the `--main-service` should be [launching](https://github.com/supabase/supabase/blob/7168162ed1efcd1f9034668637c1626ad9082717/docker/docker-compose.yml#L338) something like `main/index.ts` ([see here](https://github.com/supabase/supabase/blob/master/docker/volumes/functions/main/index.ts)), which in turn is responsible for orchestrating the "user" functions. Does it make sense?
@korotovsky Could you possibly share a bit more? Do you use edge runtime as part of [self-hosted Supabase](https://supabase.com/docs/guides/self-hosting/docker) stack? Are you running functions as a standalone container? Something else? Do you think you could share the minimum function code to reproduce this?