A few months back, we introduced support for running AI Inference directly from Supabase Edge Functions.
Today we are adding Mozilla Llamafile, in addition to Ollama, to be used as the Inference Server with your functions.
Mozilla Llamafile lets you distribute and run LLMs with a single file that runs locally on most computers, with no installation! In addition to a local web UI chat server, Llamafile also provides an OpenAI API compatible server, which is now integrated with Supabase Edge Functions.
Want to jump straight into the code? You can find the examples on GitHub!
Getting started
Follow the Llamafile Quickstart Guide to get up and running with the Llamafile of your choice.
Once your Llamafile is up and running, create and initialize a new Supabase project locally:
```bash
npx supabase bootstrap scratch
```
If using VS Code, when prompted `Generate VS Code settings for Deno? [y/N]`, select `y` and follow the steps. Then open the project in your favorite code editor.
Call Llamafile with functions-js
Supabase Edge Functions now comes with an OpenAI API compatible mode, allowing you to call a Llamafile server easily via `@supabase/functions-js`.
Set a function secret called `AI_INFERENCE_API_HOST` to point to the Llamafile server. If you don't have one already, create a new `.env` file in the `functions/` directory of your Supabase project.
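For example, assuming your Llamafile server is listening on its default port 8080 on your local machine, the entry might look like this (the value is an assumption; `host.docker.internal` lets the locally running Edge Runtime container reach your host):

```bash
# supabase/functions/.env
# Assumed value: point this at wherever your Llamafile server is reachable.
AI_INFERENCE_API_HOST=http://host.docker.internal:8080
```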
Next, create a new function called `llamafile`:
```bash
npx supabase functions new llamafile
```
Then, update the `supabase/functions/llamafile/index.ts` file to look like this:
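The code listing itself is missing from this copy of the post. A minimal sketch of such a function, assuming the built-in `Supabase.ai.Session` API with its OpenAI-compatible mode (the `'openaicompatible'` mode name and the `'LLaMA_CPP'` model label are assumptions), could look like this:

```ts
import 'jsr:@supabase/functions-js/edge-runtime.d.ts'

// The model label is only informational here; the running Llamafile decides the model.
const session = new Supabase.ai.Session('LLaMA_CPP')

Deno.serve(async (req: Request) => {
  const params = new URL(req.url).searchParams
  const prompt = params.get('prompt') ?? ''

  // Runs a chat completion against the server configured via AI_INFERENCE_API_HOST.
  const output = await session.run(
    {
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: prompt },
      ],
    },
    {
      // Switches the session from the default Ollama mode to the
      // OpenAI API compatible mode (assumed option name).
      mode: 'openaicompatible',
      stream: false,
    }
  )

  return Response.json(output)
})
```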
Call Llamafile with the OpenAI Deno SDK
Since Llamafile provides an OpenAI API compatible server, you can alternatively use the OpenAI Deno SDK to call Llamafile from your Supabase Edge Functions.
For this, you will need to set the following two environment variables in your Supabase project. If you don't have one already, create a new `.env` file in the `functions/` directory of your Supabase project.
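The exact variables depend on how you configure the SDK. Assuming you rely on the OpenAI SDK's default environment lookups, the entries might look like this (Llamafile ignores the API key, so any placeholder works):

```bash
# supabase/functions/.env
# Assumed names: the OpenAI SDK reads these two variables by default.
OPENAI_BASE_URL=http://host.docker.internal:8080/v1
OPENAI_API_KEY=unused-placeholder
```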
Now, replace the code in your `llamafile` function with the following:
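The listing is missing here as well; a minimal sketch using the OpenAI SDK (imported via Deno's npm specifier, which is an assumption about how the original example pulled it in) might look like this:

```ts
import OpenAI from 'npm:openai'

Deno.serve(async (req: Request) => {
  const params = new URL(req.url).searchParams
  const prompt = params.get('prompt') ?? ''

  // Base URL and API key are read from OPENAI_BASE_URL / OPENAI_API_KEY.
  const client = new OpenAI()

  const chatCompletion = await client.chat.completions.create({
    // The value here is ignored by Llamafile; whichever Llamafile is running answers.
    model: 'LLaMA_CPP',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: prompt },
    ],
    stream: false,
  })

  return Response.json(chatCompletion)
})
```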
Note that the model parameter doesn't have any effect here! The model depends on which Llamafile is currently running!
Serve your functions locally
To serve your functions locally, you need to install the Supabase CLI as well as Docker Desktop or OrbStack.
You can now serve your functions locally by running:
```bash
supabase start
supabase functions serve --env-file supabase/functions/.env
```
Execute the function
_10curl --get "http://localhost:54321/functions/v1/llamafile" \_10 --data-urlencode "prompt=write a short rap song about Supabase, the Postgres Developer platform, as sung by Nicki Minaj" \_10 -H "Authorization: $ANON_KEY"
Deploying a Llamafile
The Docker team has published a great guide on how to containerize a Llamafile.
You can then use a service like Fly.io to deploy your dockerized Llamafile.
Deploying your Supabase Edge Functions
Set the secret on your hosted Supabase project to point to your deployed Llamafile server:
```bash
supabase secrets set --env-file supabase/functions/.env
```
Deploy your Supabase Edge Functions:
```bash
supabase functions deploy
```
Execute the function:
_10curl --get "https://project-ref.supabase.co/functions/v1/llamafile" \_10 --data-urlencode "prompt=write a short rap song about Supabase, the Postgres Developer platform, as sung by Nicki Minaj" \_10 -H "Authorization: $ANON_KEY"
Get access to Supabase Hosted LLMs
Access to open-source LLMs is currently invite-only while we manage demand for the GPU instances. Please get in touch if you need early access.
We plan to extend support for more models. Let us know which models you want next. We're looking to support fine-tuned models too!
More Supabase Resources
- Edge Functions: supabase.com/docs/guides/functions
- Vectors: supabase.com/docs/guides/ai
- Semantic search demo
- Store and query embeddings in Postgres and use them for Retrieval Augmented Generation (RAG) and Semantic Search