Vector search with Next.js and OpenAI

Learn how to build a ChatGPT-style doc search powered by Next.js, OpenAI, and Supabase.

While our Headless Vector Search provides a toolkit for generative Q&A, in this tutorial we'll go more in-depth and build a custom ChatGPT-like search experience from the ground up using Next.js. You will:

  1. Convert your markdown into embeddings using OpenAI.
  2. Store your embeddings in Postgres using pgvector.
  3. Deploy a function for answering your users' questions.

You can read our Supabase Clippy blog post for a full example.

We assume that you have a Next.js project with a collection of .mdx files nested inside your pages directory. We will start developing locally with the Supabase CLI and then push our local database changes to our hosted Supabase project. You can find the full Next.js example on GitHub.
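If you haven't used the CLI with this project yet, a typical local workflow looks roughly like this (your-project-ref is a placeholder for your own project's reference):

```bash
supabase init    # scaffold a supabase/ directory in your project
supabase start   # run the local Postgres stack (requires Docker)

# Later, to push local database changes to your hosted project:
supabase link --project-ref your-project-ref
supabase db push
```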

Create a project

  1. Create a new project in the Supabase Dashboard.
  2. Enter your project details.
  3. Wait for the new database to launch.

Prepare the database

Let's prepare the database schema. We can use the "OpenAI Vector Search" quickstart in the SQL Editor, or you can run equivalent SQL yourself; a sketch follows the steps below.

  1. Go to the SQL Editor page in the Dashboard.
  2. Click OpenAI Vector Search.
  3. Click Run.
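If you'd rather run the SQL yourself, here is a rough sketch of what the quickstart sets up: the pgvector extension, tables for pages and page sections, and the match_page_sections function called later in this tutorial. Treat it as an approximation; the quickstart in the Dashboard is the source of truth.

```sql
-- Enable the pgvector extension
create extension if not exists vector;

-- One row per processed page, one row per section of that page
create table nods_page (
  id bigserial primary key,
  path text not null unique,
  checksum text
);

create table nods_page_section (
  id bigserial primary key,
  page_id bigint not null references nods_page,
  content text,
  token_count int,
  embedding vector(1536) -- dimension of text-embedding-ada-002
);

-- Similarity search used by the edge function later in this tutorial
create or replace function match_page_sections(
  embedding vector(1536),
  match_threshold float,
  match_count int,
  min_content_length int
)
returns setof nods_page_section
language plpgsql
as $$
#variable_conflict use_variable
begin
  return query
  select *
  from nods_page_section
  -- <#> is negative inner product; OpenAI embeddings are normalized,
  -- so this is equivalent to cosine similarity
  where length(nods_page_section.content) >= min_content_length
    and (nods_page_section.embedding <#> embedding) * -1 > match_threshold
  order by nods_page_section.embedding <#> embedding
  limit match_count;
end;
$$;
```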

Pre-process the knowledge base at build time

With our database set up, we need to process and store all .mdx files in the pages directory. You can find the full script on GitHub, or follow the steps below:

Step 1: Generate embeddings

Create a new file lib/generate-embeddings.ts and copy the code over from GitHub.


```bash
curl \
  https://raw.githubusercontent.com/supabase-community/nextjs-openai-doc-search/main/lib/generate-embeddings.ts \
  -o "lib/generate-embeddings.ts"
```

Step 2: Set up environment variables

We need some environment variables to run the script. Add them to your .env file, and make sure it is not committed to source control! You can get your local Supabase credentials by running supabase status.


```bash
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=
SUPABASE_SERVICE_ROLE_KEY=

# Get your key at https://platform.openai.com/account/api-keys
OPENAI_API_KEY=
```

Step 3: Run the script at build time

Include the script in your package.json script commands so that Vercel automatically runs it at build time.


```json
"scripts": {
  "dev": "next dev",
  "build": "pnpm run embeddings && next build",
  "start": "next start",
  "embeddings": "tsx lib/generate-embeddings.ts"
},
```

Create text completion with OpenAI API

Any time a user asks a question, we need to create an embedding for their question, perform a similarity search, and then send a text completion request to the OpenAI API with the query and the matched context merged into a single prompt.

All of this is glued together in a Vercel Edge Function, the code for which can be found on GitHub.
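For orientation, the function is a Next.js API route opted into the Edge runtime. A minimal scaffold might look like the following; the openAiKey, supabaseClient, and ApplicationError names are the ones the snippets below rely on, but the rest is illustrative:

```ts
// pages/api/vector-search.ts
import { createClient } from '@supabase/supabase-js'

export const config = { runtime: 'edge' }

// Error type thrown by the snippets in the following steps
export class ApplicationError extends Error {
  constructor(message: string, public data?: unknown) {
    super(message)
  }
}

const openAiKey = process.env.OPENAI_API_KEY!

const supabaseClient = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export default async function handler(req: Request): Promise<Response> {
  const { query } = await req.json()
  const sanitizedQuery = query.trim()

  // Steps 1-3 below plug in here: embed sanitizedQuery, call
  // match_page_sections, then stream the completion back to the client.
  return new Response('Not implemented', { status: 501 })
}
```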

Step 1: Create an embedding for the question

To perform similarity search, we first need to turn the question into an embedding.


```ts
const embeddingResponse = await fetch('https://api.openai.com/v1/embeddings', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${openAiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'text-embedding-ada-002',
    input: sanitizedQuery.replaceAll('\n', ' '),
  }),
})

if (embeddingResponse.status !== 200) {
  throw new ApplicationError('Failed to create embedding for question', embeddingResponse)
}

const {
  data: [{ embedding }],
} = await embeddingResponse.json()
```

Step 2: Perform similarity search

Using the embedding we extracted from the response, we can now perform a similarity search by making a remote procedure call (RPC) to the database function we created earlier.


```ts
const { error: matchError, data: pageSections } = await supabaseClient.rpc(
  'match_page_sections',
  {
    embedding,
    match_threshold: 0.78,
    match_count: 10,
    min_content_length: 50,
  }
)
```
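The matched pageSections then need to be concatenated into the contextText interpolated into the prompt below. A simplified sketch; the full code counts tokens with a GPT tokenizer rather than characters so the context stays within the model's limit:

```ts
if (matchError) {
  throw new ApplicationError('Failed to match page sections', matchError)
}

let contextText = ''

for (const pageSection of pageSections) {
  // Rough character budget standing in for a real token count
  if (contextText.length + pageSection.content.length > 6000) break

  contextText += `${pageSection.content.trim()}\n---\n`
}
```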

Step 3: Perform a text completion request

With the relevant content for the user's question identified, we can now build the prompt and make a text completion request via the OpenAI API.

If successful, the OpenAI API responds with a text/event-stream response that we can forward straight to the client, where we process the event stream to print the answer incrementally.


```ts
const prompt = codeBlock`
  ${oneLine`
    You are a very enthusiastic Supabase representative who loves
    to help people! Given the following sections from the Supabase
    documentation, answer the question using only that information,
    outputted in markdown format. If you are unsure and the answer
    is not explicitly written in the documentation, say
    "Sorry, I don't know how to help with that."
  `}

  Context sections:
  ${contextText}

  Question: """
  ${sanitizedQuery}
  """

  Answer as markdown (including related code snippets if available):
`

const completionOptions: CreateCompletionRequest = {
  model: 'gpt-3.5-turbo-instruct',
  prompt,
  max_tokens: 512,
  temperature: 0,
  stream: true,
}

const response = await fetch('https://api.openai.com/v1/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${openAiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(completionOptions),
})

if (!response.ok) {
  const error = await response.json()
  throw new ApplicationError('Failed to generate completion', error)
}

// Proxy the streamed SSE response from OpenAI
return new Response(response.body, {
  headers: {
    'Content-Type': 'text/event-stream',
  },
})
```

Display the answer on the frontend

As a final step, we process the event stream from the OpenAI API and print the answer to the user. The full code for this can be found on GitHub.


```ts
const handleConfirm = React.useCallback(
  async (query: string) => {
    setAnswer(undefined)
    setQuestion(query)
    setSearch('')
    dispatchPromptData({ index: promptIndex, answer: undefined, query })
    setHasError(false)
    setIsLoading(true)

    const eventSource = new SSE(`api/vector-search`, {
      headers: {
        apikey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY ?? '',
        Authorization: `Bearer ${process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY}`,
        'Content-Type': 'application/json',
      },
      payload: JSON.stringify({ query }),
    })

    function handleError<T>(err: T) {
      setIsLoading(false)
      setHasError(true)
      console.error(err)
    }

    eventSource.addEventListener('error', handleError)
    eventSource.addEventListener('message', (e: any) => {
      try {
        setIsLoading(false)

        if (e.data === '[DONE]') {
          setPromptIndex((x) => {
            return x + 1
          })
          return
        }

        const completionResponse: CreateCompletionResponse = JSON.parse(e.data)
        const text = completionResponse.choices[0].text

        setAnswer((answer) => {
          const currentAnswer = answer ?? ''

          dispatchPromptData({
            index: promptIndex,
            answer: currentAnswer + text,
          })

          return (answer ?? '') + text
        })
      } catch (err) {
        handleError(err)
      }
    })

    eventSource.stream()

    eventSourceRef.current = eventSource

    setIsLoading(true)
  },
  [promptIndex, promptData]
)
```

Learn more

Want to learn more about the awesome tech that is powering this?