Storing Vectors
Insert and update vector embeddings with metadata using the JavaScript SDK or Postgres.
This feature is in alpha
Expect rapid changes, limited features, and possible breaking updates. Share feedback as we refine the experience and expand access.
Once you've created a bucket and index, you can start storing vectors. Vectors can include optional metadata for filtering and enrichment during queries.
Basic vector insertion
```javascript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient('https://your-project.supabase.co', 'your-service-key')

// Get bucket and index
const bucket = supabase.storage.vectors.from('embeddings')
const index = bucket.index('documents-openai')

// Insert vectors
const { error } = await index.putVectors({
  vectors: [
    {
      key: 'doc-1',
      data: {
        float32: [0.1, 0.2, 0.3 /* ... rest of embedding ... */],
      },
      metadata: {
        title: 'Getting Started with Vector Buckets',
        category: 'documentation',
      },
    },
    {
      key: 'doc-2',
      data: {
        float32: [0.4, 0.5, 0.6 /* ... rest of embedding ... */],
      },
      metadata: {
        title: 'Advanced Vector Search',
        category: 'blog',
      },
    },
  ],
})

if (error) {
  console.error('Error storing vectors:', error)
} else {
  console.log('✓ Vectors stored successfully')
}
```

Storing vectors from Embeddings API
Generate embeddings using an LLM API and store them directly:
```javascript
import { createClient } from '@supabase/supabase-js'
import OpenAI from 'openai'

const supabase = createClient('https://your-project.supabase.co', 'your-service-key')

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

// Documents to embed and store
const documents = [
  { id: '1', title: 'How to Train Your AI', content: 'Guide for training models...' },
  { id: '2', title: 'Vector Search Best Practices', content: 'Tips for semantic search...' },
  {
    id: '3',
    title: 'Building RAG Systems',
    content: 'Implementing retrieval-augmented generation...',
  },
]

// Generate embeddings
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: documents.map((doc) => doc.content),
})

// Prepare vectors for storage
const vectors = documents.map((doc, i) => ({
  key: doc.id,
  data: {
    float32: response.data[i].embedding,
  },
  metadata: {
    title: doc.title,
    source: 'knowledge_base',
    indexed_at: new Date().toISOString(),
  },
}))

// Store vectors in batches (max 500 per request)
const bucket = supabase.storage.vectors.from('embeddings')
const index = bucket.index('documents-openai')

for (let i = 0; i < vectors.length; i += 500) {
  const batch = vectors.slice(i, i + 500)
  const { error } = await index.putVectors({ vectors: batch })

  if (error) {
    console.error(`Error storing batch ${i / 500 + 1}:`, error)
  } else {
    console.log(`✓ Stored batch ${i / 500 + 1} (${batch.length} vectors)`)
  }
}
```

Updating vectors
```javascript
const index = bucket.index('documents-openai')

// Update a vector (same key)
const { error } = await index.putVectors({
  vectors: [
    {
      key: 'doc-1',
      data: {
        float32: [0.15, 0.25, 0.35 /* ... updated embedding ... */],
      },
      metadata: {
        title: 'Getting Started with Vector Buckets - Updated',
        updated_at: new Date().toISOString(),
      },
    },
  ],
})

if (!error) {
  console.log('✓ Vector updated successfully')
}
```

Deleting vectors
```javascript
const index = bucket.index('documents-openai')

// Delete specific vectors
const { error } = await index.deleteVectors({
  keys: ['doc-1', 'doc-2'],
})

if (!error) {
  console.log('✓ Vectors deleted successfully')
}
```

Metadata best practices
Metadata makes vectors more useful by enabling filtering and context:
```javascript
const vectors = [
  {
    key: 'product-001',
    data: { float32: [...] },
    metadata: {
      product_id: 'prod-001',
      category: 'electronics',
      price: 299.99,
      in_stock: true,
      tags: ['laptop', 'portable'],
      description: 'High-performance ultrabook',
    },
  },
  {
    key: 'product-002',
    data: { float32: [...] },
    metadata: {
      product_id: 'prod-002',
      category: 'electronics',
      price: 99.99,
      in_stock: true,
      tags: ['headphones', 'wireless'],
      description: 'Noise-cancelling wireless headphones',
    },
  },
]

const { error } = await index.putVectors({ vectors })
```

Metadata field guidelines
- **Keep it lightweight** - Metadata is returned with query results, so large values increase response size
- **Use consistent types** - Store the same field with consistent data types across vectors
- **Index key fields** - Mark fields you'll filter by to improve query performance
- **Avoid nested objects** - While supported, flat structures are easier to filter
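The guidelines above can be enforced before insertion. As a sketch, the helper below (`flattenMetadata` is a hypothetical utility, not part of the SDK) converts nested metadata into flat dot-notation keys so values stay easy to filter:

```javascript
// Hypothetical pre-insert helper: flattens nested metadata objects into
// dot-notation keys, e.g. { author: { name: 'Ada' } } -> { 'author.name': 'Ada' }.
// Arrays and primitive values are kept as-is.
function flattenMetadata(metadata, prefix = '') {
  const flat = {}
  for (const [key, value] of Object.entries(metadata)) {
    const path = prefix ? `${prefix}.${key}` : key
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      // Recurse into plain objects, extending the key path
      Object.assign(flat, flattenMetadata(value, path))
    } else {
      flat[path] = value
    }
  }
  return flat
}
```

You could run each vector's metadata through a helper like this before calling `index.putVectors`.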
Batch processing large datasets
To store large numbers of vectors efficiently, read them from disk and insert them in batches:
```javascript
import { createClient } from '@supabase/supabase-js'
import fs from 'fs'

const supabase = createClient(...)
const index = supabase.storage.vectors
  .from('embeddings')
  .index('documents-openai')

// Read embeddings from file
const embeddingsFile = fs.readFileSync('embeddings.jsonl', 'utf-8')
const lines = embeddingsFile.split('\n').filter((line) => line.trim())

const vectors = lines.map((line) => {
  const { key, embedding, metadata } = JSON.parse(line)
  return {
    key,
    data: { float32: embedding },
    metadata,
  }
})

// Process in batches
const BATCH_SIZE = 500
let processed = 0

for (let i = 0; i < vectors.length; i += BATCH_SIZE) {
  const batch = vectors.slice(i, i + BATCH_SIZE)

  try {
    const { error } = await index.putVectors({ vectors: batch })
    if (error) throw error

    processed += batch.length
    console.log(`Progress: ${processed}/${vectors.length}`)
  } catch (error) {
    console.error(`Batch failed at offset ${i}:`, error)
    // Optionally implement retry logic
  }
}

console.log('✓ All vectors stored successfully')
```

Performance optimization
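The batch loop above leaves retry logic as an option. One way to sketch it is a generic retry helper with exponential backoff (`withRetry` is illustrative, not an SDK function):

```javascript
// Hypothetical retry helper: runs `fn` up to `maxAttempts` times,
// doubling the wait between attempts (baseDelayMs, 2x, 4x, ...).
// Rethrows the last error if every attempt fails.
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 500) {
  let lastError
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < maxAttempts) {
        const delay = baseDelayMs * 2 ** (attempt - 1)
        await new Promise((resolve) => setTimeout(resolve, delay))
      }
    }
  }
  throw lastError
}
```

Since `putVectors` reports failures via its returned `error` rather than by throwing, you would wrap each batch as `await withRetry(async () => { const { error } = await index.putVectors({ vectors: batch }); if (error) throw error })`.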
Batch operations
Always use batch operations for better performance:
```javascript
// ❌ Inefficient - multiple requests
for (const vector of vectors) {
  await index.putVectors({ vectors: [vector] })
}

// ✅ Efficient - single batch operation
await index.putVectors({ vectors })
```

Metadata considerations
Keep metadata concise:
```javascript
// ❌ Large metadata
metadata: {
  full_document_text: 'Very long document content...',
  detailed_analysis: { /* large object */ }
}

// ✅ Lean metadata
metadata: {
  doc_id: 'doc-123',
  category: 'news',
  summary: 'Brief summary'
}
```
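One way to keep metadata lean in practice is to budget its serialized size before insertion. The sketch below uses an arbitrary 2 KB budget for illustration; it is not a documented service quota, and both helpers are hypothetical:

```javascript
// Illustrative byte budget - NOT a documented service limit
const MAX_METADATA_BYTES = 2048

// Measures the UTF-8 size of the metadata's JSON encoding
// (TextEncoder counts bytes, not UTF-16 code units)
function metadataByteSize(metadata) {
  return new TextEncoder().encode(JSON.stringify(metadata)).length
}

// Throws if metadata exceeds the budget, otherwise returns it unchanged
function assertMetadataWithinBudget(metadata) {
  const size = metadataByteSize(metadata)
  if (size > MAX_METADATA_BYTES) {
    throw new Error(`Metadata is ${size} bytes; keep it under ${MAX_METADATA_BYTES}`)
  }
  return metadata
}
```

A check like this in your ingestion pipeline surfaces oversized metadata at write time rather than as bloated query responses later.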