Image Search with OpenAI CLIP

Implement image search with the OpenAI CLIP Model and Supabase Vector.

The OpenAI CLIP model was trained on a variety of (image, text) pairs. You can use the CLIP model for:

  • Text-to-image, image-to-text, image-to-image, and text-to-text search.
  • Fine-tuning on your own image and text data with the regular SentenceTransformers training code.

SentenceTransformers provides models that allow you to embed images and text into the same vector space. You can use this to find similar images as well as to implement image search.
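For intuition, here is a minimal sketch (the image path and the caption string are placeholders) showing that image and text embeddings produced by the same CLIP model can be compared directly, for example with cosine similarity:

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('clip-ViT-B-32')

# encode an image and a text description into the same vector space
img_emb = model.encode(Image.open('./images/one.jpg'))
text_emb = model.encode('a bike in front of a red brick wall')

# cosine similarity between the two embeddings; higher means more related
print(util.cos_sim(img_emb, text_emb))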

You can find the full application code as a Python Poetry project on GitHub.

Create a new Python project with Poetry

Poetry provides packaging and dependency management for Python. If you haven't already, install Poetry via pip:


pip install poetry

Then initialize a new project:


poetry new image-search

Setup Supabase project

If you haven't already, install the Supabase CLI, then initialize Supabase in the root of your newly created poetry project:


supabase init

Next, start your local Supabase stack:


supabase start

This will start up the Supabase stack locally and print out a bunch of environment details, including your local DB URL. Make a note of that for later use.

Install the dependencies

We will need to add the following dependencies to our project:

  • vecs: the Supabase Vector Python client.
  • sentence-transformers: a framework for sentence, text, and image embeddings (used with the OpenAI CLIP model).
  • matplotlib: for displaying our image result.

poetry add vecs sentence-transformers matplotlib

Import the necessary dependencies

At the top of your main Python script (e.g. image_search/main.py), import the dependencies and store your DB URL from above in a variable:


from PIL import Image
from sentence_transformers import SentenceTransformer
import vecs
from matplotlib import pyplot as plt
from matplotlib import image as mpimg

DB_CONNECTION = "postgresql://postgres:postgres@localhost:54322/postgres"
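If you prefer not to hard-code the connection string, a small variation (assuming you export a DB_CONNECTION environment variable in your shell; the variable name is our choice here) reads it from the environment and falls back to the local default:

import os

# read the connection string from the environment, falling back to the local default
DB_CONNECTION = os.environ.get(
    "DB_CONNECTION",
    "postgresql://postgres:postgres@localhost:54322/postgres",
)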

Create embeddings for your images

In the root of your project, create a new folder called images and add some images. You can use the images from the example project on GitHub, or you can find license-free images on Unsplash.

Next, create a seed method, which will create a new Supabase Vector Collection, generate embeddings for your images, and upsert the embeddings into your database:


def seed():
    # create vector store client
    vx = vecs.create_client(DB_CONNECTION)

    # create a collection of vectors with 512 dimensions
    images = vx.get_or_create_collection(name="image_vectors", dimension=512)

    # Load CLIP model
    model = SentenceTransformer('clip-ViT-B-32')

    # Encode the images
    img_emb1 = model.encode(Image.open('./images/one.jpg'))
    img_emb2 = model.encode(Image.open('./images/two.jpg'))
    img_emb3 = model.encode(Image.open('./images/three.jpg'))
    img_emb4 = model.encode(Image.open('./images/four.jpg'))

    # add records to the *images* collection
    images.upsert(
        records=[
            (
                "one.jpg",       # the vector's identifier
                img_emb1,        # the vector. list or np.array
                {"type": "jpg"}  # associated metadata
            ), (
                "two.jpg",
                img_emb2,
                {"type": "jpg"}
            ), (
                "three.jpg",
                img_emb3,
                {"type": "jpg"}
            ), (
                "four.jpg",
                img_emb4,
                {"type": "jpg"}
            )
        ]
    )
    print("Inserted images")

    # index the collection for fast search performance
    images.create_index()
    print("Created index")

Add this method, along with the search method we'll create next, as scripts in your pyproject.toml file:


[tool.poetry.scripts]
seed = "image_search.main:seed"
search = "image_search.main:search"

After activating the virtual environment with poetry shell, you can run your seed script via poetry run seed. You can inspect the generated embeddings in your local database by visiting the local Supabase dashboard at localhost:54323, selecting the vecs schema, and opening the image_vectors table.

Perform an image search from a text query

With Supabase Vector we can easily query our embeddings. We can use either an image as the search input (see the sketch at the end of this section) or, alternatively, generate an embedding from a text string and use that as the query input:


def search():
    # create vector store client
    vx = vecs.create_client(DB_CONNECTION)
    images = vx.get_or_create_collection(name="image_vectors", dimension=512)

    # Load CLIP model
    model = SentenceTransformer('clip-ViT-B-32')
    # Encode text query
    query_string = "a bike in front of a red brick wall"
    text_emb = model.encode(query_string)

    # query the collection filtering metadata for "type" = "jpg"
    results = images.query(
        data=text_emb,                     # required
        limit=1,                           # number of records to return
        filters={"type": {"$eq": "jpg"}},  # metadata filters
    )
    result = results[0]
    print(result)
    plt.title(result)
    image = mpimg.imread('./images/' + result)
    plt.imshow(image)
    plt.show()

By limiting the query to one result, we get back the single most relevant image, which we then display to the user with matplotlib.

That's it! Go ahead and test it out by running poetry run search, and you will be presented with an image of a "bike in front of a red brick wall".
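As noted above, you can also use an image as the query input instead of a text prompt. Here is a minimal sketch of that variant, reusing the same collection, model, and query API as the search method; the function name and the query image path are placeholders:

def search_by_image():
    # create vector store client and open the existing collection
    vx = vecs.create_client(DB_CONNECTION)
    images = vx.get_or_create_collection(name="image_vectors", dimension=512)

    # Load CLIP model
    model = SentenceTransformer('clip-ViT-B-32')

    # encode the query image ('./images/query.jpg' is a placeholder path)
    query_emb = model.encode(Image.open('./images/query.jpg'))

    # return the two closest matches from the collection
    results = images.query(
        data=query_emb,
        limit=2,
        filters={"type": {"$eq": "jpg"}},
    )
    print(results)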

Conclusion

With just a couple of lines of Python you are able to implement image search as well as reverse image search using OpenAI's CLIP model and Supabase Vector.