Introduction to Nomic Embed V1.5
Nomic Embed V1.5 is the latest version of the Nomic Embed model, a powerful text embedding model that frequently ranks near the top of the MTEB embedding model leaderboard. It is designed for applications such as retrieval-augmented generation (RAG), clustering, and classification. The model converts sentences into dense vector representations, making it easier to perform semantic search and analyze text data effectively.
Note that for each of these use cases, you will need to prepend a task instruction prefix to the sentences you want to embed. For more details, see the Nomic Embed documentation; an example is sketched below.
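For instance, in a retrieval setup, corpus passages and queries get different prefixes. A minimal sketch, assuming the prefix names documented for Nomic Embed (search_document, search_query, clustering, classification):

# Task instruction prefixes (a sketch; prefix names follow the Nomic Embed docs)
documents = ["search_document: TSNE is a dimensionality reduction algorithm."]
queries = ["search_query: Which algorithm reduces dimensionality?"]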
Example code for running the Nomic Embed V1.5 embedding model on Modal
To run the following code, you will need to:
- Create an account at modal.com
- Run pip install modal to install the Modal Python package
- Run modal setup to authenticate (if this doesn't work, try python -m modal setup)
- Copy the code below into a file called app.py
- Run modal run app.py
import modal

MODEL_ID = "nomic-ai/nomic-embed-text-v1.5"
MODEL_REVISION = "d802ae16c9caed4d197895d27c6d529434cd8c6d"

# Container image with the dependencies needed to load and run the model.
image = modal.Image.debian_slim().pip_install(
    "torch", "sentence-transformers", "einops"
)

app = modal.App("example-base-nomic-embed", image=image)

GPU_CONFIG = modal.gpu.H100(count=1)


@app.cls(
    gpu=GPU_CONFIG,
    allow_concurrent_inputs=15,
    container_idle_timeout=60 * 10,
    timeout=60 * 60,
)
class Model:
    @modal.build()
    @modal.enter()
    def setup(self):
        from sentence_transformers import SentenceTransformer

        # Download the model weights at image build time and load them
        # into memory when a container starts.
        self.model = SentenceTransformer(
            MODEL_ID, revision=MODEL_REVISION, trust_remote_code=True
        )

    @modal.method()
    def embed(self, sentences: list):
        # Return one dense embedding vector per input sentence.
        return self.model.encode(sentences)


# ## Run the model
@app.local_entrypoint()
def main():
    sentences = [
        "search_document: TSNE is a dimensionality reduction algorithm created by Laurens van der Maaten"
    ]
    print(Model().embed.remote(sentences))
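The embed method returns one dense vector per input sentence (a NumPy array when using sentence-transformers). A minimal sketch, assuming NumPy is available locally, of how you might compare a query embedding against a document embedding once both have been computed; the variable names are illustrative, not part of the example app:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage with embeddings returned by Model().embed.remote(...):
# vecs = Model().embed.remote([
#     "search_query: What is TSNE?",
#     "search_document: TSNE is a dimensionality reduction algorithm.",
# ])
# print(cosine_similarity(vecs[0], vecs[1]))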