Revolutionize your applications with the power of embeddings.

What are embeddings?

Embeddings represent data in a vector space, with the distance between vectors indicating their relatedness. Small distances imply high relatedness, while large distances suggest low relatedness.

Why do they matter?

They are essential for Retrieval-Augmented Generation (RAG), fine-tuning, semantic search, clustering, recommendations, anomaly detection, classification, and many other AI applications.

How much do they cost?

Creating embeddings is relatively cost-effective, but it is not free. For large datasets, such as millions of products or reviews, generating embeddings at scale can become costly.

Introducing Embedefy
Embeddings for everyone.
Embedefy simplifies the process of obtaining embeddings, making it easier to enhance a wide range of AI applications. The models are open-source, enabling you to discontinue using Embedefy at any time and generate your own embeddings using your preferred infrastructure.
Supercharge your applications with embeddings
There are many use cases for embeddings in AI, particularly in enhancing language models. For example, the integration of Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) offers a practical way to incorporate real-time and proprietary data into responses. This approach not only improves the accuracy of the information provided but also addresses the challenge of generating reliable, context-specific answers.

Envision a scenario where your AI chatbot is not just responding to queries but is also integrating your data into every conversation, making each interaction more insightful and context-aware.
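To make the pattern concrete, here is a rough sketch of the RAG flow in Python: embed the question, retrieve the most related documents, and fold them into the prompt sent to the LLM. The documents, vectors, and scoring below are made up for illustration; in a real application the embeddings would come from an embeddings API and the documents from your own data.

# Rough sketch of the RAG pattern (illustrative values only).
# Vectors are made up and assumed to be unit-length, so a plain dot
# product serves as the similarity score.

documents = [
    ("Pancake mix is on sale this week.", [0.97, 0.10, 0.21]),
    ("Our oatmeal ships in 1 kg bags.",   [0.96, 0.24, 0.12]),
    ("Motor oil is stored in aisle 12.",  [0.09, 0.79, 0.61]),
]

question = "What breakfast items do you carry?"
question_embedding = [0.96, 0.17, 0.17]  # in practice: from an embeddings API

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Retrieve the two most related documents.
top_docs = sorted(documents, key=lambda d: dot(question_embedding, d[1]), reverse=True)[:2]
context = "\n".join(text for text, _ in top_docs)

# The augmented prompt; in a real application this is sent to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)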
Embeddings API
The Embeddings API provides a simple way to retrieve embeddings for a given text, which can then be used for Retrieval-Augmented Generation (RAG), fine-tuning, semantic search, clustering, recommendations, anomaly detection, classification, and many other AI applications.

To get embeddings, send your text inputs to the embeddings API endpoint with a chosen model. The API will respond with an embedding for each input, which can be integrated into your applications or workflows.
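
As an illustration, a request might look like the following Python sketch. The endpoint URL, authentication header, and response shape are assumptions made for this example rather than documented values; consult the API documentation for the exact request and response format.

# Minimal sketch of a request to the Embeddings API.
# The endpoint URL, auth header, and response shape below are assumptions
# for illustration only; check the API documentation for the real contract.
import requests

API_URL = "https://api.embedefy.com/v1/embeddings"  # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "model": "bge-small-en-v1.5",            # same model as the SQL example below
    "inputs": ["Pancake mix", "Motor oil"],  # one embedding is returned per input
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Assumed response shape: a list of embeddings, one per input.
for item in resp.json()["data"]:
    print(len(item["embedding"]), "dimensions")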
PostgreSQL Extension
The Embedefy PostgreSQL Extension provides access to embeddings directly from your database, without building and maintaining additional applications. Once the extension is installed, you can query your database as you normally would, but with the benefits of embeddings.

To use embeddings in PostgreSQL, install the pgembedefy extension and select a table and column to process. Once the column is processed, you can query your database with natural language and get results based on semantic similarity.
-- Install the Embedefy PostgreSQL extension
CREATE EXTENSION embedefy CASCADE;
-- Create an embeddings table for an existing table
SELECT embedefy_embeddings_table_create('products', ARRAY['id'], 'bge-small-en-v1.5');
-- Process the name column in that table
SELECT embedefy_embeddings_table_process('products', 'name', 'bge-small-en-v1.5');

-- Query products based on user prompt, using cosine similarity
SELECT p.name
FROM products p, embedefy_products ep
WHERE p.id = ep.id
ORDER BY 1 - ((SELECT embedefy_embeddings('bge-small-en-v1.5', 'looking for breakfast items')::vector(384)) <=> ep.embedding) DESC
LIMIT 5;

name
-------------
 Pancake mix
 Omelette
 Oatmeal
 Noodles
 Bacon
Frequently asked questions

If you have more questions, visit our FAQ page.