We’re excited to announce that we’ve developed and open-sourced an integration with LlamaIndex, a leading framework for building generative AI applications. Together with MariaDB Vector (introduced in the MariaDB 11.6 tech preview and now available in SkySQL), this integration lets customers seamlessly incorporate vector search capabilities into AI solutions such as semantic search and recommendation systems.
In this post, we’ll walk through the MariaDB features that enable vector support. We’ll then explore how you can leverage them to build an effective Retrieval-Augmented Generation (RAG) pipeline using LlamaIndex.
Semantic Search with MariaDB Vector
You’re probably familiar with traditional keyword-based full-text search available in MariaDB/MySQL, where finding results depends on matching exact keywords. But what if you need something more nuanced? Enter vector search. Rather than exact matches, vector stores use similarity to return items that are conceptually related to a query, even if they don’t share identical terms.
Imagine you have a database filled with product reviews, such as: “This laptop is a beast! The M2 chip is incredibly powerful, and the battery life is amazing.”
Now, instead of searching for exact words, you can use a vector store to find laptops that received positive sentiment — capturing not just the words but their meaning. This opens up new possibilities for semantic search, recommendation engines, and intelligent discovery in your applications.
Refining Results with LLMs
While the vector store can rapidly surface semantically similar results, the nature of vector searches means the results are approximate—you might still end up with a large set of results that need further refinement. Suppose you want to go deeper: “Show me the top 5 products with negative sentiment and the key reasons for those ratings.” Now you need more than just a vector search—you need an LLM capable of generating precise, context-aware answers.
Here’s how it works:
- Generate embeddings: Convert your product reviews into high-dimensional vectors, also called embeddings, using models from HuggingFace or OpenAI. Curious about what embeddings are and how they work? You can explore more about the underlying concepts here.
- Store in a vector database: Save these embeddings in a vector DB like MariaDB for fast semantic search.
- Combine search with LLMs: For user queries, perform a similarity search in the vector store and pass the results as “context” to the LLM. The LLM then generates a refined, targeted response, such as extracting the most relevant sentiments or insights from the reviews.
LlamaIndex simplifies this entire pipeline, orchestrating the process of embedding generation, vector search, and LLM-powered synthesis into one streamlined workflow.
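As a concrete sketch of that workflow, the snippet below embeds a few reviews with a local HuggingFace model, indexes them in LlamaIndex’s default in-memory vector store, and asks an OpenAI-backed query engine for a refined answer. The model name and sample reviews are illustrative; later in this post, MariaDB takes over as the vector store.

```python
# A minimal sketch of the embed -> store -> query pipeline with LlamaIndex.
# Uses the default in-memory vector store; MariaDB is swapped in later.
# Requires OPENAI_API_KEY in the environment for the LLM step.
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# 1) Generate embeddings locally with a HuggingFace model (384 dimensions).
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

reviews = [
    "This laptop is a beast! The M2 chip is incredibly powerful.",
    "Battery died within a year. Very disappointed.",
]

# 2) Store the embeddings in a vector index.
index = VectorStoreIndex.from_documents([Document(text=r) for r in reviews])

# 3) Similarity-search, then let the LLM synthesize a refined answer.
response = index.as_query_engine().query(
    "Which reviews express negative sentiment, and why?"
)
print(response)
```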
Why MariaDB Vector? It’s already raining vector DBs
Why add complexity by integrating another vector store when you can manage vector embeddings right alongside your existing data? With your data and vector embeddings in MariaDB managed by SkySQL, you get seamless integration—your embeddings are stored, scaled, and secured within the same database as the rest of your data. This approach ensures data consistency, reduces operational overhead, and is often far more cost-effective than relying on external vector databases.
Plus, MariaDB Vector is engineered for performance: the ANN benchmark results show that it is not only fast but also efficient in its resource usage. For more details, see the full benchmark blog here.
By keeping everything within SkySQL, you gain the advantage of lower latency, auto-scaling, simplified security management, and the confidence that your data and embeddings will always stay in sync.
A peek at the SQL extensions to support Vector operations
Using these vector capabilities is simple but powerful. You declare an embedding column and a VECTOR INDEX on it in your CREATE TABLE statement, letting you build indexes that capture the semantic meaning of your data.
Here’s how to set it up:
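The DDL below is a minimal sketch using the MariaDB 11.6 vector-preview syntax, where embeddings are stored as BLOBs (later releases add a dedicated VECTOR(N) type); the table and column names are illustrative:

```sql
CREATE TABLE product_reviews (
    id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    product VARCHAR(255),
    review TEXT,
    review_embedding BLOB NOT NULL,   -- serialized embedding vector
    VECTOR INDEX (review_embedding)   -- enables fast approximate similarity search
);
```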
In this example, the review_embedding column stores the semantic meaning of the review column as high-dimensional vectors. These vectors hold much more than just words—they capture the intent, sentiment, and context behind each review.
But how do you generate these embeddings from your text data? Let’s take a practical approach using LlamaIndex with a model from HuggingFace executed locally in Python:
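The sketch below assumes the llama-index-embeddings-huggingface package and the MariaDB Python connector; the model choice and connection details are illustrative:

```python
# A minimal sketch: embed a review locally and insert it into MariaDB.
import mariadb
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # 384-dim

review = "This laptop is a beast! The M2 chip is incredibly powerful."
embedding = embed_model.get_text_embedding(review)  # a list of 384 floats

conn = mariadb.connect(host="127.0.0.1", user="app", password="***", database="demo")
cur = conn.cursor()
# VEC_FromText converts a JSON-style array string into MariaDB's vector format.
cur.execute(
    "INSERT INTO product_reviews (review, review_embedding) VALUES (?, VEC_FromText(?))",
    (review, str(embedding)),
)
conn.commit()
conn.close()
```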
This Python snippet generates a review’s embedding using HuggingFace and inserts it into MariaDB. The embedding, stored as a BLOB, becomes part of your table’s structure, enriching the data with a semantic layer.
Once the embeddings are in place, querying them becomes straightforward with MariaDB’s built-in VEC_Distance function:
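A query along these lines returns the five reviews whose embeddings sit closest to a query embedding, which your application binds into the placeholder. (In the 11.6 preview the function is VEC_Distance; later releases split it into VEC_DISTANCE_EUCLIDEAN and VEC_DISTANCE_COSINE.)

```sql
-- Find the five reviews most similar in meaning to the query embedding.
SELECT review,
       VEC_Distance(review_embedding, VEC_FromText(?)) AS distance
FROM product_reviews
ORDER BY distance
LIMIT 5;
```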
MariaDB uses a modified HNSW algorithm for vector searches, defaulting to Euclidean distance for fast and efficient similarity search. With this, your database is more than just a collection of rows: it’s a rich, semantically aware system that can surface insights traditional searches would miss.
In this demo, we’ll show how to integrate MariaDB Vector with OpenAI’s LLM using LlamaIndex to build a smart, context-driven AI application.
We begin by launching MariaDB via SkySQL, where you’ll get access to vector capabilities in a cloud environment.
The following snippets highlight key parts of the code, guiding you through how to set up and connect your services. For the full implementation, you can jump into the Google Colab notebook.
1) Launch MariaDB in SkySQL using the API
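The sketch below provisions a service through the SkySQL REST API. The endpoint path, topology, and payload fields are assumptions drawn from the SkySQL provisioning docs; check the current API reference before using them.

```python
# A minimal sketch of provisioning a MariaDB service via the SkySQL REST API.
# Payload fields below are assumptions; consult the SkySQL API reference.
import os
import requests

resp = requests.post(
    "https://api.skysql.com/provisioning/v1/services",
    headers={"X-API-Key": os.environ["SKYSQL_API_KEY"]},
    json={
        "name": "vector-demo",
        "service_type": "transactional",
        "topology": "es-single",   # single-node MariaDB Enterprise Server
        "provider": "gcp",
        "region": "us-central1",
        "architecture": "amd64",
        "size": "sky-2x8",
        "storage": 100,
        "nodes": 1,
        # omit "version" to take the default; pick a release with vector support
    },
)
resp.raise_for_status()
service = resp.json()
print(service["id"])  # service ID used for later status and credential calls
```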
2) Populate the DB with some ‘Product reviews’
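A minimal sketch using the MariaDB Python connector; the host and credentials are illustrative, so copy the real ones from your SkySQL portal or the API response above:

```python
# Create a table of sample product reviews in the new SkySQL service.
import mariadb

conn = mariadb.connect(
    host="<your-service>.skysql.com", port=3306,
    user="dbpgf0000000", password="***", database="demo",
)
cur = conn.cursor()
cur.execute(
    """CREATE TABLE IF NOT EXISTS product_reviews (
           id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
           product VARCHAR(255),
           review TEXT
       )"""
)
reviews = [
    ("UltraBook Pro", "This laptop is a beast! The M2 chip is incredibly powerful."),
    ("UltraBook Pro", "Battery died within a year. Very disappointed."),
    ("SoundMax Buds", "Great sound, but the fit gets uncomfortable after an hour."),
]
cur.executemany("INSERT INTO product_reviews (product, review) VALUES (?, ?)", reviews)
conn.commit()
conn.close()
```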
3) Populate the MariaDB Vector (Product reviews) using LlamaIndex
Unlike in the earlier SQL example, notice that we now keep the vector embeddings in a separate table and manage them through our LlamaIndex MariaDB Vector integration.
The code snippet below creates a VectorStoreIndex object in LlamaIndex using our SkySQL managed MariaDB as the actual Vector store.
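This sketch uses the llama-index-vector-stores-mariadb package; the from_params connection values and table name are illustrative, and embed_dim must match your embedding model:

```python
# A minimal sketch: use SkySQL-managed MariaDB as the LlamaIndex vector store.
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mariadb import MariaDBVectorStore

vector_store = MariaDBVectorStore.from_params(
    host="<your-service>.skysql.com", port=3306,
    user="dbpgf0000000", password="***", database="demo",
    table_name="review_vectors",
    embed_dim=1536,  # matches OpenAI's text-embedding-ada-002
)

# Wrap the raw reviews as Documents; in the full notebook these come from the
# product_reviews table populated in step 2.
documents = [
    Document(text="This laptop is a beast! The M2 chip is incredibly powerful."),
    Document(text="Battery died within a year. Very disappointed."),
]

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```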
4) Time to do semantic searches and engage the LLM
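Continuing from the index built in step 3, a query engine retrieves the closest reviews from MariaDB and hands them to OpenAI’s LLM as context (requires OPENAI_API_KEY in the environment; the question is the example from earlier in this post):

```python
# Similarity search in MariaDB plus LLM synthesis in a single call.
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query(
    "Show me the top 5 products with negative sentiment and the key reasons."
)
print(response)
```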
With the MariaDB Vector integration for LlamaIndex, available now in SkySQL, you have a powerful toolkit to build AI applications that leverage cutting-edge semantic search and vector capabilities. By combining the strengths of vector search with LLM-powered insights, you can unlock faster, more meaningful results from your data—all without the usual performance or cost trade-offs. Ready to build smarter, context-driven applications? Dive in and see how easy it is to get started.
Next Steps
- Dive deeper into MariaDB Vector to explore its capabilities.
- Want to contribute? Help improve the LlamaIndex-MariaDB integration here.
- Learn more about MariaDB in the cloud with SkySQL.