Retrieval Augmented Generation (RAG) with vector databases: Expanding AI Capabilities

What is RAG?

Retrieval Augmented Generation (RAG) is a technique that enhances large language models (LLMs) with additional knowledge, such as reliable facts from specific sources, private or personal information not available to others, or simply fresh news, to improve their answers. Typically, in RAG, this additional knowledge is provided to the model from a vector database. For example, you can add internal data from your company, the latest news, or data from your personal devices to get responses that use your context. The model can then truly help you like an expert instead of giving generalized answers. This technique also reduces hallucinations.

Why RAG?

Let’s take a look at the key benefits that RAG in general offers:

  • Customization and Adaptation: RAG helps LLMs tailor responses to specific domains or use cases by using vector databases to store and retrieve domain-specific information. It turns general intelligence into expert intelligence.
  • Contextual Relevance: By incorporating information retrieved from a large corpus of text, RAG models can generate contextually relevant responses. This improves the quality of generated responses compared to traditional generation models.
  • Accuracy and diversity: Incorporating external information also helps to generate more informative and accurate responses and keeps the LLM up to date. It also helps to avoid repetitive or generic responses and allows for more diverse and interesting conversations.
  • Cost-effective implementation: RAG requires less task-specific training data than fine-tuning the foundation models. When comparing retrieval augmented generation with fine-tuning, RAG’s ability to use external knowledge stands out: while fine-tuning requires lots of labeled data, RAG can rely on external sources. This is particularly beneficial in scenarios where annotated training data is limited or expensive to obtain, thus providing a cost-effective implementation.
  • Transparency: RAG models provide transparency in their responses by explicitly indicating the source of retrieved information. This allows users to understand how the model arrived at its response and helps enhance trust in the generated output.

Therefore, RAG is suitable for applications where access to a vast amount of specialized data is necessary. For example, a customer support bot that pulls details from FAQs and generates coherent, conversational responses. Another example is an email drafting tool that fetches information about recent meetings and generates a personalized summary.

How retrieval augmented generation works

Let’s discuss the mechanics of how RAG operates with a vector database, covering its main stages from dataset creation to response generation (see figure).

Retrieval augmented generation diagram


  • DB creation: Creation of external dataset

Before RAG can be used, the vector database has to be created. Data that lies outside the LLM’s training dataset (e.g. up-to-date or domain-specific information) is identified and added to the dataset. This dataset is then converted into vector embeddings by an embedding model and stored in the vector database.

  • DB in use: Retrieval of relevant information
    Once a query comes in, it is also converted into a vector embedding, which is then used to retrieve the most relevant results from the database. To achieve this, RAG uses semantic search techniques, also known as vector search, to understand the user’s query and/or context and retrieve contextually relevant information from a large dataset. Vector search goes beyond keyword matching and focuses on semantic relationships, improving the quality of the retrieved information and the overall performance of the RAG system in generating contextually relevant responses.
  • DB in use: Augmentation
    At this stage, the user’s query is augmented by adding the relevant data retrieved in the previous stage. Often, only the top results from vector search are considered relevant data. Many databases also apply additional filtering techniques here.
  • Generation
    The augmented query is sent to the LLM to generate an accurate answer (the four stages are illustrated in the sketch below).
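
To make these stages concrete, here is a minimal sketch of the flow in Java. The EmbeddingModel, VectorDatabase, and Llm interfaces are hypothetical placeholders for whichever embedding model, vector database, and LLM client you actually use; only the flow of data between the four stages follows the description above.

```java
import java.util.List;

public class RagPipelineSketch {

    // Hypothetical interfaces standing in for real components.
    interface EmbeddingModel { float[] embed(String text); }
    interface VectorDatabase {
        void put(String document, float[] embedding);          // DB creation
        List<String> nearestNeighbors(float[] query, int k);   // retrieval
    }
    interface Llm { String generate(String prompt); }

    private final EmbeddingModel embedder;
    private final VectorDatabase db;
    private final Llm llm;

    RagPipelineSketch(EmbeddingModel embedder, VectorDatabase db, Llm llm) {
        this.embedder = embedder;
        this.db = db;
        this.llm = llm;
    }

    // Stage 1: DB creation - embed external documents and store them.
    void index(List<String> documents) {
        for (String doc : documents) {
            db.put(doc, embedder.embed(doc));
        }
    }

    // Stages 2-4: retrieval, augmentation, and generation for a user query.
    String answer(String userQuery) {
        float[] queryVector = embedder.embed(userQuery);             // retrieval: embed the query
        List<String> context = db.nearestNeighbors(queryVector, 3);  // retrieval: top results only
        String augmentedPrompt = "Context:\n" + String.join("\n", context)
                + "\n\nQuestion: " + userQuery;                      // augmentation
        return llm.generate(augmentedPrompt);                        // generation
    }
}
```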

The Role of Long Context Windows

The rise of new LLMs with long context windows (1+ million tokens), like Gemini 1.5, has raised the discussion of whether long context windows will replace RAG. A long context window enables users to directly incorporate huge amounts of data into a query, increasing the context available to the LLM and thereby improving its answers.

Long context windows and RAG each have pros and cons, and neither will kill the other. Rather than being mutually exclusive, large context windows and RAG complement each other. Large context windows can enhance RAG applications by expanding the margin of precision and accommodating vast amounts of data. However, the capability of a model to take a long context does not mean that it can efficiently leverage all the information: if the relevant information is located in the middle of the context window, the LLM’s ability to recall it is worse than for information located at the beginning. To use RAG with a long context window, reranking (e.g. with a cross-encoder) should be applied. The reranking model first calculates a matching score between a given query and vectors in the database (e.g. representing documents), and then rearranges the vector search results so that the most relevant ones are prioritized.
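
As an illustration of the reranking idea, the sketch below assumes a hypothetical CrossEncoder that scores a (query, document) pair; actual cross-encoder models expose a similar scoring function. The vector search candidates are simply reordered by that score before being used for augmentation.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class RerankingSketch {

    // Hypothetical cross-encoder: returns a relevance score for a (query, document) pair.
    interface CrossEncoder {
        double score(String query, String document);
    }

    // Reorder the vector search results so that the most relevant documents come first.
    static List<String> rerank(CrossEncoder crossEncoder, String query, List<String> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(
                        (String doc) -> crossEncoder.score(query, doc)).reversed())
                .collect(Collectors.toList());
    }
}
```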

Future Directions of RAG

While RAG offers numerous benefits, there are still opportunities for improvement. Researchers are exploring ways to enhance RAG by combining it with other techniques, such as fine-tuning (RAFT) or long context windows (in combination with reranking). Another direction of research is expanding RAG capabilities by advancing data handling (including multimodal data), evaluation methodologies, and scalability. Finally, RAG also benefits from new advances in optimizing LLMs to run locally on restricted devices (mobile, IoT), along with the emergence of the first on-device vector database. Now, RAG can be performed directly on your mobile device, prioritizing privacy, low latency, and offline capabilities.

The on-device Vector Database for Android and Java

ObjectBox 4.0 is the very first on-device, local vector database for Android and Java developers to enhance their apps with local AI capabilities (Edge AI). A vector database facilitates advanced vector data processing and analysis, such as measuring semantic similarities across different document types like images, audio files, and texts. A classic use case is to enhance a Large Language Model (LLM), or a Small Language Model (SLM, e.g. Phi-3), with your domain expertise, your proprietary knowledge, and/or your private data. Combining the power of AI models with a specific knowledge base enables high-quality, precisely matching results that a generic model simply cannot provide. This is called “retrieval-augmented generation” (RAG). Because ObjectBox works on-device, you can now do on-device RAG with data that never leaves the device and therefore stays 100% private. This is your chance to explore this technology on-device.

Vector Search (Similarity Search)

With this release, it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW (Hierarchical Navigable Small World). It’s scalable because it can find relevant data within millions of entries in a matter of milliseconds.

We pick up the example used in our vector search documentation. In short, we use cities with a location vector to perform proximity search. Here is the City entity and how to define a HNSW index on the location:
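
The code block from the original post is not reproduced here, so the following is a minimal sketch based on the ObjectBox Java API as described in the vector search documentation; treat the exact annotation parameters as an assumption and check the docs for details.

```java
import io.objectbox.annotation.Entity;
import io.objectbox.annotation.HnswIndex;
import io.objectbox.annotation.Id;

@Entity
public class City {
    @Id
    public long id;

    public String name;

    // HNSW index on the 2-dimensional location vector (latitude, longitude).
    @HnswIndex(dimensions = 2)
    public float[] location;
}
```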

Vector objects are inserted as usual (the indexing is done automatically behind the scenes):
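
A sketch of the insert, assuming a BoxStore named store has already been built for this model (coordinates are approximate):

```java
Box<City> box = store.boxFor(City.class);

City madrid = new City();
madrid.name = "Madrid";
madrid.location = new float[]{40.4168f, -3.7038f};

City barcelona = new City();
barcelona.name = "Barcelona";
barcelona.location = new float[]{41.3851f, 2.1734f};

// The HNSW index is updated automatically when objects are put.
box.put(madrid, barcelona);
```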

To perform a nearest neighbor search, use the new nearestNeighbors(queryVector, maxResultCount) query condition and the new “find with scores” query methods (the score is the distance to the query vector). For example, let’s find the 2 closest cities to Madrid:
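
A sketch of such a query, based on the documented nearestNeighbors condition and findWithScores(); the generated City_ meta class and the ObjectWithScore result type are assumed to come from the ObjectBox Java binding:

```java
float[] madridLocation = {40.4168f, -3.7038f};

Query<City> query = box.query(
        City_.location.nearestNeighbors(madridLocation, 2))
        .build();

// findWithScores() returns the matching objects together with their distance to the query vector.
List<ObjectWithScore<City>> results = query.findWithScores();
for (ObjectWithScore<City> result : results) {
    System.out.println(result.get().name + " (distance: " + result.getScore() + ")");
}
query.close();
```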

Vector Embeddings

In the cities example above, the vectors were straightforward: they represent latitude and longitude. Maybe you already have vector data as part of your data. But often, you don’t. So where do you get the vector embeddings for texts, images, videos, or audio files from?

For most AI applications, vectors are created by an embedding model. There are plenty of embedding models to choose from, but first you have to decide whether it should run in the cloud or locally. Online embeddings are the easier way to get started and great for first tests: you can set up an account at your favorite AI provider and create the embeddings online.

Depending on how much you care about privacy, you can also run embedding models locally and create your embeddings on your own device. There are a couple of choices for desktop / server hardware, e.g. check these on-device embedding models. For Android, MediaPipe is a good start as it has embedders for text and images.

Updated open source benchmarks 2024 (CRUD)

A new release is also a good occasion to update our open source benchmarks. The Android performance benchmark app provides many more options, but here are the key results:

CRUD is short for the basic operations a database does: create, read, update and delete. It’s an important metric for the general efficiency of a database.

Disclaimer 1: our focus is the “Object” performance (you may find a hint for that in our product name 🙂); so e.g. relational systems may perform a bit better when you directly work with raw columns and rows.

Disclaimer 2: ObjectBox delete performance was cut off at 800k per second to keep the Y axis within reasonable bounds. The actually measured value was 2.5M deletes per second.

Disclaimer 3: there cannot be enough disclaimers on any performance benchmark. It’s a complex topic where details matter. It’s best to form your own picture for your own use case. We try to provide a fair “arena” with our open source benchmarks, so they can be a starting point for you.

Feedback and Outlook: On-device vector search Benchmarks for Android coming soon

We’re still working on a lot of stuff (as always ;)) and with on-device / local vector search being a completely new technology for Android, we need your feedback, creativity and support more than ever. We’ll also soon release benchmarks on the vector search. Follow us on LinkedIn, GitHub, or Twitter to keep up-to-date.

Vector search: making sense of search queries

Today, finding the most valuable information in your search is more complicated than finding a needle in a haystack. Traditional search engines match keywords and favor SEO-optimized content, but what if there was a way for search engines to truly understand the meaning behind our queries? Enter vector search – a powerful technology that is transforming how we navigate information, not just for users, but also for applications performing background searches. In this article, we will discuss what vector search is and how it works.

What is vector search and why should you care?

Example Results with a traditional search for “Simple Fruit Cake”.

Vector search, which is also known as semantic search, is a technology that improves search accuracy by understanding the meaning (semantics) of the data and relations between its parts. Unlike traditional search, vector search efficiently handles synonyms, typos, ambiguous language, and broad or fuzzy queries. This is because it focuses on meaning, not just keywords.

Imagine that you are searching for a dessert to cook during the weekend. In a traditional search engine, the “simple fruit cake” query will reveal only websites that include these keywords. However, a vector search engine is able to provide results like “apple pie in 20 minutes” or “easy summer desserts”, which capture the essence of the query and align with your desire for a straightforward dessert option, providing more valuable results to you. 

At its core, vector search uses AI models, such as large language models (LLMs) like GPT, to transform data into mathematical vectors, also known as vector embeddings.

What is a vector embedding?

2D Vector Space Representation. “Easy apple pie” is close to “simple fruit cake” as they are both simple and have fruit as an ingredient. “Easy chocolate mousse” shares simplicity but does not contain fruit. “Fancy plum cake” has fruit but is not simple to make. And “extravagant chocolate mousse” does not share either simplicity or fruit as an ingredient. Thus, it is the farthest from “simple fruit cake”.

A vector or vector embedding is a numerical representation of any kind of unstructured data (e.g. texts, images, videos, audio). It captures its meaning while being easy and efficient to compute with. Think of it like this: imagine you have a collection of cake recipes. You can convert each recipe into a vector embedding, which is like a unique numerical code that represents the recipe’s characteristics (ingredients, cooking methods, flavors, etc.).

Once all the recipes are encoded into embeddings, we can perform a similarity search. This means we can compare the vectors to see how similar the recipes are. For example, the vector for an easy apple pie recipe would be close to the vector for a simple fruit cake recipe because they share similar characteristics (e.g. simplicity, fruitiness). On the other hand, the vector for an extravagant chocolate mousse cake would be farther away because it involves different ingredients and methods.

How to compare vectors?

Vector similarity is a measure of how similar two vectors are (see ep. 4 of ObjectBox Bites). Three commonly used ways to compare vectors are Jaccard Similarity, Cosine Similarity, and L2 Distance (also known as Euclidean distance). Jaccard Similarity calculates the ratio of elements common to both vectors to the total number of elements in both vectors. Cosine Similarity calculates the cosine of the angle between two vectors. The last method, L2 distance, calculates the straight-line distance between the two points in space represented by the vectors; this is the most frequently used method in AI applications. It is important to note that the choice of vector comparison method does not affect the mechanics of similarity search.
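
As a small, self-contained illustration of these metrics (not tied to any particular database), here is how cosine similarity, L2 distance, and a set-based Jaccard similarity can be computed in plain Java; real embeddings have hundreds of dimensions, and the tiny 2-dimensional vectors below only mirror the recipe example.

```java
import java.util.HashSet;
import java.util.Set;

public class VectorSimilarity {

    // Cosine similarity: cosine of the angle between two vectors (1.0 = same direction).
    static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // L2 (Euclidean) distance: straight-line distance between the two points.
    static double l2Distance(float[] a, float[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }

    // Jaccard similarity on sets: shared elements divided by all distinct elements.
    static double jaccardSimilarity(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        float[] easyApplePie = {0.9f, 0.8f};        // hypothetical (simplicity, fruitiness) features
        float[] simpleFruitCake = {0.85f, 0.9f};
        float[] fancyChocolateMousse = {0.1f, 0.05f};

        System.out.println(cosineSimilarity(easyApplePie, simpleFruitCake));   // close to 1
        System.out.println(l2Distance(easyApplePie, fancyChocolateMousse));    // comparatively large
    }
}
```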

What is a vector database and how is it related to vector search?

A vector database is a specialized database designed to store, manage, and search vectors efficiently. This efficiency is crucial for handling large datasets and performing fast vector similarity searches. Also, with a vector database, the knowledge of AI models can be improved, adapted, and updated. Therefore, today, most AI apps use a vector database.

Imagine having an AI that knows your habits, your preferences, your health data, maybe even what’s in your fridge, and can use this knowledge to suggest recipes that fit your lifestyle and individual preferences. A standard AI model doesn’t have that data and wouldn’t learn that way, but with a vector database it can. Now, when you search for a “fruit cake recipe”, using this data, it can suggest a “simple fruit cake” without sugar if you usually prefer quick, easy, and healthy recipes, or a “fancy plum cake” if you enjoy more challenging baking projects and don’t like apples. Or, a vegan option, if you have neither milk nor eggs left in the fridge.

This technique is called Retrieval-Augmented Generation (RAG). It enhances the capabilities of LLMs with additional data (e.g. personal data, company data, fresh data) stored in a vector database.

When you query a vector database, it uses the query’s vector representation to find the nearest neighbors in the database.

Nearest Neighbor Search

How do we find the nearest neighbor to our query vector? The most straightforward approach is a brute-force search: it calculates the distance between our query vector and every other vector in the database, one by one. Any of the metrics discussed in “How to compare vectors” can be used. However, this brute-force approach has a time complexity of O(N*d), where N is the number of vectors and d is the dimensionality. This becomes computationally expensive for large datasets.
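
A brute-force search is short enough to sketch directly; the nested loops below are exactly where the O(N*d) cost comes from (N stored vectors, d dimensions). The Entry record is just an illustrative pairing of a name and its vector.

```java
import java.util.List;

public class BruteForceSearch {

    record Entry(String name, float[] vector) {}

    // Scans every stored vector once: O(N * d) for N entries of dimensionality d.
    static Entry nearestNeighbor(List<Entry> entries, float[] query) {
        Entry best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Entry entry : entries) {
            double distance = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = entry.vector()[i] - query[i];
                distance += diff * diff;   // squared L2 distance is sufficient for ranking
            }
            if (distance < bestDistance) {
                bestDistance = distance;
                best = entry;
            }
        }
        return best;
    }
}
```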

Since exact nearest neighbor search can be slow for massive datasets, we often turn to approximate nearest neighbor (ANN) algorithms. These algorithms prioritize efficiency by finding neighbors that are very close (but not necessarily the absolute closest) to the query vector, significantly reducing search time. 

Continuing with the cooking assistant app example, imagine you’re searching for a “fruit cake recipe”. Assume that in our database, the real closest recipe is “simple apple pie”. With a massive database, an exact nearest neighbor search might take a long time to find the perfect match. However, an ANN algorithm can quickly find a recipe that is very similar to what you’re looking for, such as a “simple fruit cake” or a “basic apple pie”, even if it might not be the exact closest match. This efficiency ensures you get relevant and useful recipe suggestions promptly, enhancing your overall experience without a noticeable compromise in quality.

Approximate Nearest Neighbour Search

Now, let’s delve into the world of Approximate Nearest Neighbor (ANN) algorithms. The way you search for nearest neighbors depends on how the data is stored in the vector database. One of the earliest such algorithms, established in 1975, is the k-d tree. It works by recursively splitting the data space using hyperplanes, making the search process more efficient (see ep. 5 of ObjectBox Bites). However, k-d trees, like many exact nearest neighbor algorithms, suffer from the curse of dimensionality: as the number of dimensions (features) in your data increases, the distance between points becomes less meaningful, making searching very slow in high-dimensional spaces like those used in vector databases.

For instance, consider simple fruit recipes. With a few features, such as cooking time and number of ingredients, finding similar recipes would be relatively straightforward. However, if we also include many other features like sweetness level, calorie count, fruit type, all specific ingredients, preparation complexity, and user ratings, the number of dimensions increases significantly. In such high-dimensional spaces, the traditional k-d tree method becomes inefficient because the distances between points (recipes) become less distinct and meaningful.

To overcome this challenge, ANN algorithms leverage two main approaches: indexing methods and sketching methods. Indexing methods create a hierarchical data structure that allows for faster exploration of the search space; imagine a well-organized library with categorized sections instead of just randomly placed books. Sketching methods, on the other hand, don’t search the entire dataset directly. Instead, they create compressed versions (sketches) of the data that are faster to compare with the query vector, which reduces the search time significantly. Often, the two approaches are combined for optimal performance.


A popular example of an ANN search implementation for high-dimensional data is the Hierarchical Navigable Small World (HNSW) algorithm (e.g. implemented in Azure AI). HNSW relies on graph-based indexing to efficiently navigate the data space and find nearest neighbors. For more details, watch episodes 6, 7, and 8 of the ObjectBox Bites miniseries, where we describe the fundamentals of HNSW.

Take-away notes

To sum up, vector search offers a significant leap forward in how we search for information. By understanding the meaning and relationships behind data, it delivers more relevant and accurate results, even for unstructured data and complex queries. This technology has the potential to revolutionize various fields, from enhancing search engines to empowering AI applications. As vector search continues to evolve, we can expect even more exciting possibilities for navigating the ever-growing ocean of information and unlocking its full potential. This includes operating with data directly on the devices it was created on, reducing cloud costs, eliminating the reliance on an internet connection, and enabling the use of your private data without it ever being shared (100% private). If you’re interested in other AI and vector database-related topics, check out the ObjectBox mini-series. Stay tuned for more articles in the future.

Evolution of search: traditional vs vector search

Introduction

In today’s digital landscape, searching for information is integral to our daily lives, whether for education, research, work, or shopping. However, as the volume and complexity of data keep growing, traditional search methods face more and more challenges in providing accurate and relevant results. That’s where vector search comes in. We’re already seeing Google evolving its search engine to incorporate vector search (RankBrain, BERT, neural matching), and we expect even greater incorporation of AI tools to improve the search experience. Let’s explore the differences between traditional (keyword) search and vector search to understand how these technologies are shaping our search experiences, and how this impacts the discoverability of any content you might produce.

Traditional (keyword) search

Traditional search performs exact keyword matching from user queries to the data to retrieve relevant results. For example, searching for “programming languages” with traditional search will list every source containing those words. A more advanced version can also incorporate additional rules to enhance search results, such as:

  • keyword frequency (how often the term “programming languages” is used within the result text),
  • the presence of related terms (e.g. “Java”, “Python”, “C++” versus “cooking”, “gardening”),
  • or location (results closer to your location are favored).

While this approach has served us well, it struggles with ambiguous language and synonyms, as well as with the impact of SEO strategies, often resulting in less accurate or less valuable search results. This can be especially frustrating for businesses that are trying to get their content seen by the right people. For example, a business that publishes a blog post about “sustainable fashion tips” might miss out on potential customers who are searching for “eco-friendly clothing recommendations” or “green clothing ideas”, simply because their keywords don’t exactly match.

Vector (semantic) search

2D Vector Space Representation. In this space, “Python” and “Java” are close to each other as well as to the “Programming language” query we are searching for because they are similar (they share high values in their features).


On the other hand, vector search takes a different approach by seeking out related objects that share similar characteristics or semantics. You can think of it as finding results based on meaning or understanding rather than just exact wording. For example, searching for “programming languages” with vector search will not only find sources mentioning those exact words but also identify specific languages like “Python” or “Java” as well as related concepts such as “coding tutorials” or “development frameworks”.

To do a vector search, first of all, the content, such as texts, images, audio files, or videos, needs to be represented as vectors/embeddings (also often called vector embeddings) by AI models. These embeddings represent data in a multidimensional vector space. Vectors capture the essence or semantics of the data they represent while remaining computationally efficient.

Once these vector representations are generated, they are basically sets of numbers, and therefore easy to compute with. For instance, instead of searching for a specific word in text, we aim to find the closest vector (from the text embeddings) to the query vector (representing the word we’re searching for). This process relies on well-established vector computing methods, such as calculating the distance between vectors or minimizing the angle between them (Cosine similarity).

Comparison

Let’s now compare different aspects of searching to understand the main differences between traditional search and vector search (also summarized in the table below).

  • Search Approach
    Traditional search relies on matching keywords directly from the user query to the content. Vector search uses vector embeddings to capture the semantics of the data and performs a meaning-based search.
  • Ambiguity handling
    Consequently, vector search shines when it comes to handling ambiguity. It is superior at handling synonyms, ambiguous language, and broad or fuzzy queries compared to traditional search. This also directly influences the relevance of the search results.
  • Search relevance calculation
    The metrics used for calculating search relevance differ. Traditional search uses term frequency-inverse document frequency (TF-IDF) and BM25, while vector search uses Jaccard Similarity, Cosine Similarity, and L2 (Euclidean) Distance.
  • Speed and Implementation
    Traditional search is easy to implement, straightforward to use, and fast for simple queries. Vector search may be slower for simple queries and more complicated to implement, especially for huge datasets. However, approximate nearest neighbor (ANN) techniques speed up the process significantly.
  • Scalability
    The continuous expansion of content challenges the scalability of traditional search, whereas scalability is one of vector search’s advantages.
  • Cost
    While traditional search may have lower computational requirements, the superior performance and accuracy of vector search often justify the investment in additional computing power. Furthermore, the computational costs of vector search can be significantly reduced with the use of ANN.
Comparison table between keyword search and vector search

Conclusions

In summary, both traditional search and vector search offer distinct advantages and drawbacks. Vector search excels in handling ambiguity, correcting typos, enhancing relevance, and managing extensive datasets. Traditional search remains advantageous for straightforward queries, exact matches, or smaller datasets. Historically, limited computational resources, particularly for on-device computation (i.e. Edge Computing), favored traditional search. However, the landscape is evolving rapidly with the introduction of the first edge vector database solution by ObjectBox. This innovation promises to revolutionize the scenario by optimizing vector search for devices with constrained resources, extending the benefits of semantic search to the Edge.

On-device Vector Database for Dart/Flutter

ObjectBox 4.0 introduces the first on-device vector database for the Dart/Flutter platform, allowing Dart developers to enhance their apps with AI in ways previously not possible. A vector database facilitates advanced data processing and analysis, such as measuring semantic similarities across different document types like images, audio files, and texts. If you want to go all-in with on-device AI, combine the vector search with a large language model (LLM) and make the two interact with individual documents. You may have heard of this as “retrieval-augmented generation” (RAG). This is your chance to explore this as one of the first Dart developers.

Vector Search for Dart/Flutter

Now, let’s look into the Dart specifics! With this release, it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW. It’s highly scalable and can find relevant data within millions of entries in a matter of milliseconds.

Let’s have a deeper look into the example used in our vector search documentation. In this example, we use cities with a location vector to perform proximity search. Here is the City entity and how to define a HNSW index on the location (it would also need additional properties like an ID and a name, of course):
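
The original code block is not reproduced here, so the following is a sketch based on the ObjectBox Dart API as described in the vector search documentation; the exact annotations (in particular the float-vector property type) should be treated as an assumption and verified against the docs.

```dart
import 'package:objectbox/objectbox.dart';

@Entity()
class City {
  @Id()
  int id = 0;

  String? name;

  // HNSW index on the 2-dimensional location vector (latitude, longitude).
  @HnswIndex(dimensions: 2)
  @Property(type: PropertyType.floatVector)
  List<double>? location;

  City(this.name, this.location);
}
```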

Vector objects are inserted as usual (the indexing is done automatically behind the scenes):
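
A sketch of the insert, assuming a Store named store has already been opened for this model (coordinates are approximate):

```dart
final box = store.box<City>();

// The HNSW index is updated automatically when objects are put.
box.putMany([
  City('Barcelona', [41.3851, 2.1734]),
  City('Nairobi', [-1.2921, 36.8219]),
  City('Salzburg', [47.8095, 13.0550]),
]);
```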

To perform a nearest neighbor search, use the new nearestNeighborsF32(queryVector, maxResultCount) query condition and the new “find with scores” query methods (the score is the distance to the query vector). For example, to find the 2 closest cities:
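
A sketch of such a query using the documented nearestNeighborsF32 condition and findWithScores(); the query vector below (roughly Munich) is just an example, and the generated City_ meta class is assumed to come from the ObjectBox code generator.

```dart
final query = box
    .query(City_.location.nearestNeighborsF32([48.1351, 11.5820], 2))
    .build();

// findWithScores() returns the objects together with their distance to the query vector.
final results = query.findWithScores();
for (final result in results) {
  print('${result.object.name} (distance: ${result.score})');
}
query.close();
```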

Vector Embeddings

In the cities example above, the vectors were straightforward: they represent latitude and longitude. Maybe you already have vector data as part of your data, but often, you don’t. So where do you get the vectors from?

For most AI applications, vectors are created by a so-called embedding model. There are plenty of embedding models to choose from, but first you have to decide whether it should run in the cloud or locally. Online embeddings are the easier way to get started: just set up an account at your favorite AI provider and create the embeddings online. Alternatively, you can also run your embedding model locally on the device. This might require some research. A good starting point may be TensorFlow Lite, which also has a Flutter package. If you want to use really good embedding models (starting at around 90 MB), you can also check these on-device embedding models. These might require a more capable inference runtime, though. E.g. if you are targeting desktops, you could use ollama (e.g. using this package).

CRUD benchmarks 2024

A new release is also a good occasion to refresh our open source benchmarks. Have a look:

CRUD is short for the basic operations a database does: Create, Read, Update and Delete. It’s an important metric for the general efficiency of a database.

Disclaimer 1: The benchmark uses synchronous methods if they are available. The rationale behind this was that we wanted to measure the plain database performance without any “async overhead” introduced by the Dart VM (eliminate an “external” factor). However, we are aware of this issue and will include asynchronous operations in a future version of the benchmarks.

Disclaimer 2: Our focus is the “Object” performance (you may find a hint for that in our product name 🙂); so e.g. relational systems may perform a bit better when you directly work with raw columns and rows.

Disclaimer 3: Hive’s read performance was cut off at 4 million/s. For reading, Hive is basically a hashtable that stores all objects in memory (i.e. everything must fit into memory). We measured 25 million/s.

Disclaimer 4: There cannot be enough disclaimers on any performance benchmark. It’s a complex topic where details matter. It’s best to form your own picture for your own use case. We try to provide a fair “arena” with our open source benchmarks, so they can be a starting point for you.

What’s next?

We are excited to see what you will build with the new vector search. Let us know! And please give us feedback. It’s the very first release of an on-device vector database ever – and the more feedback we get on it, the better the next version will be.

The first On-Device Vector Database: ObjectBox 4.0

The new on-device vector database enables advanced AI applications on small restricted devices like mobile phones, Raspberry Pis, medical equipment, IoT gadgets and all the smart things around you. It is the missing piece to a fully local AI stack and the key technology to enable AI language models to interact with user specific data like text and images without an Internet connection and cloud services.

An AI Technology Enabler

Recent AI language models (LLMs) have demonstrated impressive capabilities while being small enough to run on, e.g., mobile phones. Recent examples include Gemma, Phi-3 and OpenELM. The next logical step is to use these LLMs for advanced AI applications that go beyond a mere chat. A new generation of apps is currently evolving: these apps create “flows” with user-specific data and multiple queries to the LLM to perform complex tasks. This is also known as RAG (retrieval augmented generation), which, in its simplest form, allows you to chat with your documents. And now, for the very first time, this is possible locally on restricted devices using a fully fledged embedded database.

What is special about ObjectBox Vector Search?

We know restricted devices. Where others see limitations, we see potential, and we have repeatedly demonstrated that we can create super-efficient software for them, maximizing speed, minimizing resource use, and saving battery life and CO2. With this knowledge, we approached vector search in a unique way.

Efficient memory management is the key. The challenge with vector data is that, on the one hand, it consumes a lot of memory, while on the other hand, relevant vectors must be present in memory to compute distances between vectors efficiently. For this, we introduced special multi-layered caching that gives the best performance across the full range of devices, from memory-constrained small devices to large machines that can keep millions of vectors in memory. This worked out so well that we saw ObjectBox outperform several vector databases built for servers (open source benchmarks coming soon). This is no small feat given that ObjectBox still upholds full ACID properties, e.g. caching must be transaction-aware.

Also, keep in mind that ObjectBox is a fully capable database that allows you to store complex data objects along with vectors. From an ObjectBox data model point of view, a vector is “just” another property type. This allows you to store all your data (vectors along with objects) in a single database. This “one database” approach also includes queries. You can already combine vector search with other conditions. Note that some limitations still apply with this initial release. Full hybrid search is close to being finished and will be part of one of the next releases.

In short, the following features make ObjectBox a unique vector database:

  • Embedded Database that runs inside your application without latency
  • Vector search is based on the state-of-the-art HNSW algorithm, which scales very well with growing data volume
  • HNSW is tightly integrated within our internal database. Vector Search doesn’t just run “on top of database persistence”.
  • With this deep integration we do not need to keep all vectors in memory.
  • Multi-layered caching: if a vector is not in-memory, ObjectBox fetches it from disk.
  • Not just a vector database: you can store any data in ObjectBox, not just vectors. You won’t need a second database.
  • Low minimum hardware requirements: e.g. an old Raspberry Pi runs ObjectBox smoothly.
  • Low memory footprint: ObjectBox itself just takes a few MB of memory. The entire binary is only about 3 MB (compressed around 1 MB).
  • Scales with hardware: efficient resource usage is also an advantage when running on more capable devices like the latest phones, desktops and servers.
  • ObjectBox additionally offers commercial editions, e.g. a Server Cluster mode, GraphQL, and of course, ObjectBox Sync, our data synchronization solution.

Why is this relevant? AI anywhere & anyplace

With history repeating itself, we think AI is in a “mainframe era” today. Just like clunky computers from decades before, AI is restricted to big and very expensive machines running far away from the user. In the future, AI will become decentralized, shifting to the user and their local devices. To support this shift, we created the ObjectBox vector database. Our vision is a future where AI can assist everyone, anytime, and anywhere, with efficiency, privacy, and sustainability at its core.

What do we launch today?

Today, we are releasing ObjectBox 4.0 with Vector Search for a variety of languages:

*) We acknowledge Python’s popularity within the AI community and thus have invested significantly in our Python binding over the last months to make it part of this initial release. Since we still want to smooth out some rough edges with Python, we decided to label Python an alpha release. Expect Python to quickly catch up and match the comfort of our more established language bindings soon (e.g. automatic ID and model handling).

Want to get started right away? Check our Vector Search documentation to see how to use it!

One more thing: ObjectBox Open Source Database (OSS)

We are also very happy to announce that we will fully open source the core of ObjectBox. As a company we follow the open core model. Since we still have some cleaning up to do, this will happen in one of the next releases, likely 4.1.

“Release week”

With today’s initial releases, we are far from done yet. Starting next Tuesday, you can expect additional announcements from us. Follow us to get the news as soon as it is released.

What’s next?

This is our very first version of a “vector database”. And while we are very happy with this release, there are still so many things to do! For example, we will optimize vector search by adding vector quantization and integrate it more tightly with our data synchronization. We are also focusing on expanding our solution’s reach through strategic partnerships. If you think you are a good fit, let us know. And as always, we are very eager to get some feedback from you! Take care.