ObjectBox 4.0 introduces the first on-device vector database for the Dart/Flutter platform, allowing Dart developers to enhance their apps with AI in ways previously not possible. A vector database facilitates advanced data processing and analysis, such as measuring semantic similarities across different document types like images, audio files, and texts. If you want to go all-in with on-device AI, combine the vector search with a large language model (LLM) and make the two interact with individual documents. You may have heard of this as “retrieval-augmented generation” (RAG). This is your chance to explore this as one of the first Dart developers.
Vector Search for Dart/Flutter
Now, let’s look into the Dart specifics! With this release, it is possible to create a scalable vector index on floating point vector properties. It’s a special index based on an algorithm called HNSW (Hierarchical Navigable Small World). It’s highly scalable and can find relevant data within millions of entries in a matter of milliseconds.
Let’s take a closer look at the example used in our vector search documentation. In this example, we use cities with a location vector to perform proximity search. Here is the City entity and how to define an HNSW index on the location (it would also need additional properties like an ID and a name, of course):
```dart
@Entity()
class City {
  @HnswIndex(dimensions: 2)
  @Property(type: PropertyType.floatVector)
  List<double>? location;
}
```
Vector objects are inserted as usual (the indexing is done automatically behind the scenes):
```dart
final box = store.box<City>();
box.putMany([
  City("Barcelona", [41.385063, 2.173404]),
  City("Nairobi", [-1.292066, 36.821945]),
  City("Salzburg", [47.809490, 13.055010]),
]);
```
To perform a nearest neighbor search, use the new nearestNeighborsF32(queryVector, maxResultCount) query condition and the new “find with scores” query methods (the score is the distance to the query vector). For example, to find the 2 closest cities:
```dart
final madrid = [40.416775, -3.703790];
final query =
    box.query(City_.location.nearestNeighborsF32(madrid, 2)).build();
final closest = query.findWithScores()[0].object;
```
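To work with all returned matches rather than just the first, you can iterate over the scored results. A minimal sketch, assuming the City entity from above also has a `name` property:

```dart
// Find the 2 nearest cities to Madrid and print each with its score.
// The score is the distance to the query vector: smaller means closer.
final madrid = [40.416775, -3.703790];
final query =
    box.query(City_.location.nearestNeighborsF32(madrid, 2)).build();
for (final result in query.findWithScores()) {
  print('${result.object.name}: ${result.score}');
}
query.close(); // release native query resources when done
```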
Vector Embeddings
In the cities example above, the vectors were straightforward: they represent latitude and longitude. Maybe you already have vector data as part of your data, but often you don’t. So where do you get the vectors from?
For most AI applications, vectors are created by a so-called embedding model. There are plenty of embedding models to choose from, but first you have to decide whether it should run in the cloud or locally. Online embeddings are the easier way to get started: just set up an account at your favorite AI provider and create embeddings online. Alternatively, you can run your embedding model locally on device. This might require some research. A good starting point may be TensorFlow Lite, which also has a Flutter package. If you want to use really good embedding models (starting at around 90 MB), you can also check these on-device embedding models. These might require a more capable inference runtime, though. For example, if you are targeting desktops, you could use ollama (e.g. using this package).
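As a rough sketch of the local route: ollama exposes an HTTP endpoint for embeddings, which you can call from Dart with nothing but the standard library. The model name and endpoint below are assumptions; adjust them to whatever ollama setup you actually run (the server must be running and the model pulled beforehand).

```dart
import 'dart:convert';
import 'dart:io';

// Sketch: fetch an embedding vector for a piece of text from a
// locally running ollama server (default port 11434). Assumes an
// embedding model such as 'nomic-embed-text' has been pulled.
Future<List<double>> embed(String text) async {
  final client = HttpClient();
  try {
    final request = await client
        .postUrl(Uri.parse('http://localhost:11434/api/embeddings'));
    request.headers.contentType = ContentType.json;
    request.write(jsonEncode({'model': 'nomic-embed-text', 'prompt': text}));
    final response = await request.close();
    final body = await response.transform(utf8.decoder).join();
    // The response is a JSON object with an 'embedding' array.
    return (jsonDecode(body)['embedding'] as List).cast<double>();
  } finally {
    client.close();
  }
}
```

The resulting `List<double>` can then be stored on an entity property with an `@HnswIndex` (with `dimensions` matching the model’s output size) and queried just like the location vectors above.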
CRUD benchmarks 2024
A new release is also a good occasion to refresh our open source benchmarks. Have a look:
CRUD is short for the basic operations a database does: Create, Read, Update and Delete. It’s an important metric for the general efficiency of a database.
Disclaimer 1: The benchmark uses synchronous methods if they are available. The rationale behind this was that we wanted to measure the plain database performance without any “async overhead” introduced by the Dart VM (eliminate an “external” factor). However, we are aware of this issue and will include asynchronous operations in a future version of the benchmarks.
Disclaimer 2: Our focus is the “Object” performance (you may find a hint for that in our product name 🙂); so e.g. relational systems may perform a bit better when you directly work with raw columns and rows.
Disclaimer 3: Hive’s read performance was cut off at 4 million/s. For reading, Hive is basically a hashtable that stores all objects in memory (i.e. everything must fit into memory). We measured 25 million/s.
Disclaimer 4: There cannot be enough disclaimers on any performance benchmark. It’s a complex topic where details matter. It’s best if you form your own picture for your own use case. We try to provide a fair “arena” with our open source benchmarks, so they can be a starting point for you.
What’s next?
We are excited to see what you will build with the new vector search. Let us know! And please give us feedback. It’s the very first release of an on-device vector database ever – and the more feedback we get on it, the better the next version will be.