First on-device Vector Database (aka Semantic Index) for iOS

Easily empower your iOS and macOS apps with fast, private, and sustainable AI features. All you need is a Small Language Model (SLM; aka “small LLM”) and ObjectBox – our on-device vector database built for Swift apps. This gives you a local semantic index for fast on-device AI features like RAG or GenAI that run without an internet connection and keep data private.

The recently demonstrated “Apple Intelligence” features are precisely that: a combination of on-device AI models and a vector database (semantic index). Now, ObjectBox Swift enables you to add the same kind of AI features easily and quickly to your iOS apps right now.


Not developing with Swift? We also have a Flutter / Dart binding (works on iOS, Android, desktop), a Java / Kotlin binding (works on Android and JVM), or one in C++ for embedded devices.

Enabling Advanced AI Anywhere, Anytime

Typical AI apps use data (e.g. user-specific data, or company-specific data) and multiple queries to enhance and personalize the quality of the model’s response and perform complex tasks. And now, for the very first time, with the release of ObjectBox 4.0, this will be possible locally on restricted devices.

Local AI Tech Stack Example for on-device RAG

Swift on-device Vector Database and search for iOS and macOS

With the ObjectBox Swift 4.0 release, it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW. It’s scalable because it can find relevant data within millions of entries in a matter of milliseconds.
Let’s pick up the cities example from our vector search documentation. Here, we use cities with a location vector and want to find the closest cities (a proximity search). The Swift class for the City entity shows how to define an HNSW index on the location:
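A minimal sketch of what such an entity could look like, based on the description above (the exact annotation syntax is best double-checked against the ObjectBox Swift docs):

```swift
import ObjectBox

// objectbox: entity
class City {
    var id: Id = 0
    var name: String?

    // objectbox:hnswIndex: dimensions=2
    var location: [Float]?

    init() {}
    init(name: String, location: [Float]) {
        self.name = name
        self.location = location
    }
}
```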

Inserting City objects with a float vector and HNSW index works as usual; the indexing happens behind the scenes:
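For illustration, inserting a few cities could look roughly like this (the store path and coordinates are made up):

```swift
let store = try Store(directoryPath: "cities-db")
let box = store.box(for: City.self)

// put() persists the objects; the HNSW index is updated behind the scenes
try box.put([
    City(name: "Barcelona", location: [41.385063, 2.173404]),
    City(name: "Nairobi",   location: [-1.292066, 36.821946]),
    City(name: "Salzburg",  location: [47.809490, 13.055010])
])
```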

To then find the cities closest to a location, we do a nearest neighbor search using the new query condition and the “find with scores” methods. The nearest neighbor condition accepts a query vector, e.g. the coordinates of Madrid, and a count to limit the number of results; here we want at most 2 cities. The “find with scores” methods are like a regular find but additionally return a score, which is the distance of each result to the query vector; in our case, the distance of each city to Madrid.
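Putting this together, a nearest neighbor query could look roughly like this (method names follow the description above; see the Swift docs for the exact signatures):

```swift
let madrid: [Float] = [40.416775, -3.703790]

// nearestNeighbors(queryVector:maxCount:) limits the result to the 2 closest cities
let query = try box
    .query { City.location.nearestNeighbors(queryVector: madrid, maxCount: 2) }
    .build()

// findWithScores() returns each object together with its distance to Madrid
for result in try query.findWithScores() {
    print("\(result.object.name ?? "?"): distance \(result.score)")
}
```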

The ObjectBox on-device vector database empowers AI models to seamlessly interact with user-specific data — like texts and images — directly on the device, without relying on an internet connection. With ObjectBox, data never needs to leave the device, ensuring data privacy.

Thus, it’s the perfect solution for developers looking to create smarter apps that are efficient and reliable in any environment. It enhances everything from personalized banking apps to robust automotive systems.

ObjectBox: Optimized for Resource Efficiency

At ObjectBox, we specialize in efficiency that comes from optimized code. Our hearts beat for creating highly efficient and capable software that outperforms alternatives on small and big hardware. ObjectBox maximizes speed while minimizing resource use, extending battery life, and reducing CO2 emissions.

With this expertise, we took a unique approach to vector search. The result is not only a vector database that runs efficiently on constrained devices but also one that outperforms server-side vector databases (see first benchmark results; on-device benchmarks coming soon). We believe this is a significant achievement, especially considering that ObjectBox still upholds full ACID properties (guaranteeing data integrity).

Cloud/server vector databases vs. On-device/Edge vector databases

Also, keep in mind that ObjectBox is a fully capable database. It allows you to store complex data objects along with vectors. Thus, you have the full feature set of a database at hand. It empowers hybrid search, traceability, and powerful queries.

Use Cases / App ideas

ObjectBox can be used for a million different things, from empowering generative AI features in mobile apps to predictive maintenance on ECUs in cars to AI-enhanced games. For iOS apps, we expect to see the following on-device AI use cases very soon:

  • Across all categories we’ll see Chat-with-files apps:
    • Travel: Imagine chatting to your favorite travel guide offline, anytime, anywhere. No need to carry bulky paper books, or scroll through a long PDF on your mobile.
    • Research: Picture yourself chatting with all the research papers in your field. Easily compare studies and findings, and quickly locate original quotes.
  • Lifestyle:
    • Health: Apps offering personalized recommendations based on scientific research, your preferences, habits, and individual health data. This includes data tracked by your device, lab results, and doctors’ diagnoses.
  • Productivity: Personal assistants for all areas of life.
    • Family Management: Interact with assistants tailored to specific roles. Imagine a parent’s assistant that monitors school channels, chat groups, emails, and calendars. Its goal is to automatically add events like school plays, remind you about forgotten gym bags, and even suggest birthday gifts for your child’s friends.
    • Professional Assistants: Imagine being a busy sales rep on the go, juggling appointments and travel. A powerful on-device sales assistant can do more than just automation. It can prepare contextual and personalized follow-ups instantly. For example, by summarizing talking points, attaching relevant company documents, and even suggesting who to CC in your emails.
  • Educational:
    • Educational apps featuring “chat-with-your-files” functionality for learning materials and research papers. But going beyond that, they generate quizzes and practice questions to help people solidify knowledge.

Run the local AI Stack with a Language Model (SLM, LLM)

Recent Small Language Models (SLMs) already demonstrate impressive capabilities while being small enough to run on e.g. mobile phones. To run a model on-device on an iPhone or a macOS computer, you need a model runtime. On Apple Silicon, the best choice in terms of performance is typically MLX – a framework brought to you by Apple machine learning research. It uses the hardware very efficiently, supporting the CPU, the GPU, and unified memory.

To summarize, you need these three components to run on-device AI with a semantic index:

  • ObjectBox: vector database for the semantic index
  • Models: choose an embedding model and a language model matching your requirements
  • MLX as the model runtime

Start building next generation on-device AI apps today! Head over to our vector search documentation and Swift documentation for details.

The on-device Vector Database for Android and Java

ObjectBox 4.0 is an on-device vector database allowing Android and Java developers to enhance their apps with local AI capabilities. A vector database facilitates advanced vector data processing and analysis, such as measuring semantic similarities across different document types like images, audio files, and texts. A classic use case would be to enhance a Large Language Model (LLM), or a small language model (e.g. Phi-3), with your domain expertise, your proprietary knowledge, and/or your private data. Combining the power of AI models with a specific knowledge base yields high-quality, perfectly matching results that a generic model simply cannot provide. This is called “retrieval-augmented generation” (RAG). Because ObjectBox works on-device, you can now do on-device RAG with data that never leaves the device and therefore stays 100% private. This is your chance to explore this technology on-device.

Vector Search (Similarity Search)

With this release it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW. It’s scalable because it can find relevant data within millions of entries in a matter of milliseconds.

We pick up the example used in our vector search documentation. In short, we use cities with a location vector to perform proximity search. Here is the City entity and how to define an HNSW index on the location:
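As a sketch (plain public fields for brevity; see the Java docs for the exact annotations):

```java
@Entity
public class City {
    @Id public long id;
    public String name;

    // HNSW index on the 2-dimensional location vector
    @HnswIndex(dimensions = 2)
    public float[] location;

    public City() {}
    public City(String name, float[] location) {
        this.name = name;
        this.location = location;
    }
}
```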

Vector objects are inserted as usual (the indexing is done automatically behind the scenes):
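For example (MyObjectBox is the generated class; on Android you would additionally pass the context to the builder):

```java
BoxStore store = MyObjectBox.builder().name("cities-db").build();
Box<City> box = store.boxFor(City.class);

// put() stores the objects; the HNSW index is maintained automatically
box.put(
    new City("Barcelona", new float[]{41.385063F, 2.173404F}),
    new City("Nairobi",   new float[]{-1.292066F, 36.821946F}),
    new City("Salzburg",  new float[]{47.809490F, 13.055010F})
);
```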

To perform a nearest neighbor search, use the new nearestNeighbors(queryVector, maxResultCount) query condition and the new “find with scores” query methods (the score is the distance to the query vector). For example, let’s find the 2 closest cities to Madrid:
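A sketch of such a query (the ObjectWithScore accessors follow the ObjectBox Java API; double-check against your version):

```java
float[] madrid = {40.416775F, -3.703790F};

Query<City> query = box
        .query(City_.location.nearestNeighbors(madrid, 2))
        .build();

for (ObjectWithScore<City> result : query.findWithScores()) {
    // the score is the distance of the city to Madrid
    System.out.println(result.get().name + " (distance: " + result.getScore() + ")");
}
```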

Vector Embeddings

In the cities example above, the vectors were straightforward: they represent latitude and longitude. Maybe you already have vector data as part of your data. But often, you don’t. So where do you get the vector embeddings for texts, images, videos, or audio files from?

For most AI applications, vectors are created by an embedding model. There are plenty of embedding models to choose from, but first you have to decide if it should run in the cloud or locally. Online embeddings are the easier way to get started and are great for initial testing; you can set up an account with your favorite AI provider and create embeddings online.

Depending on how much you care about privacy, you can also run embedding models locally and create your embeddings on your own device. There are a couple of choices for desktop / server hardware, e.g. check these on-device embedding models. For Android, MediaPipe is a good start as it has embedders for text and images.

Updated open source benchmarks 2024 (CRUD)

A new release is also a good occasion to update our open source benchmarks. The Android performance benchmark app provides many more options, but here are the key results:

CRUD is short for the basic operations a database does: create, read, update and delete. It’s an important metric for the general efficiency of a database.

Disclaimer 1: our focus is the “Object” performance (you may find a hint for that in our product name 🙂); so e.g. relational systems may perform a bit better when you directly work with raw columns and rows.

Disclaimer 2: ObjectBox delete performance was cut off at 800k per second to keep the Y axis within reasonable bounds. The actually measured value was 2.5M deletes per second.

Disclaimer 3: there cannot be enough disclaimers on any performance benchmark. It’s a complex topic where details matter. It’s best to evaluate performance for your own use case. We try to provide a fair “arena” with our open source benchmarks, so they can be a starting point for you.

Feedback and Outlook: On-device Vector Search Benchmarks coming soon

We’re still working on a lot of stuff (as always 😉), and with on-device / local vector search being a completely new technology for Android, we need your feedback, creativity, and support more than ever. We’ll also soon release benchmarks on vector search. Follow us on LinkedIn, GitHub, or Twitter to stay up to date.

The first On-Device Vector Database: ObjectBox 4.0

The new on-device vector database enables advanced AI applications on small restricted devices like mobile phones, Raspberry Pis, medical equipment, IoT gadgets, and all the smart things around you. It is the missing piece of a fully local AI stack and the key technology to enable AI language models to interact with user-specific data like text and images without an Internet connection and cloud services.

An AI Technology Enabler

Recent AI language models (LLMs) have demonstrated impressive capabilities while being small enough to run on e.g. mobile phones. Recent examples include Gemma, Phi-3, and OpenELM. The next logical step from here is to use these LLMs for advanced AI applications that go beyond a mere chat. A new generation of apps is currently evolving. These apps create “flows” with user-specific data and multiple queries to the LLM to perform complex tasks. This is also known as RAG (retrieval-augmented generation), which, in its simplest form, lets you chat with your documents. And now, for the very first time, this will be possible to do locally on restricted devices using a fully fledged embedded database.

What is special about ObjectBox Vector Search?

We know restricted devices. Where others see limitations, we see potential, and we have repeatedly demonstrated that super-efficient software can be built for them, maximizing speed, minimizing resource use, extending battery life, and reducing CO2 emissions. With this knowledge, we approached vector search in a unique way.

Efficient memory management is the key. The challenge with vector data is that on the one hand, it consumes a lot of memory – while on the other hand, relevant vectors must be present in memory to compute distances between vectors efficiently. For this, we introduced a special multi-layered caching that gives the best performance for the full range of devices; from memory-constrained small devices to large machines that can keep millions of vectors in memory. This worked out so well that we saw ObjectBox outperform several vector databases built for servers (open source benchmarks coming soon). This is no small feat given that ObjectBox still holds up full ACID properties, e.g. caching must be transaction-aware.

Also, keep in mind that ObjectBox is a fully capable database that allows you to store complex data objects along with vectors. From an ObjectBox data model point of view, a vector is “just” another property type. This allows you to store all your data (vectors along with objects) in a single database. This “one database” approach also includes queries. You can already combine vector search with other conditions. Note that some limitations still apply with this initial release. Full hybrid search is close to being finished and will be part of one of the next releases.

In short, the following features make ObjectBox a unique vector database:

  • Embedded Database that runs inside your application without latency
  • Vector search is based on the state-of-the-art HNSW algorithm, which scales very well with growing data volumes
  • HNSW is tightly integrated within our internal database. Vector Search doesn’t just run “on top of database persistence”.
  • With this deep integration we do not need to keep all vectors in memory.
  • Multi-layered caching: if a vector is not in-memory, ObjectBox fetches it from disk.
  • Not just a vector database: you can store any data in ObjectBox, not just vectors. You won’t need a second database.
  • Low minimum hardware requirements: e.g. even an old Raspberry Pi runs ObjectBox smoothly.
  • Low memory footprint: ObjectBox itself just takes a few MB of memory. The entire binary is only about 3 MB (compressed around 1 MB).
  • Scales with hardware: efficient resource usage is also an advantage when running on more capable devices like the latest phones, desktops and servers.
  • ObjectBox additionally offers commercial editions, e.g. a Server Cluster mode, GraphQL, and of course, ObjectBox Sync, our data synchronization solution.

Why is this relevant? AI anywhere & anyplace

With history repeating itself, we think AI is in a “mainframe era” today. Just like clunky computers from decades before, AI is restricted to big and very expensive machines running far away from the user. In the future, AI will become decentralized, shifting to the user and their local devices. To support this shift, we created the ObjectBox vector database. Our vision is a future where AI can assist everyone, anytime, and anywhere, with efficiency, privacy, and sustainability at its core.

What do we launch today?

Today, we are releasing ObjectBox 4.0 with Vector Search for a variety of languages:

*) We acknowledge Python’s popularity within the AI community and have thus invested significantly in our Python binding over the last months to make it part of this initial release. Since we still want to smooth out some rough edges, we decided to label Python an alpha release. Expect Python to catch up quickly and match the comfort of our more established language bindings (e.g. automatic ID and model handling).

Want to get started right away? Check our Vector Search documentation to see how to use it!

One more thing: ObjectBox Open Source Database (OSS)

We are also very happy to announce that we will fully open source the core of ObjectBox. As a company we follow the open core model. Since we still have some cleaning up to do, this will happen in one of the next releases, likely 4.1.

“Release week”

With today’s initial releases, we are far from done yet. Starting next Tuesday, you can expect additional announcements from us. Follow us to get the news as soon as it is released.

What’s next?

This is our very first version of a “vector database”. And while we are very happy with this release, there are still so many things to do! For example, we will optimize vector search by adding vector quantization and integrate it more tightly with our data synchronization. We are also focusing on expanding our solution’s reach through strategic partnerships. If you think you are a good fit, let us know. And as always, we are very eager to get some feedback from you! Take care.

Data Viewer for Objects – announcing ObjectBox Admin

ObjectBox Admin (Docker container) allows you to analyze ObjectBox databases that run on desktop and server machines. Releasing ObjectBox Admin as a standalone Docker image makes it possible to run Admin on a larger number of platforms.

ObjectBox Admin is available as a Linux x86_64 Docker image, which runs on all common platforms including Windows and macOS. We offer a convenience script (objectbox-admin.sh) but it’s also simple enough to run it via plain Docker. See the docs for details, or get started by following this short tutorial.

Data Browser

The ObjectBox Admin Web App comprises a menu on the left (Data, Schema, Status, GraphQL…) and the corresponding content pane on the right-hand side.

ObjectBox Admin Web App (Data, Schema, Status, GraphQL...)

The data browser provides a table of the objects of a specific type. By clicking on Type, we can select an entity type and view its objects.


Next to the type selection there is a small filter icon (the dashed triangle to the right of the type selector).

When selected, a query editor pops up that allows you to filter data by adding a Property/Operator/Value expression.

ObjectBox Admin Filtering

When finished, click the check mark, and the data table gets updated with an active filter.

Data Filter

At the bottom, you will find a download link that exports the objects of the currently viewed box in JSON format.


Schema Browser

You can get a detailed list of elements that make up an object type in the “Schema” pane.

Schema pane

As with the “Data” pane, you can click on Type to select the schema of a specific entity type in your database.

Status

Basic information about the database and ObjectBox Admin can be viewed on the “Status” pane.

Status pane

GraphQL

The Docker-version of ObjectBox Admin offers a pane to query the database using GraphQL.

GraphQL Data Browser

ObjectBox Database Java 3.1 – Flex type

We are happy to announce version 3.1 of ObjectBox for Java and Kotlin. The major feature of this version is the new Flex type. For a long time, ObjectBox has worked with rigid data schemas, and we think that this is a good thing. Knowing what your data looks like is a feature – similar to programming languages that are statically typed. Fixed schemas make data handling more predictable and robust. Nevertheless, some use cases require flexible data structures, and ObjectBox 3.1 allows exactly this.

Flex properties

Expanding on the string and flexible map support in 3.0.0, this release adds support for Flex properties whose type does not have to be known at compile time. To add a Flex property to an entity, use Object in Java or Any? in Kotlin. Then, at runtime, store any of the supported types.

For example, assume a customer entity with a tag property:
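A minimal sketch of such an entity (plain public fields for brevity):

```java
@Entity
public class Customer {
    @Id public long id;

    // Flex property: declared as Object, the actual type is decided at runtime
    public Object tag;
}
```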

Then set a String tag on one customer and an Integer tag on another, and just put them:
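For example (a sketch continuing the entity above):

```java
Box<Customer> box = store.boxFor(Customer.class);

Customer stringCustomer = new Customer();
stringCustomer.tag = "premium";   // a String tag

Customer intCustomer = new Customer();
intCustomer.tag = 1234;           // an Integer tag

box.put(stringCustomer, intCustomer);
```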

When getting the customer from its box the original type is restored. For simplicity the below example just casts the tag to the expected type:
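Continuing the snippet above:

```java
Customer customer = box.get(stringCustomer.id);
String tag = (String) customer.tag;  // the original type is restored on read
```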

A Flex property is not limited to just String or Integer. Supported types are all integer types (Byte, Short, Integer, Long), floating point numbers (Float, Double), String, and byte arrays.

It can also hold a List<Object> or a Map<String, Object> of those types. Lists and maps can be nested.

Behind the scenes Flex properties use a FlexBuffer converter to store the property value, so some limitations apply. See the FlexObjectConverter class documentation for details.

Query for map keys and values

If the Flex property contains integers or strings, or a list or map of those types, it’s also possible to do queries. For example, take this customer entity with a properties String to String map:
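A sketch of such an entity:

```java
@Entity
public class Customer {
    @Id public long id;

    // stored as a Flex property behind the scenes
    public Map<String, String> properties;
}
```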

Why is properties not of type Object? ObjectBox supports using Map<String, String> (or Map<String, Object>) directly and will still create a Flex property behind the scenes.

Then put a customer with a premium property:
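For example (the "tier-1" value is just illustrative):

```java
Customer customer = new Customer();
customer.properties = new HashMap<>();
customer.properties.put("premium", "tier-1");
box.put(customer);
```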

To query for any customers that have a premium key in their properties map, use the containsElement condition:
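A sketch of such a query:

```java
List<Customer> premiumCustomers = box
        .query(Customer_.properties.containsElement("premium"))
        .build()
        .find();
```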

Or to only match customers where the map key has a specific value, here a specific premium tier, use the containsKeyValue condition:
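For example:

```java
List<Customer> tierOneCustomers = box
        .query(Customer_.properties.containsKeyValue("premium", "tier-1"))
        .build()
        .find();
```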

What’s next?

ObjectBox database is free to use. Check out our docs and this video tutorial to get started today.

We strive to bring joy to mobile developers and appreciate all kinds of feedback, both positive and negative. You can always raise an issue on GitHub or post a question on Stack Overflow. Otherwise, star the ObjectBox Java database GitHub repo and up-vote the features you’d like to see in the next release.

 

Beginner C++ Database Tutorial: How to use ObjectBox

Introduction

As a direct follow-up to the ObjectBox database installation tutorial, today we’ll code a simple C++ example app to show how the database can be used. Before starting to program, let’s briefly go over what we want to achieve with this tutorial and the best way to work through it.

Overview of the app we want to build

In short, we will make a console calculator app with an option to save results into memory. These will be stored as objects of the Number class. Every Number will also have an ID for easy reference in future calculations. Apart from the function to make calculations, we will create a function to enter memory. It will list all the database entries and have an option to clear memory. By coding all of this, we will make use of such standard ObjectBox operations as put, get, getAll and removeAll.

Our program will consist of seven files: 

  • the FlatBuffers schema file, which defines the model of the class we want to store in the database
  • the header file, for class function definitions
  • the source file, for function implementation
  • the four files with objectbox binding code that will be created by objectbox-generator

How to use this tutorial

While looking at coding examples is useful in many cases, the best way to learn a practical skill like programming is to solve problems independently. This is why we included an exercise for each step. You are encouraged to make the effort and do each of them, even if you don’t know the answer straight away. Only move to the next step after you test each part of your program and make sure that everything works as intended. Ideally, you should only use the code snippets presented here to check yourself or look for hints when you feel stuck. Bear in mind that sometimes there might be several different ways to achieve the same results. So if something that we ask you to do in this tutorial doesn’t work for you, try to come up with your own solution.

How to create the FlatBuffers file?

First, we’ll create the FlatBuffers schema (.fbs) for our app. This is required for the objectbox-generator to generate binding code that will allow us to use the ObjectBox library in our project. 

The FlatBuffers schema consists of a table, which defines the object we want to store in the database, and the properties of this object. Each property consists of a name and a type. We want to keep our example very simple, so just two properties is enough.

  1. To replicate a calculator’s memory, we want ObjectBox to store some numbers. We can define the Number object by giving the table a corresponding name.
  2. Inside the table, we want to have two properties: id and contents. The contents of each Number object is the number itself (a double), while id is a ulong that our program will assign to each of them for easy identification.

Exercise: create a file called numbers.fbs and define the table in the format described above.

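A possible numbers.fbs, matching the two properties described above:

```
table Number {
    id: ulong;
    contents: double;
}
```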

Generating binding code

Now that the FlatBuffers file is ready, we can generate the binding code. To do this, run the objectbox-generator for our FlatBuffers file:
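For a C++ project, the invocation typically looks like this (check the generator’s help output for the exact flags of your version):

```
objectbox-generator -cpp numbers.fbs
```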

The following files will be generated:

  • objectbox-model.h
  • objectbox-model.json
  • numbers.obx.hpp
  • numbers.obx.cpp

The header file

This is where the main chunk of our code will be. It will contain the Calculator class and all the function definitions.

  1. Start by including the three ObjectBox header files: objectbox.hpp, objectbox-model.h and numbers.obx.hpp. Our whole program will be based on one class, called Calculator. It should only have two private members: Store and Box. Store is a reference to the database and will manage Boxes. Each Box stores objects of a particular class. In this example, we only need one Box. Let’s call it numberBox, as it will store Numbers that we want to save in the memory of our calculator.

Exercise: create a file called calculator.hpp and define the Calculator class with two private members: a reference to an obx::Store and a Box of Numbers.

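A possible calculator.hpp skeleton; the private member function declarations anticipate the following steps:

```cpp
#include "objectbox.hpp"
#include "objectbox-model.h"
#include "numbers.obx.hpp"

#include <iostream>
#include <memory>
#include <string>
#include <vector>

class Calculator {
    obx::Store& store;           // reference to the database
    obx::Box<Number> numberBox;  // box storing our Number objects

public:
    explicit Calculator(obx::Store& store) : store(store), numberBox(store) {}
    void run();

private:
    void Calculate();
    void Memory();
    void processInput(const std::string& left, const std::string& op,
                      const std::string& right, double& x, double& y, char& opChar);
};
```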

2. After the constructor, we define the run function. It will be responsible for the menu of our program. There should be two main options: performing calculations and entering memory. We’ll define these as separate functions, called Calculate and Memory. The first one is quite standard, so we won’t go into a detailed explanation here. The only thing you should keep in mind is that we need to account for the case when the user wants to operate on a memory item. To deal with this, we’ll process input in a function called processInput.

Exercise: define the parametrised constructor which takes a reference to Store as a parameter. Then define the run and Calculate functions.

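A rough sketch of run() and Calculate(); the constructor was already defined inline in the skeleton above, and the menu text and input parsing are illustrative:

```cpp
void Calculator::run() {
    while (true) {
        std::cout << "1) Calculate  2) Memory  3) Quit: ";
        int choice = 0;
        std::cin >> choice;
        if (choice == 1) Calculate();
        else if (choice == 2) Memory();
        else break;
    }
}

void Calculator::Calculate() {
    std::cout << "Enter an expression, e.g. 2 + 3 or m1 * 4: ";
    std::string left, op, right;
    std::cin >> left >> op >> right;

    double x = 0, y = 0;
    char opChar = '+';
    processInput(left, op, right, x, y, opChar);  // resolves m<id> operands, see below

    double result = 0;
    switch (opChar) {
        case '+': result = x + y; break;
        case '-': result = x - y; break;
        case '*': result = x * y; break;
        case '/': result = (y != 0) ? x / y : 0; break;
    }
    std::cout << "= " << result << std::endl;

    // Saving the result to memory is added in the next step.
}
```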

3. The final part of this function is for saving results into memory. We start by asking the user if they want to do that. If the answer is positive, we create a new instance of Number and set the most recent result as the value of its contents. To save our object in the database, we call put(object) on our Box. put is one of the standard ObjectBox operations; it is used for creating new objects and overwriting existing ones.

Exercise: create an option to store the result in memory, making use of the ObjectBox put operation.

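The saving step appended at the end of Calculate() could look like this:

```cpp
// ... at the end of Calculator::Calculate(), after printing the result:
std::cout << "Save result to memory? (y/n) ";
char answer = 'n';
std::cin >> answer;
if (answer == 'y') {
    Number number{};                    // id stays 0, so ObjectBox assigns a new one
    number.contents = result;
    obx_id id = numberBox.put(number);  // put() creates (or overwrites) an object
    std::cout << "Stored as m" << id << std::endl;
}
```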

4. Next, we should define processInput, which will read input as a string and check whether it has the right format. Now, to make it recognise the memory items, we have to come up with a standard format for these. Remember, we defined an ID property for our Numbers. Every number in our database has an ID, so we can refer to them as, e.g. m1, m2, m3 etc. To read the numbers from memory, we can make use of the get(obx_id) operation. It returns a unique pointer to the corresponding Number, whose contents we need to access and use as our operand.

Exercise: define the processInput function, which detects when something like m1 was used as an operand and updates x, y, and op according to the input.

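One way to implement it; the parameter list matches the skeleton above and is an assumption, the original tutorial may pass the operands differently:

```cpp
void Calculator::processInput(const std::string& left, const std::string& op,
                              const std::string& right,
                              double& x, double& y, char& opChar) {
    auto resolve = [this](const std::string& token) -> double {
        if (!token.empty() && token[0] == 'm') {
            obx_id id = std::stoul(token.substr(1));
            std::unique_ptr<Number> number = numberBox.get(id);  // nullptr if not found
            if (number) return number->contents;
            std::cout << "No memory item m" << id << std::endl;
            return 0.0;
        }
        return std::stod(token);  // a plain number
    };
    x = resolve(left);
    y = resolve(right);
    opChar = op.empty() ? '+' : op[0];
}
```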

5. The last function in our header file will be Memory. It should list all the numbers contained in the database and have an option to clear the data. We can read all the database entries by calling the getAll ObjectBox operation, which returns a vector of unique pointers. To clear memory, simply call removeAll on the Box.

Exercise: define the Memory function, which lists all the memory items, and can delete all of them by request.

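A possible Memory() using getAll and removeAll:

```cpp
void Calculator::Memory() {
    // getAll() returns a vector of unique pointers to all stored Numbers
    std::vector<std::unique_ptr<Number>> numbers = numberBox.getAll();
    for (const auto& number : numbers) {
        std::cout << "m" << number->id << " = " << number->contents << std::endl;
    }

    std::cout << "Clear memory? (y/n) ";
    char answer = 'n';
    std::cin >> answer;
    if (answer == 'y') numberBox.removeAll();  // deletes all objects in this box
}
```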

The source file

To tie everything together, we create a source (.cpp) file. It should contain only the main function, which initialises the ObjectBox model, creates an instance of the Calculator app, and runs it. To create the ObjectBox model, use the create_obx_model() function from the generated objectbox-model.h and pass it via the store options when you initialise the Store.

Exercise: create the source file

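A minimal main.cpp sketch; the exact options type may differ slightly between ObjectBox C++ versions:

```cpp
#include "calculator.hpp"

int main() {
    // create_obx_model() comes from the generated objectbox-model.h
    obx::Options options(create_obx_model());
    obx::Store store(options);

    Calculator calculator(store);
    calculator.run();
    return 0;
}
```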

Final notes

Now you can finally compile and run your application. At this point, a good exercise would be to try and add some more functionality to this project. Check out the ObjectBox C++ documentation to learn more about the available operations.

After you’ve mastered ObjectBox DB, why not try ObjectBox Sync? Here is another tutorial from us, showing how easily you can sync between different instances of your cross-platform app.

Other than that, if you spot any errors in this tutorial or if anything is unclear, please come back to us. We are happy to hear your thoughts.