The Embedded Database for C++ and C

After 6 years and 21 incremental “zero dot” releases, we are excited to announce the first major release of ObjectBox, the high-performance embedded database for C++ and C. As a faster alternative to SQLite, ObjectBox delivers more than just speed – it’s object-oriented, highly efficient, and offers advanced features like data synchronization and vector search. It is the perfect choice for on-device databases, especially in resource-constrained environments or in cases with real-time requirements.

What is ObjectBox?

ObjectBox is a free embedded database designed for object persistence. With “object” referring to instances of C++ structs or classes, it was built from scratch for storing objects with zero overhead — no SQL or ORM layer is involved, resulting in outstanding object performance.

The ObjectBox C++ database offers advanced features, such as relations and ACID transactions, to ensure data consistency at all times. Store your data privately on-device across a wide range of hardware, from low-profile ARM platforms and mobile devices to high-speed servers. It’s a great fit for edge devices, iOS or Android apps, and server backends. Plus, ObjectBox is multi-platform (any POSIX will do, e.g. iOS, Android, Linux, Windows, or QNX) and multi-language: e.g., on mobile, you can work with Kotlin, Java or Swift objects. This cross-platform compatibility is no coincidence, as ObjectBox Sync will seamlessly synchronize data across devices and platforms.

Why should C and C++ Developers care?

ObjectBox deeply integrates with C and C++. Persisting C or C++ structs is as simple as a single line of code, with no need to interact with unfamiliar database APIs that disrupt the natural flow of C++. There’s also no data transformation (e.g. SQL, rows & columns) required, and interacting with the database feels seamless and intuitive.

As a C or C++ developer, you likely value performance. ObjectBox delivers exceptional speed (at least we haven’t tested against a faster DB yet). Sustaining several hundred thousand CRUD operations per second on commodity hardware is no sweat. Our unique advantage is that, if you want to, you can read raw objects from memory-mapped (“mmapped”) memory (directly from disk!). This offers true “zero copy” data access without any throttling layers between you and the data.

Finally, CMake support makes integration straightforward, starting with FetchContent support so you can easily get the library. But there’s more: we offer code generation for entity structs, which takes only a single CMake command.

“ObjectBox++”: A quick Walk-Through

Once ObjectBox is set up for CMake, the first step is to define the data model using FlatBuffers schema files. FlatBuffers is a building block within ObjectBox and is also widely used in the industry. For those familiar with Protocol Buffers, FlatBuffers are its parser-less (i.e., faster) cousin. Here’s an example of a “Task” entity defined in a file named “task.fbs”:
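As a sketch of what such a schema could look like (the field names here are illustrative, not taken from the official example):

```
table Task {
    id: ulong;
    text: string;
    date_created: ulong;
    date_finished: ulong;
}
```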

And with that file, you can generate code using the following CMake command:
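With the ObjectBox Generator’s CMake integration, the call looks roughly like this (`myapp` is a placeholder target name, and the exact options may vary between generator versions):

```cmake
# Generates the entity struct and binding code for task.fbs
# and adds the generated sources to the myapp target.
add_obx_schema(TARGET myapp SCHEMA_FILES task.fbs)
```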

Among other things, code generation creates a C++ struct for Task data, which is used to interact with the ObjectBox API. The struct is a straightforward C++ representation of the data model:
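Assuming the illustrative `task.fbs` schema above, the generated struct would look roughly like this (a sketch; the real generated code defines `obx_id` in the ObjectBox headers and may include additional annotations):

```cpp
#include <cstdint>
#include <string>

// In real generated code, obx_id comes from the ObjectBox headers;
// it is a 64-bit unsigned integer used for object IDs.
using obx_id = uint64_t;

// Hypothetical generated struct for the Task entity.
struct Task {
    obx_id id;               // unique object ID, assigned by ObjectBox on put
    std::string text;        // task description
    uint64_t date_created;   // creation timestamp
    uint64_t date_finished;  // completion timestamp (0 if open)
};
```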

The code generation also provides some internal “glue code” including the method create_obx_model() that defines the data model internally. With this, you can open the store and insert a task object in just three lines of code:
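A sketch of those three lines, based on the ObjectBox C++ API (this assumes the generated headers are included and a `Task` entity as sketched above; consult the getting started guide for the authoritative version):

```cpp
// Open (or create) the store using the generated data model.
obx::Store store(create_obx_model());

// A Box provides typed access to Task objects.
obx::Box<Task> box(store);

// Insert a new Task; ObjectBox assigns and returns the object ID.
obx_id id = box.put({.text = "Buy milk"});
```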

And that’s all it takes to get a database running in C++. This snippet essentially covers the basics of the getting started guide and this example project on GitHub.

Vector Embeddings for C++ AI Applications

Even if you don’t have an immediate use case, ObjectBox is fully equipped for vectors and AI applications. As a “vector database,” ObjectBox is ready for use in high-dimensional vector similarity searches, employing the HNSW algorithm for highly scalable performance beyond millions of vectors.

Vectors can represent semantics within a context (e.g. objects in a picture) or even documents and paragraphs to “capture” their meaning. This is typically used in RAG (Retrieval-Augmented Generation) applications that interact with LLMs. In short, RAG allows AI to work with specific data, e.g. the documents of a department or company, and thus individualizes the generated content.

To quickly illustrate vector search, imagine a database of cities including their location as a 2-dimensional vector. To enable nearest neighbor search, all you need to do is define an HNSW index on the location property, which enables the nearestNeighbors query condition, used like this:
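A hedged sketch of such a query in C++ (the generated `City_` meta-class and the exact method names are given from memory and may differ; see the vector search docs for the authoritative API):

```cpp
// Assumes a City entity with an HNSW-indexed float vector property "location".
float madrid[] = {40.4168F, -3.7038F};  // query vector: Madrid's coordinates

obx::Box<City> box(store);
obx::Query<City> query = box.query(
    City_::location.nearestNeighbors(madrid, 2)  // at most 2 results
).build();

// Each result comes with a score: the distance to the query vector.
std::vector<std::pair<City, double>> results = query.findWithScores();
```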

For more details, refer to the vector search doc pages or the full city vector search example on GitHub.

store.close(); // Some closing words

This release marks an important milestone for ObjectBox, delivering significant improvements in speed, usability, and features. We’re excited to see how these enhancements will help you create even better, feature-rich applications.

There’s so much to explore! Please follow the links to dive deeper into topics like queries, relations, transactions, and, of course, ObjectBox Sync.

As always, we’re here to listen to your feedback and are committed to continually evolving ObjectBox to meet your needs. Don’t hesitate to reach out to us at any time.

P.S. Are you looking for a new job? We have a vacant C++ position to build the future of ObjectBox with us. We are looking forward to receiving your application! 🙂

The Critical Role of Databases for Edge AI

Edge AI vs. Cloud AI

Edge AI is where Edge Computing meets AI

What is Edge AI? Edge AI (also “on-device AI” or “local AI”) brings artificial intelligence to applications at the network’s edge, such as mobile devices, IoT, and other embedded systems like interactive kiosks. Edge AI combines AI with Edge Computing, a decentralized paradigm designed to bring computing as close as possible to where data is generated and utilized.

What is Cloud AI? As opposed to this, cloud AI refers to an architecture where applications rely on data and AI models hosted on distant cloud infrastructure. The cloud offers extensive storage and processing power.

An Edge for Edge AI: The Cloud 

 

Example: Edge-Cloud AI setup with a secure, two-way Data Sync architecture

Today, there is a broad spectrum of application architectures combining Edge Computing and Cloud Computing, and the same applies to AI. For example, “Apple Intelligence” performs many AI tasks directly on the phone (on-device AI) while sending more complex requests to a private, secure cloud. This approach combines the best of both worlds – with the cloud giving an edge to the local AI rather than the other way around. Let’s have a look at the advantages on-device AI brings to the table.

Benefits of Local AI on the Edge

  • Enhanced Privacy. Local data processing reduces the risk of breaches.
  • Faster Response Rates. Processing data locally cuts down travel time for data, speeding up responses.
  • Increased Availability. On-device processing makes apps fully offline-capable. Operations can continue smoothly during internet or data center disruptions.
  • Sustainability/costs. Keeping data where it is produced and used minimizes data transfers, cutting networking costs and reducing energy consumption—and with it, CO2 emissions.

Challenges of Local AI on the Edge

  • Data Storage and Processing: Local AI requires an on-device database that runs on a wide variety of edge devices (mobile, IoT, embedded) and performs complex tasks such as vector search locally on the device with minimal resource consumption.
  • Data Sync: It’s vital to keep data consistent across devices, necessitating robust bi-directional Data Sync solutions. Implementing such a solution oneself requires specialized tech talent, is non-trivial and time-consuming, and will be an ongoing maintenance factor. 
  • Small Language Models: Small Language Models (SLMs) like Phi-2 (Microsoft Research), TinyStories (HuggingFace), and Mini-Giants (arXiv) are efficient and resource-friendly but often need enhancement with local vector databases for better response accuracy. An on-device vector database allows on-device semantic search with private, contextual information, reducing latency while enabling faster and more relevant outputs. For complex queries requiring larger models, a database that works both on-device and in the cloud (or a large on-premise server) is perfect for scalability and flexibility in on-device AI applications.

On-device AI Use Cases

On-device AI is revolutionizing numerous sectors by enabling real-time data processing wherever and whenever it’s needed. It enhances security systems, improves customer experiences in retail, supports predictive maintenance in industrial environments, and facilitates immediate medical diagnostics. On-device AI is essential for personalizing in-car experiences, delivering reliable remote medical care, and powering personal AI assistants on mobile devices—always keeping user privacy intact.

Personalized In-Car Experience: Features like climate control, lighting, and entertainment can be adjusted dynamically in vehicles based on real-time inputs and user habits, improving comfort and satisfaction. Recent studies, such as one by MHP, emphasize the increasing consumer demand for these AI-enabled features. This demand is driven by a desire for smarter, more responsive vehicle technology.

Remote Care: In healthcare, on-device AI enables on-device data processing that’s crucial for swift diagnostics and treatment. This secure, offline-capable technology aligns with health regulations like HIPAA and boosts emergency response speeds and patient care quality.

Personal AI Assistants: Today’s personal AI assistants often depend on the cloud, raising privacy issues. However, some companies, including Apple, are shifting towards on-device processing for basic tasks and secure, anonymized cloud processing for more complex functions, enhancing user privacy.

ObjectBox for On-Device AI – an edge for everyone

The continuum from Edge to Cloud

ObjectBox supports AI applications from Edge to cloud. It stands out as the first on-device vector database, enabling powerful Edge AI on mobile, IoT, and other embedded devices with minimal hardware needs. It works offline and supports efficient, private AI applications with a seamless bi-directional Data Sync solution, completely on-premise, and optional integration with MongoDB for enhanced backend features and cloud AI.

Interested in extending your AI to the edge? Let’s connect to explore how we can transform your applications.

The rise of small language models

As artificial intelligence (AI) continues to evolve, companies, researchers, and developers are increasingly recognizing that bigger isn’t always better. Therefore, the era of ever-expanding model sizes is giving way to more efficient, compact models, so-called Small Language Models (SLMs). SLMs offer several key advantages that address both the growing complexity of AI and the practical challenges of deploying large-scale models. In this article, we’ll explore why the race for larger models is slowing down and how SLMs are emerging as the sustainable solution for the future of AI.

From Bigger to Better: The End of the Large Model Race

Up until 2023, the focus was on expanding models to unprecedented scales. But the era of creating ever-larger models appears to be coming to an end. Many newer models, like Grok or Llama 3, are smaller in size yet maintain or even improve performance compared to models from just a year ago. The drive now is to reduce model size, optimize resources, and maintain performance.

The Plateau of Large Language Models (LLMs)

Why Bigger No Longer Equals Better

As models become larger, developers are realizing that the performance improvements aren’t always worth the additional computational cost. Breakthroughs in knowledge distillation and fine-tuning enable smaller models to compete with and even outperform their larger predecessors in specific tasks. For example, medium-sized models like Llama with 70B parameters and Gemma-2 with 27B parameters are among the top 30 models in the chatbot arena, outperforming even much larger models like GPT-3.5 with 175B parameters.

The Shift Towards Small Language Models (SLMs)

In parallel with the optimization of LLMs, the rise of SLMs presents a new trend (see Figure). These models require fewer computational resources, offer faster inference times, and have the potential to run directly on devices. In combination with an on-device database, this enables powerful local GenAI and on-device RAG apps on all kinds of embedded devices, like on mobile phones, Raspberry Pis, commodity laptops, IoT, and robotics.

Advantages of SLMs

Despite the growing complexity of AI systems, SLMs offer several key advantages that make them essential in today’s AI landscape:

Efficiency and Speed
SLMs are significantly more efficient, needing less computational power to operate. This makes them perfect for resource-constrained environments like edge computing, mobile phones, and IoT systems, enabling quicker response times and more real-time applications. For example, studies show that small models like DistilBERT can retain over 95% of the performance of larger models on some tasks while being 60% smaller and faster to execute.

Accessibility
As SLMs are less resource-hungry (lower hardware, CPU, memory, and power requirements), they are more accessible for companies and developers with smaller budgets. Because the model and data can be used locally, on-device / on-premise, there is no need for cloud infrastructure, and they are also usable for use cases with high privacy requirements. All in all, SLMs democratize AI development and empower smaller teams and individual developers to deploy advanced models on more affordable hardware.

Cost Reduction and Sustainability
Training and deploying large models requires immense computational and financial resources and comes with high operational costs. SLMs drastically reduce the cost of training, deployment, and operation, as well as the carbon footprint, making AI more financially and environmentally sustainable.

Specialization and Fine-tuning
SLMs can be fine-tuned more efficiently for specific applications. They excel in domain-specific tasks because their smaller size allows for faster and more efficient retraining. This makes them ideal for sectors like healthcare, legal document analysis, or customer service automation. For instance, using the “distilling step-by-step” mechanism, a 770M parameter T5 model outperformed a 540B parameter PaLM model using 80% of the benchmark dataset, showcasing the power of specialized training techniques with a much smaller model size.

On-Device AI for Privacy and Security
SLMs are becoming compact enough for deployment on edge devices like smartphones, IoT sensors, and wearable tech. This reduces the need for sensitive data to be sent to external servers, ensuring that user data stays local. With the rise of on-device vector databases, SLMs can now handle use-case-specific, personal, and private data directly on the device. This allows more advanced AI apps, like those using RAG, to interact with personal documents and perform tasks without sending data to the cloud. With a local, on-device vector database, users get personalized, secure AI experiences while keeping their data private.

The Future: Fit-for-Purpose Models, from Tiny to Small to Large Language Models

The future of AI will likely see the rise of models that are neither massive nor minimal but fit-for-purpose. This “right-sizing” reflects a broader shift toward models that balance scale with practicality. SLMs are becoming the go-to choice for environments where specialization is key and resources are limited. Medium-sized models (20-70 billion parameters) are becoming the standard choice for balancing computational efficiency and performance on general AI tasks. At the same time, SLMs are proving their worth in areas that require low latency and high privacy.

Innovations in model compression, parameter-efficient fine-tuning, and new architecture designs are enabling these smaller models to match or even outperform their predecessors. The focus on optimization rather than expansion will continue to be the driving force behind AI development in the coming years.

 

Conclusion: Scaling Smart is the New Paradigm

 

As the field of AI moves beyond the era of “bigger is better,” SLMs and medium-sized models are becoming more important than ever. These models represent the future of scalable and efficient AI. They serve as the workhorses of an industry that is looking to balance performance with sustainability and efficiency. The focus on smaller, more optimized models demonstrates that innovation in AI isn’t just about scaling up; it’s about scaling smart.

Local AI – what it is and why we need it

Artificial Intelligence (AI) has become an integral part of our daily lives in recent years. However, it has been tied to running in huge, centralized cloud data centers. This year, “local AI”, also known as “on-device AI” or “Edge AI”, is gaining momentum. Local vector databases, efficient language models (so-called Small Language Models, SLMs), and AI algorithms are becoming smaller, more efficient, and less compute-heavy. As a result, they can now run on a wide variety of devices, locally.

Figure 1. Evolution of language model size over time. Large language models (LLMs) are marked as celadon circles, and small language models (SLMs) as blue ones.

What is Local AI (on-device AI, Edge AI)?

Local AI refers to running AI applications directly on a device, locally, instead of relying on (distant) cloud servers. Such an on-device AI works in real-time on commodity hardware (e.g. old PCs), consumer devices (e.g. smartphones, wearables), and other types of embedded devices (e.g. robots and point-of-sale (POS) systems used in shops and restaurants). An interest in local Artificial Intelligence is growing (see Figure 2).

Figure 2. Interest over time according to Google Trends.

Why use Local AI: Benefits

Local AI addresses many of the concerns and challenges of current cloud-based AI applications. The main drivers of local AI are privacy, accessibility, and sustainability, each covered in the sections below.

On top, local AI reduces:

  • latency, enabling real-time apps
  • data transmission and cloud costs, enabling commodity business cases

In short: By leveraging the power of Edge Computing and on-device processing, local AI can unlock new possibilities for a wide range of applications, from consumer applications to industrial automation to healthcare.

Privacy: Keeping Data Secure

In a world where data privacy concerns are increasing, local AI offers a solution. Since data is processed directly on the device, sensitive information remains local, minimizing the risk of breaches or misuse of personal data. There is no need for data sharing, and data ownership is clear. This is the key to using AI responsibly in industries like healthcare, where sensitive data needs to be processed and used without being sent to external servers. For example, medical data analysis or diagnostic tools can run locally on a doctor’s device and be synchronized to other on-premise, local devices (such as PCs, on-premise servers, or specific medical equipment) as needed. This ensures that patient data never leaves the clinic, and data processing is compliant with strict privacy regulations like GDPR or HIPAA.

Accessibility: AI for Anyone, Anytime

One of the most significant advantages of local AI is its ability to function without an internet connection. This opens up a world of opportunities for users in remote locations or those with unreliable connectivity. Imagine having access to language translation, image recognition, or predictive text tools on your phone without needing to connect to the internet. Or a point-of-sale (POS) system in a retail store that operates seamlessly, even when there’s no internet. These AI-powered systems can still analyze customer buying habits, manage inventory, or suggest product recommendations offline, ensuring businesses don’t lose operational efficiency due to connectivity issues. Local AI makes this a reality. Combined with low hardware requirements, it makes AI accessible to anyone, anytime. Local AI is therefore an integral ingredient in making AI more inclusive and democratizing AI.

Sustainability: Energy Efficiency

Cloud-based AI requires massive server farms that consume enormous amounts of energy. Despite strong efficiency improvements, in 2022, data centers globally consumed between 240 and 340 terawatt-hours (TWh) of electricity. To put this in perspective, data centers now use more electricity than entire countries like Argentina or Egypt. This growing energy demand places considerable pressure on global energy resources and contributes to around 1% of energy-related CO2 emissions.

The rise of AI has amplified these trends. According to McKinsey, the demand for data center capacity is projected to grow by over 20% annually, reaching approximately 300GW by 2030, with 70% of this capacity dedicated to hosting AI workloads. Gartner even predicts that by 2025, “AI will consume more energy than the human workforce”. AI workloads alone could drive a 160% increase in data center energy demand by 2030, with some estimates suggesting that AI could consume 500% more energy in the UK than it does today. By that time, data centers may account for up to 8% of total energy consumption in the United States.

In contrast, local AI presents a more sustainable alternative, e.g. by leveraging Small Language Models, which require less power to train and run. Since computations happen directly on the device, local AI significantly reduces the need for constant data transmission and large-scale server infrastructure. This not only lowers energy use but also helps decrease the overall carbon footprint. Additionally, integrating a local vector database can further enhance efficiency by minimizing reliance on power-hungry data centers, contributing to more energy-efficient and environmentally friendly technology solutions.

When to use local AI: Use case examples

Local AI enables an infinite number of new use cases. Thanks to advancements in AI models and vector databases, AI apps can be run cost-effectively on less capable hardware, e.g. commodity PCs, without the need for an internet connection and data sharing. This opens up the opportunity for offline AI, real-time AI, and private AI applications on a wide variety of devices. From smartphones and smartwatches to industrial equipment and even cars, local AI is becoming accessible to a broad range of users. 

  • Consumer Use Cases (B2C): Everyday apps like photo editors, voice assistants, and fitness trackers can integrate AI to offer faster and more personalized services (local RAG), or integrate generative AI capabilities. 
  • Business Use Cases (B2B): Retailers, manufacturers, and service providers can use local AI for data analysis, process automation, and real-time decision-making, even in offline environments. This improves efficiency and user experience without needing constant cloud connectivity.

Conclusion

Local AI is a powerful alternative to cloud-based solutions, making AI more accessible, private, and sustainable. With Small Language Models and on-device vector databases like ObjectBox, it is now possible to bring AI onto everyday devices. From the individual user who is looking for convenient, always-available tools to large businesses seeking to improve operations and create new services without relying on the cloud – local AI is transforming how we interact with technology everywhere.

First on-device Vector Database (aka Semantic Index) for iOS

Easily empower your iOS and macOS apps with fast, private, and sustainable AI features. All you need is a Small Language Model (SLM; aka “small LLM”) and ObjectBox – our on-device vector database built for Swift apps. This gives you a local semantic index for fast on-device AI features like RAG or GenAI that run without an internet connection and keep data private.

The recently demonstrated “Apple Intelligence” features are precisely that: a combination of on-device AI models and a vector database (semantic index). Now, ObjectBox Swift enables you to add the same kind of AI features easily and quickly to your iOS apps right now.

Not developing with Swift? We also have a Flutter / Dart binding (works on iOS, Android, desktop), a Java / Kotlin binding (works on Android and JVM), or one in C++ for embedded devices.

Enabling Advanced AI Anywhere, Anytime

Typical AI apps use data (e.g. user-specific data, or company-specific data) and multiple queries to enhance and personalize the quality of the model’s response and perform complex tasks. And now, for the very first time, with the release of ObjectBox 4.0, this will be possible locally on restricted devices.

 

Local AI Tech Stack Example for on-device RAG

Swift on-device Vector Database and Search for iOS and macOS

With the ObjectBox Swift 4.0 release, it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW. It’s scalable because it can find relevant data within millions of entries in a matter of milliseconds.
Let’s pick up the cities example from our vector search documentation. Here, we use cities with a location vector and want to find the closest cities (a proximity search). The Swift class for the City entity shows how to define an HNSW index on the location:

Inserting City objects with a float vector and HNSW index works as usual, the indexing happens behind the scenes:

To then find cities closest to a location, we do a nearest neighbor search using the new query condition and “find with scores” methods. The nearest neighbor condition accepts a query vector, e.g. the coordinates of Madrid, and a count to limit the number of results; here we want at most 2 cities. The “find with scores” methods are like a regular find but additionally return a score: the distance of each result to the query vector. In our case, that is the distance of each city to Madrid.

The ObjectBox on-device vector database empowers AI models to seamlessly interact with user-specific data — like texts and images — directly on the device, without relying on an internet connection. With ObjectBox, data never needs to leave the device, ensuring data privacy.

Thus, it’s the perfect solution for developers looking to create smarter apps that are efficient and reliable in any environment. It enhances everything from personalized banking apps to robust automotive systems.

ObjectBox: Optimized for Resource Efficiency

At ObjectBox, we specialize in efficiency that comes from optimized code. Our hearts beat for creating highly efficient and capable software that outperforms alternatives on small and big hardware alike. ObjectBox maximizes speed while minimizing resource use, extending battery life, and reducing CO2 emissions.

With this expertise, we took a unique approach to vector search. The result is not only a vector database that runs efficiently on constrained devices but also one that outperforms server-side vector databases (see first benchmark results; on-device benchmarks coming soon). We believe this is a significant achievement, especially considering that ObjectBox still upholds full ACID properties (guaranteeing data integrity).

Cloud/server vector databases vs. on-device/Edge vector databases

Also, keep in mind that ObjectBox is a fully capable database. It allows you to store complex data objects along with vectors. Thus, you have the full feature set of a database at hand. It empowers hybrid search, traceability, and powerful queries.

Use Cases / App ideas

ObjectBox can be used for a million different things, from empowering generative AI features in mobile apps to predictive maintenance on ECUs in cars to AI-enhanced games. For iOS apps, we expect to see the following on-device AI use cases very soon:

  • Across all categories, we’ll see chat-with-files apps:
      • Travel: Imagine chatting with your favorite travel guide offline, anytime, anywhere. No need to carry bulky paper books or scroll through a long PDF on your mobile.
      • Research: Picture yourself chatting with all the research papers in your field. Easily compare studies and findings, and quickly locate original quotes.
  • Lifestyle:
      • Health: Apps offering personalized recommendations based on scientific research, your preferences, habits, and individual health data, including data tracked from your device, lab results, and doctors’ diagnoses.
  • Productivity: Personal assistants for all areas of life.
      • Family Management: Interact with assistants tailored to specific roles. Imagine a parent’s assistant that monitors school channels, chat groups, emails, and calendars. Its goal is to automatically add events like school plays, remind you about forgotten gym bags, and even suggest birthday gifts for your child’s friends.
      • Professional Assistants: Imagine being a busy sales rep on the go, juggling appointments and travel. A powerful on-device sales assistant can do more than just automation: it can prepare contextual, personalized follow-ups instantly, for example by summarizing talking points, attaching relevant company documents, and even suggesting who to CC in your emails.
  • Educational: Apps featuring “chat-with-your-files” functionality for learning materials and research papers. Going beyond that, they can generate quizzes and practice questions to help people solidify knowledge.

Run the local AI Stack with a Language Model (SLM, LLM)

Recent Small Language Models (SLMs) already demonstrate impressive capabilities while being small enough to run on, e.g., mobile phones. To run a model on-device on an iPhone or a macOS computer, you need a model runtime. On Apple Silicon, the best choice in terms of performance is typically MLX, a framework from Apple machine learning research. It supports the hardware very efficiently, using CPU, GPU, and unified memory.

To summarize, you need these three components to run on-device AI with a semantic index:

  • ObjectBox: vector database for the semantic index
  • Models: an embedding model and a language model matching your requirements
  • MLX as the model runtime

Start building next-generation on-device AI apps today! Head over to our vector search documentation and Swift documentation for details.

The on-device Vector Database for Android and Java

ObjectBox 4.0 is the very first on-device, local vector database for Android and Java developers to enhance their apps with local AI capabilities (Edge AI). A vector database facilitates advanced vector data processing and analysis, such as measuring semantic similarities across different document types like images, audio files, and texts. A classic use case is to enhance a Large Language Model (LLM) or a Small Language Model (SLM, such as Phi-3) with your domain expertise, your proprietary knowledge, and/or your private data. Combining the power of AI models with a specific knowledge base empowers high-quality, perfectly matching results that a generic model simply cannot provide. This is called “retrieval-augmented generation” (RAG). Because ObjectBox works on-device, you can now do on-device RAG with data that never leaves the device and therefore stays 100% private. This is your chance to explore this technology on-device.

Vector Search (Similarity Search)

With this release, it is possible to create a scalable vector index on floating point vector properties. It’s a very special index that uses an algorithm called HNSW (Hierarchical Navigable Small World). It’s scalable because it can find relevant data within millions of entries in a matter of milliseconds.

We pick up the example used in our vector search documentation. In short, we use cities with a location vector to perform proximity search. Here is the City entity and how to define an HNSW index on the location:

Vector objects are inserted as usual (the indexing is done automatically behind the scenes):

To perform a nearest neighbor search, use the new nearestNeighbors(queryVector, maxResultCount) query condition and the new “find with scores” query methods (the score is the distance to the query vector). For example, let’s find the 2 closest cities to Madrid:

Vector Embeddings

In the cities example above, the vectors were straightforward: they represent latitude and longitude. Maybe you already have vector data as part of your data, but often you don’t. So where do you get the vector embeddings of texts, images, video, or audio files from?

For most AI applications, vectors are created by an embedding model. There are plenty of embedding models to choose from, but first you have to decide whether it should run in the cloud or locally. Online embeddings are the easier way to get started and great for initial testing: you can set up an account at your favorite AI provider and create embeddings online.

Depending on how much you care about privacy, you can also run embedding models locally and create your embeddings on your own device. There are a couple of choices for desktop / server hardware, e.g. check these on-device embedding models. For Android, MediaPipe is a good start as it has embedders for text and images.

Updated open source benchmarks 2024 (CRUD)

A new release is also a good occasion to update our open source benchmarks. The Android performance benchmark app provides many more options, but here are the key results:

CRUD is short for the basic operations a database does: create, read, update, and delete. It’s an important metric for the general efficiency of a database.

Disclaimer 1: our focus is “object” performance (you may find a hint for that in our product name 🙂); e.g., relational systems may perform a bit better when you work directly with raw columns and rows.

Disclaimer 2: ObjectBox delete performance was cut off at 800k per second to keep the Y axis within reasonable bounds. The actual measured value was 2.5M deletes per second.

Disclaimer 3: there cannot be enough disclaimers on any performance benchmark. It’s a complex topic where details matter. It’s best to form your own picture for your own use case. We try to provide a fair “arena” with our open source benchmarks, so they can be a starting point for you.

Feedback and Outlook: On-device vector search Benchmarks for Android coming soon

We’re still working on a lot of stuff (as always ;)), and with on-device / local vector search being a completely new technology for Android, we need your feedback, creativity, and support more than ever. We’ll also soon release benchmarks on the vector search. Follow us on LinkedIn, GitHub, or Twitter to stay up to date.