Edge AI refers to decentralized artificial intelligence systems that process data locally on in-store devices (e.g., POS terminals, smart shelves, Raspberry Pis, mobile phones, or cameras) rather than relying on distant cloud servers. Because this architecture works independently of distant cloud servers and internet connectivity, it runs offline and with minimal latency. Both offline capability and speed are critical for applications like fraud detection and checkout automation. Accordingly, IDC emphasizes that 45% of retailers now prioritize “near-the-network” edge deployments, where AI models run locally on in-store servers or IoT devices, balancing cost and performance.
Key Components of Edge AI Systems
For Edge AI to deliver real-time, offline-capable intelligence, its architecture must integrate on-device databases, local processing, and efficient data synchronization. These three pillars ensure seamless AI-powered retail operations without dependence on the cloud, minimizing latency, costs, and privacy concerns.
Edge AI system architecture in retail, integrating local processing, real-time data sync, and various applications like POS or signage
1. On-Device Data Storage & Local Data Sync
Retail generates vast real-time data from IoT sensors, POS transactions, smart cameras, and RFID tags. To ensure instant processing and uninterrupted availability, you need:
On-device data storage: All kinds of devices, from IoT sensors to cameras, capture data. Depending on device capabilities, small on-device databases let this data be stored and used directly on the device.
Local central server: A centralized on-premise device (e.g., a PC, a Raspberry Pi, or more capable hardware) ensures operations continue even if individual devices are resource-limited or offline.
Bi-directional on-premise data sync: Local syncing between devices and with a central on-site server ensures better decisions and fail-safe operations. It keeps all devices up-to-date without internet dependence.
2. Local Data Processing & Real-Time AI Decision-Making
Processing data where it is generated is critical for speed, privacy, and resilience:
On-device AI models: Small, quantized language models (SLMs) like Microsoft’s Phi-3-mini (3.8B parameters, <2GB memory footprint) can run directly on many devices (e.g., tablets and POS systems), enabling real-time fraud detection, checkout automation, and personalized recommendations.
Local on-premise AI models: Larger SLMs or LLMs run on the more capable in-store hardware for security, demand forecasting, or store optimization.
On-device & on-premise vector databases: AI models leverage on-device vector databases to structure and index data, enabling fast similarity searches and real-time, AI-driven insights and decisions (e.g., fraud detection, smart inventory management).
Selective Cloud Sync: Bi-directional cloud data sync extends the on-premise data sync. Select data, such as aggregated insights (e.g., sales trends, shrinkage patterns), payment processing data, and select learnings, is synced with the cloud to enable enterprise-wide analytics & compliance, remote monitoring & additional backup, and optimized centralized decision-making (see the sketch after this list).
Cloud Database & Backend Infrastructure: A cloud-based database acts as the global repository. It integrates data from multiple locations to store aggregated insights & long-term trends for AI model refinement and enterprise reporting, facilitating cross-location comparisons.
Centralized cloud AI model: A centralized cloud AI model is optional for larger setups. It can be used to continuously learn from local insights, refining AI recommendations and operational efficiencies across all connected stores.
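To make the selective cloud sync concrete, here is a minimal, hypothetical C++ sketch (the `SalesRecord` and `aggregateForCloud` names are illustrative, not part of any real API): raw transactions stay on-premise, and only an aggregated, anonymized insight is forwarded to the cloud.

```cpp
#include <string>
#include <vector>

// Hypothetical raw record captured on an in-store device; stays local.
struct SalesRecord {
    std::string productId;
    double amount;
};

// Hypothetical aggregate that is allowed to leave the store.
struct DailyAggregate {
    std::string productId;
    double totalRevenue = 0.0;
    int unitsSold = 0;
};

// Only the sum and count are synced to the cloud; the individual
// transactions never leave the on-premise network.
DailyAggregate aggregateForCloud(const std::vector<SalesRecord>& local,
                                 const std::string& productId) {
    DailyAggregate agg{productId};
    for (const auto& r : local) {
        if (r.productId != productId) continue;
        agg.totalRevenue += r.amount;
        agg.unitsSold += 1;
    }
    return agg;
}
```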
Use Cases of Edge AI for Retailers
Edge AI is unlocking new efficiencies for retailers by enabling real-time, offline-capable intelligence across customer engagement, marketing, in-store operations, and supply chains.
Key applications of Edge AI in retail, driving personalization, operational efficiency, and smarter decision-making.
Enhancing Customer Experiences in Retail Stores with Edge AI – Examples
Edge AI transforms the shopping experience, enabling retailers to offer more streamlined and more personalized services based on real-time data, thereby boosting customer satisfaction and sales. Key benefits include:
Contactless Checkout: AI-driven self-checkouts let customers select products captured by cameras, bypassing the need to scan product codes and streamlining both standard and automated checkout processes. For example, Amazon’s Just Walk Out technology allows customers to pick up items and leave the store without a traditional checkout, enhancing convenience and reducing wait times.
Retail operational excellence and cost optimization with Edge AI – Examples
Edge AI also significantly enhances operational efficiency, especially in-store, reduces losses, and helps lower costs, while at the same time enhancing sustainability:
Loss Prevention: Retail shrink, exacerbated by inflation-driven shoplifting and self-checkout vulnerabilities, costs the industry over $100 billion annually. Advanced sensors and intelligent cameras combined with Edge AI help detect early signs of theft or fraud, allowing security staff to intervene promptly, independently of an internet connection.
Energy Savings: Smart sensors and Edge AI can optimize energy use for lighting, heating, ventilation, water, and more. For example, 45 Broadway, a 32-story office building in Manhattan, implemented an AI system that analyzes real-time data, including temperature, humidity, sun angle, and occupancy patterns, to proactively adjust HVAC settings. This integration led to a 15.8% reduction in HVAC-related energy consumption, saving over $42,000 and cutting carbon emissions by 37 metric tons in just 11 months.
Conclusion: Edge AI as Retail’s Strategic Imperative
Edge AI isn’t just about running AI models locally. It’s about creating an autonomous, resilient system where on-device vector databases, local processing, and hybrid data sync work together. This combination enables real-time retail intelligence while keeping costs low, data private, and operations uninterrupted. To stay ahead, businesses should invest in edge-ready infrastructure built around on-device vector databases and data sync that works on-premise. Those who hesitate risk losing ground to nimble competitors who have already tapped into real-time, in-store intelligence.
Hybrid systems, combining lightning-fast offline-first edge response times with the power of the cloud, are becoming the norm. IDC projects that 78% of retailers will adopt these setups by 2026, saving an average of $3.6 million per store annually. In an inflation-driven market, Edge AI isn’t just a perk – it’s a critical strategy for thriving in the future of retail. By leveraging Edge AI-powered on-device databases, retailers gain the speed, efficiency, and reliability needed to stay competitive in an AI-driven retail landscape.
Ever waited to order or pay while a waiter held their ordering device in the air, searching for a signal? These moments show why offline-first Data Sync is essential. With more and more services relying on the availability of on-device apps, and the IoT market projected to hit $1.1 trillion by 2026, choosing the right solution – particularly online-only versus offline-first data sync – is more crucial than ever. In this blog, we discuss their differences and highlight common Data Sync alternatives.
What is Data Sync?
Data synchronization (Sync) aligns data between two or more devices to maintain consistency over time. It is an essential component in applications ranging from IoT and mobile apps to cloud computing. Challenges in data synchronization include asynchrony, conflicts, and managing data across flaky networks.
Data Sync vs. Data Replication
Data Synchronization is often confused with Data Replication. Nevertheless, they serve different purposes:
Data Replication: A unidirectional process (it works in one direction only) that duplicates data across storage locations to ensure availability and prevent loss. It is simple but limited in application and efficiency, and it lacks conflict management.
Data Synchronization: A bidirectional process that harmonizes all or a subset of data between two or more devices. It ensures consistency across devices and entails conflict resolution. It is inherently more complex but also more flexible.
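The practical difference shows up in conflict handling. Here is a minimal, hypothetical C++ sketch of last-write-wins merging, one of the simplest conflict-resolution strategies a bidirectional sync needs (replication has no equivalent step; real sync protocols typically track versions rather than wall-clock time):

```cpp
#include <cstdint>
#include <string>

// Hypothetical synced record carrying a modification timestamp.
struct Record {
    std::string value;
    int64_t modifiedAtMs;  // Unix time in milliseconds
};

// Last-write-wins: if the same record was edited on two devices while
// offline, keep the more recent edit and discard the other.
Record resolveConflict(const Record& local, const Record& remote) {
    return (remote.modifiedAtMs > local.modifiedAtMs) ? remote : local;
}
```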
Online vs Offline Solutions: Why Offline Sync Matters
Online-only synchronization solutions rely entirely on cloud infrastructure, requiring a stable internet connection to function. While these tools offer simplicity and scalability, their dependency on constant cloud connectivity brings limitations: online Data Sync solutions cannot guarantee response rates, their speed varies with the network, and they do not work offline or in on-premise settings. Using an online sync solution often entails sharing the data and might not comply with data privacy requirements – so do read the terms and conditions.
Offline-first solutions (offline Sync) focus on local data storage and processing, ensuring the app remains fully functional even without an internet connection. When a network is available, the app synchronizes seamlessly with a server, the cloud, or other devices as needed. These solutions are ideal for on-premise scenarios with unreliable or no internet access, mission-critical applications that must always operate, real-time and high-performance use cases, as well as situations requiring high data privacy and data security compliance.
A less discussed but, in our view, equally relevant point is sustainability. While there may be exceptions depending on the use case, for most applications offline-first solutions are more resource-efficient and therefore more sustainable. If CO2 footprint or battery usage is a concern for you, you might want to look into offline-first Data Sync alternatives.
Couchbase
Sync: The CE Sync is a bare minimum and typically not usable; self-hosted Sync with Couchbase Servers is available as part of their Enterprise offering. ✅ as part of the Enterprise offering; gets expensive quickly.
Products: Edge: Couchbase Lite; Server: Couchbase
Data model: Multi-model NoSQL document-oriented database
Platforms: Couchbase Lite: iOS, Android, macOS, Linux, Windows, Raspbian and Raspberry Pi OS. Couchbase Sync Gateway: Red Hat Enterprise Linux (RHEL) 9.x, Alma Linux 9.x, Rocky Linux 9.x, Ubuntu, Debian (11.x, 12.x), Windows Server 2022
Languages: .NET, C, Go, Java, JavaScript, Kotlin, PHP, Python, Ruby, Scala
Open source: ✅ Couchbase Lite is available under different licenses; the open-source Community Edition does not get regular updates and misses many features, especially around Sync (e.g., it does not include Delta Sync, making it slow and expensive).
Cost: Typically requires Couchbase servers; quickly gets expensive.

MongoDB (with Realm)
Products & data model: MongoDB: NoSQL document store; Realm DB: embedded NoSQL database
Platforms: MongoDB: Linux, OS X, Solaris, Windows. Realm DB: Android, iOS
Languages: more than 20 languages, e.g. Java, C, C#, C++
Open source: ✅ MongoDB changed its license from open source (AGPL) to MongoDB Inc.’s Server Side Public License (SSPL) in 2018. Realm DB is open source under the Apache 2.0 License. The Data Sync was proprietary.

SQLite
While SQLite does not offer a sync solution out of the box, various vendors have built sync solutions on top of, or integrated with, SQLite, giving them offline persistence.
Key Considerations for Choosing a Data Sync Solution
When selecting a synchronization solution, consider:
Connectivity Requirements: Will the application function in offline environments? How will it handle flaky network conditions? What is the user experience under intermittent connectivity?
Data Privacy & Security: How critical is it that sensitive data remains local? Which data compliance requirements apply? How important is it that data is not breached?
Scalability and Performance: What are the expected data loads and network constraints? How important is speed for the users? Is there any need to guarantee QoS parameters? How much will the cloud and networking costs be?
Conflict Resolution: How does the solution handle data conflicts?
Delta Sync: Does the solution always synchronize all data or only changes (data delta)? Can a subset of data be synchronized? How efficient is the Sync protocol (affecting costs and speed)?
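To illustrate why delta sync matters for cost and speed, here is a small, hypothetical C++ sketch that transmits only changed fields instead of the full record:

```cpp
#include <map>
#include <string>

using FieldMap = std::map<std::string, std::string>;

// Hypothetical delta computation: compare the last-synced snapshot with
// the current state and return only the fields that actually changed.
FieldMap computeDelta(const FieldMap& lastSynced, const FieldMap& current) {
    FieldMap delta;
    for (const auto& [field, value] : current) {
        auto it = lastSynced.find(field);
        if (it == lastSynced.end() || it->second != value) {
            delta[field] = value;  // new or modified field only
        }
    }
    return delta;  // a non-delta sync would send `current` in full
}
```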
The Shift Towards Edge Computing
The trend toward Edge Computing highlights the growing preference for offline-first solutions. By processing and storing data closer to its source, Edge Computing reduces cloud dependency, enhances privacy, and improves efficiency. Data synchronization plays an important role in this shift, ensuring seamless operation across decentralized networks.
Offline and online synchronization solutions each have their merits, but the rise of edge computing and data privacy concerns has propelled offline Sync to the forefront. Developers must assess their application’s unique requirements to select the most appropriate synchronization method. As the industry evolves, hybrid and offline-first solutions are going to dominate, offering the best balance of functionality, privacy, and performance.
Can Small Language Models (SLMs) really do more with less? In this article, we discuss the unique strengths of SLMs, the top SLMs, their integration with local vector databases, and how SLMs + local vector databases are shaping the future of AI, prioritizing privacy, immediacy, and sustainability.
Now, Small Language Models (SLMs) are stepping into the spotlight, enabling sophisticated AI to run directly on devices (local AI) like your phone, laptop, or even a smart home assistant. These models not only reduce costs and energy consumption but also bring the power of AI closer to the user, ensuring privacy and real-time performance.
What Are Small Language Models (SLMs)?
LLMs are designed to understand and generate human language. Small Language Models (SLMs) are compact versions of LLMs, so the key difference between SLMs and LLMs is their size. While LLMs like GPT-4 are designed with hundreds of billions of parameters, SLMs use only a fraction of that. There is no strict definition of SLM vs. LLM yet: at this moment, SLM sizes range from single-digit millions of parameters up to several billion. Some authors suggest 8B parameters as the upper limit for SLMs – which, in our view, opens up the question of whether we need a definition for Tiny Language Models (TLMs).
Advantages and disadvantages of SLMs
According to Deloitte’s latest tech trends report, SLMs are gaining increasing importance in the AI landscape due to their cost-effectiveness, efficiency, and privacy advantages. Small Language Models (SLMs) bring a range of benefits, particularly for local AI applications, but they also come with trade-offs.
Benefits of SLMs
Privacy: By running on-device, SLMs keep sensitive information local, eliminating the need to send data to the cloud.
Offline Capabilities: Local AI powered by SLMs functions seamlessly without internet connectivity.
Speed: SLMs require less computational power, enabling faster inference and smoother performance.
Sustainability: With lower energy demands for both training and operation, SLMs are more environmentally friendly.
Accessibility: Affordable training and deployment, combined with minimal hardware requirements, make SLMs accessible to users and businesses of all sizes.
Limitations of SLMs
The main disadvantage concerns the flexibility and quality of SLM responses: SLMs typically cannot tackle as broad a range of tasks as LLMs at the same quality. However, in certain areas they already match their larger counterparts. For example, the Artificial Analysis AI Review 2024 highlights that GPT-4o mini (July 2024) has a similar Quality Index to GPT-4 (March 2023), while being roughly 100x cheaper.
Small Language Models vs LLMs
A recent study comparing various SLMs highlights the growing competitiveness of these models, demonstrating that in specific tasks, SLMs can achieve performance levels comparable to much larger models.
Overcoming limitations of SLMs
A combination of SLMs with local vector databases is a game-changer. With a local vector database, you can not only broaden the variety of tasks and raise the quality of answers, but do so precisely for the areas that are relevant to the use case you are solving. E.g., you can add internal company knowledge, specific product manuals, or personal files to the SLM. In short, a local vector database lets you provide the SLM with context and additional knowledge that was not part of its training. In this combination, an SLM can already today be as powerful as an LLM for your specific case and context (your tasks, your apps, your business). We’ll dive into this a bit more later.
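As a toy illustration of this pattern (a minimal form of RAG), here is a self-contained C++ sketch; the embeddings are hardcoded and the model call is stubbed, where a real setup would use an embedding model and an on-device vector database with an HNSW index:

```cpp
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Doc {
    std::string text;
    std::vector<float> embedding;  // produced by an embedding model in practice
};

// Cosine similarity between two equal-length vectors.
float cosine(const std::vector<float>& a, const std::vector<float>& b) {
    float dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-9f);
}

int main() {
    // Local knowledge base; a vector database would index this instead.
    std::vector<Doc> docs = {
        {"Product manual: reset the device by holding the power button.", {0.9f, 0.1f}},
        {"HR policy: vacation requests need two weeks notice.", {0.1f, 0.9f}},
    };
    std::vector<float> query = {0.85f, 0.15f};  // embedding of "how do I reset it?"

    // Retrieve the most relevant chunk of local knowledge...
    const Doc* best = &docs[0];
    for (const auto& d : docs)
        if (cosine(d.embedding, query) > cosine(best->embedding, query)) best = &d;

    // ...and prepend it as context to the SLM prompt (model call omitted).
    std::cout << "Context: " << best->text << "\nQuestion: how do I reset it?\n";
}
```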
In the following, we’ll have a look at the current landscape of SLMs – including the top SLMs – in a handy comparison matrix.
Selected notes from the comparison matrix:

Gemma: “The Gemma performs well on the Open LLM leaderboard. But if we compare Gemma-2b (2.51B) with Phi-2 (2.7B) on the same benchmarks, Phi-2 easily beats Gemma-2b.”

Phi-3-mini: Processes about 12 tokens per second on an iPhone 14. The H2O Danube3 benchmarks show that Phi-3 delivers top performance among similar-size models, oftentimes beating Danube3.

OpenELM (Apple; 270M, 450M, 1.1B, 3B): Released under an Apple license that in practice reads much like a permissive open-source license (excluding, of course, use of the Apple logo). OpenELM 1.1B shows 1.28% (zero-shot tasks), 2.36% (OpenLLM Leaderboard), and 1.72% (LLM360) higher accuracy compared to OLMo 1.2B, while using 2x less pretraining data.

One entry claims “competitive performance compared to popular models of similar size across a wide variety of benchmarks including academic benchmarks, chat benchmarks, as well as fine-tuning benchmarks.”

GPT-4o mini: Scores 82% on MMLU and currently outperforms GPT-4 on chat preferences in the LMSYS leaderboard. It surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o.

Gemini 1.5 Flash-8B: This smaller and faster variant of 1.5 Flash features half the price, twice the rate limits, and lower latency on small prompts compared to its forerunner, while nearly matching 1.5 Flash on many key benchmarks.

Another entry: MMLU score of 69.4% and a Quality Index across evaluations of 53. Faster than average, with an output speed of 157.7 tokens per second, low latency (0.37s TTFT), and a small context window (128k).

Mistral 7B: MMLU score of 60.1%. Mistral 7B significantly outperforms Llama 2 13B on all metrics and is on par with Llama 34B (since Llama 2 34B was not released, results are reported on Llama 34B). It is also vastly superior in code and reasoning benchmarks, and was the best model for its size in autumn 2023.

Ministral 3B / 8B: Claimed by Mistral to be the world’s best edge models. Ministral 3B has an MMLU score of 58% and a Quality Index across evaluations of 51; Ministral 8B has an MMLU score of 59% and a Quality Index of 53.

Granite 3.0 8B Instruct: Matches leading similarly-sized open models on academic benchmarks while outperforming those peers on benchmarks for enterprise tasks and safety.

Another entry: Quality Index across evaluations of 77, MMLU of 85%, and support for a 16K token context window, ideal for long-text processing. Outperforms Phi-3 and, on many metrics, outperforms or is comparable with Qwen 2.5 and GPT-4o mini.
SLM Use Cases – best choice for running local AI
SLMs are perfect for on-device or local AI applications. On-device / local AI is needed in scenarios that involve hardware constraints, demand real-time or guaranteed response rates, require offline functionality or need to comply with strict data privacy and security needs. Here are some examples:
Mobile Applications: Chatbots or translation tools that work seamlessly on phones even when not connected to the internet.
IoT Devices: Voice assistants, smart appliances, and wearable tech running language models directly on the device.
Healthcare: Embedded in medical devices, SLMs allow patient data to be analyzed locally, preserving privacy while delivering real-time diagnostics.
Industrial Automation: SLMs process language on edge devices, increasing uptime and reducing latency in robotics and control systems.
By processing data locally, SLMs not only enhance privacy but also ensure reliable performance in environments where connectivity may be limited.
On-device Vector Databases and SLMs: A Perfect Match
Imagine a digital assistant on your phone that goes beyond generic answers, leveraging your company’s (and/or your personal) data to deliver precise, context-aware responses – without sharing this private data with any cloud or AI provider. This becomes possible when Small Language Models are paired with local vector databases. Using a technique called Retrieval-Augmented Generation (RAG), SLMs access the additional knowledge stored in the vector database, enabling them to provide personalized, up-to-date answers. Whether you’re troubleshooting a problem, exploring business insights, or staying informed on the latest developments, this combination ensures tailored and relevant responses.
Key Benefits of using a local tech stack with SLMs and a local vector database
Privacy. SLMs inherently provide privacy advantages by operating on-device, unlike larger models that rely on cloud infrastructure. To maintain this privacy advantage when integrating additional data, a local vector database is essential. ObjectBox is a leading example of a local database that ensures sensitive data remains local.
Personalization. Vector databases give you a way to enhance the capabilities of SLMs and adapt them to your needs. For instance, you can integrate internal company data or personal device information to offer highly contextualized outputs.
Quality. Using additional context-relevant knowledge reduces hallucinations and increases the quality of the responses.
Traceability. As long as you store metadata alongside the vector embeddings, answers drawing on knowledge from the local vector database can cite their sources (see the sketch after this list).
Offline-capability. Deploying SLMs directly on edge devices removes the need for internet access, making them ideal for scenarios with limited or no connectivity.
Cost-Effectiveness. By retrieving and caching the most relevant data to enhance the response of the SLM, vector databases reduce the workload of the SLM, saving computational resources. This makes them ideal for edge devices, like smartphones, where power and computing resources are limited.
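As a tiny illustration of the traceability point above, here is a hypothetical layout for a knowledge chunk stored in a local vector database (all names are illustrative):

```cpp
#include <string>
#include <vector>

// Keeping source metadata next to each embedding means every retrieved
// chunk can be traced back to where it came from (names are illustrative).
struct KnowledgeChunk {
    std::vector<float> embedding;  // vector used for similarity search
    std::string text;              // the chunk of knowledge itself
    std::string sourceUri;         // e.g. a file path or document URL
    int pageNumber = 0;            // position within the source document
};
```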
Use case: Combining SLMs and local Vector Databases in Robotics
Imagine a warehouse robot that organizes inventory, assists workers, and ensures smooth operations. By integrating SLMs with local vector databases, the robot can process natural language commands, retrieve relevant context, and adapt its actions in real time – all without relying on cloud-based systems.
For example:
A worker says, “Can you bring me the red toolbox from section B?”
The SLM processes the request and consults the vector database, which stores information about the warehouse layout, inventory locations, and specific task history.
Using this context, the robot identifies the correct toolbox, navigates to section B, and delivers it to the worker.
The future of AI is – literally – in our hands
AI is becoming more personal, efficient, and accessible, and Small Language Models are driving this transformation. By enabling sophisticated local AI, SLMs deliver privacy, speed, and adaptability in ways that larger models cannot. Combined with technologies like vector databases, they make it possible to provide affordable, tailored, real-time solutions without compromising data security. The future of AI is not just about doing more – it’s about doing more where it matters most: right in your hands.
After 6 years and 21 incremental “zero dot” releases, we are excited to announce the first major release of ObjectBox, the high-performance embedded database for C++ and C. As a faster alternative to SQLite, ObjectBox delivers more than just speed – it’s object-oriented, highly efficient, and offers advanced features like data synchronization and vector search. It is the perfect choice for on-device databases, especially in resource-constrained environments or in cases with real-time requirements.
What is ObjectBox?
ObjectBox is a free embedded database designed for object persistence. With “object” referring to instances of C++ structs or classes, it is built from scratch for objects, with zero overhead — no SQL or ORM layer is involved, resulting in outstanding object performance.
The ObjectBox C++ database offers advanced features, such as relations and ACID transactions, to ensure data consistency at all times. Store your data privately on-device across a wide range of hardware, from low-profile ARM platforms and mobile devices to high-speed servers. It’s a great fit for edge devices, iOS or Android apps, and server backends. Plus, ObjectBox is multi-platform (any POSIX will do, e.g. iOS, Android, Linux, Windows, or QNX) and multi-language: e.g., on mobile, you can work with Kotlin, Java or Swift objects. This cross-platform compatibility is no coincidence, as ObjectBox Sync will seamlessly synchronize data across devices and platforms.
Why should C and C++ Developers care?
ObjectBox deeply integrates with C and C++. Persisting C or C++ structs is as simple as a single line of code, with no need to interact with unfamiliar database APIs that disrupt the natural flow of C++. There’s also no data transformation (e.g. SQL, rows & columns) required, and interacting with the database feels seamless and intuitive.
As a C or C++ developer, you likely value performance. ObjectBox delivers exceptional speed (at least we haven’t tested against a faster DB yet). Several hundred thousand CRUD operations per second on commodity hardware are no sweat. Our unique advantage: if you want to, you can read raw objects from “mmapped” memory (directly from disk!). This offers true “zero copy” data access without any throttling layers between you and the data.
Finally, CMake support makes integration straightforward, starting with FetchContent support so you can easily get the library. But there’s more: we offer code generation for entity structs, which takes only a single CMake command.
“ObjectBox++”: A quick Walk-Through
Once ObjectBox is set up for CMake, the first step is to define the data model using FlatBuffers schema files. FlatBuffers is a building block within ObjectBox and is also widely used in the industry. For those familiar with Protocol Buffers, FlatBuffers are its parser-less (i.e., faster) cousin. Here’s an example of a “Task” entity defined in a file named “task.fbs”:
```
table Task {
    id: ulong;
    text: string;
}
```
And with that file, you can generate code using the following CMake command:
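A minimal sketch of such a command, assuming the objectbox-generator CMake integration and an application target named `myapp` (the target name is an assumption):

```cmake
# Generate the ObjectBox C++ glue code for the schema file and
# attach the generated sources to the target (target name assumed).
add_obx_schema(TARGET myapp SCHEMA_FILES task.fbs)
```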
Among other things, code generation creates a C++ struct for Task data, which is used to interact with the ObjectBox API. The struct is a straightforward C++ representation of the data model:
```cpp
struct Task {
    obx_id id;  // uint64_t
    std::string text;
};
```
The code generation also provides some internal “glue code” including the method create_obx_model() that defines the data model internally. With this, you can open the store and insert a task object in just three lines of code:
```cpp
obx::Store store(create_obx_model());       // Create the database
obx::Box<Task> box(store);                  // Main API for a type
obx_id id = box.put({.text = "Buy milk"});  // Object is persisted
```
And that’s all it takes to get a database running in C++. This snippet essentially covers the basics of the getting started guide and this example project on GitHub.
Vector Embeddings for C++ AI Applications
Even if you don’t have an immediate use case, ObjectBox is fully equipped for vectors and AI applications. As a “vector database,” ObjectBox is ready for use in high-dimensional vector similarity searches, employing the HNSW algorithm for highly scalable performance beyond millions of vectors.
Vectors can represent semantics within a context (e.g. objects in a picture) or even documents and paragraphs to “capture” their meaning. This is typically used for RAG (Retrieval-Augmented Generation) applications that interact with LLMs. Basically, RAG allows AI to work with specific data, e.g. documents of a department or company and thus individualizes the created content.
To quickly illustrate vector search, imagine a database of cities including their location as a 2-dimensional vector. To enable nearest neighbor search, all you need to do is to define a HNSW index on the location property, which enables the nearestNeighbors query condition used like this:
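The following is a minimal sketch of such a query, assuming a `City` entity whose `location` property carries the HNSW index, plus generated `City_` metadata (exact names and signatures may differ):

```cpp
// Find the two cities nearest to a given 2-D location vector.
// Assumes City.location is an HNSW-indexed float vector property.
float target[] = {52.52f, 13.405f};  // query location
obx::Box<City> cityBox(store);
obx::Query<City> query =
    cityBox.query(City_::location.nearestNeighbors(target, 2)).build();
std::vector<std::unique_ptr<City>> nearest = query.find();
```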
This release marks an important milestone for ObjectBox, delivering significant improvements in speed, usability, and features. We’re excited to see how these enhancements will help you create even better, feature-rich applications.
As always, we’re here to listen to your feedback and are committed to continually evolving ObjectBox to meet your needs. Don’t hesitate to reach out to us at any time.
P.S. Are you looking for a new job? We have a vacant C++ position to build the future of ObjectBox with us. We are looking forward to receiving your application! 🙂
What is Edge AI?
Edge AI (also: “on-device AI”, “local AI”) brings artificial intelligence to applications at the network’s edge, such as mobile devices, IoT, and other embedded systems (e.g., interactive kiosks). Edge AI combines AI with Edge Computing, a decentralized paradigm designed to bring computing as close as possible to where data is generated and utilized.
What is Cloud AI?
Cloud AI, by contrast, refers to an architecture where applications rely on data and AI models hosted on distant cloud infrastructure. The cloud offers extensive storage and processing power.
An Edge for Edge AI: The Cloud
Example: Edge-Cloud AI setup with a secure, two-way Data Sync architecture
Today, there is a broad spectrum of application architectures combining Edge Computing and Cloud Computing, and the same applies to AI. For example, “Apple Intelligence” performs many AI tasks directly on the phone (on-device AI) while sending more complex requests to a private, secure cloud. This approach combines the best of both worlds – with the cloud giving an edge to the local AI rather than the other way around. Let’s have a look at the advantages on-device AI brings to the table.
Faster Response Rates. Processing data locally cuts down travel time for data, speeding up responses.
Increased Availability. On-device processing makes apps fully offline-capable. Operations can continue smoothly during internet or data center disruptions.
Sustainability/costs. Keeping data where it is produced and used minimizes data transfers, cutting networking costs and reducing energy consumption—and with it, CO2 emissions.
Challenges of Local AI on the Edge
Data Storage and Processing: Local AI requires an on-device database that runs on a wide variety of edge devices (mobile, IoT, embedded) and performs complex tasks such as vector search locally on the device with minimal resource consumption.
Data Sync: It’s vital to keep data consistent across devices, necessitating robust bi-directional Data Sync solutions. Implementing such a solution oneself requires specialized tech talent, is non-trivial and time-consuming, and will be an ongoing maintenance factor.
Small Language Models: SLMs like Phi-2 (Microsoft Research), TinyStories (HuggingFace), and Mini-Giants (arXiv) are efficient and resource-friendly but often need enhancement with local vector databases for better response accuracy. An on-device vector database enables on-device semantic search with private, contextual information, reducing latency while enabling faster and more relevant outputs. For complex queries requiring larger models, a database that works both on-device and in the cloud (or on a large on-premise server) provides scalability and flexibility for on-device AI applications.
On-device AI Use Cases
On-device AI is revolutionizing numerous sectors by enabling real-time data processing wherever and whenever it’s needed. It enhances security systems, improves customer experiences in retail, supports predictive maintenance in industrial environments, and facilitates immediate medical diagnostics. On-device AI is essential for personalizing in-car experiences, delivering reliable remote medical care, and powering personal AI assistants on mobile devices—always keeping user privacy intact.
Personalized In-Car Experience: Features like climate control, lighting, and entertainment can be adjusted dynamically in vehicles based on real-time inputs and user habits, improving comfort and satisfaction. Recent studies, such as one by MHP, emphasize the increasing consumer demand for these AI-enabled features. This demand is driven by a desire for smarter, more responsive vehicle technology.
Remote Care: In healthcare, on-device AI enables on-device data processing that’s crucial for swift diagnostics and treatment. This secure, offline-capable technology aligns with health regulations like HIPAA and boosts emergency response speeds and patient care quality.
Personal AI Assistants: Today’s personal AI assistants often depend on the cloud, raising privacy issues. However, some companies, including Apple, are shifting towards on-device processing for basic tasks and secure, anonymized cloud processing for more complex functions, enhancing user privacy.
ObjectBox for On-Device AI – an edge for everyone
The continuum from Edge to Cloud
ObjectBox supports AI applications from Edge to cloud. It stands out as the first on-device vector database, enabling powerful Edge AI on mobile, IoT, and other embedded devices with minimal hardware needs. It works offline and supports efficient, private AI applications with a seamless bi-directional Data Sync solution, completely on-premise, and optional integration with MongoDB for enhanced backend features and cloud AI.
Interested in extending your AI to the edge? Let’s connect to explore how we can transform your applications.
In today’s fast-paced, decentralized world, valuable data is generated by everything, everywhere, all at once. To harness the vast opportunities this data offers data-driven organizations and AI applications, you need to be able to access the data and seamlessly distribute it when and where it’s needed.
The key to achieving this lies in efficient, offline-first on-device data storage, reliable bi-directional data sync, and a scalable data management backend in the cloud. In other words, you need the infrastructure to manage data flows bi-directionally to tap into fresh data throughout your organization, processes, and applications at the right time.
Together, MongoDB and ObjectBox provide developers with a robust solution to empower seamless workload and data flows on the edge and from the edge to the cloud. ObjectBox syncs data bi-directionally across devices even without internet access, and syncs back to the cloud and MongoDB when connected. With ObjectBox, devices stay in sync even in environments with intermittent connectivity, high latency, or flaky networks. Capture and unlock the value of all your data, anytime, anywhere, without relying on a constant internet connection, with MongoDB + ObjectBox.
Seamless Offline-First Data Sync for Edge Devices
Maintaining service continuity is essential, even when devices are offline. Your customers, users, operations, and employees need to be able to rely on essential data at all times. That’s where ObjectBox comes in. It comprises two key components: the ObjectBox Database and ObjectBox Data Sync.
The ObjectBox Database is a lightweight, on-device solution that is highly resource-efficient and fast on restricted hardware like mobile, IoT, and embedded devices, and even in the cloud.
ObjectBox Data Sync enables seamless bi-directional data synchronization between devices. By handling only incremental changes in a compressed binary format, ObjectBox Sync ensures minimal data transfer, automatic conflict resolution, and a seamless user experience even in fluctuating network conditions. This approach effectively simplifies the development process by offering complex sync logic via easy native-language APIs, allowing developers to focus on core app functionality.
Once a connection is available, ObjectBox Data Sync instantly synchronizes changes with MongoDB, providing real-time, bi-directional data sync between edge devices and MongoDB’s robust cloud backend.
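On the client side, enabling this takes only a few lines. A sketch based on the ObjectBox Sync client API for C++ (the server URL and credentials are placeholders):

```cpp
// Connect the local store to a sync server; URL and credentials are
// placeholders. Once started, the client pushes local changes and pulls
// remote ones automatically whenever a connection is available.
std::shared_ptr<obx::SyncClient> client =
    obx::Sync::client(store, "ws://10.0.0.5:9999", obx::SyncCredentials::none());
client->start();  // begins bi-directional, incremental synchronization
```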
The Benefits of Offline-First and Real-Time Data Sync with MongoDB and ObjectBox:
Resource-efficiency & Highspeed: ObjectBox excels at consuming minimal computational resources (CPU, power, memory, …) while delivering data persistence speed that is typically on-par with in-memory caches for read operations.
Offline-First Operation: Ensure continuous app performance, even with no internet connection. ObjectBox stores and syncs data bi-directionally on the edge and additionally with MongoDB once connected.
Real-Time Data Sync: Get reliable, bi-directional data synchronization across devices and MongoDB, enabling real-time updates and data consistency.
Scalable Edge: Easily handle 100k operations / second on a single device. Host the Sync server on any edge device (like a phone) and easily handle 3M clients with a three-node cluster.
Scalable Cloud Backend: With MongoDB, businesses can scale their applications to handle growing data and performance demands, seamlessly syncing data between millions of devices and the cloud.
Flexible Setup Scenarios: Tailor Data Sync to Your Needs
ObjectBox and MongoDB offer flexible setup scenarios to meet different application needs. The two main setup options are the central sync and the edge sync setup.
The Central Sync Setup syncs data between edge devices and MongoDB in the cloud, providing centralized data management while retaining offline-first functionality. The ObjectBox Sync Server runs in the cloud or on-premise.
The Edge Sync Setup allows devices to operate and sync data efficiently offline between ObjectBox instances within an edge (e.g., within one location or within a car). When reconnected, changes are synchronized back to MongoDB, making this setup ideal for environments with intermittent connectivity or distributed devices that need to function independently while syncing back to the cloud when possible.
This structure offers a flexible approach to integrating edge and cloud systems, empowering organizations to choose the setup that best fits their specific use case. More details.
Use Cases for MongoDB + ObjectBox:
Data-Driven Organizations: In a data-driven organization, every decision relies on access to relevant, up-to-date data. ObjectBox enables real-time data collection and synchronization from edge devices, ensuring access to critical data, even when devices are intermittently connected. This streamlines operations, improves decision-making, and enhances analysis across distributed teams and IoT systems. With MongoDB’s scalable cloud infrastructure, decentralized data integrates seamlessly with the cloud backend for efficient management.
Point-of-Sale (PoS) & Retail Edge Computing: A seamless customer experience and the ability to keep selling and never lose a transaction, even during internet outages, are essential for PoS systems / in retail. ObjectBox enables offline-first data storage and syncing for PoS systems, allowing transactions to be processed locally, even without internet connectivity. When connectivity returns, ObjectBox syncs transaction data back to MongoDB in real time, ensuring data consistency across multiple locations. Retailers can then leverage MongoDB’s analytics to gain insights into customer behavior and optimize inventory management.
Software-Defined Vehicle (SDV) & Connected Cars: Modern vehicles generate vast amounts of data from sensors and onboard systems. ObjectBox enables efficient on-device storage and processing, providing real-time access to data for navigation, diagnostics, and infotainment systems. ObjectBox Data Sync ensures that local data is synced back to MongoDB when connectivity is available, supporting centralized analytics, fleet management, and predictive maintenance, optimizing performance and safety while enhancing the user experience.
Manufacturing & Smart Shopfloor Apps: In smart factories, machines and sensors continuously generate data that must be analyzed and processed in real time. ObjectBox enables local data storage and fast data sync on-premise without the necessity for an Internet connection, ensuring that critical systems that are not connected to the Internet can run smoothly on-site. With a connected instance, ObjectBox takes care of synchronizing this data with the cloud and MongoDB for further analysis and central dashboards.
AI-Applications with On-device Vector Search: ObjectBox is the first and only on-device vector database, empowering developers to run AI locally on mobile, IoT, embedded, and other commodity devices (Edge AI). In combination with a Small Language Model (SLM), this allows developers to build local AI applications (e.g. RAG, genAI) that run directly on the device—without needing a cloud connection. By syncing with MongoDB, businesses can combine the power of on-device AI with centralized cloud data for even greater insights and performance. This is especially beneficial in scenarios requiring real-time decision-making, such as personalized customer experiences and predictive maintenance.
In today’s data-driven world, a data-first strategy requires seamless integration between edge and cloud data management. The combination of MongoDB and ObjectBox unlocks the full potential of your data. MongoDB’s powerful cloud platform, together with ObjectBox’s efficient on-device database and offline-first capabilities, is ideal for capturing the value of your data from anywhere, including distributed edge devices where valuable data is generated all the time. This partnership empowers businesses to seamlessly handle decentralized data, enabling fast and reliable operations at the edge while syncing back to the cloud for centralized management. Whether on IoT devices, mobile, embedded systems, or commodity hardware, ObjectBox and MongoDB ensure optimal performance everywhere. From remote areas to bad networks, our joint solution keeps data flowing reliably between the edge and the MongoDB backend, even when connectivity or nodes are lost.