Data Sync Alternatives: Offline vs. Online Solutions

Ever waited to order or pay while the waiter held their ordering device up in the air, hunting for a signal? These moments show why offline-first Data Sync is essential. With more and more services relying on the availability of on-device apps, and the IoT market projected to hit $1.1 trillion by 2026, choosing the right solution – particularly between online-only and offline-first Data Sync – is more crucial than ever. In this blog, we discuss their differences and highlight common Data Sync alternatives.

What is Data Sync?

Data synchronization (Sync) aligns data between two or more devices to maintain consistency over time. It is an essential component in applications ranging from IoT and mobile apps to cloud computing. Challenges in data synchronization include asynchrony, conflicts, and managing data across flaky networks.

Data Sync vs. Data Replication

Data Synchronization is often confused with Data Replication. Nevertheless, they serve different purposes:

  • Data Replication: A unidirectional process (works in one direction only) that duplicates data across storage locations to ensure availability and prevent loss. It is simple but limited in application and efficiency, and lacks conflict management.
  • Data Synchronization: A bidirectional process that harmonizes all or a subset of data between two or more devices. It ensures consistency across devices and entails conflict resolution. It is inherently more complex but also more flexible.

Online vs Offline Solutions: Why Offline Sync Matters

Online-only synchronization solutions rely entirely on cloud infrastructure, requiring a stable internet connection to function. While these tools offer simplicity and scalability, their dependency on constant cloud connectivity brings limitations: online Data Sync solutions cannot guarantee response rates, and their speed varies with the network. They do not work offline or in on-premise settings. Using an online Sync solution often entails sharing data with the provider and might not comply with data privacy requirements. So, do read the terms and conditions.

Offline-first solutions (offline Sync) focus on local data storage and processing, ensuring the app remains fully functional even without an internet connection. When a network is available, the app synchronizes seamlessly with a server, the cloud, or other devices as needed. These solutions are ideal for on-premise scenarios with unreliable or no internet access, mission-critical applications that must always operate, real-time and high-performance use cases, as well as situations requiring high data privacy and data security compliance.
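
To make the offline-first pattern concrete, here is a minimal Python sketch (purely illustrative and not tied to any particular product; the class and the injected is_online/push hooks are hypothetical): writes always land in a local store first and are queued, and a sync step pushes the queued changes whenever connectivity happens to be available.

```python
import sqlite3, time, uuid

class OfflineFirstStore:
    """Toy offline-first store: every write lands locally first and is queued;
    sync_pending() pushes queued changes when a network is available."""

    def __init__(self, path="local.db", is_online=lambda: False, push=lambda change: None):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS notes (id TEXT PRIMARY KEY, body TEXT, updated REAL)")
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (id TEXT, body TEXT, updated REAL)")
        self.is_online = is_online   # injected connectivity check (assumption)
        self.push = push             # injected upload function (assumption)

    def save_note(self, body, note_id=None):
        note_id = note_id or str(uuid.uuid4())
        now = time.time()
        # 1) The local write succeeds regardless of connectivity.
        self.db.execute("REPLACE INTO notes VALUES (?, ?, ?)", (note_id, body, now))
        # 2) Remember the change so it can be synced later.
        self.db.execute("INSERT INTO outbox VALUES (?, ?, ?)", (note_id, body, now))
        self.db.commit()
        return note_id

    def sync_pending(self):
        if not self.is_online():
            return 0                 # stay fully functional offline
        rows = self.db.execute("SELECT id, body, updated FROM outbox").fetchall()
        for row in rows:
            self.push({"id": row[0], "body": row[1], "updated": row[2]})
        self.db.execute("DELETE FROM outbox")
        self.db.commit()
        return len(rows)
```

A real Sync product layers conflict resolution, delta encoding, retries, and security on top of this basic idea.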

A less discussed but, in our view, also relevant point is sustainability. While there might be exceptions depending on the use case, for most applications offline-first solutions are more resource-efficient and therefore more sustainable. If CO2 footprint or battery usage is a concern for you, you might want to look into offline-first Data Sync alternatives.

Now, let’s have a look at current options:

Data Sync Alternatives

For each solution, the overview below lists the company behind it, the type (online vs. offline-first), offline support, self-hosted and decentralized Sync options, the database and type of DB, supported OS/platforms and languages, open-source components and license, other considerations, and the company's country.

Firebase (Google; Firebase was acquired by Google in 2014)
  • Type: Online
  • Offline Support: Local cache only, no persistence, syncs when online
  • Database: Cloud: Firebase Realtime Database; Edge: only caching, no DB (called Firestore)
  • Type of DB: Document store
  • OS/Platforms: iOS, Android, Web
  • Languages: Java, JavaScript, Objective-C, Swift, Kotlin, C++, Dart, C#, Python, Go, Node.js
  • License: Proprietary
  • Other considerations: Tied to Google Cloud, requires internet connectivity
  • Country: 🇺🇸

Supabase (Supabase)
  • Type: Online
  • Offline Support: Limited
  • Database: Cloud DB: PostgreSQL
  • Type of DB: Relational document store
  • OS/Platforms: Primarily a cloud solution
  • Languages: JavaScript/TypeScript, Flutter/Dart, C#, Swift, Kotlin, Python
  • License: Apache License 2.0
  • Other considerations: Supabase is mainly designed as a SaaS, for use cases with constant connectivity
  • Country: 🇸🇬

ObjectBox Sync (ObjectBox)
  • Type: Offline-first
  • Decentralized Sync: In development
  • Database: ObjectBox
  • Type of DB: Object-oriented embedded NoSQL DB
  • OS/Platforms: Android, Linux, Ubuntu, Windows, macOS, iOS, QNX, Raspbian, any POSIX system, any cloud (e.g. AWS/Azure/Google Cloud), bare metal
  • Languages: C, C++, Java, Kotlin, Swift, Go, Flutter/Dart, Python
  • License: DB: open-source bindings (Apache 2.0), closed core
  • Other considerations: Highly efficient (saves CPU, memory, battery, and bandwidth); fully offline-first, supports on-premise settings, 100% cloud optional
  • Country: 🇩🇪

Couchbase (Lite + Couchbase Sync Gateway) (Couchbase; a merger of CouchOne and Membase)
  • Type: Online
  • Offline Support / Self-hosted Sync: The CE Sync is a bare minimum and typically not usable; self-hosted Sync with Couchbase Servers is available as part of their Enterprise offering
  • Decentralized Sync: ✅ as part of the Enterprise offering; gets expensive quickly
  • Database: Edge: Couchbase Lite; Server: Couchbase
  • Type of DB: Multi-model NoSQL document-oriented database
  • OS/Platforms: Couchbase Lite: iOS, Android, macOS, Linux, Windows, Raspbian and Raspberry Pi OS; Couchbase Sync Gateway: Red Hat Enterprise Linux (RHEL) 9.x, Alma Linux 9.x, Rocky Linux 9.x, Ubuntu, Debian (11.x, 12.x), Windows Server 2022
  • Languages: .NET, C, Go, Java, JavaScript, Kotlin, PHP, Python, Ruby, Scala
  • License: Couchbase Lite is available under different licenses; the open-source Community Edition does not get regular updates and misses many features, especially around Sync (e.g. it does not include Delta Sync, making it slow and expensive)
  • Other considerations: Typically requires Couchbase servers, quickly gets expensive
  • Country: 🇺🇸

MongoDB Realm + Atlas Device Sync (MongoDB; Realm was acquired by MongoDB in 2019)
  • Type: Offline-first
  • Self-hosted Sync: Cloud-based sync only
  • Database: Cloud: MongoDB; Edge: Mongo Realm DB
  • Type of DB: MongoDB: NoSQL document store; Realm DB: embedded NoSQL DB
  • OS/Platforms: MongoDB: Linux, OS X, Solaris, Windows; Mongo Realm DB: Android, iOS
  • Languages: More than 20 languages, e.g. Java, C, C#, C++
  • License: MongoDB changed its license from open source (AGPL) to MongoDB Inc.’s Server Side Public License (SSPL) in 2018; Realm DB is open source under the Apache 2.0 License; the Data Sync was proprietary
  • Other considerations: Deprecated (Sep 2024); end-of-life in Sep 2025; ObjectBox offers a migration option
  • Country: 🇺🇸

While SQLite does not offer a sync solution out of the box, various vendors have built sync solutions on top of SQLite or integrated with it to provide offline persistence.

Key Considerations for Choosing a Data Sync Solution

When selecting a synchronization solution, consider:

  1. Connectivity Requirements: Will the application function in offline environments; how will it work with flaky network conditions; how is the user experience when there is intermittent connectivity?
  2. Data Privacy & Security: How critical is it to ensure sensitive data remains local? Data compliance? How important is it that data is not breached?
  3. Scalability and Performance: What are the expected data loads and network constraints? How important is speed for the users? Is there any need to guarantee QoS parameters? How much will the cloud and networking costs be?
  4. Conflict Resolution: How does the solution handle data conflicts?
  5. Delta Sync: Does the solution always synchronize all data or only the changes (the data delta)? Can a subset of data be synchronized? How efficient is the Sync protocol (affecting costs and speed)? See the sketch below for an illustration of the difference.
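
To illustrate the difference point 5 makes, here is a small, hypothetical Python sketch of delta sync: instead of re-sending the whole dataset on every sync, only records whose version advanced since the last successful sync are transferred. All names are illustrative and not any vendor's API.

```python
from typing import Any, Dict

# Local records, each carrying a version counter that is bumped on every edit.
local_db: Dict[str, Dict[str, Any]] = {
    "order-1": {"status": "paid",    "version": 4},
    "order-2": {"status": "pending", "version": 7},
    "order-3": {"status": "shipped", "version": 2},
}

def full_sync(db):
    """Naive approach: always ship everything (slow and costly on mobile networks)."""
    return dict(db)

def delta_sync(db, last_synced_versions: Dict[str, int]):
    """Delta approach: ship only records whose version advanced since the last sync."""
    return {
        key: record
        for key, record in db.items()
        if record["version"] > last_synced_versions.get(key, 0)
    }

# The server already saw order-1 at version 4 and order-3 at version 2.
seen = {"order-1": 4, "order-3": 2}
print(full_sync(local_db))          # all 3 records, every time
print(delta_sync(local_db, seen))   # only order-2 is transferred
```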

The Shift Towards Edge Computing

The trend toward Edge Computing highlights the growing preference for offline-first solutions. By processing and storing data closer to its source, Edge Computing reduces cloud dependency, enhances privacy, and improves efficiency. Data synchronization plays an important role in this shift, ensuring seamless operation across decentralized networks.

Offline and online synchronization solutions each have their merits, but the rise of edge computing and data privacy concerns has propelled offline Sync to the forefront. Developers must assess their application’s unique requirements to select the most appropriate synchronization method. As the industry evolves, hybrid and offline-first solutions are going to dominate, offering the best balance of functionality, privacy, and performance.

IoT, Edge Computing, and Digitalization in Healthcare

The healthcare industry is experiencing an unprecedented surge in data generation, responsible for approximately 30% of the world’s total data volume. This vast and fast-growing amount of health data is the primary force behind the digital transformation of healthcare. Only through the adoption of advanced technologies can healthcare providers manage, analyze, and secure this information. While COVID-19 accelerated this shift, contributing to the explosion of health data, the ongoing demand for real-time patient insights, personalized treatment, and improved operational efficiency continues to drive the sector toward digitalization and AI. Simultaneously, growing data privacy concerns, increasing costs, and heavier regulatory requirements are challenging the use of cloud computing to manage this data. A megashift to Edge Computing and Edge AI is addressing these challenges, enabling a faster, safer, and more reliable digital healthcare infrastructure.

The digital healthcare market in 2024 and beyond: a high-speed revolution

Prior to COVID, growth in digital health adoption had stalled. However, digitalization in the healthcare industry has skyrocketed since the start of the pandemic. Reflecting this market turnaround, followed by the rise of advanced digital tools like AI, recent years have been record-breaking for investments in healthcare companies – a trend that analysts expect to continue, predicting rapid growth across digital healthcare market sectors:

Healthcare market overview

Drivers of growth and change in digital healthcare

 

Digital Healthcare Growth Driver 1: Growing Medical IoT Device Adoption

The Internet of Medical Things (IoMT) comprises hardware devices designed to process, collect, and/or transmit health-related data via a network. There will be a projected 40 billion IoT devices by 2030, and IoMT devices already accounted for 30% of the entire IoT device market in 2020. According to Gartner, 79% of healthcare providers are already using IoT in their processes, e.g. remote health monitoring via wearables, ingestible sensors, disinfection robots, or closed-loop insulin delivery systems. IoMT devices increase safety and efficiency in healthcare, and future technical applications, like smart ambulances or augmented reality glasses that assist during surgery, are limitless.

IoMT devices accounted for 30% of the IoT device market

Digital Healthcare Growth Driver 2: The Explosion of Health Data

Growing IoMT adoption is subsequently driving a rapid increase in the amount of collected health data. According to an RBC study, the healthcare industry is now responsible for approximately 30% of the world’s total data volume. By 2025, healthcare data is expected to continue growing at a 36% CAGR, outpacing data volumes from sectors like manufacturing, financial services, and media. Big health data sets are being used to revolutionize healthcare, bringing new insights into fields like oncology, and improving patient experience, care, and diagnosis. According to the Journal of Big Data: “taken together, big data will facilitate healthcare by introducing prediction of epidemics (in relation to population health), providing early warnings of disease conditions, and helping in the discovery of novel biomarkers and intelligent therapeutic intervention strategies for an improved quality of life.” In fact, the healthcare analytics market is projected to reach $129.7 billion by 2028, growing at a 23.5% CAGR. This growth is driven by the need for real-time data processing, personalized medicine, and predictive analytics to manage chronic conditions and optimize hospital operations.

Healthcare data occupies ~30% of the world’s total data volume

Digital Healthcare Growth Driver 3: Artificial Intelligence

The increase in healthcare data opens up new opportunities and challenges to apply advanced technologies like big data analytics and artificial intelligence (AI) to improve healthcare delivery, patient outcomes, and operational efficiency. For instance, AI is being used to analyze medical imaging data, identify patterns in electronic health records, and predict patient outcomes, contributing to improved patient care. By 2026, AI is projected to save the global healthcare industry over $150 billion annually, by answering “20 percent of unmet clinical demand.” 

Generative AI, which includes Large Language Models (LLMs) such as GPT-4, is playing a crucial role in this transformation. According to a McKinsey survey, 70% of surveyed healthcare organizations are either currently testing or actively using generative AI tools for both clinical and administrative applications. This is unsurprising, as LLM chatbots can reduce waiting times by 80% in healthcare facilities. In diagnostics, LLMs are being applied to interpret electronic health records and assist with predictive analytics, leading to a reduction in hospital readmissions by up to 22%. Additionally, LLMs have helped improve medication adherence rates by 60%, demonstrating their impact on patient care quality.

70% of healthcare organizations plan or use AI

Digital Healthcare Growth Driver 4: Edge Computing and Edge AI

With the rise of IoMT and the boost in healthcare data, Edge Computing is becoming a key driver of healthcare digitalization. The majority of IoMT devices (55.3%) currently operate on-premise rather than in the cloud, ensuring faster, more secure real-time data processing. This shift to Edge Computing enhances data privacy and reduces latency, which is critical in life-critical medical applications. Additionally, the development of Small Language Models (SLMs) for on-device AI (Edge AI) allows healthcare providers to deploy AI-powered solutions directly on medical devices. This helps with tasks like remote monitoring and diagnostics without the need for cloud connectivity, which is particularly beneficial in environments with limited internet access.

As IoMT continues to evolve, Edge Computing will play an essential role in supporting healthcare’s increasing demand for real-time data processing. By 2025, it is projected that 75% of healthcare data will be generated at the Edge, further driving the adoption of these technologies across the industry.

75% of healthcare data will be generated at the Edge in 2025

Digital Healthcare Growth Driver 5: Underlying Social Megatrends

The global population is growing; global life expectancy is rising. Accordingly, by 2030 the world needs more energy, more food, and more water. Explosive population growth in some areas versus declines in others contributes to shifts in economic power, resource allocation, societal habits, and norms. Many Western populations are aging rapidly. For example, in the USA, the number of people aged 65+ is expected to nearly double to 72.1 million by 2034. Because the working-age population is shrinking at the same time, elder care is a growing challenge, and researchers are looking to robots to help solve it.

Health megatrends focus not only on the prevention of disease, but also on the perception of wellness, and new forms of living and working. Over this decade more resources will be spent on health and longevity, leading to artificially and technologically enhanced human capabilities. More lifestyle-related disorders and diseases are expected to emerge in the future.

A focus on health and longevity will lead to artificial & tech-enhanced human capabilities

The Challenges of Healthtech

Along with more data, more devices, and more opportunities come more responsibility and higher costs for healthcare providers.

Data Volume and Availability

With the growing number of digital healthcare and medical devices, a dazzling volume of health data is created and collected across many different channels. It will be vital for the healthcare industry to reliably synchronize and combine data across devices and channels. Due to the sheer volume, reliable collection and analysis of this data is a major challenge. After it has been processed, data needs to be available on demand, e.g. in emergency situations that require reliable, fast, available data.

Reliability, Privacy, and Data Security

Reliability, privacy, and data security are extremely important in health technology; 70% of healthcare consumers are concerned about data privacy. Data use is often governed by increasingly strict national regulations, e.g. HIPAA (USA) and/or GDPR (Europe). With the number of cyber-attacks in the healthcare industry on the rise, healthcare professionals must be even more diligent about the storage and processing of data. In addition, healthtech must be extremely well vetted; failures can cost lives – typical “banana products”, which ripen with the customers, are a no-go.

IT Costs

Medical devices contribute a large portion of healthcare budgets. However, as data volumes grow, data costs will also become a relevant cost factor. Sending all health data to the cloud to be stored and processed is not only slow and insecure, it is also extremely costly. To curb mobile network and cloud costs, much health data can be stored and processed at the edge, on local devices, with only necessary data being synced to a cloud or central server. By building resilient data architecture now, healthcare providers (e.g. hospitals, clinics, research centers) can avoid future costs and headaches.

Edge Computing is Integral to Data-driven Healthcare Ecosystems

With big data volumes, industries like healthcare need to seek out resilient information architectures to accommodate growing volumes of data and numbers of devices. To build resilient and secure digital infrastructure, healthcare providers will need to utilize both cloud computing and edge computing models, exploiting the strengths of both systems.

Cloud & Edge: What’s the Difference?

Cloud Computing

Information is sent to a centralized data center to be stored, processed, and sent back to the edge. This causes latency and a higher risk of data breaches. Centralized data is useful for large-scale data analysis and the distribution of data between, e.g., hospitals and doctors’ offices.

Edge Computing

Data is stored and processed on or near the device it was created on. Edge Computing works without an internet connection, and thus is reliable and robust in any scenario. It is ideal for time-sensitive data (real-time) and improved data privacy and security.

Edge Computing contributes to resilient and secure healthcare data systems

Transforming Healthcare with Edge Computing

Use Case: Secure and Up-to-Date Digital Record Keeping in Doctors’ Offices

For private doctors’ offices, embracing digitalization comes with different hurdles than larger healthcare providers. Often, offices do not keep a dedicated IT professional on staff, and must find digital solutions that serve their needs, while allowing them to comply with ever-increasing data regulations. As an industry used to legislative challenges, GPs know that sensitive patient data must be handled with care.

Solution providers serving private doctors’ offices are using edge databases to help keep patient data secure. An edge database allows private GPs to collect and store digital data locally. In newer practice setups, doctors use tablets, like iPads, throughout their practice to collect and track patient data, take notes, and improve flexibility. This patient data should not be sent to or stored in a central cloud server, as this increases the risk of data breaches and opens up regulatory challenges. In a cloud-centered setup, the doctor also always needs to rely on a constant internet connection being available, making this also a matter of data availability.

Accordingly, the patient data is stored locally, on the iPads, accessible only by the doctor treating the patient. Some of the data is synchronized to a local, in-office computer at the front desk for billing and administration. Other data is only synchronized for backup purposes and encrypted. Such a setup also allows synchronizing data between iPads, enabling doctors to share data in an instant.

Use Case: Connected Ambulances – Real-Time Edge Data from Home to Hospital

Between the incident location and the hospital, a lot can happen. What if everything that happened in the ambulance was reliably and securely tracked and shared with the hospital, seamlessly? There are already trials using 5G technology to stream real-time data to hospitals, allowing ambulance medics to access patient data while in transit. Looking to the future, Edge Computing will enable digital healthcare applications to function reliably and in real time anywhere and anytime – in a moving ambulance, in a tunnel, or in a remote area – enabling ambulance teams and doctors to give the best treatment instantly on-site, while seamlessly synchronizing the relevant information to the relevant healthcare units, e.g. the next hospital, whenever bandwidth and networks are available. This will decrease friction, enhance operational processes, and improve time to treatment.

Digital Healthcare: Key Take-Aways

Digital healthcare is a fast-growing industry; more data and devices alongside new tech are empowering rapid advances. Finding ways to utilize growing healthcare data while ensuring data privacy, security, and availability is a key challenge ahead for healthcare providers. The healthcare industry must find the right mix of technologies to manage this data, utilizing the cloud for global data exchange and big data analytics, while embracing Edge Computing for its speed, security, and resilience.

Underutilized data plays a major role in health-tech innovation; data is the lifeline of future healthcare offerings. However, there is still much work to be done to improve the collection, management, and analysis of this data.

It’s all about data availability. Whether in emergency situations or simply to provide a smooth patient experience, data needs to be fast, reliable, and available: when you need it, where you need it.

Edge computing alongside other developing technologies like 5G or Artificial Intelligence will empower a new and powerful digital healthcare ecosystem.

ObjectBox provides edge data software to empower scalable and resilient digital innovation on the edge in healthcare, automotive, and manufacturing. ObjectBox’ edge database and data synchronization solution is 10x faster than any alternative, and empowers applications that respond in real time (low latency), work offline without a connection to the cloud, reduce energy needs, keep data secure, and lower mobile network and cloud costs.

Retrieval Augmented Generation (RAG) with vector databases: Expanding AI Capabilities

What is RAG?

Retrieval Augmented Generation (RAG) is a technique to enhance the intelligence of large language models (LLMs) with additional knowledge, such as reliable facts from specific sources, private or personal information not available to others, or just fresh news, to improve their answers. Typically, in RAG, the additional knowledge is provided to the model from a vector database. For example, you can add internal data from your company, the latest news, or data from your personal devices to get responses that use your context. This way, the model can truly help you like an expert instead of giving generalized answers. This technique also reduces hallucinations.

Why RAG?

Let’s take a look at the key benefits that RAG in general offers:

  • Customization and Adaptation: RAG helps LLMs to tailor responses to specific domains or use cases by using vector databases to store and retrieve domain-specific information. It turns general intelligence into expert intelligence.
  • Contextual Relevance: By incorporating information retrieved from a large corpus of text, RAG models can generate contextually relevant responses. It improves the quality of generated responses compared to traditional generation models.
  • Accuracy and diversity: Incorporation of external information also helps to generate more informative and accurate responses and keep LLM up-to-date. This also helps to avoid repetitive or generic responses and allows for more diverse and interesting conversations.
  • Cost-effective implementation: RAG requires less task-specific training data compared to fine-tuning the foundation models. When we compare retrieval augmented generation vs fine-tuning, RAG’s ability to use external knowledge stands out. While fine-tuning requires lots of labeled data, RAG can rely on external sources. This can be particularly beneficial in scenarios where annotated training data is limited or expensive to obtain, thus, providing a cost-effective implementation. 
  • Transparency: RAG models provide transparency in their responses by explicitly indicating the source of retrieved information. This allows users to understand how the model arrived at its response and helps enhance trust in the generated output.

Therefore, RAG is suitable for applications where access to a vast amount of specialized data is necessary. For example, a customer support bot that pulls details from FAQs and generates coherent, conversational responses. Another example is an email drafting tool that fetches information about recent meetings and generates a personalized summary.

How retrieval augmented generation works

Let’s discuss the mechanics of how RAG operates with a vector database, covering its main stages from dataset creation to response generation (see the figure; a code sketch follows the list).

Retrieval augmented generation diagram


  • DB creation: Creation of the external dataset
    Before real use, the vector database has to be created. New data that lies outside the LLM’s training dataset (e.g. up-to-date or domain-specific information) is identified and added to the dataset. This dataset is then turned into vector embeddings via an AI model (an embedding model) and stored in the vector database.
  • DB in use: Retrieval of relevant information
    Once a query comes in, it is also turned into a vector / embedding, which is then used to retrieve the most relevant results from the database. To achieve this, RAG uses semantic search techniques, also known as vector search, to understand the user’s query and/or context, retrieving contextually relevant information from a large dataset. Vector search goes beyond keyword matching and focuses on semantic relationships, improving the quality of the retrieved information and the overall performance of the RAG system in generating contextually relevant responses.
  • DB in use: Augmentation
    At this stage, the user’s query is augmented by adding the relevant data retrieved in the previous stage. Often, only the top responses from the vector search are considered relevant data. Many databases have additional filtering techniques in place here.
  • Generation
    The augmented query is sent to the LLM to generate an accurate answer.
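
The following is a minimal Python sketch of these four stages. It assumes the sentence-transformers library for embeddings, uses a plain in-memory array instead of a real vector database, and stands in a hypothetical ask_llm function for the actual LLM call; it illustrates the flow, not a production setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1) DB creation: embed the external knowledge and keep vectors next to the text.
documents = [
    "Our support hotline is open Monday to Friday, 9:00-17:00 CET.",
    "Premium customers get a replacement device within 24 hours.",
    "The warranty period for all products is 36 months.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2):
    # 2) Retrieval: embed the query and rank documents by cosine similarity
    #    (vectors are normalized, so a dot product equals the cosine).
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    top = np.argsort(-scores)[:k]
    return [documents[i] for i in top]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (local model or API) -- hypothetical.
    return f"[LLM would answer based on a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    context = retrieve(question)
    # 3) Augmentation: prepend the retrieved snippets to the user's question.
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    # 4) Generation: the LLM produces the final answer.
    return ask_llm(prompt)

print(rag_answer("How long is the warranty?"))
```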

The Role of Long Context Windows

The rise of new LLMs with long context windows (1+ million tokens), like Gemini 1.5, has raised the question of whether long context windows will replace RAG. A long context window enables users to directly incorporate huge amounts of data into a query, giving the LLM more context to improve its answers.

Long context windows and RAG each have pros and cons, and neither will kill the other. Rather than being mutually exclusive, they can complement each other. Large context windows can enhance RAG applications by allowing more retrieved information to be passed to the model and accommodating vast amounts of data. However, a model’s ability to take a long context does not mean that it can efficiently leverage all the information: if the relevant information is located in the middle of the context window, the LLM’s ability to recall it is worse than for information located at the beginning. To use RAG with a long context window effectively, reranking (e.g. with a Cross-Encoder) should be used. The reranking model first calculates a matching score between a given query and the vectors in the database (e.g. representing documents), and then rearranges the vector search results so that the most relevant ones are prioritized.
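
As an illustration of that reranking step, the sketch below assumes the sentence-transformers CrossEncoder with a pretrained MS MARCO model; in a real pipeline, the candidate list would come from the vector search and could be much longer.

```python
from sentence_transformers import CrossEncoder  # assumed to be installed

query = "How do I reset the device to factory settings?"
candidates = [  # e.g. the top results returned by a vector search
    "Factory reset: hold the power button for ten seconds, then choose 'Reset'.",
    "The device ships with a two-year warranty.",
    "To pair the device via Bluetooth, open the settings menu.",
]

# The cross-encoder scores each (query, candidate) pair jointly, which is more
# precise than comparing independent embeddings -- ideal for reordering a short list.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, c) for c in candidates])

# Put the most relevant candidates first before handing them to the LLM.
reranked = [c for _, c in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```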

Future Directions of RAG

While RAG offers numerous benefits, there are still opportunities for improvement. Researchers are exploring ways to enhance RAG by combining it with other techniques. These include fine-tuning (RAFT) or the long context window (in combination with reranking). Another direction of research is expanding RAG capabilities by advancing data handling (including multimodal data), evaluation methodologies, and scalability. Finally, RAG is also affected by the new advances in optimizing LLMs to run locally on restricted devices (mobile, IoT), along with the emergence of the first on-device vector database. Now, RAG can be performed directly on your mobile device, prioritizing privacy, low latency, and offline capabilities.

Vector search: making sense of search queries

Today, finding the most valuable information in your search is more complicated than finding a needle in a haystack. Traditional search engines match keywords and favor SEO-optimized content, but what if there was a way for search engines to truly understand the meaning behind our queries? Enter vector search – a powerful technology that is transforming how we navigate information, not just for users, but also for applications performing background searches. In this article, we will discuss what vector search is and how it works.

What is vector search and why should you care?

Example Results with a traditional search for “Simple Fruit Cake”.

Vector search, which is also known as semantic search, is a technology that improves search accuracy by understanding the meaning (semantics) of the data and relations between its parts. Unlike traditional search, vector search efficiently handles synonyms, typos, ambiguous language, and broad or fuzzy queries. This is because it focuses on meaning, not just keywords.

Imagine that you are searching for a dessert to cook during the weekend. In a traditional search engine, the “simple fruit cake” query will reveal only websites that include these keywords. However, a vector search engine is able to provide results like “apple pie in 20 minutes” or “easy summer desserts”, which capture the essence of the query and align with your desire for a straightforward dessert option, providing more valuable results to you. 

At its core, vector search uses AI language models, like GPT, to transform data into mathematical vectors, also known as vector embeddings.

What is a vector embedding?

2D Vector Space Representation. “Easy apple pie” is close to “simple fruit cake” as they are both simple and have fruit as an ingredient. “Easy chocolate mousse” shares simplicity but does not contain fruit. “Fancy plum cake” has fruit but is not simple to make. And “extravagant chocolate mousse” does not share either simplicity or fruit as an ingredient. Thus, it is the farthest from “simple fruit cake”.

A vector or vector embedding is a numerical representation of any kind of unstructured data (e.g. texts, images, videos, audio). It captures its meaning while being easy and efficient to compute with. Think of it like this: imagine you have a collection of cake recipes. You can convert each recipe into a vector embedding, which is like a unique numerical code that represents the recipe’s characteristics (ingredients, cooking methods, flavors, etc.).

Once all the recipes are encoded into embeddings, we can perform a similarity search. This means we can compare the vectors to see how similar the recipes are. For example, the vector for an easy apple pie recipe would be close to the vector for a simple fruit cake recipe because they share similar characteristics (e.g. simplicity, fruitiness). On the other hand, the vector for an extravagant chocolate mousse cake would be farther away because it involves different ingredients and methods.
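
As a small, hedged illustration of this idea (assuming the sentence-transformers library; any embedding model would do), a few recipe titles can be embedded and compared directly:

```python
from sentence_transformers import SentenceTransformer, util  # assumed to be installed

model = SentenceTransformer("all-MiniLM-L6-v2")
recipes = ["easy apple pie", "simple fruit cake", "extravagant chocolate mousse cake"]
vectors = model.encode(recipes)

# Cosine similarity: values closer to 1 mean the recipes are semantically closer.
print(util.cos_sim(vectors[0], vectors[1]))  # apple pie vs. fruit cake -> relatively high
print(util.cos_sim(vectors[0], vectors[2]))  # apple pie vs. chocolate mousse -> lower
```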

How to compare vectors?

Vector similarity is a measure of how similar two vectors are (see ep. 4 of ObjectBox Bites). Three common ways to compare vectors are Jaccard Similarity, Cosine Similarity, and L2 Distance (also known as Euclidean distance). Jaccard Similarity calculates the ratio of elements that are common to both vectors to the total number of elements in both vectors. Cosine Similarity calculates the cosine of the angle between two vectors. L2 Distance calculates the straight-line distance between two points in space represented by the vectors; this is the most frequently used method in AI applications. It is important to note that the choice of vector comparison method does not affect the mechanics of similarity search.
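
All three measures can be written in a few lines of NumPy (a sketch; note that Jaccard similarity in this form applies to sets or binary vectors rather than dense embeddings):

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 2.0, 1.0])

# Jaccard similarity on binary vectors: shared non-zero positions / all non-zero positions.
a_bin, b_bin = a > 0, b > 0
jaccard = np.logical_and(a_bin, b_bin).sum() / np.logical_or(a_bin, b_bin).sum()

# Cosine similarity: cosine of the angle between the two vectors.
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# L2 (Euclidean) distance: straight-line distance between the two points.
l2 = np.linalg.norm(a - b)

print(jaccard, cosine, l2)
```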

What is a vector database and how is it related to vector search?

A vector database is a specialized database designed to store, manage, and search vectors efficiently. This efficiency is crucial for handling large datasets and performing fast vector similarity searches. Also, with a vector database, the knowledge of AI models can be improved, adapted, and updated. Therefore, today, most AI apps use a vector database.

Imagine having an AI that knows your habits, your preferences, your health data, maybe even what’s in your fridge, and can use this knowledge to suggest recipes that fit your lifestyle and individual preferences. A standard AI model doesn’t have that data and wouldn’t learn that way, but with a vector database it can. Now, when you search for a “fruit cake recipe”, using this data, it can suggest a “simple fruit cake” without sugar if you usually prefer quick, easy, and healthy recipes, or a “fancy plum cake” if you enjoy more challenging baking projects and don’t like apples. Or, a vegan option, if you have neither milk nor eggs left in the fridge.

This technique is called Retrieval-Augmented Generation (RAG). It enhances the capabilities of LLMs with additional data (e.g. personal data, company data, fresh data) stored in a vector database.

When you query a vector database, it uses the query’s vector representation to find the nearest neighbors in the database.

Nearest Neighbor Search

How do we find the nearest neighbor to our query vector? The most straightforward approach is a brute-force search. It calculates the distance between our query vector and every other vector in the database, one by one. Any of the metrics discussed in “How to compare vectors” can be used. However, this brute-force approach has a time complexity of O(N*d), where N is the number of vectors and d is the dimensionality. This becomes computationally expensive for large datasets.
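
A brute-force search is only a few lines of NumPy, which also makes the O(N*d) cost visible: every query has to touch every stored vector (illustrative sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(42)
N, d = 100_000, 128          # N stored vectors, d dimensions
database = rng.normal(size=(N, d)).astype(np.float32)
query = rng.normal(size=d).astype(np.float32)

# Compute the L2 distance from the query to every vector (O(N*d) work per query).
distances = np.linalg.norm(database - query, axis=1)
nearest = np.argsort(distances)[:5]   # indices of the 5 exact nearest neighbors
print(nearest, distances[nearest])
```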

Since exact nearest neighbor search can be slow for massive datasets, we often turn to approximate nearest neighbor (ANN) algorithms. These algorithms prioritize efficiency by finding neighbors that are very close (but not necessarily the absolute closest) to the query vector, significantly reducing search time. 

Continuing with the cooking assistant app example, imagine you’re searching for a “fruit cake recipe”. Assume that in our database, the real closest recipe is “simple apple pie”. With a massive database, an exact nearest neighbor search might take a long time to find the perfect match. However, an ANN algorithm can quickly find a recipe that is very similar to what you’re looking for, such as a “simple fruit cake” or a “basic apple pie”, even if it might not be the exact closest match. This efficiency ensures you get relevant and useful recipe suggestions promptly, enhancing your overall experience without a noticeable compromise in quality.

Approximate Nearest Neighbour Search

Now, let’s delve into the world of Approximate Nearest Neighbor (ANN) algorithms. The way you search for nearest neighbors depends on how the data is stored in the vector database. One of the earliest nearest-neighbor data structures, introduced in 1975, is the k-d tree. k-d trees work by recursively splitting the data space using hyperplanes, making the search process more efficient (see ep. 5 of ObjectBox Bites). However, k-d trees, like many exact nearest neighbor algorithms, suffer from the curse of dimensionality. This means that as the number of dimensions (features) in your data increases, the distance between points becomes less meaningful, making searching very slow in high-dimensional spaces like those used in vector databases.

For instance, consider simple fruit recipes. With a few features, such as cooking time and number of ingredients, finding similar recipes would be relatively straightforward. However, if we also include many other features like sweetness level, calorie count, fruit type, all specific ingredients, preparation complexity, and user ratings, the number of dimensions increases significantly. In such high-dimensional spaces, the traditional k-d tree method becomes inefficient because the distances between points (recipes) become less distinct and meaningful.

To overcome this challenge, ANN algorithms leverage two main approaches: indexing methods and sketching methods. Indexing methods work by creating a hierarchical data structure that allows for faster exploration of the search space. Imagine a well-organized library with categorized sections instead of just randomly placed books. Sketching methods, on the other hand, don’t search the entire dataset directly. Instead, they create compressed versions (sketches) of the data that are faster to compare with the query vector. This reduces the search time significantly. Often, these two approaches are combined for optimal performance.


A popular example of an ANN search implementation for high-dimensional data is the Hierarchical Navigable Small World (HNSW) algorithm (e.g. implemented in Azure AI). HNSW relies on graph-based indexing to efficiently navigate the data space and find nearest neighbors. For more details, watch episodes 6, 7, and 8 of the ObjectBox Bites miniseries, where we describe the fundamentals of HNSW.
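
For a feel of how this looks in practice, the sketch below uses the open-source hnswlib package (one common HNSW implementation, assumed to be installed) to build an approximate index over random data; the parameter names follow that library.

```python
import numpy as np
import hnswlib  # assumed available: pip install hnswlib

rng = np.random.default_rng(0)
N, d = 100_000, 128
data = rng.normal(size=(N, d)).astype(np.float32)

# Build the graph-based HNSW index; M and ef_construction trade build time for recall.
index = hnswlib.Index(space="l2", dim=d)
index.init_index(max_elements=N, ef_construction=200, M=16)
index.add_items(data, np.arange(N))

index.set_ef(50)  # higher ef -> better recall, slower queries
labels, distances = index.knn_query(data[:1], k=5)  # approximate 5 nearest neighbors
print(labels, distances)
```

Compared to the brute-force example above, queries against such an index touch only a small fraction of the stored vectors.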

Take-away notes

To sum up, vector search offers a significant leap forward in how we search for information. By understanding the meaning and relationships behind data, it delivers more relevant and accurate results, even for unstructured data and complex queries. This technology has the potential to revolutionize various fields, from enhancing search engines to empowering AI applications. As vector search continues to evolve, we can expect even more exciting possibilities for navigating the ever-growing ocean of information and unlocking its full potential. This includes operating on data directly on the devices it was created on, reducing cloud costs, eliminating reliance on an internet connection, and making it possible to use your private data without it ever being shared (100% private). If you’re interested in other AI and vector database-related topics, check out the ObjectBox mini-series. Stay tuned for more articles in the future.

Evolution of search: traditional vs vector search

Introduction

In today’s digital landscape, searching for information is integral to our daily lives, whether for education, research, work, or shopping. However, as the volume and complexity of data keep growing, traditional search methods face more and more challenges in providing accurate and relevant results. That’s where vector search comes in. We’re already seeing Google change its search engine to incorporate vector search (RankBrain, BERT, Neural Matching), and we can expect even greater incorporation of AI tools to improve the search experience. Let’s explore the differences between traditional (keyword) search and vector search to understand how these technologies are shaping our search experiences, and how this impacts the discoverability of any content you might produce.

Traditional (keyword) search

Traditional search retrieves results by matching keywords from the user query exactly against the data. For example, searching for “programming languages” with traditional search will list every source containing those words. A more advanced version can also incorporate additional rules to enhance search results (a toy scoring sketch follows the list), such as:

  • keyword frequency (how often the term “programming languages” is used within the result text),
  • the presence of related terms (e.g. “Java”, “Python”, “C++” versus “cooking”, “gardening”),
  • or location (results closer to your location are favored).
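
Here is the toy scoring sketch mentioned above, just to make the mechanics concrete (purely illustrative; production engines use inverted indexes and ranking functions such as BM25):

```python
import re
from collections import Counter

docs = [
    "Python and Java are popular programming languages.",
    "Gardening tips: which vegetables to plant in spring.",
    "A beginner's guide to programming languages and coding tutorials.",
]

def keyword_score(query: str, doc: str) -> int:
    """Count how often the query terms literally appear in the document."""
    terms = re.findall(r"\w+", query.lower())
    words = Counter(re.findall(r"\w+", doc.lower()))
    return sum(words[t] for t in terms)

query = "programming languages"
ranked = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)
print(ranked[0])   # exact keyword matches win; a synonym like "coding" alone would score 0
```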

While this approach has served us well, it struggles with ambiguous language and synonyms, as well as the impact of SEO strategies, often resulting in less accurate or less valuable search results. This can be especially frustrating for businesses that are trying to get their content seen by the right people. For example, a business that publishes a blog post about “sustainable fashion tips” might miss out on potential customers who are searching for “eco-friendly clothing recommendations” or “green clothing ideas”, simply because their keywords don’t exactly match.

Vector (semantic) search

2D Vector Space Representation. In this space, “Python” and “Java” are close to each other as well as to the “Programming language” query we are searching for because they are similar (they share high values in their features).


On the other hand, vector search takes a different approach by seeking out related objects that share similar characteristics or semantics. You can think of it as finding results based on meaning or understanding rather than just exact wording. For example, searching for “programming languages” with vector search will not only find sources mentioning those exact words but also identify specific languages like “Python” or “Java” as well as related concepts such as “coding tutorials” or “development frameworks”.

To do a vector search, first of all, the content, such as texts, images, audio files, or videos, needs to be represented as vectors/embeddings (also often called vector embeddings) by AI models. These embeddings represent data in a multidimensional vector space. Vectors capture the essence or semantics of the data they represent while remaining computationally efficient.

Once these vector representations are generated, they are basically sets of numbers, and therefore easy to compute with. For instance, instead of searching for a specific word in text, we aim to find the closest vector (from the text embeddings) to the query vector (representing the word we’re searching for). This process relies on well-established vector computing methods, such as calculating the distance between vectors or minimizing the angle between them (Cosine similarity).

Comparison

Let’s now compare different aspects of searching to understand the main differences between traditional search and vector search (also summarized in Table). 

  • Search Approach
    Traditional search relies on matching keywords directly from the user query to the content. Vector search uses vector embeddings to capture the semantics of the data and performs a meaning-based search.
  • Ambiguity handling
    Vector search therefore shines when it comes to handling ambiguity. It is superior for handling synonyms, ambiguous language, and broad or fuzzy queries compared to traditional search. This also directly influences the relevance of the search results.
  • Search relevance calculation
    The metrics used for relevance calculations differ. Traditional search uses term frequency-inverse document frequency (TF-IDF) and BM25, while vector search uses Jaccard Similarity, Cosine Similarity, and L2 (Euclidean) Distance.
  • Speed and Implementation
    Traditional search is easy to implement, straightforward to use, and fast for simple queries. Vector search may be slower for simple queries and more complicated to implement, especially for huge datasets. However, approximate nearest neighbor (ANN) techniques speed up the process significantly.
  • Scalability
    The continuous expansion of content challenges the scalability of traditional search, while scalability is one of vector search’s advantages.
  • Cost
    While traditional search may have lower computational requirements, the superior performance and accuracy of vector search often justify the investment in additional computing power. Furthermore, the computational costs of vector search can be significantly reduced with the use of ANN.

Comparison table between keyword search and vector search

Conclusions

In summary, both traditional search and vector search offer distinct advantages and drawbacks. Vector search excels in handling ambiguity, correcting typos, enhancing relevance, and managing extensive datasets. Traditional search remains advantageous for straightforward queries, exact matches, or smaller datasets. Historically, limited computational resources, particularly for on-device computation (i.e. Edge Computing), favored traditional search. However, the landscape is evolving rapidly with the introduction of the first edge vector database solution by ObjectBox. This innovation promises to revolutionize the scenario by optimizing vector search for devices with constrained resources, extending the benefits of semantic search to the Edge.