
ObjectBox EdgeX v1.1 – database with ARM32 support

With EdgeX Foundry just reaching v1.1, we continue to provide ObjectBox as an embedded high-performance database backend so you can start using ObjectBox EdgeX v1.1 right away. If you need data reliability and high-speed database operations, ObjectBox is for you. Additionally, starting with ObjectBox EdgeX 1.1, you can use it on 32-bit ARM devices.

Combining the speed and size advantages of ObjectBox on the EdgeX platform, we empower companies to analyze more data locally on the machine, enabling new use cases.

With ObjectBox-backed EdgeX we’re bringing the efficiency, performance and small footprint of the ObjectBox database to all EdgeX applications. It is fully compatible, so you can use it as a drop-in replacement: you call against the same REST and Go EdgeX APIs. As simple as that; no need to change any code.

Performance comparison of EdgeX database backends

EdgeX Foundry comes with a choice of two database engines: MongoDB and Redis. ObjectBox EdgeX brings an alternative to the table. Because ObjectBox is an embedded database, optimized for high speed and ease of use while also delivering data reliability, it enables a new set of use cases. As we all know, benchmarks are hard to do. This is why all our benchmarks are open source and we invite you to check them out for yourself. To give you a quick impression of how you could benefit from using ObjectBox, let’s have a look at how each database compares in basic operations on “Device Readings”, one of the most performance-intensive data points.

Note: all CRUD (Create, Read, Update, Delete) operations are measured in objects per second. The benchmarks test the internal EdgeX database layer performance, not REST API throughput.

These benchmarks provide a good perspective on why you should consider ObjectBox with EdgeX. The benchmark sources are publicly available in the ObjectBox EdgeX GitHub repo.

So, why is ObjectBox EdgeX faster?

First of all, you are probably aware of the phrase “lies, damned lies, and statistics” – or, in our case, benchmarks. Of course, you should look at performance for yourself and evaluate it based on your specific use case and needs. That’s why we make our benchmarks available as open source. It is a good starting point.

To make ObjectBox easier to compare (in addition to our open source benchmarks), here are some of the high-level “unfair advantages” that make ObjectBox fast:

  • Objects: As you can derive from its name, ObjectBox is all about objects. It’s highly optimized for persisting objects. The EdgeX architecture and Go sources are a great fit here, as EdgeX puts Go objects (structs) at the center of its interfaces. This means we can omit costly transformations, which helps with speed.
  • Embedded database: Redis and MongoDB are client/server databases running in separate processes. ObjectBox, however, runs in the same process as EdgeX itself (each EdgeX microservice, to be precise). This has definite efficiency advantages, but it also comes with a restriction: whereas you can put Redis/MongoDB in separate Docker containers or on separate machines, this option is not available for ObjectBox yet.
  • Transaction merging: ObjectBox can execute individual write operations in a common database transaction. This reduces the number of costly transaction commits across a series of write operations. It is a great way to gain performance at the price of delaying the end of a transaction by single-digit milliseconds.

Get started with ObjectBox EdgeX

The simplest way to get started is to fetch the latest docker-compose.yml and start the containers:
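For example (an illustrative sketch – the exact file location may differ, so check the repository README first):

```sh
# fetch the ObjectBox-flavored EdgeX compose file (URL is illustrative)
wget https://raw.githubusercontent.com/objectbox/edgex-objectbox/master/docker-compose.yml

# start the EdgeX services in the background
docker-compose up -d

# list the running containers to verify everything came up
docker-compose ps
```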

You can check the status of your running services by going to http://localhost:8500/. At this point, you have the REST services running at their respective ports, available to access from your EdgeX applications.

Find more details, instructions for ARM32, and sources in our GitHub repo at https://github.com/objectbox/edgex-objectbox.

If you’re new to EdgeX, find out all about the open source IoT edge platform here. The EdgeX project is led by the Linux Foundation and supported by many industry players, including Dell, IBM, and Fujitsu.

We love to hear from you 🙏

We’re very interested in hearing about the challenges you are facing on the edge and in IoT. As performance experts, we always embrace a tough challenge. Reach out to us to set up a pilot project.

Is there something you are missing? Please do reach out to us. We want to make ObjectBox the best edge data persistence layer available. We love to receive your feedback.

What next?

To find out more about ObjectBox EdgeX and get started, go directly to GitHub or download the snap from Snapcraft.


Sync.Drone: a drone project based on ObjectBox

This spring, a student group from Augsburg University of Applied Sciences built a drone application based on the ObjectBox database and Sync. This is a guest blog post by Michelle from the sync.drone project group, describing the project from start to finish and sharing the results.

The goal: Showcasing the ObjectBox database and Synchronization solution with drones

The goal of the project was to synchronize colors and flight patterns between two drones, so that they coordinate autonomously in formation flight, showcasing the ObjectBox solution. In the future, this technology could be used in many drone applications. First, due to ObjectBox’ speed, more data can be processed faster on each drone, saving resources, specifically battery. This allows drones to fly longer. At the same time, going beyond the scope of this initial showcase, the technology could be used to synchronize swarms of drones, making their use more reliable and flexible – and less dependent upon a constant Internet connection. For example, as an artistic installation, or in emergency situations during a large-scale search for missing persons. Drones could also be used in large warehouses to facilitate the organization of different parts and pass on the position of a particular part.

In this article, we will explore the process we used to build our self-synchronizing drones, sharing our software and hardware, so you can try it out yourself.

Hardware: Raspberry Pi, 3D printing, and more

In order to build our drones and turn our vision into reality, we had to consider a number of hardware options. It was important that our drone was compatible with and programmable via ObjectBox. The drone had to be localizable and airworthy, so that a safe autonomous flight was possible. All parts had to be compatible with each other, so we could easily swap one out if it did not meet our requirements.

We built the drone frame from scratch, using 3D printers. The housing was created in the 3D program Autodesk Inventor and the parts were assembled into a drone frame. We used NeoPixel RGB LED sticks to make the drone glow in color. We chose the following components.

Microcomputer

A Raspberry Pi was the most suitable central computer for our drone, offering a good balance of performance and size. We chose the Raspberry Pi 3 B+, which would later control the processes of our drone independently.

Tracking system 

After looking at different tracking systems, we chose the POZYX UWB tracking system, which ensured accurate and user-friendly handling.

Accumulator

We had to make sure that the drone’s battery would last long enough to power a Raspberry Pi, LEDs and a POZYX tag in addition to the flight hardware. We first started with a 6-cell LiPo battery with 5000 mAh. However, later in the project, we replaced it with a lighter and more compact 6-cell LiPo battery with 1800 mAh.

Engines

The motors (1750 kV) from the drone-racing sector had enough power to make the drone fly. Motors with an even lower kV rating would have given the drone more power, but they are much more expensive.

Flight controller 

As flight controller we chose the “Omnibus F4 V6” board, which ran the open source software Betaflight and was accessible via the Multiwii Serial Protocol (MSP). This allowed us to benefit from a proven flight software and to transfer flight instructions via USB directly to the flight controller using MSP.

Electronic Speed Controller (ESC) 

For the ESC, which implements the instructions of the flight controller through direct voltage changes at the motors, we chose a 4-in-1 model. Usually one ESC is required per motor; with this model, all four motors can be controlled at the same time through a single connection cable to the flight controller. It was also compatible with our other hardware.

Software – Tracking, Flying and Syncing the Drones

Except for a start signal, the drone was supposed to operate without a remote control. Several drones would coordinate themselves at the same time according to the instructions. We decided to develop the code in three separate “cores”, which were merged at the end of the project: “tracking”, “flying” and “syncing”. Using the university GitLab as a repository, we were able to simplify development and share the code within the group, allowing structured work on the code. With the help of ObjectBox and Prof. Dr.-Ing. Alexandra Teynor we were able to assemble the following code parts.

Tracking 

For collision avoidance it was important to implement tracking, so that the drone knows its own position. We solved this by using the position calculated by the POZYX tag, which was then transmitted to the Raspberry Pi in the tracking core. From the POZYX tag we read the coordinates derived from its IMU (“inertial measurement unit”, a unit combining multiple motion sensors), but not an exact positioning.

The so-called “heading”, or yaw of the drone, is read out by a magnetometer. However, this internal “compass” reacts to disruptive factors and can deliver inaccurate results. We corrected the heading with an algorithm using OpenCV: it uses a small camera module on the drone and special markings on the ground to detect the drone’s orientation. This allows the direction vectors of the drones to be calculated more accurately.

Flying

In the flying core, the flight instructions were developed based on the tracking core data, and then implemented by passing this data on to the flight controller. First of all, the drones have to be lifted off the ground. For this purpose we used a laptop keyboard control, which forwarded flight instructions to the drone via a WebSocket.

Flight control

The Raspberry Pi establishes a serial connection to the flight controller via USB. As soon as this connection is established, flight instructions are transmitted in the form of inclination values for roll, pitch, yaw and throttle (thrust). These values lie between 1000 and 2000; in a neutral position, roll, pitch and yaw are at the middle value of 1500.

Using Python, we calculated the required roll, pitch, yaw and throttle values and assembled them using the Multiwii Serial Protocol. The message was encoded into raw bytes and sent to the flight controller via the USB cable. The flight controller then tries to reach the corresponding values. In order to turn to the right, the left motors are spun up slightly and the right motors slowed down slightly. The ESC received the commands for the desired motor speed from the flight controller and applied the required voltage to each motor accordingly. The communication between the flight controller and the ESC happened either via an analog (PWM) signal or a digital one (DShot).
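To illustrate the framing (a minimal sketch in Go for clarity – the project itself used Python): an MSP v1 frame consists of the header “$M<”, a payload-size byte, a command byte, the payload and an XOR checksum; MSP_SET_RAW_RC (command 200) carries eight 16-bit channel values, whose exact order depends on the flight controller configuration.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildMSPFrame assembles an MSP v1 frame: "$M<" + payload size + command +
// payload + checksum, where the checksum is the XOR of the size byte, the
// command byte and all payload bytes.
func buildMSPFrame(cmd byte, payload []byte) []byte {
	frame := []byte{'$', 'M', '<', byte(len(payload)), cmd}
	frame = append(frame, payload...)
	checksum := byte(len(payload)) ^ cmd
	for _, b := range payload {
		checksum ^= b
	}
	return append(frame, checksum)
}

func main() {
	// MSP_SET_RAW_RC (command 200): eight RC channels as little-endian uint16
	// values, here roll/pitch/yaw at neutral (1500) and throttle low (1000).
	channels := []uint16{1500, 1500, 1500, 1000, 1000, 1000, 1000, 1000}
	payload := make([]byte, 0, 2*len(channels))
	for _, ch := range channels {
		var buf [2]byte
		binary.LittleEndian.PutUint16(buf[:], ch)
		payload = append(payload, buf[:]...)
	}
	frame := buildMSPFrame(200, payload)
	fmt.Printf("% X\n", frame) // in the real setup, written to the flight controller's serial port
}
```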

Keyboard control 

The computer runs a Python script that registers keystrokes and converts them into instructions. For example, pressing the right arrow key creates the command “raise-roll” and pressing the left arrow key triggers the command “lower-roll”.

The drone also runs a Python script that opens a WebSocket to which the PC script connects. Each time a key is pressed on the laptop, a corresponding command string is generated (e.g. “raise-yaw”) and sent to the drone via the WebSocket. As soon as a string arrives, the relevant value (roll, pitch, yaw or throttle) is increased or decreased.

To prevent the drone from crashing if a connection is lost, the values are flattened algorithmically.

ObjectBox Database and Sync Drone Implementation  

In the syncing core, the position data of all drones, as well as the LED colors, should be exchanged and commands passed on. The RGB color space of the LEDs was mapped to the x-, y- and z-position. In this way, the sync features of the drones could be demonstrated without them flying. For the implementation we used the ObjectBox database and the ObjectBox Sync Server.

Originally, we had planned to use the ObjectBox Go binding because it is precompilable and very fast. However, the POZYX system we chose used Python, and there was already a Python implementation available for our flight controller, but none for Go. Luckily, ObjectBox offered to develop and provide a small Python binding of their database according to our needs, including all ObjectBox functions that were relevant to us. It was officially released in version 0.1.0 specifically for our project. As a result, the ObjectBox database could be easily integrated into our code.

Realization of syncing

In Python version 0.1.0, ObjectBox incorporated the basic features we needed. For our application the simple CRUD functions and the Sync feature, which synchronizes the data in near real time, were sufficient. The database is compact, and its speed and ease of use are optimized for restricted IoT devices, for example the Raspberry Pi used in this project.

The sync server is started by running the init-server.py script on the master drone. At first, an empty database (model) is initialized. The master drone then communicates with the other drones via a WLAN connection and synchronizes the ObjectBox database between the respective devices.

Three entities (classes) are stored:
– the identification and position data of the anchors
– the identification and position data of the tags
– the color values of the LEDs.
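Illustrated as Go structs for brevity (a hypothetical sketch – the project itself used the ObjectBox Python binding), these entities could look like this:

```go
// Anchor holds the identification and fixed position of a POZYX anchor.
type Anchor struct {
	Id      uint64 // ObjectBox object ID
	X, Y, Z float64
}

// Tag holds the identification and current position of a drone's POZYX tag.
type Tag struct {
	Id      uint64
	X, Y, Z float64
}

// LedColor stores the RGB values currently shown by a drone's LEDs.
type LedColor struct {
	Id      uint64
	R, G, B uint8
}
```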

Each drone stored its position and LED color in the database. The master drone then read out this information and overwrote it with newly calculated values (e.g. the LED color or, in the future, a target position).

Thank you!

At the end of our project, we had three drones. Depending on the position of the master drone, all drones could synchronize their LED colors. Unfortunately, we were not able to finish the flight due to a defect in the flight controller and a delayed delivery of parts. Finally, we decided to publish the code for the drone control on GitHub. Additionally, you can get inspired on our website as well as on our social media platforms.

Furthermore, we would be happy if the project were continued by another group of students in the future. With our work we have created a basis for many more ideas; our project still holds a lot of ambitious potential.

Thanks to ObjectBox for this great opportunity – we mastered many problems along the way and learned a lot. Thanks for the constant support. We also thank our professors Prof. Dr.-Ing. Alexandra Teynor, Prof. KP Ludwig John and our coach Sandra Hobelsberger for their professional advice and patience. Finally, we would like to thank HSA_Innolab for their additional financial support and FabLab for their advice and resources.

In collaboration with interactive media students of the University of Applied Sciences in Augsburg.

 


The best IoT Databases for the Edge – an overview and compact guide

For many IoT projects, relying on the cloud for data storage and analysis is inefficient and costly. Some IoT projects cannot work cloud-based at all, due to hardware or network/bandwidth limitations or a need for real-time response rates. Then edge computing is the only solution. And in order to bring computing closer to the source of the data, you need an IoT database optimized for the edge.

What to look for in an IoT Database for the Edge

There are several things to consider when choosing an IoT database for the edge. The five most important criteria to take into account are edge capability, performance, ACID support, language support and data type support.

Edge-capability

Of course, in order to qualify as edge-capable, the database needs to run directly on a broad spectrum of edge devices – either embedded or in-memory. Many IoT devices are physically small and have limited resources, so a database for the edge needs to have a small footprint. For that reason, the list below does not include databases with a core library larger than 10MB.

Performance

Many edge use cases have a need for speed. For example, in additive manufacturing, necessary adjustments to the next layer added to an asset need to happen in near real-time. Because this decision is based on a multitude of environmental factors from the factory floor, tons of sensor data need to be processed extremely quickly.

ACID-compliance

Depending on whether you can afford to lose some of your data sometimes, you need to check if the database is fully ACID compliant – and under which conditions any benchmarks have been run. What does ACID mean? In the database world, this popular acronym refers to how data in transactions is handled by a database and stands for: Atomicity, Consistency, Isolation, Durability. In short, a fully ACID-compliant database is transactionally safe and ensures that, despite errors, power failures etc., no data is lost and the transactions are always executed in a valid way.

Language support

Another important criterion, obviously, is the language used to implement the database: does it match your project’s language and developer skills? Generally speaking, it is more efficient to stick to one language; this is also why many developers prefer to avoid dealing with SQL.

Data Type support

Finally, you need to decide on the overall structure your data shall be stored in. In this article, we only focus on full databases that enable complex computing on small devices – thus the list only includes traditional databases, i.e. those that are relational, object-oriented or graph-based. Databases that are limited to time-series data (e.g. InfluxDB, TimescaleDB) and ORMs are not discussed here.

IoT Databases for the Edge

To help you swiftly choose the best IoT database for your next project, we had a look around and compared the available databases. Here is a list of IoT databases for use on the edge:

Badger is a fast, ACID-compliant, NoSQL, LSM tree-based key-value store, written fully in and available only for Go (it is the storage engine behind the distributed graph database Dgraph). As Badger does not focus on IoT devices, it supports easy horizontal scaling, synchronous replication to prevent data loss, load balancing, and using the full capacity of SSDs instead of RAM. Central or P2P synchronization is not available.

Berkeley DB is an ACID compliant embedded key-value store. Due to a static library size of less than 1 MB, and runtime dynamic memory requirements of only a few KB, it is suitable for a variety of edge devices. The database is usable in many different languages, such as C++, C#, Java, Perl, PHP, Python, Ruby, Smalltalk and Tcl. Berkeley DB does not offer any synchronization support.

LevelDB is a key-value storage library that provides an ordered mapping from string keys to string values. It is written in C++ and has bindings for languages such as C, Go, NodeJS and Java. LevelDB runs on-disk and is queried without SQL. Applications need to use it as a library, as the database does not provide any server or command line interface. Indexes and synchronization are not supported.

ObjectBox is a fast, object-oriented, ACID-compliant database with strong relation support. It was designed specifically for Edge IoT and embedded and mobile applications. ObjectBox has a memory footprint of less than 1 MB. Language support includes C, Go, Java, Kotlin, Swift, Python (Beta), and Dart. Centralized synchronization support is currently in its alpha stage and P2P synchronization is a work in progress.

Realm, which was acquired by MongoDB in 2019, is an ACID-compliant NoSQL database. It has been strongly focused on mobile platforms from its start and is only beginning to move into IoT. It offers central as well as P2P synchronization. Supported languages include Java, Kotlin, Swift, C# and NodeJS.

Redis is a key-value database which is not ACID-compliant by default. However, it offers an optional durability concept for transactions which, when turned on, reduces database performance significantly. In contrast to other database systems, it works in memory, with user commands not being data queries but specific operations to be performed on abstract data types. Redis has many different client bindings, e.g. C, C++, Dart, Go, Java, NodeJS, Python and Rust.

RocksDB is a persistent key-value store for SSD and RAM storage. It is not ACID-compliant, but works using concurrent transactions with conflict resolution. This embedded NoSQL database supports Java, Python, NodeJS, Go, PHP and Rust, to name only a few languages. Synchronization in any form is not natively possible with RocksDB.

SQLite is the only fully SQL-based relational database library in this list. SQLite comes with a small footprint and is fully ACID-compliant. It offers encryption as a paid service. There is no support for synchronization.

Let us know your thoughts!

Different use cases call for different databases, and we hope that this list gives you a good starting point for your edge computing project. Let us know your thoughts in the comments below – what is your favorite database to work with, and why?


Car Tolling – A case for Edge Computing

Governments often face tight budgets for infrastructure development; car tolling is increasingly seen as the answer for raising funds¹, making it more and more prevalent. From 2008 to 2018 the total length of tolled roads in Europe increased by 23%² and tolling revenue in Europe increased by 37%³ to €31.3 bn per year; similarly, from 2010 to 2015 the United States saw a 63% increase in transponders and 52% more tolling revenue, reaching $13.8 bn in 2015. On top of that, despite car-sharing efforts, car ownership and traffic are still increasing in many countries, e.g. Germany, France and India. Increasing amounts of traffic, devices, and data points bring current tolling solutions to their limits. Taking data to the edge in new and existing tolling solutions, for example with the ObjectBox data storage and synchronization solution, can make tolling more efficient and reliable.

Setting the stage: a typical car tolling situation

A national infrastructure company has deployed several hundred car tolling stations all over the country. These stations automatically recognize passing cars, either visually by detecting license plates or wirelessly, e.g. by receiving data from an RFID transponder in the car. In order to ensure that only eligible cars pass through the tolling station and violators are fined, the tolling station software needs to look up the gathered vehicle information – among millions of entries – as fast as possible. If the data look-up is not fast enough, or the data at the roadsides/tolling stations isn’t up to date and in sync with the central data, the tolling station loses money.

“The importance of mobile apps is increasing for Kapsch TrafficCom so that we see ObjectBox’ edge computing database solution as an interesting future base technology for all types of mobility apps.”

Peter Ummenhofer

Executive VP Solution Management, Kapsch TrafficCom

Why edge computing and fast lookup is key to today’s car tolling systems

In general, a modern nationwide tolling infrastructure consists of three systems: tolling stations operated by the respective agencies, open-road (also called mobile) tolling systems, and central transaction clearing houses. Within this infrastructure, all data related to violators and other operational information needs to be synchronized between these three systems in a consistent way, with as little delay as possible. If this is not the case, car tolling system operators – on top of other problems – face high monetary losses every day.

Today’s car tolling systems are based on the fundamental idea that cars do not need to stop to be checked or charged. Thus, as the cars move quickly through the scanning area, the challenge of implementing a car tolling system directly relates to the amount of data that needs to be searched within a very short time frame. To be successful, this process needs to happen in near real-time. From a development perspective, these problems are rooted in:

  • accessing data from a remote location (speed of communication, speed of network)
  • keeping data in synchronization with car tolling stations that are closer to the drivers and/or roadside units
  • database speed on remote servers
  • database speed on roadside units (car tolling edge devices)
  • limitations of existing hardware as some systems are quite old, and rolling out new hardware is expensive

Furthermore, it is possible that stations shut down from time to time, due to the weather, power outages, vandalism or simply technical failures. However, tolling providers generally need to provide strict uptime guarantees and thus service level agreements often include penalty fees in case of excessive downtime. Such events cost the providers substantial amounts of money – and data loss, i.e. undetected violators, even more so.

Adding to this, privacy and legal requirements differ from country to country and increase the complexity of the systems and timings. For example, in Austria the pictures and derived license plate information may only be used for checking, but in case no violation was detected, they need to be removed in an unrecoverable manner¹⁰. On the other hand, the data of potential violators may be stored for the sole purpose of toll collection or prosecution, but only for a maximum of three years.

How fast data storage and syncing can help in car tolling

To solve these problems, a data storage and data synchronization solution like ObjectBox can be deployed on every type of tolling station, i.e. open and static stations, as well as on the central server. From a technical point of view, this is not a problem, because the ObjectBox library supports virtually all platforms and operating systems. Financially, it is considerably cheaper to update software than it is to upgrade hardware.

With the library installed, ObjectBox Sync guarantees that the vehicle data in the stations’ internal memory is always up to date with the central server, so the station makes every decision based on the most accurate data. Additionally, the other systems involved in the tolling infrastructure consistently receive the most recent information with no further effort required.

Because ObjectBox is particularly reliable (ACID-compliant) and well-tested, deploying the synchronization solution also means that station shutdowns or internet connection issues are no longer a problem. The stations’ operating company will no longer lose violator information for technical reasons.

Summary – Car tolling is moving to the edge

As this case study shows, edge computing is a perfect fit for modern infrastructure. In the context of car tolling, speed, reliable data storage and synchronization are indispensable, making ObjectBox an effective solution for today’s and future technological needs.

If you are interested in learning more, feel free to get in touch with us! We appreciate any kind of feedback.


How to benchmark database performance – and ObjectBox

Benchmarking the performance of databases is a science in and of itself, and it is hard to get reliable and comparable results. Therefore, we decided to note down some standard patterns and pitfalls of database benchmarking that we have learned about over the years, including specific notes and things to consider when benchmarking ObjectBox.

Designing a benchmarking test

Phrase your research question

When you want to benchmark databases, you will usually have a specific use case in mind and want to answer questions regarding that case. Therefore, before writing your first line of code, have a look at what it is you want to benchmark, why, and what statement you want to be able to make at the end. Just writing down a simple question like “Is X faster than Y doing A?” can help you verify that the finished benchmark actually measures what you want it to.

Start documenting

Whatever you do, document each step diligently. Benchmarks that are not properly documented are challenging to reproduce, and thus of limited worth. So, the mantra in benchmarking really is: Document, document, document. That way it is also easier to go back later and make adjustments if needed.

Select the sample

If you have a clear use case and goal in mind, this probably determines the type of databases you are going to consider for your benchmark. Generally speaking, benchmarks should compare databases of similar type and not mix approaches that are too different. A database might be designed to run on a cluster of servers and not, like ObjectBox, on a constrained device (e.g. a smartphone, Raspberry Pi or IoT device). Or data may be stored as documents (e.g. NoSQL) and not in a relational table-like structure (e.g. SQL); or on disk and not in-memory. Consistency guarantees can also make a difference (ACID-compliance vs. not transactionally safe). Document why you chose these databases for comparison.

Become an expert

Once you have decided on what databases and features to compare, familiarize yourself with the APIs of the products, e.g. by reading the documentation or looking at code examples. Making wrong assumptions about how a feature works can skew your results and make a product appear much faster or slower than it actually is. If the code is open source, it can also help to dive into the code to see how things actually work. For example: an insert function runs asynchronously so the function call returns immediately, but the actual insert still executes in the background. To correctly measure the actual time it takes to complete the insert, you need to do some additional work.

Things to watch out for when coding benchmarks

Let’s move on to some specific coding tips. A benchmark is only as good as its time measurement. Check what APIs your test platform offers to get the time; they might be affected by how and from which thread they are called. Make sure you find an accurate way to measure time that does not skew your results. For example, on Android there are numerous possibilities beyond using currentTimeMillis().

If a measured code block executes so fast it is near the available precision of your clock (e.g. time is in milliseconds, but code takes microseconds), consider running it multiple times and measure the total time of all runs.

Next, before starting each measurement, make sure to free up memory and clear references no longer in use by the previous measurement or setup code. If the environment your benchmark is written for uses a garbage collector, check if it can be triggered manually to free memory (e.g. System.gc() in Java). Otherwise each consecutive measurement might be skewed due to less and less memory being available or a garbage collector halting execution to free memory. In your benchmarking results, you should look out for strange results like a continual decrease in performance from run to run.

Furthermore, take into account that some runtime environments, for example the Java Virtual Machine, do just-in-time compilation. This can cause a delay the first time code is executed, but provide better performance on subsequent executions. The effect of this on the final result can however be minimized by proper testing procedure, e.g. by running a code block multiple times instead of once and measuring the total execution time.

Then something obvious, but easily overlooked, is to ensure that between the start and end of a measurement only the functionality that you actually want to compare is executed. So avoid or turn off logging (a seemingly innocent string concatenation can skew results) and construct test data outside of a measured code block.
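Putting these tips together, a measurement skeleton could look like this (a Go sketch with hypothetical prepareTestData/insertAll helpers; on the JVM you would reach for System.nanoTime() and System.gc() instead):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
	"time"
)

const runs = 100

// prepareTestData and insertAll stand in for your benchmark's setup code and
// for the operation under test – both are hypothetical placeholders.
func prepareTestData() []string { return make([]string, 1000) }
func insertAll(data []string)   { /* the operation being measured */ }

func main() {
	data := prepareTestData() // construct test data OUTSIDE the measured block

	insertAll(data) // warm-up run to absorb one-time costs (JIT, caches, ...)

	debug.SetGCPercent(-1) // disable automatic GC so it cannot halt a measured run
	runtime.GC()           // free memory left over from the setup code

	start := time.Now()
	for i := 0; i < runs; i++ {
		insertAll(data) // only the functionality under comparison runs here
	}
	elapsed := time.Since(start)
	fmt.Printf("total: %v, average per run: %v\n", elapsed, elapsed/runs)
}
```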

To be absolutely sure that your code is doing what you intended it to, use a profiler to inspect resource usage during a trial run. IDEs like Android Studio and Xcode come with an embedded profiler, and there are also several standalone profilers to choose from.

Optimizing benchmarks for meaningful results – with ObjectBox examples

Before benchmarking the chosen databases, make sure you understand their differences and default settings, and adjust all settings to be comparable. In the following section, we will go through the most important differences and settings you need to look out for, using ObjectBox as an example.

Transactions, Durability, Consistency, ACID – how to make your benchmarks comparable

First, be aware of the impact the use of transactions or lack thereof can have. For databases, committing a transaction is an expensive operation as it requires waiting until the disk has safely stored the data. If possible, group multiple operations into a single transaction. For example, in the ObjectBox code snippet below, there is a notable speed-up when wrapping multiple box operations into a transaction block.
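A sketch of the idea in ObjectBox Go (assuming a Task entity with its generated box; RunInWriteTx is the explicit write-transaction API):

```go
import "github.com/objectbox/objectbox-go/objectbox"

// Slow: every Put commits its own transaction, waiting for the disk each time:
//   for _, task := range tasks { box.Put(task) }
//
// putAllInOneTx writes all tasks in a single write transaction instead,
// paying the commit cost only once.
func putAllInOneTx(ob *objectbox.ObjectBox, box *TaskBox, tasks []*Task) error {
	return ob.RunInWriteTx(func() error {
		for _, task := range tasks {
			if _, err := box.Put(task); err != nil {
				return err // a returned error rolls the whole transaction back
			}
		}
		return nil // commit once at the end
	})
}
```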

Speaking of transactions, also check if using bulk operations is possible. These also use transactions to speed up execution. E.g. instead of performing a put on each entity in a list, put the whole list.
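In ObjectBox Go that would look roughly like this (a sketch; assuming the generated Task box and its bulk PutMany API):

```go
// instead of:
//   for _, task := range tasks { box.Put(task) }
// put the whole list in one bulk call, executed in a single internal transaction:
ids, err := box.PutMany(tasks)
```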

The ObjectBox transactions docs provide more details and are available for Java, Swift or Go – though the basic principle is the same across languages.

Second, and closely related to transactions, are durability guarantees when writing data. This is about the “D” in the popular ACID acronym (Atomic, Consistent, Isolated and Durable). ObjectBox transactions and standard (non-async) operations are fully ACID-compliant.

Thus, pay close attention to what durability modes the other tested databases guarantee or, respectively, which durability mode you want to measure. Most NoSQL databases don’t give hard durability guarantees. Some provide an extra command or special mode to enforce durability. Therefore, if your use case needs to ensure data is actually stored safely after a write operation, you need to enable this durability for the other databases when comparing them to ObjectBox.

On the other hand, if you are interested in scenarios that emphasize performance over durability, you should look into the ObjectBox async APIs. Those don’t come with durability guarantees unless you define “checkpoints” in your code to wait for async operations.

Indexing – how to make your database queries efficient

Third, when measuring query operations, see if you can use indexes, another typical database optimization. If the database has an index on a property that is used in a query condition, it can find matches much faster.

An index makes queries “scalable” – the more objects are stored, the more an index makes sense. Without an index, a database has to do a “full scan” over all potential results.
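In ObjectBox Go, for example, an index is declared with a struct tag and then used transparently by queries (a sketch – the condition-method names follow the generated-binding style and may differ in detail):

```go
// The `index` annotation makes the ObjectBox generator maintain an index on
// Name, so equality queries on it avoid a full scan over all objects.
type Device struct {
	Id   uint64
	Name string `objectbox:"index"`
}

// Device_ is generated binding code; the query uses the index automatically.
devices, err := box.Query(Device_.Name.Equals("sensor-42", true)).Find()
```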

Number of objects and the disk bottleneck – how to measure the database and not the disk

And lastly, keep an eye on the number of objects you operate on. For example, if you put a single object, something like 99.99% of the time will be spent on disk I/O. Thus, if you test this on several databases, the chance of getting about the same results for all of them is quite high; the limiting factor is always the disk. So if you instead want to measure the efficiency of converting objects into their persisted counterparts, you should look at much higher object counts to factor the disk out of the equation. Depending on disk and device speed, a bulk “put” of 10K, 100K or 1 million objects will make more sense to measure in this context.

Multi threaded tests – how to set writers and readers

ObjectBox is built upon a multiversion concurrency control foundation and is thus ready for multithreaded access. Each thread has a consistent transactional view of the data. ObjectBox differentiates between “readers” (a thread currently in a read transaction) and “writers” (a thread currently in a write transaction). Readers never block, no matter what goes on in other threads. A writer, however, will block other writers when using standard transactions: writers are sequential, so only a single writer can run at any time. Thus, if your load is write-heavy across multiple threads, you may want to look at the asynchronous APIs of ObjectBox, which handle write operations very efficiently no matter how many threads are involved.

Extending the database size limit – size matters

By default, ObjectBox limits the database size to 1 GB to avoid filling up the disk by accident (e.g. when your code has a bug in a data insertion loop). So if you use large data sets to benchmark, we recommend increasing the maximum database size when building the box store.
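In ObjectBox Go this is a builder option (a sketch, assuming the MaxSizeInKb option of the store builder):

```go
// Raise the limit from the 1 GB default to 10 GB; the value is given in KB.
ob, err := objectbox.NewBuilder().
	Model(model.ObjectBoxModel()).
	MaxSizeInKb(10 * 1024 * 1024).
	Build()
```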

Preparing the benchmarks: devices and platforms

Once your benchmark code is ready, it’s time to set up the test device(s). The first step is to ensure other apps or processes are not doing (too much) work while your measurements run. Ideally, you would use a clean device with no apps installed or services configured. This is especially true for mobile devices: once you connect them and they start charging, the operating system might wake numerous apps to perform background work or network updates. You can somewhat avoid this by switching on airplane mode. And to be safe, wait a few seconds after connecting the device.

Also ensure – again especially important for mobile devices – that your device does not enter a low-power mode which reduces performance. On Android, for example, keeping the screen on typically prevents that. When running on a laptop, check whether the power supply is plugged in and which power mode your operating system is set to. You may explicitly want to test in a certain power state or with the default behavior.

And just to mention it: make sure to turn off any profiling or monitoring tools that you used while building your benchmark. They can significantly skew your results.

When running on multiple devices, pay attention to the differences in hardware, like available memory and processor model, but also in software, like the operating system version. Different hardware and software might have different optimizations that can skew your results. Again, do not forget to document your device and software setup, so results can be properly interpreted and reproduced.

Running the database benchmark and collecting results

When it is time to run the benchmark, it’s good to run it not once, not twice, but many times. This minimizes the impact of the various side effects we discussed above. Output the results of each run in a comma-separated (CSV) or tab-separated (TSV) format that can easily be imported into a spreadsheet application for analysis. You can look at how the ObjectBox benchmark does it.
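A minimal way to do this in Go (illustrative):

```go
package benchmark

import (
	"encoding/csv"
	"os"
	"strconv"
)

// writeResults writes one row per benchmark run to a CSV file that any
// spreadsheet application can import for analysis.
func writeResults(path string, opsPerSecond []float64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w := csv.NewWriter(f)
	defer w.Flush()

	if err := w.Write([]string{"run", "ops_per_second"}); err != nil {
		return err
	}
	for i, ops := range opsPerSecond {
		row := []string{strconv.Itoa(i + 1), strconv.FormatFloat(ops, 'f', 2, 64)}
		if err := w.Write(row); err != nil {
			return err
		}
	}
	return nil
}
```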

Once you have collected some data, verify its quality. You might have already spotted outliers while perusing the results. Alternatively, calculate the average of all measurements and see if it deviates a lot from their median value. If you are familiar with it, look for a high variance value instead. Too many outliers or a high variance might hint at side effects of your benchmark code or device setup you have not considered. Better double check to be sure.

How to publish meaningful database performance benchmarks

The final step is to share your results with the world (or just your team). Make sure it’s clear what exactly you have measured and how you have arrived at those results. So include which device and software was used, how they were configured, what you did before running the benchmark and how the benchmark was run.

If possible, share the generated raw data so others can verify that your calculations lead to the published results. Even better, publish the source code as well, so others can run it on other devices or help you spot and fix issues.

Last but not least: be careful about drawing conclusions – rather, let the data speak for itself. Respond to questions and feedback. Be honest if you learn your benchmarks may be skewed, and update them. In the end, everyone wins by getting more meaningful results.


ObjectBox Go 1.0 release and performance benchmarks

ObjectBox 1.0 is here. Late last year, we introduced ObjectBox to the world of Go – and were awestruck by the GitHub traction and feedback we got. So we doubled down on our efforts and are thrilled to announce 1.0 with many new features, a stabilized API, and improved performance. From now on, your projects can safely upgrade.

For those of you new to ObjectBox, here is how ObjectBox persists Go structs (objects):

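(A minimal sketch along the lines of the ObjectBox Go examples; ObjectBoxModel and BoxForTask are binding code produced by the ObjectBox generator via go generate.)

```go
package main

import "github.com/objectbox/objectbox-go/objectbox"

// Task is a plain Go struct – the ObjectBox generator derives the binding
// code (ObjectBoxModel, BoxForTask, ...) from it.
type Task struct {
	Id   uint64
	Text string
}

func main() {
	// Build the store from the generated model.
	ob, err := objectbox.NewBuilder().Model(ObjectBoxModel()).Build()
	if err != nil {
		panic(err)
	}
	defer ob.Close()

	box := BoxForTask(ob)

	// Create: Put persists the object and assigns its ID.
	id, _ := box.Put(&Task{Text: "Buy milk"})

	// Read, update, delete.
	task, _ := box.Get(id)
	task.Text += " & bread"
	box.Put(task)
	box.Remove(task)
}
```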

ObjectBox Go 1.0 – Highlights

Let’s recap what ObjectBox Go brings to the table:

  • automatic schema migration (e.g. adding new structs or fields just works)
  • type-safe APIs, e.g. no interface{} arguments
  • embedded database – no server needed, all data stored on the device
  • Go structs as data (also embedded structs); no ORM, no SQL
  • relations: to-one, to-many (eager and lazy fetching)
  • powerful queries with indexes for scalable lookups
  • ACID compliant with implicit (automatic) and explicit (user defined) transactions
  • superfast bulk/batch operations
  • low memory usage
  • runs on 64-bit Linux, macOS, Windows, and small 32-bit ARM-based Linux devices (e.g. Raspberry Pi)

How fast is ObjectBox Go? Performance Benchmarks

To see how ObjectBox compares performance-wise, we looked at libraries with a comparable level of storage abstraction – so not just plain SQL/key-value storage, but also ORM features. Finding embedded data storage libraries satisfying those simple criteria wasn’t so easy, but we picked a few based on GitHub popularity:

  • GORM using SQLite – GORM supports multiple backends including the embedded SQLite database.
  • Storm using bbolt – while Bolt DB is not maintained anymore, its fork bbolt seems to be picking up traction.

What we tested

To get an overview of the databases, we tested CRUD (create, read, update, delete) operations as well as indexed and non-indexed queries. Additionally, we compared the peak memory usage of each database during a test run. Each test was run multiple times, with garbage collection turned off and executed manually outside of the measured time. Data preparation and evaluation were also done outside of the measured time.

We tried to keep the test implementations as close as possible to each other while picking the approaches recommended by the docs for each database. The test code is available at https://github.com/objectbox/objectbox-go-performance.

Test setup

The benchmarks were the only foreground application running on the PC and the given results are an average over 20 runs.

Specs: i7-8850H laptop CPU, Manjaro 18, Linux 4.19.56
Disk speeds (LUKS-encrypted SSD): ~480 MB/s read, ~120 MB/s write

 

CRUD

All operations were executed in batches, not one-operation-per-transaction. For GORM and Storm it was necessary to do a batch insert, update and remove inside a manual transaction because they don’t support batch operations out of the box.

The numbers were measured by running the benchmarks on 100,000 object sets for ObjectBox and GORM, while we chose to use just 10,000 objects with Storm, because it achieved better operations per second with a smaller set. We will follow up with a post comparing how each database fares with different data set sizes.

The standard operations you would use all the time with any database show between a 6.8× speedup (update, ObjectBox vs. Storm) and a 124× speedup (create, ObjectBox vs. Storm).

Queries

Next we looked at query performance by executing two simple queries. The first test uses an index (actually the “primary key”) to fetch 100 objects. The second one finds 1111 objects based on a string prefix match (not indexed). The tests were executed 20 times and the average times (ms) are shown below. Note that the arbitrary number 1111 is just based on how the test code prepares the data (strings contain an object ID, thus 1111 objects out of 10,000 start with the digit “1”).

GORM/SQLite performance is very good for a primary-key based query but falls off when it has to use “LIKE” over the whole dataset without an index. Storm doesn’t catch up in either case.

Memory usage

Lastly, we compared the memory usage of each solution. The peak memory usage was measured externally by executing the benchmark binary using /usr/bin/time -v and taking the “Maximum resident set size”. The memory-usage test was run on 10,000 objects with one test run (as opposed to 20 runs in the previous benchmarks), repeated manually to make sure there was no notable difference between runs.

This benchmark shows a massive difference in memory usage, which is a good thing to think about not only for small devices but also for cloud-based resources (where RAM is comparatively pricey).

Of course, any results you get from performance benchmarks will always be just that: benchmarks. You should consider your application needs and, ideally, benchmark the critical scenarios yourself. If you do, you can read up on a couple of tips for good benchmarks here. And because sharing is caring: do let us know what you find out. Having said that, ObjectBox does show a clear lead across the board, so why don’t you go ahead and start using it now 🙂

Getting started

Check the ObjectBox Go docs which come with setup instructions, an introduction and information on many more topics. The ObjectBox Go GitHub repository contains all sources if you are interested in internals. Also, it comes with an example application.

Outlook

With version 1.0 we hit an important milestone. We’re looking forward to your feedback and we’ll progress with additional features around queries, relations, and async operations. And of course, data synchronization that will allow you to share data among servers and clients. Stay tuned and let us know your thoughts.