Digital Healthcare – Market, projections, and trends

Note to Readers (2024 Update)
This white paper, originally published in 2020, explores key projections, market dynamics, and trends in digital healthcare. Since its publication, the field has evolved significantly, and the accompanying detailed report has been updated to reflect the state of digital healthcare in 2024. For the most current insights and analysis, please refer to the updated report (also linked at the end of this blog).  

 

If you work in the healthcare industry, you are likely familiar with some uses of IoT devices. According to Gartner (2020), 79% of healthcare providers are already successfully employing IoT solutions. [1] However, this is just the beginning. While the growth of digital health adoption had stalled before COVID-19 [2], the market is picking up speed again. Indeed, Q3 2020 set a record for investments in healthcare companies [3], and the market expects rising investments in healthtech in the coming years. [4] Today, underutilized data plays a major role in healthtech innovation [17], and the growing importance of healthcare data for future offerings is evident. [5] Take a look at how analysts from Gartner to Accenture and Forrester expect the market to grow:

The digital healthcare market 2020 and beyond

  • Analysts expect Artificial Intelligence in healthcare to reach $6.6 billion by 2021 (with a 40% CAGR). [6]
  • The Internet of Medical Things (IoMT) market is expected to cross $136 billion by 2021. [11]
  • Analysts expect the healthcare wearable market to have a market volume of $27 billion by 2023 (with a 27.9% CAGR). [7]
  • The IoT industry is projected to be worth $6.2 trillion by 2025 and around 30% of that market (or about $167 billion) will come from healthcare. [8]
  • Analysts expect the global Medical Health Apps market to grow to $236 billion by 2026, reflecting a shift towards value-based care. [9]
  • The global digital health market is estimated to reach $510.4 billion by 2025 (with a 29% CAGR). [10]

The healthcare industry has been struggling with shrinking payments and pressure to optimize costs for years. [18] Fueled by the need to adapt in light of the COVID-19 pandemic, digital technologies are now rapidly bringing extensive change to this struggling industry. Data is moving to the center of this changing ecosystem and harbors both risks and opportunities in a new dimension. [21] The basic architecture and infrastructure to make data reliably, securely, and quickly available where it is needed will be decisive for the success or failure of digital healthcare solutions. [17] [21]

We recommend keeping an eye on the following five trends:

The 5 biggest digital healthcare trends to watch


Artificial Intelligence (AI)  

Accenture estimates that AI applications can help save up to $150 billion annually for the US healthcare economy by 2026. [6] It is therefore no wonder that the healthcare sector is expected to be among the top five industries investing in AI in the next couple of years. [19] The three AI applications with the greatest near-term value in healthcare are: 1. robot-assisted surgery ($40 billion), 2. virtual nursing assistants ($20 billion), and 3. administrative workflow assistance ($18 billion).


Big Data / Analytics

The goal of big data analytics solutions is to improve the quality of patient care and the overall healthcare ecosystem. The global healthcare big data analytics market is predicted to reach $39 billion by 2025. [12] The main areas of growth are medical data generation in the form of Electronic Health Records (EHR), biometric data, and sensor data.


Internet of Medical Things (IoMT)

IoMT is expected to grow to $508.8 billion by 2027. [13] According to Gartner, 79% of healthcare providers are already using IoT in their processes. [27] During COVID-19, IoMT devices have been used to increase safety and efficiency in healthcare, e.g. by providing and automating clinical assistance and treatment for infected patients to lessen the burden on specialists. Future applications, like augmented reality glasses that assist during surgery, are shifting investment focus further toward IoMT. [14]


Telehealth / Telemedicine

Telecommunications technology enables doctors to diagnose and treat patients remotely. Consumer adoption of telehealth skyrocketed in 2020, and McKinsey believes that up to $250 billion of current US healthcare spend could potentially be virtualized. [25] Many patients also view telehealth offerings more favorably and – having had good experiences – plan to continue using telehealth in the future. [26] Unsurprisingly, telemedicine stocks are growing rapidly as well. [14]


Edge Computing

Edge computing is a technological megashift happening in computing. [23] Instead of pushing data to the cloud to be computed, processing is done locally, on ‘the edge’. [15] Edge computing is one of the key technologies to make healthcare more connected, secure, and efficient. [22] Indeed, the digital healthcare ecosystem of the future depends on an infrastructure layer that makes health data accessible when and where it is needed (data liquidity). [21] Accordingly, IDC expects the worldwide edge computing market to reach $250.6 billion in 2024 (with a 12.5% CAGR) [24], with healthcare identified as one of the leading industries adopting edge computing. [16]

The healthcare market is in the middle of a fast digital transformation. Drivers such as COVID-19, growing IoT adoption in healthcare, and underlying social megatrends are pushing digital healthcare growth to new heights. Along the way, the digital healthcare industry faces many challenges, both technical and regulatory. At the same time, the healthcare market is offered a wealth of opportunities.

References

[1] https://www.computerworld.com/article/3529427/how-iot-is-becoming-the-pulse-of-healthcare.html / https://www.gartner.com/en/documents/3970072
[2] https://www.accenture.com/us-en/insights/health/leaders-make-recent-digital-health-gains-last
[3] https://sifted.eu/articles/europes-healthtech-industry-2020/
[4] https://www.mobihealthnews.com/news/emea/health-tech-investments-will-continue-rise-2020-according-silicon-valley-bank
[5] https://news.crunchbase.com/news/for-health-tech-startups-data-is-their-lifeline-now-more-than-ever/
[6] https://www.accenture.com/us-en/insight-artificial-intelligence-healthcare
[7] https://www.grandviewresearch.com/industry-analysis/wearable-medical-devices-market
[8] https://www.marketsandmarkets.com/PressReleases/iot-healthcare.asp
[9] https://www.grandviewresearch.com/press-release/global-mhealth-app-market
[10] https://www.globenewswire.com/news-release/2020/05/23/2037920/0/en/Global-Digital-Health-Market-was-Valued-at-USD-111-4-billion-in-2019-and-is-Expected-to-Reach-USD-510-4-billion-by-2025-Observing-a-CAGR-of-29-0-during-2020-2025-VynZ-Research.html
[11] https://www2.stardust-testing.com/en/the-digital-transformation-trends-and-challenges-in-healthcare
[12] https://www.prnewswire.com/news-releases/healthcare-analytics-market-size-to-reach-usd-40-781-billion-by-2025–cagr-of-23-55—valuates-reports-301041851.html#:~:text=Healthcare%20Big%20Data%20Analytics%20Market,13.6%25%20during%202019%2D2025 
[13] https://www.globenewswire.com/news-release/2020/11/25/2133473/0/en/Global-Digital-Health-Market-Report-2020-Market-is-Expected-to-Witness-a-37-1-Spike-in-Growth-in-2021-and-will-Continue-to-Grow-and-Reach-US-508-8-Billion-by-2027.html
[14] https://www.nasdaq.com/articles/iomt-meets-new-healthcare-needs%3A-3-medtech-trends-to-watch-2020-11-27
[15] https://go.forrester.com/blogs/predictions-2021-technology-diversity-drives-iot-growth/
[16] https://www.prnewswire.com/news-releases/state-of-the-edge-forecasts-edge-computing-infrastructure-marketworth-700-billion-by-2028-300969120.html
[17] https://news.crunchbase.com/news/for-health-tech-startups-data-is-their-lifeline-now-more-than-ever/ 
[18] https://www.gartner.com/en/newsroom/press-releases/2020-05-21-gartner-says-50-percent-of-us-healthcare-providers-will-invest-in-rpa-in-the-next-three-years
[19] https://www.idc.com/getdoc.jsp?containerId=prUS46794720 
[20] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-great-acceleration-in-healthcare-six-trends-to-heed 
[21] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-next-wave-of-healthcare-innovation-the-evolution-of-ecosystems 
[22] https://www.cbinsights.com/research/internet-of-medical-things-5g-edge-computing-changing-healthcare/
[23] https://siliconangle.com/2020/12/08/future-state-edge-computing/
[24] https://www.idc.com/getdoc.jsp?containerId=prUS46878020
[25] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/telehealth-a-quarter-trillion-dollar-post-covid-19-reality
[26] https://go.forrester.com/blogs/will-virtual-care-stand-the-test-of-time-if-youre-asking-the-question-its-time-to-catch-up/
[27] https://www.computerworld.com/article/3529427/how-iot-is-becoming-the-pulse-of-healthcare.html

 

What are Time Series Database Use Cases?

What do self-driving cars, smart homes, autonomous stock/crypto trading algorithms, or energy sensor systems have in common? These applications are all based on a form of data that measures how things change over time. It’s called time-series data and it plays a very important role in our lives today.

Accordingly, time-series databases have also become a hot topic.


What is a time-series database?

A time-series database (TSDB) can be defined simply as a database optimized for storing and using time-stamped or time-series data. You don’t need a TSDB to work with time-series data. Any relational or NoSQL database or key-value store will do, e.g. MongoDB or Redis. However, when dealing with time-series data (e.g. temperature, air pressure, or car velocity data), a TSDB makes your life as a developer a hell of a lot easier.

Indeed, the two main reasons why TSDBs are the fastest-growing category of databases are usability and scalability. A purpose-built time-series database typically includes common functions of time-series data analysis, which is convenient when working with time-series data. And because time-series sources continually produce new entries, data grows quickly; with high-frequency data or many time-series sources, data ingestion soon becomes a challenge. Time-series databases are optimized to scale well for time-series data, with time as the common denominator, and outperform databases without specific time-series optimizations. This is why more and more people are adopting time-series databases for a variety of use cases.
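To make the "common functions of time-series data analysis" concrete, here is a minimal, database-agnostic sketch of one of them, windowed aggregation (downsampling), in plain Python. The function name and data layout are illustrative and not any particular TSDB's API:

```python
from collections import defaultdict

def window_mean(points, window_seconds):
    """Group (timestamp, value) points into fixed time windows and average each window.

    `points` is an iterable of (unix_timestamp, value) tuples; the result maps
    each window start time to the mean of the values falling into that window.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        bucket_start = ts - (ts % window_seconds)  # align to window boundary
        buckets[bucket_start].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# Temperature readings sampled every 15 seconds, aggregated per minute:
readings = [(0, 20.0), (15, 21.0), (30, 21.0), (45, 22.0), (60, 24.0), (75, 26.0)]
print(window_mean(readings, 60))  # one averaged value per 60-second window
```

A purpose-built TSDB provides this kind of operation (and much more, such as retention and interpolation) out of the box and executes it efficiently at scale, instead of leaving it to application code.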

What are time-series database use cases?


Monitoring sensor data 

One use case is the monitoring of sensor data for safety measurements, predictive maintenance, or assistance functions. For example, a car stores and uses all kinds of sensor data, like tyre pressure and surrounding temperature and humidity, for driver assistance and maintenance support. An aircraft monitors gravity and aerodynamic parameters to reassure pilots that everything is alright – or to alert them that something has gone wrong. In fact, a Boeing aircraft creates on average half a terabyte of data per flight, most of which is time-series data. [1]


Tracking assets

Tracking assets is an ideal use case for a time-series database, as you constantly want to monitor where assets are, e.g. the cars of a fleet or any goods you might be stocking or shipping. These applications typically include unique vehicle or asset IDs, GPS coordinates, and additional metadata per timestamp. Apart from keeping track of the assets in real time, you can also use the data for logistics, e.g. to optimize your stocking and delivery processes.
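The record shape described above (asset ID, GPS coordinates, and metadata per timestamp) and a typical "latest position" query can be sketched in plain Python. The type and field names are illustrative assumptions, not a specific product's API:

```python
from dataclasses import dataclass, field

@dataclass
class AssetPosition:
    asset_id: str   # unique vehicle / asset ID
    ts: int         # unix timestamp of the reading
    lat: float      # GPS latitude
    lon: float      # GPS longitude
    meta: dict = field(default_factory=dict)  # additional metadata per timestamp

def latest_positions(records):
    """Return the most recent known position per asset (a common fleet query)."""
    latest = {}
    for rec in records:
        if rec.asset_id not in latest or rec.ts > latest[rec.asset_id].ts:
            latest[rec.asset_id] = rec
    return latest

fleet = [
    AssetPosition("truck-1", 100, 48.13, 11.58),
    AssetPosition("truck-2", 100, 52.52, 13.40),
    AssetPosition("truck-1", 160, 48.20, 11.60, {"speed_kmh": 80}),
]
print({aid: (p.lat, p.lon) for aid, p in latest_positions(fleet).items()})
```

In a TSDB, this "last value per series" lookup is typically a built-in, indexed query rather than a full scan over all records, which matters once fleets and sampling rates grow.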


Analyzing and predicting shopping behavior

Similarly, many e-commerce systems store all information about an item: product inventory, logistics data, any available environmental data, transaction amounts, all items of the purchased shopping cart, payment data, order information, etc. In this case, a TSDB is used to collect these large amounts of data and analyze them quickly, e.g. to determine what to recommend to customers next, optimize the inventory, or predict future shopping behavior.

What are the most popular time series databases?

Well, here is our list of popular / established time series databases to use in 2020 to get you started:

  • InfluxDB: an open-source time series database, written in Go and optimized for high-availability storage and retrieval of time series data for operations monitoring, application metrics, IoT sensor data, and real-time analytics
  • KairosDB: a fast distributed scalable time series database written on top of Cassandra. 
  • Kdb+: a column-based relational time series database with a focus on applications in the financial sector.
  • Objectbox TS: superfast object persistence with time-series data on the edge. Collect, store, and query time-series data on the edge and sync selective data to / from a central location on-premise or in the cloud as needed.
  • TimescaleDB: an open-source database designed to make SQL scalable for time-series data. It is engineered up from PostgreSQL and packaged as a PostgreSQL extension with full SQL support.

For an overview of time-series databases currently available for production use, see DB-Engines. The Database of Databases is also a good resource if you are deeply interested in the database landscape; it is more extensive, but it includes any database regardless of its level of support or whether it is still maintained, including hobby projects.


What do you do when you have more than just time-series data?

Typically, a time-series database is not well suited to modeling non-time-based data. Therefore, many companies choose to implement two databases. This increases overhead and disk usage, and is especially impractical on edge devices.

Time Series + Object-Oriented Data Persistence

By storing and processing both time-series data and objects, developers can collect complex datasets and combine them with time-series data. Combining these data types gives a more complete understanding of and context for the data: not just what happens over time, but also which other factors affect the results.

The best option is a robust object-oriented database solution that lets you model your data in objects, reflecting the real world of your use case, and is on top optimized for time-series data. You can model your world in objects and combine this with the power of time-series data to identify patterns in your data. If this is a database optimized for restricted devices and edge computing, you can even use this data in real time and on the device. By combining time-series data with more complex data types, an object time-series edge database can empower new use cases on the edge, based on a fast and easy all-in-one data persistence solution.

Still have questions? Feel free to contact us here!

—–

[1] Søren Kejser Jensen, Torben Bach Pedersen, and Christian Thomsen, “Time Series Management Systems: A Survey”, IEEE.

What Drives Edge Computing?

Data is exploding in every respect: in data volume, data velocity, and data variety (the 3 Vs). One driver of this phenomenon is the growing number of mobile and IoT devices and, thus, data sources. Making this data useful is one of the driving forces behind the adoption of edge computing. New use cases rely not only on using this data, but also on how quickly this ever-growing data can be used. Several practical challenges arising from this growing data volume drive the adoption of edge computing:

New Use Cases Drive Edge Computing


Bandwidth Limitations

The existing network infrastructure cannot support sending all the data to the cloud. Particularly in urban areas, the concentration of devices and data overburdens existing infrastructure. While 5G promises some relief, it is no panacea. First of all, if you want to implement your IoT project now, 5G is not yet widely available, and many questions about 5G remain, e.g. pricing. Moreover, as the number of devices and the amount of data grow ever faster, it is already clear that data volumes will outpace what 5G can support. Edge computing will be an important technology alongside 5G to enable IoT.

Fast Data Requirements  

Response-time requirements are growing at the same time as data volumes are increasing. Sending data to the cloud for computation and storage means higher application latency and dependence on the network, which cannot guarantee response rates. Edge computing delivers significantly faster data access and can guarantee response times. Use cases that need speedy or guaranteed responses therefore cannot rely on cloud computing – for example, driver assistance, where every millisecond counts, or factory floors, where downtimes are too costly.

Sustainability

Sending data to the cloud and storing it there is inefficient and therefore costly – not only in plain euros, but with regard to CO2 emissions too. The distance the data needs to travel requires hardware, connectivity, and electric power. Sending data unnecessarily back and forth is therefore wasteful behaviour and burdens the environment needlessly. With growing data volumes, that burden is growing. In fact, analysts predict that cloud computing data centers will consume as much as 21% of total global energy by 2030. [1]

To scale your prototype, you need to move to the edge

At the start of an IoT project, quick prototyping, testing, and piloting of early iterations of an application’s functionality can effectively be done in the cloud. However, in production environments, when applications scale, it is often hard or impossible to keep cloud costs under control at scale, making the business unviable. Then it is time to move to the edge.

At the same time, decreasing hardware costs and sizes are enabling more complex local computing, reducing the need for additional cloud usage. For example, AI and ML are increasingly done at the edge, including model training.


Data accessibility and Smart Syncing

Today’s successful businesses require a smarter approach to data management and integration. Data synchronization increases operational efficiencies, saving time and resources by eliminating redundant data transfer. With data synchronization, only predefined, useful parts of a data set are sent to a central instance. This means that while large volumes of data can be collected and analyzed locally, not all of this data is sent to and saved in the cloud. This reduces the impact on bandwidth, utilizes the local hardware resources for fast guaranteed response times, and keeps project cloud costs low – ultimately creating a more sustainable and efficient model of data architecture, enabling long term project scalability. 
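The selective-sync idea described above, sending only predefined, useful parts of a data set to the central instance, can be sketched as a simple filter step before upload. This is plain Python; the field names and the sync rule are illustrative assumptions, not a specific product's API:

```python
# Selective sync: the device keeps full-resolution records locally, but only a
# predefined subset of records and fields is forwarded to the central instance.
SYNCED_FIELDS = {"device_id", "ts", "alert"}  # illustrative sync rule

def select_for_sync(records):
    """Keep only alert records, stripped down to the predefined sync fields."""
    return [
        {k: v for k, v in rec.items() if k in SYNCED_FIELDS}
        for rec in records
        if rec.get("alert")  # e.g. only sync anomalous readings
    ]

local = [
    {"device_id": "s1", "ts": 1, "temp": 20.1, "alert": False},
    {"device_id": "s1", "ts": 2, "temp": 93.4, "alert": True},
]
print(select_for_sync(local))  # only the alert record, without the raw reading
```

A production sync solution additionally handles conflict resolution, retries, and delta encoding, but the bandwidth saving comes from exactly this principle: most raw data never leaves the device.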

ObjectBox’s database technology enables companies to persist and use data on edge devices faster than any alternative on the market. It supports networks of edge devices working without a central instance, opening up even more new use cases.

How Building Green IoT Solutions on the Edge Can Help Save Energy and CO2

The Internet of Things (IoT) has huge potential to reduce carbon emissions, as it enables new, more efficient and sustainable ways of operating, living, and working. [1] However, IoT’s huge and growing electricity demand is a challenge. This demand is primarily due to the transmission and storage of data in cloud data centers. [2] While data center efficiency and the use of green energy will reduce the CO2 emissions of this practice, they do not address the problem directly. [3]


With ObjectBox, we address this unseen and fast-growing CO2 source at the root: ObjectBox empowers edge computing, reducing the volume of data transmitted to central data storage while at the same time increasing data transmission and storage efficiency. [4] We’ve talked before about how edge computing is necessary for a sustainable future; below we dive deeper into the numbers. TL;DR: ObjectBox enables companies to cut the power consumption of their IoT applications, and thus their emissions, by 50-90%. For 2025, the potential impact of ObjectBox is a carbon emission reduction of 594 million metric tons (see calculations below).

How ObjectBox’ Technology Reduces Overall Data Transmission

ObjectBox reduces data transmission in two ways: 1. ObjectBox reduces the need for data transmission; 2. ObjectBox makes data transmission more efficient. ObjectBox’s database solution allows companies to build products that store and process data on edge devices and work with that data offline as well as online. This not only improves performance and customer experience, it also reduces the overall volume of data that is sent to the cloud, and thus the energy needed to transfer the data and store it there. ObjectBox’s synchronization solution makes it easy for companies to transmit only the data that needs to be transmitted through 1) selective two-way syncing and 2) differential delta syncing. Synchronizing select data reduces the energy that would otherwise be required to transmit all data to the cloud.

We have demonstrated in exemplary case studies that ObjectBox can reduce total data transmissions by 70-90%, depending on the case. There will, however, typically be value in transmitting some parts of the data to a central data center (cloud); ObjectBox Sync combines efficient compression based on standard and proprietary edge compression methods to keep this data small. ObjectBox also has very little overhead. Comparing the transmission of the same data sets, ObjectBox saves 40-60% of transmission data volume through delta syncing and compression, and thus saves the equivalent CO2 emissions for data transmissions. Additional studies support these results and have shown that moving from a centralized to a distributed data structure saves between 32% and 93% of transmission data. [5]


Calculations: How Does ObjectBox Save CO2?

Physically using a device consumes little energy directly; it is the wireless cloud infrastructure in the backend (data center storage and data transmission) that is responsible for the high carbon footprint of mobile phones [6] and IoT devices. Estimates say that IoT devices will produce around 2.8 ZB of data in 2020 (or 2,823,000,000,000 GB) globally. [7] Only a small portion of that data actually gets stored and used; we chose a conservative estimate of 5% [8] (141,150,000,000 GB), and of that portion, 90% is transferred to the cloud [9] (127,035,000,000 GB). Transferring 1 GB of data to the cloud and storing it there costs between 3 and 7 kWh. [10] Assuming an average of 5 kWh/GB, 127,035,000,000 GB multiplied by 5 kWh results in a total energy expenditure of 635,175,000,000 kWh. Depending on how the energy is generated, CO2 emissions vary; we use a global average of 0.475 kg CO2 per kWh. [11] In total, this means 301,708,125,000 kg, or roughly 301 million metric tons, of CO2 produced to transfer data to the cloud and store it there in 2020.

Projections for 2025 have data volumes as high as 79.4 ZB. [12] Following the same calculation as above, IoT devices would be responsible for roughly 8.5 billion metric tons of CO2 in 2025.* We estimate that using ObjectBox can cut the CO2 caused by data transmission and data centers by 50-90% by keeping the majority of data on the device and transmitting data efficiently. It will take time for ObjectBox to enter the market, so assuming a 10% market saturation by 2025 and an average energy reduction of 70%, using ObjectBox could cut projected CO2 emissions by 594 million metric tons in 2025.
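The arithmetic behind the 2020 and 2025 figures can be reproduced step by step. The inputs below are exactly the estimates cited in the text (data volume, 5% stored share, 90% cloud share, 5 kWh/GB, 0.475 kg CO2/kWh):

```python
ZB_IN_GB = 1e12  # 1 zettabyte = 10^12 gigabytes

def iot_cloud_co2_kg(data_zb, stored_share=0.05, cloud_share=0.9,
                     kwh_per_gb=5.0, kg_co2_per_kwh=0.475):
    """CO2 in kg for transferring and storing IoT data in the cloud,
    using the estimates cited in the text."""
    gb_to_cloud = data_zb * ZB_IN_GB * stored_share * cloud_share
    return gb_to_cloud * kwh_per_gb * kg_co2_per_kwh

co2_2020 = iot_cloud_co2_kg(2.823)  # ~3.017e11 kg = ~301.7 million metric tons
co2_2025 = iot_cloud_co2_kg(79.4)   # ~8.49e12 kg = ~8.5 billion metric tons
savings_2025 = co2_2025 * 0.10 * 0.70  # 10% market saturation, 70% reduction
print(round(co2_2020 / 1e9, 1), round(co2_2025 / 1e12, 1), round(savings_2025 / 1e9))
```

One metric ton is 1,000 kg, so dividing kilograms by 1e9 gives millions of metric tons; the last expression reproduces the 594 million metric tons quoted above.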

ObjectBox is on a mission to reduce digital waste which unnecessarily burdens bandwidth infrastructure and fills cloud servers, forcing the expansion of cloud farms and in turn, contributing to the pollution of the environment. As our digital world grows, we all need to give some thought to how we should structure our digital environments to optimize and support useful, beneficial solutions, while also keeping them efficient and sustainable. 

*Of course, in that time, the technologies will all be more efficient and thus use less electricity while at the same time CO2 emissions / kWh will have dropped too. Thus, we are aware that this projection is an oversimplification of a highly complex and constantly changing system.

[1] https://www.theclimategroup.org/sites/default/files/archive/files/Smart2020Report.pdf
[2] https://www.iea.org/reports/tracking-buildings/data-centres-and-data-transmission-networks
[3]“Data centres… have eaten into any progress we made to achieving Ireland’s 40% carbon emissions reduction target.” from https://www.climatechangenews.com/2017/12/11/tsunami-data-consume-one-fifth-global-electricity-2025/
[4] https://medium.com/stanford-magazine/carbon-and-the-cloud-d6f481b79dfe
[5] https://www.researchgate.net/publication/323867714_The_carbon_footprint_of_distributed_cloud_storage
[6] https://www.resilience.org/stories/2020-01-07/the-invisible-and-growing-ecological-footprint-of-digital-technology/
[7] https://www.idc.com/getdoc.jsp?containerId=prUS45213219, https://priceonomics.com/the-iot-data-explosion-how-big-is-the-iot-data/, https://www.gartner.com/en/newsroom/press-releases/2018-11-07-gartner-identifies-top-10-strategic-iot-technologies-and-trends, https://www.iotjournaal.nl/wp-content/uploads/2017/02/white-paper-c11-738085.pdf, ObjectBox research
[8] Forrester (https://internetofthingsagenda.techtarget.com/blog/IoT-Agenda/Preventing-IoT-data-waste-with-the-intelligent-edge), Harvard BR (https://hbr.org/2017/05/whats-your-data-strategy), IBM (http://www.redbooks.ibm.com/redbooks/pdfs/sg248435.pdf), McKinsey (https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-internet-of-things-the-value-of-digitizing-the-physical-world)
[9] https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[10] According to the American Council for an Energy-Efficient Economy: 5.12 kWh of electricity / GB of transferred data. According to a Carnegie Mellon University study: 7 kWh / GB. The American Council for an Energy-Efficient Economy concluded: 3.1 kWh / GB.
[11] https://www.iea.org/reports/global-energy-co2-status-report-2019/emissions
[12] https://www.idc.com/getdoc.jsp?containerId=prUS45213219

Time Series & Objects: Using Data on the Edge

Many IoT projects collect both time-series data and other types of data. Typically, this means they run two databases: a time-series database and a traditional database or key-value store. This creates fragmentation and overhead, which is why ObjectBox TS brings together the best of both worlds in one database (DB). ObjectBox TS is a hybrid database: an extremely fast object-oriented DB plus a time-series extension specially optimized for time-series data. In combination with its tiny footprint, ObjectBox is a perfect match for IoT applications running on the edge. The out-of-the-box synchronization takes care of synchronizing selected data sets super efficiently, and it works offline and online, on-premise and in the cloud.


What is time series data?

There are a lot of different types of data used in IoT applications. Time series is one of the most common data types in analytics, high-frequency inspections, and maintenance applications for IIoT / Industry 4.0 and smart mobility. Time-series data tracks data points over time, most often taken at equally spaced intervals. Typical sources are sensor data, events, clicks, temperatures – anything that changes over time.

Why use time series data on the edge?

Time-series data sets are usually collected from many sensors sampling at a high rate, which means that a lot of data is being collected.

For example, if a Raspberry Pi gateway collects 20 data points per second, that typically means 1,200 entries a minute, each measuring e.g. 32 degrees. As temperatures rarely change significantly in short time frames, does all of this data need to go to the cloud? Unless you need to know the exact temperature in a central location every millisecond, the answer is no. Sending all data to the cloud is a waste of resources, causing high cloud costs without providing immediate, real-time insights.
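The reasoning above, keep the raw 20 Hz stream on the device and forward only a compact summary, can be sketched in plain Python (illustrative, not a specific product's API):

```python
def summarize_minute(samples):
    """Reduce one minute of raw sensor readings to min/mean/max before uploading."""
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
        "count": len(samples),
    }

# 20 samples/second for 60 seconds = 1,200 raw temperature readings...
raw = [32.0] * 1200
# ...but a single small summary per minute is enough for the central location:
print(summarize_minute(raw))  # 1,200 values reduced to 4 numbers
```

The raw stream stays available locally for real-time use, while the cloud receives three orders of magnitude less data.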


The Best of Both Worlds: time series + object oriented data persistence

With ObjectBox you aren’t limited to only using time series data. ObjectBox TS is optimized for time series data, but ObjectBox is a robust object oriented database solution that can store any data type. With ObjectBox, model your world in objects and combine this with the power of time-series data to identify patterns in your data, on the device, in real time. By combining time series data with more complex data types, ObjectBox empowers new use cases on the edge based on a fast and easy all-in-one data persistence solution. 

Bring together different data streams for a fusion of data; mix and match sensor data with the ObjectBox time series dashboard and find patterns in your data. On top, ObjectBox takes care of synchronizing selected data between devices (cloud / on-premise) efficiently for you.


Get a complete picture of your data in one place

Use Case: Automotive (Process Optimization)

Most manufacturers, whether they produce cars, food, or utilities, have been optimizing production for a long time. However, there are still many cases and reasons why costly manual processes prevail. One such example is automotive varnish. In some cases, while the inspection is automatic and intelligent, many cars still need to be touched up by hand, because the factors leading to the errors in the paint have not yet been discovered. While a lot of internal expert know-how is available from the factory workers, their gut feeling is typically not enough to adapt production processes.

How can this be improved using time series and object data? 

The cars (objects) are typically already persisted, including all the mass-customization and model information. If all data from the manufacturing site, including sensor data like temperature, humidity, and spray speed (all time-series data), is now persisted and added to each car object, any kind of correlation between production-site variables, individual car properties, and varnish quality can be detected. Over time, patterns will emerge. The factory workers’ gut feeling provides a great starting point for analyzing the data to discover quick wins before long-term patterns can be detected. Over time, AI and automatic learning kick in to optimize the factory setup and reduce the need for paint touch-ups as much as possible.

Use Case: Smart Grids

Utility grid loads shift continually throughout the day, affecting grid efficiency, pricing, and energy delivery. Using smart grids, utility companies can increase efficiency and reliability in real time. To get insights from smart grids, companies need to collect a large volume of data from existing systems. A huge portion of this data is time series, e.g. usage and load statistics. In addition, they incorporate other forms of data, e.g. asset relationship data, weather conditions, and customer profiles. Using visualization and analytical tools, these data types can be brought together to generate business insights and actionable operational goals.

ObjectBox TS: time series with objects

By storing and processing both time series data and objects on the edge, developers can gather complex data sets and get real-time insights, even when offline. Combining these data types gives a fuller understanding of and context for the data: not only what happens over time, but also what other factors could be influencing the results. Using a fast hybrid edge database allows developers to save resources while maintaining speed and efficiency. By synchronizing useful data to the cloud, real-time data can be used both for immediate action and for post-event analysis.

Get in touch with our team to get a virtual demo of ObjectBox TS, or check out the sample GitHub repo to see more about the code.

Introducing: ObjectBox Generator, plus C++ API [Request for Feedback!]

Today we are introducing the ObjectBox Generator to simplify ObjectBox development for more programming languages, starting with C/C++. Additionally, we are releasing a brand-new C++ API that goes hand in hand with the new Generator. Historically, our C API was rather low-level, as it focused on providing the foundation for our Swift and Go APIs. With this release, we want to bring ObjectBox convenience and ease of use to C/C++ developers.

ObjectBox Generator takes over the burden of writing the binding code and data model declaration. Based on a single input file, it generates the code for you, so you can focus on the actual application logic.

Generator Example

ObjectBox lets you handle data as FlatBuffers. For example, you can put and get data objects as FlatBuffers-encoded bytes. To work with FlatBuffers, you need to define a FlatBuffers schema file (.fbs). This file is also the input for the ObjectBox Generator, so everything is defined in a single location.

Let’s say we have a FlatBuffers schema file “task.fbs” with the following content:
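The original listing is not reproduced here; a minimal task.fbs consistent with the Task entity discussed in this post (text plus dates stored as ulong) might look like this, with the exact field names being illustrative:

```
table Task {
    id: ulong;
    text: string;
    date_created: ulong;
    date_finished: ulong;
}
```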

Now, we can tell ObjectBox Generator to use this file to generate C++ sources:
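Assuming the objectbox-generator binary is on your PATH, the invocation could look like this (the flag name follows the Generator's readme; verify it against your installed version):

```
objectbox-generator -cpp task.fbs
```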

This makes the ObjectBox Generator produce the following files:

  • objectbox-model.h: source code that builds the internal data model, which you pass when creating a store.
  • objectbox-model.json: keeps track of internal schema IDs; you don’t need to worry about this except that you should put it in your source control.
  • task-cpp.obx.h: the C++ value structs (data objects), binding code for FlatBuffers and the new Box class.

C++ API Example

Now, let’s use the previously generated code and the new C++ API around the Store and Box classes. A simple CRUD application boils down to a few lines:
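The original code listing is not reproduced here; a sketch of what such a CRUD round trip could look like, assuming the generated headers above and an obx namespace with Store and Box classes (details may differ from the actual API), is:

```cpp
#include "objectbox-model.h"
#include "task-cpp.obx.h"

int main() {
    // Create a store from the generated data model.
    obx::Store store(create_obx_model());
    obx::Box<Task> box(store);

    obx_id id = box.put({.text = "Buy milk"});  // Create
    std::unique_ptr<Task> task = box.get(id);   // Read
    if (task) {
        task->text += " & some bread";
        box.put(*task);                         // Update
    }
    box.remove(id);                             // Delete
    return 0;
}
```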

Note that the generated code is header-only and compatible with the existing ObjectBox C-API, allowing both to be used from the same application. The C and C++ APIs both have unique advantages: the C++ API uses RAII so you do not need to worry about cleaning up, while the C API has additional features, e.g. queries.

Open Source, Docs

The ObjectBox Generator is open source and available on GitHub. The repository comes with a readme file that also serves as documentation. Among other things, you will find the ObjectBox-specific annotations there, which are used in .fbs files to express ObjectBox-specific concerns. For example, in the definition of Task above, we used ulong as the FlatBuffers type to store dates. However, FlatBuffers does not know what a date is, so we use ObjectBox annotations to express this:
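The annotated schema could look like the following, using the triple-slash comment syntax described in the Generator readme (check the repository for the exact annotation spelling):

```
table Task {
    id: ulong;
    text: string;
    date_created: ulong;  /// objectbox:date
    date_finished: ulong; /// objectbox:date
}
```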

For this initial release of the ObjectBox Generator and the public C++ API, we decided to label it version 0.9. We are already very close to a 1.0, but we wanted to gather some feedback before our first major release. Since we can still change the API and smooth out any rough edges you may find, we cannot stress enough how much we welcome and appreciate your feedback at this point. Thank you!