Why Edge Computing is More Relevant in 2021 Than Ever

The world has been forced to digitize faster and to a greater extent in 2020 and 2021. COVID has created the need to remodel how work, socializing, production, entertainment, and supply chains function. Despite decades of digitization efforts, the pandemic has made the remaining digitization challenges plainly apparent. Many companies and countries now realize they have fallen behind, and those that had not yet digitized were hit hardest by the pandemic. [1] With people leaning heavily on online digital solutions, internet infrastructure is at its capacity limit. [2] Accordingly, users are seeing broadband speeds drop by as much as half. [3] In Europe, governments even asked Netflix, Amazon Prime, YouTube and other streaming services to reduce their streaming quality to improve network speed. [4]

These challenges demonstrate the growing need for an alternative to cloud computing, which is an inherently centralized computing paradigm. Edge Computing is a decentralized topology based on keeping data local, at the ‘edge’ of the network, as close to the source as possible. Edge Computing is ideal for applications that are data-intensive, have strict latency requirements, or need to work offline, independent of a cloud connection. Using data on the edge, directly on or near the source of the data, not only increases the efficiency and speed of data use, but also reduces unnecessary network burden and data traffic waste.

Coronavirus accelerates the need to digitize

It was clear even before the outbreak that internet infrastructure was struggling to keep up with growing data volumes. However, the pandemic has made broadband limitations more apparent to everyday users.

Projections estimate that by 2025 there will be more than 20 billion IoT devices [5] and 1.7 MB of data created per second per person. It is slow, expensive, and wasteful to send all of this data to the cloud for storage and processing. This practice overburdens bandwidth and data center infrastructure, and it makes projects expensive and unsustainable. Working with the data locally, on the edge, where it is produced and used, is more efficient than sending everything to the cloud and back. It brings reduced latency, reduced cloud usage and costs, independence from a network connection, more secure data and heightened data privacy – and it even reduces CO2 emissions. Indeed, prior to the pandemic, edge computing was on the strategic roadmap for over 50% of mobility decision makers. [6]

As the world begins to recover from the coronavirus pandemic, digitization efforts will no doubt increase. We will see intelligent systems implemented across industries and value chains, accelerating innovation – and, alongside it, data volumes and the resulting strain on network bandwidth. Edge computing is a key technology to ensure that this digitalization is both scalable and sustainable.

Edge Computing takes the ‘edge’ off bandwidth strain

What is Edge Computing?

With edge computing, data is stored and used on devices at the “edge” of the network – away from centralized cloud servers. Computing on the edge means that data is stored and used locally, on the device, e.g. a smartphone or IoT device. Edge computing delivers faster decision making, local and offline data processing, as well as reduced data transfer to the cloud (e.g. only filtered, computed, extrapolated or interpolated data is sent), which saves both bandwidth and cloud storage costs.
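
To make the “filtered, computed” part concrete, here is a minimal sketch in plain Python (all names and thresholds are invented for illustration): raw readings stay on the device, and only a compact summary would ever be sent to the cloud.

```python
import statistics

# Raw sensor readings stay on the device; only a compact summary leaves it.
raw_readings = [21.3, 21.4, 21.4, 29.8, 21.5, 21.4]  # e.g. one minute of temperatures

def summarize(readings, spike_threshold=5.0):
    """Reduce a window of raw readings to the few values the cloud actually needs."""
    mean = statistics.mean(readings)
    spikes = [r for r in readings if abs(r - mean) > spike_threshold]
    return {"mean": round(mean, 2), "min": min(readings),
            "max": max(readings), "spikes": spikes}

# One small record instead of the full stream:
print(summarize(raw_readings))
# {'mean': 22.8, 'min': 21.3, 'max': 29.8, 'spikes': [29.8]}
```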

The Edge complements the Cloud

Although some might set cloud and edge in competition, the reality is that edge computing and cloud computing are both useful and relevant technologies, each with its own strengths and ideal use cases. Together they can provide the best of both worlds: decentralized local storage and processing that makes efficient use of hardware on the edge, combined with centralized storage and processing of selected data, enabling additional centralized insights, data backups (redundancy), and remote access. To combine the best of both worlds, relevant and useful data must be synchronized between the edge and the cloud in a smart and efficient way.
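
The “smart and efficient” part is largely about deciding which records cross the network at all. A toy sketch of that idea in Python (the record types and the sync rule are invented for illustration; this is not a real sync protocol):

```python
# Every record lives locally; only records matching a sync rule are pushed upstream.
local_store = [
    {"id": 1, "type": "raw_vibration", "value": 0.02},
    {"id": 2, "type": "alert", "value": "bearing temperature high"},
    {"id": 3, "type": "daily_summary", "value": {"avg_load": 0.71}},
]

SYNC_TYPES = {"alert", "daily_summary"}  # the predefined, centrally useful subset

def records_to_sync(store):
    return [r for r in store if r["type"] in SYNC_TYPES]

for record in records_to_sync(local_store):
    print("would upload:", record)  # raw vibration data never leaves the device
```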

Edge computing is an ideal technology to reduce the strain on data centers: functions that truly need a cloud connection get adequate bandwidth, while use cases that benefit from reduced latency and offline functionality are handled on the edge.

The Edge: interface between the Physical and the Digital World

Edge devices handle the interface between the physical world and the cloud, enabling a whole set of new use cases. “Data-driven experiences are rich, immersive and immediate. But they’re also delay-intolerant data hogs”. [8] Such experiences therefore need to happen locally, on the edge. We may see edge computing enabling new forms of remote engagement [9], particularly in a post-corona environment.

Edge devices can be anything from a thermostat or small sensor to a fridge, mobile phone, or car – they are part of our direct physical world and use data from their local environment to enable new use cases. Think self-stocking fridges, self-driving cars, drone-delivered pizzas. In the same way, Edge Computing is the key to the first real-world search engine. I am waiting for it every day: “Hey Google, where are my keys?” Within a location like a house, the concepts and technologies to enable such a real-world search engine are all clear and available – it is just a matter of time and ongoing digitization. The basis will need to be a fast and sustainable edge infrastructure.

Sustainability on the Edge

Centralized data centers consume a lot of energy, produce a lot of carbon emissions, and cause significant electronic waste. [10] While there is a positive trend towards green data centers, an even more sustainable approach is to cut unnecessary cloud traffic, central computation, and storage as much as possible by shifting computation to the edge. Edge Computing strategies that harness the power of already deployed hardware (e.g. smartphones, machines, desktops, gateways) make the solution even more sustainable.

Intelligent Edge: AI and Edge advance hand in hand

The growth of Artificial Intelligence (AI) and the Edge will go hand in hand. As more and more data is generated at the edge of the network, there will be a greater demand for intelligent data processing and structured optimization to reduce raw data loads going to the cloud. [11] Edge AI will have the power to work with data on local devices, keeping data streams more useful and usable. In the near future, Machine Learning applications will have the ability to learn and create unique, localized, decentralized insights on the edge – based on local inputs.
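
As an illustration of local inputs producing local insights, here is a minimal sketch of on-device anomaly detection in Python. A real deployment would run a trained model (possibly on an AI accelerator), but the pattern is the same: infer locally, escalate only the exceptions.

```python
from collections import deque

class EdgeAnomalyDetector:
    """Stand-in for a real on-device model: flags readings that deviate
    strongly from the recent local history – no cloud round trip needed."""

    def __init__(self, window=50, factor=3.0):
        self.history = deque(maxlen=window)
        self.factor = factor

    def observe(self, value):
        anomaly = False
        if len(self.history) >= 10:  # wait for some local context first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5
            anomaly = std > 0 and abs(value - mean) > self.factor * std
        self.history.append(value)
        return anomaly

detector = EdgeAnomalyDetector()
readings = [20.0 + 0.1 * (i % 5) for i in range(100)] + [35.0]
for i, value in enumerate(readings):
    if detector.observe(value):
        print(f"reading #{i} = {value}: anomalous – worth syncing to the cloud")
```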

“With Edge AI, personalization features that we want from the app can be achieved on device. Transferring data over networks and into cloud-based servers allows for latency. At each endpoint, there are security risks involved in the data transfer”. [12] This is part of the reason why the Edge AI software market is forecast to reach a volume of $1.12 billion by 2023. The development of AI accelerators that improve model inferencing on the edge, notably from NVIDIA, Intel, and Google, is helping to make AI on the edge more viable. [13] A fast edge database is a necessary base technology to enable more AI on the edge.

Edge Computing – an answer to Data Privacy concerns and a need for Resilience

As data collection grows in both breadth and depth, there is a stronger need for data privacy and security. Edge computing is one way to tackle this challenge: keeping data where it is produced, locally, makes data ownership clear and the data less likely to be attacked and compromised. If a breach does occur, the compromised data is clearly defined, making notification and subsequent actions manageable. ObjectBox, in its core and as an edge technology, is designed to keep data private, on the devices where it was created, and to share only select data as needed.

The more our private and working lives as well as the larger economy depend on digitalization, the more important it is that systems, underlying computing paradigms as well as networks have strong resilience and security. In computer networking, resilience is the ability to “provide and maintain an acceptable level of service in the face of faults and challenges to normal operation.” [14]

Edge Computing shifts computing workloads – the collection, processing, and storage of data – from central locations (like the cloud) to the edge of the network, onto many individual devices such as cell phones. Accordingly, any strain is distributed across many devices, and the risk of a total breakdown is reduced: if one device fails, the rest keep working. Depending on the setup, the other devices could even compensate for a device that has a problem.

The same applies to security risks: even if the data from one device is compromised, all other data sets are still safe; the loss is thus very limited and clearly defined. Overall, as a complement to the cloud, edge computing provides improved strength and security in local networks around the world. These local infrastructures can relieve the pressure on existing complex dependencies, and in turn make the wider system more resilient and flexible. With edge computing, crisis response can therefore in all likelihood be faster, better informed, and more effective. [15]

Why Corona-Tracking-Apps need to work on the edge

There was initially quite some debate about taking a centralized versus a decentralized approach to Corona-Tracking-Apps. [16] Many people were worried about their data. Edge Computing – storing most of the data locally, on the user’s device – is a great way to avoid unnecessary data sharing and keep data ownership clear. At the same time, the data is much more secure and less likely to be attacked and hacked, as there is little to be gained from any single device. An intelligent syncing mechanism like ObjectBox Sync ensures that the data which needs to be shared is shared in a selective, transparent, and secure way.

The next few years will see big cultural changes in both our personal and professional lives – a portion of those changes will be driven by increased digitalization. Edge computing is an important paradigm to ensure these changes are sustainable, scalable, and secure. Ultimately, we have the chance to rise from this crisis with new insights, new innovation, and a more sustainable future.

1. https://www.netzoekonom.de/2020/04/11/die-oekonomie-nach-corona-digitalisierung-und-automatisierung-in-hoechstgeschwindigkeit/
2. https://www.cnet.com/news/coronavirus-has-made-peak-internet-usage-into-the-new-normal/
3. https://www.nytimes.com/2020/03/26/business/coronavirus-internet-traffic-speed.html
4. https://www.theverge.com/2020/3/27/21195358/streaming-netflix-disney-hbo-now-youtube-twitch-amazon-prime-video-coronavirus-broadband-network
5. https://www.gartner.com/imagesrv/books/iot/iotEbook_digital.pdf
6. https://www.forbes.com/sites/forrester/2019/12/02/predictions-2020-edge-computing-makes-the-leap/#1aba50104201
7. https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
8. https://www.iotworldtoday.com/2020/03/19/ai-at-the-edge-still-mostly-consumer-not-enterprise-market/
9. https://www.accenture.com/us-en/insights/high-tech/edge-processing-remote-viewership
10. https://link.springer.com/article/10.1007/s12053-019-09833-8
11. https://www.forbes.com/sites/cognitiveworld/2020/04/16/edge-ai-is-the-future-intel-and-udacity-are-teaming-up-to-train-developers/#232c8fab68f2
12. https://www.forbes.com/sites/cognitiveworld/2020/04/16/edge-ai-is-the-future-intel-and-udacity-are-teaming-up-to-train-developers/#232c8fab68f2
13. https://www.forbes.com/sites/janakirammsv/2019/07/15/how-ai-accelerators-are-changing-the-face-of-edge-computing/#2c1304ce674f
14. https://en.wikipedia.org/wiki/Resilience_(network)
15. https://www.coindesk.com/how-edge-computing-can-make-us-more-resilient-in-a-crisis
16. https://venturebeat.com/2020/04/13/what-privacy-preserving-coronavirus-tracing-apps-need-to-succeed/

Digital Healthcare – Market, projections, and trends

Note to Readers (2024 Update)
This white paper, originally published in 2020, explores key projections, market dynamics, and trends in digital healthcare. Since its publication, the field has evolved significantly, and the accompanying detailed report has been updated to reflect the state of digital healthcare in 2024. For the most current insights and analysis, please refer to the updated report (also linked at the end of this blog).  

 

If you work in the healthcare industry, you are likely familiar with some uses of IoT devices. According to Gartner (2020), 79% of healthcare providers are already successfully employing IoT solutions. [1] However, this is just the beginning. While the growth of digital health adoption had stalled before COVID-19 [2], the market is picking up speed again. Indeed, Q3 2020 was a record quarter for investments in healthcare companies [3], and the market expects rising investments in healthtech over the next years [4]. Today, underutilized data plays a major role in healthtech innovation [17], and the growing importance of healthcare data for future offerings is evident [5]. Take a look at how analysts from Gartner to Accenture and Forrester expect the market to grow:

The digital healthcare market 2020 and beyond

  • Analysts expect Artificial Intelligence in healthcare to reach $6.6 billion by 2021 (with a 40% CAGR). [6]
  • The Internet of Medical Things (IoMT) market is expected to cross $136 billion by 2021. [11]
  • Analysts expect the healthcare wearable market to reach a volume of $27 billion by 2023 (with a 27.9% CAGR). [7]
  • The IoT industry is projected to be worth $6.2 trillion by 2025, and around 30% of that market will come from healthcare. [8]
  • Analysts expect the global Medical Health Apps market to grow to $236 billion by 2026, reflecting a shift towards value-based care. [9]
  • The global digital health market is projected to reach $510.4 billion by 2025 (with a 29% CAGR). [10]

The healthcare industry has been struggling with shrinking payments and cost optimizations for years. [18] Fueled by the need to adapt in light of the COVID pandemic, digital technologies are now rapidly bringing extensive change to this struggling industry. Data is moving to the center of this changing ecosystem and harbors both risks and opportunities of a new dimension. [21] The basic architecture and infrastructure needed to have data reliably, securely, and quickly available where it is needed will be decisive for the success or failure of digital healthcare solutions. [17] [21]

We recommend keeping an eye on the following five trends:

The 5 biggest digital healthcare trends to watch

Artificial Intelligence (AI)  

Accenture estimates that AI applications can help save up to $150 billion annually for the US healthcare economy by 2026. [6] It is therefore no wonder that the healthcare sector is expected to be among the top five industries investing in AI in the next couple of years. [19] The three AI applications with the greatest near-term value in healthcare are: 1. robot-assisted surgery ($40 billion), 2. virtual nursing assistants ($20 billion), and 3. administrative workflow assistance ($18 billion).

Big Data / Analytics

The goal of big data analytics solutions is to improve the quality of patient care and the overall healthcare ecosystem. The global healthcare Big Data Analytics market is predicted to reach $39 billion by 2025. [12] The main areas of growth are medical data generation in the form of Electronic Health Records (EHR), biometric data, and sensor data.

Internet of Medical Things (IoMT)

IoMT is expected to grow to $508.8 billion by 2027. [13] According to Gartner, 79% of healthcare providers are already using IoT in their processes. [27] During COVID, IoMT devices have been used to increase safety and efficiency in healthcare, e.g. by providing and automating clinical assistance and treatment for infected patients to lessen the burden on specialists. Future applications, like augmented reality glasses that assist during surgery, are leading to a stronger focus on IoMT-centric investments. [14]

Telehealth / Telemedicine

Telecommunications technology enables doctors to diagnose and treat patients remotely. Consumer adoption of telehealth skyrocketed in 2020, and McKinsey believes that up to $250 billion of current US healthcare spend could potentially be virtualized. [25] Many patients also view telehealth offerings more favorably and – having had good experiences – are planning to continue using telehealth in the future. [26] Unsurprisingly, telemedicine stocks are growing rapidly as well. [14]

Edge Computing

Edge computing is a technological megashift happening in computing. [23] Instead of pushing data to the cloud to be computed, processing is done locally, on ‘the edge’. [15] Edge Computing is one of the key technologies to make healthcare more connected, secure, and efficient. [22] Indeed, the digital healthcare ecosystem of the future depends on an infrastructure layer that makes health data accessible when and where it is needed (data liquidity). [21] Accordingly, IDC expects the worldwide edge computing market to reach $250.6 billion in 2024 (with a 12.5% CAGR) [24], with healthcare identified as one of the leading industries that will adopt edge computing. [16]

The healthcare market is in the middle of a fast digital transformation process. Drivers such as COVID, growing IoT adoption in healthcare, and underlying social mega-trends are pushing digital healthcare growth to new heights. The digital healthcare industry faces many challenges, both technical and regulatory; at the same time, the market is offered a wealth of opportunities.

References

[1] https://www.computerworld.com/article/3529427/how-iot-is-becoming-the-pulse-of-healthcare.html / https://www.gartner.com/en/documents/3970072
[2] https://www.accenture.com/us-en/insights/health/leaders-make-recent-digital-health-gains-last
[3] https://sifted.eu/articles/europes-healthtech-industry-2020/
[4] https://www.mobihealthnews.com/news/emea/health-tech-investments-will-continue-rise-2020-according-silicon-valley-bank
[5] https://news.crunchbase.com/news/for-health-tech-startups-data-is-their-lifeline-now-more-than-ever/
[6] https://www.accenture.com/us-en/insight-artificial-intelligence-healthcare%C2%A0
[7] https://www.grandviewresearch.com/industry-analysis/wearable-medical-devices-market
[8] https://www.marketsandmarkets.com/PressReleases/iot-healthcare.asp
[9] https://www.grandviewresearch.com/press-release/global-mhealth-app-market
[10] https://www.globenewswire.com/news-release/2020/05/23/2037920/0/en/Global-Digital-Health-Market-was-Valued-at-USD-111-4-billion-in-2019-and-is-Expected-to-Reach-USD-510-4-billion-by-2025-Observing-a-CAGR-of-29-0-during-2020-2025-VynZ-Research.html
[11] https://www2.stardust-testing.com/en/the-digital-transformation-trends-and-challenges-in-healthcare
[12] https://www.prnewswire.com/news-releases/healthcare-analytics-market-size-to-reach-usd-40-781-billion-by-2025–cagr-of-23-55—valuates-reports-301041851.html#:~:text=Healthcare%20Big%20Data%20Analytics%20Market,13.6%25%20during%202019%2D2025 
[13] https://www.globenewswire.com/news-release/2020/11/25/2133473/0/en/Global-Digital-Health-Market-Report-2020-Market-is-Expected-to-Witness-a-37-1-Spike-in-Growth-in-2021-and-will-Continue-to-Grow-and-Reach-US-508-8-Billion-by-2027.html
[14] https://www.nasdaq.com/articles/iomt-meets-new-healthcare-needs%3A-3-medtech-trends-to-watch-2020-11-27
[15] https://go.forrester.com/blogs/predictions-2021-technology-diversity-drives-iot-growth/
[16] https://www.prnewswire.com/news-releases/state-of-the-edge-forecasts-edge-computing-infrastructure-marketworth-700-billion-by-2028-300969120.html
[17] https://news.crunchbase.com/news/for-health-tech-startups-data-is-their-lifeline-now-more-than-ever/ 
[18] https://www.gartner.com/en/newsroom/press-releases/2020-05-21-gartner-says-50-percent-of-us-healthcare-providers-will-invest-in-rpa-in-the-next-three-years
[19] https://www.idc.com/getdoc.jsp?containerId=prUS46794720 
[20] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-great-acceleration-in-healthcare-six-trends-to-heed 
[21] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-next-wave-of-healthcare-innovation-the-evolution-of-ecosystems 
[22] https://www.cbinsights.com/research/internet-of-medical-things-5g-edge-computing-changing-healthcare/
[23] https://siliconangle.com/2020/12/08/future-state-edge-computing/
[24] https://www.idc.com/getdoc.jsp?containerId=prUS46878020
[25] https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/telehealth-a-quarter-trillion-dollar-post-covid-19-reality
[26] https://go.forrester.com/blogs/will-virtual-care-stand-the-test-of-time-if-youre-asking-the-question-its-time-to-catch-up/
[27] https://www.computerworld.com/article/3529427/how-iot-is-becoming-the-pulse-of-healthcare.html

 

What Drives Edge Computing?

Data is exploding in every respect: in data volume, data velocity, and data variety (the 3 Vs). One driver of this phenomenon is the growing number of Mobile and IoT devices and thus, data sources. Making this data useful is one of the driving forces behind the adoption of Edge Computing. New use cases depend not only on this data, but also on how quickly and easily this ever-growing data can be used. Several practical challenges arising from this growing data volume drive the adoption of Edge Computing:

New Use Cases Drive Edge Computing

Bandwidth Limitations

The existing network infrastructure cannot support sending all the data to the cloud. Particularly in urban areas, the concentration of devices and data overburdens the existing infrastructure. While 5G promises some relief, it is no panacea. First of all, if you want to implement your IoT project now, 5G is not yet widely available, and many questions about 5G remain, e.g. pricing. Moreover, as the number of devices and the volume of data grow ever faster, it is already clear that data volumes will outpace what 5G can support. Edge Computing will be an important technology alongside 5G to enable IoT.

Fast Data Requirements  

Response time requirements are growing at the same time as data volumes are increasing. Sending data to the cloud for computation and storage means applications respond with higher latency and depend on the network, which cannot guarantee response rates. Edge computing delivers significantly faster data and guaranteed response times. Use cases that need speedy or guaranteed responses cannot rely on cloud computing – for example, driver assistance, where every millisecond counts, or factory floors, where downtime is too costly.

Sustainability

Sending data to the cloud and storing it there is inefficient and therefore costly – not only in plain euros, but with regard to CO2 emissions too. The distance the data travels requires hardware, connectivity, and electric power. Sending data unnecessarily back and forth is therefore wasteful behaviour and burdens the environment needlessly – and with growing data volumes, that burden is growing. In fact, analysts predict that cloud computing data centers will consume as much as 21% of total global energy by 2030. [1]

To scale your prototype, you need to move to the edge

At the start of IoT projects, quick prototyping, testing, and piloting of early iterations of an application’s functionality can effectively be done in the cloud. However, in production, when applications scale, it is often hard or impossible to keep cloud costs under control, making the business unviable. Then it is time to move to the edge.

At the same time, decreasing hardware costs and sizes are enabling more complex local computing, meaning there is less need for additional cloud usage. For example, AI and ML are increasingly done at the edge, including model training.

Data accessibility and Smart Syncing

Today’s successful businesses require a smarter approach to data management and integration. Data synchronization increases operational efficiencies, saving time and resources by eliminating redundant data transfer. With data synchronization, only predefined, useful parts of a data set are sent to a central instance. This means that while large volumes of data can be collected and analyzed locally, not all of this data is sent to and saved in the cloud. This reduces the impact on bandwidth, utilizes the local hardware resources for fast guaranteed response times, and keeps project cloud costs low – ultimately creating a more sustainable and efficient model of data architecture, enabling long term project scalability. 
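
One common way to implement “only predefined, useful parts” is delta synchronization: remember what has already been synced and transfer only what changed since then. A toy sketch of the idea (invented structure, no real network code):

```python
import time

# Toy delta sync: keep a high-water mark and upload only records
# created or changed since the last successful sync.
last_synced_at = 0.0
local_records = []  # each record: {"data": ..., "updated_at": unix_time}

def save(data):
    local_records.append({"data": data, "updated_at": time.time()})

def sync():
    global last_synced_at
    pending = [r for r in local_records if r["updated_at"] > last_synced_at]
    for r in pending:
        print("upload:", r["data"])  # in reality: a batched network call
    last_synced_at = time.time()     # advance the high-water mark
    return len(pending)

save({"sensor": "A", "avg": 21.7})
save({"sensor": "B", "avg": 19.2})
sync()  # uploads two records
save({"sensor": "A", "avg": 21.9})
sync()  # uploads only the one new record
```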

ObjectBox’ current database technology enables companies to persist and use data on edge devices faster than any alternative on the market. It supports networks of edge devices that work without a central instance, opening up even more new use cases.

Time Series & Objects: Using Data on the Edge

Many IoT projects collect both time series data and other types of data. Typically, this means they will run two databases: a time-series database and a traditional database or key/value store. This creates friction and overhead, which is why ObjectBox TS brings together the best of both worlds in one database (DB). ObjectBox TS is a hybrid database: an extremely fast object-oriented DB plus a time-series extension, specially optimized for time series data. In combination with its tiny footprint, ObjectBox is a perfect match for IoT applications running on the edge. The out-of-the-box synchronization takes care of synchronizing selected data sets super efficiently, and it works offline and online, on-premise or in the cloud.

What is time series data?

There are a lot of different types of data used in IoT applications. Time series is one of the most common data types in analytics, high-frequency inspections, and maintenance applications for IIoT / Industry 4.0 and smart mobility. Time series data tracks data points over time, most often taken at equally spaced intervals. Typical data sources are sensor data, events, clicks, temperature – anything that changes over time.
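
In code, a time series point is simply a value bound to a timestamp. A minimal sketch (the type and names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One time series data point: a value anchored to a moment in time."""
    timestamp: float  # seconds since epoch
    value: float      # e.g. temperature in °C

# Equally spaced sampling, e.g. one reading per second:
INTERVAL_S = 1.0
series = [SensorReading(timestamp=t * INTERVAL_S, value=20.0 + 0.1 * t)
          for t in range(5)]
for point in series:
    print(point.timestamp, point.value)
```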

Why use time series data on the edge?

Time-series data sets are usually collected from many sensors sampling at a high rate – which means that a lot of data is being collected.

For example, if a Raspberry Pi gateway collects 20 data points per second, that typically means 1,200 entries a minute, each measuring e.g. 32 degrees. As temperatures rarely change significantly in short time frames, does all of this data need to go to the cloud? Unless you need to know the exact temperature in a central location every millisecond, the answer is no. Sending all data to the cloud is a waste of resources, causing high cloud costs without providing immediate, real-time insights.
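
A worked sketch of that arithmetic and the resulting downsampling (plain Python, names invented):

```python
# The example above: 20 readings/second = 1,200 entries per minute.
READINGS_PER_MINUTE = 20 * 60

def minute_summary(readings):
    """Collapse one minute of raw readings into a single cloud-bound record."""
    return {"min": min(readings), "max": max(readings),
            "avg": round(sum(readings) / len(readings), 2)}

minute = [32.0] * READINGS_PER_MINUTE  # temperature barely moves
print(minute_summary(minute))          # {'min': 32.0, 'max': 32.0, 'avg': 32.0}
# 1,200 raw entries stay on the Raspberry Pi; one summary record goes upstream –
# a 1200:1 reduction in cloud traffic. Exceptions (e.g. a sudden spike) can
# still be forwarded immediately.
```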

The Best of Both Worlds: time series + object oriented data persistence

With ObjectBox you aren’t limited to using only time series data. ObjectBox TS is optimized for time series data, but ObjectBox is a robust object-oriented database solution that can store any data type. With ObjectBox, you can model your world in objects and combine this with the power of time-series data to identify patterns in your data, on the device, in real time. By combining time series data with more complex data types, ObjectBox empowers new use cases on the edge based on a fast and easy all-in-one data persistence solution.
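
The underlying data-modeling idea – domain objects with their own time series attached – looks roughly like this in plain Python (an illustration of the concept, not the actual ObjectBox API):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Machine:
    """A domain object, enriched with its own time series."""
    serial: str
    model: str
    temperature: List[Tuple[float, float]] = field(default_factory=list)  # (timestamp, °C)

m = Machine(serial="M-1042", model="Press-3")
for t in range(6):
    m.temperature.append((t * 10.0, 60.0 + 4.0 * t))  # one reading every 10 seconds

# Combine object attributes with time series: which readings exceeded 70 °C?
hot = [(ts, v) for ts, v in m.temperature if v > 70.0]
print(m.model, m.serial, "over-temperature events:", hot)
```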

Bring together different data streams for a fusion of data; mix and match sensor data with the ObjectBox time series dashboard and find patterns in your data. On top of that, ObjectBox takes care of synchronizing selected data between devices (cloud / on-premise) efficiently for you.

Get a complete picture of your data in one place

Use Case: Automotive (Process Optimization)

Most manufacturers – whether they produce cars, food, or utilities – have already been optimizing production for a long time. However, there are still many cases and reasons why costly manual processes prevail. One such example is automotive varnish. In some cases, while the inspection is automatic and intelligent, a lot of cars need to be touched up by hand, because the factors leading to the errors in the paint have not yet been discovered. While there is a lot of internal expert know-how available from the factory workers, their gut feel is typically not enough to adapt production processes.

How can this be improved using time series and object data? 

The cars (objects) are typically already persisted, including all the mass customization and model information. If all data from the manufacturing site – temperature, humidity, spray speed (all time-series data) – is now persisted as well and attached to each car object, correlations between production site variables, individual car properties, and varnish quality can be detected. Over time, patterns will emerge. The gut feel of the factory workers provides a great starting point for analyzing the data and discovering quick wins before long-term patterns can be detected. Over time, AI and automatic learning kick in to optimize the factory setup and reduce the need for paint touch-ups as much as possible.
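
A toy sketch of the correlation step (plain Python, with a small invented data set purely for illustration):

```python
# Toy version of the varnish example: each car object carries the production
# conditions recorded while it was painted, plus the inspection outcome.
cars = [
    {"model": "A", "humidity": 40, "spray_speed": 1.0, "touch_up": 0},
    {"model": "A", "humidity": 62, "spray_speed": 1.0, "touch_up": 1},
    {"model": "B", "humidity": 45, "spray_speed": 1.1, "touch_up": 0},
    {"model": "B", "humidity": 70, "spray_speed": 0.9, "touch_up": 1},
    {"model": "A", "humidity": 55, "spray_speed": 1.0, "touch_up": 1},
    {"model": "B", "humidity": 38, "spray_speed": 1.0, "touch_up": 0},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

humidity = [c["humidity"] for c in cars]
touch_up = [c["touch_up"] for c in cars]
print("humidity vs. touch-up correlation:", round(pearson(humidity, touch_up), 2))
# A strongly positive value would point the investigation towards humidity.
```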

Use Case: Smart Grids

Utility grid loads shift continually throughout the day, affecting grid efficiency, pricing, and energy delivery. Using Smart Grids, utility companies can increase efficiency and reliability in real time. In order to get insights from Smart Grids, companies need to collect a large volume of data from existing systems. A huge portion of this data is time series, e.g. usage and load statistics. On top of that, they incorporate other forms of data, e.g. asset relationship data, weather conditions, and customer profiles. Using visualization and analytical tools, these data types can be brought together to generate business insights and actionable operative goals.

ObjectBox TS: time series with objects

By storing and processing both time series data and objects on the edge, developers can gather complex data sets and get real-time insight, even when offline. Combining these data types gives a fuller understanding of and context for the data – not only what happens over time, but what other factors could be influencing results. Using a fast hybrid edge database allows developers to save resources while maintaining speed and efficiency. By synchronizing useful data to the cloud, real-time data can be used both for immediate action and for post-event analysis.

Get in touch with our team to get a virtual demo of ObjectBox TS, or check out the sample GitHub repo to see more about the code.

What is Edge Computing?

Today, over 90 percent of enterprise data is sent to the cloud. In the coming years, this number will drop to just 25 percent, according to Gartner. The rest of the data is not going anywhere. It is being stored and used locally, on the device it was created on – e.g. cars, trains, phones, machines, cameras. This is Edge Computing – and since the Corona outbreak it is more relevant than ever. Edge Computing is a topology rather than a technology, and it spans devices as well as industries.

Obviously, this is cutting the discussion short. With edge consortia springing up like mushrooms, there is no lack of overlapping definitions around the terms Edge Computing and Fog Computing.

Benefits of Edge Computing put simply

The benefits of edge computing stem from its underlying paradigm: Edge Computing is a decentralized computing architecture as opposed to a centralized computing model (today typically cloud computing).

  • Data ownership / privacy: With Edge Computing, data can stay where it is produced, used, and where it belongs (with the user / on the edge devices)
  • Networking costs / cloud costs: Reducing data transfers and central storage reduces networking and cloud costs significantly
  • Bandwidth constraints: Bandwidth is limited, and data volumes are growing much faster than bandwidth can be expanded (e.g. with 5G networks); this puts a hard stop on many applications, which can be overcome by building on the edge
  • Application / data speed: Processing on the device – instead of sending data to the cloud and waiting for an answer – is much faster (latency)
  • Offline capability: With Edge Computing, devices operate independently of a network / cloud connection, so the application always works, and the data parts that are needed centrally can be synced when needed, convenient, and connected
  • The decentralized edge: Edge devices can communicate with each other directly. This decentralized Edge Computing approach is more efficient (usually translating to speed) due to the short distances and because the power and information of several devices can be combined (for more info see: ultra low latency networks, peer-to-peer, M2M actions). On top of that, it adds resilience.
  • Security: A central instance holding millions of data sets is more attractive to hack; the data transfer also adds an additional vulnerability.

From mist to fog to edge to cloud – a short overview

To bring some light into the terminology mess: The terms “mist computing” and “cloud computing” constitute the ends of a continuum. In our definition, the edge covers everything from cloud to any end device, however tiny and limited it may be. In this definition, there really is only the cloud and the edge.

However, some authors additionally use the terms fog computing and mist computing.

Mist computing covers the computing that takes place on really tiny, distributed, and outspread devices, e.g. humidity or temperature sensors. To make it a bit more tangible: these devices are generally too small to run an operating system locally. They just generate data and send it to the network.

As opposed to mist computing, the cloud refers to huge centralized data centers. The terms “fog” and “edge” fall within this continuum and – depending on whose definition you follow – can be used interchangeably.

From edge to cloud and back: History repeating itself

If these terms seem familiar to you, that is probably because edge computing is just another cycle in a series of computing developments.

Computing has seen constant turns between centralized and distributed computing over the decades, and with recent developments in hardware capacity, we’re again entering into a decentralized cycle.

Edge Computing itself is not new – it has been around for some 20 years.

Cloud or edge? – one to rule them all?

Neither the cloud nor the edge is a solution for all cases. As always: it depends. There are cases where the edge makes more sense than the cloud, and vice versa. Most cases, however, need both. If you can, though, putting the bulk of your computational workload on the edge makes sense from both an economic and an environmental perspective.

Interested in learning more? Read why Android developers should care about Edge Computing or discover Edge IoT use cases.

A last word on “edge consortia”

There is no lack of consortia defining terms around edge computing – it’s a lot like the Judean People’s Front against the People’s Front of Judea. After a year of battle, the most prominent edge consortium emerging currently seems to be EdgeX under the umbrella of the Linux Foundation – fully open source, while also largely supported and driven by Dell, who initiated it. Other notable players trying to get a foothold in this space are Deutsche Telekom with MobiledgeX and HPE with Edge Worx. A European counterpart, ECCE, formed in spring 2019 and might be worth watching, as it is supported by many industry players such as KUKA, Intel, and Huawei.