Demystifying the edge: How businesses can build boldly for a bright future - The EE


Perry Krug of Couchbase

What is edge computing? If you asked five different people that question, you'd probably get five different answers, says Perry Krug, director of shared services at Couchbase.

That's because it's hard to pin down. With a myriad of use cases, edge computing is a strategic architecture that resists a neat definition. Simply put, it's about storing and processing data closer to the devices and users that consume it, making applications quicker, more dynamic and more reliable.

Although it's a hard concept to follow, it's an area that's becoming increasingly popular. The edge computing market is expected to grow at a compound annual rate of 35.4%, from US$1.47 billion (€1.30 billion) in 2017 to US$6.72 billion (€5.94 billion) in 2022. For organisations to unlock its full potential, edge computing needs to be demystified as quickly as possible so they can understand the impact it will have.

Peeling away the layers

In a nutshell, edge computing complements cloud computing to support applications that need high speed and availability. Cloud-only applications rely solely on the cloud for storing and processing data, which makes them extremely dependent on a reliable internet connection. When the internet lags or becomes unavailable, the entire application slows or fails to run.

That’s what makes edge computing ideal. It gets around internet dependencies by locating data as closely as possible to where it’s being produced and consumed, which speeds up applications and improves their availability. It reduces latency, insulates against internet outages and promises to power a realm of new innovations.

Let's look at it in greater detail. Think of an oil drilling platform in the middle of the sea. Operators continuously collect data from hundreds of sensors across the platform, measuring things like wave height, operating capacity and pressure. All this data requires a real-time response, because conditions are always changing.

Imagine that all that data is stored and processed in a cloud data centre. The oil operator would have to invest heavily in satellite internet just to send measurements back and forth for evaluation. Now what happens when a sensor detects a harmful change in conditions, or a potential breakdown? Intervention would take too long because of the time needed to send data to the cloud for processing. Under these circumstances, time and reliability are critical; if the connection slows or fails completely, it could be too late to rectify the situation.

Enter edge computing. The solution is simple: eliminate the risk of disaster by putting a data centre on the oil platform itself. Shifting data processing away from the cloud to the place where data is generated solves the issues of latency and downtime: measurements are processed in an edge data centre, where anomalous readings can be detected instantly. Operators can then respond in real time, operations are more efficient and safety risks are significantly reduced.
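To make this concrete, here is a minimal sketch of what edge-local processing might look like. The sensor values, threshold and function names are illustrative assumptions, not any real platform or product API; the point is simply that a breach is flagged on the spot, with no cloud round trip.

```python
from statistics import mean

# Hypothetical safe operating limit for a pressure sensor (bar).
PRESSURE_LIMIT = 350.0

def check_readings(readings):
    """Process readings locally on the edge node: flag any breach
    immediately instead of waiting on a cloud round trip."""
    alerts = [r for r in readings if r > PRESSURE_LIMIT]
    return {"avg": mean(readings), "max": max(readings), "alerts": alerts}

result = check_readings([310.2, 348.9, 351.4, 330.0])
print(result["alerts"])  # readings above the limit trigger a local response
```

Because the check runs next to the sensors, an alert can cut power or close a valve in milliseconds; the summarised data can still be forwarded to the cloud later for long-term analysis.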

A tiered approach

Edge computing works by using tiered edge data centres and data storage embedded on devices to move data processing closer to applications. This tiered architecture protects applications from central or regional data centre outages: each tier leverages local connectivity, which is more reliable, and synchronises data across tiers as connectivity permits.
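The "synchronise as connectivity permits" idea can be sketched in a few lines. This is a simplified model, not a real edge database: each tier keeps a local, always-writable store plus a queue of pending changes, and pushes the queue upstream only when its link is up.

```python
# Minimal sketch of tiered sync. Tier names, the link_up flag and the
# pending queue are illustrative assumptions for this example only.

class Tier:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream      # next tier toward the cloud
        self.store = {}               # local data, always writable
        self.pending = []             # changes not yet pushed upstream
        self.link_up = True

    def write(self, key, value):
        """Writes always succeed locally, even while disconnected."""
        self.store[key] = value
        self.pending.append((key, value))

    def sync(self):
        """Push pending changes upstream only when the link is available."""
        if self.upstream is None or not self.link_up:
            return
        for key, value in self.pending:
            self.upstream.write(key, value)
        self.pending.clear()

cloud = Tier("cloud")
edge = Tier("edge-dc", upstream=cloud)
edge.link_up = False                  # internet outage: edge keeps working
edge.write("sensor-42", 351.4)
edge.sync()                           # no-op while disconnected
edge.link_up = True
edge.sync()                           # data reaches the cloud once restored
```

Note that the edge tier never blocks on the cloud: local reads and writes continue through the outage, and the backlog drains automatically when connectivity returns.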

The top layer of this tiered system is the cloud data centre, which still plays a crucial role in an edge computing architecture as the final destination for information. However, local applications can't rely on it alone.

The next layer is the edge layer, which could be an oil platform like the one above, or just as easily a restaurant or retail shop. It consists of edge data centres and IoT gateways running on a local network such as private 5G or Wi-Fi. Below that sits a tier of edge devices, such as smartphones, laptops and IoT devices, all communicating with the edge data centre and with each other.

However, for all these tiers to work effectively, you need the right kind of database: one that runs in every layer, can distribute its data footprint throughout the architecture and can synchronise data changes instantly. In essence, you need a flexible, synchronised fabric of data processing that spans the entire architecture, from the cloud through the edge to the end device.

While having a database in each layer is important, it's even more critical that those databases can communicate with each other to replicate and synchronise data across the entire edge network, preventing data from becoming lost or corrupted. Spreading data processing across the tiers also means applications run faster, with increased resilience, better security and more efficient use of bandwidth. If the cloud data centre and edge data centre become unavailable for any reason, apps with embedded databases can continue to run in real time by processing data directly on and between devices.

An edge-ready database

For edge computing success, it’s important to choose the right database. You need a system that can distribute its workloads to all tiers and has the ability to instantly replicate all data across database instances whether in the cloud or in an edge data centre.

Using independent database technologies across the different layers adds development friction and slows time to market, because each brings its own data model and programming paradigm. It's therefore more beneficial to have one single database that covers all layers. Even this can add complexity, however, if the database isn't designed to operate in a multi-master, disconnected fashion, making it harder for the operator to manage data consistency and replication.

A database also needs to be embeddable. For applications to keep running while offline, data storage must be integrated directly into the edge device, and the embedded database must be able to operate without a central cloud control point and sync all data once connectivity is restored. Businesses must also ensure that data flows securely and efficiently through their edge architecture; to this end, the database should support bi-directional data flow that can be controlled.
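A controlled, bi-directional flow can be illustrated with a toy sync routine. This is a hedged sketch, not any vendor's replication API: the stores are plain dictionaries, and the push filter stands in for whatever access rules a real system would enforce.

```python
# Sketch of controlled bi-directional flow between an embedded device
# store and an edge data centre. All names here are illustrative.

def bidirectional_sync(device_store, edge_store, push_filter):
    # Push: only documents the device is allowed to share leave it.
    for key, doc in device_store.items():
        if push_filter(doc):
            edge_store[key] = doc
    # Pull: the device receives what the edge data centre holds.
    for key, doc in edge_store.items():
        device_store.setdefault(key, doc)

device = {"reading:1": {"value": 351.4, "public": True},
          "note:1": {"text": "private", "public": False}}
edge = {"config:1": {"limit": 350.0, "public": True}}

bidirectional_sync(device, edge, push_filter=lambda d: d.get("public"))
# the private note never leaves the device; the shared config arrives
```

The filter is what makes the flow "controlled": data moves in both directions, but only along paths the operator has explicitly permitted.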

Build boldly on the edge

When planning your own edge architecture, only consider a database that meets all the above data processing requirements. It's only with the right database in place that organisations can build boldly and take full advantage of the edge. If you're unable to save, sync and replicate data in each of the layers, edge computing simply won't work.

With low latency and resilience to internet outages, businesses that embrace an edge-ready database and edge computing will see applications become faster and more reliable. This will not only boost the customer experience but will increase revenue as more and more end users turn to brands that harness edge computing. By processing data closer to where it happens, edge computing will power a new class of modern applications and future innovations that empower enterprises to build boldly.

The author is Perry Krug, director of shared services at Couchbase.

