To IoT’s great benefit, edge computing is about to take the spotlight. Each day, more devices join the billions already connected to the Internet of Things, and together they generate mountains of information. One estimate predicts the amount of data will soar to 79.4 zettabytes within five years. Imagine storing 80 zettabytes on DVDs. All those DVDs would circle the Earth more than 100 times.
In other words, a whole lot of data.
Indeed, thanks to the IoT, a dramatic shift is underway. More enterprise-generated data is being created and processed outside of traditional, centralized data centers and clouds. Unless we make a course correction, that flood will overwhelm our infrastructure. We must make better use of edge computing to deal more effectively with this ocean of data.
If we do this right, our infrastructure should be able to handle this data flow in a way that maximizes efficiency and security. The system would let organizations benefit from near-instantaneous response times. It would allow them to use the new data at their disposal to make smarter decisions and, most importantly, to make them in real time.
That’s not what we have nowadays.
In fact, when IoT devices ship their data back to the cloud for processing, transmissions are both slow and expensive. Too few devices are taking advantage of the edge.
Instead, many route data to the cloud, where you’re going to encounter network latency measuring around 25 milliseconds. And that’s in best-case scenarios. Often, the lag is far worse. If you have to feed data through a server network and the cloud to get anything done, that takes a long time and a ton of bandwidth.
An IP network can’t guarantee delivery in any particular time frame. Minutes might pass before you realize that something has gone wrong. At that point, you’re at the mercy of the system.
Until now, technologists have approached Big Data from the perspective that collecting and storing tons of it is a good thing. No surprise, given how strongly the cloud computing model is oriented toward large data sets.
The default behavior is to want to keep all that data. But think about how you collect and store all that information. There is simply too much data to push it all around the cloud. So why not work at the edge instead?
Consider, for example, the imagery collected by the millions of cameras in public and private spaces. Does all of that footage really need to be shipped somewhere? In many, and perhaps most, instances, we don’t need to store those images in the cloud.
Let’s say that you measure ambient temperature with a sensor that produces a reading once a second. The temperature in a house or office doesn’t usually change on a second-by-second basis. So why keep every reading? And why spend all the money to move it somewhere else?
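The temperature example above amounts to simple change detection at the edge: forward a reading upstream only when it differs meaningfully from the last one sent. Here is a minimal sketch of such a filter; the function name and the 0.5-degree threshold are illustrative choices, not part of any particular product.

```python
# Hypothetical sketch of a "deadband" filter an edge node might run.
# Readings are forwarded upstream only when they change by more than
# a chosen threshold; steady-state samples are dropped locally.

def deadband_filter(readings, threshold=0.5):
    """Yield only readings that differ from the last forwarded value
    by more than `threshold` (the first reading is always forwarded)."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield value

# One sample per second from a stable room: most samples never leave the edge.
samples = [21.0, 21.1, 21.0, 21.2, 22.0, 22.1, 21.9, 25.0]
forwarded = list(deadband_filter(samples, threshold=0.5))
# Only 21.0, 22.0, and 25.0 are forwarded; five of eight samples are dropped.
```

Even this trivial policy cuts the volume shipped to the cloud dramatically, which is the point: the decision about what data matters is made next to the sensor, not in a distant data center.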
Obviously, there are cases where it will be practical and valuable to store massive amounts of data. A manufacturer might want to retain all the data it collects to tune plant processes. But in the majority of instances where organizations collect tons of data, they actually need very little of it. And that’s where the edge comes in handy.
The edge also can save you tons of money. We used to work with a company that collected consumption data for power management sites and office buildings. They kept all that data in the cloud. That worked well until they got a bill for hundreds of thousands of dollars from Amazon.
Edge computing and the broader concept of distributed architecture offer a far better solution.
Some people treat the edge as if it were a foreign, mystical environment. It’s not.
Think of the edge as a commodity compute resource located relatively close to the IoT and its devices. Its usefulness comes precisely from its being a “commodity” resource rather than some specialized one. In practice, that most likely takes the form of a resource that supports containerized applications, which hide the specific details of the edge environment.
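One common way containers hide the environment is by injecting site-specific settings at deploy time, so the same image runs unchanged on an edge gateway or in a data center. The sketch below assumes that pattern; the variable names (`DEPLOY_TIER`, `BROKER_URL`) and endpoints are hypothetical.

```python
import os

# Hypothetical sketch: a containerized worker learns where it is running
# from variables injected at deploy time, so one image serves every tier.

def load_config(env=os.environ):
    """Read deployment settings, falling back to edge-friendly defaults."""
    return {
        "tier": env.get("DEPLOY_TIER", "edge"),
        "broker_url": env.get("BROKER_URL", "mqtt://localhost:1883"),
    }

# At the edge, the orchestrator might inject local endpoints:
edge_cfg = load_config({"DEPLOY_TIER": "edge",
                        "BROKER_URL": "mqtt://gateway.local:1883"})
# In the data center, the same image gets different values:
cloud_cfg = load_config({"DEPLOY_TIER": "cloud",
                         "BROKER_URL": "mqtt://broker.internal:1883"})
```

Because the application code never hard-codes where it runs, operators can treat edge nodes as interchangeable commodity capacity.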
In that sort of edge environment, we can easily imagine a distributed systems architecture where some parts of the system are deployed to the edge. At the edge, they can provide real-time, local data analysis.
Systems architects can dynamically decide which components of the system should run at the edge, while other components remain deployed in regional or centralized processing locations. Configured dynamically in this way, the system can be optimized for execution in edge environments with different topologies.
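The placement decision described above can be expressed as data rather than code, so it can change per deployment without rebuilding anything. A minimal sketch, with entirely hypothetical component names and a simple two-tier split:

```python
# Hypothetical placement map: which component runs in which tier.
# In a real system this would come from deployment configuration,
# not a hard-coded dictionary.
PLACEMENT = {
    "sensor-ingest":   "edge",   # needs low latency, runs near devices
    "anomaly-filter":  "edge",   # drops uninteresting data locally
    "trend-analytics": "cloud",  # batch workload, tolerates latency
    "archival":        "cloud",  # long-term storage stays centralized
}

def components_for(tier):
    """Return the components a node in the given tier should start."""
    return sorted(name for name, where in PLACEMENT.items() if where == tier)

edge_components = components_for("edge")
cloud_components = components_for("cloud")
```

Moving a component between tiers then becomes a one-line configuration change, which is what makes it practical to retune the system for edge environments with different topologies.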
With this kind of edge environment, we can expect lower latencies. We also achieve better security and privacy with local processing.
Some of this is already being done on a one-off basis, but it hasn’t yet been systematized. Until it is, organizations must figure it out on their own by assuming the role of systems integrator. Rather than waiting, they should embrace the edge and help make the IoT hum.