
Network Computing Moves Closer to the Edge 


In the decades since Sun Microsystems declared “the network is the computer,” cloud computing has established itself as the de facto model.

The economic benefits of a near-infinite, elastic infrastructure that customers don’t need to manage themselves have assured that. Storing information centrally has made it possible to deliver it wherever it’s needed, to any device, supporting the explosion in remote work, smartphone apps, social networks and more.

Now we are entering a new phase of network computing. The emergence of IoT and sensor networks, along with demanding applications such as multiplayer online gaming, real-time analytics and self-driving cars, means cloud computing is not the right model to carry us into the future. Instead, we’re entering the era of edge computing, in which most computation and storage will happen as close as possible to the user. This change matters because it will require us to think differently about how we design applications, and about where in the network the data and computation for each application should reside.

Why the change? A few reasons.

First, devices at the edge of the network increasingly need to communicate directly with each other to reduce latency. We’re moving from a hub-and-spoke model to a connected mesh in which devices increasingly need to exchange data in close to real time. We’re just starting to see this in virtual reality (VR), both for consumer uses like multiplayer games and business applications like Microsoft’s HoloLens. An optimal experience requires minimal latency, and that means storing and processing as much of the data as possible close to the user. Self-driving cars are another example, particularly as they evolve in the future. The federal government has said it wants all cars to communicate directly with each other to minimize accidents, constantly sharing their speed and position. A high-latency centralized architecture does not make sense for this type of real-time sharing.

In addition, networks are already stretched to capacity by bandwidth-intensive applications such as 4K streaming video. As billions of sensors come online with the industrial IoT, it will not make economic or technical sense to transport all of the data they generate back to a central location for processing. Computing is extremely low cost, and it will make sense to analyze more data closer to where it can be acted on in real time to control industrial turbines, aircraft and other complex equipment.
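
To make that concrete, here is a minimal sketch of edge-side preprocessing, assuming a hypothetical turbine vibration sensor: the edge node aggregates a window of raw samples locally and forwards only a compact summary (plus any out-of-range readings) upstream, rather than shipping every sample to a central cloud. The sensor interface, window size, threshold and upstream call are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch: aggregate high-rate sensor readings at the edge and forward
# only a compact summary to the central cloud. The sensor source, window size,
# and alert threshold below are illustrative assumptions.
import random
import statistics
import time

WINDOW_SIZE = 100          # raw samples aggregated per upstream message
VIBRATION_ALERT = 12.0     # hypothetical out-of-range threshold (mm/s)

def read_vibration_sample() -> float:
    """Stand-in for a real sensor driver; returns one vibration reading."""
    return random.gauss(8.0, 2.0)

def summarize(window: list[float]) -> dict:
    """Reduce a window of raw samples to the fields the cloud actually needs."""
    return {
        "timestamp": time.time(),
        "count": len(window),
        "mean": statistics.fmean(window),
        "max": max(window),
        "alerts": [v for v in window if v > VIBRATION_ALERT],
    }

def send_upstream(summary: dict) -> None:
    """Placeholder for an upload to a central endpoint (MQTT, HTTPS, etc.)."""
    print("forwarding summary:", summary)

if __name__ == "__main__":
    window: list[float] = []
    for _ in range(3 * WINDOW_SIZE):          # simulate a short run
        window.append(read_vibration_sample())
        if len(window) == WINDOW_SIZE:
            send_upstream(summarize(window))  # 1 message instead of 100
            window.clear()
```

The point of the design is simply that traffic over the wide-area link scales with the number of summaries, not the number of raw samples.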

So what does the network of the future look like?

There will always be a use for centralized computing — when the data isn’t needed for real-time applications at the edge. So edge computing will not completely replace the cloud — just as the cloud hasn’t completely replaced client-server and client-server hasn’t completely replaced the mainframe.

But far more of our storage and processing will happen at the edge, where most cutting-edge innovation will take place. This includes the “extreme” edge, meaning in the devices themselves, as well as near the edge, in small data centers located close to end users. These “edge data centers” are already being built in secondary markets to cache video content, and we will see more of them as technologies like VR, for which low latency is even more critical, become widespread.

In addition to application needs, this move to the edge is being driven by advances in distributed computing made over the past decade. Apache Kafka, for instance, provides a high-throughput, low-latency messaging platform that can move data in real time between edge devices. There are now also massively distributed databases, such as Google’s Cloud Spanner and Microsoft’s Cosmos DB, which can store data throughout the network while maintaining consistency. These and other advances make it easier to build and maintain edge applications without having to custom-build the infrastructure underneath.
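
As an illustration of the kind of plumbing this enables, the sketch below uses the open-source kafka-python client to publish telemetry from one edge device and consume it on a nearby peer. The broker address, topic name and message format are assumptions made for the example; in a real deployment the broker would sit in a nearby edge data center rather than a distant central region.

```python
# Minimal sketch, assuming a Kafka broker reachable at a hypothetical
# edge address ("edge-broker:9092") and a topic named "vehicle-telemetry".
# Requires the kafka-python package: pip install kafka-python
import json
import time

from kafka import KafkaConsumer, KafkaProducer

BROKER = "edge-broker:9092"        # assumed broker in a nearby edge data center
TOPIC = "vehicle-telemetry"        # assumed topic name

# Producer side: an edge device publishes its state as small JSON messages.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"vehicle_id": "car-42", "speed_kph": 63.5, "ts": time.time()})
producer.flush()

# Consumer side: a nearby device or edge service reads the stream in near
# real time instead of waiting on a round trip to a central cloud.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,      # stop iterating if no messages arrive
)
for message in consumer:
    print("received:", message.value)
```

Because both sides only need to reach the local broker, the latency of each exchange is bounded by the edge network rather than by a round trip to a central cloud.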

As we design applications in the future, we will need to think about where in the network they should be located. A central cloud will be applicable for some data, such as a database of customer records, or for large data sets used to train machine learning models. But when data is more unique to end users and applications — as most of it will be — computing will be distributed throughout the network to the edge. At that point, Sun’s original vision will have been realized — computing will happen everywhere in the network, and the network really will be the computer.
Tyler McMullen is CTO and co-founder of Fastly.
