
Latency Needs Driving an Edge Server Boom 

The conventional wisdom holds that Internet of Things and data analytics applications are driving the steady shift of server deployments to the network edge. Certainly, the overriding requirement is placing processing horsepower closer to edge devices and end users.

It turns out that the forecast doubling of edge server deployments over the next several years is instead being driven by network carriers seeking reduced latency and greater bandwidth to meet surging demand for digital content. Those content delivery systems require collection and real-time processing of customer and other data, notes a market analyst.

Hence, London-based Omdia forecasts a doubling of edge deployments through 2024, totaling an estimated 4.7 million servers. Most of the servers deployed by telcos are being used for content delivery, with edge server deployments “justified” through cost savings achieved via virtual network functions. That combination is boosting network operators’ revenues through the delivery of new services.

As network services shift inexorably to the edge, Omdia predicts edge server workloads will expand to include self-driving car telemetry, augmented and virtual reality (AR/VR) applications and cinema-quality gaming that has boomed during the pandemic.

Meanwhile, Omdia forecast this week that 37.6 percent of servers shipped to enterprises will be deployed at edge locations by 2024, up nearly 10 percent over 2019 shipments. Those totals reflect early adoption of edge computing for health care and industrial applications along with data analytics requiring ever-lower latency.

“One driver for enterprises moving more servers to the edge is the expansion of automated manufacturing and the use of IoT devices,” said Vlad Galabov, Omdia’s principal analyst for datacenter IT. Edge applications range from real-time factory asset management to new retail operating models.

Cloud service providers, rather than network carriers, are traditionally viewed as driving edge deployments as AI and data analytics workloads shift from on-premises systems to a range of cloud combinations. While cloud providers’ edge deployments are not growing at the same rate as those of telco and enterprise users, Omdia forecasts that 12.2 percent of servers shipped to hyper-scale cloud service providers will be deployed at the edge by 2024, up from just 5 percent last year.

Application drivers include video streaming, cloud gaming and AR/VR deployments, along with emerging industrial applications where low latency is critical.

Indeed, Omdia sees latency, or “round trip time,” as a cloud service differentiator among “tier-two” cloud providers, many of which operate within individual countries. Proximity to customers reduces latency, allowing some regional providers to compete with hyper-scalers.
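As a rough illustration of why proximity matters, the short Python sketch below estimates round-trip time by timing a TCP connection to a pair of endpoints; the hostnames are placeholders for illustration only, not providers or locations cited by Omdia.

    # Minimal sketch: estimate round-trip time ("latency") by timing a TCP handshake.
    # The hostnames below are illustrative placeholders, not endpoints cited by Omdia.
    import socket
    import time

    ENDPOINTS = [
        ("nearby-edge.example.com", 443),     # hypothetical in-country edge location
        ("distant-region.example.com", 443),  # hypothetical centralized cloud region
    ]

    def measure_rtt(host: str, port: int, timeout: float = 2.0) -> float:
        """Return the time in milliseconds to complete a TCP connection."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000.0

    if __name__ == "__main__":
        for host, port in ENDPOINTS:
            try:
                print(f"{host}: {measure_rtt(host, port):.1f} ms")
            except OSError as err:
                print(f"{host}: unreachable ({err})")

Run against real endpoints, a nearby regional server will typically return a round trip measured in single-digit or low double-digit milliseconds, while a distant centralized region can add tens of milliseconds, which is the gap regional providers are exploiting.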

By contrast, the centralized cloud server operations of larger second-tier cloud providers like Apple, IBM and Oracle have hindered their efforts to compete with the likes of Amazon Web Services, Microsoft Azure and Google Cloud.

Omdia therefore expects these second-tier providers to boost their number of edge locations and launch new services in an attempt to keep pace with the cloud hyper-scalers.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
