DARPA Targets Network Bottlenecks
DARPA will seek to unclog the networking bottlenecks that are hindering wider use of powerful hardware in computing-intensive applications.
The Pentagon research agency has unveiled another in a series of post-Moore’s Law computing initiatives, this one seeking to overhaul the network stack and interfaces that currently fall well short of connecting high-end processors to external networks and the data-driven applications they support.
The DARPA initiative, Fast Network Interface Cards, or FastNICs, aims to boost network performance by a factor of 100 through a transformation of the network stack from the application to the system software layers running on top of steadily faster hardware.
“The true bottleneck for processor throughput is the network interface used to connect a machine to an external network, such as an Ethernet, therefore severely limiting a processor’s data ingest capability,” said Jonathan Smith, a program manager in DARPA’s Information Innovation Office.
“Today, network throughput on state-of-the-art technology is about 10^14 bits per second and data is processed in aggregate at about 10^14 bps. Current stacks deliver only about 10^10 to 10^11 bps application throughputs,” Smith added.
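A quick back-of-the-envelope check makes the cited gap concrete. Using only the figures quoted above (10^14 bps aggregate hardware rate versus 10^10 to 10^11 bps delivered to applications), today’s stacks fall three to four orders of magnitude short, and a 100-fold improvement from the high end works out to 10 Tbps. The sketch below simply restates that arithmetic; the variable names are illustrative, not DARPA’s.

```python
# Back-of-the-envelope check of the throughput gap cited in the article.
# All rates are in bits per second (bps); figures come from the quote above.

link_rate_bps = 1e14       # aggregate processing/network hardware rate (~10^14 bps)
stack_low_bps = 1e10       # application throughput, low end (~10^10 bps)
stack_high_bps = 1e11      # application throughput, high end (~10^11 bps)

gap_best = link_rate_bps / stack_high_bps   # best case: 1,000x short
gap_worst = link_rate_bps / stack_low_bps   # worst case: 10,000x short
target_bps = 100 * stack_high_bps           # FastNICs' 100x goal from the high end

print(f"stacks fall {gap_best:,.0f}x to {gap_worst:,.0f}x short of the hardware")
print(f"100x improvement target: {target_bps / 1e12:.0f} Tbps")
```

The 10 Tbps figure that falls out of the 100x goal matches the network interface hardware demonstration target DARPA has set for the program.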
The program acknowledges the rise of distributed computing, which requires far more network bandwidth than is currently available. Data-driven applications such as image classification and deep neural networks have also exposed network bottlenecks. On the processing end, machine learning and other applications are being accelerated by a combination of graphics processors and x86-based CPUs.
Network interfaces have failed to keep pace.
While emerging network approaches such as service meshes and NVM Express over Fabrics (NVMe-oF) have yielded incremental gains in network bandwidth, DARPA’s FastNICs effort attempts to get ahead of bottlenecks through what program managers call “clean-slate” approaches that rework existing network architectures.
The effort also seeks to address the current lack of enterprise incentives to move beyond what DARPA’s Smith described as “cautious incremental technology advances across multiple, independent market silos.”
The network interface initiative will therefore focus on hardware development to boost “aggregate raw server data path speed,” DARPA said. Among the goals is demonstrating 10-Tbps network interface hardware.
“It starts with the hardware,” Smith said. “If you cannot get that right, you are stuck. Software can’t make things faster than the physical layer will allow so we have to first change the physical layer.”
The next step toward achieving the agency’s 100-fold performance goal at the application level is developing system software to manage FastNICs hardware. The open-source software based on at least one open-source OS would enable faster, parallel data transfer between network hardware and applications.
Among the design goals is a requirement that researchers demonstrate an application achieving the 100-fold performance increase on a new network stack.
The networking initiative focuses on two applications: distributed machine learning and sensors. The latter includes military applications such as ingesting and analyzing sensor data from unmanned platforms and satellites. Meanwhile, improved network interfaces are seen as critical for machine learning applications that harness clusters of machines but fall short when it comes to data movement.
“If you can move data more quickly between machines with a successful FastNICs result then you should be able to shrink the performance gap,” Smith predicted.
Details on DARPA’s FastNICs program are available here.