Mellanox Advances ‘In-Network Computing’ with ConnectX-5 Adapter 

Networking specialist Mellanox has announced ConnectX-5, the next generation of its 100G InfiniBand and Ethernet adapter line. The company says the new device will help organizations take advantage of real-time data processing for high performance computing (HPC), data analytics, machine learning, national security and ‘Internet of Things’ applications.

ConnectX-5 was designed to connect with any computing infrastructure – x86, Power, GPU, ARM, and FPGA – and it employs a variety of offload engines, which fall into two camps. The more established offloading capability supports network functions such as RDMA, transport offload, and SR-IOV. There is also a new generation of acceleration engines that run data algorithms, essentially making ConnectX-5 a coprocessor.
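
For readers less familiar with the first camp, the sketch below is a minimal illustration (not Mellanox code) of the standard libibverbs setup that RDMA-capable NICs accelerate in hardware: opening the device, allocating a protection domain, and registering a buffer the adapter can then read and write without involving the CPU. It stops short of the queue-pair handshake a full transfer would need, and it assumes a host with libibverbs installed and an RDMA-capable adapter (link with -libverbs).

```c
/* Minimal libibverbs sketch: open an RDMA device and register a buffer
 * the NIC can access directly (zero-copy). Illustration only; a real
 * transfer would also create queue pairs and exchange connection info. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KB so the adapter may read/write it on behalf of peers. */
    char *buf = calloc(1, 4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("device %s ready, buffer registered (lkey=0x%x)\n",
           ibv_get_device_name(devs[0]), (unsigned)mr->lkey);

    /* Tear down in reverse order. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```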

Significant for HPC, ConnectX-5 continues the approach begun with Switch-IB2 and moves more MPI capabilities into the network. While Switch-IB2 offloads MPI collective operations so they run on the switch itself, ConnectX-5 adds offloads for MPI tag matching and MPI AlltoAll operations, as well as advanced dynamic routing.
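
As a point of reference, the short MPI program below (an illustration, not Mellanox code) exercises the two operations named above: a tagged point-to-point receive, whose message-matching step is what the adapter can perform in hardware, and the MPI_Alltoall collective. It assumes any standard MPI implementation (e.g. Open MPI or MPICH); compile with mpicc and run with at least two ranks.

```c
/* Two MPI operations the article says ConnectX-5 can offload:
 * tag matching on point-to-point messages and the MPI_Alltoall collective. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Tag matching: the receiver posts a request for a message with a
     * specific tag; pairing incoming messages with posted receives is the
     * step that can move into the NIC. */
    if (rank == 0) {
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, /* tag */ 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, /* tag */ 7, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d with tag 7\n", payload);
    }

    /* AlltoAll: every rank sends one element to every other rank, the
     * dense data exchange the adapter can help run in the network. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```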

With ConnectX-5 and Switch-IB2, 60 percent of the MPI algorithms are now being executed on the network, said Mellanox’s Gilad Shainer. “Looking ahead, we’re probably going to see the entire MPI moved to the network as part of the co-design approach,” he added.

ConnectX-5 also exposes what Mellanox refers to as in-network memory: a small memory address space, accessible by the application, that lets data be stored on or retrieved from the network devices themselves, with the goal of faster access from different endpoints.

Mellanox positions the offloading approach as part of the larger transition to co-design principles that mine synergies between software and hardware or between the different hardware components. “The way to solve the performance bottlenecks that are now emerging is by running different algorithms in different places,” said Shainer. “ConnectX-5 is the first adapter that brings the co-design architecture into the NIC side.”

“Ten years ago, process runtime or MPI collective approaches were running at latencies of hundreds of microseconds,” he went on to explain. “Network device latencies were in the range of tens of microseconds, so the network was a big part of the overall latency. Fast forward to today and process latencies are in the range of tens of microseconds while network device latency is running about 100 nanoseconds. The question we’re addressing is how do you make another performance improvement in the process latency – moving from 10 microseconds to low single-digit microseconds – when CPU frequency doesn’t go faster.”
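
For a rough sense of the arithmetic behind that argument, the snippet below plugs in the round numbers Shainer cites (illustrative figures only): the network devices’ share of end-to-end latency has shrunk to roughly one percent, so further gains have to come from the processing itself – the case for offloading algorithms to the network.

```c
/* Back-of-the-envelope check of the latency figures quoted above,
 * expressed in nanoseconds (round, illustrative numbers). */
#include <stdio.h>

int main(void)
{
    double then_process_ns = 100000.0; /* "hundreds of microseconds" ~ 100 us */
    double then_network_ns = 10000.0;  /* "tens of microseconds"     ~  10 us */
    double now_process_ns  = 10000.0;  /* "tens of microseconds"     ~  10 us */
    double now_network_ns  = 100.0;    /* "about 100 nanoseconds" */

    printf("then: network devices were ~%.0f%% of process latency\n",
           100.0 * then_network_ns / then_process_ns);
    printf("now:  network devices are ~%.0f%% of process latency\n",
           100.0 * now_network_ns / now_process_ns);
    return 0;
}
```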

“Computing within network devices makes sense when multiple nodes need to act on the same data,” observed Addison Snell, CEO of analyst firm Intersect360 Research. “In essence, it’s the complement to pushing a computation all the way to a GPU with something like RDMA, where you don’t have to move the data off of the GPU in order to compute on it. If something’s extremely local, it can be handled – at one end of the spectrum – all the way down at the processing element on the node; at the other end of the spectrum, where it’s something that’s shared between nodes, it can be more effective to do it in the network as opposed to in the microprocessor.”

The complete story can be read on HPCwire.
