News & Insights for the AI Journey | Sunday, May 26, 2019

CXL Consortium Launches CPU-to-Anything High Speed Interconnect Protocol 

Another front has been opened in the long campaign to enable any-to-any connectivity in high performance data center computing. The Compute Express Link (CXL) consortium – with tech heavyweights Intel, Google, HPE, Dell EMC, Microsoft, Facebook, Cisco, Huawei and Alibaba – has ratified version 1.0 of the CXL specification, an open CPU-to-device and CPU-to-memory interconnect designed to remove the bottlenecks between general-purpose and accelerated processing architectures.

CXL joins a growing field of efforts – OpenCAPI, CCIX, Gen-Z, and NVLink – focused on high-speed host-to-device and device-to-device interconnection. Created by Intel, the CXL interconnect standard is focused on enabling high-speed communications between the CPU and workload accelerators, such as GPUs, FPGAs and Arm-based devices, used in AI, machine learning, HPC and other compute- and data-intensive workloads.

CXL technology is built on the PCI Express (PCIe) infrastructure, leveraging the PCIe 5.0 interface to provide protocols for the I/O, memory and coherency interfaces. The consortium said CXL maintains memory coherency between the CPU memory space and memory on attached devices, “which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. This permits users to simply focus on target workloads as opposed to the redundant memory management hardware in their accelerators.”
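The I/O, memory and coherency roles described above map to the three sub-protocols the published CXL specification multiplexes over the PCIe 5.0 physical layer (CXL.io, CXL.cache and CXL.mem). The sketch below is purely illustrative – the protocol names come from the CXL spec, but the dispatcher function and transaction labels are hypothetical, chosen here to show which sub-protocol would carry each kind of traffic:

```python
from enum import Enum

class CxlProtocol(Enum):
    CXL_IO = "CXL.io"        # discovery, configuration, DMA (PCIe-like I/O semantics)
    CXL_CACHE = "CXL.cache"  # lets a device coherently cache host memory
    CXL_MEM = "CXL.mem"      # lets the host access device-attached memory

def protocol_for(transaction: str) -> CxlProtocol:
    """Toy dispatcher (illustrative only): map a transaction kind to the
    CXL sub-protocol that would carry it."""
    table = {
        "config_read": CxlProtocol.CXL_IO,
        "device_cache_fill": CxlProtocol.CXL_CACHE,
        "host_load_from_device_memory": CxlProtocol.CXL_MEM,
    }
    return table[transaction]

print(protocol_for("host_load_from_device_memory").value)  # CXL.mem
```

The key point for coherency is the middle two entries: CXL.cache and CXL.mem are what let accelerator and CPU share one memory space, while CXL.io preserves ordinary PCIe-style device management.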

For performance, CXL delivers 32 GT/s per lane across 16 lanes, or 128 GB/s of aggregate bidirectional bandwidth – compared with OpenCAPI at 25 GT/s per lane for 16 lanes, which is 100 GB/s, according to the CXL consortium.
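As a sanity check, the consortium's headline numbers fall straight out of the per-lane rates, assuming the figures sum both link directions and ignore line-coding overhead (a sketch; `raw_bandwidth_gbs` is just illustrative arithmetic, not part of any spec):

```python
def raw_bandwidth_gbs(gt_per_lane, lanes):
    """Raw aggregate link bandwidth in GB/s, summed over both directions,
    before line-coding overhead (1 GT/s ~ 1 Gb/s per lane per direction)."""
    return gt_per_lane * lanes * 2 / 8  # x2 for both directions, 8 bits/byte

print(raw_bandwidth_gbs(32, 16))  # CXL over PCIe 5.0 x16 -> 128.0 GB/s
print(raw_bandwidth_gbs(25, 16))  # OpenCAPI x16          -> 100.0 GB/s
```

Effective throughput would be somewhat lower once encoding and protocol overhead are accounted for, but the raw figures match the consortium's comparison.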

Blogging on the CXL announcement, Navin Shenoy, EVP/GM of Intel’s Data Center Group, said Intel developed the technology behind CXL and donated it to the consortium to become the initial release of the new specification, “much like our roles with Universal Serial Bus (USB) and PCI Express.”

Shenoy said CXL is a response to the explosion in data volumes and emerging, specialized workloads, such as compression, encryption and AI, which require mixing and matching general-purpose CPUs and purpose-built accelerators.

“These accelerators need a high-performance connection to the processor, and, ideally, they share a common memory space to reduce overhead and latency,” said Shenoy. “CXL is a key technology that enables memory coherence between the accelerator and CPU, with very high bandwidth, and does so using well-understood infrastructure based on PCI Express Gen 5.”

Maintaining memory coherency between devices is designed to let users focus on target workloads, rather than redundant memory management hardware in their accelerators. The result: resource sharing for higher performance, reduced software stack complexity and lower overall system cost, according to Shenoy.

“While there exist other interconnect protocols, CXL is unique in delivering CPU/device memory coherence, reduced complexity on the device, and an industry-standard physical and electrical interface together in a single technology for the best plug-and-play experience,” Shenoy said.

In a pre-announcement, Jim Pappas, director of technology initiatives at Intel, said one of CXL’s guiding principles is compatibility across versions.

“Unlike most coherency interfaces that are CPU-specific and change every time a CPU changes from one generation to the next, we’re dedicated to make this a backward-compatible specification,” said Pappas, citing PCIe – “you could have a PCIe Gen 4 system and plug in a PCIe Gen 1 card, and it will work. That level of compatibility is very important for industry adoption.”

About the relatively small size of the initial consortium membership, Pappas said, “We believe by having the right system-level companies, all of the companies that build the target devices, this creates demand and fuels their investment to build the complementary products that can plug into all these systems.”

“We have nine,” he said, adding that the consortium is putting together workgroups to develop version 2.0 of the CXL spec. “Generally, when we put together these types of initiatives, we like to be very, very focused, with the right leaders – leaders who are aligned and dedicated to bringing this technology to market – and we believe that that will pull the rest of the industry along.”

From Microsoft’s perspective, Dr. Leendert van Doorn, distinguished engineer, Azure, said, “Microsoft is joining the CXL consortium to drive the development of new industry bus standards to enable future generations of cloud servers. Microsoft strongly believes in industry collaboration to drive breakthrough innovation. We look forward to combining efforts of the consortium with our own accelerated hardware achievements to advance emerging workloads from deep learning to high performance computing for the benefit of our customers.”
