
Why NVMe is the Storage Protocol of Choice for High Performance Workloads 


NVMe has quickly become the storage protocol of choice for high performance applications. Its streamlined command set, optimized for transactions to flash memory, is very lightweight relative to legacy storage protocols like ATA and SCSI. Because NVMe drives attach directly to the PCIe bus, there is also no translation layer between the processor and storage. Using PCIe also allows easy scaling to multi-lane solutions for enterprise-grade drives that need redundant data paths, without introducing unfamiliar connector types.
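As a minimal illustration of how directly the host sees these devices, the sketch below lists NVMe controllers on a Linux host via sysfs. It assumes a Linux system with the kernel NVMe driver loaded; the attribute names are standard, but availability can vary by kernel version.

```python
# Sketch: enumerate NVMe controllers on a Linux host via sysfs.
# Assumes /sys/class/nvme exists (kernel NVMe driver loaded).
from pathlib import Path

def read_attr(ctrl: Path, name: str) -> str:
    """Read a sysfs attribute, returning 'n/a' if it is absent."""
    try:
        return (ctrl / name).read_text().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read_attr(ctrl, "model")
    # "pcie" for direct-attached drives; fabric-attached controllers
    # report their fabric transport (e.g. "rdma" or "fc") instead.
    transport = read_attr(ctrl, "transport")
    print(f"{ctrl.name}: model={model}, transport={transport}")
```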

For these reasons, NVMe on its own has been an excellent direct-attached storage (DAS) solution for servers. However, the benefits don't end there. While the NVMe specification leverages the PCIe transport to great advantage, the NVMe protocol itself was designed to be relatively transport-agnostic, meaning its data can be carried over other physical transports with only minor modifications. Those modifications are defined in a binding specification which, as the name implies, binds the NVMe protocol to a given transport mechanism; the resulting extension of NVMe across a network is known as NVMe over Fabrics (NVMe-oF). The binding specifications that have seen the most traction to date are for RDMA (used in RoCE [RDMA over Converged Ethernet] and InfiniBand networks) and for Fibre Channel (FC).

This enables a host system to connect to high performance, low-latency NVMe storage over an Ethernet or FC network and have that remote storage appear to the host as local storage, with no intervening filesystem to add latency and overhead.
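A minimal sketch of that flow, using the standard nvme-cli utility over the RDMA transport, is shown below. It assumes nvme-cli is installed on the host; the target address and NVMe Qualified Name (NQN) are hypothetical placeholders for a real NVMe-oF storage array, while 4420 is the standard NVMe-oF service port.

```python
# Sketch: attach a remote NVMe-oF namespace over RDMA using nvme-cli.
# The address and NQN below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"   # hypothetical target IP
TARGET_PORT = "4420"         # standard NVMe-oF service port
TARGET_NQN = "nqn.2024-04.com.example:array1"  # hypothetical NQN

# Ask the target's discovery controller which subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect; the remote namespaces then appear as local block devices
# (e.g. /dev/nvme1n1), indistinguishable to applications from DAS.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```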

NVMe also defines a reservation system to enable multiple hosts to access a single drive simultaneously without interfering with each other. Combining this with NVMe-oF means that massive pools of storage can be created and managed, eliminating the problem of isolated 'islands' of storage by providing a robust and extensible way to access all the storage connected to the fabric.
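The sketch below shows one host claiming a Write Exclusive reservation on a shared namespace using nvme-cli's reservation subcommands. The device path and registration key are hypothetical; real deployments would coordinate distinct keys across the participating hosts.

```python
# Sketch: one host taking a Write Exclusive reservation on a shared
# NVMe-oF namespace via nvme-cli. Device path and key are hypothetical.
import subprocess

DEVICE = "/dev/nvme1n1"   # hypothetical shared namespace
HOST_KEY = "0xBEEF"       # hypothetical registration key for this host

# Register this host's key with the namespace (rrega=0: register).
subprocess.run(
    ["nvme", "resv-register", DEVICE, "--namespace-id=1",
     f"--nrkey={HOST_KEY}", "--rrega=0"],
    check=True,
)

# Acquire a Write Exclusive reservation (rtype=1, racqa=0: acquire),
# so other registered hosts can read but not write until release.
subprocess.run(
    ["nvme", "resv-acquire", DEVICE, "--namespace-id=1",
     f"--crkey={HOST_KEY}", "--rtype=1", "--racqa=0"],
    check=True,
)
```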

A further advantage of NVMe-oF is its ability to leverage existing infrastructure investment as it is introduced to an enterprise. Understanding the tradeoffs involved will be important in determining the best way to deploy NVMe-oF in a given data center.

Perhaps significant investment has been made in a cable plant to support 25G and 100G Ethernet. NVMe-oF hosts and storage arrays with Ethernet interfaces will be able to connect to that cable plant without issue. However, it may be necessary to upgrade to switches that support the lossless Ethernet behavior that an RDMA-based storage network needs.

On the FC side, perhaps an enterprise has invested heavily in FC storage arrays. FC-NVMe arrays can be introduced to co-exist on the same SAN with the legacy FC arrays, perhaps requiring only an upgrade to higher-speed FC switches to handle the new FC-NVMe traffic.

Clearly there is a lot of upside to migrating to NVMe and NVMe-oF solutions, both in terms of performance and flexibility. Indeed, one of the biggest challenges with NVMe-oF will be determining exactly how best to use all of that flexibility. Up and down the stack, there are opportunities for cost and performance tradeoffs.

A few of those tradeoffs are outlined below:

  • Different types of flash memory (SLC, MLC, TLC) have different price, performance, and endurance parameters. Which one is ideal for the desired workload? (A worked endurance example follows this list.)
  • NVMe drives may be single-, dual- or multi-port. Will those ports be used for redundant data paths, performance optimization or both?
  • Existing infrastructure may determine how and when to introduce NVMe-oF to the datacenter.
  • NVMe and NVMe-oF are already highly optimized for low latency. Further optimization can be achieved by integrating more functions into silicon, and several startups are doing exactly that for NVMe-oF. Will that extra initial cost pay off?
  • RoCE networks can be optimized for different traffic profiles. It's necessary to determine which configuration will best suit a given workload.
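To make the first of these tradeoffs concrete, flash endurance is commonly rated in drive writes per day (DWPD) or total bytes written (TBW). The sketch below compares illustrative DWPD figures, which are hypothetical rather than vendor ratings, against a hypothetical sustained write workload:

```python
# Sketch: comparing flash endurance against a workload's write rate.
# DWPD figures are illustrative only; consult vendor datasheets.
CAPACITY_TB = 3.84         # hypothetical drive capacity
WARRANTY_YEARS = 5
WORKLOAD_TB_PER_DAY = 2.0  # hypothetical sustained host writes

for cell_type, dwpd in [("SLC", 10.0), ("MLC", 3.0), ("TLC", 1.0)]:
    # TBW rating: DWPD x capacity x days in the warranty period.
    tbw = dwpd * CAPACITY_TB * 365 * WARRANTY_YEARS
    years_at_workload = tbw / (WORKLOAD_TB_PER_DAY * 365)
    print(f"{cell_type}: ~{tbw:,.0f} TBW, "
          f"lasts ~{years_at_workload:.1f} years at this workload")
```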

Finding the answers to these questions will be unique for each application. NVMe and NVMe-oF offer a great deal of power and flexibility, and it's clear that any high performance application could benefit from deploying some flavor of NVMe-oF. The challenge will be in determining the best-performing combination of these variables for your application.

David Woolf is senior engineer, data center technologies, at the University of New Hampshire InterOperability Laboratory.
