Advanced Computing in the Age of AI | Friday, April 19, 2024

Startup MemVerge Launches Memory-centric Mission 

Memory at the center of the computing universe, displacing the processor, has long been envisioned as instrumental to radically improved data center performance. Combined with accelerated processing (GPUs, FPGAs, etc.), faster interconnects (NVLink, optical) and more powerful networks (5G and its successors), the result will be next-generation, memory-centric compute enabling whole new classes of enterprise AI, HPDA and HPC application workloads. That’s the dream, anyway.

It’s all about getting more data into memory, forgoing latency-laden retrieval of data from traditional storage. We recall a memorable presentation at the 2017 ISC conference in which Dr. Eng Lim Goh, HPE’s SVP/CTO, AI, talked of a future in which a decade of corporate data would reside in live memory. An important step in that direction came with Intel’s April 2019 launch of the Optane DC storage-class memory device, which gives applications access to petabyte-size data pools.

But MemVerge, a three-year-old Milpitas, CA startup, contends that a layer of data services software needs to operate on top of persistent memory hardware to move the memory-centric vision closer to reality. Today, MemVerge launched a strategy around what it calls big memory computing, a combination of DRAM, persistent memory and MemVerge Memory Machine software technologies designed to deliver memory that is abundant, persistent and highly available.

While in-memory computing has grown over the past decade, DRAM’s high cost, small scale and volatility have largely relegated it to performance-critical workloads, MemVerge CEO Charles Fan told us. But citing an IDC study finding that over the next five years, 25 percent of all data will be real-time data, he said “more and more mission critical applications will need to be processing data at an increasing rate as well as volume. And the answer to that we believe is a memory centric architecture. The way I/O works today, where data is constantly being moved between storage devices and memory media, will no longer be acceptable for many of these use cases… We think persistent memory changes the game. It's really creating a new media that is bigger and cheaper and persistent.”

Intel Optane DC

In its current incarnation, the company’s offering is enabled by Optane, which Fan called “a groundbreaking technology. And we also believe this technology will get better over the coming years, whether it is from Intel or other makers of memory. We think by 2022, we'll see at least two or three additional memory makers delivering persistent memory … serving the same purpose of enabling this memory centric data center.”

Fan, a veteran of VMware and EMC, said Memory Machine software runs on servers and server clusters from all the prominent hardware vendors as well as in virtual machines and on clouds “once the cloud has the right hardware.” The product is in beta now, Fan said, will enter an early access program with a limited number of customers later this month, and is scheduled for GA later this year. It will support Intel Optane Gen 2 “out of the box” when that product becomes available – in fact, Fan said Intel has already provided MemVerge with Gen 2 samples.

Three roadblocks stand in the way of broad-based adoption of memory-centric computing, Fan said: existing applications are not plug-and-play compatible – they need to be rewritten for persistent memory; a lack of data services makes for slow recovery from system crashes; and memory is siloed in separate servers, so it cannot be shared.

Fan said Memory Machine software addresses all three problems.
On running existing applications in memory-centric infrastructures, Fan said you’d normally have to program them “to take full advantage of big memory that is byte addressable and that is persistent… You have to change your application to program it to a new API, which is a persistent memory API or persistent memory programming model… How you get them to work on top of this new infrastructure is a challenge.”
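To make the “persistent memory programming model” concrete, the sketch below emulates it with an ordinary memory-mapped file in Python. This is purely illustrative, not MemVerge or Intel code: on real persistent memory the mapping would target a DAX device and the persist step would be a cache-line writeback rather than an `msync`-style flush, but the shape of what an application must be rewritten to do – store bytes directly, then explicitly persist them – is the same.

```python
# Illustrative sketch (not MemVerge code): what programming to a
# persistent memory model entails. We emulate byte-addressable
# persistent memory with a memory-mapped file; on real hardware the
# mapping would target a DAX device and the flush would be a
# cache-line writeback instead of a file-backed sync.
import mmap
import os
import tempfile

POOL_SIZE = 4096

path = os.path.join(tempfile.mkdtemp(), "pmem_pool")
with open(path, "wb") as f:
    f.truncate(POOL_SIZE)           # reserve the "persistent" region

fd = os.open(path, os.O_RDWR)
pool = mmap.mmap(fd, POOL_SIZE)     # byte-addressable view of the pool

pool[0:5] = b"hello"                # ordinary stores, no read()/write() I/O
pool.flush()                        # the explicit persist step apps must add

pool.close()
os.close(fd)

with open(path, "rb") as f:         # the data survives the mapping's teardown
    print(f.read(5))                # b'hello'
```

The point of the example is the extra `flush()` call: it is exactly this kind of explicit persistence discipline, absent from conventional DRAM code, that forces application rewrites – and that a transparent virtualization layer aims to spare applications from.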

Fan said Memory Machine software virtualizes DRAM and PMM (persistent memory modules), both locally and across a server cluster, over low-latency networks. “We support both RDMA over Converged Ethernet (RoCE), as well as RDMA over InfiniBand, and we can create a memory lake supporting the application above in a transparent way. So you do not have to change applications.”

As for memory availability, Fan said MemVerge data services include ZeroIO data snapshotting, replication and tiering. At the core of Memory Machine software is its distributed persistent memory engine for shared memory across a cluster, "the software engine that processes memory through our persistent memory allocator,” Fan said. “And it connects them through our RDMA transport across a cluster managed by our cluster manager.”
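The article does not describe how ZeroIO snapshotting works internally, but the classic technique for capturing in-memory state instantly, without copying it wholesale, is copy-on-write: the snapshot itself is free, and old page contents are preserved lazily only when they are first overwritten. The toy sketch below illustrates that general idea; it is an assumption-laden illustration, not MemVerge’s implementation.

```python
# Toy copy-on-write snapshot of a paged memory region -- an
# illustration of the general technique behind instant in-memory
# snapshots, not a description of MemVerge's ZeroIO internals.
PAGE = 4

class CowMemory:
    def __init__(self, size):
        self.pages = [bytearray(PAGE) for _ in range(size // PAGE)]
        self.snapshots = []         # each snapshot: {page_index: frozen bytes}

    def snapshot(self):
        # O(1): record nothing up front; pages are captured lazily on write.
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, addr, data):
        idx = addr // PAGE
        for snap in self.snapshots:
            if idx not in snap:     # first write since this snapshot was taken:
                snap[idx] = bytes(self.pages[idx])  # preserve the old page
        off = addr % PAGE
        self.pages[idx][off:off + len(data)] = data

    def read_snapshot(self, snap_id, addr):
        idx, off = divmod(addr, PAGE)
        page = self.snapshots[snap_id].get(idx, self.pages[idx])
        return bytes(page[off:off + 1])

mem = CowMemory(16)
mem.write(0, b"A")
s = mem.snapshot()
mem.write(0, b"B")                  # triggers the copy of page 0
print(mem.read_snapshot(s, 0))      # b'A' -- the snapshot still sees old data
print(bytes(mem.pages[0][:1]))      # b'B' -- live memory sees the new data
```

Because untouched pages are never copied, a snapshot of a multi-terabyte memory region costs almost nothing at creation time, which is what makes snapshot-based crash recovery from memory practical at all.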

“Memory til now is still primarily a local property, you access your local memory,” Fan said. “And the new memory is bigger..., it's up to six terabytes of persistent memory per server that has two sockets of CPUs. If you need more than that, you need a layer of software that can scale out your memory to the other servers so that they can be pulled together into a memory lake that's available for applications.”
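The scale-out idea Fan describes – stitching each server’s local memory into one “memory lake” with a single address space – can be sketched in miniature. The toy below maps global addresses onto per-server regions; it is an illustration of the pooling concept only (real systems do the remote access over RDMA, not Python lists, and the node names here are invented).

```python
# Toy "memory lake": stitch per-server memory regions into one flat
# global address space. Purely illustrative of the pooling idea;
# a real implementation reaches remote regions over RDMA.
class Node:
    def __init__(self, name, size):
        self.name = name
        self.mem = bytearray(size)

class MemoryLake:
    def __init__(self, nodes):
        self.nodes = nodes
        self.bases, base = [], 0
        for n in nodes:             # each node's base address in global space
            self.bases.append(base)
            base += len(n.mem)
        self.size = base            # pooled capacity of the whole cluster

    def _locate(self, addr):
        # Map a global address to (owning node, local offset).
        for node, base in zip(reversed(self.nodes), reversed(self.bases)):
            if addr >= base:
                return node, addr - base
        raise IndexError(addr)

    def write(self, addr, data):
        node, off = self._locate(addr)
        node.mem[off:off + len(data)] = data
        return node.name            # which server actually holds the bytes

    def read(self, addr, n):
        node, off = self._locate(addr)
        return bytes(node.mem[off:off + n])

lake = MemoryLake([Node("server-a", 8), Node("server-b", 8)])
print(lake.size)                    # 16 -- pooled capacity
print(lake.write(10, b"hi"))        # server-b -- global address 10 lands there
print(lake.read(10, 2))             # b'hi'
```

The application above the lake sees one large address space; which physical server serves a given address is the pooling layer’s problem, which is why such a layer can hide the cluster entirely from unmodified applications.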

“These are the three areas that hardware itself does not provide a full solution,” he said, “where you need what we call big memory software. It needs to create compatibility; it needs to deliver the data services and it needs to provide the scale out of pooling of memory across multiple servers. That is what our software does.”

“The widespread move of enterprises to much more data-centric business models…is exposing some real performance limitations in the data infrastructure,” said Eric Burgener, research VP, Infrastructure Systems, Platforms and Technologies, at IDC. “Recent advancements in persistent memory and software-defined memory technology have been brought together by MemVerge, a visionary software startup, to create a new technology category to address these limitations, called Big Memory Computing. Without requiring any existing application rewrites, Big Memory Computing delivers the highest category of data persistence performance with the enterprise-class data services needed for the real-time, mission-critical workloads that will increasingly drive competitive differentiation for digitally transformed enterprises.”

MemVerge CEO Charles Fan

Fan said MemVerge currently comprises about 40 people, and the company, having recently completed a $19 million round of new venture funding that included Intel along with Cisco Investments, NetApp, SK Hynix and others, plans to expand both its technical and sales/marketing teams.

"We came out of stealth about a year ago, same time Intel announced the Optane memory, and at that time the product was entering beta,” Fan said. “This is the same technology, just more mature now. We are making it available to customers to deploy in production environment."

Fan warmed to the memory of Dr. Goh’s comments three years ago at ISC in Frankfurt.

“It’s very aligned with our vision,” he said. “We are not new with this vision. People have been dreaming about this starting about 20 years ago. But now I think there's a realistic chance for it to become reality. Even from let's say an HPC developer point of view, imagine if you have hundreds of terabytes of memory accessible per computing node? And all that memory is persistent and protected? And what additional possibilities you have in developing your simulations, your analytics and computation? I think this opens up a new world, for the people on the frontiers… in particular for those on the cutting edge.”

EnterpriseAI