
Thanks for the Memory: Diablo/Inspur's In-Memory Benchmark Boast

Cramming more data into memory is a permanent objective for technology strategists seeking to satisfy the surging demand for real-time big data analytics. With the ultimate goal of reducing data center "server sprawl," memory technology specialist Diablo and server maker Inspur announced last July a collaborative effort to expand in-memory computing for Apache Spark workloads. Now the two companies have announced benchmark results that they claim cut processing times for graph analytics in half.

At the core of the joint effort is Diablo's Memory1, a flash-as-memory DIMM (dual in-line memory module) that Diablo says is the first to combine the DIMM form factor with NAND flash, delivering what the company calls the highest-capacity byte-addressable memory modules available. The benefits claimed for this approach include higher memory capacity than DRAM DIMMs, improved performance from increased data locality and reduced access latency, and lower total cost of ownership (TCO) because fewer servers are required to support memory-constrained applications.

According to the two companies, Inspur Memory1 servers deliver up to 40TB of application memory in a single rack.

“Dramatically expanding the application memory available in a single server directly addresses key issues found in traditional, DRAM-only deployments for big data processing platforms like Apache Spark,” said Maher Amer, Diablo CTO. “Because each server is capable of doing more work, jobs can be more efficiently handled with fewer servers, which also minimizes the associated networking and operational expenses. A tiered NAND flash approach is key to providing the benefits of real-time analysis while minimizing the expense required to collect and interpret valuable information.”

Apache Spark's open-source platform enables high-speed data processing for large and complex datasets. The joint benchmark ran a k-core decomposition algorithm on Spark's GraphX graph-analytics engine, a workload Diablo/Inspur characterized as a particularly stressful series of memory-intensive tests.
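The exact code used in the benchmark was not published. As a rough illustration of the workload only, the following is a minimal Scala sketch of one common way to express k-core decomposition on GraphX, using the standard "peeling" approach of repeatedly dropping vertices whose degree falls below k; the KCoreSketch object, the toy graph, and the local master setting are illustrative assumptions, not the benchmark code.

```scala
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.sql.SparkSession

import scala.reflect.ClassTag

object KCoreSketch {

  // Classic "peeling" formulation of k-core decomposition: repeatedly drop
  // vertices whose degree is below k until the graph stops shrinking.
  def kCore[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED], k: Int): Graph[VD, ED] = {
    var g = graph.cache()
    var shrinking = true
    while (shrinking) {
      // Attach the current degree to every vertex (0 if it has no edges left).
      val withDegrees = g.outerJoinVertices(g.degrees) {
        (id, attr, deg) => (attr, deg.getOrElse(0))
      }
      // Keep only vertices of degree >= k; subgraph drops their edges as well.
      val pruned = withDegrees
        .subgraph(vpred = (id, v) => v._2 >= k)
        .mapVertices((id, v) => v._1)
        .cache()
      shrinking = pruned.numVertices < g.numVertices
      g = pruned
    }
    g
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("kcore-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Toy graph: a triangle (1-2-3) plus a pendant vertex 4 hanging off 3.
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1), Edge(3L, 4L, 1)))
    val graph = Graph.fromEdges(edges, defaultValue = 0)

    // The 2-core keeps the triangle and peels away the pendant vertex.
    kCore(graph, k = 2).vertices.collect().foreach(println)
    spark.stop()
  }
}
```

Each peeling pass touches the full vertex and edge sets, so run time is dominated by how much of the graph's working set fits in memory, which is precisely the property the benchmark is meant to stress.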

The two companies tested performance on the same cluster of five servers (Inspur NF5180M4, two 14-core Intel Xeon E5-2683 v3 processors for 28 cores per node, 256GB DRAM, 1TB NVMe drive). The servers were first configured to use only the installed DRAM to process multiple datasets. Then the cluster was set up with 2TB of Diablo Memory1 per server. The Apache Spark k-core algorithm was run against three graph datasets of varying sizes.
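The article does not say how Spark itself was configured on the two setups. As a hypothetical sketch, the snippet below shows how executor sizing might differ between the DRAM-only (256GB) and Memory1 (2TB) configurations; every value is an assumption for illustration, the point being that because Memory1 is presented as byte-addressable application memory, the same job can in principle run unmodified with larger memory settings.

```scala
import org.apache.spark.sql.SparkSession

object BenchmarkConfigSketch {
  // Hypothetical executor sizing for the two cluster setups described in the
  // article; none of these numbers come from the published benchmark.
  val dramOnly = Map(
    "spark.executor.instances" -> "5",     // one executor per server
    "spark.executor.cores"     -> "24",
    "spark.executor.memory"    -> "200g")  // must fit inside 256GB of DRAM

  val memory1 = dramOnly +
    ("spark.executor.memory" -> "1600g")   // scaled up to use 2TB of Memory1

  def builderFor(appName: String, conf: Map[String, String]): SparkSession.Builder =
    conf.foldLeft(SparkSession.builder.appName(appName)) {
      case (b, (key, value)) => b.config(key, value)
    }

  def main(args: Array[String]): Unit = {
    // Local smoke test; executor settings only take effect on a real cluster.
    val spark = builderFor("graphx-kcore-benchmark", memory1)
      .master("local[*]")
      .getOrCreate()
    println(spark.conf.get("spark.executor.memory"))  // prints 1600g
    spark.stop()
  }
}
```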

Diablo/Inspur said that for the smallest dataset (164GB) the DRAM-only servers actually outperformed the Memory1 servers by 27 percent, while also noting that "graph analysis workloads rarely operate on such a small amount of information."

However, according to Diablo/Inspur, the medium-sized dataset was completed in 156 minutes with Memory1, versus 306 minutes with DRAM alone, a reduction of roughly half; and on the large dataset, the Memory1 servers completed the job in 290 minutes, while the DRAM-only servers were unable to finish due to lack of memory.

Clive Longbottom, a founder of UK-based industry analyst firm Quocirca, said Diablo’s flash-as-memory strategy could be significant.

“There is an inevitable requirement for faster and faster compute capabilities,” Longbottom told EnterpriseTech. “Although the move from spinning disk to flash has provided a couple of orders of magnitude-possible data speed improvements, just based on the move from spinning disk to all-flash arrays, this does not allow anyone to rest on their laurels. As more organizations move to all-flash arrays, all that is happening is that the general bar to compete is raised: the capability to differentiate in the market becomes harder as everyone does the same….”

“However, NVDIMM takes this further still. Being able to use memory bus speeds removes any data stream constraints. The problem to date has been expense and availability: an all-volatile DIMM approach is extremely expensive and difficult to get to work effectively. A mixed volatile DIMM/NVDIMM approach needs a lot of ‘secret sauce’ in how the data tiering is managed. This is where Diablo seems to have created for itself a useful and very lucrative niche.”

Longbottom added that while benchmarks “are easy enough to game…, they do provide a standard means of comparison.” He also said using Spark’s GraphX analytics engine for the benchmark is meaningful because use of graph analysis, with its ability to define complex relationships between disparate data, “requires a lot of compute power against a quantity of data – the higher the quantity of data, the growth in compute/fast data handling grows almost exponentially.”

EnterpriseAI