
In-Memory Platform Targets Real-Time Data 

Querying real-time data on a massive scale remains a huge infrastructure hurdle that is increasingly being addressed by in-memory data grids. Hazelcast Inc., an in-memory computing vendor, and partner C24 Technologies have rolled out an in-memory approach that promises to cut enterprise data storage requirements by as much as half.

The partners said Wednesday (Sept. 9) their "Hypercast" in-memory computing platform responds to the inability of relational database management systems to keep pace with massive data volumes. Its low-latency in-memory approach is said to provide faster processing and querying of real-time data.

The partners maintain that one of the keys to achieving real-time, low-latency performance is the ability to ingest, store and query data in-memory. The catch is that traditional in-memory computing approaches sacrifice real-time queries for data compression and faster transmission speeds.

The Hypercast partnership combines Hazelcast's open-source in-memory computing platform with London-based C24's Preon Java-based data binding technology. The partners promise that combining in-memory data grids with data compression techniques will yield "sub-millisecond" data access speeds.
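
As a rough illustration of the grid side of that combination, the sketch below uses Hazelcast's open-source Java API to hold already-compacted messages as byte arrays in a distributed map. The map name, key and payload are hypothetical, not part of the announcement.

    // Minimal sketch, assuming Hazelcast's open-source Java API: compacted
    // messages are kept as byte arrays in a distributed, partitioned map.
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;   // package moved to com.hazelcast.map in Hazelcast 4+

    public class GridIngestSketch {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // Each entry is a message already bound to its compact binary form.
            IMap<Long, byte[]> trades = hz.getMap("trades");
            byte[] packed = loadCompactedMessage();   // stand-in for the binding step
            trades.put(42L, packed);

            // A get() returns the raw bytes; field decoding is deferred to the caller.
            byte[] fetched = trades.get(42L);
            System.out.println("stored " + fetched.length + " bytes for key 42");

            hz.shutdown();
        }

        // Hypothetical helper: a real system would produce this with a data
        // binding library such as C24's Preon.
        private static byte[] loadCompactedMessage() {
            return new byte[200];
        }
    }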

The partners claimed that internal benchmark testing of Preon data running on the Hypercast platform reached "microsecond speeds."

Hazelcast claims its binary storage approach reduces memory and storage requirements for complex data by "many orders of magnitude." In the new platform, Preon is used to ingest and compact high volumes of complex data messages into byte arrays via C24's "virtual getter" interface. In testing, C24 said 7 KB messages were compacted to about 200 bytes.
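
The "virtual getter" idea can be sketched in plain Java: keep each message as a packed byte array and decode individual fields on demand rather than inflating a full object graph. The field names, offsets and encodings below are invented for illustration and are not C24's actual Preon layout.

    // Conceptual sketch of on-demand field access over a packed byte array.
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public final class CompactTradeMessage {
        private final byte[] packed;   // compacted wire form, kept as-is in memory

        public CompactTradeMessage(byte[] packed) {
            this.packed = packed;      // no up-front deserialization into objects
        }

        // "Virtual getters": each call decodes one field straight from the bytes.
        public long tradeId()  { return ByteBuffer.wrap(packed, 0, 8).getLong(); }
        public double price()  { return ByteBuffer.wrap(packed, 8, 8).getDouble(); }
        public int quantity()  { return ByteBuffer.wrap(packed, 16, 4).getInt(); }
        public String currency() {
            return new String(packed, 20, 3, StandardCharsets.US_ASCII);
        }
    }

The intent of the pattern is that only the packed bytes, not an inflated object graph, sit in memory.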

The partners added that this compression capability has been folded into the Hypercast platform, which runs on Hazelcast's elastic cluster of memory servers. The clusters can be sized to scale with the application.

Among Hazelcast's financial services customers are Capital One, the Chicago Board Options Exchange and Deutsche Bank. In-memory vendors and financial services software providers are looking for new ways to cut the cost of handling huge data volumes by shrinking their memory footprint without sacrificing fast database queries.

The partners said an unnamed financial services software vendor benchmarked Hypercast against other platforms. The Preon data format was found to store "nearly twice as much data in the same memory footprint" as competing serialization approaches, the partners claimed.


The result, the partners assert, is much faster processing of tens of millions of messages per second along with an equal volume of queries.
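
On the query side, Hazelcast's open-source API runs predicate queries in parallel across the cluster's partitions. The hedged sketch below assumes values that expose queryable fields (plain objects here, though extractors over binary data are also possible); the class, field and map names are illustrative.

    // Illustrative distributed query; the Trade class, field names and map
    // name are hypothetical, not from the Hypercast announcement.
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;   // com.hazelcast.map in Hazelcast 4+
    import com.hazelcast.query.Predicates;

    import java.io.Serializable;
    import java.util.Collection;

    public class GridQuerySketch {
        public static class Trade implements Serializable {
            public long tradeId;
            public double price;
            public Trade(long tradeId, double price) {
                this.tradeId = tradeId;
                this.price = price;
            }
        }

        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<Long, Trade> trades = hz.getMap("trades-by-id");
            trades.put(1L, new Trade(1L, 101.5));
            trades.put(2L, new Trade(2L, 99.0));

            // The predicate is evaluated member-side, in parallel, across partitions.
            Collection<Trade> hits = trades.values(Predicates.greaterThan("price", 100.0));
            System.out.println(hits.size() + " trades above 100");

            hz.shutdown();
        }
    }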

The demand for real-time data access at scale is growing, especially in the financial services sector. The partners cited industry estimates forecasting that the in-memory data grid market will grow at a 32 percent compound annual rate over the next five years.

The Hypercast platform appears to meet at least some of the key requirements for migrating to in-memory platforms. Among these, experts note, is the ability to optimize in-memory computing for specific applications. As data volumes soar, in-memory systems also need to scale up to the industry standard of about 6 TB. Experts predict that larger systems operated by financial services firms running platforms like SAP HANA may require databases as large as 30 TB.

Hazelcast said the new platform incorporates its high-density memory store designed to move "hundreds of terabytes."
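
For scale at that level, the relevant knob in Hazelcast is its native-memory (High-Density Memory Store) configuration, an Enterprise feature. A hedged sketch follows, noting that the exact configuration API differs across releases and the sizes chosen here are arbitrary.

    // Illustrative native-memory configuration; an Enterprise feature, and the
    // API details (and deprecations) vary by Hazelcast version.
    import com.hazelcast.config.Config;
    import com.hazelcast.config.InMemoryFormat;
    import com.hazelcast.config.NativeMemoryConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.memory.MemorySize;
    import com.hazelcast.memory.MemoryUnit;

    public class HdMemorySketch {
        public static void main(String[] args) {
            Config config = new Config();

            // Reserve off-heap memory on each member (size here is arbitrary).
            config.getNativeMemoryConfig()
                  .setEnabled(true)
                  .setSize(new MemorySize(100, MemoryUnit.GIGABYTES))
                  .setAllocatorType(NativeMemoryConfig.MemoryAllocatorType.POOLED);

            // Store this map's entries off the Java heap to reduce GC pressure
            // on very large per-node data sets.
            config.getMapConfig("trades").setInMemoryFormat(InMemoryFormat.NATIVE);

            HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        }
    }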

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
