‘Computational Storage’: Edge Intelligence for Next-Gen IoT 

The intelligent edge has gained much buzz recently, including interesting comments from Microsoft CEO Satya Nadella during a keynote at Mobile World Congress. But as IoT devices and applications become more advanced, producing more data and demanding greater speed and power, the need arises for a more efficient edge computing approach.

Consider a common IoT product, such as a connected refrigerator. It can link with your phone or laptop to let you see the temperature, adjust ice settings and even stream music or video. That’s pretty awesome, but a truly intelligent refrigerator can do far more: telling you how much milk is left, estimating calories or gauging spoilage. Early edge computing platforms leveraged common server and storage technology, which was sufficient for supporting the traditional type of IoT devices and apps. However, intelligent edge platforms are needed to support more advanced functions, which involve far more data. To become intelligent, edge platforms require major innovations that introduce new efficiencies and help determine how to analyze and manage the data in the most effective way.

The example above and other next-generation IoT use cases involve moving large quantities of data, which is always challenging. Even with the advent of 5G, moving big datasets to and from edge networks creates major bottlenecks, since these datasets are approaching petabyte scale. This strains existing edge platforms, which rely on a typical von Neumann computing architecture. Due to the size and power constraints of edge platforms, adding more host CPU processing hasn’t been feasible, and simply increasing the number of systems in a deployment runs into the same size and power limits.

For edge platforms to bridge this gap and become intelligent, they need innovative architectures. Vendors are attempting to address the challenge of data movement with disaggregated solutions, such as NVMe over Fabrics (NVMe-oF), composable architectures, and GPU and FPGA accelerators. While these can speed up the process to some degree, they don’t move the needle enough to make next-generation IoT use cases work smoothly. All of these solutions demand space and power that edge platforms may not have, and none of them change how the stored data itself is moved and managed.

But what if you didn’t need to move all that data?

“Computational storage” is a new approach that minimizes data movement and creates intelligent edge computing with intelligent storage. In-situ processing is the key to computational storage: it builds data processing capabilities into storage devices, such as NVMe SSDs, eliminating the need to move entire datasets. In-situ processing solves the problem outlined above by bringing compute to where the data resides. This lets you pre-process data on the drive rather than moving all of it to the host CPU for processing, which is faster and more efficient. Overall, computational storage can dramatically reduce the time it takes to process a petabyte of data in high-capacity, read-intensive analytics applications.
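
As a rough, hypothetical sketch of the idea (the ComputationalSSD class and its query() method below are invented purely for illustration, not any real device or vendor API), the snippet contrasts the conventional flow, where every record is pulled into host memory before filtering, with an in-situ flow where the drive applies the filter locally and returns only the matches.

```python
# Conceptual sketch of in-situ (computational storage) processing.
# The ComputationalSSD class and its methods are hypothetical stand-ins,
# not a real NVMe or vendor API.

class ComputationalSSD:
    """Stands in for an NVMe SSD with on-device compute."""

    def __init__(self, records):
        self.records = records              # data resident on the drive

    def read_all(self):
        # Conventional path: every record crosses the storage bus.
        return list(self.records)

    def query(self, predicate):
        # In-situ path: the drive filters locally; only matches leave the drive.
        return [r for r in self.records if predicate(r)]


drive = ComputationalSSD(records=range(1_000_000))

# Traditional flow: move everything to the host, then filter on the host CPU.
host_copy = drive.read_all()
hot_records = [r for r in host_copy if r % 1000 == 0]

# Computational storage flow: only the reduced result is transferred.
hot_records_in_situ = drive.query(lambda r: r % 1000 == 0)

assert hot_records == hot_records_in_situ
print(f"host transfer: {len(host_copy):,} records vs "
      f"in-situ transfer: {len(hot_records_in_situ):,} records")
```

The payload that crosses the storage bus shrinks from the full dataset to just the filtered result, which is the saving computational storage is after.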

And it’s not hard to deploy. Computational storage does not require a true ground-up approach and instead can be implemented by modifying existing edge platforms, making adoption easier and more scalable. In essence, the concept is to take a host-driven, memory-limited application and execute that workload in each device installed on the storage bus. In one case, where each drive provides four processing cores, a system with 10 drives effectively gains 40 cores of parallel processing with no net physical changes or additions, save using computational storage SSDs instead of traditional ones. Moving compute into storage, where the data resides, spares the host CPU and memory from the traditional round-robin data management (read data from storage into memory, analyze, dump, repeat). Instead, the host CPU simply has to aggregate the results from all the parallel paths.
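
To make the “aggregate the results from all the parallel paths” pattern concrete, here is a hedged sketch in which ten simulated drives each reduce their own shard of data and the host only merges the small partial results; the shard contents, the process_on_drive() function and the use of threads to stand in for on-drive cores are all assumptions for illustration, not a real computational storage interface.

```python
# Sketch of per-drive parallel reduction with host-side aggregation.
# Drive counts, data and the thread pool are illustrative stand-ins only.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

NUM_DRIVES = 10

# Pretend each drive holds its own shard of event records.
drive_shards = [
    [f"event_{(drive_id + i) % 5}" for i in range(10_000)]
    for drive_id in range(NUM_DRIVES)
]

def process_on_drive(shard):
    """Stand-in for the reduction a drive's on-board cores would run locally."""
    return Counter(shard)

# Fan out: each simulated drive computes its partial result in parallel.
with ThreadPoolExecutor(max_workers=NUM_DRIVES) as pool:
    partial_counts = list(pool.map(process_on_drive, drive_shards))

# Host side: no raw data is touched; only small partial results are merged.
total = Counter()
for partial in partial_counts:
    total += partial

print(total.most_common(3))
```

In this arrangement only the per-drive partial results, not the raw data, ever cross the storage bus, which is what frees the host CPU and memory from the read-analyze-dump loop described above.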

By eliminating most data movement, computational storage and in-situ processing remove a major bottleneck that has prevented more advanced IoT applications from taking off. This method ensures that the data gathered by these platforms can deliver on its promise of improving analytics and enabling important new use cases.

Imagine a commercial jet that can determine in seconds rather than hours what its maintenance needs are as it sits outside the gate before its next takeoff. Another great example of an edge implementation is object tracking in surveillance: consider a remote camera platform that can analyze and track a single person in a stadium in real time by running the AI-based search algorithm while the data is being stored on the cameras, with no need to ‘look back’ over the data. We can even take this to the autonomous “anything,” in which telemetry, statistics and usage parameters are all stored locally on the machine and only the truly valuable bits are sent “over the air” via 5G, saving bandwidth and allowing faster aggregation of data from all the inputs. The next generation of advanced IoT will rely on an intelligent edge infrastructure that’s powered by computational storage.

Scott Shadley is principal technologist, NGD Systems.
