
InstaDeep Powers AI as a Service With Shared NVMe 

SAN JOSE, Calif., May 7, 2019 -- Excelero, a disruptor in software-defined block storage, today announced that global Artificial Intelligence (AI) innovator InstaDeep has deployed Excelero’s NVMesh software on Boston Ltd. Flash-IO Talyn storage systems to create a highly efficient data center infrastructure using a scalable pool of high-performance NVMe flash that ensures full utilization of GPU processing power and maximum ROI.

InstaDeep offers a pioneering AI as a Service solution enabling organizations of any size to leverage the benefits of AI and Machine Learning (ML) without the time, costs and expertise required to run their own AI stacks. Excelero’s NVMesh®, in turn, allows InstaDeep to access the low-latency, high-bandwidth performance that is essential for running customer AI and ML workloads efficiently, while gaining the scalability vital to InstaDeep’s own rapid growth.

“Finding a storage infrastructure that would scale modularly and was highly efficient for AI and ML workflows is no small challenge,” explained Amine Kerkeni, Head of AI Product at InstaDeep. “Our clients simply will not achieve the performance they need if an infrastructure starves the GPUs with slow storage or wastes time copying data to and from systems. Excelero NVMesh ticked all the boxes for us and more.”

By allowing GPU-optimized servers to access remote, scalable, high-performance NVMe flash drives as if they were local flash, with full IOPS and bandwidth, the InstaDeep team achieves highly efficient use of both the GPUs and the associated NVMe flash. The end result is higher ROI, easier workflow management and faster time to results.
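To make the idea concrete, the minimal sketch below shows one way a training job could keep a GPU fed from a shared NVMe-backed volume. It is an illustration only, not InstaDeep's actual pipeline: the mount point, the .npy file layout and the use of PyTorch are assumptions.

```python
# Minimal sketch: keeping a GPU fed from fast shared storage.
# Assumptions (not from the announcement): the shared NVMe-backed volume is
# mounted at /mnt/nvmesh, samples are same-shaped .npy files, PyTorch is installed.
import glob
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class NpyDataset(Dataset):
    def __init__(self, root="/mnt/nvmesh/train"):   # hypothetical mount point
        self.files = sorted(glob.glob(f"{root}/*.npy"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Each worker reads directly from the shared NVMe pool, so I/O
        # overlaps with GPU compute instead of stalling it.
        return torch.from_numpy(np.load(self.files[idx]))

loader = DataLoader(
    NpyDataset(),
    batch_size=64,
    num_workers=8,        # parallel readers hide storage latency
    pin_memory=True,      # faster host-to-GPU copies
    prefetch_factor=4,    # keep batches queued ahead of the GPU
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for batch in loader:
    batch = batch.to(device, non_blocking=True)
    # ... training step would run here ...
```

The point of the sketch is simply that when the storage behind the mount delivers local-NVMe-class bandwidth, ordinary prefetching is enough to keep the GPU from idling on I/O.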

InstaDeep’s first Excelero system includes a 2U Boston Flash-IO Talyn server with Micron NVMe flash and Excelero NVMesh software, providing access to up to 100TB of external high-performance storage. Leveraging the Mellanox 100Gb/s InfiniBand network cards in the NVIDIA DGX, the GPUs use the NVMe storage at local performance. The ability to choose any file system to run on NVMesh was an immense benefit. Early tests immediately indicated that external NVMe storage with Excelero delivers equal or better performance than the local cache in the NVIDIA DGX.
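As a rough illustration of the kind of comparison described above (not the actual test methodology), the sketch below measures sequential-read bandwidth from two block devices. The device paths are hypothetical, and a production benchmark would normally use a dedicated tool such as fio with direct I/O and varied queue depths.

```python
# Minimal sketch of a sequential-read bandwidth comparison between a local
# NVMe drive and a pooled (remote) NVMe volume. Device paths are assumptions;
# reading block devices typically requires root privileges.
import os
import time

def read_bandwidth(path, block_size=4 * 1024 * 1024, total=8 * 1024**3):
    """Read `total` bytes from `path` in `block_size` chunks, return GB/s."""
    fd = os.open(path, os.O_RDONLY)
    try:
        read = 0
        start = time.perf_counter()
        while read < total:
            buf = os.read(fd, block_size)
            if not buf:          # stop early if the device is smaller than `total`
                break
            read += len(buf)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return read / elapsed / 1e9

for label, dev in [("local NVMe", "/dev/nvme0n1"),          # hypothetical local drive
                   ("NVMesh volume", "/dev/nvmesh/vol0")]:  # hypothetical pooled volume
    print(f"{label}: {read_bandwidth(dev):.2f} GB/s")
```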

“The GPU systems powering the AI and ML explosion have an amazing appetite for data, but many organizations are finding they quickly create a storage bottleneck,” explained Yaniv Romem, Excelero's CTO. “The only storage that is fast enough to keep up with these GPUs is local NVMe flash, due to the high competition for valuable PCIe connectivity amongst GPUs, networking and storage. Excelero’s NVMesh eliminates the need to compromise between performance and storage functionality by unifying remote NVMe devices into a logical block pool that performs the same as local NVMe flash with the ability to easily share data and protect it.”

“We’re excited to provide essential capabilities to help propel InstaDeep’s growth as the company elevates its game to meet AI demands,” Romem said.

About Excelero

Excelero delivers low-latency distributed block storage for web-scale applications such as AI, machine learning and GPU computing. Founded in 2014 by a team of storage veterans and inspired by the Tech Giants’ shared-nothing architectures for web-scale applications, the company has designed a software-defined block storage solution that meets the low-latency performance and scalability requirements of the largest web-scale and enterprise applications.

Excelero’s NVMesh enables shared NVMe across any network and supports any local or distributed file system. Customers benefit from the performance of local flash with the convenience of centralized storage while avoiding proprietary hardware lock-in and reducing overall storage TCO. NVMesh is deployed by major web-scale customers for data analytics and machine learning applications, as well as in Media & Entertainment post-production and HPC environments.

About InstaDeep

Founded in Tunis in 2015 by Karim Beguir and Zohra Slim, InstaDeep is today an industry-renowned AI firm delivering AI products and solutions for the enterprise, with headquarters in London and offices in Paris, Tunis, Nairobi and Lagos.

Powered by high-performance computing and breakthroughs in research and development, InstaDeep utilises deep reinforcement learning to create AI systems that optimise decision-making processes in real-life industrial environments. Our skilled in-house team of AI researchers, Machine Learning engineers, and Hardware and Visualization experts harnesses this expertise to build end-to-end products that tackle the most demanding optimisation and automation challenges and provide real value and ROI to your business. InstaDeep offers a host of AI solutions, ranging from optimised pattern recognition and GPU-accelerated insights to self-learning decision-making systems.

InstaDeep partners with organisations such as Deep Learning Indaba, Google Launchpad Accelerator, Facebook Dev Circles and Data Science Nigeria to support the rise of AI in Africa and across the globe.


Source: InstaDeep
