Advanced Computing in the Age of AI | Tuesday, March 19, 2024

Nvidia Accelerates AI Inference in the Datacenter 

Source: Nvidia

Nvidia is bringing AI inference to the datacenter with a new platform consisting of an inference accelerator chip--the new Turing-based T4 GPU--and a refresh of its inference server software packaged as a container-based microservice.

The GPU leader also this week announced a new robotics effort centered around an AI platform for autonomous machines, along with the rollout of a new AI-driven health care platform.

The TensorRT inference platform consists of Nvidia’s latest GPU, the Tesla T4, based on its Turing architecture, the chipmaker said Thursday (Sept. 13). The T4 is the successor to the P4 Pascal-based chips, introduced two years ago almost to the day. Peak performance for the refreshed chip (which has 320 Turing Tensor Cores and 2,560 CUDA cores) is 8.1 teraflops of single-precision, 65 teraflops of mixed-precision, 130 teraops of INT8 and 260 teraops of INT4 performance. Impressively for that level of performance, the T4 sits on a low-profile (half-height, half-width) 75-watt PCI-e card.
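Those INT8 and INT4 teraops figures come from running inference at reduced numeric precision rather than in full FP32. As a rough illustration of the idea (a minimal sketch, not Nvidia's actual implementation — TensorRT uses calibrated, more sophisticated schemes), here is how trained FP32 weights can be mapped to INT8 with a single per-tensor scale factor, trading a small, bounded rounding error for much cheaper arithmetic:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map FP32 values to INT8."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values from INT8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.5, 0.75, 3.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Reconstruction error is bounded by half the quantization step.
assert np.all(np.abs(w - w_hat) <= scale / 2 + 1e-6)
```

The accuracy cost is why inference, not training, is the natural fit for these modes: a network's learned weights tolerate this rounding well at prediction time, while the hardware processes the narrower integers at a multiple of FP32 throughput.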

The other components of the AI platform are real-time inference server software and a runtime engine, dubbed TensorRT 5, designed to boost neural network performance. The TensorRT server is a container-based microservice designed to allow applications to use AI models within datacenters.

The new inference platform is an attempt to address “the difficulties in deploying datacenter inference,” explained Ian Buck, vice president of Nvidia’s accelerated computing business unit. Among the performance issues, some systems are overused for inference while other components sit underutilized, Buck added.

The goal is to accelerate inference in the datacenter: Nvidia claims its combination of AI hardware and software can process queries 40 times faster than datacenter CPUs.

Meanwhile, the TensorRT 5 inference engine that supports the Turing cores aims to expand the set of trained neural networks running in datacenters to accelerate production workloads. That capability would help deliver, for example, better recommendations in response to queries.

The datacenter platform would offer inference acceleration for visual search, video analysis, targeted advertising and recommendation services that are swamping enterprise datacenters. “This is why they call hyperscale [datacenters] ‘hyperscale’,” Buck noted.

Nvidia estimates the market for AI inference, centered around deploying neural networks in datacenters to deliver live video, speech recognition and product recommendations, could soar to $20 billion over the next five years.

Nvidia also joins a growing list of chip and software vendors embracing datacenter microservices as a way to accelerate the delivery of distributed applications. The company’s inference server software allows applications to use AI models in production while boosting GPU utilization and, ultimately, datacenter performance for delivering a range of AI-based services.

Along with supporting most AI frameworks and models, Nvidia said its inference server is integrated with Docker containers and the Kubernetes cluster orchestrator. The inference server is available on the Nvidia GPU Cloud container registry. Those GPU-accelerated containers are used to package deep learning software as well as HPC applications and visualizations.

Source: Nvidia

Google said this week it would offer early access to T4 GPUs on its cloud platform. Support is also forthcoming from the usual system makers--HPE, IBM, Dell EMC, Fujitsu, Cisco, Oracle and SuperMicro--by year's end.

Nvidia (NASDAQ: NVDA) also this week announced a developer kit for the next wave of robotics. The Jetson AGX Xavier platform targets next-generation autonomous machines that could be used for industrial and manufacturing applications ranging from bridge inspections to package delivery via drones. The AGX kit, which includes an embedded AI processor and a software stack, was released during a company event this week in Tokyo. Nvidia also announced partnerships with several Japanese manufacturers to develop next-generation autonomous machines.

Rob Csongor, Nvidia’s vice president for autonomous machines, said the AI platform is aimed at the $250 billion robotics market. The AGX platform would “enable broad development across a variety of industries,” Csongor added.

The AGX family also is being extended to include development of future AI-based medical devices. The Clara AGX platform released this week is based on Nvidia’s Xavier AI computing module and Turing GPUs. It targets early detection, diagnostics and treatment.

Kimberly Powell of Nvidia’s health care unit said the Clara developer kit addresses the gap between legacy diagnostic tools, such as medical imagers, and the modern applications they currently fall short of running. The Clara framework would allow those tools to connect with GPU servers, boosting their capacity to process raw instrument data and imagery.

Powell said Nvidia is working with GE Healthcare, Mayo Clinic and other major medical providers.

Tiffany Trader contributed to this report.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

EnterpriseAI