
IBM Bets $2B Seeking 1000X AI Hardware Performance Boost 


For now, AI systems are mostly machine learning-based and “narrow” – powerful as they are by today's standards, they're limited to performing a few narrowly defined tasks. The AI of the next decade will leverage the greater power of deep learning and become broader, solving a wider array of more complex problems. In addition, the general-purpose technologies used today for AI deployments will be replaced by a technology stack that’s AI-specific and exponentially faster – and it’s going to take a lot of money.

Seeking to take center stage in AI’s evolution, IBM – in partnership with New York State and several technology heavyweights – is investing $2 billion in the IBM Research AI Hardware Center, focused on developing the next-generation AI silicon, networking and manufacturing that will, IBM said, deliver a 1,000x improvement in AI performance efficiency over the next decade.

IBM's Mukesh Khare

“Today, AI’s ever-increasing sophistication is pushing the boundaries of the industry’s existing hardware systems as users find more ways to incorporate various sources of data from the edge, internet of things, and more,” stated Mukesh Khare, VP, IBM Research Semiconductor and AI Hardware Group, in a blog announcing the project. “…Today’s systems have achieved improved AI performance by infusing machine-learning capabilities with high-bandwidth CPUs and GPUs, specialized AI accelerators and high-performance networking equipment. To maintain this trajectory, new thinking is needed to accelerate AI performance scaling to match to ever-expanding AI workload complexities.”

IBM said the center will be the nucleus of a new ecosystem of research and commercial partners collaborating with IBM researchers. Partners announced today include Samsung for manufacturing and research; Mellanox Technologies for high-performance interconnect equipment; Synopsys for software platforms, emulation and prototyping, and IP for developing high-performance silicon chips; and semiconductor equipment companies Applied Materials and Tokyo Electron.

The center will be hosted at SUNY Polytechnic Institute in Albany, NY, in collaboration with the neighboring Rensselaer Polytechnic Institute Center for Computational Innovations. There, IBM said, the company and its partners will “advance a range of technologies from chip level devices, materials, and architecture, to the software supporting AI workloads.”

IBM roadmap for 1,000x improvement in AI compute performance efficiency.

Big Blue said research at the center will focus on overcoming “current machine-learning limitations through approaches that include approximate computing through Digital AI Cores and in-memory computing through Analog AI Cores.” These technologies will provide the thousand-fold increases in performance efficiency required for the full realization of deep learning AI, the next major milestone in AI evolution, according to IBM.
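IBM has not published the internals of these cores, but the approximate-computing idea behind the Digital AI Cores can be sketched in a few lines: trade numerical precision for cheaper, denser arithmetic while keeping the model's answer close enough. The NumPy snippet below is a minimal illustration of that trade-off, assuming a simple symmetric 8-bit quantization scheme; the layer sizes and the quantize_int8 helper are hypothetical and are not IBM's design.

```python
import numpy as np

# Hypothetical illustration: emulate 8-bit "approximate" arithmetic on a
# layer's weights and activations, then compare against full float32 math.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)
activations = rng.standard_normal((256,)).astype(np.float32)

def quantize_int8(x):
    """Symmetric 8-bit quantization: map floats onto 255 integer levels."""
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int8), scale

w_q, w_scale = quantize_int8(weights)
a_q, a_scale = quantize_int8(activations)

# Integer matrix-vector product, rescaled back to float at the end --
# roughly the kind of work a reduced-precision digital AI core performs
# far more cheaply than full-precision hardware.
approx = (w_q.astype(np.int32) @ a_q.astype(np.int32)) * (w_scale * a_scale)
exact = weights @ activations

rel_error = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error from 8-bit approximation: {rel_error:.4f}")
```

Run as written, the approximate result lands within roughly a percent of the exact one, which is the bargain approximate computing makes: a small, controlled loss of precision in exchange for much cheaper arithmetic and data movement.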

“A key area of research and development will be systems that meet the demands of deep learning inference and training processes,” Khare said. “Such systems offer significant accuracy improvements over more general machine learning for unstructured data. Those intense processing demands will grow exponentially as algorithms become more complex in order to deliver AI systems with increased cognitive abilities.”
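Khare's distinction between inference and training can be made concrete with a toy sketch: inference is a single forward pass, while training repeats that forward pass and adds a gradient computation and a weight update for every batch in every epoch. The example below is purely illustrative; the single-layer softmax model, sizes, epoch count and learning rate are assumptions, not anything specific to the center's work.

```python
import numpy as np

# Toy single-layer softmax classifier to contrast inference and training cost.
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 10)) * 0.01   # model weights (assumed sizes)
x = rng.standard_normal((32, 64))          # a batch of 32 input vectors
y = rng.integers(0, 10, size=32)           # integer class labels

def forward(W, x):
    """Forward pass: logits followed by a numerically stable softmax."""
    logits = x @ W
    logits -= logits.max(axis=1, keepdims=True)
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

# Inference: one forward pass per batch.
predictions = forward(W, x).argmax(axis=1)

# Training: forward pass plus gradient and weight update, repeated for
# many epochs -- several times the arithmetic of inference, every step.
lr = 0.1
for _ in range(100):
    probs = forward(W, x)
    probs[np.arange(len(y)), y] -= 1.0     # softmax cross-entropy gradient
    grad = x.T @ probs / len(y)
    W -= lr * grad
```

Even in this tiny sketch the training loop does orders of magnitude more arithmetic than a single inference call, which is why training workloads are the ones pushing hardest on the hardware roadmap IBM describes.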

Khare said the research center will host R&D, emulation, prototyping, testing and simulation activities for new AI cores specially designed for training and deploying advanced AI models, including a test bed in which members can demonstrate innovations in real-world applications. Specialized wafer processing for the center will be done in Albany with some support at IBM’s Thomas J. Watson Research Center in Yorktown Heights, NY.

EnterpriseAI