Advanced Computing in the Age of AI | Monday, June 24, 2024

Google Announces Sixth-generation AI Chip, a TPU Called Trillium 

On Tuesday, May 14, Google announced its sixth-generation TPU (tensor processing unit), called Trillium. 

The chip, essentially a TPU v6, is the company’s latest weapon in the AI battle with GPU maker Nvidia and cloud providers Microsoft and Amazon, which have their own AI chips.

The TPU v6 will succeed the TPU v5 chips, which came in two flavors: TPU v5e and TPU v5p. The company said the Trillium chip is “the most performant and most energy-efficient TPU to date.”

(Source: Google)

The Trillium chip will run the AI models that will succeed the current Gemini large language model, Google said at its I/O conference in Mountain View, California.

Performance

Google made all-around improvements to the chip. Trillium delivers 4.7 times the peak compute performance per chip of its predecessor, the TPU v5e. It also doubles the high-bandwidth memory capacity, internal bandwidth, and chip-to-chip interconnect speed.

“We got to the 4.7x number by comparing the peak compute performance per chip (bf16) of Trillium TPU vs Cloud TPU v5e,” a Google spokeswoman said in an email to HPCwire. 

The BF16 performance on TPU v5e was 197 teraflops, and a 4.7x improvement would put BF16 peak performance on Trillium at 925.9 teraflops. 

A large performance improvement in Google’s TPUs was long overdue: the TPU v5e’s 197 teraflops of BF16 performance was actually a decline from the 275 teraflops of the TPU v4.
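The arithmetic behind these figures can be checked directly, using the numbers quoted above (the per-chip peak values for TPU v4 and v5e, and Google's claimed 4.7x improvement):

```python
# Illustrative arithmetic only: peak BF16 compute per chip, using the
# figures quoted in the article and Google's claimed 4.7x improvement.
TPU_V4_BF16_TFLOPS = 275.0   # from the article
TPU_V5E_BF16_TFLOPS = 197.0  # from the article
TRILLIUM_SPEEDUP = 4.7       # Google's claimed improvement over v5e

trillium_bf16_tflops = TPU_V5E_BF16_TFLOPS * TRILLIUM_SPEEDUP
print(f"Trillium peak BF16: ~{trillium_bf16_tflops:.1f} TFLOPS")  # ~925.9

# The v4 -> v5e step was a regression:
delta = TPU_V5E_BF16_TFLOPS - TPU_V4_BF16_TFLOPS
print(f"v4 -> v5e change: {delta:+.0f} TFLOPS")  # -78
```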

Memory and Bandwidth

Trillium chips use next-generation HBM, but Google didn’t specify whether it is HBM3 or HBM3e, the memory Nvidia uses in its H200 and Blackwell GPUs. 

The HBM2 capacity on the TPU v5e was 16GB, so doubling puts Trillium at 32GB, a capacity available in both HBM3 and HBM3e. HBM3e provides the most bandwidth.

Up to 256 Trillium chips can be paired in server pods, and inter-chip communication has improved twofold compared to the TPU v5e. Google didn’t share inter-chip communication speeds, but doubling the TPU v5e’s 1,600 Gbps would put them at 3,200 Gbps.
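These doubled figures follow straight from the TPU v5e baseline, as sketched below. Note that the 3,200 Gbps interconnect number is an inference from the doubling claim, not a figure Google confirmed:

```python
# Sketch: deriving Trillium's memory and interconnect figures by doubling
# the TPU v5e baseline. The 3,200 Gbps value is an inference from Google's
# "twofold" claim, not a confirmed specification.
V5E_HBM_GB = 16          # TPU v5e HBM capacity (from the article)
V5E_ICI_GBPS = 1_600     # TPU v5e inter-chip interconnect (from the article)

trillium_hbm_gb = V5E_HBM_GB * 2      # 32 GB
trillium_ici_gbps = V5E_ICI_GBPS * 2  # 3,200 Gbps

# Aggregate inter-chip bandwidth across a full 256-chip pod (illustrative):
pod_chips = 256
print(trillium_hbm_gb, trillium_ici_gbps, pod_chips * trillium_ici_gbps)
```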

The Trillium TPUs are also 67% more energy-efficient than the TPU v5e, Google said in a blog entry.

Faster Chip Release Cycle

Trillium replaces the TPU brand name and will be the branding for future generations of the chip. The name comes from the trillium flower and is not to be confused with AWS’s Trainium, an AI training chip.

Google wasted no time releasing its sixth-generation TPU — it hasn’t even been a year since the company released TPU v5 chips. 

TPU v4, introduced in 2020, hung around for three years until the release of TPU v5. The development of TPU v5 itself was mired in controversy. 

Google claimed that AI agents floor-planned the TPU v5 chip in about six hours, faster than human experts could. 

Researchers connected to the TPU v5 AI design project were fired or left, and the claims are under investigation by the journal Nature. (https://www.hpcwire.com/2023/10/03/googles-controversial-ai-chip-paper-under-scrutiny-again/)

The Systems

Server pods will host 256 Trillium chips, and the AI chips will communicate two times faster than similar TPU v5 pod setups. 

The pods can be combined into larger clusters, and communication occurs via optical networking. Communication between pods will also be two times faster, providing the scalability required for larger AI models.

“Trillium TPUs can scale to hundreds of pods, connecting tens of thousands of chips in a building-scale supercomputer interconnected by a multi-petabit-per-second datacenter network,” Google said.

A technology called Multislice strings large AI workloads across thousands of TPUs in a large cluster, which helps maintain high uptime and power efficiency.
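The core idea of splitting one job across many pods can be illustrated with a simple partitioning sketch. This is plain Python for illustration only; the function name and parameters are hypothetical, not Google's Multislice API:

```python
# Illustrative only: how a global batch might be partitioned across pods
# ("slices") and chips in a Multislice-style setup. Names are hypothetical.
def shard_batch(global_batch: int, num_slices: int, chips_per_slice: int) -> int:
    """Return the number of examples each chip processes per step."""
    total_chips = num_slices * chips_per_slice
    if global_batch % total_chips:
        raise ValueError("global batch must divide evenly across chips")
    return global_batch // total_chips

# e.g. 8 pods of 256 Trillium chips each, with a 65,536-example global batch:
per_chip = shard_batch(global_batch=65_536, num_slices=8, chips_per_slice=256)
print(per_chip)  # 32 examples per chip per step
```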

The Chip

The chip has third-generation SparseCores, intermediary processors that sit closer to the high-bandwidth memory, where much of the AI number-crunching takes place. 

The SparseCores bring processing closer to the data in memory, an emerging near-memory computing approach also being researched by AMD, Intel, and Qualcomm.

Typically, data has to move from memory to processing units, which consumes bandwidth and creates chokepoints. The sparse computing model tries to free up network bandwidth by moving processing units closer to memory clusters.
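A back-of-the-envelope calculation shows why this matters for sparse workloads like embedding lookups. The table size and lookup count below are hypothetical, chosen only to illustrate the gap between shipping a whole table to the compute units versus touching only the rows a sparse access actually needs:

```python
# Back-of-the-envelope sketch (hypothetical numbers, not Google's) of why
# moving compute toward memory helps: shipping an entire embedding table
# to the compute units costs far more bandwidth than shipping only the
# few rows a sparse lookup touches.
table_rows, row_bytes = 100_000_000, 512   # hypothetical embedding table
lookups_per_step = 4_096                   # hypothetical sparse accesses

full_table_bytes = table_rows * row_bytes
sparse_bytes = lookups_per_step * row_bytes

print(f"full table: {full_table_bytes / 1e9:.1f} GB")   # 51.2 GB
print(f"sparse rows: {sparse_bytes / 1e6:.1f} MB")      # ~2.1 MB
```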

“Trillium TPUs make it possible to train the next wave of foundation models faster and serve those models with reduced latency and lower cost,” Google said.

Trillium also has TensorCores for matrix math. The chip is designed for AI and won’t run scientific applications.

The company recently announced its first CPU, Axion, which will be paired with Trillium. 

The Hypercomputer

The Trillium chip will be part of Google’s homegrown Hypercomputer AI supercomputer design, which is optimized for its TPUs. 

The design merges compute, network, storage and software to meet varying AI consumption and scheduling models. A “Calendar” system meets hard deadlines on when a task should start, while the “Flex Start” model provides guarantees on when a task will end and deliver results. 

The Hypercomputer includes a software stack and other tools to develop, optimize, deploy, and orchestrate AI models for inference and training. This includes JAX, PyTorch/XLA, and Kubernetes.

The Hypercomputer will continue to work with GPU-optimized technologies, such as the Titanium offload system, and with systems based on Nvidia H100 GPUs.

Availability

Expect the Trillium chips to be available in Google Cloud, but Google did not provide an availability date. It will be a top-line offering, costing more than TPU v5 offerings. 

The high prices of GPUs in the cloud may make Trillium attractive to customers. Customers already using AI models available in Vertex, which is an AI platform in Google Cloud, may also switch to Trillium.

AWS’s Trainium chip is also available, while Microsoft’s Azure Maia chip is mainly for inference.

Possible Relief From the GPU Squeeze

Google has historically presented its TPUs as an AI alternative to Nvidia’s GPUs and has released research papers comparing the performance of TPUs to comparable Nvidia GPUs.

Google recently announced it will host Nvidia’s new GPU, B200, and specialized DGX boxes with Blackwell GPUs.

Nvidia also recently announced it would acquire Run.ai in a deal valued at $700 million. The Run.ai acquisition will allow Nvidia to keep its software stack independent of Google’s stack when running AI models. 

The TPUs were initially designed for Google’s homegrown models, but the company is trying to better support open-source models, including Gemma, an offshoot of Gemini.
