Advanced Computing in the Age of AI | Thursday, June 13, 2024

Google Pulls Back the Covers on Its First Machine Learning Chip 

Source: Google

This week Google released a report detailing the design and performance characteristics of the Tensor Processing Unit (TPU), its custom ASIC for the inference phase of neural networks (NN). Google has been using the machine learning accelerator in its datacenters since 2015, but hasn’t said much about the hardware until now.

In a blog post published yesterday (April 5, 2017), Norm Jouppi, distinguished hardware engineer at Google, observes, “The need for TPUs really emerged about six years ago, when we started using computationally expensive deep learning models in more and more places throughout our products. The computational expense of using these models had us worried. If we considered a scenario where people use Google voice search for just three minutes a day and we ran deep neural nets for our speech recognition system on the processing units we were using, we would have had to double the number of Google data centers!”

The paper, “In-Datacenter Performance Analysis of a Tensor Processing Unit” (the joint effort of more than 70 authors), describes the TPU thusly:

“The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU’s deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, …) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power.”
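The quoted peak is easy to sanity-check: a 256×256 systolic array holds 65,536 MACs, each a multiply plus an add (two operations) per cycle, and the TPU runs at the 700 MHz clock cited later in the paper. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the TPU's quoted 92 TOPS peak.
# Figures (256x256 array, 700 MHz clock) are from the TPU paper.

macs = 256 * 256          # 65,536 8-bit MAC units in the systolic array
ops_per_mac = 2           # each MAC = one multiply + one add per cycle
clock_hz = 700e6          # 0.7 GHz TPU clock

peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"{peak_tops:.1f} TOPS")  # ~91.8 TOPS, which rounds to the quoted 92
```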

Google researchers compared the performance and energy-efficiency of the TPU to commercial CPUs and GPUs (a server-class Intel Haswell CPU and an Nvidia K80 GPU) on inferencing workloads. The workloads were written in the TensorFlow framework and use production NN applications (MLPs, CNNs, and LSTMs) that together represent 95 percent of the NN inference demand in Google’s datacenters.
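For context, the simplest of those workload classes, the MLP, is just a stack of matrix multiplies with nonlinearities in between, which is exactly what the TPU's matrix unit accelerates. A minimal NumPy sketch of MLP inference (the layer shapes here are illustrative, not from the paper):

```python
import numpy as np

def mlp_inference(x, layers):
    """Forward pass of a multilayer perceptron: each layer is a
    (weights, bias) pair; ReLU between layers, raw scores at the end."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b                   # the matrix multiply the TPU accelerates
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)      # ReLU activation
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((64, 32)), np.zeros(32)),
          (rng.standard_normal((32, 10)), np.zeros(10))]
scores = mlp_inference(rng.standard_normal((1, 64)), layers)
print(scores.shape)  # (1, 10)
```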

The results show significant speedups and energy-savings for the TPU:

● Inference apps usually emphasize response time over throughput since they are often user-facing.

● As a result of latency limits, the K80 GPU is under-utilized for inference, and is just a little faster than the Haswell CPU.

● Despite being a much smaller, lower-power chip, the TPU has 25 times as many MACs and 3.5 times as much on-chip memory as the K80 GPU.

● The TPU is about 15X – 30X faster at inference than the K80 GPU and the Haswell CPU.

● Four of the six NN apps are memory-bandwidth limited on the TPU; if the TPU were revised to have the same memory system as the K80 GPU, it would be about 30X – 50X faster than the GPU and CPU.

● The performance/Watt of the TPU is 30X – 80X that of contemporary products; the revised TPU with K80 memory would be 70X – 200X better.

● While most architects have been accelerating CNNs, CNNs represent just 5% of Google’s datacenter NN workload.
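The "memory-bandwidth limited" observation above is the classic roofline argument: attainable throughput is capped by min(peak compute, memory bandwidth × arithmetic intensity). A rough sketch with the paper's headline figures (92 TOPS peak; roughly 34 GB/s of DDR3 bandwidth for the original TPU) shows how far out the ridge point sits:

```python
def roofline_tops(peak_tops, bandwidth_gbs, ops_per_byte):
    """Attainable throughput (TOPS) under a simple roofline model."""
    memory_bound = bandwidth_gbs * ops_per_byte / 1e3  # GOPS -> TOPS
    return min(peak_tops, memory_bound)

TPU_PEAK_TOPS = 92
TPU_BW_GBS = 34          # DDR3 bandwidth reported in the TPU paper

# Ridge point: arithmetic intensity needed to reach peak compute.
ridge = TPU_PEAK_TOPS * 1e3 / TPU_BW_GBS
print(f"ridge point ~{ridge:.0f} ops/byte")   # ~2700 ops/byte

# An app that reuses each byte fetched only ~100 times is firmly
# memory-bound, delivering a small fraction of peak:
print(roofline_tops(TPU_PEAK_TOPS, TPU_BW_GBS, 100))  # 3.4 TOPS
```

Swapping in a faster memory system raises the sloped part of the roofline, which is exactly the hypothetical K80-GDDR5 revision the paper analyzes.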

Impressive leads for the TPU, but as with most benchmarking claims, some additional context is helpful. The K80 used for the testing is Nvidia’s Kepler-generation Tesla, released in November 2014. Unlike the newest-generation Pascal silicon (not even a year old), Kepler was not optimized for 16-bit and 8-bit neural net computing tasks. Nvidia has since released stronger inferencing engines, the P4 and P40 GPUs, which feature specialized instructions based on 8-bit (INT8) operations. The upshot of INT8 is that it enables 4X the throughput of single-precision floating point (FP32).
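The INT8 trick those Pascal parts (and the TPU itself) rely on is conceptually simple: quantize FP32 weights and activations to 8-bit integers with a scale factor, do the multiply-accumulates in cheap integer arithmetic, and rescale at the end. A minimal symmetric-quantization sketch in NumPy (the scheme here is illustrative; production frameworks differ in the details):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization of a float array to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8))   # "weights"
a = rng.standard_normal((8, 8))   # "activations"

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Integer MACs accumulate into int32; one float rescale recovers the result.
approx = (qw.astype(np.int32) @ qa.astype(np.int32)) * (sw * sa)
exact = w @ a
print(np.max(np.abs(approx - exact)))  # small quantization error
```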

The Google report lists the TPU as capable of 92 peak 8-bit Tera-Operations per second (TOPS). The Tesla P40 is capable of 47 8-bit TOPS. Not an overwhelming discrepancy. However, on power, the gap widens: TDP is 75 watts for the TPU compared with 250 watts for the P40. The P4 offers a better performance-per-watt profile than the P40: 22 8-bit TOPS in a 75 watt TDP – still about a fourth the efficiency of the TPU. Obviously we’re just looking at spec’d ratings here; the chart below shows the TPU staying well under its TDP at run-time.
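Those spec-sheet ratios are quick to verify (peak 8-bit TOPS divided by TDP, using the numbers cited above):

```python
# Spec'd peak 8-bit TOPS and TDP in watts, as cited in the article.
chips = {"TPU": (92, 75), "P40": (47, 250), "P4": (22, 75)}

tops_per_watt = {name: tops / tdp for name, (tops, tdp) in chips.items()}
for name, eff in tops_per_watt.items():
    print(f"{name}: {eff:.2f} TOPS/W")
# TPU ~1.23, P40 ~0.19, P4 ~0.29 -- the P4 lands at roughly a
# quarter of the TPU's spec'd efficiency.
```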

The peak TOPS of a single K80 GPU die without GPU Boost enabled is 2.8 (versus 8.7 32-bit TOPS for a full card with boost mode enabled). Google opted not to use GPU Boost in the study because of power and cooling limitations, but did further analysis to show that “boost mode would have a minor impact on our energy-speed analysis.” Google also discusses why it presented all CPU results as floating point rather than 8-bit (facilitated with AVX2 integer support) — see Section 8 for more.

Anticipating claims that it didn’t compare its TPU to the latest Nvidia gear, Google notes that “the 16-nm, 1.5GHz, 250W P40 datacenter GPU can perform 47 Tera 8-bit ops/sec, but was unavailable in early 2015, so isn’t contemporary with our three platforms. We also can’t know the fraction of P40 peak delivered within our rigid time bounds. If we compared newer chips, Section 7 shows that we could triple performance of the 28nm, 0.7GHz, 40W TPU just by using the K80’s GDDR5 memory (at a cost of an additional 10W).”

At any rate, Nvidia isn’t the only company advancing hardware for machine learning. AI-focused silicon efforts abound. Intel has a full stack of AI hardware and software from its Nervana acquisition, and its next-gen Phi product, Knights Mill (due out this year), will incorporate support for variable precision compute. AI startups GraphCore in the UK, Wave Computing in San Diego, and KnuPath in Austin are all working on specialized lower-precision, higher-performance silicon. FPGAs also show promise for inferencing.

While Google compared its TPU to an older generation of Nvidia silicon, Google itself may have been using a “previous generation” TPU. “There is plenty of headroom to improve the TPU, so it’s not an easy target,” note the authors. More pointedly, a reference in the blog post to “this first generation of TPUs” implies that a second generation is on Google’s roadmap or perhaps already in existence. Typically when Google releases projects into the community (MapReduce, TensorFlow), you can bet that its internal version is a good few years ahead.

This leads to the big question on everyone’s mind: whether Google will commercialize the TPU for use outside the company. As a stand-alone product, this is unlikely, as the big tech companies, hyperscalers, and specialized hardware startups all race to establish dominance in an AI market predicted by market research firm Tractica to grow to $36 billion over the next decade. The TPUs have a better shot at showing up inside the Google cloud, although right now the company is focused on incorporating Nvidia Tesla P100s and AMD FirePro S9300 x2 GPUs into its IaaS platform.

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.