
Google’s Controversial AI Chip Paper Under Scrutiny Again  

A controversial research paper by Google that claimed the superiority of AI techniques in chip design is under the microscope for the authenticity of its claims. Science publication Nature is investigating Google's claims that artificial intelligence techniques helped floor-plan, or establish the basic layout of, its AI chip in under six hours, faster than human experts.

Nature put an editor's note on the paper, saying, "Readers are alerted that the performance claims in this article have been called into question. The Editors are investigating these concerns, and, if appropriate, editorial action will be taken once this investigation is complete." 

Initially published in 2021, the paper described using AI to design a version of Google's Tensor Processing Unit, or TPU, which the company uses in its cloud data centers to power AI in applications including Search, Maps, and Google Workspace.

The chip in question was identified on Twitter as the TPU v5 by researcher Anna Goldie, who was among the paper's 20 authors. Nature has put an asterisk on the paper.

Google said the intention was not to replace human designers but to show how AI could be a collaborative technique to speed up chip designs. 

A version of the TPU v5, the TPU v5e, came out last month and is now available in Google Cloud.

It was Google's first AI chip released with a suite of software, development, and virtualization tools so customers can budget for and manage the orchestration and deployment of AI workloads. The new AI chip competes with Nvidia's H100 GPU and succeeds the previous-generation TPU v4, which was used to train the PaLM 2 large language model.

The controversial research paper has been plagued with trouble from the start. The paper's merits were questioned internally, and one of its authors who spoke out, Satrajit Chatterjee, was fired and filed a lawsuit against Google for wrongful termination.

Google researchers said the paper went through peer review, but the research has not held up well under challenges from independent researchers.

Google was criticized for releasing minimal amounts of information related to the research and resisting calls for the full release of data for public scrutiny. The company ultimately placed limited amounts of information on GitHub.

Google TPU board. Source: Google, "Inside a Google Cloud TPU Data Center" video

The research provides a framework for using deep reinforcement learning to floor-plan the chip, that is, to lay down the building blocks of the TPU v5. The paper revolves around using AI to place large circuit blocks that perform specific macro functions in logical spots to generate chip designs. Macro placement is critical to chip design and a very challenging process.

Google's reinforcement learning technique developed a chip design using inputs such as a circuit netlist, which describes the connected circuit components, and data such as the routing tracks available for wires. The output was a clean chip design with good macro placements.
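To make the approach concrete, below is a minimal, hypothetical sketch of reinforcement-learning-driven macro placement in Python. The netlist, grid size, and simple tabular Monte Carlo agent are all invented for illustration; Google's actual system encodes the netlist with a graph neural network and trains a far larger policy. The core loop, however, is the same idea: place macros one at a time and reward placements that shorten the wiring between them.

```python
import random
from collections import defaultdict

# Toy illustration only: a tabular RL agent places macros on a grid and is
# rewarded with negative wirelength. The netlist, grid, and hyperparameters
# are invented; real systems operate on netlists with millions of components.
GRID = 4                                    # 4x4 placement grid
MACROS = ["m0", "m1", "m2", "m3"]           # hypothetical macro blocks
NETS = [("m0", "m1"), ("m1", "m2"), ("m2", "m3"), ("m0", "m3")]  # 2-pin nets

def wirelength(placement):
    """Total Manhattan wirelength over all nets (lower is better)."""
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def episode(q, eps):
    """Place macros one at a time; return the steps taken and final reward."""
    free = [(x, y) for x in range(GRID) for y in range(GRID)]
    placement, steps = {}, []
    for i, m in enumerate(MACROS):
        if random.random() < eps:                       # explore
            cell = random.choice(free)
        else:                                           # exploit learned values
            cell = max(free, key=lambda c: q[(i, c)])
        free.remove(cell)
        placement[m] = cell
        # State is simplified to the macro index here; a real agent
        # conditions on the full partial placement.
        steps.append((i, cell))
    return steps, -wirelength(placement)                # reward: shorter wires

q, counts = defaultdict(float), defaultdict(int)
for ep in range(5000):
    steps, reward = episode(q, eps=max(0.05, 1.0 - ep / 2500))
    for s in steps:                                     # Monte Carlo averaging
        counts[s] += 1
        q[s] += (reward - q[s]) / counts[s]

steps, reward = episode(q, eps=0.0)                     # greedy rollout
print("learned wirelength:", -reward)
```

Even in this toy, the learned values are tied to one netlist, which is why the pre-training cost discussed below matters: the real system's value comes from transferring what it learned across many chip blocks.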

In six hours, Google was able to put together the building blocks of a cohesive chip over a specific area and within a specific power and performance envelope. Over time, the AI agent applies past learnings that reinforce its current knowledge to better place chip modules at sub-10-nanometer process nodes.

The Google technique used a learning model that took 48 hours to train across 200 CPUs and 20 GPUs, and those hours were not accounted for in the total time it took to design the chip.

One challenger to Google's research, Andrew B. Kahng, a professor of computer science at the University of California, San Diego, found Google uncooperative. He criticized Google's unwillingness to release critical data, such as the circuit training datasets, baseline information, and code, that other researchers would need to reproduce the results.

He had to reverse-engineer Google's chip-design technique and found that human chip designers and conventional automated tools could sometimes be faster or more effective than Google's AI-only approach. In March, he presented a paper detailing those findings at the International Symposium on Physical Design. However, he did not question the value of Google's techniques.

Flaws aside, the research contributes to the field, and Google is one of the few companies to share information on the AI techniques it uses for chip design. It builds on behind-closed-doors work already done by Cadence and Synopsys to bring AI to chip design. AMD and Amazon have claimed to use AI in chip design but have not discussed their techniques.

The Nature fiasco is not the first time Google's hardware research has come under the microscope. In 2019, Google claimed quantum supremacy, with its quantum computer outperforming classical computers. Google argued that its 54-qubit system called Sycamore, in which the qubits are arranged in a 2D array, solved in 200 seconds a specific problem that would take classical supercomputers 10,000 years.

IBM disputed the claim, saying the paper was flawed and was creating confusion about quantum and supercomputing performance, and set out to disprove Google's theory. A subsequent IBM paper claimed that its Summit supercomputer, with the help of additional secondary storage, could achieve six times better performance than the figure assumed in Google's quantum supremacy paper and could solve the problem in a reasonable amount of time.

Google's controversial 2019 quantum paper, considered groundbreaking at the time, was also based on closed-door experiments and has not aged well. In subsequent years, more researchers stepped forward to challenge Google's claims. The flaw was Google's apples-to-oranges comparison of its optimized quantum algorithm against older, slower classical algorithms.

It is unclear whether the TPU v5e was designed using the reinforcement learning technique, but Google has claimed superior performance with the chip compared to the previous-generation TPU v4.

Eight TPU v5e chips can train large language models with up to 2 trillion parameters. This month, Google claimed that "each TPU v5e chip provides up to 393 trillion int8 operations per second (TOPS), allowing fast predictions for the most complex models," implying the chip is primarily designed for low-precision inference operations. Training typically requires a floating-point pipeline.
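As a rough illustration of the int8-versus-floating-point distinction, the sketch below quantizes float32 weights to int8, a common serving technique. The tensors and per-tensor scale factor are invented for illustration, and real TPU int8 pipelines accumulate in higher precision; the point is only that inference can tolerate the small quantization error, while gradient-based training generally needs the dynamic range of floating point.

```python
import numpy as np

# Toy post-training quantization: train in float32, serve in int8.
# Weights and activations here are random stand-ins for illustration.
w = np.random.randn(256, 256).astype(np.float32)  # trained float32 weights
x = np.random.randn(256).astype(np.float32)       # one activation vector

scale = np.abs(w).max() / 127.0                   # per-tensor scale factor
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

y_fp32 = w @ x                                    # full-precision result
y_int8 = (w_q.astype(np.float32) * scale) @ x     # dequantized int8 weights

rel_err = np.abs(y_fp32 - y_int8).max() / np.abs(y_fp32).max()
print(f"max relative error from int8 weights: {rel_err:.4f}")  # small
```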

Google is trying to catch up in AI with Microsoft, which uses OpenAI's GPT-4 and Nvidia GPUs in its Azure AI supercomputer. Google recently integrated its Bard chatbot into Google Workspace, web search, and other tools; Bard runs its AI calculations on TPUs.

EnterpriseAI