News & Insights for the AI Journey|Thursday, October 17, 2019

Google’s ML Compiler Initiative Advances 


Machine learning models running on everything from cloud platforms to mobile phones are posing new challenges for developers faced with growing tool complexity. Google’s TensorFlow team unveiled an open-source machine learning compiler called MLIR in April to address hardware and software fragmentation.

MLIR, short for Multi-Level Intermediate Representation, has now been transferred to the LLVM Foundation, a non-profit group that specializes in compiler and related tool technologies, Google (NASDAQ: GOOGL) said this week. MLIR applies abstraction concepts first developed in the LLVM project, spanning representations that range from TensorFlow graphs down to hardware operations.

MLIR is described by Google engineers as a flexible representation of a compiler tool “that is language-agnostic and can be used as a base compiler infrastructure.”
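To give a sense of what that representation looks like, here is a minimal sketch of MLIR's textual form. The function and value names are illustrative, and dialect names have shifted across MLIR versions; the point is that operations are namespaced by dialect, letting high-level and low-level abstractions coexist in one module.

```mlir
// A simple multiply-add expressed in MLIR's textual IR.
// Each operation is prefixed by its dialect ("func", "arith"),
// so representations at different abstraction levels can be
// mixed and progressively lowered toward hardware.
func.func @multiply_add(%a: f32, %b: f32, %c: f32) -> f32 {
  %0 = arith.mulf %a, %b : f32
  %1 = arith.addf %0, %c : f32
  return %1 : f32
}
```

A framework such as TensorFlow can emit its graph in one dialect, then apply a series of lowering passes that rewrite it into dialects closer to a particular chip, which is how MLIR bridges the fragmentation the Google engineers describe.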

“The machine learning ecosystem is dependent on many different technologies with varying levels of complexity that often don't work well together,” Google engineers noted in a blog post announcing the MLIR contribution.

The machine learning compiler infrastructure “addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications,” they added.

MLIR is targeted at compiler researchers and developers seeking to tweak model performance and memory consumption, along with hardware developers connecting hardware such as TPUs and custom ASICs to TensorFlow. It is also geared to application programmers seeking better performance through improved compilers.

Google said its ultimate goal is to make MLIR a new standard for machine learning development as it upgrades its TensorFlow infrastructure to deal with growing hardware and software complexity. Technology partners include chip makers like AMD, Arm, Cerebras, IBM, Nvidia, Samsung and Xilinx. Those and other backers account for about 95 percent of the hardware accelerators used in datacenters, as well as more than 4 billion mobile phones and many more Internet of Things devices.

As an ML development standard used in conjunction with TensorFlow, Google engineers predicted MLIR would help accelerate model training and scaling on a range of hardware. Google said it is incorporating the open source compiler technology across its own server and mobile hardware development.

The ML compiler also could be used to apply machine learning techniques within compilers, the company added, including emerging low-code strategies. Among the possible implementations is developing code generation algorithms for “domain-specific” hardware accelerators.

An MLIR code repository on GitHub is here. An MLIR overview is here.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
