Intel CPUs Gaining Optimized Deep Learning Inferencing from Deci in New Collaboration
Intel Corp. and deep learning startup Deci are partnering to help enterprises dramatically accelerate inference and make their deep learning models faster and more efficient on Intel CPUs.
The collaboration, which brings together Deci’s algorithmic acceleration technology with Intel chip architectures, aims to make it possible for enterprises to accelerate inferencing speeds by up to 11x at scale, based on MLPerf tests, according to the companies.
“We believe that the core of making deep learning models better is doing better algorithms,” Jonathan Elial, the COO and co-founder of Deci, told EnterpriseAI. “And we developed a technology where AI is leveraging AI, so through an algorithm we create much more efficient algorithms. We take a customer's deep learning model, you define what hardware you want to deploy on, and our technologies redesign the core architecture of the model to make it much more efficient for the task.”
The two companies began collaborating after Deci participated in an earlier Intel Ignite event and was then connected with an Intel business unit to discuss its technology further, said Elial. The formal collaboration was announced on March 11 (Thursday).
“We built with [Intel] a great relationship where they evaluated the technology and we saw really good fits and synergies in working together and going to customers together,” said Elial. “It's not yet another collaboration of doing a proof of concept together. It's an actual business agreement where we go together to customers and we can add value to their customers.”
For enterprises using ever more powerful and complex models with hungrier compute demands, the tools from the Intel-Deci partnership will be critical, he said. “The trend is very clear, [demand] is going up. But the supply side is not keeping up the pace. The hardware is advancing, but not as fast as what the models require. And the hardware manufacturers and chip manufacturers, especially, are thinking of ways to minimize this gap.”
One thing hardware and chip manufacturers have figured out, he added, is that it’s no longer just a battle about silicon. “It's also about the software and about the algorithms. And that's where Deci comes into the picture.”
Using MLPerf benchmark testing, Deci and Intel announced in late 2020 that Deci’s AutoNAC (Automated Neural Architecture Construction) technology accelerated the inference speed of the well-known ResNet-50 neural network, reducing the submitted models’ latency by a factor of up to 11.8x and increasing throughput by up to 11x. The AutoNAC technology uses machine learning to redesign any model and maximize its inference performance on any hardware, while still preserving its accuracy.
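AutoNAC itself is proprietary, but the general idea it builds on, hardware-aware neural architecture search, can be sketched in miniature: enumerate candidate architectures, benchmark each on the target hardware, and keep the fastest one that still meets an accuracy floor. The toy below is a hypothetical illustration of that search loop only; the `evaluate` function, its formulas, and all numbers are invented stand-ins, not Deci's method.

```python
# Toy illustration of hardware-aware neural architecture search
# (NOT Deci's proprietary AutoNAC): rank candidate architectures
# by latency on the target hardware, subject to an accuracy floor.

def evaluate(candidate, hardware):
    """Stand-in for real benchmarking: returns (accuracy, latency_ms).
    In practice this would train the candidate and time it on the
    actual target device; these formulas are purely illustrative."""
    depth, width = candidate["depth"], candidate["width"]
    accuracy = min(0.99, 0.70 + 0.01 * depth + 0.002 * width)
    # Pretend the hardware penalizes width (CPU) or depth (GPU) differently,
    # so the best architecture depends on the deployment target.
    cost = width * 1.5 if hardware == "cpu" else depth * 2.0
    latency_ms = depth * width * 0.01 + cost
    return accuracy, latency_ms

def search(candidates, hardware, min_accuracy=0.90):
    """Return (candidate, latency) with the lowest latency that
    meets the accuracy floor, or None if no candidate qualifies."""
    best = None
    for cand in candidates:
        acc, lat = evaluate(cand, hardware)
        if acc >= min_accuracy and (best is None or lat < best[1]):
            best = (cand, lat)
    return best

candidates = [{"depth": d, "width": w}
              for d in (10, 20, 50) for w in (32, 64, 128)]
print(search(candidates, "cpu"))
```

The key point the sketch captures is that the search is conditioned on the deployment hardware: rerunning it with `hardware="gpu"` can select a different architecture, which is why Elial stresses that CPU and GPU optimizations diverge.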
For customers, the collaboration between Deci and Intel aims to make it easier to do deep learning inference at scale on Intel CPUs while lowering costs and latency and enabling new applications of deep learning inference, according to the companies.
“There's multiple ways to run inference on multiple kinds of hardware,” said Elial. “There's a GPU solution. And there are CPUs. Always, the customer has to do the value for money [analysis]. But when you take a CPU and you make the model, say three times better, for example, you can make this solution feasible. That's big news.”
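Elial's value-for-money point can be made concrete with a back-of-envelope cost comparison. All numbers below (throughputs, hourly instance rates) are made up for illustration; the point is only the shape of the arithmetic: a 3x model speedup cuts CPU serving cost to a third, which can move it below the GPU alternative.

```python
# Back-of-envelope serving-cost comparison with illustrative,
# made-up numbers, showing how a 3x model speedup can tip the
# CPU-vs-GPU value-for-money decision.

def cost_per_million_inferences(throughput_per_sec, hourly_rate_usd):
    """USD cost to serve one million inferences on one instance."""
    seconds = 1_000_000 / throughput_per_sec
    return hourly_rate_usd * seconds / 3600

gpu = cost_per_million_inferences(throughput_per_sec=900, hourly_rate_usd=3.00)
cpu_baseline = cost_per_million_inferences(throughput_per_sec=100, hourly_rate_usd=0.40)
cpu_optimized = cost_per_million_inferences(throughput_per_sec=300, hourly_rate_usd=0.40)  # 3x speedup

print(f"GPU ${gpu:.2f} | CPU baseline ${cpu_baseline:.2f} | CPU optimized ${cpu_optimized:.2f}")
```

With these invented rates, the unoptimized CPU is more expensive per million inferences than the GPU, but the 3x-faster model flips the ordering, which is the feasibility shift Elial describes.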
Deci can also bring the same gains for customers who are using GPUs, said Elial, but the Intel collaboration is specifically built to show how Intel CPUs can gain from Deci’s technology. “In the context of Intel, yes, we can add much more value to the existing hardware, and that's the key value proposition.”
Analysts: A Beneficial Collaboration
Karl Freund, founder and principal analyst with Cambrian AI Research, told EnterpriseAI that the deep learning collaboration between Intel and Deci is a logical and cool development.
“Most data center inference processing today is run on Intel Xeon CPUs, which have been increasingly optimized for efficient AI, using low-precision numeric formats and functions,” said Freund. “While heavy-duty inference processing requires dedicated accelerators, such as Intel Habana Labs and Nvidia GPUs, the additional optimization that Deci can enable will sustain the important role of AI on CPUs in cloud and on-premises data centers.”
Another analyst, Charles King of Pund-IT, agreed.
“It's a good example of how strategic partnerships can benefit the vendors involved,” said King. “Intel is gaining from Deci's deep understanding of AI processes, and Deci's work is being highlighted by one of the industry's premier vendors. It should spark interest among companies looking at emerging AI-related workloads. In many cases, Intel silicon could be a better alternative than conventional GPU technologies.”
Helping Companies Bolster AI
Many companies today claim to have new and better algorithms, said Deci’s Elial. But his company’s approach is radically different, he explained.
“We have a new approach where we don't want to tweak the algorithms in the architecture,” he said. “We want to bring in a new concept of an AI that has ten-fold better architecture, and help data scientists to do their job to find solutions for business problems. It's not about incremental change, it's about a different approach.”
The idea is to break through ceilings for data scientists so they can conceptualize better AI for their work, said Elial.
“It's about collaboration between data scientists and AI to do better AI,” he said. “It's about leveraging the hardware below it. We want the architecture to fit the underlying hardware … to have the maximum utilization of the hardware. So, the optimization for a CPU will be different from GPU standards.”
Founded in 2019, Deci has raised $9.1 million in funding so far through a seed round led by Israel-based VC firm Emerge and global VC fund Square Peg.