Advanced Computing in the Age of AI | Monday, September 28, 2020

Arm-based AI Inference Edge Server Takes on GPU Price/Performance 

Edge computing specialist SolidRun and ASIC solutions company Gyrfalcon Technology this week announced an Arm-based AI inference edge server that, the companies say, outperforms GPU-based systems at lower cost and power consumption.

The server, called the Janux GS31, can be configured with up to 128 Gyrfalcon Lightspeeur SPR2803S neural accelerator chips, delivering a maximum of 24 TOPS per watt and outperforming “SoC- and GPU-based systems by orders of magnitude, while using a fraction of the energy required by systems with equivalent computational power,” the companies said in a joint announcement. The hardware supports low-latency decoding and video analytics across up to 128 channels of 1080p/60Hz video, and is designed for edge AI use cases such as monitoring smart cities and infrastructure, intelligent enterprise/industrial video surveillance, and tagging photos and videos for text-based search.

"AI is rapidly moving to the edge of the network to address the performance and security needs of many applications,” said Jim McGregor, founder and principal analyst, Tirias Research. “As a result, new networks will drive increasing demand for processing performance and efficiency. The SolidRun platform, leveraging the GTI AI acceleration technology, will provide a powerful and efficient way to build a new intelligent network bridging the gap between devices and the cloud."

Milpitas, CA-based Gyrfalcon bills itself as a developer of high-performance AI accelerators built on low-power, small-footprint chips. SolidRun is an Israeli Arm and x86 computing and network technology company focused on AI edge deployment and 5G.

"Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively," said Dr. Atai Ziv, CEO at SolidRun. "While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability."
