Advanced Computing in the Age of AI | Sunday, May 29, 2022


Nvidia Dominates MLPerf Inference, Qualcomm Also Shines, Where’s Everybody Else?

MLCommons today released its latest MLPerf inferencing results, with another strong showing by Nvidia accelerators inside a diverse array of systems. Roughly four years old, MLPerf still struggles to ...

AWS Boosting Performance with New Graviton3-Based Instances Now Available in Preview

Three years after unveiling the first generation of its AWS Graviton chip-powered instances in 2018, Amazon Web Services just announced that the third generation of the processors – the ...

Intel’s 30x AI Performance Aim for Xeon Sapphire Rapids CPUs May Not Solve All AI Needs: Analysts

With its upcoming Intel Sapphire Rapids CPUs, designed as the next generation of Intel Xeon CPUs after Ice Lake and slated for release in 2022, chipmaker Intel Corp. is ...

Nvidia GPUs Stay in Lead in Latest MLPerf Inference Results, but CPUs and Intel Gaining Ground

Nvidia again dominated the latest round of MLPerf inference benchmark (v 1.1) results when they were unveiled Sept. 23 (Thursday), sweeping the top spots in the closed data center ...

With $70M in New Series C Funding, Mythic AI Plans Mass Production of Its Inferencing Chips

Six months after unveiling its first M1108 Analog Matrix Processor (AMP) for AI inferencing, Mythic AI has just received a new $70 million Series C investment round to bring ...

Intel CPUs Gaining Optimized Deep Learning Inferencing from Deci in New Collaboration

Intel Corp. and deep learning startup Deci are partnering to help enterprises dramatically optimize inferencing and make their deep learning models more efficient and faster using Intel CPUs. The ...
EnterpriseAI