
AI inference

Better, More Efficient Neural Inferencing Means Embracing Change

Artificial intelligence is running into some age-old engineering tradeoffs. For example, increasing the accuracy of neural networks typically means increasing the size of the networks. That requires more compute ...
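That size/compute relationship is simple arithmetic: in a fully connected layer, the multiply-accumulate count is the product of the input and output widths, so widening a network to chase accuracy inflates inference cost superlinearly. A minimal Python sketch illustrates the point (the layer widths below are illustrative placeholders, not figures from the article):

    # Back-of-the-envelope: inference cost of a dense (fully connected) layer
    # grows with the product of its input and output widths, so scaling a
    # network up for accuracy scales compute superlinearly.
    # Layer widths are hypothetical examples, not from the article.

    def dense_layer_macs(in_features: int, out_features: int) -> int:
        """Multiply-accumulate operations for one dense layer forward pass."""
        return in_features * out_features

    def mlp_macs(widths: list[int]) -> int:
        """Total MACs for an MLP given its layer widths, e.g. [784, 512, 10]."""
        return sum(dense_layer_macs(a, b) for a, b in zip(widths, widths[1:]))

    base = [784, 512, 512, 10]    # hypothetical baseline network
    wide = [784, 2048, 2048, 10]  # hidden layers made 4x wider

    print(f"baseline MACs: {mlp_macs(base):,}")  # 668,672
    print(f"4x-wide MACs:  {mlp_macs(wide):,}")  # 5,820,416 -- roughly 9x

Quadrupling the hidden width here raises per-inference compute by roughly 9x, which is the kind of tradeoff the article refers to.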

AWS-Designed Inferentia Chips Boost Alexa Performance

Nearly two years after unveiling its Inferentia high-performance machine learning inference chips, Amazon has almost completed migrating the bulk of its Alexa text-to-speech ML inference workloads to ...

Photonics Processor Aimed at AI Inference

Silicon photonics is seeing greater innovation as demand grows for faster, lower-power chip interconnects in traditionally power-hungry applications like AI inferencing. With that in mind, scientists at Massachusetts ...

Xilinx Keeps Pace in AI Accelerator Race

FPGAs are increasingly used to accelerate AI workloads in datacenters for tasks like machine learning inference. A growing list of FPGA accelerators is challenging datacenter GPU deployments, promising to ...

AI Inference Benchmark Bake-off Puts Nvidia on Top

MLPerf.org, the young AI-benchmarking consortium, has issued the first round of results for its inference test suite. Among organizations with submissions were Nvidia, Intel, Alibaba, Supermicro, Google, Huawei, Dell ...

At Hot Chips, Intel Shows New Nervana AI Training, Inference Chips

At the Hot Chips conference last week, Intel showcased its latest neural network processor accelerators for both AI training and inference, along with details of its hybrid chip packaging ...