Advanced Computing in the Age of AI | Thursday, July 16, 2020

Inference

Xilinx Keeps Pace in AI Accelerator Race

FPGAs are increasingly used to accelerate AI workloads in datacenters for tasks like machine learning inference. A growing list of FPGA accelerators is challenging datacenter GPU deployments, promising to …

NeoML Released as TensorFlow Alternative

A new open source library for training machine learning models is billed as rivaling the performance of AI models trained with established libraries like TensorFlow, especially models running on …

SiFive Adds Tools for Cloud-Based Chip Design

Chip designers are drawing on new cloud resources along with conventional electronic design automation (EDA) tools to accelerate IC templates from tape-out to custom silicon. Among the challengers to …

AI Inference Benchmark Bake-off Puts Nvidia on Top

MLPerf.org, the young AI-benchmarking consortium, has issued the first round of results for its inference test suite. Among the organizations with submissions were Nvidia, Intel, Alibaba, Supermicro, Google, Huawei, Dell …

AWS Upgrades Nvidia GPU Cloud Instances for Inferencing, Graphics

Graphics processor acceleration in the form of G4 cloud instances has been unleashed by Amazon Web Services for machine learning applications. AWS (NASDAQ: AMZN) on Friday (Sept. 20) announced …

AI Used to Convert Brain Signals to Speech

A deep learning framework developed by university researchers aims to convert brain signals recorded by an implant into synthesized speech, aiding those who have lost the ability to speak …

Google Cloud Goes Global with Nvidia T4 GPUs

Nvidia’s T4 GPUs, unveiled earlier this year for accelerating workloads such as AI inference and training, are making their “global” debut as cloud instances on Google Cloud. Google (NASDAQ: …

Alexa Gets a Filter

Amazon’s ubiquitous voice-controlled Alexa home assistant has been re-trained by researchers using Nvidia GPUs designed for AI training and inference. The overhaul resulted in what company and university researchers …

AWS Upgrades its GPU-Backed AI Inference Platform

The AI inference market is booming, prompting well-known hyperscaler and Nvidia partner Amazon Web Services to offer a new cloud instance that addresses the growing cost of scaling inference. …

Inference Engine Uses SRAM to Edge AI Apps

Flex Logix, the embedded FPGA specialist, has shifted gears by applying its proprietary interconnect technology to launch an inference engine that boosts neural inferencing capacity at the network edge …