News & Insights for the AI Journey|Monday, November 18, 2019

Microsoft Azure Adds Graphcore’s IPU


Graphcore, the U.K. AI chip developer, is expanding collaboration with Microsoft to offer its intelligent processing units ...

AMD Gains Design Win for Tencent Server


Tencent, the Chinese cloud giant, said it would use AMD’s newest Epyc processor in its internally-designed server. ...

SC19: AI and Machine Learning Sessions Pepper Conference Agenda


AI and HPC are increasingly intertwined – machine learning workloads demand ever increasing compute power – so ...

AI Inference Benchmark Bake-off Puts Nvidia on Top


MLPerf.org, the young AI-benchmarking consortium, has issued the first round of results for its inference test suite. ...

Happening Now

Silicon

November 15, 2019
Graphcore, the U.K. AI chip developer, is expanding collaboration with Microsoft to offer its intelligent processing units on the Azure cloud, making Microsoft the first large public cloud vendor to offer the IPU designed for machine learning workloads. Azure support for IPUs is the culmination of more than two years of collaboration between the software giant (NASDAQ: MSFT) and ... Full article
November 13, 2019
Tencent, the Chinese cloud giant, said it would use AMD’s newest Epyc processor in its internally-designed server. The design win adds further momentum to AMD’s bid to erode rival Intel Corp.’s dominance of the global cloud and datacenter server markets. The partners announced this week that Tencent Cloud’s new servers will implement AMD’s “Star Lake” platform based on the ... Full article
November 13, 2019
AI and HPC are increasingly intertwined – machine learning workloads demand ever increasing compute power – so it’s no surprise the annual supercomputing industry shindig, SC19 at the Colorado Convention Center in Denver next week, has taken on a strong AI cast. As we noted recently (“Machine Learning Fuels a Booming HPC Market”) based on findings by industry watcher ... Full article
November 12, 2019
At its AI Summit today in San Francisco, Intel touted a raft of AI training and inference hardware for deployments ranging from cloud to edge and designed to support organizations at various points of their AI journeys. The company revealed its Movidius Myriad Vision Processing Unit (VPU), codenamed “Keem Bay,” for edge media, computer vision and inference applications. The ... Full article
November 7, 2019
MLPerf.org, the young AI-benchmarking consortium, has issued the first round of results for its inference test suite. Among organizations with submissions were Nvidia, Intel, Alibaba, Supermicro, Google, Huawei, Dell and others. Not bad considering the inference suite (v.5) itself was just introduced in June. Perhaps predictably, GPU powerhouse Nvidia quickly claimed early victory issuing a press release coincident with the ... Full article
November 6, 2019
Nvidia has launched what it claims to be the world’s smallest supercomputer, an addition to its Jetson product line with a credit card-sized (70x45mm) form factor delivering up to 21 trillion operations/second (TOPS) of throughput, according to the company. The Jetson Xavier NX module consumes as little as 10 watts of power, costs $399 and is designed to be ... Full article
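From the figures quoted here (up to 21 TOPS of throughput at as little as 10 watts), a rough efficiency number follows directly. Note this is a back-of-envelope sketch: the peak throughput and minimum power figures may not be achievable simultaneously.

```python
# Back-of-envelope efficiency for the Jetson Xavier NX, using the
# article's figures. Both numbers are best-case and may not hold at once.
tops = 21.0    # peak throughput, trillion operations per second
watts = 10.0   # minimum power draw

efficiency = tops / watts  # TOPS per watt
print(f"{efficiency:.1f} TOPS/W")  # → 2.1 TOPS/W
```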
October 29, 2019
A potential interim step between conventional semiconductors and quantum devices has emerged, promising improved information processing schemes that outperform current electronic charge- and spin-based chip architectures. The emerging quantum process dubbed “valleytronics” focuses on low energy “valleys” or extremes in the electronic band structure of semiconductors. Those valleys of electrons can be used to encode, process and store information, ... Full article
October 16, 2019
GPUs are famously expensive – high end Nvidia Teslas can be priced well above $10,000. Now a New York startup, Paperspace, has announced a free cloud GPU service for machine/deep learning development on the company’s cloud computing and deep learning platform. Designed for students and professionals learning how to build, train and deploy machine learning models, the service can ... Full article
October 15, 2019
Dario Gil, IBM’s relatively new director of research, painted an intriguing portrait of the future of computing along with a rough idea of how IBM thinks we’ll get there at last month’s MIT-IBM Watson AI Lab’s AI Research Week held at MIT. Just as Moore’s law, now fading, was always a metric with many ingredients baked into it, Gil’s ... Full article
October 15, 2019
The ability to share and analyze data while protecting patient privacy is giving medical researchers a new tool in their efforts to use what one vendor calls “federated learning” to train models based on diverse data sets. To that end, researchers at GPU leader Nvidia (NASDAQ: NVDA) working with a team at King’s College London came up with a ... Full article
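The core idea behind federated learning can be sketched in a few lines: each site trains on its own data and shares only model weights, which a coordinator combines, weighting each site by how much data it trained on. The function and numbers below are purely illustrative, not the actual Nvidia/King's College London implementation.

```python
# Minimal sketch of federated averaging ("FedAvg"), the idea underlying
# federated learning: raw patient data never leaves a site; only model
# weights are shared and combined. All names/numbers are hypothetical.

def federated_average(local_weights, local_counts):
    """Combine per-site model weights into a global model, weighting
    each site by the number of samples it trained on."""
    total = sum(local_counts)
    n_params = len(local_weights[0])
    global_weights = [0.0] * n_params
    for weights, count in zip(local_weights, local_counts):
        for i, w in enumerate(weights):
            global_weights[i] += w * (count / total)
    return global_weights

# Two hypothetical hospitals with different data volumes:
site_a = [0.2, 0.4]   # model weights after training on 100 records
site_b = [0.6, 0.8]   # model weights after training on 300 records
print(federated_average([site_a, site_b], [100, 300]))
```

The weighting matters: a site with three times the data contributes three times as much to the global model.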
October 9, 2019
An HPC cluster with deep learning techniques will be used to process petabytes of scientific data as part of workload-intensive projects spanning astrophysics to genomics. AI partners Intel (NASDAQ: INTC) and Lenovo (OTCMKTS: LNVGY) said they are providing the Flatiron Institute of New York City with high-end servers running on Intel second-generation Xeon Scalable processors and the chip maker’s ... Full article
October 1, 2019
DARPA will seek to unclog the networking bottlenecks that are hindering wider use of powerful hardware in computing-intensive applications. The Pentagon research agency has unveiled another in a series of post-Moore’s Law computing initiatives, this one seeking an overhaul of the network stack and interfaces that fall well short of connecting high-end processors with external networks and the data-driven applications ... Full article
September 30, 2019
New York is always an exciting, energetic city to visit and ... was made even more so as I attended the (recent) ‘HPC & AI on Wall Street’ conference, which HPCWire is now championing. It was well worth the train ride from Boston and interesting to see the varied mix of attendees present and hear how HPC and AI are ... Full article
September 26, 2019
Chiplet-based designs, so called because they combine multiple smaller dies, or chiplets, into a single package, are increasingly seen as addressing the inability of general-purpose CPUs to handle skyrocketing performance requirements. Lower-cost chiplets have helped address the need for workload-specific accelerators, proponents note. Those workload requirements prompted the Open Compute Project (OCP) to launch a chiplet design project in March 2019 aimed ... Full article
September 24, 2019
The discussion surrounding enterprise adoption of AI technologies is shifting from if to when, and what mix of IT infrastructure will be required to deploy and scale machine learning and other workloads. HPC vendors that have been making inroads in the enterprise market view AI as another opportunity. Among them is supercomputer leader Cray Inc. (NASDAQ: CRAY) which released ... Full article
September 20, 2019
Graphics processor acceleration in the form of G4 cloud instances have been unleashed by Amazon Web Services for machine learning applications. AWS (NASDAQ: AMZN) on Friday (Sept. 20) announced general availability of the new Elastic Compute Cloud instance providing access to Nvidia’s (NASDAQ: NVDA) T4 Tensor Core GPUs, which are based on the Turing architecture. The EC2 instances are available ... Full article
September 17, 2019
Dell Technologies rolled out redesigned servers this week based on AMD’s latest Epyc processor that are geared toward data-driven workloads running on increasingly popular multi-cloud platforms. Dell, which has seen its lead shrink in the contracting global server market, is banking on AMD’s 7-nm Rome server processor introduced in August to provide the bandwidth and computational horsepower needed to scale ... Full article
September 12, 2019
Machine learning models running on everything from cloud platforms to mobile phones are posing new challenges for developers faced with growing tool complexity. Google’s TensorFlow team unveiled an open-source machine learning compiler called MLIR in April to address hardware and software fragmentation. MLIR, for Multi-Level Intermediate Representation, has now been transferred to the LLVM Foundation, a non-profit group that ... Full article
September 9, 2019
The global server market contracted for the first time since 2016 after a year of historic growth, signaling an abundance of IT capacity and economic uncertainty as global trade frictions grow. Despite an 11.6 percent year-on-year decline, International Data Corp. also reported in its latest quarterly survey of the worldwide server market that Hewlett Packard Enterprise (NYSE: HPE) gained ... Full article
August 28, 2019
Could wafer-scale silicon from Cerebras Systems be the first “supercomputer on a chip” worthy of the designation? Last week at Hot Chips at Stanford University, the Silicon Valley startup debuted the largest chip ever built, a 46,225 square millimeter wafer packing 1.2 trillion transistors. Cerebras says the chip’s 400,000 AI-optimized cores can train models 100-1,000 times faster than the ... Full article

