Advanced Computing in the Age of AI|Tuesday, February 18, 2020

AMAX DL Solutions Now Available with NVIDIA V100S Tensor Core GPUs 

FREMONT, Calif., Jan. 21, 2020 -- AMAX, a leading provider of deep learning (DL), high performance computing (HPC), and Cloud/IaaS servers and appliances, today announced that its BrainMax series of GPU solutions, including its Deep Learning Platforms, is now shipping with NVIDIA V100S Tensor Core GPUs.

Powered by the NVIDIA Volta architecture, AMAX computing solutions using V100S GPUs are some of the most powerful on the market for accelerating HPC, DL, and data analytics workloads. The combination of the Intel Xeon Scalable Processor series with NVIDIA V100S GPUs enables 6x the tensor FLOPS for DL inference.

The NVIDIA V100S GPU offers up to a 25% increase in memory bandwidth and higher FLOPS than the standard V100. Most scientific applications benefit from its mainstream interconnect, typically gaining a performance boost of up to 17%. The NVIDIA V100S delivers clock speeds of 1601 MHz and HBM2 DRAM speeds of 1.1 Gbps, which together provide over a terabyte per second of memory bandwidth (1,134 GB/s). The combined graphics and memory clock speeds help the V100S GPU dramatically accelerate the performance of HPC and AI workloads.
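As a back-of-the-envelope check, the 1,134 GB/s figure follows from the quoted HBM2 clock. The sketch below assumes a 4096-bit HBM2 interface (four 1024-bit stacks, standard for this GPU class) and treats the quoted "1.1 Gbps" as a 1106 MHz memory clock with double-data-rate transfers; those assumptions are ours, not from the announcement.

```python
# Hypothetical sanity check of the quoted V100S memory bandwidth.
# Assumptions (not from the press release): 4096-bit HBM2 bus, DDR signaling.
memory_clock_hz = 1106e6   # quoted ~1.1 Gbps figure read as a 1106 MHz clock
bus_width_bits = 4096      # assumption: 4 x 1024-bit HBM2 stacks
ddr_factor = 2             # HBM2 transfers data on both clock edges

bandwidth_gb_s = memory_clock_hz * ddr_factor * bus_width_bits / 8 / 1e9
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~1133 GB/s, in line with the quoted 1,134 GB/s
```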

AMAX solutions that will feature the NVIDIA V100S Tensor Core GPU include:

  • [INTELLI]Rack AI — [INTELLI]Rack AI is a turnkey machine learning cluster for training and inference at scale. The solution features up to 96x NVIDIA V100S GPU accelerators to deliver up to 448 TFLOPS of double-precision performance, 896 TFLOPS of single-precision performance, and 7,168 TOPS of Tensor performance. Delivered plug-and-play, the solution also features an all-flash data repository, 25G high-speed networking, [SMART]DC Data Center Manager, and an in-rack battery for graceful shutdown during a power loss.
  • BrainMax DL-E48A — The DL-E48A is a robust 4U 8x V100S GPU platform for HPC and DL workloads, delivering 65+ TFLOPS of double-precision, 131+ TFLOPS of single-precision, and 1,040+ TOPS of Tensor performance. The DL-E48A is also the industry's first accelerated GPU computing solution to feature a reconfigurable single- and dual-root-complex PCIe architecture, allowing hardware optimization on the fly for AI and DL training, inference, HPC compute, rendering, and virtualization applications.
  • BrainMax DSW-2 — The BrainMax DSW-2 Deep-Learning-in-a-Box workstation, with 4x V100S GPUs, provides everything a data scientist needs for DL development, delivering 32+ TFLOPS of double-precision, 65+ TFLOPS of single-precision, and 520+ TOPS of Tensor performance. It is optimized for compute-intensive, single-GPU workloads; scientific computing centers, higher education, and research institutions running HPC and AI (training/inference) workloads; and enterprises running mixed workloads.
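The per-system totals quoted above for the DL-E48A and DSW-2 are consistent with multiplying out NVIDIA's published per-GPU V100S figures (roughly 8.2 TFLOPS FP64, 16.4 TFLOPS FP32, and 130 TOPS Tensor). A quick sketch of that arithmetic, with the per-GPU numbers taken as an assumption rather than from this announcement:

```python
# Hypothetical check: derive system totals from per-GPU NVIDIA V100S specs.
# Per-GPU figures below are NVIDIA's published numbers, not from AMAX.
FP64_TFLOPS, FP32_TFLOPS, TENSOR_TOPS = 8.2, 16.4, 130

for system, gpus in [("BrainMax DL-E48A", 8), ("BrainMax DSW-2", 4)]:
    print(f"{system}: {gpus * FP64_TFLOPS:.1f} TFLOPS FP64, "
          f"{gpus * FP32_TFLOPS:.1f} TFLOPS FP32, "
          f"{gpus * TENSOR_TOPS} TOPS Tensor")
# DL-E48A works out to 65.6 / 131.2 / 1040, matching the quoted "65+/131+/1,040+"
# DSW-2 works out to 32.8 / 65.6 / 520, matching the quoted "32+/65+/520+"
```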

As an NVIDIA Elite Partner, AMAX offers a comprehensive line of GPU-integrated solutions optimized for deep learning at any scale. To schedule a technical consultation, please contact AMAX at [email protected] or visit www.amax.com to learn more about AMAX Deep Learning and Inference solutions.

About AMAX 

AMAX is a leading service provider of integrated supply chain manufacturing and orchestration. As a Foxconn Technology Group affiliate, we build trusted relationships and help our customers realize greater quality, time-to-market, and delivery at any scale. We lead through collaborative design and fully traceable, trusted, and visible manufacturing and supply chain solutions, and we bring high-performance computing infrastructure to the forefront of industrial digitalization. From new product introductions and full-scale production to global deployment and after-market services, our blend of people, process, and technology across North America, Europe, and Asia enables our customers to move product personalization and manufacturing closer to the edge of business consumption.


Source: AMAX 
