
Making Sense and Cents of AI 
Sponsored Content by Dell Technologies

Today, more than a quarter of organizations report full‑scale deployment of five or more types of AI applications. And the market is booming, with an anticipated 40.2% compound annual growth rate (CAGR) through 2028. AI algorithms are producing value and propelling digital progress using millions of data points collected from sources everywhere.

But in a typical organization, only a small portion of available data is used to guide decision making. Widening the spectrum of usable data lets organizations tap its hidden potential, leveraging powerful AI applications to answer bigger questions and make bigger discoveries, faster, keeping pace with competition coming from every angle.

Two sides of AI

AI is a complex set of technologies underpinned by machine learning (ML) and deep learning (DL) algorithms. Together, AI, ML and DL enable deeper insights that drive more value from data and can fuel revenue growth through predictive and prescriptive analytics on a massive scale. Organizations gain deeper, more accurate insights to quickly identify trends and patterns that would otherwise be difficult and time‑consuming — or impossible — to detect.

  • AI is an umbrella term that describes a machine’s ability to act autonomously and/or interact in a human‑like way.
  • ML refers to the ability of a machine to perform a programmed function with the data given to it, getting progressively better at the task over time as it analyzes more data and receives feedback from users or engineers.
  • DL uses artificial neural networks (ANNs) and deep neural networks (DNNs), inspired by the human brain, to process huge volumes of data. ANNs and DNNs allow the machine to determine on its own if a prediction is accurate so that it can train itself without human intervention.
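To make the distinction concrete, here is a minimal sketch of the deep learning idea described above, assuming Python with PyTorch; the data is synthetic and the tiny network is illustrative only, not a Dell or MLPerf workload.

```python
# Minimal deep learning sketch (assumes PyTorch is installed).
# A tiny feed-forward network learns a pattern from synthetic data --
# illustrative only, not a production or benchmark workload.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 1,000 samples, 8 features, binary label.
X = torch.randn(1000, 8)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# A small artificial neural network with two hidden layers.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# The model improves as it processes more data and feedback (the loss signal).
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"training accuracy after 20 epochs: {accuracy.item():.2f}")
```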

The data‑to‑insight journey to AI

At a basic level, AI models are designed to crunch data faster and smarter to deliver actionable business insights at the touch of a button. But before answers stream in, a lot of work has to go on behind the scenes. The typical AI model lifecycle is composed of five steps:

  • Problem: The journey starts with a problem that needs to be solved. This usually takes the form of predicting something (such as customer behavior or machine maintenance requirements) or improving a process (such as providing better customer service or streamlining the supply chain).
  • Possibilities: The next step is exploring what’s possible. This involves determining what type of AI model can best solve the problem, based on the available data.
  • Model building: Next, data scientists build and train the model using clean, usable data. Once the model is trained on existing data, it’s ready for the inferencing phase, using live data to continually improve the model and create actionable results (a minimal end-to-end sketch follows this list).
  • Integration: For the insights to be useful, the model needs to be integrated into workflows. This usually involves putting it into a data production environment and building a dashboard on top of it that can be integrated into a user interface.
  • Improvement: One of the biggest differences between traditional applications and AI is that AI models continually learn and change based on the new data coming in, without user involvement.
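As a rough illustration of the model building and integration steps, the hypothetical sketch below assumes Python with scikit-learn and joblib: a model is trained on historical data, persisted, and then reloaded in a production setting to score live records. The data, file name and features are placeholders, not part of any real deployment.

```python
# Hypothetical end-to-end sketch of the model lifecycle
# (assumes scikit-learn and joblib are installed).
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for clean historical data (e.g. customer behavior features).
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Model building: fit on existing data and check a holdout set.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Integration: persist the model so a service or dashboard can load it.
joblib.dump(model, "churn_model.joblib")

# ... later, in the production environment, score incoming live data:
deployed = joblib.load("churn_model.joblib")
live_batch = rng.normal(size=(3, 10))   # stand-in for live records
print("predicted probabilities:", deployed.predict_proba(live_batch)[:, 1])
```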

Get the data to make a great gear investment

MLCommons has a suite of AI inferencing and training benchmarks that hardware and software vendors can use to demonstrate and optimize the performance of their AI systems. These global competitions are hosted every few months, alternating between inferencing and training, providing comparison data on how servers and public cloud computing systems perform across a wide variety of AI workloads including image classification, object detection, natural language processing, recommendation engines and more.
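The snippet below is not the MLPerf harness itself, but a simplified sketch of the kind of measurement an inference benchmark boils down to: timing a model over many queries and reporting throughput and tail latency. It assumes Python with PyTorch and uses an untrained placeholder model in place of a real workload.

```python
# Simplified illustration of what an inference benchmark measures
# (not the actual MLPerf harness). Assumes PyTorch is installed.
import time
import torch
import torch.nn as nn

# Placeholder image-classification-style model and input batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1000)).eval()
batch = torch.randn(8, 3, 64, 64)

latencies = []
with torch.no_grad():
    for _ in range(10):           # warm-up queries, not timed
        model(batch)
    for _ in range(200):          # timed queries
        start = time.perf_counter()
        model(batch)
        latencies.append(time.perf_counter() - start)

latencies.sort()
throughput = len(latencies) * batch.shape[0] / sum(latencies)
p99 = latencies[int(0.99 * len(latencies)) - 1]
print(f"throughput: {throughput:.1f} samples/sec, p99 latency: {p99 * 1000:.2f} ms")
```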

MLPerf Inference

For MLPerf Inference v2.1, Dell Technologies submitted results for a wide variety of server CPU-accelerator combinations to provide the data needed for you to make the best choices for your workloads and environments.

Working with Dell Technologies: “It really is a partnership,” says Ralph Zottola, Ph.D., Assistant VP at UAB. “Dell has the ability to work holistically, to take a big-picture engineering approach. It’s not just about the hardware. They work to identify the right type of resources, connections and services that we will need. But most importantly, they are a partner who helps us think through problems and find ideal solutions.”

Dell Technologies works with customers and partners including AMD, Deci, Intel, NVIDIA and Qualcomm to optimize software-hardware stacks for performance and efficiency. Take a closer look at the PowerEdge R750xa performance per GPU numbers. Zoom in on the Dell PowerEdge XE8545 performance per watt in nine categories! Don’t miss the ruggedized PowerEdge XR12 for performance per watt at the edge in telco, utilities, marine and defense.

MLPerf Training

In the latest round of MLPerf Training, Dell Technologies submitted 42 results across 12 system configurations. Results are available for single-node and multi-node PowerEdge XE8545, R750xa and DSS8440 servers with NVIDIA A100-PCIe-80GB, A30, A100-SXM-40GB and A100-SXM-80GB GPUs on MLPerf training models.

  • See how AI training scales. As AI training continues to scale with the need for speed, the Dell Technologies Innovation Lab team submitted training results with up to 32x PowerEdge XE8545 servers with 128 NVIDIA A100 SXM GPUs in the TOP500 Rattler supercomputer to show scalable performance.
  • Evaluate price/performance. Results are available with different operating systems, with/without NVIDIA NVLink, NVBridge and SXM, SSDs and more.
  • Leverage Dell engineering expertise. Easily transfer results to your AI or HPC environment with scripts; Dell Technologies engineers created a script for Singularity that is available for download.

You’ve got the power . . . numbers. Across multiple servers and accelerators in different configurations, Dell Technologies submitted power consumption metrics in the Datacenter and Edge suites, in both the Open and Closed divisions. With this data, you can gain insight into operating costs and total cost of ownership (TCO), while creating an opportunity to compute more and use less.
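To show how power numbers translate into operating cost, here is a back-of-the-envelope sketch in Python. Every figure in it (power draw, utilization, electricity price, throughput) is a hypothetical placeholder, not a measured Dell or MLPerf result.

```python
# Hypothetical operating-cost estimate from power data -- all numbers are
# placeholders for illustration, not measured benchmark results.
server_power_kw = 2.5              # assumed average draw under AI load, in kW
utilization = 0.70                 # assumed fraction of time the server is busy
electricity_usd_per_kwh = 0.12     # assumed electricity price
hours_per_year = 24 * 365

energy_kwh = server_power_kw * utilization * hours_per_year
annual_energy_cost = energy_kwh * electricity_usd_per_kwh
print(f"estimated annual energy: {energy_kwh:,.0f} kWh")
print(f"estimated annual energy cost: ${annual_energy_cost:,.0f}")

# Performance per watt compares systems on work done per unit of power.
samples_per_sec = 10_000           # assumed benchmark throughput
perf_per_watt = samples_per_sec / (server_power_kw * 1000)
print(f"performance per watt: {perf_per_watt:.2f} samples/sec per W")
```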

Dig into the engineering test results. Test for yourself in one of the worldwide Dell Technologies Customer Solution Centers. Collaborate with the HPC & AI Innovation Lab and/or tap into one of the HPC & AI Centers of Excellence.

 
