News & Insights for the AI Journey|Monday, June 24, 2019

Containers Emerge as Deep Learning Tool 

(Tashatuvango/Shutterstock)

Application containers are now being tuned for specific workloads, such as deep learning frameworks, that can be incorporated into cloud-native enterprise apps.

Amazon Web Services and GPU leader Nvidia are both offering Docker containers geared to deep learning frameworks.

For example, AWS unveiled a new tool this week called Deep Learning Containers. The Docker-based images are intended for model training and inference using Apache MXNet or TensorFlow. AWS said it plans to add other deep learning frameworks.

Jeff Barr, the public cloud giant’s chief technology evangelist, said the impetus for deep learning containers originated with customers using its container and Kubernetes orchestrator services to deploy TensorFlow workloads to the cloud.

“While we were at it, we optimized the images for use on AWS with the goal of reducing training time and increasing inferencing performance,” Barr noted in a blog post.

AWS (NASDAQ: AMZN) said the Docker images are pre-configured for deep learning development and can be used to set up specific cloud environments and workflows on AWS container and Kubernetes orchestration services.

Multiple deep learning containers are available, Barr said, based on either MXNet or TensorFlow frameworks as well as training or inference modes using either CPUs or GPUs. AWS users can train on a single node or a multiple-node cluster.
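As a hedged sketch of how these prebuilt images are used, the commands below pull a TensorFlow GPU training image from the Amazon ECR registry that hosts the Deep Learning Containers and launch a training job on a GPU instance. The registry path follows the pattern AWS documents, but the region, image tag, and the `train.py` script are illustrative assumptions, not taken from the announcement:

```shell
# Illustrative only: the region, image tag, and training script below
# are assumptions; consult the AWS Deep Learning Containers docs for
# the exact image URIs available in your region.

# Authenticate the Docker client against the ECR registry hosting
# the Deep Learning Containers images (2019-era CLI syntax).
$(aws ecr get-login --no-include-email --region us-east-1 \
    --registry-ids 763104351884)

# Pull a TensorFlow GPU training image.
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:1.13-gpu-py3

# Run a training script mounted from the host; nvidia-docker exposes
# the instance's GPUs inside the container.
nvidia-docker run -v "$PWD":/workspace \
    763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:1.13-gpu-py3 \
    python /workspace/train.py
```

The same image can be referenced from an Amazon ECS task definition or a Kubernetes pod spec on EKS, which is the multi-node path Barr describes.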

The AWS deep learning containers run on top of Ubuntu 16.04, the company said.

An example included with the AWS blog post shows the deep learning containers deploying "a pre-trained model to perform inferencing" on the AWS cloud, Barr noted.
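Barr's walkthrough deploys the model on Amazon EKS; once an inference container like this is serving, clients typically query it over TensorFlow Serving's REST API. A minimal sketch, assuming a locally exposed serving port and a hypothetical model name (both illustrative, not from the post):

```shell
# Query a TensorFlow Serving prediction endpoint exposed by an
# inference container. The port (8501) is TF Serving's default REST
# port; the model name "half_plus_two" is an illustrative assumption.
curl -X POST http://localhost:8501/v1/models/half_plus_two:predict \
    -d '{"instances": [1.0, 2.0, 5.0]}'
```

The response is a JSON object with a `predictions` array, one entry per input instance.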

As more containers are used to deploy custom machine learning environments that run consistently across different platforms, AWS notes that building and testing container images specifically for deep learning remains difficult and error-prone. The custom deep learning container images are designed to eliminate deployment issues such as software dependency conflicts and version incompatibilities, so machine learning workloads can be scaled across a cluster of cloud instances.

Meanwhile, Nvidia’s container runtime for Docker enables GPU-accelerated applications that it says are portable across multiple machines.

Nvidia (NASDAQ: NVDA) released a user guide for its deep learning container approach earlier this month.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
