
Docker Goes Enterprise As Rocket Containers Launch 

The momentum for Docker software containers is not just building, but exploding, as the eponymous company behind the technology builds its partner ecosystem and offers enterprises a compelling alternative to full-on server virtualization. It is ironic that at precisely this moment an upstart vendor of a hyperscale-tuned Linux operating system called CoreOS is breaking away from the Docker movement and offering its own software container alternative, called Rocket.

Docker is hosting its second user and partner conference this week in Amsterdam, and the company is using the event to preview an on-premises Hub Enterprise application and container repository to complement its existing hosted version as well as a set of orchestration services that it has created to manage Docker software containers through their lifecycle from development to deployment to retirement.

This sophisticated, multi-container orchestration is something that enterprise customers are driving Docker to do, says Ben Golub, the company's CEO, as they test software containers in proofs of concept and in production. The enthusiasm for Docker is staggering. Through the first DockerCon in June of this year, the Docker software container tools had been downloaded 3 million times since the project started in April 2013. Fast forward a mere six months, and there have been 57 million additional downloads, a factor of 44X growth in the monthly rate if you average it out. This is a textbook hockey stick curve if there ever was one, and perhaps we have not seen a curve like this since Linux took off in commercial settings in about 1998 or so. "It's an insane number," Golub said with a laugh when talking about the momentum of the project with EnterpriseTech.
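That 44X figure checks out on the back of an envelope: roughly 14 months elapsed between the project's April 2013 start and the June DockerCon, so the monthly download rate can be compared across the two periods.

```python
# Sanity check on the download growth multiple cited above.
first_period_downloads = 3_000_000    # April 2013 through the June DockerCon
first_period_months = 14              # roughly 14 months

second_period_downloads = 57_000_000  # the six months after DockerCon
second_period_months = 6

early_rate = first_period_downloads / first_period_months     # ~214,000/month
recent_rate = second_period_downloads / second_period_months  # 9,500,000/month

growth_multiple = recent_rate / early_rate
print(round(growth_multiple))  # ~44, matching the "factor of 44X" figure
```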

Being an open source project that doesn't have licensing to track usage means Docker can only get at the nature of the installed base from oblique angles, but there are a number of indicators that Docker is getting lots of enterprise action. First, Golub says that the company has identified thousands of blog posts from companies saying they are using Docker. In June, Groupon and eBay described how they were using Docker in production, and at this week's DockerCon, ING Bank, Societe Generale, and the BBC will be among those who will stand up and talk about how they are putting Docker software containers to use. Docker itself has over 200 enterprises in the pipeline for its support licenses and services, ranging from PoCs to production environments, says Golub.

"This has very quickly become an enterprise technology," Golub says. "Of course, people are being measured about it. The more conservative they are the more likely enterprises are going to start with their newer apps or to deploy Docker onto VMs instead of bare metal. But we are still exceptionally excited about what the activity is."

Docker is not just an abstraction layer for runtimes but also a kind of packaging system for application components. Companies want to be able to start with a single container running on a single system and manage an app running inside of it, but they also want to be able to create applications that span multiple containers that run across clusters of servers – and sometimes multiple datacenters – and manage them all as a logical whole. (Google's open source Kubernetes project was launched earlier this year precisely to provide this functionality for Docker containers running atop its Google Compute Cloud.)
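At the single-container end, the packaging unit is simple: a Dockerfile describes an application component and its dependencies, and the resulting image runs the same way anywhere the Docker Engine does. A minimal sketch, with the base image and application names purely illustrative:

```dockerfile
# Illustrative Dockerfile: package a small Python service as a container image.
# Base image pulled from a Docker registry:
FROM python:2.7
# Copy the application source into the image:
ADD . /app
WORKDIR /app
# Install the app's dependencies at image build time:
RUN pip install -r requirements.txt
# Document the port the service listens on:
EXPOSE 8000
# The process the container runs at launch:
CMD ["python", "app.py"]
```

Build the image once and every host, physical or virtual, runs an identical copy of the component.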

"This obviously creates a need for orchestration writ large, composition, networking, and service discovery, and this is very clearly a need that our users are expressing," Golub explains. "But what they also tell us very clearly is that they don't want to break portability when they go from a single container to a multi-container mode, and they don't want to break compatibility to leverage this great ecosystem that is out there."

Docker has hundreds of partners, up and down the software stack, that are supporting Docker containers, with over 65,000 applications uploaded into the Docker Hub and over 18,000 different tools interfacing with Docker in some fashion. Significantly, Microsoft has promised that Docker containers will run atop the next iteration of Windows Server. (Tentatively called Windows Server 10, but possibly branded something else when it formally arrives.) With Linux support from the get-go and Windows support on the way, Docker has covered the lion's share of the X86 system market, and should ARM servers take off, Docker will no doubt make that jump, too. The APIs for Docker are open, so not only does it have hooks for Kubernetes management tools, but it will also support the Mesos cluster and application management tool from Mesosphere (inspired by Google's own Borg and Omega cluster automation tools) and will eventually allow for VMware's virtualization management tools to order Docker containers around, too.

Golub says that some of the management tools out there today for Docker work well on specific sets of infrastructure or public clouds, but they don't span all clouds or infrastructure. Or, in other cases, they do parts of the orchestration, such as job scheduling and clustering but not application composition and managing hosts. Three new modules for an upcoming release of the Docker stack are aimed at providing more portable orchestration.

The first component is called Docker Machine, and it provides a single interface to provision the Docker Engine, the core runtime environment for the Docker container system, onto a bare metal machine running Linux, a virtual machine, or a compute instance on a public cloud. The idea is to manage all three from the same user interface inside the Docker Client, whether the host is local or remote, physical or virtual. The Docker Machine can get a Docker Engine up and running in seconds, and has a back-end API that lets any infrastructure or service provider reach in and provision using their own tools as well.
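From the client's point of view, the same command provisions any of the three targets. A sketch of the preview-era workflow follows; the command is shown here as docker-machine, and the driver names and flags reflect the alpha and may differ in the shipping release:

```shell
# Provision a Docker Engine onto a fresh local VirtualBox VM:
docker-machine create --driver virtualbox dev

# The same interface targets a public cloud instance
# (the access token is a placeholder):
docker-machine create --driver digitalocean \
    --digitalocean-access-token=<token> staging

# Point the regular Docker client at whichever host Machine provisioned:
export DOCKER_HOST=$(docker-machine url staging)
docker ps
```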

The second component is called Docker Swarm, and this clusters hosts running the Docker Engine into a pool of resources for executing Docker containers, abstracts that cluster, and based on resource utilization of the applications in the containers and the availability of raw capacity in the cluster, places those Docker containers on the best hosts for the job. Docker Swarm can recover from host failures and reload containers and rebalance the cluster, and there are a set of standard and customizable governors on capacity usage to fine tune this all. Down the road, Docker Swarm will be opened up with a back-end API to hook into various cluster management tools so Docker can be a subset of a larger cluster that is not necessarily virtualized and if it is, not necessarily with Docker.
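The alpha workflow Docker has previewed runs Swarm itself as a container on each host, with a shared token tying the cluster together. A hedged sketch, with addresses as placeholders and flags subject to change before release:

```shell
# Generate a cluster token that the nodes will share:
docker run --rm swarm create
# -> returns <cluster_id>

# On each host, join the Engine to the cluster:
docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>

# On one host, start the Swarm manager that schedules containers:
docker run -d -p 2376:2375 swarm manage token://<cluster_id>

# The standard Docker client now talks to the whole cluster as one Engine;
# Swarm picks the best host for the container:
docker -H tcp://<manager_ip>:2376 run -d redis
```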

The final new component of the Docker stack is called Docker Compose, and as the name suggests, it is a composition tool that allows programmers to create composite applications from the 65,000-plus app containers out on the public Docker Hub or from in-house repositories. Docker says that the distributed apps built with Docker Compose will be independent of the underlying infrastructure and therefore portable, meaning if you compose an application for in-house use, you can point the composition at a public cloud supporting Docker Engine and it will just deploy out to virtual machines and work.
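A composition file for a two-container application might look along these lines, based on the preview-era YAML format; the service names, images, and ports are illustrative:

```yaml
# docker-compose.yml: a two-container application described as one unit.
web:
  build: .          # build the web container from a local Dockerfile
  ports:
    - "8000:8000"   # host:container port mapping
  links:
    - db            # the name "db" resolves to the database container
db:
  image: postgres   # pull a stock database image from the public Hub
```

A single command then brings up both containers together, and because the file says nothing about the underlying hosts, the same description deploys in-house or on a cloud provider unchanged.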

These three new components are in alpha testing now and can be used in conjunction with the current Docker 1.3.2 release. Golub says that Docker 1.4 is expected to be released this Friday, and that the software is on a roughly two-month cadence for updates. The new components will be released in commercial form in a release subsequent to Docker 1.4, but the precise release has not yet been determined. The Docker Machine, Swarm, and Compose components are expected to be ready for production sometime in the second quarter of 2015. The APIs for Docker Machine are exposed now, and the APIs for the other components will be opened up sometime in the first half of next year. Pricing for these features has not been set yet.

In addition to the preview on the new features, Docker is also previewing a private version of its Docker Hub repository, called Docker Hub Enterprise. This private repository is being demoed this week and will be available to early access customers in February next year.

As you might expect, enterprises do not want to plunk their Docker application containers out into a public repository, even if Docker can shield them from the public (and will do so for a fee). They want their own self-contained Docker system that runs behind their firewalls, hooks into their own identity and access management front ends, and plugs into their compliance software as well. The Docker Hub Enterprise repository will have hooks out to the public Docker Hub repository so programmers can mix and match internal code and external apps in containers in their environment in a seamless fashion. Microsoft Azure, Amazon Web Services, and IBM SoftLayer will be offering Docker Hub Enterprise in a hosted fashion for enterprise customers. Pricing has not been set for the private repository for Docker containers, but Golub tells EnterpriseTech that the plan is to offer an annual subscription or a longer-term license that scales with the number of hosts and the size of the repositories that a company deploys. Private repositories in Docker Hub cost $7 per month for up to five images; public repositories are free.

Taking Off Like A Rocket

This is all a bit much for CoreOS, which peddles a stripped-down, tuned-up variant of the Linux operating system by the same name. CoreOS was an immediate and enthusiastic supporter of Docker when it launched in early 2013, but now thinks that Docker is moving away from its original mission of supplying a simple, compatible container environment for Linux platforms.

In a blog post early this week ahead of the DockerCon Europe event, Alex Polvi, CEO at CoreOS, announced the formation of the Rocket project and its App Container runtime environment and software image definition, which he said was more faithful to the original plans of Docker when it was formed by public cloud provider dotCloud.

"We thought Docker would become a simple unit that we can all agree on," wrote Polvi. "Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned."

Polvi also contends that the Docker process model, where a central daemon controls security and the composition of the containers as they run, is "fundamentally flawed." That said, he conceded there was a possibility that the App Container format could be adopted by Docker and therefore allow some kind of interoperability between the two software container formats – much as VMware is extending its virtual machine management tools to take control of Docker containers.

Docker is watching the Rocket project closely and keeping an open mind, and it does not seem much threatened by the effort. With that hockey stick adoption curve and $55 million in two rounds of venture funding this year from Greylock Partners, Sequoia Capital, Trinity Ventures, Insight Venture Partners, and former Yahoo CEO Jerry Yang, Docker is way out ahead and moving fast.

"Overall, competition is good and experimentation is good," Golub says. "We welcome an open community and I think that in this day and age, everybody is still trying to find the right level of abstraction and how they are going to add value in this ecosystem. We have tried very hard to have an open design process and deliver things in a modular way so that people can leverage Docker as a single container or leverage Docker with more complex orchestration capabilities, and the desire for more complex orchestration is clearly coming from the users. Other vendors would rather Docker didn't get into orchestration and focus on the single container format, and that's fine, they have their options. But we are following what the community is telling us."
