News & Insights for the AI Journey|Tuesday, May 21, 2019

How to Avoid Container Resource Allocation Problems 

Automation and planning empower enterprises to reap cost savings, innovation from investment in solutions from Docker and CoreOS in advanced scale computing environments.

As enterprise IT departments embrace containers to foster innovation and rapid app development, high performance computing professionals are seeing growing interest in the technology. But if organizations fail to allocate this resource correctly, they could actually slow – not speed – their infrastructures and development initiatives.

In HPC, the payoff (and risk) increases due to the size and scope of the systems, said Levi DeHaan, team lead and software programmer at Levi DeHaan Consulting.

Nick Espinosa

"We are going to see a shift from traditional virtualization to containerization like Docker or CoreOS’ Rocket. Most infrastructure admins are kind of going nuts over this, in the sense that this method will mitigate bandwidth load and overhead. I think we're going to see this explode into an infrastructure phenomenon sooner rather than later," added Nick Espinosa, CIO at BSSI2, via email. "Like an HPC super server being able to run 100 clusters, containers can deploy and push more applications and information without having to invest in more infrastructure until the demand for performance overwhelms them. This makes containerization a cost-mitigating factor for some time until the growth demands outweigh the available infrastructure for containers."

But adding too many containers can create a new problem, cautioned technologists. That's why it's important to address resource allocation challenges before they arise and service slows. To accomplish this, administrators must consider the three key technologies within containers – chroot, namespaces, and cgroups – which combine to build up the container as one unit, Nikolay Todorov, CTO of SiteGround Web Hosting, told EnterpriseTech.

"With the cgroups, an administrator can limit the CPU resources for a container to a single core or to a quarter of a core. This means that we do not limit the CPU resources to bare metal units; instead we limit the number of CPU seconds a container can use within a period of time," he said. "The same can be applied to both the memory resources of a single container and the IOPS."
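The arithmetic behind that kind of limit can be sketched briefly. The cgroup v1 CPU controller exposes a period file (cpu.cfs_period_us, the accounting window) and a quota file (cpu.cfs_quota_us, the CPU-microseconds a container may consume per window); the helper below is illustrative, not any particular vendor's tooling:

```python
# Sketch: translate a fractional-core limit into cgroup v1 CFS quota values.
# cpu.cfs_period_us is the accounting window; cpu.cfs_quota_us is how many
# CPU-microseconds the container may consume within each window.

CFS_PERIOD_US = 100_000  # the common default: a 100 ms accounting period

def cfs_quota_for(cores: float, period_us: int = CFS_PERIOD_US) -> int:
    """Return the cpu.cfs_quota_us value that caps a cgroup at `cores` CPUs."""
    if cores <= 0:
        raise ValueError("cores must be positive")
    return int(cores * period_us)

# A quarter of a core: the container may run 25 ms out of every 100 ms.
print(cfs_quota_for(0.25))  # 25000
# A single full core: 100 ms out of every 100 ms.
print(cfs_quota_for(1.0))   # 100000
```

On a cgroup v1 host the resulting value would be written into the container's cpu.cfs_quota_us file; memory and block-I/O ceilings follow the same pattern through the memory and blkio controllers.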

Since everyone from developers to infrastructure administrators likes containers – they are modular, unified, and have fewer layers to support – enterprises run the risk of container creation running amok. Regardless of the available infrastructure, there's a risk that – left unchecked – too many containers could reach a breaking point and surpass the infrastructure's capabilities, Espinosa said.

"They can do more with the same specs as a competing traditional [virtual machine] system but that doesn’t mean you can build a skyscraper out of a 10-story building though you can get well beyond 10 stories," he said. "Infrastructure admins have the task of reining in the developers and also balancing and metering available resources for the containers though admins can now stretch those resources much further."

Container resource allocation differs from virtualization allocation because it's much more flexible and easier to maintain within cgroups, said Todorov. Typically, changing a limit simply requires changing a single line or digit in a file, Espinosa added.

Nikolay Todorov

"With hypervisor virtualization, the host node divides the server hardware into smaller slices by emulating virtual hardware. Then each guest node (the virtual machine itself) needs its own operating system (e.g., Linux) to operate with the emulated virtual hardware. Each request sent from the guest node goes through its kernel and emulated hardware and only then to the host kernel and actual bare metal hardware," Todorov said. "This flow requires more time to complete and also more resources on the host node and this is where the biggest advantage of containers comes from: saving a huge percentage of server overhead. All containers on a host node share the same single core (kernel) and OS and each request sent to the hardware is processed as if it comes from the host node kernel. Thanks to the flexibility that cgroups allow, a container can be scaled vertically super easily - which to us as a hosting company is of prime importance."

The comparative designs simplify allocation analysis, said Espinosa. Virtualization environments feature the physical server or cluster with host operating system, hypervisor, and guest OS layer, plus bins and libs for each guest OS which runs the final application layer. By comparison, containers include the physical server, host OS, and the engine for the containers, which share bins and libs and plug in each container, he said.

"You’re cutting down the overhead of having a hypervisor and several guest OSes handle the workload," said Espinosa. "Containers are more modular but also more unified with fewer layers to support."

If an organization over-allocates containers, it risks missing the very benefits it sought from adopting them. At some point, the host node might be unable to process the containers' requests and operations.

"So in an environment of host nodes with containers the administrator must provide a system that guarantees that none of the host nodes is out of resources. Generally, for such a system to be most effective it requires that each host node is connected to a distributed storage," said Todorov, whose company uses container infrastructure provider Kyup, which has a smart system that monitors and collects statistics in real time on each host node's resource allocation. In addition, SiteGround keeps a resource reserve and uses alerts if a host node begins experiencing a resource shortage, he said.
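SiteGround's actual monitoring system is not public; the following is a hypothetical sketch of the reserve-and-alert idea Todorov describes – track each host node's allocations and flag any node whose free capacity drops below a configured reserve:

```python
# Hypothetical sketch: alert when a host node's free RAM falls below a reserve.

def nodes_needing_alert(nodes, reserve_gb):
    """Return names of nodes whose free memory is below the reserve.

    `nodes` maps node name -> (total_gb, allocated_gb).
    """
    short = []
    for name, (total_gb, allocated_gb) in nodes.items():
        if total_gb - allocated_gb < reserve_gb:
            short.append(name)
    return short

fleet = {
    "node-a": (128, 120),  # only 8 GB left free
    "node-b": (128, 64),   # 64 GB left free
}
print(nodes_needing_alert(fleet, reserve_gb=16))  # ['node-a']
```

A production system would pull these numbers from live host metrics rather than a static map, but the reserve check itself is this simple comparison.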

Levi DeHaan

Automated tools are key, DeHaan agreed in an interview.

"You can put too many containers on a machine. But there's a scheduler – the one I prefer is [Apache] Mesos – and it knows all the machines that are available and it assigns specific machines. If a container is on a machine with 8 gigs of RAM and it's only using 4, it will assign it to one with 4, keeping the one with 8 open for a larger job. It will do that for all machines so you can do more jobs than you could before," he said. "Mesos doesn't run just with Docker containers. It can do lots of deployment technologies."
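Mesos's real two-level scheduler is far more involved, but the placement idea DeHaan describes can be illustrated with a toy best-fit sketch: assign each job to the machine with the least free RAM that still fits it, keeping larger machines open for larger jobs.

```python
# Toy best-fit placement: put each job on the tightest machine that fits,
# so bigger machines stay free for bigger jobs (the behavior DeHaan describes).

def place_jobs(machines, jobs):
    """machines: name -> free RAM in GB; jobs: list of (job_name, ram_needed).

    Returns job_name -> machine, decrementing free RAM as jobs are placed.
    """
    placement = {}
    for job, need in jobs:
        candidates = [(free, name) for name, free in machines.items() if free >= need]
        if not candidates:
            continue  # nothing fits; the job stays pending
        free, name = min(candidates)  # best fit: least leftover capacity
        placement[job] = name
        machines[name] -= need
    return placement

machines = {"m8": 8, "m4": 4}
print(place_jobs(machines, [("small", 4), ("big", 8)]))
# the 4 GB job lands on the 4 GB machine, leaving the 8 GB machine for the big job
```

Best-fit is only one heuristic; real schedulers also weigh CPU, I/O, data locality, and failure domains, but the bin-packing core is the same.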

 

About the author: Alison Diana

Managing editor of Enterprise Technology. I've been covering tech and business for many years, for publications such as InformationWeek, Baseline Magazine, and Florida Today. A native Brit and longtime Yankees fan, I live with my husband, daughter, and two cats on the Space Coast in Florida.
