Advanced Computing in the Age of AI | Thursday, June 8, 2023

SUSE Cranks Cloud Controller Up to 2.0 With ‘Grizzly’ Update 

SUSE Cloud, the variant of the OpenStack cloud controller put together by commercial Linux distributor SUSE Linux, is now riding the "Grizzly" release of OpenStack. And that means that SUSE Linux is bracing for customers to move from tire kicking and proofs of concept to real deployments.

The OpenStack community has a six-month development release cadence, with major releases coming in April and October. The Grizzly iteration came out in April, and is arguably the first version of the cloud controller that has enough features to be considered production ready inside of enterprises. April was a long time ago in the computer business, and with the "Havana" release due on October 17, you might expect SUSE Linux to just wait a few weeks and base SUSE Cloud 2.0 on that code. But that is not how enterprise software works. After each OpenStack release comes out, companies that roll their own versions of it – Red Hat, Piston Cloud, Rackspace Hosting, and IBM, among others – need time to test the code and integrate it into their support processes. So there is a pretty big time lag between raw OpenStack code and these distros – about six months, and sometimes more if an OpenStack release is particularly feature heavy.

SUSE Linux put out its first release of OpenStack in August 2012, based on the "Essex" release of OpenStack that came out the prior April. Essex was missing many of the virtual networking features, diverse hypervisor support, and block storage that customers are looking for in the software that is orchestrating the workloads on their virtualized servers and thus turning them into clouds.

Doug Jarvis, program manager for SUSE Cloud, tells EnterpriseTech that the proofs of concept on SUSE Cloud 1.0 were mostly limited to automating application test and development environments, which is exactly where server virtualization on x86 platforms got its toehold a decade ago when VMware started peddling hypervisors on servers. But now, says Jarvis, with SUSE Cloud 2.0 out, customers are looking to build private clouds to host applications.

"Most customers either already have or are planning to have multiple hypervisors, and they are either optimizing for performance or for software licenses," says Jarvis. For a lot of mission-critical applications that have already been deployed on VMware's ESXi hypervisor, there is no desire on the part of corporations to change their virtualization layer. But for other less-important workloads, they are looking at using KVM, Xen, or Hyper-V hypervisors (from Red Hat, Citrix Systems, and Microsoft, respectively) to lower the cost of virtualizing those workloads. (As we report elsewhere in EnterpriseTech, those vSphere licenses are anything but cheap.) And vCloud Director, VMware's analog to OpenStack, is also a bit pricey even if VMware does have a compelling and very complete story to tell when it comes to virtualizing compute, networking, and storage.

The Xen and KVM hypervisors supported by SUSE Cloud are embedded in SUSE Linux Enterprise Server 11 SP3, which underpins all of the nodes in the chameleon-colored OpenStack cloud. SUSE Linux and Microsoft have done a lot of integration work to make Hyper-V play nice with OpenStack. Jarvis says that this is the first OpenStack release that has full support for Hyper-V. VMware has similarly done work to allow ESXi to be babysat by OpenStack. Specifically, says Jarvis, the Nova compute controller at the heart of OpenStack can talk to the vCenter console that manages ESXi and give it orders. However, this ESXi support is only in tech preview in SUSE Cloud 2.0, so it is not quite done yet as far as SUSE Linux is concerned.

Support for the "Neutron" OpenStack Networking and "Cinder" OpenStack Block Storage components is a key part of the new release. The former provides a means for OpenStack to reach down into the virtual switches added to hypervisors and change their settings as virtual machines flit around a cluster of servers using live migration. Basically, the software creates virtual ports linked to virtual network interface cards running on virtual servers, with plug-ins for the popular virtual switches from Cisco Systems, VMware, and others. The Cinder software is a block storage management layer for OpenStack that interfaces with disk arrays from EMC, NetApp, and others. Block storage is needed for databases and other applications and is distinct from the object storage used for storing zillions of files out there on the public clouds.

SUSE Linux has also put the "Heat" clone of Amazon Web Services' CloudFormation service aggregation templating system into SUSE Cloud 2.0. The idea is that by mimicking the way AWS aggregates services and their settings and controls them through CloudFormation templates, it will be easier to migrate composite applications using multiple services from AWS to OpenStack-based clouds. (We suspect such migration will be a lot harder than it sounds, and presumably it can cut both ways, helping move workloads from OpenStack to AWS.)
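To make the templating idea concrete, here is a hypothetical sketch of what such a template looks like in the CloudFormation-compatible format that Heat accepts. Every name, image ID, and parameter in it is invented for illustration and is not taken from SUSE's or Amazon's documentation:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Hypothetical single-server stack deployed through Heat",
  "Parameters": {
    "InstanceType": {
      "Type": "String",
      "Default": "m1.small"
    }
  },
  "Resources": {
    "AppServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "sles-11-sp3-image",
        "InstanceType": { "Ref": "InstanceType" }
      }
    }
  }
}
```

The point of the format is that the same template describing a bundle of services and their settings could, in principle, be fed to either CloudFormation on AWS or Heat on an OpenStack cloud.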

The "Ceilometer" capacity metering system is also in SUSE Cloud 2.0. This metering software is necessary for all OpenStack components so the resources used by multiple tenants on a cloud can be billed for the capacity they use.

Both Heat and Ceilometer are in tech preview in SUSE Cloud 2.0, however, so again, Jarvis warns these are not ready for primetime.

Having integrated OpenStack with its SUSE Linux Enterprise Server support systems, SUSE Linux is keen on making some dough off support contracts for its rendition of OpenStack. There are three components to its OpenStack setup. An administration server, which includes the Chef configuration management system and the Crowbar tool based on it that was created by Dell to automate cloud deployments, is used to deploy compute and storage nodes in an OpenStack cloud. This administration server costs $10,000 per socket and includes one control node for the various OpenStack controllers (Nova for compute, Swift for object storage, Cinder for block storage, Neutron for networking, and so forth). You can split these elements up and run them on separate physical servers to scale up the cloud. Each additional control node costs $2,500 for the SUSE Cloud license. Both the administration and control node licenses have a SLES 11 SP3 license bundled in. That leaves the compute and storage nodes in the cloud, which cost $800 per pair of server sockets, and you have to buy SLES licenses on top of that if you want to use the embedded KVM or Xen hypervisors to virtualize them, or buy licenses for ESXi or Hyper-V if you want to go that route.
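To see how those list prices stack up, here is a back-of-the-envelope calculator. The example deployment is hypothetical, and the total deliberately excludes the SLES, ESXi, or Hyper-V licenses needed on the compute nodes:

```python
# List prices from SUSE Cloud 2.0, per the article.
ADMIN_PER_SOCKET = 10_000        # admin server, per socket (includes one control node)
EXTRA_CONTROL_NODE = 2_500       # each additional control node
COMPUTE_PER_SOCKET_PAIR = 800    # compute/storage nodes, per pair of sockets


def suse_cloud_list_price(admin_sockets, extra_control_nodes, compute_socket_pairs):
    """Rough SUSE Cloud 2.0 license total, excluding hypervisor/SLES
    licenses for the compute and storage nodes."""
    return (admin_sockets * ADMIN_PER_SOCKET
            + extra_control_nodes * EXTRA_CONTROL_NODE
            + compute_socket_pairs * COMPUTE_PER_SOCKET_PAIR)


# Hypothetical cloud: a two-socket admin server, two extra control
# nodes, and twenty two-socket compute nodes (twenty socket pairs).
print(suse_cloud_list_price(2, 2, 20))  # 20000 + 5000 + 16000 = 41000
```

The arithmetic makes the pricing shape clear: the admin server dominates small clouds, while the $800-per-socket-pair charge is what scales with the size of the compute farm.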

SUSE Cloud 2.0 is available now.