Factors to Consider When Moving to a Hyper-Converged Enterprise Infrastructure
Chances are, if you’re reading this article, you’re one of the many IT leaders looking at how hyper-convergence can help you better manage your enterprise data center infrastructure. Hyper-convergence is catching on in organizations of all sizes and across industries because it allows IT teams to simplify the deployment and management of virtualized resources.
It used to take weeks or even months to deploy advanced-scale compute, storage, and networking resources. Each element had its own management plane and required specific (and expensive) expertise to deploy and manage. Convergence brought disparate platforms closer together, providing compatibility and shared control through common software elements that reduced complexity and cost. Deployment times shrank from weeks to days or even hours.
Hyper-convergence takes this to the next logical level, bringing compute and storage resources together in a single, pre-configured appliance with one set of management controls. Think of it as the data center equivalent of plug-and-play, with all the complexity of servers, storage, and networking hidden behind simple, software-based controls. That might be an oversimplification, but you get the picture. Complexity is reduced, as are the operating costs associated with managing that complexity.
Hyper-converged appliances deliver faster deployment, simpler operation, and lower operating costs. What’s not to like, right? But before you jump on the bandwagon, understand that hyper-convergence is not for everybody. It’s a great solution for some use cases, and there are probably better alternatives for other circumstances.
The point of hyper-convergence is to remove complexity and cost from your data center, so any solution that doesn’t do that isn’t worth your time. As you evaluate hyper-converged options from different vendors, here are six factors to consider:
Scalability and Agility
A hyper-converged solution should give you the flexibility to handle growth and the agility to deploy new applications or services quickly. In traditional IT, scaling can be a major event; in a hyper-converged environment, scaling should be just another part of day-to-day operations. Scaling isn’t only about data or user growth; it’s about growing the business as well. To keep up with competitors and customer demands, deploying new services must be quick and painless, or the business will suffer.
Operational Simplicity
Operational simplicity is the whole point of convergence. It’s becoming more difficult and more costly to retain IT staff with specialized knowledge of specific systems (think server team, storage team, etc.). Hyper-converged appliances allow you to take more of a generalist approach, and the best solutions are simple enough that you don’t need specialized knowledge to run them.
One way to gauge operational simplicity is through setup. Setup should be simple, whether you’re standing up a new cluster or adding a node to an existing one, and should look something like this: unbox the appliance, mount it in a rack, plug it in, power it on, run a deployment wizard, and start provisioning VMs. You should be able to get from power-on to provisioning in minutes. As more boxes are added, they simply expand the existing resource pools.
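The idea that new nodes "simply expand existing resource pools" can be sketched in a few lines. This is a toy model, not any vendor's API; the class names and node capacities are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One hyper-converged appliance node (capacities are made up)."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float

class Cluster:
    """Toy cluster model: adding a node expands the shared pools."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node):
        # No per-node management plane to configure; the node just
        # joins the cluster and contributes its capacity.
        self.nodes.append(node)

    @property
    def pool(self):
        """Aggregate capacity visible to the admin as one pool."""
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = Cluster()
cluster.add_node(Node(cpu_cores=32, ram_gb=512, storage_tb=20.0))
cluster.add_node(Node(cpu_cores=32, ram_gb=512, storage_tb=20.0))
print(cluster.pool)  # pooled capacity grows with each node added
```

The point of the sketch is the asymmetry with traditional IT: growth is a single append operation, not a project.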
Data Fluidity
To scale with ease, data needs to be fluid. A hyper-converged solution should therefore make it easy to place data in different storage tiers (flash, disk, etc.) to meet SLA requirements. Data also needs to be able to move to new systems to handle common events like system failures and the adoption of new technology.
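SLA-driven tier placement is, at its core, a constraint-plus-cost decision. A minimal sketch, assuming hypothetical tier names and made-up IOPS and cost figures:

```python
def place_workload(required_iops: int, tiers: list) -> dict:
    """Pick the cheapest tier that still meets the workload's IOPS SLA.
    Tier figures below are illustrative, not vendor specifications."""
    eligible = [t for t in tiers if t["max_iops"] >= required_iops]
    if not eligible:
        raise ValueError("no tier can satisfy the SLA")
    return min(eligible, key=lambda t: t["cost_per_gb"])

tiers = [
    {"name": "nvme-flash", "max_iops": 500_000, "cost_per_gb": 0.25},
    {"name": "sata-ssd",   "max_iops": 80_000,  "cost_per_gb": 0.10},
    {"name": "hdd",        "max_iops": 2_000,   "cost_per_gb": 0.02},
]

print(place_workload(50_000, tiers)["name"])   # sata-ssd
print(place_workload(300_000, tiers)["name"])  # nvme-flash
```

Real platforms make this decision continuously and per block rather than per workload, but the trade-off being automated is the same.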
Hardware and Hypervisor Independence
To allow data to move freely, an infrastructure should be built with agnostic components that integrate different hardware form factors, media, hypervisors, and open-source technologies. This gives you the flexibility to change your mind, change your business, and change your infrastructure resourcing.
High Availability
If your infrastructure isn’t running, your business isn’t running. In today’s “always on” world, you simply can’t afford downtime or outages. This requires a look under the hood at how the solution handles failure. Is data striped across disks and systems? What reliability level does the vendor quote? Is there component redundancy? It’s the little things that can sometimes cause the biggest problems.
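The value of component redundancy can be quantified with basic probability. A rough sketch, assuming component failures are independent (real-world failures are often correlated, so treat the result as an optimistic bound):

```python
def availability_parallel(component_availability: float, copies: int) -> float:
    """Availability of N redundant copies: the service is down only
    when every copy is down at the same time (independence assumed)."""
    return 1 - (1 - component_availability) ** copies

# A component that is up 99% of the time, mirrored across two nodes,
# yields roughly "four nines" for the pair:
print(round(availability_parallel(0.99, 2), 4))  # 0.9999
```

This is why mirroring and striping questions matter when you look under the hood: each additional independent copy multiplies down the probability of total loss.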
Data Protection
Even an infrastructure designed for business continuity doesn’t guard against human error, surprise audits, natural disasters, and ever-changing policies. Look for features like RAID or mirroring, replication within and between sites, and automated workflows to support disaster recovery. These features are the bare minimum for short-term and long-term data protection, giving you the ability to retrieve a lost file, replace a corrupt database, maintain continuity through a device failure, or spin up a new site after a disaster.
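One simple way to evaluate the replication features above is against a recovery point objective (RPO). A simplified check, assuming worst-case data loss equals the replication interval (asynchronous replication; synchronous replication would drive this toward zero):

```python
def meets_rpo(replication_interval_min: int, rpo_min: int) -> bool:
    """Worst-case data loss for async replication is the time since the
    last replicated copy, i.e. the replication interval. That must not
    exceed the recovery point objective."""
    return replication_interval_min <= rpo_min

print(meets_rpo(15, 60))   # 15-minute replication meets a 1-hour RPO
print(meets_rpo(240, 60))  # 4-hourly snapshots do not
```

The same framing applies per protection layer: local RAID covers device failure, site-level replication covers system failure, and inter-site replication plus automated DR workflows cover site loss.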
The Way Forward
Hyper-converged appliances give you the convenience of the entire infrastructure stack (compute, storage, hypervisor, and management) in a single, fully integrated system.
This can remove resource silos and complexity while cutting operating costs from your data center and remote-site operations. With the right solution, hyper-converged appliances can peacefully coexist with your existing environment, allowing you to phase in the benefits of a hyper-converged approach.
Rob Strechay is director of product management for storage at Hewlett Packard Enterprise.