Blue Cross Blue Shield Streamlines Networking, Virtualization
You want your healthcare insurance provider to run a lean and mean IT shop, and Blue Cross Blue Shield of Alabama is always looking at new technologies to make its operations more efficient. The latest moves by the healthcare company are adopting new networking gear for Hewlett-Packard's BladeSystem blade servers and shifting from VMware's ESXi server virtualization hypervisor to Microsoft's Hyper-V alternative.
The Blue Cross Blue Shield association provides healthcare coverage to over 100 million people in the United States through 37 different organizations that administer its services, generally at the state level. Each BCBS associate operates independently of the others, but they get common services – for which they collectively pay hundreds of millions of dollars – from the association, such as governance and cross-state network coverage. The BCBS associates tend to have backend systems running on IBM mainframes, but beyond that, they pick and choose their own platforms.
Blue Cross Blue Shield of Alabama was an early and enthusiastic adopter of blade servers and their integrated virtual networking, and it still keeps on the leading edge of technologies developed by Hewlett-Packard. The organization received serial numbers 1 and 2 of the c-Class blade enclosures from HP, Russ Stringer, server engineer and virtual architect at BCBS of Alabama, tells EnterpriseTech. These days the organization has 24 blade enclosures packed full of blades running the various software that wraps around the mainframe systems that do claims processing.
Like other BCBS associates, the one in Alabama is chartered by the state to provide healthcare services to 2.1 million residents as well as another 900,000 people who live outside of the state. The organization is not designed to make a profit, but rather to use as much of the funds it gets from premiums as possible to provide healthcare services. Stringer says that IT is a big part of lowering healthcare costs in Alabama, and he is proud of the fact that the automation the organization has created allows more than 90 percent of claims to be processed accurately and reliably without any human intervention. The organization, which has in excess of $4 billion in revenues, employs about 4,000 people, and roughly 400 of them are developers who maintain the homegrown applications that make this possible. The vast majority of those applications are coded in Java, as is the case in many large enterprises.
Like many mainframe shops, BCBS of Alabama long ago opted for IBM's WebSphere Application Server as its Java middleware platform on the mainframe, but over the years it has shifted the vast majority of its WebSphere instances to a much larger complex of outboard X86 servers. This is one way to lower costs. So is having a very low turnover rate in the IT department, says Stringer, who has been there since 2003 and still counts himself one of the newbies.
Another way to cut costs is to move to converged infrastructure and to take a "virtual first" attitude to middleware and applications, strategies which BCBS of Alabama implemented in 2003. When Stringer joined the organization that year, "there was zero U of space in the datacenter and I could not draw one more watt of power out of it or put one more BTU of heat into it." Outside of the IBM mainframes, BCBS of Alabama was a Compaq server shop before HP acquired Compaq in 2002, and because of its power, cooling, and space constraints, it jumped to the front of the line with the BladeSystem c-Class. Those first two blade enclosures, by the way, are still used in application testing even though they are eight years old.
Today, BCBS of Alabama has a total of 384 blade servers running in its 24 BladeSystem enclosures, all of them running Windows Server. One third of the nodes in these enclosures get upgraded each year. Three years ago, when the organization built a new datacenter in Birmingham (shown in the opening image at the top of this article), about 65 percent of the nodes were equipped with VMware's ESXi hypervisor and its vSphere management tools and the remaining ones were configured with Microsoft's Hyper-V and System Center analogs.
The reason there was any Hyper-V at all in the stack was that WebSphere didn't like ESXi. "Any time we tried to VMotion live migrate it, the WebSphere that we were running would just throw up and everything would die," explains Stringer. "We learned a hard lesson and we decided to put WebSphere on Hyper-V and keep everything else on ESXi. But we were also, in 2012, looking at using VMware's Site Recovery Manager, and for us, the licensing costs were going to be too expensive. We had to buy the licensing for Windows Server 2012 Datacenter Edition anyway, so we did some testing, and we told VMware you're a great partner but we found somebody new."
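The split described above – WebSphere pinned to Hyper-V, everything else on ESXi – amounts to a simple placement policy. A minimal illustrative sketch, not BCBS of Alabama's actual tooling, with workload labels of our own invention:

```python
def pick_hypervisor(workload: str) -> str:
    """Route a workload to a hypervisor, per the policy described above.

    WebSphere instances died during ESXi live migration (VMotion), so
    they are pinned to Hyper-V; everything else stays on ESXi.
    """
    if "websphere" in workload.lower():
        return "Hyper-V"
    return "ESXi"
```

In practice such a rule would live in provisioning automation rather than application code, but it captures the decision the team made in 2012.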
Instead of paying for ESXi, vSphere, and Site Recovery Manager, BCBS of Alabama is paying for Datacenter Server and System Center, which it was going to buy anyway. It is yet another demonstration of the power, both technical and financial, of software bundling. "I can buy a lot of memory with that money," says Stringer, referring to the money the organization saved.
Stringer says that Hyper-V and ESXi deliver about the same number of virtual machines per physical server, so that was not a reason to move. With the ProLiant Gen7 blade servers, BCBS of Alabama had nodes with two six-core Xeon E5-2650 v1 processors and 256 GB of main memory, and these nodes supported somewhere between 10 and 15 virtual machines. With the ProLiant Gen8 machines, the organization shifted up to eight-core Xeon E5-2670 v2 chips and put 384 GB of memory on the nodes, yielding somewhere between 30 and 50 VMs per node. In the fourth quarter of this year, when Intel is expected to get a "Haswell" Xeon E5 v3 into the field and HP is expected to get its ProLiant Gen9 nodes out, Stringer says he will do a refresh on a third of the nodes and probably put 512 GB of memory on each one. That will let him push the VM count up even higher in the same physical footprint and, perhaps, even buy fewer servers if the workloads do not demand them.
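Those consolidation ratios are easy to sanity-check with memory-bound back-of-the-envelope math. A hedged sketch, in which the per-VM memory sizes and the hypervisor reservation are assumed figures, not numbers from BCBS of Alabama:

```python
def vms_per_node(node_mem_gb: int, mem_per_vm_gb: int, reserved_gb: int = 16) -> int:
    """Rough memory-bound VM count for one node.

    reserved_gb approximates hypervisor and overhead memory (an assumption).
    """
    return (node_mem_gb - reserved_gb) // mem_per_vm_gb

# Gen8 node with 384 GB: if VMs average 8 to 12 GB apiece, the count
# lands right in the 30-to-50-VM range the article cites.
gen8_low = vms_per_node(384, 12)   # 30 VMs
gen8_high = vms_per_node(384, 8)   # 46 VMs

# A Gen9 refresh at 512 GB with the same VM sizes pushes the count higher
# in the same physical footprint.
gen9_high = vms_per_node(512, 8)   # 62 VMs
```

The arithmetic ignores CPU oversubscription, which also matters, but memory is typically the binding constraint at these densities.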
Don't think for a minute that BCBS of Alabama doesn't look at every component in its datacenter this way. It does, and it has some Cisco Unified Computing System blades in the datacenter, running its call center software, just to keep HP on its toes. It also uses Cisco's Nexus switches at the top of its racks, linking the BladeSystem enclosures to the mainframes and to each other. Every year, Stringer takes a look at AMD alternatives to Intel processors as well.
Like many enterprises, BCBS of Alabama has been a Cisco networking shop for a very long time. The first two BladeSystem enclosures had Cisco MDS storage area network switches, and the next three had Cisco switches as well. Seven of the enclosures are the new "platinum" variants, which have enough internal networking to drive 40 Gb/sec links to the nodes but probably cannot, in Stringer's estimation, drive 100 Gb/sec links.
Since implementing the ProLiant Gen7 blades, the organization has put Virtual Connect virtual switching into the blade enclosures. Specifically, the top-of-rack Nexus switches reach down into the enclosures using Fibre Channel over Ethernet (FCoE) to hook to external storage arrays. This means that BCBS of Alabama will be able to get rid of the MDS switches that are currently used to link out to storage.
"We are trying to simplify and get everything as clean as possible," says Stringer. "I want as few different wires as possible." One of the key tools is the Virtual Connect Enterprise Manager, which is used to set up the networking for both physical nodes and virtual machines across the multiple blade enclosures, all from the Holy Grail of a single pane of glass.
At the moment, BCBS of Alabama is beta testing the new FlexFabric 20/40 F8 module, a networking device that plugs into the Virtual Connect hardware and that HP just announced in a blog post this week. The FlexFabric 20/40 F8 modules are installed in redundant pairs in the BladeSystem enclosures and provide an adjustable mix of downlinks to the server nodes. The ports can be set up as eight Ethernet ports, six Ethernet and two Fibre Channel ports, or six Ethernet and two iSCSI ports. The module has twelve uplinks – eight Flexports and four QSFP+ ports – and with splitter cables you can double up the port count. The module has 1.2 Tb/sec of bridging fabric capacity and allows up to 255 virtual machines on the same physical node to access different storage arrays over the Ethernet fabric. The FlexFabric modules can be stacked and run as a single virtual switch across up to four BladeSystem enclosures, allowing any server in those enclosures to access any uplink in the FlexFabric stack.
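The module's downlink personalities and uplink counts described above can be captured in a small model. This is a hedged sketch of what the article states, with dictionary keys and function names of our own invention, not HP configuration syntax:

```python
# Illustrative model of the FlexFabric 20/40 F8 downlink options: every
# personality exposes eight downlink ports in some mix of protocols.
DOWNLINK_PERSONALITIES = {
    "all-ethernet":   {"ethernet": 8, "fibre_channel": 0, "iscsi": 0},
    "ethernet-fc":    {"ethernet": 6, "fibre_channel": 2, "iscsi": 0},
    "ethernet-iscsi": {"ethernet": 6, "fibre_channel": 0, "iscsi": 2},
}

FLEXPORTS = 8   # fixed uplink ports per module
QSFP_PLUS = 4   # QSFP+ uplink ports per module

def uplink_count(use_splitters: bool = False) -> int:
    """Twelve uplinks as shipped; splitter cables double the port count."""
    base = FLEXPORTS + QSFP_PLUS
    return base * 2 if use_splitters else base

# Sanity check: each personality totals eight downlinks.
assert all(sum(p.values()) == 8 for p in DOWNLINK_PERSONALITIES.values())
```

Modeling the options this way makes the trade-off explicit: converging Fibre Channel or iSCSI onto the module costs two of the eight Ethernet downlinks.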
In conjunction with the new FlexFabric module, HP has launched two new adapters for the server nodes. These include the FlexFabric 630FLB, which has two ports running at 20 Gb/sec and which can be subdivided into four 10 Gb/sec ports on the node. At the moment, it is only available for the ProLiant BL460c, BL465c, and BL660c blades in the Gen8 family. The FlexFabric 630M is a mezzanine adapter that also provides two ports running at 20 Gb/sec that can be subdivided into four ports. There is enough bandwidth to stream 10 Gb/sec Ethernet and 8 Gb/sec Fibre Channel over a single port, and HP says the new FlexFabric devices have 73 percent lower latency than prior Virtual Connect devices, at around 1 microsecond for an Ethernet port and 1.8 microseconds for a combined Ethernet/Fibre Channel port across the FlexFabric 20/40 F8 module.
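The converged-port and latency claims above reduce to simple arithmetic. A quick sketch to make them concrete; the 73 percent reduction and 1 microsecond figure come from HP via the article, while the prior-device latency is our own inference from those numbers:

```python
def fits_on_port(streams_gbps, port_gbps=20.0):
    """True if the combined traffic streams fit in one 20 Gb/sec port."""
    return sum(streams_gbps) <= port_gbps

# 10 Gb/sec Ethernet plus 8 Gb/sec Fibre Channel uses 18 of the 20 Gb/sec,
# which is why a single converged port can carry both.
converged_ok = fits_on_port([10.0, 8.0])

def implied_prior_latency_us(new_us=1.0, reduction=0.73):
    """If the new latency is 73 percent lower than the old, the prior
    device was roughly new / (1 - 0.73), about 3.7 microseconds.
    This back-calculation is inferred, not stated in the article."""
    return new_us / (1.0 - reduction)
```

The headroom calculation also shows why 40 Gb/sec downlinks would comfortably carry the same converged mix twice over.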