
VCE Forges First All-Flash Vblock Stack 

It is the fall, we are in the wake of the latest Xeon processor announcements from Intel, and that means it is time for system makers to hit the refresh button. The VCE partnership between Cisco Systems, EMC, and VMware is rolling out updated Vblock stacks, and notably has created the first Vblock system that incorporates all-flash storage arrays to significantly juice performance.

Starting at the top of the lineup, the Vblock System 740 is an update of the existing 720 setup, although this time it has denser storage and significantly greater storage performance, ranging from 1 million to 6 million I/O operations per second (IOPS). This is accomplished by moving to the VMAX3 storage area network from EMC as the back-end storage for the cloudy infrastructure. Trey Layton, chief technology officer at the VCE partnership, tells EnterpriseTech that the 740 stack has about three times the storage performance and twice the SAN bandwidth of the pairing of the 720 stack with the VMAX2 arrays.

The Vblock System 740 uses Cisco's Unified Computing System blade servers as its compute engines, and customers can choose between the shiny new B-Series M4 nodes, which use Intel's "Haswell" Xeon E5-2600 v3 processors, and the prior generation B-Series M3 nodes, which are based on the earlier "Ivy Bridge" Xeon E5 v2 chips. Customers can choose from VMAX3 models 100K, 200K, and 400K as their needs dictate, and the machines have progressively more powerful controllers, and more of them, as they scale up. The VMAX3 100K has two to four controllers, 2 TB of cache, and supports up to 1,400 drives for a total of 494 TB of capacity, while the top-end VMAX3 400K has up to 16 controllers, 16 TB of cache, and supports up to 5,760 drives. The maximum addressable raw capacity of the VMAX3 is 4 PB.

The Cisco UCS chassis supports up to eight half-width blades, up to sixteen of these enclosures can be linked into a single domain, and up to four domains can be lashed together to create one compute pod that scales up to 512 compute nodes. This is a large cluster by any measure in the enterprise. The nodes run VMware's ESXi hypervisor and its vSphere Enterprise Plus features (which unlock VMotion virtual machine live migration and a bunch of other goodies) as well as its vCenter management console. The integrated Nexus 5500 series switches in the UCS chassis link the nodes to aggregation switches, which can be Nexus 9000s or MDS 9148S or 9706 switches from Cisco.
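For those keeping score at home, the 512-node figure falls straight out of the chassis math, as this minimal Python sketch shows (the constants come from the specs above; the helper name is our own, purely for illustration):

```python
# Back-of-the-envelope check of the Vblock System 740 compute scaling
# described above. The constants come straight from the article; the
# helper name is our own, purely for illustration.

BLADES_PER_CHASSIS = 8    # half-width B-Series blades per UCS enclosure
CHASSIS_PER_DOMAIN = 16   # UCS enclosures linked into a single domain
DOMAINS_PER_POD = 4       # domains lashed together into one compute pod

def max_compute_nodes() -> int:
    """Maximum blade count for a fully built-out compute pod."""
    return BLADES_PER_CHASSIS * CHASSIS_PER_DOMAIN * DOMAINS_PER_POD

print(max_compute_nodes())  # 8 x 16 x 4 = 512 nodes, as VCE claims
```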

The most interesting new machine in the lineup is the Vblock System 540, which adds the XtremIO all-flash arrays from EMC to the UCS blade servers. Layton says that VCE has "seen a significant uptick in customers looking for all-flash arrays for multiple workloads." There was talk a year ago, after Cisco acquired WhipTail for its Invicta all-flash arrays, that these would eventually make their way into Vblock configurations. But for one reason or another, this has not happened, even though Cisco has integrated the Invicta arrays with the UCS blades. There is always speculation about crankiness in the VCE partnership, with Cisco getting into storage, EMC moving into servers, and VMware moving into networking, but thus far the three vendors deserve credit for working it out, leaving VCE with one of the fastest growing server businesses in the world, and indeed one of the few growing at many multiples of the server market at large.

The Vblock System 540 comprises up to 24 UCS enclosures, for a total of 192 B-Series M4 blade servers offering about 1.5X the memory of the prior M3 nodes. (There was no prior Vblock System 500 series to compare this stack to, so VCE is just comparing the memory capacity with the prior generation of blades. As with the 740 above, you can use the older M3 blades in this system.)

As for the XtremIO configuration, VCE says that the setup used in the Vblock System 540 can deliver more than 1 million IOPS of storage performance with sub-millisecond response time on the data flowing back and forth between the servers and the arrays. (Our assumption is that this is 100 percent random reads with 4 KB block sizes, as is standard in most storage array benchmarks. And a reminder that you cannot get full IOPS and the lowest latency at the same time – no storage array can do that.) Based on the data provided, it looks like the Vblock System 540 cited above will have a fully loaded XtremIO setup (PDF) with four dual-controller X-Brick enclosures. The specs say customers can choose one, two, or four X-Brick enclosures with 10 TB capacity each or one, two, four, or six X-Bricks with 20 TB capacity each. That yields a top-end configuration of 32.8 TB of usable capacity with the skinny bricks and 98.4 TB with the fat bricks. As far as we know, this is the first time EMC has talked about a six-brick configuration for the XtremIO arrays. Cisco is pitching the ACI-enabled Nexus 9396PX switches to aggregate the compute as well as its MDS line of switches, which support both Ethernet and Fibre Channel traffic.
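Working backward from those top-end numbers, the usable capacity per brick comes out to about 82 percent of raw, which the quick Python sketch below tabulates for each allowed configuration. The 82 percent ratio is our inference from VCE's figures, not an official EMC spec:

```python
# Usable capacity math for the XtremIO options in the Vblock 540,
# derived from the top-end figures VCE cites (32.8 TB across four
# 10 TB bricks, 98.4 TB across six 20 TB bricks). The roughly 82
# percent usable ratio is our inference, not an official EMC spec.

USABLE_PER_BRICK_TB = {10: 32.8 / 4, 20: 98.4 / 6}   # 8.2 TB and 16.4 TB
BRICK_COUNTS = {10: (1, 2, 4), 20: (1, 2, 4, 6)}     # allowed configurations

for raw_tb, counts in BRICK_COUNTS.items():
    for n in counts:
        usable = n * USABLE_PER_BRICK_TB[raw_tb]
        ratio = USABLE_PER_BRICK_TB[raw_tb] / raw_tb
        print(f"{n} x {raw_tb} TB X-Bricks: {usable:.1f} TB usable "
              f"({ratio:.0%} of raw)")
```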

VCE is positioning the Vblock System 540 at the key workloads where storage I/O performance is paramount, including relational databases and data warehouses, virtual server environments, and desktop virtualization. Because of the snapshotting capabilities of the XtremIO arrays, which just recently came out of beta testing with a new release of the XIOS storage operating system, VCE is also peddling the Vblock System 540 as a platform for development and testing, where hundreds of full-performance writable instances of a software stack can be deployed instantly for such work. The zippy flash also allows production transaction processing, data extraction, transformation, and loading (ETL), and online analytical processing workloads to be run on the same cluster at real-time speeds.
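To see why such instant, writable clones are cheap in principle, here is a toy Python sketch of the general copy-on-write idea behind snapshotting. It mimics the concept only; the Volume class and its methods are our own invention, not EMC's actual XIOS implementation:

```python
# A toy illustration of the copy-on-write idea that makes writable
# snapshots nearly instant. This is our own sketch of the general
# concept, not EMC's actual XIOS implementation.

class Volume:
    def __init__(self, base=None):
        self._base = base or {}   # shared block map: block number -> data
        self._overrides = {}      # blocks this volume has rewritten

    def write(self, block, data):
        # New writes land only in this volume's override map, so the
        # parent volume and any sibling clones are untouched.
        self._overrides[block] = data

    def read(self, block):
        return self._overrides.get(block, self._base.get(block))

    def snapshot(self):
        # "Instant": only the block map (references) is copied, not the
        # underlying data, so a clone costs metadata, not capacity.
        return Volume(base={**self._base, **self._overrides})

prod = Volume({0: "schema-v1", 1: "customer-data"})
dev = prod.snapshot()                 # instant writable dev/test clone
dev.write(0, "schema-v2-experiment")  # diverges without touching prod
assert prod.read(0) == "schema-v1"
assert dev.read(0) == "schema-v2-experiment"
```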

And finally, with something that VCE is calling Technology Extensions, there is a way to integrate EMC's Isilon NAS filers, which are popular for Hadoop and other analytics tools wrestling with unstructured data, with the Vblock System 540. This extension doesn't make the Isilon arrays part of the Vblock stack proper, but rather an option.

Layton tells EnterpriseTech that Isilon arrays from EMC for Hadoop and Tesla GPU coprocessors from Nvidia for VDI workloads were two popular options that VCE customers were asking for, and up until now these had to be integrated after the fact at the customer site by Cisco or EMC or their partners. Now, with the Technology Extension program, VCE itself can do this integration at its factories, which is less hassle for everyone. It is reasonable to assume that other disk arrays from EMC, and maybe even Cisco's WhipTail Invicta arrays, will be made available through this extension program, and ditto for Intel's Xeon Phi parallel X86 coprocessors.

The third and last new stack from VCE is the Vblock System 240, which is based on UCS C-Series rack servers combined with the VNX 5200 series block storage. Like other 200 series Vblock stacks, this one is aimed at entry-level private clouds and small VDI setups. The precise feeds and speeds of this machine were not divulged at press time.

VCE has not divulged pricing at any point since the partnership was founded in the wake of Cisco's entry into the system market in early 2009.

All of the Vblock stacks come with cloud management tools from Cisco or VMware, with the Cisco option based on its UCS Director and the VMware option based on vRealize, an amalgamation of a dozen tools that VMware bought or built over the past several years.

As VCE exited 2013, it had reached its goal of hitting a $1 billion run rate, and Layton tells EnterpriseTech that growth is still humming along for the Vblocks, with the company now at a $1.8 billion run rate through the third quarter. That growth is in excess of 50 percent year on year for 2014, which is remarkable. VCE has around 700 customers and has shipped north of 2,000 systems to date (that is systems, not nodes), and with the average selling price of a system being $1.2 million, these are not small installations, even though configurations can get fairly small down in the Vblock System 200 and 300 series.
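The arithmetic on those figures checks out, as the short Python sketch below shows. Going from a $1 billion run rate to a $1.8 billion run rate works out to 80 percent growth, comfortably in excess of 50 percent; the implied cumulative revenue is our own estimate, not a number VCE has disclosed:

```python
# Sanity check on the figures Layton cites. The implied cumulative
# revenue is our own arithmetic, not a number VCE has disclosed.

run_rate_exiting_2013 = 1.0e9    # $1 billion annualized run rate
run_rate_through_q3 = 1.8e9      # $1.8 billion run rate through Q3 2014

growth = run_rate_through_q3 / run_rate_exiting_2013 - 1
print(f"Run rate growth: {growth:.0%}")   # 80%

systems_shipped = 2_000          # "north of 2,000 systems"
avg_selling_price = 1.2e6        # $1.2 million per system
implied_revenue = systems_shipped * avg_selling_price
print(f"Implied cumulative system revenue: "
      f"${implied_revenue / 1e9:.1f} billion")  # $2.4 billion
```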
