Advanced Computing in the Age of AI | Thursday, March 28, 2024

Pitting Bare Metal Against Amazon, Rackspace Clouds 

Cloud Spectator, which offers cloud performance monitoring services, has put together some interesting benchmark tests that pit bare metal cloud capacity from Internap against virtual machine slices from Amazon Web Services and Rackspace Hosting.

Cloud computing implies server virtualization, but it does not require it – at least by some definitions. Server virtualization allows companies to share a server and thereby drive up utilization, but there is often a performance penalty associated with it. Hence many cloud providers offer hourly capacity on bare metal machines that can be configured as quickly as virtual machines.

John Valich, chief marketing officer at Cloud Spectator, is the first to admit that making comparisons between any of the clouds is problematic. "The tough thing about Amazon Web Services is that the CPU and RAM allocations are so different from everyone else," Valich tells EnterpriseTech. Still, an imperfect apples-to-applesauce comparison is sometimes the best you can do, and at some point enterprises have to make choices about what to buy, and where.

Internap, which offers both virtualized and bare-metal cloud capacity, is touting a recent performance report from Cloud Spectator that pits its AgileCloud Bare Metal against public cloud slices from AWS EC2 and Rackspace Cloud Servers. As you might expect, the performance of the bare metal was significantly higher than that of the virtual server slices. Some of that gap comes from virtualization overhead, and some of it from the underlying hardware used to run the benchmarks.

It is difficult to separate the two effects from each other, and it is downright impossible if you are not AWS or Rackspace. Only they know for sure the performance overhead from their respective custom Xen and Citrix Systems XenServer hypervisors, which are used to dice and slice their clouds. But EnterpriseTech did a little digging with Cloud Spectator to come up with the underlying processor hardware used to make the comparisons in the report.

The idea was to get three slices of cloudy machinery that were as close as possible in configuration. The tests involved a primary server for running workloads and a secondary server used to test the networking on the clouds. The bare metal server at Internap is a single-socket node with an Intel Xeon E3-1230 processor, configured with 8 GB of main memory, a 120 GB solid state disk, and a 2 TB SATA array. The E3-1230 has four cores running at 3.2 GHz with HyperThreading activated, so it presents two threads per core for a total of eight virtual processors. This machine costs 37 cents per hour to rent.

The AWS slice is an m1.large image, in the Amazon lingo. AWS is pretty vague about what the server iron is, but Cloud Spectator figured out that it was a vintage Xeon E5507 processor. Yes, that is a "Nehalem-EP" generation processor from 2009, and apparently Amazon's cloud is still full of this iron – just like many other data centers out there, by the way. The m1.large image on EC2 has two virtual CPUs (vCPUs) with a total of 4 EC2 Compute Units (ECUs) of performance. An ECU, as the Amazon documentation explains, is roughly equivalent to the processing capacity of a 1 GHz to 1.2 GHz Opteron or Xeon core from around 2007, or a 1.7 GHz Xeon chip from 2006, when EC2 first launched. The Xeon E5507 underneath the EC2 image that Cloud Spectator benchmarked has four cores spinning at 2.26 GHz and no HyperThreading. With two vCPUs, the m1.large image is running on half of one of these processors, or one quarter of the two-socket physical server sitting in Amazon's US East (us-east-1a) availability zone in Virginia. This cloud slice has 8 GB of virtual memory allocated to it as well as 50 GB of Elastic Block Storage.
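Amazon's ECU definition makes a crude back-of-envelope comparison possible. A minimal sketch, using the 1 GHz to 1.2 GHz-per-ECU figure from Amazon's documentation (treating ECUs as simply additive across vCPUs is an assumption, and 2007-era GHz-cores are not directly comparable to newer silicon):

```python
# One ECU ~ a 1.0-1.2 GHz 2007-era Opteron/Xeon core, per the AWS docs.
GHZ_PER_ECU = (1.0, 1.2)

m1_large_ecus = 4  # the m1.large's two vCPUs carry 4 ECUs in total

# Equivalent aggregate clock, in 2007-era GHz-cores (a rough proxy only)
low = m1_large_ecus * GHZ_PER_ECU[0]   # 4.0
high = m1_large_ecus * GHZ_PER_ECU[1]  # 4.8

# The Internap bare metal node: four physical 3.2 GHz cores,
# and those are much newer Sandy Bridge-class cores per clock.
bare_metal_ghz_cores = 4 * 3.2  # 12.8

print(f"m1.large: ~{low}-{high} GHz-cores; bare metal: {bare_metal_ghz_cores} GHz-cores")
```

Even before accounting for the newer microarchitecture, SSDs, or the absence of a hypervisor, the raw clock budget is lopsided.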

Valich admits that Cloud Spectator could have chosen an m1.xlarge instance type, which would have four vCPUs and 8 ECUs of relative performance (which means one whole socket on this box), but that instance has 15 GB of memory, way more than the Internap bare metal and the equivalent Rackspace instance. When Cloud Spectator ran tests on this instance, it did not find performance was appreciably better. And if you want SSD acceleration on EC2 instances, you have to choose the hi1.4xlarge instance type, which has 16 vCPUs, 35 ECUs of oomph, and two 1 TB SSDs. This is a very expensive EC2 image at $3.10 per hour, compared to 24 cents per hour for the m1.large and 48 cents per hour for the m1.xlarge instances. (Those are on-demand, not reserved, prices.) Elastic Block Storage costs 10 cents per GB per month for standard volumes.

For Rackspace, Cloud Spectator fired up an 8 GB instance of Cloud Servers, which in this case had an Opteron 4170 HE processor from Advanced Micro Devices underneath it. This is a six-core chip that runs at 2.1 GHz. AMD does not use HyperThreading, which means four of the six physical cores on that socket are allocated to the virtual machine to get the four vCPUs in the configuration; 50 GB of Cloud Block Storage is allocated to the instance. This instance was running in Rackspace's Dallas/Fort Worth datacenter. The compute instance costs 48 cents per hour, and Cloud Block Storage costs 15 cents per gigabyte per month.
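Putting the prices cited above side by side, a quick sketch of what each tested configuration would cost per month (the 730-hour month and the assumption of running flat out around the clock are illustrative, not part of Cloud Spectator's report):

```python
HOURS_PER_MONTH = 730  # assumed average month

def monthly_cost(hourly_rate, storage_gb=0, storage_rate_per_gb=0.0):
    """Instance-hours plus per-GB block storage charges for one month."""
    return hourly_rate * HOURS_PER_MONTH + storage_gb * storage_rate_per_gb

internap  = monthly_cost(0.37)             # SSD and SATA array in the flat rate
aws       = monthly_cost(0.24, 50, 0.10)   # m1.large + 50 GB of standard EBS
rackspace = monthly_cost(0.48, 50, 0.15)   # 8 GB Cloud Servers + 50 GB of CBS

print(f"Internap bare metal: ${internap:.2f}")   # $270.10
print(f"AWS m1.large:        ${aws:.2f}")        # $180.20
print(f"Rackspace 8 GB:      ${rackspace:.2f}")  # $357.90
```

On this back-of-envelope math, the bare metal node lands between the two virtual slices on price while delivering considerably more hardware.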

All three instances were configured with Canonical's Ubuntu Server 12.04 variant of Linux.

Obviously, the Internap bare metal machine has considerably more raw processing capacity and also has SSDs accelerating its performance. And this shows up in the benchmark test results. Here is how the three different instances stacked up on the UnixBench system-level benchmark:

[Chart: cloud-spectator-unixbench – UnixBench results for the three instances]

The UnixBench test comes out of BYTE magazine's test labs from two decades ago, and it has generic integer and floating point tests as well as tests that stress file transfers, graphics, and operating system function calls. It is not necessarily a great benchmark for gauging the relative performance of a modern workload like a technical or financial simulation, mind you. But obviously, that bare metal Internap server has a significant performance advantage.

Cloud Spectator ran a bunch of old-school file compression, video encoding, and memory benchmark tests on all three instances, and these showed a similar gap between the bare metal instance and the virtual slices on Amazon and Rackspace. The results from the Mongoperf benchmark, a disk I/O performance test for the MongoDB NoSQL data store, show just how divergent performance can be on a cloud slice and, equally importantly, how your mileage can vary over time depending on who else is on the same physical server with you. These are the results for the three slices running disk read operations under Mongoperf:

[Chart: cloud-spectator-mongoperf – Mongoperf disk read results for the three instances]

The performance of the EC2 slice was all over the map, while Rackspace and Internap were more or less steady. And on the write test, the Rackspace slice was all choppy and the Amazon slice did poorly:

[Chart: cloud-spectator-mongoperf-write – Mongoperf disk write results for the three instances]

This benchmark report from Cloud Spectator settles nothing, of course. And the reason EnterpriseTech brings it up at all is not just to show how one company analyzed performance, but to illustrate just how difficult it is to make comparisons across clouds, bare metal or not.

The important thing for enterprises is to benchmark their own applications and to see how well their applications respond to the different instance types available. In many cases, particularly where extreme performance is a necessity, running applications on dedicated iron, instead of the public cloud, could turn out to offer better performance and better economics.
