Red Hat Linux Adds Time Stamp, NVM, and Virtual Hot Plug
Red Hat Enterprise Linux is by far the dominant variant of commercially supported Linux in the enterprise, and based on anecdotal evidence has an even larger share of Linux installations at financial services firms. With Enterprise Linux 6.5, its latest release, Red Hat is adding more features specifically needed by financial services firms and any other organization where milliseconds and the order of transactions count.
The big new feature in RHEL 6.5 is support for the IEEE 1588-2008 standard, with an implementation of the Precision Time Protocol (PTP) in both the Linux kernel and the user space where applications run. This feature was in technology preview with the prior RHEL 6.4 release, which came out in February; you can drill down further into the PTP support in the release notes and in the more detailed technical notes. PTP was traditionally used to synchronize the clocks in industrial automation networks, while the similar Network Time Protocol (NTP) was used to synchronize the clocks on systems. PTP is finer-grained than NTP, accurate down to microseconds instead of a few milliseconds, and has therefore been in demand by high frequency traders and anyone else running applications where the order of transactions matters and has to be sorted out by a central clock stamping those transactions as they move through systems and networks.
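On RHEL, the PTP support is delivered through the linuxptp package, which supplies the ptp4l and phc2sys daemons. A minimal sketch of synchronizing a host to a PTP grandmaster on the network might look like this (the `eth0` interface name is illustrative, and the NIC is assumed to support hardware timestamping):

```shell
# Check whether the NIC exposes hardware timestamping capabilities
# (interface name is illustrative)
ethtool -T eth0

# Run the PTP daemon as a slave on eth0, using hardware timestamps (-H),
# printing status messages to the console (-m)
ptp4l -i eth0 -H -m -s

# Then discipline the system clock from the NIC's PTP hardware clock,
# waiting (-w) for ptp4l to reach a synchronized state first
phc2sys -s eth0 -w -m
```

In practice both daemons would be run as services rather than in the foreground; the console output is useful for watching the clock offset converge.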
The updated RHEL has an interesting new hot plug and unplug feature for virtual processors inside of virtual machines, mimicking real hot plugging support for real processors on physical systems. With this 6.5 release, the KVM hypervisor that Red Hat more or less controls and has designated as its preferred server virtualization tool can now remove virtual CPUs from a guest partition – or add them – without having to stop the running partition. This will allow system administrators to add processing capacity to guest partitions on KVM on the fly as well as return spare vCPUs to the hypervisor for use by other workloads. At the moment, this feature is only available on KVM and only for Linux guests, but Red Hat could offer it on the open source Xen hypervisor, which it still supports and which was its preferred hypervisor before Red Hat took over the KVM project when it bought Qumranet a few years back.
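With libvirt's virsh tool, resizing a running guest's vCPU count is a one-liner. A sketch, assuming a guest named `guest1` (the name is illustrative) that was defined with a maximum vCPU ceiling high enough to grow into; whether shrinking takes effect depends on the guest OS honoring the unplug request:

```shell
# Show the current and maximum vCPU counts for the running guest
virsh vcpucount guest1

# Hot plug up to 4 vCPUs into the live guest
# (cannot exceed the maximum defined in the domain XML)
virsh setvcpus guest1 4 --live

# Shrink back to 2 vCPUs, returning the spare capacity to the hypervisor
virsh setvcpus guest1 2 --live
```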
The other big change with KVM is that a single guest partition can now scale up to 4 TB of virtual memory. That is double the 2 TB top end of the KVM hypervisor embedded in RHEL 6.4. The minimum memory that can be allocated to a guest partition is still 512 MB, as it has been for years across many versions and releases. The KVM hypervisor itself can scale across up to 64 TB of main memory and up to 160 cores (or threads, if the processor has them).
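In libvirt terms, a guest's memory allocation is set in its domain XML. A fragment like this one (the guest name and sizes are illustrative, and non-memory elements are omitted) allocates 1 TB to a single guest, well under the new 4 TB per-guest ceiling:

```xml
<!-- Illustrative libvirt domain fragment: a 1 TB guest,
     far below the new 4 TB per-guest memory ceiling in RHEL 6.5 -->
<domain type='kvm'>
  <name>bigmem-guest</name>
  <memory unit='GiB'>1024</memory>
  <currentMemory unit='GiB'>1024</currentMemory>
  <vcpu>32</vcpu>
</domain>
```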
This is more than enough capacity to handle the forthcoming “Ivy Bridge-EX” Xeon E7 processors, expected from Intel next year. The Xeon E7s are rumored to have as many as fifteen active cores on a die, and if this turns out to be true, then a four-socket box will have 60 cores and 120 threads; the expectation is that this machine will have 6 TB of physical memory. Server makers can build eight-socket machines out of these future Xeon E7s as well, as they could with the prior “Westmere-EX” Xeon E7 processors. If that happens, then Red Hat will have to boost the vCPU scalability of the KVM hypervisor, since such a machine would have 120 cores and possibly 240 threads if Hyper-Threading is turned on. And when Hewlett-Packard delivers its sixteen-socket “Project Odyssey” NUMA machines, it will want a hypervisor that can span 240 cores and 480 threads.
One last tweak to the KVM hypervisor comes with the RHEL 6.5 update. Until now, if you wanted a guest operating system running atop KVM to talk to the GlusterFS parallel file system (which the company commercializes under the name Red Hat Storage Server), it had to go through the FUSE (file system in user space) agent that was required on clients. With RHEL 6.5, guests can talk to GlusterFS directly. Red Hat says in the release notes that this “native approach offers considerable performance improvements,” but it does not say how much.
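QEMU's native GlusterFS driver addresses disk images by URL rather than through a FUSE mount point, so guest I/O goes straight over the libgfapi library to the storage servers. A sketch, assuming a Gluster server named `gluster1` exporting a volume named `vmstore` (both names are illustrative):

```shell
# Create a qcow2 disk image directly on the Gluster volume over libgfapi,
# with no FUSE mount in the data path (server and volume names illustrative)
qemu-img create -f qcow2 gluster://gluster1/vmstore/guest1.qcow2 20G

# Boot a guest from that image using the native gluster block driver
qemu-kvm -m 2048 -drive file=gluster://gluster1/vmstore/guest1.qcow2,if=virtio
```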
RHEL 6.5 includes several hundred new kernel features and bug fixes as well as updated drivers for network adapters, storage adapters, and other peripherals, as is always the case with a new release. The kernel and driver stack have also been tweaked to support ECC memory scrubbing on a future generation of AMD processors as well as up to 1 TB of physical memory on an AMD processor; neither chip was identified by name, presumably to avoid outing future products from the company. RHEL 6.5 also supports a future Atom-based SoC from Intel. The kernel has been updated to support the NVM Express specification, which standardizes the interface between the PCI-Express bus and solid state drives. Intel is very keen on NVM Express because it simplifies the SSD driver stack, cutting out chunks of SCSI and SAS driver code and improving performance.
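Once the kernel's nvme driver claims a PCI-Express SSD, the drive shows up as a block device with its own naming scheme rather than as a SCSI disk. A sketch of what an administrator might see and do (device names assume a single NVMe drive with one namespace):

```shell
# NVMe drives appear as /dev/nvme<controller>n<namespace>, not as /dev/sd*
ls -l /dev/nvme0n1

# Partition it like any other block device, then format the first partition
# (nvme0n1p1 is the first partition on the first namespace)
fdisk /dev/nvme0n1
mkfs.ext4 /dev/nvme0n1p1
```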