News & Insights for the AI Journey | Thursday, March 21, 2019

Oil and Gas Supercloud Clears Out Knights Landing Inventory: All 38,000 Wafers 


The McCloud HPC service being built by Australia’s DownUnder GeoSolutions (DUG) outside Houston is set to become the largest oil and gas cloud in the world this year, providing 250 single-precision petaflops for DUG’s geoscience services business and its expanding HPC-as-a-service client roster. Located on a 20-acre datacenter campus in Katy, Texas, the liquid-cooled infrastructure will comprise the largest installation of Intel Knights Landing (KNL) nodes in the world.

If you’d like to follow suit with your own KNL cluster and you don’t have the hardware already, you’re out of luck because not only has the product been discontinued (you knew this), but DUG has cleared out all the remaining inventory, snagging 38,000 wafers. We hear DUG similarly took care of Intel’s leftover Knights Corner inventory back in 2014 (and those cards are still going strong processing DUG’s workloads).

At the very well-attended Rice Oil & Gas conference in Houston last week, we spoke with Phil Schwan, CTO for DUG, who also delivered a presentation at the event. We chatted about DUG’s success with Phi, their passion for immersion cooling, and some of the interesting decisions that went into the new facility, like the choice to run at 240 volts, as well as McCloud’s custom network design.

DUG started off in oil services, in quantitative interpretation, before getting into processing and imaging, which has been the company’s bread and butter for over a decade. But Schwan emphasized that DUG is first and foremost an HPC company. “That’s been our real focus in how we set ourselves apart – we have terrific geoscientists, but they are empowered to such a large degree by the hardware and the software,” he shared.

“Bruce,” DUG’s Perth cluster, comprises KNL nodes totaling 20 single-precision petaflops. The “Bubba” tanks currently being installed at the Houston Skybox facility will look similar to these. Photo provided by DUG.

DUG currently enjoys somewhere in the neighborhood of 50 aggregate (single-precision) petaflops spread across the world (the company has processing centers in Perth, London, Kuala Lumpur, and Houston), but it is continually hitting its head on this ceiling. At the Skybox Datacenters campus in Katy, Texas, eight miles west of the company’s U.S. headquarters in Houston, DUG will not only be adding to its internal resources for its geoscience services business, but also priming the pump (significantly so) for the HPC-as-a-service business it unveiled at SEG last year.

“Up until now it’s been a purely service business – processing, imaging, FWI, and so on, but as soon as Skybox opens in early Q2, we’ll have a lot more cycles to sell to third parties – and we have a few of those clients already beta testing the service both in Australia and here in the Americas.”

To meet that demand, DUG has ordered the remaining Phi Knights Landing inventory from Intel, all 38,000 wafers. Once dies are cut and rolled into servers, the nodes will be combined with an infusion of KNLs transferred from DUG’s other Houston site (the Houston capacity is collectively referred to as “Bubba”) to provide around 40,000 total nodes with a peak output of about 250 (single precision) petaflops.
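Those figures pass a rough sanity check. The article doesn’t name the SKU, so assume here a KNL 7250-class part: 68 cores at 1.4 GHz, each with two AVX-512 vector units doing 16 single-precision FMAs per cycle.

```java
public class KnlPeak {
    // Peak single-precision teraflops for one node:
    // cores x clock (GHz) x SP FLOPs per cycle per core.
    static double nodeTeraflops(int cores, double ghz, int flopsPerCycle) {
        return cores * ghz * flopsPerCycle / 1000.0;
    }

    public static void main(String[] args) {
        // Assumed KNL 7250-class part: 68 cores, 1.4 GHz, and per core
        // 2 AVX-512 VPUs x 16 SP lanes x 2 FLOPs/lane (FMA) = 64 FLOPs/cycle.
        double nodeTF = nodeTeraflops(68, 1.4, 2 * 16 * 2); // ~6.09 SP TFLOPS
        double clusterPF = 40_000 * nodeTF / 1000.0;        // ~244 SP petaflops
        System.out.printf("node: %.2f TF, cluster: %.0f PF%n", nodeTF, clusterPF);
    }
}
```

About 244 petaflops for 40,000 nodes, consistent with the quoted “about 250.”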

Schwan described why DUG is so partial to the Phi products (the company is almost certainly Intel’s largest customer for the line):

“There were a few reasons – number one, we came to the accelerator party fashionably late, and I think that worked well for us because if we had had to choose five years earlier, we would have chosen GPUs, and all of our codes would have gone in that direction and we’d be stuck there. Whereas our transition first to Knights Corner and then to Knights Landing – even if Intel did us a bit of a disservice by pretending that it’s trivial and you just recompile and run – they are so much closer to the classic x86 architectures we are already used to that we were able to take all of our existing skill sets, our existing toolchains and so on and make incremental improvements to make it run really well on the KNLs.

“The other thing is we run a bunch of things that are not necessarily hyper-optimized for the KNL – we run a lot of Java on the KNL and it runs great. And there’s AVX512 vectorization in the JVM now as well – again if we write the code intelligently and it uses lots of threads and it’s not terrible with how it addresses memory, the KNLs for us have been a huge win.”
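Schwan’s recipe (lots of threads, sensible memory access, otherwise plain Java) can be sketched with a hypothetical example, not DUG’s code: a unit-stride SAXPY loop over primitive arrays is the kind of pattern HotSpot’s auto-vectorizer can compile to AVX-512, and a parallel stream supplies the threads.

```java
import java.util.stream.IntStream;

public class SaxpyDemo {
    // SAXPY over primitive arrays: a simple counted loop with unit-stride
    // access is the pattern the HotSpot JIT's auto-vectorizer can turn
    // into AVX-512 instructions on capable hardware.
    static void saxpy(float a, float[] x, float[] y, int lo, int hi) {
        for (int i = lo; i < hi; i++) {
            y[i] += a * x[i];
        }
    }

    public static void main(String[] args) {
        int n = 1 << 22;
        float[] x = new float[n];
        float[] y = new float[n];
        java.util.Arrays.fill(x, 2.0f);

        // "Lots of threads": carve the arrays into contiguous chunks and
        // fan them out across cores, keeping each thread's access local.
        int threads = Runtime.getRuntime().availableProcessors();
        IntStream.range(0, threads).parallel().forEach(t -> {
            int lo = (int) ((long) n * t / threads);
            int hi = (int) ((long) n * (t + 1) / threads);
            saxpy(3.0f, x, y, lo, hi);
        });
    }
}
```

The chunks are disjoint, so the threads never write the same element and no synchronization is needed.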

Memory was another plus in Phi’s column, though DUG will offer alternatives based on price-performance and customer demand. “If you just look at the price of a high-end GPU – KNL comes with 16 Gigs of on-package memory, which is huge; to get a 16-Gig GPU you are talking many multiples of the price we pay for KNLs,” he said. “So it’s a no-brainer from just a bang-for-buck perspective. But at the end of the day we are not really religious about it – if something else comes along that has better TCO, then we’ll buy that instead. If we have McCloud clients, as we already do, who say we must have GPUs because we have this or that code that we don’t want to rewrite, then we’ll get the GPUs.”

For the rest of this story, please visit sister publication HPCwire.

About the author: Tiffany Trader

With over a decade’s experience covering the HPC space, Tiffany Trader is one of the preeminent voices reporting on advanced scale computing today.
