SGI Supercomputer Upgrade to Boost Total Exploration in 2016
Energy giant Total will use its recently announced supercomputer upgrade to improve the productivity and efficiency of its seismic processing and reservoir simulation when the system goes live next year.
Late last month, Total said SGI would upgrade Pangea, its existing supercomputer, with an additional 9.2 petabytes of storage and 4,608 more nodes based on the Intel Xeon E5-2680 v3 processor, comprising 110,592 cores and 589 terabytes of memory built across 8 M-Cells, Bob Braham, chief marketing officer at SGI, told Enterprise Technology. Closed-loop airflow and warm-water cooling create embedded hot-aisle containment, designed to reduce overall cooling requirements and significantly cut energy consumption compared with traditional HPC designs, he said. The current system is a 2.3-petaflop machine based on the Intel Xeon E5-2670 v1 processor, consisting of 110,592 cores and 442 terabytes of memory built on SGI ICE X. Integrated by SGI professional services, the data management solution for 18.4 petabytes of usable storage capacity includes SGI InfiniteStorage 17000 disk arrays with the Intel Enterprise Edition for Lustre file system and SGI DMF tiered storage virtualization, SGI said.
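As a rough sanity check on those figures (assuming dual-socket nodes, and noting that the Xeon E5-2680 v3 is a 12-core part), the announced node and core counts are internally consistent:

```python
# Hypothetical back-of-the-envelope check, not from SGI or Total:
# assume each of the 4,608 new nodes carries two 12-core E5-2680 v3 CPUs.
nodes = 4608
cores_per_node = 2 * 12           # dual-socket, 12 cores per socket
total_cores = nodes * cores_per_node
print(total_cores)                # 110592, matching the announced core count

# 589 TB of memory spread across those nodes works out to roughly
# 128 GB per node (using decimal terabytes).
print(round(589_000 / nodes))     # ~128 GB per node
```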
"With this upgrade Total has almost tripled its compute power [and] increased its storage capacity by 50 percent, while only doubling its consumption of electricity," Ian Lumb, Bright evangelist at Bright Computing, told Enterprise Technology. "Because it is likely to shorten the time it takes Total to produce results, potentially while working with larger volumes of data, this is an impactful and yet cost effective upgrade."
"When we spoke to SGI [in 2012-2013] it was already in our mind that at one stage we would have to upgrade this machine," said Malzac. "It was fairly natural that we did it two years after or three years after. We are making the decision now. That is why there was no RFP, no request for proposal."
Total primarily uses its supercomputer in its exploration and production divisions, where it is mainly dedicated to seismic processing and reservoir simulation, Malzac said. The company uses Pangea for geoscience, building a picture of what lies beneath the earth's surface, he said. Growing data volumes drove the need for the upgrade.
"The seismic [software] is really the tool we use offshore and onshore to give us a good vision of what is under our feet and what sort of complexity is lying below our feet and what we need to understand and eventually to drill. In fact why do we need such a level of computing power to do seismic processing is actually linked to the volume of data we are acquiring in field which is growing not exponentially but not far from it, and also the algorithms we want to use," said Malzac. "For the reservoir simulation, which is coming later in the exploration process, we want to give a very good understanding of what is going on in an oil and gas reservoir in terms of the fluid movement and in terms of the oil and gas or both we have got in the reservoir. For doing that we need reservoir simulation tools."
Drilling is an expensive venture, and Total needs to gather as much insight as possible before it drills, Malzac said. But the volumes of seismic data feeding these sophisticated supercomputers demand vast amounts of storage and powerful networks. Total uses two levels of storage: scratch storage used during computation of a job, and disk storage for data that is kept for later processing.
"We have to upgrade storage at the same time as the machine. We've added a storage capacity which will be 27 petabytes which is fairly large," he said. "We are still using disks. Flash technology for that amount of data would be tremendously expensive. I know flash technology is becoming more and more affordable but when you're talking 27 petabytes of storage, it's not the storage you're talking about with your laptop."
Storage must still sit physically close to the supercomputer to avoid network bottlenecks, said Malzac. SGI is upgrading Total's network to support Pangea's upgrade, he said, but an undisclosed number of Total technical employees will upgrade various software applications. Malzac declined to specify the number or type of software used, for competitive reasons.
"When you go from an 8-core to a 12-core processor you need to adapt your software to take advantage of the additional cores the processors are providing. We need to adapt our software to make sure we are making good use of the power of the machine," he noted.
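The kind of adaptation Malzac describes can be as simple as sizing a worker pool to the cores actually present rather than hard-coding a count. A minimal sketch in Python (the per-trace computation and all function names here are hypothetical illustrations, not Total's seismic software):

```python
import os
from multiprocessing import Pool

def process_trace(trace):
    """Stand-in for a per-trace seismic computation (hypothetical)."""
    return sum(x * x for x in trace)

def process_survey(traces, workers=None):
    # Size the worker pool to the cores actually available, so the same
    # code scales from an 8-core to a 12-core processor without changes.
    workers = workers or os.cpu_count()
    with Pool(workers) as pool:
        return pool.map(process_trace, traces)

if __name__ == "__main__":
    # Toy "survey": eight short traces processed in parallel.
    traces = [[float(i)] * 4 for i in range(8)]
    print(process_survey(traces))
```

Scheduling one worker per available core lets the same code exploit whatever processor generation it lands on; real seismic kernels would additionally need data layouts and vectorization retuned for each new CPU, which is the heavier part of the work Malzac alludes to.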
Although Total initially bought this computer with an upgrade in mind, Total has not decided whether it will upgrade Pangea a second time, said Malzac. The computer is based on Intel technology, so part of the decision rests on Intel's roadmap, he said. And since this upgrade's rollout won't start until the summer, it's probably premature to discuss its replacement, Malzac said.
One thing is more certain: Total will not have the fastest computer for long, industry executives agreed.
"We might be a little bit in advance in terms of computing power but we are not really in a race with our competitors to install the bigger machine before them," said Malzac. "This is not our main driver. It may be fairly important for SGI but it is not the main driver for Total. The new machine will allow us to use fairly sophisticated algorithms which we cannot really use on the old machine unless we run them for weeks so there is a productivity issue."
Total's upgrade should, however, spur more oil and gas companies to invest in new or upgraded high-end computers, said Bright's Lumb.
"Organizations like Total operate some of the largest commercial data centers on the planet. Because their success is inextricably linked to efficient and effective use of HPC, watershed events like significant purchases or upgrades, innovative use of technology, introduction of improved algorithms, are always on each other’s competitive radars," he said.