On the Front Lines of Advanced Scale Computing: HPC at BP and Rolls-Royce
This is the second of two articles based on the “HPC Impact Showcase” series at SC15 last week in Austin, which highlighted real-world applications of advanced scale computing to improve business competitiveness and innovation. The first article looked at Boeing; here we spotlight HPC strategies at BP and Rolls-Royce.
The oil industry, for decades an early and eager adopter of advanced scale computing for energy discovery, can be viewed as a victim of its own success. Since 2009, oil companies have more than doubled domestic production of oil, in part through their use of HPC. In so doing, they have flooded the global market with supply. With demand down in some parts of the world, the price of a barrel of oil has dropped by nearly 50 percent in the past 18 months, and oil industry profits have declined with it.
Since mid-2014, more than half of the drilling rigs in the U.S. have been shut down and 200,000 oil workers have lost their jobs. But even in a boom-and-bust industry now in a downturn, one factor remains consistent throughout up-and-down business cycles at BP: the drive for more efficient energy discovery and extraction.
BP’s HPC resources, said to be the most powerful in a commercial research setting, process enormous seismic datasets, converting them into graphical images of the rock deep beneath the earth’s surface that indicate the presence of oil.
According to Keith Gray, BP’s director of technical computing at the company’s Center for High Performance Computing in Houston, BP has cut the cost of acquiring seismic data by a factor of two to five. “We’ve improved the seismic enough to where we now see prospects we can drill and produce from that we wouldn’t have known about.”
The company’s HPC systems total 115,000 CPUs with a throughput capacity of 3,800 TF, based on Intel Xeon "Haswell" (2,700 TF) and "Ivy Bridge" (1,130 TF) processors, according to the company. Storage is handled by DataDirect Networks’ SFA12K high-performance storage platform behind the Lustre file system, utilizing between 1,200 and 1,600 hard drives ranging from 3 TB to 6 TB, with 8 TB drives on order.
This is housed in BP’s state-of-the-art HPC facility in Houston, opened in 2013, which has 15,000 square feet of compute space and 3.9 MW of electrical power, with growth capacity to 8.8 MW. Earning a PUE rating of 1.3 (better than the original goal of 1.35), the new building is 30 percent more energy efficient than the facility it replaced.
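PUE (power usage effectiveness) is simply the ratio of total facility power to the power consumed by the IT equipment alone, so a lower figure means less overhead going to cooling and power delivery. A minimal sketch of the calculation, using illustrative numbers rather than BP's actual meter readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    Lower is better; 1.0 would mean every watt goes to computing.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 3,900 kW overall while
# its IT gear draws 3,000 kW has a PUE of 1.3.
print(round(pue(3900, 3000), 2))  # → 1.3
```

At BP's scale, the difference between the original 1.35 goal and the achieved 1.3 translates into a meaningful power saving at multi-megawatt loads.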
A listing of supercomputers previously used at BP reads like a history of the HPC industry: a 0.13 GF Cray-1 dating to 1976, followed by a Cray-2 and a C90; a Thinking Machines CM-5 from 1993-2001; SGI Origin servers from 1997-2002; a 3 TF HP Itanium-based system installed in 2002; and a 35 TF SGI Altix in 2005.
The ramp-up in processing means seismic imaging that would have taken, literally, years of compute time 10 to 15 years ago can now be accomplished in an hour. Major new finds are taking place at a time when many experts had predicted most of the world’s reserves would be depleted.
Gray said more than 90 percent of compute resources at the Center for High Performance Computing are dedicated to seismic imaging. The center has 14 staffers, most of them Ph.D. mathematicians, geophysicists and computer scientists. Of those, five are systems administrators handling network storage, operating systems troubleshooting and application loading; another two are focused on hardware support; and four are responsible for data processing quality analysis and data management.
“One of the real challenges we faced as we grew this computational scientist team,” Gray said, “is how do we get embedded in the research teams? We don’t want to sit on the other side of the wall and have questionable code get thrown over the fence. It was a real challenge: How do we step up our game? We know that we want to support their codes and facilitate them. But we need to be involved from Day 1, from idea generation to implementation. We’re making very good progress with that, but we’re not completely there.”
Gray cited a researcher who generated MATLAB code, tested it on a very small data set and then found that on the single node available to him, the job would run for three or four months. “So people on our team took the MATLAB code, created a binary and then ran thousands of copies of the binary and cut that three-month cycle time down to about six hours,” Gray said. “That’s nice, but the question is: How do we get involved even earlier so that we don’t end up with MATLAB code that potentially becomes a bottleneck for us? We’re making progress; it’s not perfect.”
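The pattern Gray describes, running many independent copies of the same computation with each copy working its own slice of the data, is the classic embarrassingly parallel fan-out. A minimal sketch of the idea, where the function names and the toy squared-sum workload are hypothetical stand-ins rather than BP's actual code:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for one copy of the compiled binary working its own
    # slice of the seismic dataset.
    return sum(x * x for x in chunk)

def run_parallel(data, n_workers=8):
    # Split the dataset into one independent chunk per worker and
    # run all copies concurrently; combine the partial results at the end.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # The parallel result matches the serial one; only the wall-clock
    # time changes, which is the point of the fan-out.
    assert run_parallel(data) == sum(x * x for x in data)
```

Because the chunks share no state, the speedup scales with the number of copies, which is why a three-month serial run can collapse to hours across thousands of nodes.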
Gray said data movement is a major challenge.
“We support researchers who are all around BP’s world,” said Gray. “The datasets are so large now, it’s really impractical to move them quickly enough back to the researchers at the business sites, so we need to improve our capabilities at delivering remote graphics.”
To drive high-resolution graphics to researchers outside of Houston, BP currently uses an HP video capture and compression tool that, instead of moving data, moves pixels across the network.
BP’s HPC challenges will only increase as seismic sensors come down in price, driving much higher data densities. “That’s great for the business, it will let us have higher resolution, but it has significant implications on our compute and storage demands,” Gray said, explaining that increased processing capability is used to increase the size of the problem, rather than to run the same problem faster.
The result, according to Gray: BP’s Center for High Performance Computing will need to double its compute performance year-on-year to keep pace with the ever-expanding requirements of seismic imaging.
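Annual doubling is exponential growth, so capacity after n years is the starting capacity times 2**n. Illustrative arithmetic only, taking the roughly 3.8 PF figure quoted earlier as a starting point and a hypothetical 100 PF requirement as the target:

```python
import math

# Annual doubling: capacity after n years = start * 2**n.
start_pf = 3.8     # approximate current throughput, from the article
target_pf = 100.0  # hypothetical future requirement for illustration

# Number of doublings needed to reach or exceed the target.
years = math.ceil(math.log2(target_pf / start_pf))
print(years, start_pf * 2 ** years)  # → 5 121.6
```

Five doublings, i.e. a 32x increase inside five years, gives a sense of why Gray frames the demand curve as a challenge rather than routine growth.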
Aerospace is an industry that lives at the leading edge, driven by extended, high-budget product development cycles that push design teams to the limits of innovation. Rolls-Royce, with its civil and defense aerospace units, is no exception, having invested $1.9B in R&D in 2014 alone. The company puts HPC at the heart of its engine design operation, with the goal of accelerating the design process while moving more design and testing into digital form – and delivering engines that exceed the performance of their predecessors.
Case in point: the Trent XWB, the most efficient turbofan aero engine flying today, which delivers $2.5 million in fuel savings per aircraft per year, according to Rolls-Royce. Designed for the Airbus A350, it entered development in 2007 and took flight in 2012. By July 2015, 1,500 engines had been sold to 40 customers.
Getting the Trent XWB off the ground was the end result of a complex and iterative process, according to Todd Simons, Senior Analyst, High Performance Computing at Rolls-Royce, Indianapolis, one designed to minimize physical testing while completing design cycles as quickly as possible. It’s also multidisciplinary, requiring engineers to account for weight reduction, stress, lifing, efficiency, dynamics, aerodynamics, heat transfer and, finally, manufacturing.
Rolls-Royce does not disclose details on the HPC systems or software it uses, but those systems are CFD and FEA workhorses that help engineers strike the right balance among the many factors that bear on the design of a reliable, safe, efficient and powerful engine.
Rolls-Royce uses both commercial and in-house software, with an emphasis on internal software for CFD, which Simons said provides Rolls-Royce with technology leadership while also containing costs by reducing licensing fees.
The process begins with what is, relatively speaking, low-tech CFD and FEA software – tools that were state-of-the-art in the 1970s.
“We use a wide variety of software tools, from low to high fidelity,” Simons said, “but we don’t jump in and use high fidelity CFD at the beginning of the design process. We have design tools developed back in the 1970s that were at the time computationally intensive.”
These tools are used when Rolls-Royce design engineers spec out a new gas turbine engine – figuring out such requirements as blade counts and pressure ratios. “They’re lower fidelity tools but they provide good information, and then we have a portfolio of tools with increasing fidelity,” at increasing computational cost, as the process moves forward.
“We have large computers, but they’re not large enough,” said Simons. “We’d like to run larger models, more simulations than we have capacity for. So there’s not only a trade-off between disciplines but also between working the constraints that we have. We size models using engineering judgment to find the right number of simulations and how to achieve more of the design space efficiently.”
As each design phase unfolds and higher fidelity tools are brought in, the process is characterized by increasing accuracy and higher cost, along with lower risk and fewer assumptions. And increased pressure.
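The progression Simons describes – screening many candidates with cheap, low-fidelity tools and reserving the expensive high-fidelity budget for the most promising few – can be sketched as follows. The one-variable toy models and the shortlist size here are hypothetical, not Rolls-Royce's actual tools or workflow:

```python
import random

def low_fidelity(x):
    # Cheap 1970s-style approximation of some figure of merit
    # (say, efficiency as a function of one design variable).
    return -(x - 0.6) ** 2

def high_fidelity(x):
    # Expensive model: same broad trend, plus detail the cheap
    # model misses (here, a shifted optimum and an extra term).
    return -(x - 0.62) ** 2 - 0.05 * (x - 0.62) ** 4

random.seed(0)
candidates = [random.random() for _ in range(1000)]

# Screen everything with the cheap model; only the 10 best survive
# to the costly high-fidelity stage.
shortlist = sorted(candidates, key=low_fidelity, reverse=True)[:10]
best = max(shortlist, key=high_fidelity)
```

The budget trade-off is explicit: 1,000 cheap evaluations plus 10 expensive ones, instead of 1,000 expensive ones, which mirrors the capacity constraints Simons describes.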
“At the end of the design cycle,” said Simons, “where we’ve had a small army of designers working on it, before flight testing starts we have to go through this design test, and there’s a risk that if we don’t pass that test we have to back up and repeat the design. So that’s a high-risk event, and we use computational methods to de-risk the design so we are confident early in the design phase that we are able to meet the requirements and pass those tests.”
A primary advantage delivered by HPC is the reduction in physical testing. Simons said Rolls-Royce has reduced engine compressor tests from approximately 30 in the 1980s to a single rig test today.
“The idea here is to take these advanced methods – CFD, extreme event finite element modeling – and speed them up,” he said. “So we speed them up by getting better scale, running on more processors, so that we can get faster turnaround time. That allows us to use higher fidelity tools earlier in the design process so we can impact designs and increase the design maturity. To us that means we’re reducing program risks so that at the end of the program we’ve met all the requirements, we have happy customers who have been delivered on time, on schedule and on cost.”
According to Rolls-Royce, since 2000 its engines have achieved a 20 percent reduction in carbon dioxide emissions, 60 percent reduction in NOx (a chief cause of smog and acid rain) and noise levels have been cut in half.
“The design cycle is an intense process,” said Simons. “Larger models improve predictions but computational costs increase faster than model size increases. They also take longer to set up and analyze – and someone is always waiting for those results.”