
The Surge in Oil & Gas Supercomputing 


Supercomputing isn’t new in the world of oil and gas (O&G). It’s been a valuable tool for exploration, refining and processing for decades.

But in recent years, the need to explore unconventional reservoirs, which requires advanced compute capabilities, has spurred a supercomputing arms race as O&G companies seek innovative ways to gain a competitive advantage at every step along the value chain. Some providers are even on the path to exascale computing: systems at least 10 times more powerful than today's fastest supercomputers, able to perform a billion billion calculations per second.

It’s a classic supply-and-demand story. O&G companies, in an era of long-term oil price stability, are once again investing heavily in exploration and production. With most easy-to-find conventional oilfields already in production, attention is shifting to unconventional oilfields, as well as to the deepwater conventional fields that were long considered marginal and thus remained elusive.

Until now.

Technology advancements are transforming every aspect of the O&G value chain. Remote 3D visualization helps geoscientists pinpoint productive oil reservoirs. Deep learning methodologies for geologic workloads are improving reclamation of existing wells. Precise sensors monitor pipeline data and report problems in real time. Elastic full-waveform inversion (FWI) techniques are optimizing seismic imaging, reservoir modeling and geophysical tasks.

These and dozens of other O&G-specific technology advancements have one thing in common: they demand the more sophisticated, faster compute power that only high-performance computing (HPC) can deliver, including massively parallel processing techniques that run many parts of a program simultaneously.
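To make that parallelism concrete, here is a minimal Python sketch of the pattern such workloads rely on: many independent units of work, such as individual seismic shot gathers, processed at the same time across CPU cores. The process_shot function, the shot count and the "energy" result are illustrative assumptions, not any vendor's actual pipeline; production systems spread the same idea across thousands of cluster nodes rather than the cores of one machine.

```python
# Minimal, hypothetical sketch of massively parallel processing for seismic work:
# independent shots are farmed out to worker processes and computed concurrently.
from multiprocessing import Pool

import numpy as np


def process_shot(shot_id: int) -> float:
    """Stand-in for one expensive, independent unit of work
    (e.g., migrating a single shot gather)."""
    rng = np.random.default_rng(shot_id)
    trace = rng.standard_normal(100_000)
    return float(np.sum(trace ** 2))  # placeholder "energy" result


if __name__ == "__main__":
    shot_ids = range(64)          # a real survey has thousands of shots
    with Pool() as pool:          # one worker per available CPU core
        results = pool.map(process_shot, shot_ids)
    print(f"processed {len(results)} shots")
```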

Discovering the undiscoverable

Legacy discovery and extraction processes are no longer cutting it, but the next wave of supercomputers stands to revolutionize the industry. Early O&G adopters of supercomputers have already seen transformation in operations such as anisotropic reverse time migration (RTM). Once considered impractical for seismic depth imaging because the necessary computational power was too costly, RTM can now run on massively parallel compute clusters to deliver affordable and highly accurate models.
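To show why RTM is so compute-hungry, the heavily simplified, hypothetical 1-D sketch below runs two full finite-difference wave simulations, one forward in time for the source and one backward in time for the recorded data, and combines them with a zero-lag cross-correlation imaging condition. Every number in it (grid size, velocities, Ricker wavelet, mute time) is an illustrative assumption; real RTM operates on 3-D volumes and thousands of shots, which is exactly where massively parallel clusters earn their keep.

```python
# A heavily simplified, hypothetical 1-D illustration of the RTM idea: two full
# wave simulations plus a cross-correlation at every time step. All parameters
# are illustrative; nothing here reflects a real survey or vendor workflow.
import numpy as np

nx, nt = 400, 2000           # spatial grid points and time steps
dx, dt = 5.0, 0.0005         # metres, seconds (chosen to satisfy the CFL limit)
v = np.full(nx, 2000.0)      # background velocity (m/s)
v[150:] = 2500.0             # one velocity jump: the reflector we want to image

src_ix, rec_ix = 20, 30      # source and receiver grid indices
t = np.arange(nt) * dt
f0 = 15.0                    # Ricker wavelet peak frequency (Hz)
arg = (np.pi * f0 * (t - 0.1)) ** 2
wavelet = (1.0 - 2.0 * arg) * np.exp(-arg)

def propagate(source_trace, inject_ix, velocity):
    """Second-order finite-difference acoustic propagation; stores every time step."""
    u_prev, u_curr = np.zeros(nx), np.zeros(nx)
    history = np.zeros((nt, nx))
    c2 = (velocity * dt / dx) ** 2
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        u_next = 2.0 * u_curr - u_prev + c2 * lap
        u_next[inject_ix] += source_trace[it]
        u_prev, u_curr = u_curr, u_next
        history[it] = u_curr
    return history

# Synthesise "recorded" data with the true velocity, then mute early arrivals
# (direct wave and near-surface bounce), as is commonly done before migration.
recorded = propagate(wavelet, src_ix, v)[:, rec_ix]
recorded[t < 0.5] = 0.0

# RTM proper: source wavefield forward in time, receiver data backward in time,
# combined with a zero-lag cross-correlation imaging condition.
v_mig = np.full(nx, 2000.0)                      # smooth migration velocity
src_field = propagate(wavelet, src_ix, v_mig)
rcv_field = propagate(recorded[::-1], rec_ix, v_mig)[::-1]
image = np.sum(src_field * rcv_field, axis=0)

print(f"field updates for one shot: {2 * nt * nx:,}")
print("image peak at grid index", int(np.argmax(np.abs(image))),
      "(the reflector sits at index 150)")
```

Even this toy version performs two simulations' worth of field updates plus a correlation at every step for a single shot; scaling that to three dimensions and industrial survey sizes is what pushes RTM onto supercomputers.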

The point is simple: time is money. When sophisticated calculations across a wide range of O&G applications that once required five weeks can be reduced to, say, three weeks, exploration for the next great discovery can be completed far more expeditiously. That often means the revenue engine starts churning sooner.

A textbook example of supercomputing’s efficacy came in 2017, when BP discovered an estimated 200 million barrels of oil hidden for years by a salt dome. Because salt domes distort subsurface seismic imaging and make the data harder to interpret, O&G companies often lack enough information to judge with confidence whether a prospect is worth drilling.

By applying advanced mathematical algorithms developed by BP’s Subsurface Technical Center to seismic data processed at BP’s Center for High Performance Computing (CHPC), data that would ordinarily take a year to analyze was processed in a few weeks. The innovation, which enhanced the FWI seismic modeling technique, has enabled BP to sharpen the images it collects during seismic surveys, particularly of areas below the earth’s surface previously obscured or distorted by complex salt structures. The sharper seismic images revealed insights that prompted BP to drill new development wells with greater confidence and accuracy. The find would have been impossible, or would have led to costly failed attempts, had BP not upgraded its CHPC in Houston into a powerful, world-class supercomputer for commercial research.
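For readers unfamiliar with FWI, the sketch below shows only the shape of the loop: simulate data from a candidate earth model, measure the misfit against observed data, and update the model along the gradient, over and over. A small random linear operator stands in for the wave-equation solver that consumes most of the cycles in practice, and every value is synthetic; this illustrates the iteration pattern, not BP's actual method.

```python
# Toy sketch of the full-waveform-inversion loop: gradient descent on a
# least-squares data misfit. The linear operator F is a hypothetical stand-in
# for the (nonlinear, wave-equation) forward solver used in real FWI.
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 50, 200
F = rng.standard_normal((n_data, n_model))     # stand-in forward operator
m_true = rng.standard_normal(n_model)          # "true" earth model
d_obs = F @ m_true                             # synthetic observed data

m = np.zeros(n_model)                          # starting model
step = 1.0 / np.linalg.norm(F, 2) ** 2         # safe gradient-descent step size
for it in range(200):
    residual = F @ m - d_obs                   # simulated minus observed data
    misfit = 0.5 * float(residual @ residual)  # least-squares objective
    gradient = F.T @ residual                  # adjoint of F applied to residual
    m -= step * gradient                       # model update
    if it % 50 == 0:
        print(f"iteration {it:3d}  misfit {misfit:.4e}")

print("final relative model error:",
      float(np.linalg.norm(m - m_true) / np.linalg.norm(m_true)))
```

In real surveys each misfit and gradient evaluation requires full wave simulations for every shot, which is why FWI workloads like BP's run on dedicated supercomputers rather than workstations.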

Other O&G companies have also turned to HPC to host algorithms that support workloads such as high-accuracy, high-resolution seismic imaging, geological modeling and reservoir simulation upstream, as well as to enable big data management across their operations. This reflects the value O&G companies expect to realize from supercomputing, especially as the industry approaches exascale computing, capable of up to one quintillion calculations per second without increasing power consumption.

The path forward

As companies move more of their operations in-house, the need for substantial upgrades to computing capacity, speed and security becomes even more pronounced. As O&G firms embark on the next phase of their supercomputing build-out, they should consider several factors to support such a major endeavor:

  • Workload expertise. As computers become more advanced, data sets are growing larger and more detailed because O&G companies can collect and compute data at a finer level than ever before. Thorough knowledge of O&G workloads, beyond generic experience with automotive and aerospace products, is key to success in discovery and extraction.
  • Innovation roadmap. With new technology advancements always on the horizon, O&G companies need technology partners able to support growth and landscape shifts with a product roadmap that has the vision and means to scale, not only to keep pace with demand but also to propel them ahead of the competition.
  • Energy experience. If a prospective vendor is inexperienced in O&G and lacks thorough knowledge of areas such as seismic processing, reservoir modeling and remote visualization, it should be excluded from your search. Too much money and too many reputations are at stake to proceed with an energy-vertical novice.

Faster data processing and analysis lead to insights that can be leveraged to make real-time decisions with billion-dollar implications. And those companies making shrewd investments in HPC are enjoying a leg up on the competition, while gaining significant returns across the enterprise.

Tom Tyler is the Americas director of HPC at Hewlett Packard Enterprise (HPE).

