Advanced Computing in the Age of AI | Thursday, April 18, 2024

Code Modernization: Unlocking Old Code (and Incorporating AI) for Parallel Architectures 


Advanced computing performance depends on more – much more – than the processor, and that dependence takes many forms. Whatever the current validity of Moore's Law, it's indisputable that the rest of the computing ecosystem must keep pace with the processor if the system is to deliver the results everyone's after.

One aspect is the performance of widely used, public domain high performance codes, some of which date back to the late 1950s when parallel architectures were a futuristic computer science vision. Still in use today, those codes have been goosed, tickled and jolted for better performance, but they remain at their core what they’ve always been: serial applications.

We recently caught up with Joe Curley, senior director of Intel's code modernization organization, who shared observations about Intel's efforts to optimize, and parallelize, long-in-the-tooth codes for the latest generations of highly parallel x86 CPUs.

This includes work on applications used by manufacturers in product design, such as OpenFOAM for CFD; advanced MRI diagnostic programs used in the medical industry; seismic codes for the oil and gas industry; and applications used by banks and other financial services organizations.

Obviously, it’s in Intel’s self-interest to extend the life of the 40-year-old x86 architecture by maintaining an up-to-date code library. But organizations all over the world are hampered by the old code that, unoptimized, drags down the throughput of high performance clusters and impedes the work they do.


A recent development in code modernization, Curley said, has been the incorporation of AI and machine learning techniques, which – when done right – can boost performance far beyond what conventional, processor-focused code modernization work achieves.

Much of Intel’s code modernization work comes out of its global network of Intel Parallel Computing Centers (IPCCs). Begun four years ago with six centers, the program has expanded to 72 and has worked on 120 codes in more than 21 domains.

The following are excerpts from our interview with Curley, some of which have been re-ordered for clarity.

Definition and Need

Code modernization can mean many things, from using a modern language to optimizing performance. We use code modernization in the literal sense: to become modern, using the newest methods and technology available.

The typical impact of a code modernization problem is giving someone the ability to take on a problem that was just too big to get at before. We’re trying to extract the maximum performance from an application and take full advantage of modern hardware. Other words have been used: optimization, parallelization and some others. But you can be parallel without being optimal, you can be optimal without being parallel. So we chose a slightly different term. It’s imperfect but it gets across the idea.

Modern, general-purpose server processors have 18-22 processing cores, each with two threads and a vector unit built in. They're massively parallel processors. But by and large the applications we run on them have been derived from code written in a sequential processing era. The fundamental problem we work with is that many of the codes used in industry or in the enterprise today derive from algorithms written anywhere from the 1950s to the 2000s. The microprocessors of the time were primarily single-core machines, so you have a very serial application.

In order to use a modern processor, you could just take that serial application, make many copies of it and try to run them in parallel. And that's been done for years. But the real performance breakthroughs happen when someone steps back and asks: How can I start using all of these cores together computationally and in parallel?
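The jump Curley describes, from a serial loop to one computation spread across every core, can be sketched in a few lines. This is a hypothetical illustration only; the kernel `simulate` and the workload are invented for the example, not taken from any project mentioned here:

```python
import math
from concurrent.futures import ProcessPoolExecutor

def simulate(x):
    # Stand-in for an expensive per-element kernel (invented for illustration).
    return math.sqrt(x) * math.sin(x)

data = list(range(10_000))

# Serial version: one core walks the whole dataset.
serial = [simulate(x) for x in data]

def run_parallel(values):
    # Parallel version: the same kernel spread across all available cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate, values, chunksize=500))

if __name__ == "__main__":
    parallel = run_parallel(data)
    assert parallel == serial  # same answer, computed across many cores
```

Real modernization work goes well beyond this sketch, restructuring data layouts so the per-core vector units Curley mentions can be exploited too, but the basic distinction between a serial walk and a cooperative parallel run is the starting point.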

What's Encompassed by Code Modernization

Our group does everything from training and academic engagement to building sample codes and working with ISVs and communities, both internally and externally. We focus efforts on open source communities and open source codes. The reason is that we're not only trying to improve science, we're also trying to improve the understanding of how to program in parallel and how to solve problems, so having the teaching example, the example that a developer can look at, is incredibly important.

We’ve taken the output from the IPCCs, we’ve written it down, we’ve created case studies, we’ve created open source examples, graphs, charts – teaching examples – and then put it out through a series of textbooks. But importantly, all of the (output) can be used either by a software developer or an academic to teach people the state of the art.

For the IPCCs, the idea was to find really good problems that would benefit most from the modern machine if only we could unlock the performance of the code. Our work ranges from practical academics to communities that generate community codes. In some cases they're industrial and academic partnerships; some are in the oil and gas industry, working on refinement of core codes that will then go back in for use in seismic imaging. The idea is for these to be real hands-on workshops between domain scientists, computer scientists and Intel that have actual practical use within the life of our products.

So not only do we get the first-order benefit if, say, an auto manufacturer using OpenFOAM gets a result faster. That's great; we've made it more efficient. But we're also creating a pool of programmers and developers who'll be building code for the next 20 years, making them more efficient as well.

Example: Medical/Life Sciences

One of our IPCCs was with Princeton University, where researchers were trying to get a better understanding of what was happening inside the human brain, through imaging, while a patient was in the medical imaging apparatus. It's a form of MRI called fMRI. The science on that is pretty well established. They knew how to take the data coming from the MRI, compute on it and create a model of what's going on inside the brain. But in 2012, when we started the project, they estimated the calculation would take 44 years on their cluster. It wasn't a practical problem to solve.

So instead of the serial method they had been using, they could run the problem in parallel on more energy-efficient, modern equipment. They came up with a couple of things. One: they parallelized their code and saw huge increases in performance. But they also looked at it algorithmically; they began to look at the practicality of machine learning and AI, and how you could use that for science. Since these researchers happened to come from neural medicine centers, they understood how the brain works. They were trying to apply the same kind of cognition, or inference, that you have inside your brain algorithmically to the data coming from the medical imaging instrument.

They changed the algorithm, they parallelized their code, they put it all together and ended up with a 10,000X increase in performance. More practically, they were able to take something that would have taken 44 years down to a couple of minutes. They went from something requiring a supercomputing project at a national lab to something that could be done clinically inside a hospital.
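The two levers in that story, parallelizing the code and rethinking the algorithm, can be seen in miniature with a toy correlation problem, a common building block in fMRI analysis. The data here is random and the code purely illustrative, not Princeton's actual method: a naive Python loop over voxel pairs is replaced by a single matrix-level call that optimized, multithreaded linear algebra libraries can spread across cores.

```python
import numpy as np

rng = np.random.default_rng(0)
voxels = rng.standard_normal((50, 200))  # toy stand-in: 50 voxels, 200 time points

# Naive formulation: an O(n^2) Python loop over voxel pairs.
n = voxels.shape[0]
naive = np.empty((n, n))
for i in range(n):
    for j in range(n):
        naive[i, j] = np.corrcoef(voxels[i], voxels[j])[0, 1]

# Reformulated: one vectorized call computing the whole correlation matrix,
# handing the heavy lifting to optimized, multithreaded BLAS routines.
fast = np.corrcoef(voxels)

assert np.allclose(naive, fast)  # same result, radically less interpreter overhead
```

Neither change alone gets you four orders of magnitude; it is the combination of better algorithms and full use of the parallel hardware that produces results like the one Curley cites.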

That really captures what you can try to do inside a code modernization project. If you can challenge your algorithms, you can look at the best ways to compute, you can look at the parallelization, you can look at energy efficiency, and you can achieve massive increases in performance.

So now, how that hospital treats the neurology of the brain is different because of the advances offered by code modernization. Of course the application of that goes out into the medical community, and you can start looking at fMRI in more clinical environments.

Example: Industrial Design

One of the community applications, OpenFOAM, is used heavily in automobile manufacturing. We've worked with a number of fellow researchers to deliver power and performance breakthroughs of 2-3x, which, across an application of the size and magnitude of OpenFOAM, is really substantial.

It also creates a lighthouse example for commercial ISVs of what can be done. This clearly showed that for computational fluid dynamics at scale, entirely new methods can be applied to the problem. We’ve had a lot of interest and pick-up from commercial ISVs on some of the work being done using some of the community codes.

Here’s the thing we want to get at: What’s the real value in computing a model faster? Most people tend to think of code modernization simply as making a simulation run faster. But one of the things we’ve done is develop software that can help you better visualize your physical design.

Audi, for example, has worked with Autodesk as an ISV partner; they've developed a modern ray tracer (rendering engine), an example of the things we work on inside our code modernization group. We have another group that works on visualization and how to make your images look lifelike. Autodesk has come up with clever ways of doing that and building it into their product line, allowing Audi to remove physical prototypes, both for assembly and for interior and exterior design, from their process.

Think of someone building a clay model of a car and taking it to a wind tunnel, or building a fit-and-finish model of a car, to see how the interior design will look and to see if it’s pleasing to the customer. They’ve removed all that modeling. It’s all being done digitally, not only the digital design and simulation but also the digital prototyping, and then visualizing it through modern software on a departmental-sized computer.

The impact of that, according to Audi when they spoke at ISC, is that it removed seven months from their process for the fit-and-finish prototypes and six for the physical prototypes. If you can shave that much time out of your process you can gain major competitive advantage from HPC.

It’s all made possible by new highly parallel codes and interestingly, all the visualization is done entirely on general-purpose CPUs.

Example: Financial Services

For financial services companies, code modernization offers the opportunity to use the same cluster you'd use for the rest of your bank's operations for the most high-performance tasks. Whether it's options valuation, risk management or other tasks you use HPC for, we can do that on general-purpose Xeon CPUs.
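As a sketch of the kind of kernel meant by "options valuation", here is a minimal Monte Carlo price for a European call under Black-Scholes assumptions. This is a textbook toy with invented parameter values, not STAC-A2 and not any bank's code; it simply shows the vectorized style that maps well onto the wide vector units of general-purpose CPUs.

```python
import numpy as np

def mc_call_price(s0, strike, rate, sigma, t, n_paths, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)  # one standard normal draw per path
    # Terminal asset prices for all paths at once: no Python-level loop.
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * t) * payoff.mean()

price = mc_call_price(s0=100.0, strike=100.0, rate=0.05, sigma=0.2, t=1.0,
                      n_paths=1_000_000)
# With these inputs the Black-Scholes closed form gives roughly 10.45, and
# the Monte Carlo estimate should land close to that value.
```

Because each simulated path is independent, a kernel like this scales naturally across cores and vector lanes, which is why such workloads are a staple of CPU benchmarking.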

In banking, most of those codes are the crown jewels of the banks, so we can't talk about them. In many cases we don't even see them. But we can work on the STAC-A2 benchmark, built by a consortium of banks: a suite of benchmarks for a variety of problems that operate sufficiently like what the banks do to give an idea of how fast they could run their software. And the STAC-A2 results get published.

On both our general-purpose Xeon and Xeon Phi processors, we've repeatedly set world records on STAC-A2 through code modernization. It's an arms race. But we've done it multiple times with general-purpose code.

That allows the bank to take that code as an exemplar, and apply it to their own special algorithms and their own financial science, and get the most performance out of their general-purpose infrastructure.


EnterpriseAI