
Wu Feng Analyzes Green500 Data to Find a Path to Exascale Computing 

Feng, co-founder of the Green500, has collected and analyzed the data from five years of Green500 lists. Computers are getting bigger, faster, and more efficient every year. But is that enough to get us to an exaflop computer with an affordable energy cost in the foreseeable future? Maybe not...

You might consider the first Green500 supercomputer to be Green Destiny, a computer created over a decade ago by Wu Feng from Virginia Tech. It was a 240-node computer that took up five square feet of floor space and ran on 3.2 kW of power – roughly equivalent to the amount of power needed to run two hair dryers.

More accurately, though, Green Destiny should be considered the precursor to the Green500, the inspiration that got the whole thing started. The creation of that energy-efficient computer led Feng to give a speech at the International Conference on Parallel Processing in August 2002, titled “Honey, I Shrunk the Beowulf!” A couple of speeches, one paper, and a Green500 Request for Proposal later, Feng and his Virginia Tech colleague Kirk Cameron launched the Green500.

That was 2007.

The supercomputing industry has come a long way since then, of course. We're now producing computers that can hit nearly 18 petaflops, with six to seven times the energy efficiency of the best from 2007. It would seem possible to create, in the near future, an exascale computer that doesn't require more energy than large countries can afford.

But that future may be further off than most people hope. Without some significant breakthroughs in energy efficiency in the next five years, the goal of an affordable exaflop machine will keep receding into the future.

In the five years since 2007, Feng has collected a lot of data about how the efficiency of high-performance computers has improved. He shared some of that data in a presentation at the SC12 conference last November.

There's no question that there have been enormous strides since the Green500 was launched. Driving those strides is a big part of the reason the list was created. “The ultimate goal of the Green500 is to raise awareness of energy-efficient supercomputing,” Feng said. “We want to drive energy efficiency as a first-order design constraint on par with performance.”

Here are some of the numbers. In 2007 the Green500 list was topped by an IBM Blue Gene computer that clocked in at just over 357 MFLOPS/W. Its total power usage was 31.1 kW. Five years later the number one spot was claimed by Beacon, created by the National Institute for Computational Sciences (NICS) at the University of Tennessee, which reached nearly 2,500 MFLOPS/W (or, to be more precise, 2,499 MFLOPS/W). That's roughly a 7X improvement in energy efficiency. And Beacon uses just 45 kW of power.
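
As a sanity check on those figures (a back-of-the-envelope sketch in Python, not data from Feng's presentation), multiplying each machine's efficiency by its power gives the sustained performance it delivered, and the ratio of the two efficiencies gives the improvement:

```python
# Back-of-the-envelope check of the Green500 figures quoted above.
# Inputs are the numbers reported in the article; nothing else is assumed.

bluegene_2007 = {"mflops_per_w": 357.0, "power_kw": 31.1}    # 2007 Green500 #1
beacon_2012   = {"mflops_per_w": 2499.0, "power_kw": 45.0}   # 2012 Green500 #1 (Beacon)

def sustained_tflops(system):
    """Sustained performance implied by efficiency x power."""
    mflops = system["mflops_per_w"] * system["power_kw"] * 1_000   # kW -> W
    return mflops / 1e6                                            # MFLOPS -> TFLOPS

improvement = beacon_2012["mflops_per_w"] / bluegene_2007["mflops_per_w"]

print(f"2007 winner: ~{sustained_tflops(bluegene_2007):.0f} teraflops sustained")
print(f"Beacon:      ~{sustained_tflops(beacon_2012):.0f} teraflops sustained")
print(f"Efficiency improvement: ~{improvement:.1f}x")               # about 7x
```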

Computers have also packed much more punch since then. In 2007 the most efficient Blue Gene machine could reach about 11 teraflops. Beacon is a fairly small machine by today's standards, with a peak of 112 teraflops. But the TOP500 winner last November was Titan from Oak Ridge National Laboratory, weighing in at 17,590 teraflops.

Titan is also the third most energy efficient computer on the current Green500 list, at 2,143 MFLOPS/W. But that still gives it a big appetite, needing 8,209 kW at peak.

That is a long way from the target goal for an exaflop computer set by the Defense Advanced Research Projects Agency (DARPA) several years ago: 20 MW by 2018. But DARPA's own study later projected that it could not be done for under 67 MW – or maybe not even under 100 MW, and even 100 MW implies an efficiency of 10,000 MFLOPS/W.
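
To put those budgets in efficiency terms (a simple unit conversion using only the figures above; an exaflop is 10^12 MFLOPS), dividing an exaflop by each power budget gives the efficiency a machine would need:

```python
# Efficiency required for a 1-exaflop machine under the power budgets above.
# Straight unit conversion: 1 exaflop = 10**18 FLOPS = 10**12 MFLOPS.

EXAFLOP_IN_MFLOPS = 1e12

for budget_mw in (20, 67, 100):
    watts = budget_mw * 1e6
    required_mflops_per_w = EXAFLOP_IN_MFLOPS / watts
    print(f"{budget_mw:>3} MW budget -> {required_mflops_per_w:>8,.0f} MFLOPS/W required")

# 20 MW  -> 50,000 MFLOPS/W
# 67 MW  -> ~14,925 MFLOPS/W
# 100 MW -> 10,000 MFLOPS/W
```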

The data Feng has put together shows that the most efficient computers are improving their power profiles a lot faster than the average supercomputer. While Green500 winner Beacon gets roughly seven times the MFLOPS/W of the 2007 winner, the median efficiency of the full Green500 list hasn't come close to following that curve – it has increased less than 3X since 2007. In fact, the median efficiency of the 2012 list was only about half that of the single machine that topped the Green500 in 2007.

[Figure: green boxes represent the middle two quartiles of the Green500; the black line is the median; dots are outliers.]

“The energy efficiency of the top-end supercomputers is increasing exponentially,” noted Feng. But at the median, he added, “energy efficiency has not measurably increased, or has increased very, very slowly – a linear increase.”

Overall, therefore, each generation of computers continues to consume more power. “The [top computers] are getting more energy efficient because their performance is improving faster than their power consumption is increasing,” said Feng. That keeps the exaflop-in-100-MW goal elusively in the distance.

Given all this data, how much power will be required when the first exaflop computer is turned on, which should happen sometime around 2018?

Feng has been analyzing the efficiency trends of the Green500 and TOP500 winners to get an idea of how close each year's technology comes to the desired energy efficiency. For each winner, he extrapolated up to an exaflop to determine how much power an exaflop computer would have required (assuming one could have been built) at the efficiency of that machine.

The trendline has been pretty good. An exaflop machine with the efficiency of the November 2007 Green500 winner would have required nearly 3,000 megawatts. The TOP500 winner that year extrapolated to nearly 5,000 MW. In November 2012, both the Green500 and TOP500 winners extrapolate to roughly 400 to 500 MW of power.
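
The extrapolation itself is the same conversion run in reverse: divide an exaflop by a machine's measured MFLOPS/W to get the power it would need at that efficiency. A minimal sketch using the efficiencies quoted above:

```python
# Naive exaflop extrapolation: power = (1 exaflop) / (measured efficiency).
# Efficiencies are the figures quoted in the article; the scaling ignores
# any overheads that would grow with the size of the machine.

EXAFLOP_IN_MFLOPS = 1e12

def exaflop_power_mw(mflops_per_w):
    """Megawatts needed to reach an exaflop at the given efficiency."""
    return EXAFLOP_IN_MFLOPS / mflops_per_w / 1e6

for name, eff in [("Green500 #1, Nov 2007", 357.0),
                  ("Beacon (Green500 #1, Nov 2012)", 2499.0),
                  ("Titan (TOP500 #1, Nov 2012)", 2143.0)]:
    print(f"{name}: ~{exaflop_power_mw(eff):,.0f} MW")

# -> roughly 2,800 MW, 400 MW and 470 MW respectively
```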

But the trendline has been too inconsistent to follow into the future. Take the data up to November 2011, for example. “If we had trended that, we would have been generating power by 2018!” he quipped. But since then, the curves have flattened out, seeming to reach an asymptote at about 450 MW, casting serious doubt on whether it will ever be possible to get the power down to even 100 MW. It will require some new breakthroughs to once again move the trendline down.

“The question is, what are the innovations going to be, both on the hardware and software side, to finish closing that gap?” he asked.

It's a difficult question to answer. Some look to more efficient microprocessors coupled with graphics processors or co-processors. But the data there is also ambiguous.

Feng looked at the energy efficiency differences between heterogeneous computers (those that use multiple types of processors) and homogeneous computers (those that stick to just one type of processor, typically a CPU). The heterogeneous computers have consistently won the efficiency race. They were also doing better on performance until 2010, when GPUs became popular. At that point, the performance of heterogeneous systems suddenly dropped below that of homogeneous systems.

Does that mean GPUs are less efficient than their supporters make them out to be? Not at all, says Feng. “The (GPU) device on its own is quite efficient,” he noted. They're simply not yet well integrated with the CPU. “One of the main issues is that you have to move data back and forth” between the two processors, and that causes the drop in efficiency. But on many of these systems about 90% of the FLOPS come from the GPU.
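
One way to see why the data movement matters is a toy performance model: if every byte an accelerator processes has to cross the CPU-GPU link, and transfers are not overlapped with compute, the link time simply adds to the compute time. The sketch below is illustrative only; none of its numbers come from Feng's data or from any particular system.

```python
# Toy model of how host<->device data movement erodes an accelerator's
# effective throughput. All inputs below are illustrative assumptions.

def effective_gflops(gpu_gflops, flops_per_byte, link_gb_per_s):
    """Sustained rate when every byte processed must cross the CPU<->GPU link
    and transfers are not overlapped with compute."""
    compute_time = 1.0 / gpu_gflops                        # seconds per GFLOP of work
    bytes_per_gflop = 1e9 / flops_per_byte                 # data moved per GFLOP
    transfer_time = bytes_per_gflop / (link_gb_per_s * 1e9)
    return 1.0 / (compute_time + transfer_time)

# Hypothetical accelerator: 1,000 GFLOPS peak, an 8 GB/s host link, and a
# workload that does 4 floating-point operations per byte moved.
print(f"~{effective_gflops(1000, flops_per_byte=4, link_gb_per_s=8):.0f} GFLOPS effective")
# -> roughly 31 GFLOPS: the link, not the GPU, sets the pace.
```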

Perhaps better coordination between the CPU and the GPU will give the industry the performance boost it needs. Or maybe the use of lower-power microprocessors, such as ARM, will do the trick. 

But for now, we'll just have to wait and see what brilliant ideas the best computer scientists and engineers can come up with in the next several years.

[This story has been edited to correct an error: Wu Feng and Kirk Cameron are at Virginia Tech, not University of Tennessee.]
