
High Performance Computing for Manufacturing – Why Is It Not Used Everywhere? 

"Manufacturing is a key part of our economy." This is more than a cliché. It is the basis of wealth creation and the source of the goods that we sell nationally and internationally. A healthy manufacturing economy creates jobs and keeps our balance of trade with the rest of the world in check.

"Manufacturing is a key part of our economy." This is more than a cliché. It is the basis of wealth creation and the source of the goods that we sell nationally and internationally. A healthy manufacturing economy creates jobs and keeps our balance of trade with the rest of the world in check. This is especially timely with our economic situation and the increased national emphasis on manufacturing.

Reflecting the visibility of manufacturing in our national economic health, the Obama administration has called for an Advanced Manufacturing Initiative that puts an emphasis on the technology of modern manufacturing. This announcement stems from recommendations in Ensuring American Leadership in Advanced Manufacturing (EALAM) by the President's Council of Advisors on Science and Technology (PCAST).

EALAM states:

"powerful computational tools and resources for modeling and simulation could allow many U.S. manufacturing firms to improve their processes, design, and fabrication"

and points out that the America COMPETES Act, passed by Congress in 2010 and signed by President Obama in January 2011, directs the Department of Commerce to

"study barriers to use of high-end computing simulation and modeling by small- and medium-sized U.S. manufacturers, including access to facilities and resources, availability of software and technologies, and access to expertise, and tools to manage costs."

Has America lost jobs to low-wage countries? This may have been true in previous decades, but most manufacturing jobs are now high tech all over the world. Companies in every country must adopt the most efficient design and production processes. Digital modeling and simulation ("high-end computing simulation and modeling" in the lingo of the PCAST report) is one of these technologies.

Large companies and laboratories pioneered modeling and simulation and have used them for years. One may even argue that practical computational fluid dynamics (CFD) and computational structural analysis emanated from collaborations between government laboratories and the leading aerospace and automotive industries in the last century, so CFD and computational structures have been a part of practical product design for decades.

However, these proven techniques have not penetrated small and medium businesses (SMBs) to the same extent, not even many of the companies in the supply chains of the large firms. Certainly the science and the engineering have been known for many years. The return on investment for large companies is well established, and they made the leap long ago. Why is it that modeling and simulation are not the ubiquitous daily tools of all manufacturers?

Numerous reports addressing these questions have been issued. Reveal, written by IDA and issued by the Council on Competitiveness in 2008, analyzed the reasons for this seeming failure. Has anything changed? If so, what should we do about it?

Reveal was based upon a survey of 77 companies and found that most (97 percent) were thoroughly versed in "virtual prototyping or large-scale data modeling" on desktop computers, and about half were limited by the computing capability of their desktop systems, forcing them to "scale down their advanced problems to fit their desktop computers." A bit more than half of the companies were open to using high performance computing under the right circumstances. The National Association of Manufacturers, in The Facts About Modern Manufacturing, estimates that there are almost 300,000 manufacturing companies in the US, with the large majority of jobs represented by companies with fewer than 500 employees. If the Reveal findings can be extrapolated to this large market, they represent a substantial opportunity for improvement.

Reveal finds that three systematic barriers are "stalling HPC adoption": lack of application software, lack of sufficient talent, and cost constraints.

As I would expect, Reveal found that "lack of application software" was considerably more important than access to free HPC resources. Lack of software "strategic fit" implies that many of the technical questions to be answered are simply not addressed closely enough by current independent software vendor (ISV) products. Although many excellent ISV software packages exist, the far more specialized needs of the 300,000 businesses mentioned above are not generally met, and small businesses do not have the internal resources to adapt what is available to their specific problems.

As an example, Reveal cites "an automaker that wants to forge ahead of its competitors by designing vehicles with quieter, more comfortable passenger cabins would not be helped by a crash testing model or application. Only a noise, vibration and harshness (NVH) application would be a 'strategic fit' for this objective." Although I am sure that many of the current ISV offerings could be adapted to fit many of these special circumstances, small businesses do not have the resources to make this investment on faith. And unfortunately, the return on investment (ROI) studies that might persuade them to invest first depend upon the existence of the application to evaluate. A Catch-22? Or a chicken-and-egg problem?

Reveal also cites the desire of users to "exploit the problem-solving power of contemporary HPC servers with hundreds, thousands, or (soon) tens of thousands of processors" but laments that "many of the most used applications scale to only a few processors in practice." They acknowledge that "The ISVs are not at fault here. The business model for HPC-specific application software has all but evaporated in the past decade."

These barriers are closely related. Companies presented with a believable ROI are likely to create the demand for ISVs to produce scalable, strategically fitted software. What has changed since 2008? IMHO, not much. ISVs are smart and they respond to customer demand. The techniques for scaling applications to modest parallelism are well known. Many of the well-known ISV packages now scale reasonably well to hundreds and even thousands of processors; maybe not to the hundreds of thousands or millions of processors spoken of in the exascale computing community, but certainly far beyond the immediate needs of most manufacturing simulation. The barriers to application scalability at this relatively modest scale are not technical.
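A rough way to see why this modest scale suffices is Amdahl's law. The sketch below is illustrative only; the 1 percent serial fraction is an assumed figure chosen for the example, not a measurement of any ISV code.

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's law: speedup on N processors given a serial fraction s."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Assumed 1% serial fraction -- illustrative only, not a measured value.
for n in (8, 64, 512, 4096):
    print(f"{n:5d} cores -> {amdahl_speedup(0.01, n):5.1f}x speedup")
# Output: ~7.5x, ~39.3x, ~83.8x, ~97.6x. With even 1% serial work the
# ceiling is 100x, and 512 cores already capture most of it.
```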

One barrier that Reveal cites is the cost of software licenses. Many current licenses are based upon processor or core count, and the cost of a scaled-up application run becomes prohibitive even when the physical parallel cores are available. ISVs are business savvy; this license model reflects current demand and business realities, but it is subject to change as the market changes. Companies are exploring other models that reflect changes since 2008, such as multicore processors, which now place a modest parallel computer inside the standard workstations that manufacturers customarily buy. But significant changes in license models will await changes in customer demand. Maybe there is a chicken-and-egg problem here too?
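A toy cost model makes the licensing point concrete. Every fee below is invented purely for illustration and reflects no real ISV's pricing; the point is only the shape of the two curves.

```python
ANNUAL_FEE_PER_CORE = 1_000.0   # assumed per-core annual license fee
USAGE_RATE = 0.50               # assumed fee per core-hour
TOTAL_CORE_HOURS = 1_000.0      # the same workload, however wide it runs

for cores in (8, 64, 512):
    per_core_license = ANNUAL_FEE_PER_CORE * cores
    usage_based = USAGE_RATE * TOTAL_CORE_HOURS
    print(f"{cores:4d} cores: per-core license ${per_core_license:>9,.0f}, "
          f"usage-based ${usage_based:,.0f}")
# Per-core licensing: $8,000 / $64,000 / $512,000 as the run widens.
# Usage-based billing stays at $500 because the core-hours don't change.
```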

Hardware cost is no longer the barrier that it once was. The latest high-end desktop is as powerful as the fastest supercomputer of fifteen years ago. A fully loaded Mac Pro will run the Linpack benchmark (your ticket to a place on the TOP500, the list of the fastest computers in the world) at about 100 gigaflops, a value that would have put you at or near the top of the TOP500 list in the mid-1990s. You can have this machine for less than $10,000 today. A department-level server with the power of the #1 supercomputer of the year 2000 can be bought for less than $100,000. Clearly the capital cost is no longer a barrier. And since 2008 there is yet another option for prospective users for whom even a small capital purchase is a hindrance: cloud providers allow rental by the hour, making access a matter of a laptop and a credit card.
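A back-of-the-envelope check shows where a figure like 100 gigaflops comes from. The machine parameters below are assumptions for a high-end workstation of the time, not the specifications of any particular Mac Pro.

```python
cores = 12            # assumed: dual six-core workstation CPUs
clock_ghz = 3.0       # assumed clock frequency
flops_per_cycle = 4   # assumed double-precision operations per cycle

peak_gflops = cores * clock_ghz * flops_per_cycle
linpack_fraction = 0.7   # assumed fraction of peak that Linpack achieves

print(f"Theoretical peak:  {peak_gflops:.0f} GFLOPS")                     # 144
print(f"Estimated Linpack: {peak_gflops * linpack_fraction:.0f} GFLOPS")  # ~101
```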

Reveal also cites "lack of sufficient talent." But this is also a cost barrier. Scientists and engineers with the broad palette of skills needed to understand the technical business, master the applications and the science they represent as well as apply them on parallel computers are scarce and expensive. Only large companies and laboratories have been able to assemble and maintain this wide range of expertise to have it available on-call. Universities and colleges are not producing a sufficient stream of these "renaissance scientists." There has simply not been enough demand to cause them to create these curricula, but is this what we really want them to do? Or is it even reasonable to expect? What is the prospect for creating hundreds of thousands of skilled renaissance scientists to service the thousands of manufacturers? Or is something else needed?

Existing renaissance scientists are either the expert users themselves or the expert assistance to users. They are the oracles in the current "high touch" user model, which requires individual mastery of all the technologies. This model has developed organically over many years in large laboratories and companies and has been quite successful for them. But I would argue that this model is expensive and is not scalable to the needs of 300,000 manufacturers. Some of the progressive large computing laboratories have instituted affiliates programs, placing their renaissance scientists at the disposal of manufacturing companies and actively assisting them in introducing HPC into their workflows. But their ability to reach more than a handful of companies is limited, I would argue, because the user model is not scalable. Something else is needed.

It's easy to point out problems, but much harder to find solutions. As Reveal and its predecessors detail, easy solutions to HPC in manufacturing have been elusive for many years. But we now know a couple more things that can be addressed in our next attack on the problem. We know that demand must come from the SMBs themselves, but they are not convinced of the real return on investment. Even after a convincing ROI argument, demand will be paced by competitive pressure and requirements imposed by customers. Most companies will not introduce HPC unless it becomes clear that they will "get the jump" on the competition or are forced into it by customers. The development of an ROI analysis depends upon the availability of software closely matching their real business needs. And the introduction of HPC into a small company's workflow must be demonstrably very low risk. The ROI must be convincing and there must be a clear, proven recipe for the workflow.
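To make that concrete, here is a minimal payback-period sketch of the kind of ROI analysis described above. Every figure is a hypothetical placeholder; a real study would substitute the shop's own numbers.

```python
upfront_cost = 150_000.0    # assumed: hardware, software, and training
annual_cost = 40_000.0      # assumed: licenses, support, staff time
annual_savings = 120_000.0  # assumed: fewer physical prototypes, faster design

net_annual_benefit = annual_savings - annual_cost
payback_years = upfront_cost / net_annual_benefit
print(f"Payback period: {payback_years:.1f} years")   # ~1.9 years
# A payback under two years is an easy sell; at five or more, the
# "get the jump on the competition" argument has to carry the decision.
```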

We also know that the user model of current HPC is too "high touch," requiring too much specialized knowledge in too many technology areas to be either scalable or affordable for many companies. The available hardware, software, and applications, and the expertise to apply them, must be encapsulated in a more usable way.

The Alliance for High Performance Digital Manufacturing (AHPDM) is a group of companies, laboratories and universities formed to attack these problems. AHPDM supports the creation of collaborative projects that will produce ROI analyses, user models and representative workflows by concentrating on the immediate problems of selected SMBs. In 2008 Reveal suggested public-private partnerships as a mechanism to realize these collaborations. Reveal's authors may have been more prescient than they realized, unwittingly anticipating the decline in the public funding that has supported many of the laboratory efforts to address manufacturing HPC problems. Perhaps these partnerships are the means by which we will make progress in the near future. AHPDM is starting to take a good run at this through collaborations. I invite you to watch us and to consider bringing your good ideas and work to the Alliance.

About the Author

Bill Feiereisen is a Senior Scientist and Corporate Strategist in High Performance Computing at Intel Corporation. He also holds a research appointment in computer science at the University of New Mexico. Bill works in all aspects of HPC, from applications to hardware, but specializes in the computational sciences.

He has broad experience in the community, having served as the Director of High Performance Computing for Lockheed Martin Corporation, the Chief Technologist and Division Director of the Computer and Computational Sciences Division at Los Alamos and before that as the director of the NASA Advanced Supercomputing Facility at Ames Research Center.

 
