
Op-Ed: Intel Confronts Its IBM Moment 


On July 28, 1993, IBM announced a staggering quarterly loss of over $8 billion, the second largest in corporate history after GM’s $21 billion the previous year. Louis V. Gerstner Jr., the company’s newly installed chairman, wanted to end the “Chinese water torture” of payroll cuts year after year and “get this process behind us.” The announcement included another round of employment reductions, this time to the tune of 35,000 jobs. By the end of 1994 the company had shrunk its payroll from 405,000 workers to about 225,000. Competitors and pundits gleefully predicted IBM’s demise by the turn of the century.

But a slimmed-down IBM did not give up the ghost. In fact, after that watershed event Gerstner proved he had the Midas touch; not only had he staved off impending doom by downsizing, but he had also managed to find a golden goose: Services. Gerstner pushed IBM into providing integrated, end-to-end, client-specific solutions encompassing hardware, software, architecture and applications. The results were spectacular: in less than 10 years the Global Services division went from practically nothing to a $30 billion business in 2001. Soon after, IBM bolstered the Services division with the acquisition of PwC Consulting, the consulting arm of PricewaterhouseCoopers.

But, eventually, this, too, hurt the company. Services have no protective “moat” of intellectual property or manufacturing expertise. As services became commoditized, IBM could compete only by offshoring the work, and quality suffered. Services became IBM’s largest business segment, but also its least profitable.

IBM would never return to its past glories. Over the past ten years the company’s revenue has fallen from more than $100 billion to under $80 billion in 2019, and analysts expect it to continue dropping for the foreseeable future.

This colossus of American industry, once one of the most valuable companies in the world, is now worth far less than its current competitors (and sometime collaborators) in the business-to-business technology world: Microsoft, Alphabet, Cisco, Oracle and Salesforce.com. Even Intel, in which IBM once owned a 20% stake – an investment made in 1982 to prop up a vital chip supplier ravaged by Japanese competition – now boasts a market capitalization twice as large.

This, too, could be Intel’s fate.

On July 23, Intel announced quarterly results that beat expectations on both the top and the bottom line. In fact, it set a record for second quarter revenue. But the stock price plunged by over 16 percent by the end of the day because, buried within the report, was this comment: “The company's 7nm-based CPU product timing is shifting approximately six months relative to prior expectations. The primary driver is the yield of Intel's 7nm process, which based on recent data, is now trending approximately twelve months behind the company's internal target.”

The delay in implementing the 7nm generation of technology is disturbingly familiar; Intel had also delayed the previous technology generation – 10nm – because of unspecified problems with device yield. Intel’s 10nm technology was supposed to enter mass production by the end of 2015, but so far it has shipped only in low volumes. High-volume CPUs based on 10nm aren’t expected until next year.

Adding insult to injury was Apple’s decision to replace Intel x86 processors with its custom CPUs in the flagship Mac line of computers. Ostensibly, the switch was driven by Apple’s need for higher performance at lower power. But engineering dissatisfaction with Intel’s products also played a part.

The DNA of Intel’s business is the so-called x86 computer architecture, which traces its lineage to the 8086 CPU that Intel introduced in the late seventies. By most accounts the architecture was “kludgy” compared to technically superior, contemporaneous designs from Motorola and Zilog. But design can only do so much in the semiconductor business. Intel’s manufacturing muscle (coupled with some strong-arm tactics in the marketplace) kept the other CPU manufacturers at bay. For the next several decades, Intel, guided by its north star – founder Gordon Moore’s eponymous law, which predicts the doubling of device density on a chip roughly every two years – squeezed out ever-increasing speed, lowered energy consumption and slashed the cost per logic gate of its chips at a breathtaking pace. The manufacturing strategy was termed “tick-tock”: the “tick” was a die shrink enabled by a next-generation process technology, while the “tock” was a new microarchitecture built on that process. Thanks to its manufacturing prowess and clockwork-like execution, Intel became the largest and most valuable semiconductor company in the world.
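To put numbers on that cadence – a back-of-the-envelope formulation added here for illustration, since Moore stated his law in words rather than symbols – if device density doubles every period $T$, the transistor count $N$ achievable on a chip grows as

\[
N(t) = N_0 \cdot 2^{\,t/T},
\]

so with $T \approx 2$ years, a decade of scaling buys roughly $2^{5} = 32\times$ more devices per chip, along with the attendant gains in speed and cost per gate.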

Intel’s announcement last month – coupled with the delay in deploying the previous technology generation – could only mean one thing: Intel’s vaunted manufacturing engine had stalled. Worse, Taiwan Semiconductor Manufacturing Co. (TSMC), the giant chip foundry, had already implemented the 7nm “node” successfully and was manufacturing x86-compatible 7nm processors for Intel’s arch-rival in the CPU business: AMD. That meant AMD – which had made significant recent inroads into Intel’s market share with its newest processors, and whose share of the PC market stands at an all-time high of 20 percent – could have the field to itself until Intel recovers from its stumble. Essentially, with its new chips running on the most advanced processes, AMD had stolen a performance advantage in both the PC and the data center markets, which together represent over 80 percent of Intel’s business.

The IBM and Intel announcements, though decades apart, reveal similar trajectories.

IBM’s misfortunes stemmed from the fact that computing was moving beyond the mainframe, IBM’s bread-and-butter business. Digital Equipment Corporation (DEC) invented and dominated the minicomputer business, forcing IBM to play catch-up. By the late seventies, with the introduction of the high-end VAX, DEC was poised to strike at IBM’s mainframe heart. But it turned out that both IBM and DEC were on the losing side: the computer was getting even smaller.

IBM recognized this better than DEC, whose founder Ken Olsen famously commented that he saw no use for a computer in the home. In contrast, it could be argued that IBM was the true progenitor of the modern PC industry. IBM’s 1981 entry into the PC market was a genius move on several levels. Recognizing that the weight of its “big iron” culture would hobble any effort to radically downsize computers, IBM allowed its Entry Systems Division to flout tradition by cobbling its PC together from parts available on the open market. Notably, Intel supplied the processor and Microsoft the operating system. IBM even distanced the new division physically, locating it in Boca Raton, far from the swaddling influence of Poughkeepsie.

In some ways, the most far-reaching decision made by IBM’s PC division was to use an open architecture rather than one proprietary to IBM. The architecture the Boca Raton team created quickly became the industry standard, spawning thousands of third-party applications, a huge variety of add-in boards and, eventually, “IBM-compatible” machines from dozens of competing vendors. The downside of creating a market by stoking competition was that PC hardware – outside of the CPU – became a commodity. Over time Intel commandeered most of the value, and almost all of the profits, of the “box.” After enduring a $1 billion loss over the business’s final three and a half years, IBM sold it to the Chinese company Lenovo.

Intel’s problems, too, can be traced to a generational shift in computing: the platform was getting even smaller and more mobile. Enter the handhelds: the tablet and the smartphone.

In mobile computing Intel’s swagger proved of no avail. Since extending battery life mattered more in this market than increasing horsepower, low-power chips based on ARM cores and instruction sets dominated. A number of chip vendors – chief among them Qualcomm – integrated modems, graphics processors and other peripheral functions with ARM CPUs into SoCs (systems on a chip), which proved to be cost-effective, all-in-one packages for most smartphone makers. Soon, ARM CPUs were inside 95% of all smartphones worldwide, and Qualcomm was the market’s largest chipmaker. For a while, Qualcomm’s market capitalization even surpassed Intel’s.

Intel aimed its x86-based Atom processors at this market, but aside from Microsoft, its partner in the “Wintel” hegemony in PCs, the chips found little acceptance. Microsoft’s share of mobile computing still languishes in the single digits. In the end, Intel watched from the sidelines as the mobile computing market exploded.

Both IBM and Intel tripped over that immutable law of business: the leader in one generation of product or technology is seldom the leader in the next, a subject well explored in Clayton Christensen’s “The Innovator’s Dilemma.”

Monopolistic or near-monopolistic businesses (Intel in PC CPUs; IBM in mainframes) are high-volume, high-margin businesses and, as such, are basically unstable. In fact, so demanding are they of attention that they blind the organization to other opportunities and threats.

High-volume, high-margin businesses need to be protected on all fronts by all manner of means. But sometimes those means cross the line: Intel was fined by the European Commission, and settled a lawsuit brought by AMD, over illegal rebates and payments to customer OEMs designed to limit their purchases of AMD processors. Each cost Intel in excess of a billion dollars.

Meanwhile, IBM operated for nearly forty years under a consent decree that limited its moves to quash competition.

Both companies also used strong-arm tactics against customers, not just competitors. IBM virtually invented the practice of FUD (fear, uncertainty, doubt) to gain business advantage. “Nobody got fired for buying IBM” is a cautionary phrase, the implication being that somebody, somewhere, did get fired – for buying from IBM’s competition. IBM consistently bypassed – and thus alienated – the direct users of its products in order to appeal to their top management.

Intel, too, used a FUD-like approach as part of Operation Crush, a full-court press in support of its first 16-bit processor, the 8086, against what were generally considered superior competitive products. Intel mustered a full range of support products – and promises – and focused on a new sales target: the company CEO, not the engineer or the programmer. But in the long run, forcing engineers to design in products they consider less than ideal never works out well; it turns out that Apple’s decision to switch from Intel to its own ARM-based chips was ultimately driven by engineering frustration with Intel’s bug-ridden Skylake CPUs, which powered Macs released between 2015 and 2017.

What does the future hold for IBM and Intel? Will they overcome the Christensen curse and reinvent themselves to take on the next generation of technology, even at risk to their current businesses?

IBM’s new CEO, Arvind Krishna, has stated unequivocally that “Hybrid cloud and AI are two dominant forces driving change for our clients and must have the maniacal focus of the entire company.” The 2018 acquisition of Red Hat – driven by Krishna – is likely to play a key role in that effort. So, too, is the IBM Z series of mainframes. (Yes, IBM is repositioning its mainframes as key hardware infrastructure for the cloud.) But Krishna’s biggest challenge is likely to be cutting through IBM’s bloated bureaucracy before he can develop the maniacal focus he needs.

Intel recognizes that the PC market is in secular decline. The big markets for microprocessors are shaping up as data center, AI, 5G, driver assistance and autonomous driving, and edge computing. Intel sees its role at the intersection of all or most of these opportunities.

But what IBM’s quarter-century-long foray into the PC business proved is that a high-margin culture does not easily adapt to a high-volume, low-margin opportunity. That’s one of the reasons Intel did not succeed in mobile computing. Supplying microprocessors for the “edge” is likely to be the mother of all high-volume, low-margin opportunities. It’s not one in which Intel is likely to thrive.

In the automotive space, Intel’s acquisitions, especially Mobileye, give it a fighting chance. There is scope yet for Intel to develop an automotive presence that merits an “Intel Inside” logo on the car. A transformational acquisition of a Tier 1 supplier to automotive OEMs could be an accelerant and prove Intel’s seriousness.

IBM and Intel have a long, intertwined history in computing, beyond their customer and supplier roles in the PC business. One of Intel’s earliest products – and the first commercial DRAM – was the 1103, a 1-kbit chip based on the single-transistor cell invented by Robert Dennard, an IBM researcher (and later an IBM Fellow). Dennard also discovered that as transistors shrink, their power density remains constant, so that power use stays in proportion to area, a phenomenon known as “Dennard scaling.” This allowed CPU manufacturers to raise clock frequencies from one generation to the next without significantly increasing overall circuit power consumption. Until it broke down around 2006, when leakage currents came to dominate, Dennard scaling combined with Moore’s law to become an invaluable predictor of integrated circuit performance, not just for Intel but for the semiconductor industry at large.
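For readers who want the arithmetic behind that claim – a sketch in the standard textbook form of Dennard’s result, added here for illustration – shrink every linear dimension and the supply voltage by a factor $\kappa > 1$. Dynamic switching power per transistor goes as $P \approx C V^{2} f$; capacitance falls to $C/\kappa$, voltage to $V/\kappa$, and frequency rises to $\kappa f$, giving

\[
P' \propto \frac{C}{\kappa}\cdot\frac{V^{2}}{\kappa^{2}}\cdot \kappa f = \frac{P}{\kappa^{2}},
\qquad
A' = \frac{A}{\kappa^{2}}
\quad\Longrightarrow\quad
\frac{P'}{A'} = \frac{P}{A}.
\]

Power per transistor falls exactly as fast as its footprint, so power density stays constant even as the clock rises by a factor of $\kappa$ – the “free” frequency scaling that ended once supply voltages could no longer shrink.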

Could IBM and Intel work together again? Can they reprise their seminal partnership of the nascent PC business? As IBM’s Krishna has observed, there is significant opportunity – again -- at the high end of computing. To tackle the cloud and AI requires extreme performance from the underlying hardware. Extreme performance in microprocessors happens to be Intel’s forte.

Or will they settle into the technology firmament as “red giants,” still big and bright, but in the late stage of evolution, their inner fuel too depleted to sustain nuclear fusion?

--Girish Mhatre is the former editor and publisher of EE Times. His commentary was originally posted to LinkedIn.com.

