
Back to the Future – The Mainframe in a Connected World 

Since the mid-1990s, industry watchers have predicted a slow, gradual death for the mainframe, as x86-based server farms and the cloud gained a stronger foothold in the high-performance computing (HPC) market. This hasn’t happened – primarily because the mainframe remains the most scalable, reliable and secure computing platform on the planet. In fact, we view the mainframe as the original HPC platform, filling an irreplaceable role as the mission-critical heartbeat of many HPC environments.

The IBM System/360, launched in April 1964, could perform up to 229,000 calculations per second. Mainframes had already proven their mettle by then: an earlier IBM mainframe, the 7090, ran the equations that ensured John Glenn got into orbit in 1962, returned to Earth and descended into a predetermined landing zone. Fast-forward to today, and the most recent incarnation, the IBM z14, is the most powerful mainframe ever built. According to IBM, the z14 can process 12 billion encrypted transactions per day – a five-fold increase over its predecessor, the z13, launched just two years earlier. The mainframe may be old, but it is in no way past its prime.

The z14 can run the world's largest MongoDB instance with 2.5 times faster performance than comparable x86-based platforms, and can support 2,000,000 Docker containers and 1,000 concurrent NoSQL databases. Experts also view the z14 as a key driver for blockchain, which underpins a new generation of transactional applications poised to revolutionize the way industries securely conduct business.

The mainframe enables all of this while also proving more cost-effective than alternative architectures like commodity x86-based servers. Why, then, do some mainframe user organizations running HPC environments consider moving off? While the mainframe’s strength is unparalleled, these platforms have historically existed in silos, isolated from other systems and requiring specialized expertise. This makes it difficult to integrate the mainframe with the broader pool of hardware and software solutions that fuel enterprise performance, productivity, flexibility and competitive edge.

Data analytics is one example. For years, organizations have copied business-critical data from their mainframe transactional systems to other platforms to perform sophisticated analytics. This process was inefficient and risky, and it added data latency that inhibited the real-time power of analytics. Running data analytics directly at the point of transaction processing – the mainframe – makes much more sense, but the challenge has been connecting transactional analytics on the mainframe to other analytics applications and tools. For developers creating these types of cross-platform applications, a lack of mainframe familiarity creates obstacles when building end-to-end analytic workflows that support competitive edge (for example, a retailer using real-time analytics to deliver targeted upsell and cross-sell offers at the customer’s point of purchase).
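
To make that retail example concrete, here is a minimal Python sketch of a point-of-purchase analytics call. It assumes the mainframe-side scoring logic has been exposed as a REST endpoint (for instance, through an API layer such as z/OS Connect); the URL, payload fields and response shape are illustrative assumptions, not any particular product’s API, and the sketch requires the requests library.

    import requests

    def recommend_offer(transaction: dict) -> str | None:
        """Score an in-flight transaction; return an upsell offer, if any."""
        try:
            resp = requests.post(
                "https://zos-connect.example.com/analytics/score",  # hypothetical endpoint
                json=transaction,
                timeout=0.2,  # keep the checkout path fast; skip the offer on timeout
            )
            resp.raise_for_status()
            return resp.json().get("offer")  # assumed response field
        except requests.RequestException:
            return None  # never block the sale on the analytics call

    offer = recommend_offer({"customerId": "C1001", "basketTotal": 74.50})
    if offer:
        print(f"Suggested upsell: {offer}")

The design point is that the analytics call sits beside the transaction rather than behind a batch copy, so a slow or failed call degrades gracefully to “no offer” instead of delaying the purchase.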

The huge surge in mobile transaction volumes is another example. The mainframe is the ideal platform to handle this surge, and banks are innovating relentlessly, rolling out new, advanced mobile apps that deliver a whole new level of convenience. Consider London-based NatWest, which recently introduced a mobile app allowing customers to make cardless withdrawals at ATMs. This type of innovation is at the heart of digital transformation and helps banks differentiate. These applications typically span multiple systems before connecting to a mainframe for transaction completion. A lack of mainframe familiarity will slow time-to-development, putting an unnecessary drag on innovation efforts.
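
For the middle tier of such an app, message queuing is one common bridge to the mainframe transaction that actually completes the work. The sketch below, in Python with the pymqi client for IBM MQ, queues a cardless-withdrawal request for a mainframe transaction to consume; the queue manager, channel and queue names are all hypothetical.

    import json
    import pymqi

    QMGR = "QM1"                           # hypothetical queue manager
    CHANNEL = "APP.SVRCONN"                # hypothetical client channel
    CONN_INFO = "mainframe.example.com(1414)"
    REQUEST_Q = "BANK.WITHDRAWAL.REQUEST"  # hypothetical request queue

    def submit_cardless_withdrawal(customer_id: str, amount: float, atm_id: str) -> None:
        """Queue a withdrawal request for the mainframe transaction to process."""
        request = {"customerId": customer_id, "amount": amount, "atmId": atm_id}
        qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
        try:
            queue = pymqi.Queue(qmgr, REQUEST_Q)
            queue.put(json.dumps(request).encode("utf-8"))
            queue.close()
        finally:
            qmgr.disconnect()

    submit_cardless_withdrawal("C1001", 50.0, "ATM-0042")

A developer who has never touched a green screen can write this; the friction described above shows up in knowing which queue feeds which transaction – exactly the kind of tribal knowledge a modernized, better-connected environment removes.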

In our view, the solution for this lack of connectivity isn’t to replace the mainframe. History has shown that doing so results in less performant systems, increased risk and wasted time and money. One need only look at the failed migration efforts of some state agencies to get a sense of how costly the exercise can be. The mainframe, however, does need to be modernized for a connected world. This means updating the mainframe development environment and making it compatible with modern software development best practices like DevOps. When this happens, the mainframe is taken out of its silo and becomes just “another platform” on which developers work and innovate.
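
As one small illustration of what “another platform” can mean in practice, a continuous-integration step can drive a mainframe build the same way it drives any other. The Python sketch below shells out to the open-source Zowe CLI (assumed to be installed and configured with a z/OSMF profile) to submit a build job and fail the pipeline if the job does not end cleanly; the dataset and member names are hypothetical, and the response fields reflect our reading of the CLI’s JSON output.

    import json
    import subprocess
    import sys

    def submit_build_job(jcl_dataset: str) -> None:
        """Submit JCL on z/OS via Zowe CLI and check the job's return code."""
        result = subprocess.run(
            ["zowe", "zos-jobs", "submit", "data-set", jcl_dataset,
             "--wait-for-output", "--rfj"],  # --rfj requests JSON output
            capture_output=True, text=True, check=True,  # raises if the CLI itself fails
        )
        job = json.loads(result.stdout)["data"]
        if job.get("retcode") != "CC 0000":
            sys.exit(f"Mainframe build failed: {job.get('retcode')}")

    submit_build_job("DEV.BUILD.JCL(COMPILE)")

Once a job submission is just another shell command returning JSON, it slots into Jenkins, GitLab or any other pipeline exactly like a distributed build step.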

Mainframe users also need to abandon their “all or nothing” perspective when it comes to keeping the mainframe or moving to the cloud. The mainframe and the cloud can actually be used in unison, through a complementary approach leveraging the unique strengths of each platform. For example, certain applications and components can be more efficiently sourced from the cloud. When necessary, they can be connected to mainframe data and apps through APIs. This can be done while keeping core work (mission-critical transaction processing) on-premises, thus ensuring a higher level of reliability, performance and security.

In this way, workloads are intelligently paired with the optimal platform, and organizations can reap the strategic benefits of both. The mainframe thus plays a critical role in the API economy, powering workflows that require integration of transactional data. Another way to marry the mainframe and the cloud is through new cloud-based solution delivery models, which enable large development teams to access a modernized mainframe development environment and its tools much more quickly.
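
Here is a minimal sketch of that complementary pattern: a cloud-hosted service answers the customer-facing request from its own data, but pulls the system-of-record balance from the mainframe through an API before responding. The host, path and field names are illustrative assumptions; the sketch again uses Python’s requests library.

    import requests

    MAINFRAME_API = "https://zos-connect.example.com/accounts"  # hypothetical API host

    def account_summary(account_id: str) -> dict:
        """Blend cloud-side profile data with the mainframe's authoritative balance."""
        profile = {"accountId": account_id, "tier": "gold"}  # stand-in for cloud-side data
        resp = requests.get(f"{MAINFRAME_API}/{account_id}/balance", timeout=2)
        resp.raise_for_status()
        profile["balance"] = resp.json()["balance"]  # system of record stays on-premises
        return profile

    print(account_summary("A-99812"))

The division of labor is the point: the cloud service scales out cheaply for read-heavy, customer-facing work, while the balance itself never leaves the mission-critical transaction system.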

In summary, the recent resurgence of interest in the mainframe for HPC environments is warranted, and the competitive edge these systems can provide is very real. But a focus on achieving better connectedness for the mainframe within these environments is critical in order to future-enable these tried-and-true systems.

Spencer Hallman is a product developer at Compuware.
