Adaptive Computing Launches Big Workflow
Adaptive Computing today launches Big Workflow, an industry term coined by Adaptive Computing to describe a solution that accelerates insights by more efficiently processing intense simulations and big data analysis.
Adaptive Computing’s Moab HPC Suite and Moab Cloud Suite are an integral part of the Big Workflow solution, which unifies all data center resources, optimizes the analysis process and guarantees services, shortening the time to discovery. Adaptive Computing’s Big Workflow solution derives its name from its ability to solve big data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments and locations.
While current solutions solve big data challenges with just cloud or just HPC, Adaptive utilizes all available resources – including bare metal and virtual machines, technical computing environments (e.g., HPC, Hadoop), cloud (public, private and hybrid) and even agnostic platforms that span multiple environments, such as OpenStack – as a single ecosystem that adapts as workloads demand.
Traditional IT operates in a steady state, with maximum uptime and continuous equilibrium. Big data disrupts this balance, creating a logjam on the path to discovery. Big Workflow optimizes the analysis process into an organized workflow that greatly increases throughput and productivity while reducing cost, complexity and errors. Even under big data workloads, the data center can still guarantee services that meet SLAs, maximize uptime and prove that services were delivered and resources were allocated fairly.
“The explosion of big data, coupled with the collisions of HPC and cloud, is driving the evolution of big data analytics,” said Rob Clyde, CEO of Adaptive Computing. “A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage. We are confident that Big Workflow will enable enterprises across all industries to leverage big data that inspires game-changing, data-driven decisions.”
DigitalGlobe, a leading global provider of high-resolution Earth imagery solutions, uses Moab to dynamically allocate resources, maximize data throughput and monitor system efficiency to analyze its archived Earth imagery, which contains more than 4.5 billion square kilometers of global coverage. Each year, DigitalGlobe adds two petabytes of raw imagery to its archives and turns that into eight petabytes of new product.
With Moab at the core of its data center, DigitalGlobe can operate at global scale on the timelines its customers need by breaking down silos of isolated resources and increasing its maximum workflow capacity, helping decision makers better understand the planet in order to save lives, resources and time.
Adaptive Computing also launched Moab 7.5, which adds new features that make the software more robust in tackling big data challenges and that enhance Big Workflow by unifying data center resources, optimizing the data analysis process and guaranteeing services. The new cloud and HPC features include:
- Role-based Access Control – Portal-based multi-tenancy for secure resource sharing among different users, departments and customers.
- Cray Integration – Moab now integrates with Cray ALPS BASIL 1.3.
- Hardened Power Management – Advanced power management policies for true resource power-down. In addition, power scripts now comply with IPMI (Intelligent Platform Management Interface) green computing standards.
- Message Bus Communication – Increased job-scheduling speed by delegating communication to the message bus, which allows Moab to stay focused on scheduling versus communication.
- Moab Accounting Manager (MAM) Enhancements – Several new enhancements, including non-blocking accounting calls, High Availability connection, synchronization between Moab and MAM accounts and users, discrete allocations, simplified charge rate specification and additional tracking metrics.
- Moab Viewpoint Upgrade – In addition to advanced dashboard notifications and gadgets, Moab Viewpoint now reveals lifecycle states to quickly diagnose the status of a job.
- Custom Reporting Expansion Capabilities – Expanded reporting API to include accounting data for generating accounting reports.
- Service Phase Transition – Reduced diagnosis time for error transparency throughout the service life cycle.
- Standardized Logging – Moab logs are now Splunk-ready across all components of Moab and its web services.
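The message-bus item above describes a common pattern: the scheduler enqueues notifications onto a bus and a separate worker handles delivery, so the scheduling loop never blocks on communication. A minimal sketch of that pattern, using Python's standard `queue` module as a stand-in for the bus (all names here are illustrative assumptions, not Moab's actual API):

```python
import queue
import threading

bus = queue.Queue()  # stands in for the message bus
sent = []            # records what the communication worker delivered

def communication_worker():
    # Drains the bus and performs the (slow) communication work,
    # e.g. notifying a portal or an accounting service.
    while True:
        msg = bus.get()
        if msg is None:  # shutdown sentinel
            break
        sent.append(msg)

def schedule_jobs(jobs):
    # The scheduler only enqueues notifications; it never waits on I/O,
    # so it can stay focused on scheduling decisions.
    for job in jobs:
        bus.put(f"scheduled {job}")

worker = threading.Thread(target=communication_worker)
worker.start()
schedule_jobs(["job-1", "job-2", "job-3"])
bus.put(None)   # tell the worker to finish
worker.join()
print(sent)     # ['scheduled job-1', 'scheduled job-2', 'scheduled job-3']
```

The design choice is the decoupling itself: because the queue absorbs bursts of messages, slow consumers delay delivery but never stall the producer.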
According to a hands-on survey of more than 400 data center managers, administrators and users that Adaptive Computing recently conducted at the Supercomputing, HP Discover and Gartner Data Center conferences, 91 percent believe some combination of big data, HPC and cloud is needed for a better big data solution. This finding underscores the intensifying collision between big data, HPC and cloud and is supported by the International Data Corporation (IDC) Worldwide Study of HPC End-User Sites.
According to the HPC sites included in IDC's 2013 study, 67 percent perform big data analysis on their HPC systems, devoting an average of 30 percent of their available computing cycles to big data analysis work. In addition, the proportion of sites using cloud computing to address parts of their HPC workloads rose from 13.8 percent in 2011 to 23.5 percent in 2013, with public and private cloud use about equally represented.
“Our 2013 study revealed that a surprising two thirds of HPC sites are now performing big data analysis as part of their HPC workloads, as well as an uptick in combined uses of cloud computing and supercomputing,” said Chirag Dekate, Ph.D., research manager, High-Performance Systems at IDC. “As there is no shortage of big data to analyze and no sign of it slowing down, combined uses of cloud and HPC will occur with greater frequency, creating market opportunities for solutions such as Adaptive’s Big Workflow.”