News & Insights for the AI Journey|Tuesday, September 17, 2019

Is Scale-Up Your Best Scale-Out Option? 


Looking at the industry today, you’d think scale-out architectures are the only way to deliver application performance and availability at low cost. Major technology companies such as Amazon, Google, and Facebook use scale-out, so it must be the right way for everyone to run enterprise applications, right?

Not necessarily. There are other models to consider. When you need the absolute highest performance from scale-out applications, such as HPC or complex modeling/simulation, you should consider an unconventional approach: hosting your scale-out application infrastructure in a single scale-up system. This isn’t an option for everyone, and it can be more expensive than scale-out, but if you need very high performance, scale-up deserves a look.

Bruce Jones

Let’s define terms: Scale-out is an architecture in which many small servers (usually two-socket systems) all run the same application independently. If more capacity is needed, more servers are spun up to run more instances. It can be challenging to make enterprise workloads parallel enough to run in this model, and very few enterprise applications are completely parallel. Most have a serial element: they need to communicate their results with other instances of the program or have those results rolled up into a different program.
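The cost of that serial element can be estimated with Amdahl’s law, which caps the speedup you get from adding servers. A minimal sketch (the 95% parallel fraction is a hypothetical figure for illustration, not from the article):

```python
def amdahl_speedup(parallel_fraction: float, n_servers: int) -> float:
    """Maximum speedup when only part of a workload can be spread
    across servers; the serial remainder runs at single-server speed."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_servers)

# Even a small serial component limits scale-out gains: a workload
# that is 95% parallel tops out below 20x, no matter how many
# servers are added.
print(amdahl_speedup(0.95, 100))    # ~16.8x on 100 servers
print(amdahl_speedup(0.95, 1000))   # ~20.0x on 1,000 servers
```

This is why “just add more servers” stops paying off once the serial hand-off, rather than the parallel work, dominates the runtime.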

While some scale-out applications, such as Hadoop MapReduce, include a set of integrated tools, many scale-out infrastructures also need separate load-balancing and job-scheduling software to make it all work efficiently. Oftentimes, additional tools are needed to keep each system in sync at the operating-system and application levels. Without this type of orchestration, scale-out architectures can be prone to unplanned problems.

Scale-out applications cover a wide range of uses, including web services such as Amazon, Facebook, and Google, along with other advanced-scale computing applications.

A scale-up architecture, by contrast, is one in which a single physical system runs many workloads simultaneously. These multiprocessor systems can get very large and can handle hundreds or even thousands of applications. Load balancing and job scheduling are handled by the operating system and/or built-in virtualization suite, which make these tasks significantly easier to manage.

Most applications hosted on scale-up systems are large databases or applications that can’t be easily parallelized, such as apps that have a significant serial component to them. Many analytics/simulation/modeling applications aren’t “embarrassingly parallel” and need to communicate with other instances to hand off data, results, etc.; it is on these chores where the scale-up architecture will shine.

The most compelling factor of the scale-up model is performance. As we mentioned above, most enterprise scale-out applications are not completely parallel, meaning that the instances must intercommunicate with each other or with other programs in order to function correctly. With a scale-out architecture, these communications take place at Ethernet speeds – currently limited to around 12.5 GB per second (the raw data rate of 100 Gigabit Ethernet). In a scale-up system, on the other hand, this communication takes place at system-bus speeds – typically 25 GB/s or more for internal system communication.
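Those bandwidth figures translate directly into transfer time. A back-of-envelope comparison using the article’s 12.5 GB/s and 25 GB/s numbers, with a hypothetical 1 MB result hand-off between instances:

```python
def transfer_time_us(size_bytes: int, bandwidth_gb_per_s: float) -> float:
    """Time to push a message at a given bandwidth, in microseconds."""
    return size_bytes / (bandwidth_gb_per_s * 1e9) * 1e6

MSG = 1_000_000  # hypothetical 1 MB hand-off between application instances
print(transfer_time_us(MSG, 12.5))  # Ethernet:   80.0 us
print(transfer_time_us(MSG, 25.0))  # system bus: 40.0 us
```

Doubling the link speed halves the transfer time for large messages; the gap widens further once per-message latency is factored in, as discussed next.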

There is also less protocol overhead in a scale-up model, which aids performance. This is particularly true when you consider the smaller sized, but frequent, messages that come with serialization and control synchronization.

Latency, the time it takes a message to move from one point to another, also plays a significant role in determining the performance advantage in scale-up architectures.

In the scale-out model, latency is how long it takes a message to move from one system to another over an external network connection. Even with a high-performance Ethernet interconnect, the latency of a message is measured in microseconds.

But a scale-up architecture measures latencies in nanoseconds (one one-thousandth of a microsecond) because messages move between processors or memory using the very high-speed internal bus infrastructure.
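For the small, frequent synchronization messages mentioned earlier, this fixed latency dominates bandwidth. A sketch with hypothetical but representative figures (roughly 2 µs for a fast Ethernet hop, 100 ns for an internal bus; neither number is from the article):

```python
def message_time_ns(size_bytes: int, latency_ns: float,
                    bandwidth_gb_per_s: float) -> float:
    """Total one-way message time: fixed latency plus serialization.
    Note: bytes divided by GB/s conveniently yields nanoseconds."""
    return latency_ns + size_bytes / bandwidth_gb_per_s

# A 64-byte control/synchronization message:
ethernet = message_time_ns(64, 2_000, 12.5)  # ~2005 ns, latency-bound
bus = message_time_ns(64, 100, 25.0)         # ~103 ns
print(ethernet / bus)  # scale-up is roughly 20x faster for tiny messages
```

For these small messages the payload transfer is a few nanoseconds either way; almost the entire difference is the fixed latency of crossing an external network versus an internal bus.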

Probably the most important thing to consider when selecting the architecture for your applications is how your organization uses, and doesn’t use, applications. One idea to throw away is that your enterprise should use the same methodologies and architectures as Facebook, Amazon, and Google. Odds are your company’s application suite and requirements are nothing like those companies’, and the scale your enterprise needs is not nearly as large either. Select the right model for your enterprise, not theirs.

Bruce Jones, Product Specialist, Fujitsu Oracle Center of Excellence, Fujitsu Technology and Business of America, Inc.
