27.01.2021

Scalability of IT Infrastructures: Popping Up Like Mushrooms

“How does the system scale?”

“Is efficiency planning already integrated here?”

“How flexibly can the system scale up and down when required?”

Any IT professional will be well acquainted with questions like these. Indeed, scalability as a key criterion of any IT infrastructure has been an important part of my work from the beginning. The reason this subject matters is clear: IT must continually meet the demands of the digital world, and to this end companies need a flexible, scalable infrastructure.

“Scalability” refers to the capability of a system, network, or process to change in size. For IT infrastructures, what matters most is their ability to adapt to the changing requirements of applications as resources are added or removed. The question then arises as to the type of scalability needed to satisfy demand. In most cases, scaling up and/or scaling out will suffice.

Vertical scaling – or scaling up

Let’s start at the beginning: SAP solutions usually operate within a monolithic environment, an indivisible unit – at least at present. Enormous databases are the result.

As the system grows, the underlying hardware must grow as well. In the classical monolithic world, one variant in particular is applied: the scale-up variant.

The system is made more powerful by continually upgrading the machine’s hardware: we take the same large database and simply allocate more and more hardware resources to it until those, too, run short.

In the classical environment, we can easily extend this example to nature: think of our database as a tree.

The server on which the database runs in production is then comparable to a large flowerpot. This brings us to the first problem: the flowerpot is a precise fit for only a “short” time. While at the beginning our little plant first has to “grow into” the pot, before long it will “outgrow” it.

IT infrastructures present exactly the same problem – amply illustrated by graphs like the following:

Source: own representation

The times in which our database is “used” are phases when our metaphorical tree is very large, while during times of non-use it is very small. Our tree will thus shrink and grow at least once in the course of a working day (assuming that all operations take place in the same time zone).

Moreover, our database grows a bit more each day. Viewed over the year, the blue boxes thus represent the successively larger servers that are planned and sized in advance according to the trend at the time – and are therefore assured of a perfect fit.
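This sizing exercise can be made concrete with a little arithmetic. The following is a minimal sketch, with all figures invented for illustration: given a linear growth trend, it estimates when the current “flowerpot” runs out and how large the next one has to be.

```python
# Toy capacity forecast for a scale-up system (all figures are assumptions).
CURRENT_CAPACITY_GB = 2048   # size of the current server
CURRENT_USAGE_GB = 1500      # database size today
DAILY_GROWTH_GB = 2.5        # observed growth trend
PLANNING_HORIZON_DAYS = 365  # the next box should last one year

# Days until the current server is filled to the maximum.
days_left = (CURRENT_CAPACITY_GB - CURRENT_USAGE_GB) / DAILY_GROWTH_GB
print(f"Current server outgrown in ~{days_left:.0f} days")

# Size the successor so that it fits the trend for the whole horizon.
needed_gb = CURRENT_USAGE_GB + DAILY_GROWTH_GB * PLANNING_HORIZON_DAYS
print(f"Next server must hold at least {needed_gb:.0f} GB")
```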

Once procured, our flowerpot is therefore never outgrown, and towards the end of its service life it is always filled to the maximum.

This scaling method is known as “scaling up,” since basically a system will always be placed in a larger box in order to meet the requirements.

Progress in IT and ever-increasing speeds lead to smaller and smaller time windows in which the system hardware must be replaced in order to keep up with the performance requirements.

Of course, this method can also be implemented on the various SAP-certified hyperscalers, where you can move from one VM size to the next as the need arises. What matters here is which VM sizes have been certified by SAP.
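On Azure, for instance, such a scale-up step boils down to changing the VM size. Here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages); the subscription, resource group, VM name, and target size are placeholders for your own environment:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
resource_group = "rg-sap-prod"         # placeholder
vm_name = "vm-hana-01"                 # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Read the current VM definition, change only the size, and write it back.
vm = client.virtual_machines.get(resource_group, vm_name)
vm.hardware_profile.vm_size = "Standard_M128s"  # pick an SAP-certified size
client.virtual_machines.begin_create_or_update(resource_group, vm_name, vm).result()
# Note: the resize reboots the machine, so this scale-up step means downtime.
```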

On Azure: What is SAP-certified?

A closer look quickly reveals that only very large increments are possible here:

2 TB, 4 TB, 6 TB, 8 TB, 9 TB, 12 TB, 16 TB, 18 TB, 20 TB, 24 TB of RAM

Beyond a certain point, no “bigger” machine is available at present. Since no manufacturer offers more, the database in a scale-up system simply cannot grow any further.

Moreover, the method itself is cumbersome. To illustrate the situation, I’d like to go through a few of the steps necessary for scaling up such a system:

Source: Scheer

This shows a classical infrastructure life cycle. Nowadays, however, things can be done more efficiently:

Horizontal scaling – or scaling out

Representing scaling out with a tree is rather difficult – so let’s take our cue from the title of this article instead.

Mushrooms or mushroom cultures are planted in small horizontal “shelves”.

If you need more mushrooms, you can conveniently add a shelf above or below, since the mushrooms themselves never grow especially tall.

Applied to our IT infrastructure: instead of a tree continually growing in height, a new mushroom shelf is created for each server that is added. Together, these servers form a database that can be extended almost arbitrarily.

The advantage is clear: should the workload increase and my system require more processing units, I can simply add further servers and enhance the system’s performance.
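To make the idea tangible, here is a toy model in Python (a pure simulation, not any real database engine): records are spread across worker “shelves” by a hash, and adding a shelf simply gives the data more places to live.

```python
import hashlib

def shelf_for(key: str, num_shelves: int) -> int:
    """Deterministically map a record key to one of the shelves."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shelves

keys = [f"order-{i}" for i in range(10_000)]

for num_shelves in (2, 3, 4):  # scaling out: add one shelf at a time
    load = [0] * num_shelves
    for key in keys:
        load[shelf_for(key, num_shelves)] += 1
    print(f"{num_shelves} shelves -> records per shelf: {load}")
```

(Note that this naive modulo placement reshuffles most records whenever a shelf is added; real distributed databases use more stable partitioning schemes for exactly this reason.)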

This approach yields two significant advantages:

  • The system continues to run while being extended
  • The extension can proceed in the necessary stages

In addition, you profit from further possibilities:

  • Downsizing is quickly implemented
  • Availability can be improved with standby nodes, so that maintenance tasks (OS patching, for example) no longer necessitate downtime
  • The availability of the overall system increases, since the SPOF (single point of failure) of “the one running server” disappears

SPOF: Single Point of Failure

Horizontal scaling avoids a single point of failure: the failure of one component does not bring down the overall system. What’s more, capacity can be increased during operation.
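The standby mechanism can again be illustrated with a small simulation (node and partition names are invented; no real cluster manager is involved): when one worker fails, an idle standby node takes over its partition and the system as a whole keeps running.

```python
workers = {"node1": "partition-A", "node2": "partition-B", "node3": "partition-C"}
standby = ["node4"]  # idle standby node, kept ready for takeovers

def fail_over(failed_node: str) -> None:
    """Move the failed worker's partition to a standby node."""
    if not standby:
        raise RuntimeError("no standby node left, system degraded")
    replacement = standby.pop()
    workers[replacement] = workers.pop(failed_node)
    print(f"{failed_node} failed; {replacement} took over {workers[replacement]}")

fail_over("node2")  # node4 now serves partition-B; operation continues
print(workers)
```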

In terms of our model:

If my database never has to grow “tall,” I can place several “boxes” next to one another.

The database itself – the memory consumed – can continue to grow; only the operationally running parts need to be separated and moved into individual “boxes”.

Fortunately, SAP offers such an option for various databases:

Source: SAP

If we provide multiple “workers” that hold the active parts of the database, we absolutely require a storage medium that is accessible from all systems. NetApp offers ready-made storage units for persisting the database and attaching it to the various hosts with high performance:

Source: Microsoft

In other words, we obtain a ready-made planting box for our mushroom culture.
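Why the shared medium matters can be sketched in a few lines (a simulation with local files standing in for the shared volume; paths and names are placeholders): because every host can reach the same storage, a standby node can simply pick up a failed worker’s data.

```python
import json
from pathlib import Path

SHARED = Path("/tmp/shared-volume-demo")  # stand-in for the shared volume
SHARED.mkdir(exist_ok=True)

def persist(worker: str, partition: dict) -> None:
    """Every worker persists its partition on the shared medium."""
    (SHARED / f"{worker}.json").write_text(json.dumps(partition))

def take_over(standby_host: str, failed_worker: str) -> dict:
    """Any host that mounts the medium can read any worker's partition."""
    data = json.loads((SHARED / f"{failed_worker}.json").read_text())
    print(f"{standby_host} took over the partition of {failed_worker}")
    return data

persist("node1", {"rows": 1200})
print(take_over("node4", "node1"))  # node4 resumes node1's data
```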

Conclusion

To summarize: scaling up enlarges the existing hardware as the underlying system grows, while scaling out adds new hardware and connects it to the existing components. Clearly, each of the two variants has its justification. Up to a certain point, for example, scaling up is simply more practical for small databases. But if the data volume grows quickly, or if the database is to run in a high-availability scenario, it makes sense to look into the scale-out method and the related options at an early stage.

Now that I’ve explained the fundamentals of the scalability of IT infrastructures, my next blog article will address the interaction with NetApp storage in an Azure environment.