Computer scientists are a funny lot: they get worked up over the smallest problems, yet are always eager to help – sometimes even going far beyond their limits of endurance.
After all, they’re working on things they like.
Still, the switch to cloud resources makes many “IT guys” feel a bit queasy. After all, the company will no longer be running things on its own internally assembled server, but on a system in a data center with “shared resources” – and things feel different when you can actually touch the hardware.
Or are they?
Current developments, especially in IT, make it difficult for many computer scientists to feel as much at home in the new, “agile” cloud as they do in the company’s own data center.
They have long since settled into their own data center: the hypervisor serves perfectly prepared images, containing a fine-tuned set of configurations based on years of practical experience.
Cloud environments, on the other hand, look like a shop: prefabricated images, correctly configured for the cloud, are already available – a convenience that can lead to laziness, since accumulated, hard-won knowledge no longer comes into play.
Indeed, in the Cloud world some things function differently than before, so that research into “HOW?” can itself become a mammoth task in some scenarios.
Moreover, these images from the Cloud shop do not necessarily remain unchanged.
The shop images are patched and reconfigured. The shop itself and its user interface are growing and will continue to develop.
The new Cloud world changes rapidly.
The same goes for the data center “at home”, but in the past, you were on your own in dealing with patches for the storage units, the hypervisor, the switches, the routers and everything else.
That labor cost a lot of sweat and tears, but also established a sense of loyalty.
Cluster configurations across multiple data centers were and remain a great art. Configuration errors occur too frequently in manual operations.
So, there we are back at the beginning, with the years of experience and accumulated work that went into the configurations until we felt comfortable, and with ready-made images.
For example, if you want to provide a cluster comprising multiple identical hosts, you need a construction plan for what they are to look like. This configuration must be identical down to the tiniest parameter.
Creating and adapting the configuration from shop images is therefore only of limited use.
Just like the hypervisor “at home”, Azure offers a feature that provides space for finished images and simplifies the deployment of identical hosts: the image gallery. In Azure, too, these images live in our own storage and are therefore, for the time being, available only in our own tenant.
Storage in Azure is provided through a “storage account”; for objects such as images we speak of “blob storage”.
Since the storage account keeps the images ready at hand, and an image gallery should at some point grow into a large collection of ready-made images, some initial planning is important – if only to fill in the mandatory fields of the Azure infrastructure sensibly:
How do we name the resource group where the images are deposited?
Which region will we choose for storing our images?
Which synchronization options will we use?
What naming conventions do we already have for storage, and how can we transfer them to Azure?
How can I make my storage secure? (My colleague Dominic Iselt has written a blog about RBAC)
How are images currently being stored in the data center?
In this storage account we now need a blob container.
Naming conventions should be applied here as well, since the links from the image gallery point to the respective blobs. The name of a particular operating system, for example, would serve well as a container name; there we could then deposit all our images based on that OS.
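The storage account and the per-OS blob container can be prepared with the Azure CLI. A minimal sketch – all names (resource group, account, container, region) are placeholders for your own conventions, and a logged-in `az` session with a live subscription is assumed:

```shell
# Placeholder names throughout -- substitute your own naming convention.
# Requires the Azure CLI and an authenticated session (az login).
az group create \
  --name rg-images-prod \
  --location westeurope

az storage account create \
  --name stimagesprod001 \
  --resource-group rg-images-prod \
  --location westeurope \
  --sku Standard_LRS

# One container per operating system, as suggested above.
az storage container create \
  --name windows-server-2019 \
  --account-name stimagesprod001 \
  --auth-mode login
```

Region and SKU are set explicitly here because they correspond to the planning questions above – once chosen, moving a storage account to another region is not a trivial operation.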
We could then upload our generalized image to the container – should we wish to create that image ourselves. Since the use of cloud providers does not necessarily afford us this option, we recommend taking a different route.
Creating a managed image from an Azure VM requires a preconfigured VM in Azure.
We begin by searching for our future image VM in the existing VMs in the Azure portal.
From this VM we can create a snapshot in the portal; this also includes the creation of an image. Note that the machine must first be generalized – classically via Sysprep on Windows or deprovisioning (waagent) on Linux. The image gallery can, however, also handle specialized images.
Following creation of the image, the name, resource group and further handling of the VM must be confirmed. It is possible to use the VM purely for image creation and then delete it directly – quite practical!
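The same capture-and-delete flow is available outside the portal via the Azure CLI. A sketch with placeholder names, assuming the guest OS has already been prepared (Sysprep or waagent deprovisioning) and an authenticated `az` session:

```shell
# Placeholder resource and VM names; requires az login.
# The guest must already be prepared (Sysprep / waagent -deprovision).
az vm deallocate --resource-group rg-images-prod --name vm-image-source
az vm generalize --resource-group rg-images-prod --name vm-image-source

# Capture the managed image from the generalized VM.
az image create \
  --resource-group rg-images-prod \
  --name img-windows-server-2019-base \
  --source vm-image-source

# The source VM was only needed for image creation -- delete it.
az vm delete --resource-group rg-images-prod --name vm-image-source --yes
```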
Especially when the images are to be used for cluster operation, selection from different availability options is extremely important, since they cannot be subsequently configured.
Like every resource in Azure, we create the image gallery in a resource group. It makes sense to create it in the same resource group as the blob storage and all images.
Afterwards we must create the image definitions; here the first limits of the feature already become evident.
Within an “image” (in the gallery) the versions are then entered so that they too can be neatly structured and above all managed and patched by the administrator.
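Gallery and image definition can also be created with the Azure CLI. A sketch with placeholder names (publisher, offer and SKU are free-form identifiers of our own choosing); note that properties such as OS type and OS state are fixed in the definition and cannot be changed later:

```shell
# Placeholder names; requires az login.
az sig create \
  --resource-group rg-images-prod \
  --gallery-name gal_images_prod

# The definition fixes OS type, OS state (generalized vs. specialized)
# and the publisher/offer/sku triple -- none of these can change later.
az sig image-definition create \
  --resource-group rg-images-prod \
  --gallery-name gal_images_prod \
  --gallery-image-definition windows-server-2019-base \
  --publisher MyCompany \
  --offer WindowsServer \
  --sku 2019-base \
  --os-type Windows \
  --os-state Generalized

# Each version references a managed image and carries a
# Major.Minor.Patch version number.
az sig image-version create \
  --resource-group rg-images-prod \
  --gallery-name gal_images_prod \
  --gallery-image-definition windows-server-2019-base \
  --gallery-image-version 1.0.0 \
  --managed-image img-windows-server-2019-base
```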
The image version name must follow the Major(int).Minor(int).Patch(int) format – for example 0.0.1 or 1.5.13. Azure does not accept any other version syntax here.
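It can be worth validating version names before a pipeline attempts the upload. A minimal sketch in plain bash (the function name is our own):

```shell
# Succeeds only for Major(int).Minor(int).Patch(int) version names,
# the only syntax Azure accepts for image versions.
is_valid_image_version() {
  [[ "$1" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]
}

is_valid_image_version "0.0.1"  && echo "0.0.1 accepted"
is_valid_image_version "1.5.13" && echo "1.5.13 accepted"
is_valid_image_version "v1.0"   || echo "v1.0 rejected"
```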
You can create an image via the “Images” item; the blobs from the storage account are then merely linked here.
Microsoft’s storage account article provides a script for transferring new, generalized images to the storage account. Alternatively, this is possible via the GUI with Storage Explorer.
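Without reproducing Microsoft’s script, the upload itself can also be sketched with a single Azure CLI call; account, container and file names below are placeholders, and an authenticated session is assumed:

```shell
# Placeholder names; uploads a local, generalized VHD into the
# per-OS blob container created earlier.
az storage blob upload \
  --account-name stimagesprod001 \
  --container-name windows-server-2019 \
  --name windows-server-2019-base.vhd \
  --file ./windows-server-2019-base.vhd \
  --auth-mode login
```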
Your contact person
Johannes Suckow, Infrastructure Consultant
© 2022 Scheer GmbH