Inside IONOS podcast “What is the Cloud?” with Uwe Geier

Here you will find the transcript, as an English convenience translation, of a very interesting podcast. In it, Uwe Geier, Head of Cloud Solutions at IONOS, explains in conversation with Andreas Maurer how the Cloud differs from traditional hosting services.


Andreas Maurer: Hello and welcome to a new episode of Inside. My name is Andreas Maurer. Today, we're going to have a mix of “sunny and Cloudy skies” as we talk about a topic that actually plays a role in every episode of Inside IONOS, but one that is rarely understood outside of the expert community: the Cloud. What exactly is it? How did the Cloud start? What's behind it? Today, I'll be getting to the bottom of these questions with an expert. And that expert is none other than Uwe Geier, our Head of Cloud Solutions at IONOS. Uwe, it's great to have you here.

Uwe Geier: Yes, thank you very much, Andreas. It's great to be here.

Andreas Maurer: Uwe, where does the term “Cloud” actually come from? We all talk about it and use it in our everyday work at the company, but what's behind it?

Uwe Geier: Yes, Cloud has now established itself as a term for the provision of IT resources. The term actually originated in the early 90s, when network connections were represented as Clouds – relatively abstract shapes that really only described an input and an output. But it became established over the years, the Cloud symbol became the Cloud, and that's essentially the origin of the term.

Andreas Maurer: It was actually more of a coincidence, because someone once used this image.

Uwe Geier: Yes, that was in those old Visio and network diagramming programs, maybe you remember them. That was always a popular symbol for the abstract representation of connections.

Andreas Maurer: A question that I think we also discuss very often internally: Since when have people been talking about Cloud computing? When did the topic come up?

Uwe Geier: Cloud computing emerged in the early to mid-2000s and established itself, with virtualization technology being the key turning point. At the latest with Amazon EC2 – I don't know exactly when that was, 2006 – the first Cloud computing offering emerged, which is now one of the best known, alongside perhaps Microsoft and Google.

Andreas Maurer: And back then, I think it was more of a by-product of Amazon's normal e-commerce business, right?

Uwe Geier: Yes, in principle, but with a very bold and immense investment to date. And it has now actually developed into Amazon's core business.

Andreas Maurer: In 2008, we had already been doing web hosting for 20 years at IONOS, at that time still under the brands 1&1 and STRATO Partner. What exactly is the Cloud compared to web hosting? You could say that everything we do is Cloud. For us, the internet is everywhere. So what really distinguishes Cloud computing in the narrower sense from the hosting issues that existed before?

Uwe Geier: Well, hosting solutions are usually very rigid and relatively inflexible infrastructures, whereas the Cloud enables very dynamic scaling in terms of performance: Compute, network, and storage are the three modular building blocks that ultimately make up the Cloud. In the web hosting sector, you get a relatively predefined package, usually consisting of storage space and the associated connectivity to the internet, but without direct access to the scaling that may be needed as the service succeeds and grows.

Andreas Maurer: You've now mentioned two terms that I think need to be explained in a little more detail, and which are also related, namely dynamic and scaling.

Andreas Maurer: That means the opposite of dynamic would be static. In other words, the whole thing is flexible, mobile, and can scale. How can I imagine this in practical terms in the Cloud?

Uwe Geier: So the three main components that make up the Cloud are computing power, network for connectivity, and storage. These are available in different performance classes depending on the requirements. Scaling means that it is possible to add resources in any of these dimensions, i.e., to scale up or down, depending on the individual workload. Classic examples are peak loads in certain seasonal periods – around Christmas for a shop system, or invoicing at the end of the month for a customer doing data processing. In these cases, resources can be added as needed; or they can be scaled and expanded in line with continuous business growth, as with streaming services that are becoming more widespread and need to keep growing their infrastructure. So it's not only about providing computing power at specific times, such as at the end of the month or during the Christmas shopping season, but also about enabling continuous growth. And that basically means scaling without a major planning horizon, because I don't have to buy infrastructure or install it in my data center, but can obtain it from the Cloud.
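The seasonal-peak scenario described here can be made concrete with a little arithmetic. The sketch below compares paying only for capacity while it is used against provisioning for the peak all year; all prices are hypothetical placeholders, not actual IONOS rates.

```python
# Illustrative only: hypothetical prices, not actual IONOS rates.

def monthly_cost(cores: int, hours: float, price_per_core_hour: float) -> float:
    """Cost of running `cores` vCPUs for `hours` at a given hourly rate."""
    return cores * hours * price_per_core_hour

RATE = 0.03  # hypothetical EUR per core-hour

# A shop that needs 4 cores most of the year but 16 cores in December:
baseline = monthly_cost(4, 730, RATE) * 11   # 11 normal months (~730 h each)
peak = monthly_cost(16, 730, RATE)           # 1 peak month
elastic_total = baseline + peak

# Static provisioning would mean paying for peak capacity all year:
static_total = monthly_cost(16, 730, RATE) * 12

print(f"elastic: {elastic_total:.2f}  static: {static_total:.2f}")
```

With these made-up numbers, the elastic setup costs roughly a third of the statically provisioned one; the exact ratio depends entirely on the real tariffs and the load profile.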

Andreas Maurer: That means the counterexample would be the dedicated server, which we sell as a root server or managed server, where I then have a dedicated resource explicitly assigned to me for my own use.

Uwe Geier: In a way, yes, although you can of course scale here too by booking several dedicated servers and connecting them to each other in a suitable way. All of them are connected to our Cloud in a way. But yes, basically, a dedicated server means a predefined set of computing and storage resources that cannot be dynamically expanded. In the Cloud, for example, I can initially provision a virtual instance, or VM (virtual machine) as we call it, with two cores and then add more cores both during runtime and through simple reboots, depending on the criticality of the application. This is not possible with a dedicated server, where I have a fixed framework of 2, 12, or 24 cores, depending on the package I have booked. So this dynamic is not possible in a single machine. However, I can add multiple systems.
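Adding cores to a running VM, as described here, is typically driven through a REST API. The sketch below only constructs such a request to show the shape of the operation – the host, endpoint path, and payload fields are illustrative assumptions, not the documented IONOS Cloud API.

```python
# Hypothetical sketch: host, endpoint path, and payload fields are assumptions,
# not the documented IONOS Cloud API.
import json

def build_resize_request(datacenter_id: str, server_id: str,
                         cores: int, ram_mb: int):
    """Construct (method, url, body) for a VM resize call; nothing is sent."""
    url = f"https://api.example.com/datacenters/{datacenter_id}/servers/{server_id}"
    body = json.dumps({"properties": {"cores": cores, "ram": ram_mb}})
    return "PATCH", url, body

method, url, body = build_resize_request("dc-1", "srv-42", cores=4, ram_mb=8192)
print(method, url)
print(body)
```

In a real client, the request would be sent as an authenticated HTTP call, and the provider's API documentation defines the exact resource paths and property names.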

Andreas Maurer: And as a rule, there is not only dynamic performance, but also “dynamic” costs.

Uwe Geier: Yes, that's right. But we work with very transparent cost models that incur little to no usage-based costs. In contrast to various competing offers, where you have to navigate a certain complexity to see where your costs will end up at the end of the month, everything here is transparently calculable. There is a price calculator that I can feed with parameters, and it then calculates the potential monthly price based on the necessary CPUs, memory, and desired storage in the chosen performance class – the smallest unit we bill is, of course, the minute.
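Billing by the minute, as described, can be sketched as a simple rate calculation; all unit prices below are invented placeholders, not IONOS list prices.

```python
# Minimal sketch of minute-granularity billing with hypothetical unit prices
# (not actual IONOS rates).

PRICES_PER_MIN = {       # hypothetical EUR per unit per minute
    "core": 0.0005,
    "ram_gb": 0.0002,
    "ssd_gb": 0.00001,
}

def usage_cost(minutes: int, cores: int, ram_gb: int, ssd_gb: int) -> float:
    """Price a VM that ran for `minutes`, billed per started minute."""
    per_min = (cores * PRICES_PER_MIN["core"]
               + ram_gb * PRICES_PER_MIN["ram_gb"]
               + ssd_gb * PRICES_PER_MIN["ssd_gb"])
    return round(minutes * per_min, 4)

# A 2-core / 4 GB RAM / 50 GB SSD VM that ran for 90 minutes:
print(usage_cost(90, cores=2, ram_gb=4, ssd_gb=50))
```

The point of the model is that a VM deleted after 90 minutes only ever costs 90 minutes' worth of resources, rather than a full month's package price.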

Andreas Maurer: So, in other words, I pay for what I actually need, or what I think I need.

Uwe Geier: Exactly.

Andreas Maurer: Compared to the performance that might not be needed 80 or 90% of the time, but which the server would provide in the background in idle mode.

Uwe Geier: Exactly. Right.

Andreas Maurer: Exactly. Maybe let's go back to the terminology: whether on our homepage or in the industry at large, a lot of things fall under the Cloud umbrella these days. So, we might be talking about what we generally mean by the Cloud, or we might be talking about the Enterprise Cloud platform. Maybe you could break it down a bit. What is it, and what other Cloud models and components are there?

Uwe Geier: Well, the functionality of the Enterprise Cloud platform is largely aimed at commercial enterprise customers who have a high demand for technically sophisticated and scalable infrastructures, as well as a need for certain certifications and evidence of technical and operational excellence. ISO is one such topic, as we know: ISO 27000, C5, or the BSI standard, i.e., IT baseline protection. These are features that may be required in certain areas in order to use Cloud services at all. And that's why we talk about the Enterprise Cloud when it comes to using the infrastructure. In terms of CPU performance, we are more in the enterprise segment. We will also have entry-level segments in other areas, which may be more geared towards smaller and micro-enterprises or individuals. And that's why we tend to talk about enterprise, because of the overall offering as a whole: container services, to name one, other platform services, database as a service, or network file storage. These are elements that you would be more likely to use in an enterprise environment than in a small or medium-sized business.

Andreas Maurer: Then there's a pair of terms that are used in a somewhat contradictory way: the Public Cloud versus the Private Cloud. What's that all about?

Uwe Geier: Yes, that's kind of inherent in the terminology. Of course, private means that it's only for me and on my premises, and that's usually how it is set up. I'm then in control of the entire value chain, from purchasing the servers to installing the necessary software and then using virtualization. And in the past, there were a few offerings that made a Private Cloud installation possible. The most prominent example is VMware. This means bringing together individual hardware components with a suitable software solution to run virtualization exclusively. Other solutions include Proxmox, for example, or OpenStack, a large open-source project that allows you to build a Private Cloud tailored to your own specific purposes.

Andreas Maurer: So that means I really have a predefined set of hardware that is available exclusively to me.

Uwe Geier: Exactly. But it can be expanded. It has an initial size: we suggest three servers as the smallest unit, in order to ensure a certain level of reliability and also to have a quorum. With two, it is difficult to have a quorum; with three or more, it makes sense. But of course, it can also be expanded to any number, up to 300 or 3,000. The core idea is that I use this hardware exclusively for myself, without the part that is usual in the Public Cloud, where I share resources with other customers. Of course, there is a distinct, logically defined segmentation, but in the Public Cloud I am initially in competition for the use of the infrastructure with other customers.
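The quorum point – why two nodes are difficult but three or more make sense – comes down to needing a strict majority of nodes to agree, so that the cluster can keep operating (and avoid split brain) when some nodes fail. A tiny sketch:

```python
# Why two nodes cannot form a useful quorum: a strict majority is needed,
# and with 2 nodes, losing 1 already drops you below it.

def quorum_size(nodes: int) -> int:
    """Smallest number of nodes that constitutes a strict majority."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """How many nodes can fail while a majority still remains."""
    return nodes - quorum_size(nodes)

for n in (2, 3, 5):
    print(f"{n} nodes: quorum {quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

With two nodes, the majority is two, so a single failure already costs the quorum; with three nodes, one may fail; with five, two.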

Andreas Maurer: So, in principle, you've already explained the Public Cloud.

Uwe Geier: Exactly, I've explained it a little bit. Public means that I am sharing the use, although, as I said, we of course strictly separate it logically and also take appropriate measures to ensure that no unauthorized use can take place. But I can't decide whether or not to add a new hardware system and am dependent on the configuration and packaging that we provide.

Andreas Maurer: Then I have three more terms on the list. One is something that I think has only emerged in recent years: Bare-metal Cloud.

Uwe Geier: Yes.

Andreas Maurer: What is that?

Uwe Geier: Bare-metal Cloud is basically what we just hinted at, namely the mass use of dedicated servers, in some cases even in a very special configuration for very large customers who cannot or do not want to use virtualization, depending on the actual purpose of use. So we're talking about hundreds, sometimes even thousands, of dedicated bare-metal servers that are provided to the customer in an almost Cloud-like manner, and the customer can organize everything themselves, starting with the operating system. Put simply, we provide the building, the air conditioning, the power, the network, and the necessary service in the event of component failure. It happens now and then that a memory module breaks or a hard drive fails. So we take care of all that. But from the operating system onwards, i.e., from the boot prompt onwards, the customer is responsible for themselves and uses a large hardware environment for an individual purpose that sometimes does not allow virtualization. Examples might be GPU training clusters.

Andreas Maurer: GPUs are graphics processors that are mainly used for AI today.

Uwe Geier: Exactly. For example, yes.

Andreas Maurer: Exactly. And the last two terms came up last year in a large project we worked on, namely air-gapped on-premise Cloud. We won the project in April last year from ITZBund, the IT service provider for the German federal government and federal administration. What do I need to understand by on-premise Cloud and then also by the keyword “air-gapped”?

Uwe Geier: Yes, good point. Well, on-premise means under the customer's control, i.e., in the customer's buildings. And air-gapped, as the name suggests, means “air-sealed,” i.e., operating the Cloud without a network connection to the internet. So, there is no way to access this environment from the outside, because the need for protection of this environment is so great that cutting off this external connection ensures that the entire workload remains in this environment: on-premise at the customer's premises, within their jurisdiction. And, of course, this poses enormous challenges for the installation and operation of this Cloud, because it is not possible to organize things from the outside. So, you always have to use very special access paths and prevent external access technically across the board, to the extent that in some cases it is not even possible to establish or allow an external connection at all. That is what air-gapped means, and it also poses special challenges for operations, because different principles and different applications come into play than in the structures we provide elsewhere.

Andreas Maurer: That is also, I believe, something that not many providers can deliver, right?

Uwe Geier: No, very few indeed. There has been quite a lot of publicity in the press about the fact that the German Armed Forces, I believe, has also created an air-gapped environment with Google. And there aren't many of them. We are one of the few who have achieved this, and we are very proud of that. We hope to create further environments with the knowledge we have built up, the lessons we have learned, the operating models, and, above all, our daily operations.

Andreas Maurer: Our project takes time. We received the order in April last year, and I believe the first services have actually gone live recently.

Uwe Geier: Yes, the formal go-live was on August 1. However, we developed this in several stages with the customer. It was agreed and planned to first set up a so-called staging test environment in which the basic functionality would be tested, then the first productive building block, and subsequently a backup redundancy building block. Only then was the announced minimum expansion stage of the environment reached, which has now enabled the customer to go online and, yes, map their services onto it.

Andreas Maurer: Redundancy is also a good keyword. 

Of course, we have redundancy. There are the famous RAIDs, which already have “redundant” in their name: Redundant Array of Inexpensive Disks – though I believe the “I” now stands for “Independent” rather than “Inexpensive.” That means you have redundancy at the hard disk level.

With shared hosting, we don't want to go into too much detail here. That's basically the business that IONOS, 1&1, and STRATO Partner originally came from. With shared hosting, we started mirroring entire data centers at a different location a few years ago. This is difficult with a rather heterogeneous Cloud structure with very different use cases, in contrast to web hosting. How do I typically create redundancy if I want that as a customer?

Uwe Geier: Well, we provide the means and tools to do so, but of course the customer also has to take some responsibility. We have individually operated data centers and locations that are very well connected to each other via the public internet and that basically provide all the prerequisites for a redundant setup. We also have the option of implementing smart interconnection that connects internal structures with each other – not everything is always publicly connected; sometimes customer environments only provide for a private network segment, and we can connect these with each other too. But in order to establish true redundancy, the customer must of course also organize certain things, and it depends on the application. Databases, for example, can be replicated. You can also set up simple redundancy by providing regular mirroring or backups at another location. This always has to be organized on an individual basis and with the involvement of the customer. We naturally provide detailed advice and show how a form of redundancy can be achieved with the help of our services, but ultimately it is up to the customer to decide what redundancy means to them and to what extent failover protection can be implemented.

Andreas Maurer: The exciting thing here is probably the buzzword “geo-redundancy,” which means that I can actually host my data in data centers that are thousands of kilometers apart and thus closer to the customer, right?

Uwe Geier: Yes, that's right. But you have to be a little careful here. The term “geo-redundancy” also refers to the failure of one location without the service being affected in its entirety. You always have to be a little careful about what geo-redundancy really means. Of course, with the help of our services, I can set up a service at several locations and thus bring it closer to the customer. Whether this really constitutes geo-redundancy, i.e., the complete, total failure of a location without affecting the actual service, always depends a little on the customer application and the extent to which it can be set up to be geo-redundant in the first place. Key words here are split brain and, as I just mentioned, quorum. Who decides whether the location is really down and offline? It always takes a bit of effort to map real geo-redundancy, but yes, you're right, we have many locations that allow us to be close to the customer and to build and distribute their application intelligently.

Andreas Maurer: This is particularly exciting for companies that are active in several countries, for example, throughout Europe, and can then distribute their load. You already mentioned fair pricing. Perhaps, to conclude, could you briefly summarize the unique selling points or special features of the IONOS Cloud solution? Let's start with a term that we haven't mentioned yet, but which has been appearing more frequently in the press in recent weeks and months: Cloud stack. We're talking about the Eurostack and the Deutschlandstack. What does the IONOS Cloud stack look like and how does it differ from other Clouds?

Uwe Geier: Yes, we differ in that we have full control over the value chain. We develop most things ourselves or use open-source components. We are a German company with German data centers, and, in the spirit of sovereignty, we control this entire value chain – importantly, including where data is stored and who has access to it. I believe this is a key distinguishing feature from the so-called hyperscalers, where I cannot be completely sure who has access to the data and where it is or will be stored. And I believe that is the main differentiator for IONOS. In addition, we have a very simple, transparent cost model: apart from outgoing network traffic, there are no usage-based cost components beyond what is defined in advance in the package price, i.e., in the selection of your cores, storage, or memory. I believe that sets us apart from other competitors, especially our American competitors. And we offer relatively easy access to our Cloud in the form of a very understandable user interface together with a REST API, which basically provides everything I need to integrate it into automations, while keeping the first user experience very approachable. The user interface is the so-called Data Center Designer.

Andreas Maurer: Exactly, that's the DCD, our unique feature. It's still one of a kind and enables a canvas-based view, which provides an easy-to-understand and easy-to-grasp view of the virtual data center. And it's actually a really nice way to access it.

Andreas Maurer: Well, I'm also familiar with it from various trade shows, where we always showcase it prominently. So maybe that's a teaser. We'll be at it-sa in Nuremberg soon, then at the Smart Country Convention here in Berlin, where we're recording this conversation. So there are plenty of opportunities, and I have one more tip to share at the end. Last, no, penultimate question perhaps: Who actually benefits from Cloud computing? We just mentioned enterprises and large companies. What are some typical use cases where it makes perfect sense to move applications to the Cloud?

Uwe Geier: Well, it's actually a crossover across all industries, and that's also reflected in our customer base, where we really handle a wide variety of workloads. One very prominent application is software development companies that use Cloud instances to host their software as a service, for example. Others are, of course, e-commerce web shops, which, as mentioned at the beginning, are the classic seasonal shop-system case, and finance, which is also a large industrial sector. But also media and streaming services. In this respect, it is really very broadly positioned, and in fact the use of Cloud services has almost become a commodity, because every company is somehow dependent on IT infrastructure, i.e., it has digital processes and must somehow make use of computing power for them. And we are now at the point in the Cloud evolution where we say “Cloud first” and no longer put computing power in the basement, to put it in somewhat simplified terms. This also means no longer operating your own data centers, because, firstly, the know-how to actually operate them has to be available in the company, and secondly – this is very well known – capital expenditure is turned into operational expenditure, i.e., shifted to operating costs. And that's the case with the Cloud. In this respect, it's a very welcome solution for all industrial sectors.

Andreas Maurer: And just now, going back to the on-premise example, I think there's another important aspect that we haven't talked about yet: security. We all know that there are millions of attacks on the internet every day, and for us as a Cloud provider, fending them off is part of our daily business. We have a large team that takes care of this. I've set up Linux servers myself and used to think it was a great idea to run my own mail server. I wouldn't even consider doing that anymore, because I know it would probably be taken down from the outside almost immediately, and that is of course even more critical with business-critical applications.

Uwe Geier: Yes, it's super exhausting. You're right, and we basically experience DDoS attacks every day, for which we are very well prepared. We use so-called scrubbing centers, which filter out unwanted traffic so that it doesn't even reach the customer. Every now and then, we get hit, and then we break out in a sweat. But it's actually no longer feasible to operate on a smaller scale. And that alone is reason enough to move to the Cloud, because there I'm largely protected from such exposure and, as you say, I no longer have to deal with the exhausting aspects of operating commodity services. That's no longer appropriate in this day and age.

Andreas Maurer: But now we're really coming to the end, last question, last complex. Actually, in almost every Inside IONOS episode, the question at the end is: what does the future hold? What are the big trends and developments? The keyword GPUs has already been mentioned. AI, I assume, will be one of them. What do we need to prepare for in the coming years?

Uwe Geier: Well, AI will become a big topic and will probably even become an even stronger growth market than the Cloud is currently and will remain. And AI applications will become more and more common in everyday life. We will be moving strongly in the direction of offering GPU computing power. We already do this today with access to physical GPUs – i.e. Nvidia cards – and in the future we will also offer virtual GPU instances, which will enable not only inference but also training. Training always requires a larger number of GPUs in order to be able to carry out real training on a large data set in a finite amount of time. But other points are, of course, things like blockchain applications for financial products. Edge computing will also become an increasingly important topic that is more and more dependent on the Cloud. Edge computing, which perhaps needs a brief explanation, refers to smaller data centers that are widespread. An example within our group would be the 5G data centers, the Open RAN network from our colleagues at 1&1.

Uwe Geier: Exactly. And in the edge data centers, data preparation or data processing takes place very close by, but then relies on additional Cloud infrastructure to perhaps be persisted or further processed there. And in this respect, it will remain a strong growth market, further fueled by the advent of AI models, and it is the symbiosis of the two, i.e., suitable AI infrastructure with Cloud infrastructure, that actually creates the added value of, for example, an AI agent. We continue to see enormous growth potential in this area for the foreseeable future and, in principle, a continuation of the current trend, possibly even a sharp acceleration now due to the increasing spread of AI.

Andreas Maurer: And perhaps we should also mention the AI Giga Factory at this point, as we will be applying for one of these large AI data centers that have been put out to tender by the EU. Ultimately, however, it is important to note that such an AI data center is also a Cloud data center.

Uwe Geier: Yes, it is definitely heavily dependent on Cloud infrastructure, because the results of AI must also be supported by computing power. But the construction of this Giga Factory is a separate issue. It really should be thought of as a factory designed for the mass interconnection of suitable hardware in order to organize appropriate training.

Andreas Maurer: We could perhaps meet again in a few weeks to discuss this topic. Yes, I think it's worth a podcast episode where we can take a closer look at what's behind it. Perhaps one more note on the topic of the Cloud. I believe we talked about Ethernet in Cloud data centers with Sebastian Hohwieler in the penultimate episode of Inside IONOS. We have just put a new data center into operation in Frankfurt. So, if you want to delve a little deeper into the technology, I recommend listening to that episode. We will, of course, link to it in the show notes. Until then, thank you very much, Uwe, for these exciting insights. That was a really in-depth look into the world of the Cloud, and I hope it has made the Cloud a little less Cloudy for our listeners. As always, if you have any questions, suggestions, or criticism, we would be happy to receive an email at podcast@ionos.com. We also welcome comments or ratings on your podcast platform of choice, or simply recommend us to others. Before you switch off, here's a tip for your calendar. We've already talked about a few events. On November 4, the IONOS Summit will take place again this year in Berlin, the event for digital sovereignty, secure Cloud solutions, and the important technology issues of tomorrow. You can already register for this great event free of charge at events.ionos.com. You can look forward to powerful insights from politics, business, and the tech industry, and experience how digital independence and digital sovereignty really work with security, data protection, and compliance. Thank you very much for listening. Bye for now, and see you next time.

Uwe Geier: Bye.

 

If you are interested in listening to the German podcast, you can click on the following link:

Link to the Podcast

 

You don't want to miss anything?

Sign up for our IR newsletter and stay informed about relevant investor relations topics of the IONOS Group.

Sign up now