A drifting history of the cloud

Do you know what cloud native is?
Yes? No? Let me tell you a story.
A long time ago, an operating system was written where everything was supposed to do everything. It was called Multics. In theory you could do anything – swap out RAM in a running OS, go crazy, etc. It didn’t really work, so some smart people said: let’s do the opposite – everything will do only one thing, and do it properly. They called it Unix.
A key feature of Unix was that everything was supposed to be a file. A CD-ROM drive would be treated like a file: you open a file, write something, and save it. If that file is actually a file, you save it. If that file is a CD-ROM, you burn it permanently onto your CD. When you connect a computer to another computer, that computer shows up as a file, and you write on someone else’s computer by writing on that file. You get the idea. Different disks and different partitions on those disks – all were files.
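Here is what that looks like in practice: a minimal Python sketch, assuming a Linux machine where the kernel’s random-number device /dev/urandom exists. A device and an ordinary file are handled with the exact same calls:

```python
# "Everything is a file": a device node opens like any regular file.
# Assumes Linux, where /dev/urandom is the kernel's random-byte device.
with open("/dev/urandom", "rb") as device:
    data = device.read(8)  # read 8 random bytes from the "file"
print(data.hex())

# An ordinary file on disk uses the very same open/write/close interface.
with open("/tmp/example.txt", "w") as regular_file:
    regular_file.write("same calls, different thing behind the file\n")
```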
All files were organized under directories. The root directory was where everything was; every other directory lived inside it. You couldn’t go “up” from the root directory.
Some time passes and people realize that while we can access the files of another OS from the running OS (let’s call it the host OS), we can’t run the programs inside it, because those programs expected particular files to be present at particular locations. They would, for example, require a particular version of some particular file at /usr/lib – but /usr/lib would be different between different OSes. If the other OS was mounted at /mnt/os, then that file would be at /mnt/os/usr/lib, while /usr/lib would have the host OS files. There were workarounds like LD_PRELOAD (just ignore this if you don’t know what it is) – but there were just too many programs, with just too many peculiar dependencies, so a proper solution was needed.
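To make that workaround concrete: LD_PRELOAD is a real loader environment variable that forces a chosen shared library to be loaded first. The sketch below is hedged – the paths and library name under /mnt/os are hypothetical placeholders, not anything prescribed:

```python
import os
import subprocess

# Hypothetical: push the guest OS's copy of a library into the loader
# before running a guest program. All /mnt/os paths are illustrative.
env = dict(os.environ)
env["LD_PRELOAD"] = "/mnt/os/usr/lib/libexample.so"  # hypothetical library

# Run a program from the mounted OS with that library preloaded.
subprocess.run(["/mnt/os/usr/bin/someprogram"], env=env, check=False)
```

Patching programs one by one like this is exactly why it did not scale.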
So chroot was invented. This program requires elevated privileges and silently adds “/mnt/os” in front of the path whenever a program in /mnt/os requests a file. The program never knows it is running inside a host OS. This solved almost every problem – but one remained: the main kernel would still be the one running from the host OS. There was no way to run two kernels at the same time.
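Python exposes the same mechanism, so here is a minimal sketch of the idea; it must run as root and assumes a complete OS tree is already mounted at /mnt/os:

```python
import os

# Requires root. Assumes a full OS filesystem is mounted at /mnt/os.
os.chroot("/mnt/os")  # from now on, "/" means "/mnt/os" for this process
os.chdir("/")         # step into the new root

# Any program started now sees /usr/lib, /etc, ... of the mounted OS
# and never knows the host OS exists.
os.execv("/bin/sh", ["/bin/sh"])
```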
Then one day, just like that, virtualization was invented, and CPUs started supporting it. You could suddenly run any number of virtualized kernels after your computer was “up” and running the host kernel.
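On Linux you can actually see whether your CPU supports this. A small sketch, assuming /proc/cpuinfo exists (Linux-only): Intel advertises the vmx flag, AMD the svm flag:

```python
# Linux-only: hardware virtualization support shows up as a CPU flag.
# "vmx" = Intel VT-x, "svm" = AMD-V.
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

supported = "vmx" in cpuinfo or "svm" in cpuinfo
print("hardware virtualization supported:", supported)
```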
And so, someone decided it would be nice to have a very user-friendly program that automatically launches a virtualized kernel and chroots into it. It would have a very simple config file where people just specify where the other OS is found, which program to run, and how; the program would then go download that OS and the program, run that OS’s virtualized kernel, chroot into it, and run the program. They then went ahead and created a place where everyone can store their programs and OSes, so that everyone can easily search what is available and collaborate. They called this program Docker, and the place where everyone stores everything a Registry. The virtualized OS came to be known as a “container”, as it contained everything required to run a program.
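That “very simple config file” is, in Docker’s case, the Dockerfile. A minimal sketch (the ubuntu:22.04 image and the echo command are just illustrative choices):

```dockerfile
# Where the other OS is found (downloaded from a registry):
FROM ubuntu:22.04

# Which program to run, and how:
CMD ["echo", "hello from inside a container"]
```

Building it with `docker build -t hello .` and running it with `docker run hello` does the whole dance from the story: fetch the OS, set up the isolated root, run the program inside it.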
A lot of people thought this was an easy thing to do, so they created their own stores, and this concept became known as the “container registry”. Of course, with so many containers being uploaded everywhere, people started trying to have one container talk to another container. Inter-container networking became important, and it was very complicated to do. For this a new tool was written: Kubernetes. There are other tools, but Google made Kubernetes public, so it was adopted widely and became popular. One Kubernetes to manage many, many Dockers.
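A hedged sketch of what that looks like: the Kubernetes config below (the name hello and the nginx:1.25 image are illustrative) runs two copies of a container and gives them one stable network name, which is precisely the inter-container networking problem:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                # run two copies of the container
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25    # illustrative image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello                # other containers reach these as "hello"
spec:
  selector:
    app: hello
  ports:
  - port: 80
```

Apply it with `kubectl apply -f hello.yaml`, and any other container in the cluster can talk to the two copies simply by the name hello.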
So of course, when you have Kubernetes managing many, many Dockers, and millions of dollars to buy expensive equipment, you can charge other people to run their programs on your machines without letting them know what your host OS is or anything about the hardware. You call this “cloud”. You can:
- Let them upload their Docker config, for example a Docker Compose file (see the sketch after this list)
- Or create your own scripting language and lock those people into your ecosystem. For example, ARM templates for Azure.
- And/or create a nice website that does everything that almost everyone wants to do – for example, load balancing between different containers.
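Here is a minimal sketch of such a Docker Compose file; the service names web and db and their images are illustrative, not prescribed:

```yaml
# docker-compose.yml: two containers that can talk to each other by name.
services:
  web:
    image: nginx:1.25        # illustrative image
    ports:
      - "8080:80"            # expose the web container to the outside
  db:
    image: postgres:16       # illustrative image
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` starts both containers; web reaches the database simply at the hostname db.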
And if you know that a cloud provider sells a load balancer between different containers, then you do not need to write your own. And if your program depends on a cloud provider’s load balancer (for example), then your program is cloud native.
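For a concrete taste of that dependence, here is a hedged Python sketch using AWS’s boto3 SDK as one possible example (it assumes AWS credentials are configured, and us-east-1 is just an example region). Note that off the cloud, this program cannot run at all:

```python
import boto3  # AWS SDK for Python; assumes credentials are configured

# Ask the cloud provider about its load balancers. Without the cloud,
# this call has nothing to talk to, so the program simply cannot work.
elb = boto3.client("elbv2", region_name="us-east-1")  # example region
response = elb.describe_load_balancers()

for lb in response["LoadBalancers"]:
    print(lb["LoadBalancerName"], lb["State"]["Code"])
```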
In short, cloud native means you know beforehand what you are going to offload to the cloud, and your program will in fact not run without the cloud.
Did that make sense?