A drifting history of the cloud

Do you know what cloud native is?
Yes? No? Let me tell you a story.
A long time ago, an operating system was written in which everything was apparently doing everything. It was called Multics. In theory you could do anything: swap RAM in a running OS, go crazy, and so on. It didn’t really work, so some smart people said, “Let’s do the opposite: everything will do only one thing and do it properly.” They called it Unix.
A key feature of Unix was that everything was supposed to be a file. A CD-ROM drive would be treated like a file. You open a file, write something and save it. If that file is actually a file, you save it. If that file is a CD-ROM, you burn it permanently onto your disc. When you connect a computer to another computer, that computer shows up as a file. But a computer can be a lot of things, so you have to mount the part of that computer you want to write to, and it appears as files. You write to someone else’s computer by writing to those files. You get the idea. Different disks and different partitions on those disks: all were files.
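The “everything is a file” idea is easy to see in practice. As a minimal sketch in Python (assuming a Unix-like system where the device node /dev/null exists), the exact same open/write API handles a regular file and a device:

```python
import os
import tempfile

def write_message(path: str, message: str) -> None:
    # The same API works whether `path` is a regular file or a device node.
    with open(path, "w") as f:
        f.write(message)

# Writing to a regular file stores the bytes on disk.
regular = os.path.join(tempfile.mkdtemp(), "note.txt")
write_message(regular, "hello")
print(open(regular).read())  # -> hello

# Writing to the /dev/null device discards the bytes, but the code is identical.
write_message("/dev/null", "hello")
```

The program doesn’t care what sits behind the path; the kernel routes the bytes to the right place.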
All files were organized under directories. The root directory was where everything was. All other directories were inside this root directory. You couldn’t go “up” from the root directory.
Some time passes and people realize that while you can access the files inside another OS from the running OS (let’s call it the host OS), you can’t run the programs present in it, because those programs expect particular files to be present in particular locations. They would, for example, require a particular version of some specific file at /usr/lib, but /usr/lib would be different between different OSes. If the other OS was mounted at /mnt/os, then the file would be at /mnt/os/usr/lib, while /usr/lib would contain the host OS files. To solve this, some hacks like support for LD_PRELOAD were added (just ignore this if you don’t know what it is), but there were just too many programs with too many peculiar dependencies; a proper solution was needed.
So chroot was invented. This program required elevated privileges and would silently prepend “/mnt/os” to the path whenever a program in /mnt/os requested a file. The program would never know it was running inside the host OS! This solved almost every problem, but one remained: the main kernel would still be the host OS kernel, and there was no way to run two kernels at the same time.
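Real chroot is a kernel feature and needs root privileges, but the “silently prepend /mnt/os” idea from the story can be sketched in a few lines of Python (translate_path is a made-up helper for illustration, not a real system call):

```python
import os.path

NEW_ROOT = "/mnt/os"  # where the other OS is mounted (path from the story)

def translate_path(path: str) -> str:
    """Toy model of the story: every absolute path a 'jailed' program
    asks for is silently resolved under the new root instead."""
    # Normalize first so tricks like "/usr/../../etc" cannot escape the new root.
    clean = os.path.normpath("/" + path.lstrip("/"))
    return NEW_ROOT + clean

print(translate_path("/usr/lib"))     # -> /mnt/os/usr/lib
print(translate_path("/usr/../etc"))  # -> /mnt/os/etc
```

The jailed program asks for /usr/lib and gets the other OS’s /usr/lib, exactly as the story describes.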
Then one day, just like that, CPUs were fast enough to support virtualization. Once your computer was up and running the host kernel, you could suddenly run any number of virtualized kernels on top of it.
And so, someone decided to build a nice, user-friendly program that automatically launches a virtualized kernel and chroots into it. It would have a very simple config file where people just specify where another OS can be found and which program to run, and how; the program would then download that OS and the program, boot the virtualized kernel, chroot into it and run the program. They also went ahead and created a place where everyone can store their programs and OSes, so that everyone can easily search what is available and collaborate. They called this program Docker, and the place where everyone stores everything a registry. The virtualized OS came to be known as a “container”, as it contained everything required to run a program.
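That “very simple config file” exists today as a Dockerfile. A minimal sketch (the hello.sh script is hypothetical) really does just name which OS to start from and which program to run:

```dockerfile
# Which OS to download from the registry.
FROM ubuntu:22.04
# Ship a (hypothetical) program inside the container.
COPY hello.sh /hello.sh
# Which program to run when the container starts.
CMD ["/bin/sh", "/hello.sh"]
```

Given this file, Docker fetches the OS image, assembles the container and runs the program, exactly the workflow the story describes.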
A lot of people thought this was an easy thing to do, so they created their own stores, and this concept became known as a “container registry”. Of course, with so many containers being uploaded everywhere, people started trying to have one container talk to another. Inter-container networking became important and was very complicated to do. For this a new tool was written: Kubernetes. There are other tools, but Google made Kubernetes public, so it was adopted widely and became popular. Kubernetes was created to manage many Docker containers.
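Kubernetes is itself driven by config files. As a sketch (the name and image URL are made up), a minimal Deployment manifest asking Kubernetes to keep three copies of a container running looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                # hypothetical name
spec:
  replicas: 3                # keep three copies of the container running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello:1.0  # pulled from a container registry
```

If a container dies, Kubernetes starts a replacement so the count stays at three; that is the “managing many containers” part.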
So of course, when you have Kubernetes to manage many, many Docker containers and millions of dollars to buy expensive equipment, you can charge other people to run their programs on your machines without letting them know what your host OS is, or anything about the hardware. You call this “cloud”. You can:
- Let them upload their Docker config (for example, a Docker Compose file)
- Or create your own scripting language and lock those people into your ecosystem. For example, ARM templates for Azure.
- And/or create a nice website that does everything that almost everyone wants to do – for example, load balancing between different containers.
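The “Docker config” in the first bullet looks roughly like this. A minimal sketch of a Compose file (image names are hypothetical) with two containers; Compose also wires up the networking from the previous section, so `web` can reach `db` simply by that name:

```yaml
services:
  web:
    image: registry.example.com/my-web-app:1.0  # hypothetical application image
    ports:
      - "8080:80"       # expose the app to the outside world
    depends_on:
      - db
  db:
    image: postgres:16  # the web container reaches this one at the hostname "db"
    environment:
      POSTGRES_PASSWORD: example
```

A cloud provider that accepts such a file can run both containers and their network for you without revealing anything about its own machines.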
And if you know that a cloud provider sells a load balancer between different containers, then you do not need to write your own. And if your program depends on a cloud provider’s load balancer (for example), then your program is cloud-native.
In short, cloud native means you know beforehand what you are going to offload to the cloud, and your program will in fact not run without the cloud.
Does that make sense?
Note:
The above story is, of course, an oversimplification. Multics was not truly an operating system where everything did everything—it was more about modularity. Subsequently, Unix adopted a philosophy of “do one thing and do it well,” rather than enforcing a rule that everything must do only one thing. Files under /dev require mounting, as do NFS and SMB.
Similarly, chroot doesn’t actually add /mnt/os to paths—it virtualizes the filesystem. Virtualization predates Docker by decades, and Docker itself uses cgroups and namespaces. Kubernetes works with any container technology, not just Docker.
This note exists to clarify that the article is intended to provide a head start and is designed for easy reading.