How Kubernetes Took the World by Storm

 

When containers were first introduced in 2008, virtual machines (VMs) were the state-of-the-art option for optimizing a data centre’s physical resources. This arrangement worked well enough, but it had a notable flaw: virtual machines consume a great deal of resources, because each one requires a complete operating system and must have its instructions emulated before they reach the physical CPU.

Containers solve this problem by sharing a common kernel among all applications, with each container drawing on operating-system resources as needed. Containers can run on bare metal and share resources, yet they cannot access one another’s resources. But how do containers ensure high availability, disaster recovery, or scalability? Container orchestration systems such as Kubernetes (K8s) offer a solution.

So what is Kubernetes?

Kubernetes is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the life cycle of containerized applications and services using methods that provide predictability, scalability, and high availability.

It is an open-source project that has become one of the most popular container orchestration tools around; it allows you to deploy and manage multi-container applications at scale. While in practice Kubernetes is most often used with Docker, the most popular containerization platform, it can also work with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes. And because Kubernetes is open source, with relatively few restrictions on how it can be used, anyone who wants to run containers can use it freely, almost anywhere they want to run them: on-premises, in the public cloud, or both.

Google and Kubernetes


Google was probably the first company to realize that it needed a better way to deploy and manage its software components in order to scale globally, and for years it ran the Borg cluster manager (and later its successor, Omega) internally.

Kubernetes began life as a project within Google. It’s a successor to, though not a direct descendant of, Borg, the earlier container management tool that Google used internally. Google open-sourced Kubernetes in 2014, in part because the distributed microservices architectures that Kubernetes facilitates make it easy to run applications in the cloud. Google sees the adoption of containers, microservices, and Kubernetes as potentially driving customers to its cloud services (although Kubernetes certainly works with Azure and AWS as well). Kubernetes is currently maintained by the Cloud Native Computing Foundation (CNCF), which is itself under the umbrella of the Linux Foundation.

Why do you need Kubernetes, and what can it do?

Containers are a good way to bundle and run your applications. In a production environment, though, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behaviour were handled by a system?

That’s where Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system, as the sketch below illustrates.
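
To make the canary idea concrete, here is a minimal, hedged sketch using the official Kubernetes Python client (installed with pip install kubernetes). It runs a small stable fleet alongside a single canary replica behind one Service; the image names, labels, ports, and replica counts are illustrative assumptions, not values from this article.

    # Hedged sketch: a simple canary rollout with the official Kubernetes Python client.
    # Both Deployments share the "app" label, so the Service spreads traffic across
    # stable and canary pods roughly in proportion to their replica counts.
    from kubernetes import client, config


    def make_deployment(name, image, replicas, track):
        """Build a Deployment whose pods carry a shared 'app' label plus a 'track' label."""
        labels = {"app": "myapp", "track": track}
        container = client.V1Container(
            name="myapp",
            image=image,
            ports=[client.V1ContainerPort(container_port=8080)],
        )
        return client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=client.V1PodSpec(containers=[container]),
                ),
            ),
        )


    if __name__ == "__main__":
        config.load_kube_config()  # assumes a working ~/.kube/config
        apps = client.AppsV1Api()
        core = client.CoreV1Api()

        # 9 stable replicas plus 1 canary replica: roughly 10% of requests hit the new image.
        apps.create_namespaced_deployment("default", make_deployment("myapp-stable", "myapp:1.0", 9, "stable"))
        apps.create_namespaced_deployment("default", make_deployment("myapp-canary", "myapp:1.1", 1, "canary"))

        # One Service selects on the shared "app" label only, so it load-balances
        # across both the stable and the canary pods.
        service = client.V1Service(
            api_version="v1",
            kind="Service",
            metadata=client.V1ObjectMeta(name="myapp"),
            spec=client.V1ServiceSpec(
                selector={"app": "myapp"},
                ports=[client.V1ServicePort(port=80, target_port=8080)],
            ),
        )
        core.create_namespaced_service("default", service)

Promoting the canary is then just a matter of updating the stable Deployment’s image and deleting the canary, while rolling back only requires deleting the canary Deployment.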

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration. (Several of these features appear together in the sketch after this list.)
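
To show how a few of these features come together, here is a hedged sketch of a single container spec built with the official Kubernetes Python client: the resource requests and limits are what the scheduler uses for bin packing, the liveness and readiness probes drive self-healing and traffic gating, and the environment variable is drawn from a Secret rather than being baked into the image. The names myapp:1.0, /healthz, /ready, app-secrets, and DB_PASSWORD are illustrative assumptions.

    # Hedged sketch: one container spec combining resource requests (bin packing),
    # health probes (self-healing), and a Secret-backed environment variable.
    from kubernetes import client

    container = client.V1Container(
        name="myapp",
        image="myapp:1.0",
        # Bin packing: the scheduler places the pod on a node with enough
        # spare CPU and memory to satisfy these requests.
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "256Mi"},
            limits={"cpu": "500m", "memory": "512Mi"},
        ),
        # Self-healing: restart the container if the liveness probe fails ...
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=5,
            period_seconds=10,
        ),
        # ... and keep it out of Service load balancing until it reports ready.
        readiness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/ready", port=8080),
            period_seconds=5,
        ),
        # Secret management: the password comes from a Secret named "app-secrets",
        # so it is never rebuilt into the container image or exposed in the spec.
        env=[client.V1EnvVar(
            name="DB_PASSWORD",
            value_from=client.V1EnvVarSource(
                secret_key_ref=client.V1SecretKeySelector(name="app-secrets", key="db-password"),
            ),
        )],
    )

This container object would then be placed in a pod template inside a Deployment, exactly as in the canary sketch earlier.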

To wrap up, let’s look at a couple of case studies showing how Kubernetes is solving real industry challenges.

Case Study: Pinterest

Challenge

After eight years in existence, Pinterest had grown to 1,000 microservices, multiple layers of infrastructure, and a diverse set of tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.

Solution

The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.

Impact

“By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins,” says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. “We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 per cent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 per cent fewer instance-hours per day when compared to the previous static cluster.”

Case Study: Nokia

Challenge

Nokia’s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. “As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators has a bit different infrastructure,” says Gergely Csatari, Senior Open Source Engineer. “There are operators who are running on bare metal. Some operators are running on virtual machines. Some operators are running on VMware Cloud and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself.”

Solution

The company decided that moving to cloud-native technologies would allow teams to have infrastructure-agnostic behaviour in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. “The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes,” says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. “Now, all the products are doing some kind of re-architecture work, and they’re moving to Kubernetes.”


Impact

Kubernetes has enabled Nokia’s foray into 5G. “When you develop something that is part of the operator’s infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies,” says Csatari. The teams using Kubernetes are already seeing clear benefits. “By separating the infrastructure and the application layer, we have fewer dependencies in the system, which means that it’s easier to implement features in the application layer,” says Csatari. And because teams can test the same binary artefact independently of the target execution environment, “we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal,” he adds. As a result, “we save several hundred hours in every release.”

Conclusion

Kubernetes has not only helped with the vertical and horizontal scaling of containers, it has also raised the bar for what engineering teams can expect from their infrastructure. It has succeeded in deployments ranging from an initial handful of servers to very large clusters, and it has gained enormous popularity in a short span of time. As the case studies above show, engineers across the industry have inspiring stories about how it helped get their businesses on track.

Thanks for reading this article! Leave a comment below if you have any questions.
