
What is Kubernetes and Why Should You Use It?

Kubernetes is an open-source orchestrator for managing and deploying containerized applications at scale. It was originally designed by Google to help scale containerized apps in the cloud.

Kubernetes can manage the lifecycle of containers, creating and destroying them depending on the needs of the application, as well as providing a range of other features.

Simply put, Kubernetes provides tools needed to build and deploy reliable, scalable distributed apps.

If you are interested in learning more about what you can get by using Kubernetes or a Kubernetes alternative, keep reading.

What is Kubernetes really?

Kubernetes runs on a cluster of servers (nodes), each of which runs Kubernetes agent processes and communicates with the others.

The Master Node runs a collection of processes called the control plane, which enacts and maintains the desired state of the Kubernetes cluster, while Worker Nodes are responsible for running the containers that form your applications and services.

In short, the goal is load balancing and horizontal scalability, and this is what Kubernetes helps with.

Nowadays, a huge number of services are delivered over the network via APIs. Behind them sit distributed systems running on multiple servers in various locations, all coordinating their actions through network communication.

These exposed APIs are used on a daily basis and they need to be reliable and available. In other words, they must not fail and should not have any downtime.

On top of that, these services are accessed from all around the world so they should be scalable too without a significant redesign of the existing system.

And the purpose of Kubernetes is to provide the services needed to achieve all of this for your app.

Now that you are familiar with the Kubernetes container orchestration platform, learn more about its benefits below.

Benefit 1: Immutability

With immutable infrastructure, an artifact is never modified once it has been created; when something needs to change, you build a new artifact and replace the old one entirely.

Containers and Kubernetes encourage developers to build distributed systems that adhere to the principles of this kind of immutable infrastructure.

The traditional way of doing things with mutable infrastructure meant allowing changes to happen on top of existing objects as incremental updates.

Hence, the current state of the infrastructure cannot be represented as one single artifact, but only as an accumulation of incremental updates and changes to that artifact.

If you are wondering how to go about updating your app: simply build a new container image with a new tag and deploy it. Kubernetes then replaces the containers running the old image version with containers running the new one.

Consequently, you will always have an artifact record of what you did. On top of that, should an error occur, you will be able to roll back to the previous image easily.
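
To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the registry, app name, and tags are hypothetical placeholders:

```yaml
# deployment.yaml - a minimal Deployment (names and image are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # To update, build a new image, change the tag below
          # (e.g. v1 -> v2), and re-apply this file. Kubernetes keeps
          # the old revision around, so rolling back is one command:
          #   kubectl rollout undo deployment/myapp
          image: registry.example.com/myapp:v1
```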

Benefit 2: Speed

Back in the day, pushing an update usually meant downtime for the app, so it was typically done at midnight or over the weekend, when traffic was lower.

But now, things are a lot quicker: the speed at which you can update your app and deploy new features simply does not compare to the older platforms.

However, be aware that constantly deploying new features does not increase velocity if every deployment comes with downtime.

Users expect the app to be constantly up, so update it without downtime, and measure velocity as the number of features you can ship per hour while maintaining a highly available service.
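
One way to get there, sketched here as an excerpt from a Deployment spec like the one shown earlier, is a rolling-update policy that brings each new replica up before retiring an old one:

```yaml
# Excerpt from a Deployment spec: a zero-downtime update policy
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving replica away first
      maxSurge: 1         # start one new replica before retiring an old one
```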

This high speed is achieved thanks to core concepts of Kubernetes such as immutability, declarative configuration, and self-healing systems.

Benefit 3: Declarative Configuration

[Image by James Osborne from Pixabay]

Everything in Kubernetes is a declarative configuration object that represents the desired state of the system.

This is an alternative to the traditional imperative configuration where the state of a system is defined by the execution of a series of instructions rather than a declaration of the desired state of the system.

For instance, consider the task of running three replicas of a piece of software. When using an imperative configuration, it would look like this:

‘run A, run B, run C’

A declarative configuration would make that look like this:

‘replicas = 3’

This kind of declarative configuration allows a user to describe exactly what state the system should be in and is far less error-prone.
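
In Kubernetes terms, that declaration is simply a field in a manifest; here is the relevant excerpt from the hypothetical Deployment sketched earlier:

```yaml
# Declarative: state the desired count and apply the file.
spec:
  replicas: 3   # the desired state; Kubernetes works out the steps to reach it
```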

Traditional tools of development such as source control, unit tests, and so on, can be used with declarative configurations in ways that are impossible with imperative configurations.

This makes rollbacks fairly easy in Kubernetes and practically impossible with imperative configurations: imperative systems describe how to get from point A to point B, but rarely include the reverse instructions to get you back.

Benefit 4: Self-healing Systems

When Kubernetes receives a desired state configuration, it does not simply take action once to make the current state match the desired state; it continuously takes action to keep the two matched as time passes.

For example, if you assert a desired state of 3 replicas of a certain application, Kubernetes does not just create 3 replicas; it continuously ensures that there are exactly 3. If you manually destroy one, Kubernetes brings a new one up to match the desired state.
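
Self-healing also works below the replica level. Here is a minimal sketch, assuming the app exposes an HTTP health endpoint (the image, path, and port are hypothetical): the kubelet probes the container and restarts it automatically if the probe fails.

```yaml
# pod.yaml - container-level self-healing via a liveness probe
apiVersion: v1
kind: Pod
metadata:
  name: myapp-probe-demo
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:v1   # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz   # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      # If /healthz stops answering, the kubelet restarts the container
      # on its own - no operator intervention required.
```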

Scaling

If you plan on growing your product (and you should), you will have to scale both your software and teams working on it.

Kubernetes achieves scalability by prioritizing decoupled architectures.

Decoupled architectures

In a decoupled architecture, each component is separated from other components by defined APIs and service load balancers.

APIs provide a buffer between implementer and consumer, and load balancers provide a buffer between running instances of each service.

Decoupling components behind load balancers makes it easier to scale the programs that make up your service, since the size of any one program can be increased without adjusting or reconfiguring the other layers of your service.
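
A Kubernetes Service is that load-balancer buffer in practice: consumers connect to a stable name and port, while the set of pods behind it can change freely. A minimal sketch, with hypothetical names and ports:

```yaml
# service.yaml - a stable front for whatever pods match the selector
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp          # load-balances across all pods with this label
  ports:
    - port: 80          # the port consumers connect to
      targetPort: 8080  # the port the app actually listens on
```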

Scaling processes

Scaling is rather easy due to the immutable, declarative nature of Kubernetes, which was explained earlier.

Since the containers are immutable, the number of replicas is simply a number in the declarative configuration which can be changed whenever required.

Of course, a user can set up auto-scaling with Kubernetes too. But with auto-scaling, Kubernetes assumes that there are enough resources available.
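
As a hedged sketch, an autoscaler targeting the hypothetical Deployment from earlier might look like this:

```yaml
# hpa.yaml - add or remove replicas based on average CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```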

If there aren’t enough resources, a user will have to scale up the cluster itself. The platform makes this task easier too since every machine in the cluster is identical to every other machine and the apps themselves are decoupled from the machine by containers.

This way, adding extra resources is a matter of creating a new machine with the required binaries from a pre-baked image and joining it to the cluster.

Creating decoupled microservice architectures

When creating microservice architectures, many small teams each work on a single service that other teams consume in their own service implementations.

The aggregation of all these services ultimately provides the implementation of the overall product’s surface area.

The platform provides a few abstractions and APIs to make this work:

  • Pods, or groups of containers, combine container images developed by different teams into a single deployable unit.
  • Kubernetes Services provide load balancing, naming, and discovery to isolate one microservice from another.
  • Namespaces provide isolation and access control, so that each microservice can control the degree to which other services interact with it (sketched below).
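
A sketch of that isolation, using hypothetical team names: each team gets its own namespace, and a NetworkPolicy states exactly which other namespaces may reach its pods (enforced when the cluster's network plugin supports NetworkPolicy).

```yaml
# One namespace per team, plus a policy restricting who may connect
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-b
  namespace: team-a
spec:
  podSelector: {}   # applies to every pod in team-a
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: team-b   # only team-b may connect
```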

Essentially, decoupling the app container image from the machine allows different microservices to be colocated on the same machine without interfering with one another. This reduces the overhead and cost of microservice architectures.

Separation of concerns

[Image by Lukas Bieri from Pixabay]

An app developer relies on the SLA (service level agreement) delivered by the container orchestration API.

On the other hand, the reliability engineer responsible for the orchestration API focuses on delivering that SLA without worrying about the apps that run on top of it.

This separation of concerns, or decoupling, means that a small team running a Kubernetes cluster can support thousands of teams running their apps within that cluster.