Tue Feb 28

The Pros and Cons of Kubernetes

Container technology has grown rapidly and gained popularity among organizations as a preferred alternative to virtual machines. The ease of creating and deploying containers is a major factor behind this shift: containers offer a simpler, more direct approach to application deployment, reducing the time needed to manage and maintain applications.

Handling a simple application with just one or two containers is easy. Deploying containers to production at scale, however, can involve dozens or even hundreds of containers that are difficult to maintain and manage, and scaling these containerized applications becomes a formidable challenge.

Kubernetes addresses this problem effectively. It works as a container orchestrator that helps you manage your application's workloads efficiently; automating deployment, container management, and scaling are the main reasons Kubernetes was developed. Despite its numerous benefits, it comes with some tradeoffs.

Pros

Adopting Kubernetes provides several benefits.

Feature-rich

Kubernetes offers a wide range of features, including load balancing, health checks, and automated deployments, which are greatly beneficial for scaling and managing containerized applications. It also provides resource and storage management features for more fine-grained control over your application.

Portability

Kubernetes builds on container technology, which is inherently easy to move between environments. It packages containers, built with a runtime such as Docker, into pods that bundle an application together with its dependencies; a single pod can contain one or more containers.

Flexibility and modularity

Kubernetes is also highly customizable and can be configured to the user's specific needs. With the rich feature set mentioned previously, Kubernetes can manage applications no matter how complex they are. It was built with modularity in mind, so adding features or updating a deployment is straightforward, allowing for complex customizations and easier maintenance.

Open source

Although Kubernetes was originally developed by Google, it is now open source and maintained by the CNCF (Cloud Native Computing Foundation). This means anyone can use, modify, and distribute Kubernetes under the Apache License 2.0.

Scalability

Kubernetes has many useful features for scalability, such as load balancing and ingress. It also supports application scaling, which comes in two forms: vertical scaling and horizontal scaling.

Vertical scaling increases the computational resources allocated to your application, limited by the actual capacity of the node it is deployed on. Horizontal scaling instead creates additional pods so the workload can be distributed evenly across them. With an autoscaler, either kind of scaling can happen automatically.
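
The idea behind horizontal autoscaling can be sketched in a few lines of Python. This is a simplified version of the scaling rule the Horizontal Pod Autoscaler documents (desired = ceil(current × currentMetric / targetMetric)); the real controller adds tolerances and stabilization windows on top of it.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Simplified HPA rule: scale replicas proportionally to how far
    the observed metric is from its target, rounding up."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))  # 6
# 4 pods averaging 30% CPU against a 60% target -> scale in to 2
print(desired_replicas(4, 30, 60))  # 2
```

Rounding up means the autoscaler prefers slight over-provisioning to running below target capacity.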

Resource Optimization

Optimizing multiple pods at once is no longer an issue with Kubernetes. By default, pods scheduled on the same node share that node's resources. Under high traffic, with multiple pods deployed on the same node, a busy container is likely to consume more resources and slow down every other running pod or container. Resource requests and limits let you cap how much a single container can consume, so a busy container does not starve its neighbors.

For example, you can use requests and limits to control memory and CPU usage. With a request, a container is guaranteed a certain amount of resources (e.g. 1000 millicores of CPU and 500 mebibytes of RAM), while a limit sets the maximum resource threshold for that container (e.g. 2000 millicores of CPU and 750 mebibytes of RAM).

It is important to set both so that they fit within the node's actual resources and capabilities. Otherwise, the scheduler cannot place the pods due to insufficient resources, and deployment stalls with the pods stuck endlessly in 'Pending' status.
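
As a rough illustration, the figures above map onto a container's `resources` stanza. The dictionary below mirrors the shape of that stanza, and the small parsers (simplified sketches; real Kubernetes quantity parsing handles many more unit suffixes) check that each request stays within its limit:

```python
def parse_cpu(v: str) -> float:
    # "1000m" -> 1.0 cores; "2" -> 2.0 cores (millicore suffix only)
    return float(v[:-1]) / 1000 if v.endswith("m") else float(v)

def parse_mem_mi(v: str) -> float:
    # Only handles the "Mi" (mebibyte) suffix for this sketch
    assert v.endswith("Mi"), "sketch only supports Mi quantities"
    return float(v[:-2])

container = {
    "name": "web",
    "resources": {
        "requests": {"cpu": "1000m", "memory": "500Mi"},
        "limits":   {"cpu": "2000m", "memory": "750Mi"},
    },
}

res = container["resources"]
# A request above its limit is an invalid spec; check both dimensions.
assert parse_cpu(res["requests"]["cpu"]) <= parse_cpu(res["limits"]["cpu"])
assert parse_mem_mi(res["requests"]["memory"]) <= parse_mem_mi(res["limits"]["memory"])
```

The scheduler places pods based on requests; limits are enforced at runtime (CPU is throttled, memory overuse gets the container killed).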

Rapid and seamless deployment

Rapid deployment is probably the reason to consider Kubernetes in the first place. Kubernetes lets you deploy your application with no downtime: the updated version is deployed to the cluster first, and only then is the previous one terminated, giving you a seamless deployment experience. If errors occur in the current deployment, you can roll back to the previous one, which is very handy in unwanted circumstances.
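
This surge-then-terminate behavior can be illustrated with a toy Python simulation. It is only a sketch of the idea behind the default RollingUpdate strategy, not how the controller is actually implemented:

```python
def rolling_update(old_pods: list, new_prefix: str):
    """Toy rolling update: bring one replacement pod up before
    terminating one old pod, so the number of running pods never
    drops below the desired replica count."""
    pods = list(old_pods)
    desired = len(old_pods)
    min_running = desired
    for i in range(desired):
        pods.append(f"{new_prefix}-{i}")          # new pod becomes ready first
        pods.pop(0)                               # then the old pod terminates
        min_running = min(min_running, len(pods)) # track worst-case availability
    return pods, min_running

pods, min_running = rolling_update(["v1-0", "v1-1", "v1-2"], "v2")
print(pods)         # ['v2-0', 'v2-1', 'v2-2']
print(min_running)  # 3 -- capacity never dipped below the replica count
```

In real Kubernetes terms, this corresponds roughly to `maxSurge: 1` with `maxUnavailable: 0` in a Deployment's rolling update settings.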

Cons

While Kubernetes gives you tremendous advantages for deploying and scaling applications, it comes with several disadvantages as well.

Learning curve

Kubernetes can be very complex to learn and implement. Expertise is also required for troubleshooting and monitoring, which can be time-consuming and resource-intensive, and the plethora of features Kubernetes offers adds further complexity.

As a result, Kubernetes has a steep learning curve that makes it difficult to adopt. There are many terms you need to understand, and advanced knowledge of networking, storage, and containerization is required, since Kubernetes is a distributed system.

At the very least, you need to know the following terms to make it work.

  • Pod: the smallest deployable unit in Kubernetes; it can contain one or more containers.
  • Node: the machine (physical or virtual) that hosts pods.
  • Deployment: a set of instructions for running an application, combining objects such as pods, services, and more.
  • ConfigMaps and Secrets: objects for passing configuration to pods; ConfigMaps hold plain values, while Secrets hold sensitive ones (base64-encoded, not encrypted by default).
  • Volumes and persistent volumes: storage that can be shared between containers; persistent volumes outlive individual pods.
  • Service: a resource object that exposes your pods, or your network application, inside your Kubernetes cluster.

Many other terms will likely help you understand Kubernetes better, such as ReplicaSet, StatefulSet, CronJob, Job, probes, and Ingress.
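
To see how several of these terms fit together, here is a minimal Deployment expressed as a Python dictionary (the same structure you would normally write as YAML): the Deployment wraps a pod template, the pod template holds the containers, and the label selector ties the Deployment to the pods it manages.

```python
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # desired number of pods
        # The selector must match the pod template's labels,
        # or the API server rejects the Deployment.
        "selector": {"matchLabels": {"app": "web"}},
        "template": {  # the pod template the Deployment stamps out
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# Sanity check: the selector really does match the template labels.
assert (deployment["spec"]["selector"]["matchLabels"]
        == deployment["spec"]["template"]["metadata"]["labels"])
```

Serialized to YAML, this object is exactly what `kubectl apply -f` expects.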

Troubleshooting and problem investigation

Related to the learning curve, your understanding of networking and the underlying objects is critical for troubleshooting when something goes wrong. A problem can come from any angle: an issue with the node itself, networking, permissions, autoscaling, and so on. Kubernetes does have its own logging architecture, but it may not be sufficient to address every issue you encounter.

Resource overhead

Kubernetes requires a control plane (the master server) to manage the worker nodes. Unfortunately, running it carries significant overhead cost, so you could end up paying more. This may not suit small startups deploying small-scale applications, though it can be tremendously beneficial for large-scale operations, since scaling is so easy.

Misconfiguration & security

According to [a survey](https://developers.redhat.com/articles/2022/06/13/kubernetes-security-risks-keep-developers-night) by Red Hat, about 93% of 300 production-level Kubernetes users experienced at least one incident over the year, and 31% of respondents suffered losses as a result. About 53% of respondents reported misconfiguration incidents in their environment, with smaller percentages suffering from major vulnerabilities and other security incidents.

As the survey suggests, Kubernetes is a complex architecture to start with, which makes misconfiguration more likely and prevalent. This is especially the case if you are a beginner, inexperienced, or lack a solid grasp of the underlying concepts.

As a distributed system, Kubernetes exposes a large attack surface. In the cloud native world where Kubernetes is meant to run, security can be thought of in layers: Cloud, Cluster, Container, and Code.

At the outermost layer, Cloud security depends on the best practices of the cloud provider you choose. For the infrastructure itself, there are many areas of concern, such as preventing public access to the API server and limiting access to etcd (the Kubernetes datastore) so that only the control plane can reach it.

From the cluster standpoint, there are two areas of concern: securing the configurable cluster components and securing the applications running in the cluster. At the container layer, you need proper container isolation and scanning for container vulnerabilities. At the code layer, you need to prevent dynamic probing attacks by running automated tools against well-known attacks such as CSRF, XSS, and so on.
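
As a small sketch of how container-level misconfigurations can be caught early, the hypothetical lint function below flags a few common issues (privileged mode, running as root, missing resource constraints). Real policy tools in this space check far more, but the shape of the checks is similar:

```python
def lint_container(container: dict) -> list:
    """Toy lint over a container spec dict; returns a list of findings.
    The keys mirror a Kubernetes container's securityContext/resources."""
    findings = []
    sc = container.get("securityContext", {})
    if sc.get("privileged"):
        findings.append("container runs privileged")
    if sc.get("runAsUser") == 0 or sc.get("runAsNonRoot") is False:
        findings.append("container may run as root")
    if "resources" not in container:
        findings.append("no resource requests/limits set")
    return findings

# A privileged container with no resource constraints trips two checks.
print(lint_container({"securityContext": {"privileged": True}}))
```

Running checks like these in CI, before a manifest ever reaches the cluster, is one way to blunt the misconfiguration risk the survey describes.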

All of this complexity can easily lead to misconfiguration, allowing unauthorized access to sensitive information or execution of unauthorized actions that can put your entire business at risk.

Conclusion

Kubernetes is a powerful container orchestration platform that lets you deploy and scale applications quickly and efficiently. Support for a wide array of popular cloud native providers makes it a great choice for organizations, and features such as automatic scaling, load balancing, and rolling updates help simplify the management of containerized applications.

However, be aware of the steep learning curve and complexity it carries, the potential for security breaches and unoptimized resource allocation, and the troubleshooting difficulties you might face down the road. It is therefore important for organizations to carefully evaluate their needs and weigh the trade-offs before adopting Kubernetes as their container orchestration platform.