First developed by Google, Kubernetes (pronounced koo-ber-net-eez) is an open source platform designed to help manage Linux-based containerised services and workloads. Google open-sourced the project, which automates application deployment, in 2014, before handing stewardship to the Cloud Native Computing Foundation.
Before you try to get to grips with Kubernetes, though, it’s crucial to understand containerisation first. Containerisation, the process of running apps and services in isolated environments, may sound like a straightforward concept, but the underlying processes render this a much more complex undertaking.
What is containerisation?
The process of containerisation involves placing all the elements that create an app – from config files and libraries to runtimes – in one isolated environment, known as a container. Since all the dependencies are in a single location, the container itself can be moved from location to location without anything being affected. The container, for example, can be moved from an on-prem to a cloud environment, and the other way around, without the compatibility and performance headaches that would normally arise.
The true power of containers is that they can be linked together to create something greater, even if they aren't in the same location. This is because containers can communicate with one another across environments to form a complete application without having to employ a single virtualised environment or operating system.
This form of software deployment has become increasingly popular in recent years, but it has also proven increasingly complex, especially for businesses that wish to deploy multiple containers across several machines – both physical and virtual machines (VMs). Deploying containers at this scale can demand manual processing and continuous management.
This may not be such a significant barrier when engaging in containerisation on a simple level, but as development scales up, several containerised applications may be needed to work in tandem to power a business’ services. When containerisation becomes this complex, the number of containers may grow exponentially and become impossible to manage.
What is Kubernetes?
Kubernetes seeks to eliminate this. Originally developed by a team at Google, a company that today has everything running in containers, Kubernetes serves as an orchestration tool, giving users an overview of their container deployments. This makes it far easier to operate generally as well as making it possible to have hybrid, public and private cloud containers running simultaneously.
Kubernetes has an array of tools that make all of this possible, including the option to sort containers into groups, or 'pods', which then makes it easier to serve the applications with the necessary infrastructure, such as storage and networking capabilities. It handles a lot of the optimisation work so that businesses can focus on what they want their services to achieve, rather than worry about whether apps are talking to each other.
It's also able to optimise your hardware to ensure the correct amount of resources is being applied to each application, and to add or remove resources depending on whether you want to scale up or down. Automated health checks also mean that errors can be corrected without human intervention, and it has provisions to roll out updates to containers without downtime.
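In practice, the scaling and rolling-update behaviour described above is driven through Kubernetes' command-line tool, kubectl. The sketch below assumes a running cluster and a Deployment named "web" – the name and image are illustrative, not part of any real setup:

```shell
# Scale a hypothetical Deployment "web" up to five pod replicas
kubectl scale deployment web --replicas=5

# Trigger a rolling update by changing the container image;
# Kubernetes replaces pods gradually, avoiding downtime
kubectl set image deployment/web web=nginx:1.25

# Watch the rollout progress until it completes
kubectl rollout status deployment/web
```

Because Kubernetes is declarative, each command updates the desired state and the platform works out how to converge on it, restarting or replacing containers as needed.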
Perhaps most importantly, Kubernetes is not tied to a specific environment: it can operate wherever your containers are running, whether that's a public cloud, a private cloud, a virtualised system, or even a single laptop, and it can combine all of these together.
Speaking at a VMworld conference in 2020, VMware CEO Pat Gelsinger took time to highlight Kubernetes as “the de facto API for today's multi-cloud world”.
“Much like Java, two decades ago, Kubernetes is a rare technology that brings everyone together,” he said.
Who owns Kubernetes?
Although it developed the system, Google would eventually donate the Kubernetes platform to the Cloud Native Computing Foundation in 2015, releasing it into the open source community to be used freely by anyone.
Although it primarily works with Docker, a program that builds containers, Kubernetes will work with any platform that conforms to the Open Container Initiative (OCI) standards that define container formats. (Note: Docker has some higher-level orchestration tools that essentially perform the same functions as Kubernetes.)
As Kubernetes is an open source technology, there's no single service available with dedicated support. The technology has essentially been adapted by various vendors into their own flavours, whether that's Google, AWS, or Red Hat, and choosing one will depend on the services you currently use or want as part of a contract.
Other providers include Docker, Canonical, CoreOS, Mirantis, and Rancher Labs. The latter was acquired by Germany-based Linux distribution company SUSE in a deal thought to be worth between $600 million and $700 million.
Rancher Labs, founded in 2014 and currently employing more than 200 people, provides open-source software that allows organisations to deploy and manage Kubernetes at scale.
The Cupertino-based startup claims to be the "most widely used enterprise Kubernetes platform", boasting 30,000 active users. Its customer base includes American Express, Comcast, Deutsche Bahn and Viasat.
What is the language of Kubernetes?
In order to fully understand Kubernetes, you need to learn the vernacular that comes with it.
Each deployment follows the same basic hierarchy: Cluster > Master > Nodes > Pods
Let's start at the top. Kubernetes is deployed in a 'cluster' – this is a collective term referring to both the group of machines that are running the platform and the containers that are managed by them.
Within each cluster there are multiple 'nodes' – these are normally the machines that the containers are running on, whether that's virtualised or physical, and multiple containers may be hosted on a single node (with each container hosting an application).
Each 'cluster' must always have a 'master' (now more commonly called the control plane), which acts as a management window from which admins can interact with the cluster. This includes scheduling and deploying new containers within the nodes.
Pods are scheduled onto nodes – 'pod' is the term given to an instance of an application running within the cluster, made up of one or more containers. This means that users are able to treat all the individual containers supporting an application as a single entity.
Pods can be best thought of as the basic building block within Kubernetes, and are created based on the needs of the user.
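As a concrete sketch of this hierarchy, a minimal pod can be declared in YAML and submitted to a running cluster with kubectl. Everything here – the pod name, container name and image – is illustrative rather than taken from any real deployment:

```shell
# Assumes kubectl is configured against a running cluster.
# The heredoc below is a minimal pod manifest: one pod, one container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-web        # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25   # the single container this pod wraps
    ports:
    - containerPort: 80
EOF

# List pods to confirm the cluster has scheduled it onto a node
kubectl get pods
```

In real deployments, pods are rarely created directly like this; higher-level objects such as Deployments create and replace them automatically, which is what makes the self-healing behaviour described earlier possible.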
How in-demand are Kubernetes skills?
In the past few years, containerisation has become more and more popular within app deployment, a trend that is also mirrored in the job market. In 2018, demand for developers and engineers with experience in Kubernetes reached new heights when IT Jobs Watch registered an almost eight-fold increase in these kinds of roles in just two years. At the time, Josh Kirkwood, CyberArk DevOps Security Lead, said that Kubernetes had "become a massive money word". He added that "these figures show that DevOps teams are seeking more skills to help them manage and deploy applications at scale".
Although many may have forgotten that Kubernetes has only been around for seven years, it has become a staple of the DevOps industry. Last year, IBM made headlines when it posted a job advert requiring a "minimum" of 12 years' experience in Kubernetes, which included deploying microservices and other platforms, "hands-on" experience setting up platforms and managing secrets securely, as well as knowledge of container orchestration. According to several Twitter users, the requirements for the role were rather outlandish, especially since the very first GitHub post about Kubernetes originated on 7 June 2014.
Learning and gaining experience with Kubernetes has also become more accessible as demand for these skills has grown. In 2021, Google Cloud announced it would offer free training for artificial intelligence (AI), multi-cloud services, machine learning, and data analytics, including routes to foundational certificates. One course asks learners to demonstrate core infrastructure skills, such as deploying a VM, writing cloud shell commands, and running applications on Kubernetes.
Kubernetes.io includes a wealth of information on how to build a career in the field, with options for training and certification. Many of these pathways are free through edX, while budding professionals can also take a number of paid-for certifications and qualifications with the Linux Foundation.
How much does Kubernetes cost?
Different providers offer their services at marginally different rates, although the standard rate is in the region of $0.10 per hour for each cluster, charged in one-second increments. The trouble with running Kubernetes, though, is that many businesses rely on it increasingly heavily without fully tracking how much they spend, which means price fluctuations often go under the radar.
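To show how that headline rate compounds, here is a rough back-of-the-envelope calculation – the cluster count and hours are assumptions, not figures from any provider:

```shell
# Hypothetical estate: three clusters at $0.10/hour, running a full month.
clusters=3
rate_per_hour=0.10
hours_per_month=730   # ~8,760 hours a year divided by 12, a common billing approximation

# awk handles the floating-point arithmetic portably
awk -v c="$clusters" -v r="$rate_per_hour" -v h="$hours_per_month" \
    'BEGIN { printf "$%.2f per month\n", c * r * h }'
# prints "$219.00 per month"
```

Small per-hour rates like this are easy to overlook, but they scale linearly with cluster count, which is why untracked cluster sprawl shows up as the budget surprises described above.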
Research published in January 2023, for example, suggests 10% of cloud developers have experienced a 50% surge in annual spending, with the majority experiencing a 25% hike. More than half of the respondents to the Civo survey (57%) reported an increase in the number of Kubernetes clusters their organisations run in the last 12 months. Complementary research published in June 2021 suggested that Kubernetes-related costs were rising for businesses, on top of growing usage.
Thankfully, there are several measures businesses can take to reduce Kubernetes costs, especially given the business imperative to cut costs across the board. These steps begin with the use of cost analysis tools like OpenCost or Kubecost, and also include evaluating different providers and reducing the number of clusters you run.
Dale Walker is the Managing Editor of ITPro, and its sibling sites CloudPro and ChannelPro. Dale has a keen interest in IT regulations, data protection, and cyber security. He spent a number of years reporting for ITPro from numerous domestic and international events, including IBM, Red Hat, Google, and has been a regular reporter for Microsoft's various yearly showcases, including Ignite.