Understanding bare metal Kubernetes

Anton Smith

on 27 January 2022

Bare metal Kubernetes is a powerful set of technologies that builds on the best ideas behind the public and private cloud, yet abstracts away some toilsome aspects related to virtualisation management and networking. For operators and users, it provides significant benefits, making it easier and faster to ship and maintain complex, distributed applications.

A typical server aka bare metal
Image credit: https://unsplash.com/@freeche

Bare metal Kubernetes has grown in popularity primarily due to the maturation of several key technologies. Those technologies centre on providing cloud-like semantics and interfaces for bare metal management and provisioning (for example, MAAS – Metal as a Service), container management and orchestration (provided by Kubernetes), and hybrid cloud technologies that allow coherent management of on- and off-premises servers (such as Juju).

But what exactly is bare metal Kubernetes?

Bare metal Kubernetes explained

As the name implies, bare metal Kubernetes combines two key technologies. The whole picture is easiest to understand by first examining each term separately.

Although combining the two might seem straightforward at first, the overlaps and touch-points between bare metal and Kubernetes introduce significant complexity. This section introduces and examines these components.

What is bare metal?

A trip down memory lane — virtual machines

It is easiest to understand the term bare metal in the context of Virtual Machines (VMs).

Sometime around the mid-2000s, virtual machines for production services became increasingly interesting (helped along by hardware virtualisation support in AMD and Intel CPUs). VMs allowed an entirely separate OS to be installed on top of an existing machine and OS, enabled primarily through the use of a hypervisor. One key benefit was that an application could now be ring-fenced into its own administrative and security domain. For example, a web server admin could be granted the superuser rights needed to administer the web server, without gaining access to the host machine or to services running in other VMs. Significant productivity gains came from decoupling the lifecycle of the underlying machine from the applications (in VMs), as well as the lifecycles of applications from each other.

Virtual machines can be conceptualised as machines within the machine, mimicking many of the aspects of a real machine.

In addition, VMs could be created, destroyed, and spun up or down independently of each other. This provided more flexibility and better overall resource utilisation on the physical machine.

Furthermore, popular hypervisors for managing VMs gained increasingly feature-rich management suites (including APIs) and spurred the creation of open source initiatives such as OpenStack. VMs enabled the public cloud and Infrastructure as a Service as we know them today: users from completely different organisations could share any excess capacity available on servers.

Bare metal

The advent of virtual machines clearly offered great advantages and, as a result, most services today run on top of a VM infrastructure. It became commonplace to refer to VMs simply as “machines” or “servers”, which made those terms ambiguous. The term “metal” has become shorthand for real, physical machines, metal being a primary physical component of servers, and so it is a good way to clearly differentiate them from VMs.

Today, there are many applications that require, or can benefit greatly from, direct access to the metal without the hypervisor layer in between.

What are containers?

Containers are a virtualisation technology that, rather than abstracting the hardware from the operating system running inside the virtualisation, isolates one or more processes in a lightweight, standardised environment. This allows users to package and quickly deploy entire applications alongside each other on the same kernel and hardware, while maintaining isolation among the workloads.
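
To make process isolation concrete, here is a minimal sketch (not from the original article) in Go of the Linux namespace primitives that container runtimes build on. It starts a shell in its own UTS, PID, and mount namespaces, so that, for example, changing the hostname inside that shell does not affect the host. It assumes Linux and root privileges; real runtimes add much more on top (cgroups, layered filesystems, seccomp profiles, and so on).

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Run a shell in new UTS, PID, and mount namespaces - the same
        // kernel building blocks that container runtimes use for isolation.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }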

It is at this point that Kubernetes enters the picture.

What is Kubernetes?

While containers provide application separation and benefits for packaging and deployment, they don’t provide any orchestration or management capabilities.

Google pioneered and created the open-source platform known today as Kubernetes, or K8s for short. It started as a simple container orchestration tool but has grown into the first universal cloud platform. It is one of the most significant advancements in IT since the public cloud came into being in 2009, and it has an unparalleled five-year 30% growth rate in both market revenue and overall adoption.
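
One way to see why Kubernetes gets described as a universal cloud platform is that an entire cluster is driven through a single declarative API. As a rough, hedged illustration (the kubeconfig path is just an assumption), the following Go sketch uses the official client-go library to connect to a cluster and list its nodes; on bare metal, each node is a physical machine.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load credentials from a kubeconfig file (path is an assumption).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/ubuntu/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Ask the API server for every node in the cluster.
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name)
        }
    }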

What is container orchestration?

Container orchestration is the automation of managing the lifecycle of containers, particularly in large, dynamic environments. It automates the deployment, networking, scaling, and availability of containerised workloads and services.

Containers are lightweight and usually ephemeral in nature, and it is possible to manage them by hand in small numbers. However, managing them at scale in production environments can pose a significant challenge without the automation that container orchestration platforms offer. Kubernetes has become the standard for container orchestration in the enterprise world.
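
As a concrete, hedged example of what that automation looks like from the user's side, the Go sketch below uses client-go to bump the replica count of a hypothetical Deployment named "web" in the default namespace. One scale request is all that is needed; the control plane then schedules, starts, and health-checks the extra containers on whichever machines have capacity.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Connect to the cluster via a kubeconfig file (path is an assumption).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/ubuntu/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        ctx := context.TODO()
        deployments := clientset.AppsV1().Deployments("default")

        // Read the current scale of the (hypothetical) "web" deployment ...
        scale, err := deployments.GetScale(ctx, "web", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // ... and ask for two more replicas; Kubernetes does the rest.
        scale.Spec.Replicas += 2
        if _, err := deployments.UpdateScale(ctx, "web", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Printf("requested %d replicas of web\n", scale.Spec.Replicas)
    }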

Containers being orchestrated – Image credit: https://unsplash.com/@chuttersnap

Why run bare metal Kubernetes?

There are several reasons why you would want to run a K8s cluster directly on bare metal.

Performance & predictability

All workloads have different performance profiles and requirements. Some critical workloads require predictable throughput and latency, particularly in health and safety, transport, and telecommunications.

Performance is the primary reason for running K8s clusters directly on bare metal. By stripping out the hypervisor, you can gain direct access to the hardware and avoid overheads for CPU, RAM, and storage.

Hardware compatibility and flexibility

VM guests can have most types of hardware passed through to them. However, specific types of hardware don’t virtualise well or at all. Removing the hypervisor is the best way to avoid this issue.

Many use cases require specific kernels. There are applications that either need direct access to the kernel or require hardware that doesn’t virtualise well.

Certain Telco use cases, such as baseband processing, require real-time or low latency kernels. Live video streaming with little or no time deviation between viewers also requires this. Specific hardware often provides accelerator functions. In such cases, having a hypervisor and two OSs in between the application and the hardware is unfeasible.

Security and control

Assuming single tenancy, fewer software components mean a smaller attack surface. Moreover, a tenant inside a VM has no way to control the patch levels or versions of the underlying hypervisor and its associated software; that control rests with whoever operates the hypervisor and kernel.

Arguably, managing your own host OS presents different security challenges. However, in the end, for those who want complete control of their stack from top to bottom, including upgrades, patching, and control of the underlying hardware, bare metal is the only solution.

Container isolation can introduce security concerns compared to VMs, particularly in multi-tenant situations. Tenants can instead use separate physical machines in order to provide better security.

A poll run by Canonical showed that approximately 55-60% of respondents chose “Control and security” as the top reason to build their own metal cloud.

Cost and operational complexity

Depending on the product used, a hypervisor can introduce commercial costs, and it is another layer to manage and maintain. Note that the overall picture depends heavily on whether or not you have a bare metal provisioning system, such as MAAS. Without MAAS, costs might actually increase with bare metal K8s.

A GPU. Although PCIe passthrough allows a GPU to be dedicated to a VM guest for GPGPU workloads, sharing GPUs between applications in different guests is still problematic.
Image credit: https://unsplash.com/@thomasfos

Bare metal K8s fits well in single-tenant installations. The hypervisor, and the security it provides, becomes unnecessary.

Wrap up

The definition of bare metal Kubernetes is as follows:

  • Running applications directly on bare metal instead of on virtual machines
  • Providing containers for application isolation and packaging
  • Using Kubernetes for orchestrating and managing containers and their associated applications
  • Providing application isolation, resilience, scalability, and enhanced performance, thanks to the removal of hypervisors, VMs, and their associated overhead

Bare metal Kubernetes solutions

Bare metal K8s is an exciting set of technologies. However, implementing bare metal K8s can be challenging. Canonical provides bare metal as a service tooling with MAAS. In addition, Canonical Kubernetes combines CNCF-compliant Kubernetes distributions with a rich tooling ecosystem and a lifecycle automation framework. Canonical Kubernetes integrates seamlessly with MAAS, easing the challenges of bare metal K8s lifecycle management.

Canonical has released an extensive whitepaper on bare metal Kubernetes that goes in depth into many of the aspects involved. The whitepaper includes solution overviews and is a valuable resource for building a deeper understanding.

You can also try out setting up your own bare metal K8s cluster by following the extensive video tutorial.
