Kubeadm control plane

Using the Kubeadm control plane type to manage a control plane provides several ways to upgrade control plane machines.

Upgrading workload clusters

At a high level, fully upgrading a workload cluster consists of first upgrading the control plane and then upgrading the worker machines.

Upgrading the control plane machines

How to upgrade the underlying machine image

To upgrade the control plane machines' underlying machine images, the MachineTemplate resource referenced by the KubeadmControlPlane must be changed. Since MachineTemplate resources are immutable, the recommended approach is to:

  1. Copy the existing MachineTemplate.
  2. Modify the values that need changing, such as instance type or image ID.
  3. Create the new MachineTemplate on the management cluster.
  4. Modify the existing KubeadmControlPlane resource to reference the new MachineTemplate resource.

The final step will trigger a rolling update of the control plane using the new values found in the MachineTemplate.
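
As a minimal sketch of these steps, assuming the AWS provider (CAPA) with an AWSMachineTemplate; the resource names and instance type below are illustrative:

---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSMachineTemplate
metadata:
  name: control-plane-template-v2   # copy of the existing template under a new name
  namespace: default
spec:
  template:
    spec:
      instanceType: m5.xlarge       # the modified value
...
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
  namespace: default
spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: control-plane-template-v2 # updated to reference the new template
...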

How to upgrade the Kubernetes control plane version

To upgrade the Kubernetes control plane version, modify the KubeadmControlPlane resource’s Spec.Version field. Depending on the provider, this will likely also upgrade the underlying machine image. The change triggers a rolling upgrade of the control plane.
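
For example, a version bump is a one-field change (the versions here are illustrative):

---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
  namespace: default
spec:
  version: v1.18.3   # bumped, e.g. from v1.17.6; triggers a rolling upgrade
...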

Some infrastructure providers, such as CAPA, require that any explicitly specified machine image match the Kubernetes version set in the KubeadmControlPlane spec. To trigger only a single upgrade in that case, create the new MachineTemplate first, then modify both the Version and InfrastructureTemplate in a single transaction.
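
A sketch of that single transaction, again assuming CAPA with illustrative names and versions:

---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
  namespace: default
spec:
  version: v1.18.3                         # new Kubernetes version
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: control-plane-template-v1-18-3   # template whose image matches the new version
...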

Upgrading workload machines managed by a MachineDeployment

Upgrades are not limited to the control plane. This section is not specific to the Kubeadm control plane, but covers the final step in fully upgrading a Cluster API managed cluster.

It is recommended to manage workload machines with one or more MachineDeployments. MachineDeployments will transparently manage MachineSets and Machines, allowing for a seamless scaling experience. A modification to the MachineDeployment’s spec will begin a rolling update of the workload machines. Follow these instructions for changing the template of an existing MachineDeployment.
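
For instance, a minimal sketch of a template change that bumps the workers’ Kubernetes version (names and versions are illustrative):

---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
  namespace: default
spec:
  clusterName: my-cluster
  template:
    spec:
      version: v1.18.3   # changing the template triggers a rolling update
...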

For a more in-depth look at how MachineDeployments manage scaling events, take a look at the MachineDeployment controller documentation and the MachineSet controller documentation.

Adopting existing machines into KubeadmControlPlane management

WARNING: If you are adopting Machines that were created on a v1alpha2 cluster, you must use a version with the fix for #3144 to perform the adoption or your cluster will be broken.

If your cluster has existing machines labeled with cluster.x-k8s.io/control-plane, you may opt in to management of those machines by creating a new KubeadmControlPlane object and updating the associated Cluster object’s controlPlaneRef like so:

---
apiVersion: "cluster.x-k8s.io/v1alpha3"
kind: Cluster
...
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: controlplane
    namespace: default
...
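
The new KubeadmControlPlane itself might look like the following minimal sketch; the replicas, version, and infrastructure template must line up with the Machines being adopted (all names and versions here are illustrative):

---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
  namespace: default
spec:
  replicas: 3
  version: v1.17.6
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: control-plane-template
  kubeadmConfigSpec:     # should reflect the config the Machines were created with
    initConfiguration: {}
    joinConfiguration: {}
...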

Caveats:

  • The KCP controller will refuse to adopt any control plane Machines not bootstrapped with the kubeadm bootstrapper.
  • The KCP controller may immediately begin upgrading Machines post-adoption if they’re out of date.
  • The KCP controller attempts to behave intelligently when adopting existing Machines, but because the bootstrapping process sets various fields in a machine’s KubeadmConfig, it is not always obvious what the original user-supplied KubeadmConfig for that machine would have been. The controller attempts to guess this intent so as not to replace Machines unnecessarily; if it guesses wrongly, the consequence is that the KCP controller will effect an “upgrade” to its current config. For full details, see SemanticMerge in the kubeadm bootstrapper’s api/equality package.
  • If the cluster’s PKI materials were generated by an initial KubeadmConfig reconcile, they’ll be owned by the KubeadmConfig bound to that machine. The adoption process re-parents these resources to the KCP so they’re not lost during an upgrade, but deleting the KCP post-adoption will destroy those materials.
  • The ClusterConfiguration is not currently reconciled with its corresponding ConfigMap in the workload cluster, and kubeadm considers the ConfigMap authoritative. These fields on the KCP are effectively ignored; most notably, they include:
    • kubeadmConfigSpec.clusterConfiguration.apiServer.extraArgs
    • kubeadmConfigSpec.clusterConfiguration.controllerManager.extraArgs
    • kubeadmConfigSpec.clusterConfiguration.scheduler.extraArgs
    • Anything underneath kubeadmConfigSpec.clusterConfiguration.etcd
    • etc.

Kubeconfig management

KCP will generate and manage the admin kubeconfig for clusters. The admin user’s client certificate is created with a validity period of one year, and will be automatically regenerated when the cluster is reconciled with less than 6 months of validity remaining.
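
The generated kubeconfig lives in a secret on the management cluster. As a sketch of its shape, assuming a cluster named my-cluster in the default namespace (Cluster API names these secrets <cluster-name>-kubeconfig and stores the kubeconfig under the value key):

---
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-kubeconfig    # <cluster-name>-kubeconfig
  namespace: default
type: cluster.x-k8s.io/secret
data:
  value: <base64-encoded admin kubeconfig>
...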