The Cluster API Book

Note: Impatient readers may head straight to Quick Start.

Using Cluster API v1alpha1? Check the legacy documentation

What is the Cluster API?

The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.

Goals

  • To manage the lifecycle (create, scale, upgrade, destroy) of Kubernetes-conformant clusters using a declarative API.
  • To work in different environments, both on-premises and in the cloud.
  • To define common operations, provide a default implementation, and provide the ability to swap out implementations for alternative ones.
  • To reuse and integrate existing ecosystem components rather than duplicating their functionality (e.g. node-problem-detector, cluster autoscaler, SIG-Multi-cluster).
  • To provide a transition path for Kubernetes lifecycle products to adopt Cluster API incrementally. Specifically, existing cluster lifecycle management tools should be able to adopt Cluster API in a staged manner, over the course of multiple releases, or even adopting a subset of Cluster API.

Non-goals

  • To add these APIs to Kubernetes core (kubernetes/kubernetes).
    • This API should live in a namespace outside the core and follow the best practices defined by api-reviewers, but is not subject to core-api constraints.
  • To manage the lifecycle of infrastructure unrelated to the running of Kubernetes-conformant clusters.
  • To force all Kubernetes lifecycle products (kops, kubespray, GKE, AKS, EKS, IKS etc.) to support or use these APIs.
  • To manage non-Cluster API provisioned Kubernetes-conformant clusters.
  • To manage a single cluster spanning multiple infrastructure providers.
  • To configure a machine at any time other than create or upgrade.
  • To duplicate functionality that exists or is coming to other tooling, e.g., updating kubelet configuration (c.f. dynamic kubelet configuration), or updating apiserver, controller-manager, scheduler configuration (c.f. component-config effort) after the cluster is deployed.

Community, discussion, contribution, and support

  • Chat with us on the Kubernetes Slack in the #cluster-api channel
  • Subscribe to the SIG Cluster Lifecycle Google Group for access to documents and calendars
  • Participate in the conversations on Kubernetes Discuss
  • Join our Cluster API working group sessions where we share the latest project news and demos, answer questions, and triage issues
  • Provider implementers office hours where you can ask questions related to developing providers for Cluster API
    • Weekly on Tuesdays @ 12:00 PT (Zoom) and Wednesdays @ 15:00 CET (Zoom)
    • Previous meetings: [ notes ]

Pull Requests and feedback on issues are very welcome! See the issue tracker if you’re unsure where to start, especially the Good first issue and Help wanted tags, and also feel free to reach out to discuss.

See also: our own contributor guide and the Kubernetes community page.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Quick Start

In this tutorial we’ll cover the basics of how to use Cluster API to create one or more Kubernetes clusters.

Installation

Common Prerequisites

  • Install and setup kubectl in your local environment

Install and/or configure a kubernetes cluster

Cluster API requires an existing Kubernetes cluster accessible via kubectl; during the installation process the Kubernetes cluster will be transformed into a management cluster by installing the Cluster API provider components, so it is recommended to keep it separated from any application workload.

It is a common practice to create a temporary, local bootstrap cluster which is then used to provision a target management cluster on the selected infrastructure provider.

Choose one of the options below:

  1. Existing Management Cluster

For production use-cases a “real” Kubernetes cluster should be used with appropriate backup and DR policies and procedures in place.

export KUBECONFIG=<...>
  2. Kind

kind can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.

kind create cluster

Test to ensure the local kind cluster is ready:

kubectl cluster-info

Install clusterctl

The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.

Install clusterctl binary with curl on linux

Download the latest release; for example, to download version v0.3.3 on Linux, type:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.3/clusterctl-linux-amd64 -o clusterctl

Make the clusterctl binary executable.

chmod +x ./clusterctl

Move the binary into your PATH.

sudo mv ./clusterctl /usr/local/bin/clusterctl

Test to ensure the version you installed is up-to-date:

clusterctl version
Install clusterctl binary with curl on macOS

Download the latest release; for example, to download version v0.3.3 on macOS, type:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.3/clusterctl-darwin-amd64 -o clusterctl

Make the clusterctl binary executable.

chmod +x ./clusterctl

Move the binary into your PATH.

sudo mv ./clusterctl /usr/local/bin/clusterctl

Test to ensure the version you installed is up-to-date:

clusterctl version

Initialize the management cluster

Now that we’ve got clusterctl installed and all the prerequisites in place, let’s transform the Kubernetes cluster into a management cluster by using clusterctl init.

The command accepts as input a list of providers to install; when executed for the first time, clusterctl init automatically adds to the list the cluster-api core provider, and if unspecified, it also adds the kubeadm bootstrap and kubeadm control-plane providers.

Initialization for common providers

Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before getting started with Cluster API.

Download the latest binary of clusterawsadm from the AWS provider releases and make sure to place it in your PATH.

The clusterawsadm command line utility assists with identity and access management (IAM) for Cluster API Provider AWS.

export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.

# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm alpha bootstrap create-stack

# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)

# Finally, initialize the management cluster
clusterctl init --infrastructure aws

See the AWS provider prerequisites document for more details.

For more information about authorization, AAD, or requirements for Azure, visit the Azure provider prerequisites document.

export AZURE_SUBSCRIPTION_ID=<SubscriptionId>

# Create an Azure Service Principal and paste the output here
export AZURE_TENANT_ID=<Tenant>
export AZURE_CLIENT_ID=<AppId>
export AZURE_CLIENT_SECRET=<Password>

export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"

# Finally, initialize the management cluster
clusterctl init --infrastructure azure

If you are planning to test Cluster API locally using the Docker infrastructure provider, please follow the additional steps described in the developer instructions page.

# Create the base64 encoded credentials by catting your credentials json.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )

# Finally, initialize the management cluster
clusterctl init --infrastructure gcp
# The username used to access the remote vSphere endpoint
export VSPHERE_USERNAME="vi-admin@vsphere.local"
# The password used to access the remote vSphere endpoint
# You may want to set this in ~/.cluster-api/clusterctl.yaml so your password is not in
# bash history
export VSPHERE_PASSWORD="admin!23"

# Finally, initialize the management cluster
clusterctl init --infrastructure vsphere

For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere project.

# Initialize the management cluster
clusterctl init --infrastructure openstack

Please visit the Metal3 project.

The output of clusterctl init is similar to this:

Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.0" TargetNamespace="capa-system"

Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -

Create your first workload cluster

Once the management cluster is ready, you can create your first workload cluster.

Preparing the workload cluster configuration

The clusterctl config cluster command returns a YAML template for creating a workload cluster.

Required configuration for common providers

Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before configuring a cluster with Cluster API.

Download the latest binary of clusterawsadm from the AWS provider releases and make sure to place it in your PATH.

export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=default
# Select instance types
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
export AWS_NODE_MACHINE_TYPE=t3.large

See the AWS provider prerequisites document for more details.

export CLUSTER_NAME="capi-quickstart"
# Name of the virtual network in which to provision the cluster.
export AZURE_VNET_NAME=${CLUSTER_NAME}-vnet
# Name of the resource group to provision into
export AZURE_RESOURCE_GROUP=${CLUSTER_NAME}
# Name of the Azure datacenter location
export AZURE_LOCATION="centralus"
# Select machine types
export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
# Generate SSH key.
# If you want to provide your own key, skip this step and set AZURE_SSH_PUBLIC_KEY to your existing public key.
SSH_KEY_FILE=.sshkey
rm -f "${SSH_KEY_FILE}" 2>/dev/null
ssh-keygen -t rsa -b 2048 -f "${SSH_KEY_FILE}" -N '' 1>/dev/null
echo "Machine SSH key generated in ${SSH_KEY_FILE}"
export AZURE_SSH_PUBLIC_KEY=$(cat "${SSH_KEY_FILE}.pub" | base64 | tr -d '\r\n')

For more information about authorization, AAD, or requirements for Azure, visit the Azure provider prerequisites document.

If you are planning to test Cluster API locally using the Docker infrastructure provider, please follow the additional steps described in the developer instructions page.

See the GCP provider for more information.

It is required to use official CAPV machine images for your vSphere VM templates. See uploading CAPV machine images for instructions on how to do this.

# The vCenter server IP or FQDN
export VSPHERE_SERVER="10.0.0.1"
# The vSphere datacenter to deploy the management cluster on
export VSPHERE_DATACENTER="SDDC-Datacenter"
# The vSphere datastore to deploy the management cluster on
export VSPHERE_DATASTORE="vsanDatastore"
# The VM network to deploy the management cluster on
export VSPHERE_NETWORK="VM Network"
# The vSphere resource pool for your VMs
export VSPHERE_RESOURCE_POOL="*/Resources"
# The VM folder for your VMs. Set to "" to use the root vSphere folder
export VSPHERE_FOLDER="vm"
# The VM template to use for your VMs
export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
# The VM template to use for the HAProxy load balancer of the management cluster
export VSPHERE_HAPROXY_TEMPLATE="capv-haproxy-v0.6.0-rc.2"
# The public ssh authorized key on all machines
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."

clusterctl init --infrastructure vsphere

For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere getting started guide.

A Cluster API compatible image must be available in your OpenStack. For instructions on how to build a compatible image see image-builder. Depending on your OpenStack and underlying hypervisor, additional image-builder options might be of interest.

To see all required OpenStack environment variables execute:

clusterctl config cluster --infrastructure openstack --list-variables capi-quickstart

The following script can be used to export some of them:

wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc <path/to/clouds.yaml> <cloud>

A full configuration reference can be found in configuration.md.

Please visit the Metal3 getting started guide.

Generating the cluster configuration

For the purpose of this tutorial, we’ll name our cluster capi-quickstart.

clusterctl config cluster capi-quickstart --kubernetes-version v1.17.3 --control-plane-machine-count=3 --worker-machine-count=3 > capi-quickstart.yaml

Creates a YAML file named capi-quickstart.yaml with a predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc.

The file can then be modified using your editor of choice.

See [clusterctl config cluster] for more details.

Apply the workload cluster

When ready, run the following command to apply the cluster manifest.

kubectl apply -f capi-quickstart.yaml

The output is similar to this:

cluster.cluster.x-k8s.io/capi-quickstart created
awscluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created

Accessing the workload cluster

The cluster will now start provisioning. You can check its status with:

kubectl get cluster --all-namespaces

To verify the first control plane is up:

kubectl get kubeadmcontrolplane --all-namespaces

You should see output similar to this:

NAME                              READY   INITIALIZED   REPLICAS   READY REPLICAS   UPDATED REPLICAS   UNAVAILABLE REPLICAS
capi-quickstart-control-plane             true          3                           3                  3

After the first control plane node is up and running, we can retrieve the workload cluster Kubeconfig:

kubectl --namespace=default get secret/capi-quickstart-kubeconfig -o jsonpath={.data.value} \
  | base64 --decode \
  > ./capi-quickstart.kubeconfig

Deploy a CNI solution

Calico is used here as an example.

kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.12/manifests/calico.yaml

After a short while, our nodes should be running and in the Ready state. Let’s check their status using kubectl get nodes:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes

Azure does not currently support Calico networking. As a workaround, it is recommended that Azure clusters use the Calico spec below that uses VXLAN.

kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/master/templates/addons/calico.yaml

After a short while, our nodes should be running and in the Ready state. Let’s check their status using kubectl get nodes:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes

Next steps

See the clusterctl documentation for more detail about clusterctl supported actions.

Concepts

Management cluster

The cluster where one or more Infrastructure Providers run, and where resources (e.g. Machines) are stored. Typically referred to when you are provisioning multiple clusters.

Workload/Target Cluster

A cluster whose lifecycle is managed by the Management cluster.

Infrastructure provider

A source of computational resources (e.g. machines, networking, etc.). Examples for cloud include AWS, Azure, Google, etc.; for bare metal include VMware, MAAS, metal3.io, etc. When there is more than one way to obtain resources from the same infrastructure provider (e.g. EC2 vs. EKS) each way is referred to as a variant.

Bootstrap provider

The bootstrap provider is responsible for (usually by generating cloud-init or similar):

  1. Generating the cluster certificates, if not otherwise specified
  2. Initializing the control plane, and gating the creation of other nodes until it is complete
  3. Joining master and worker nodes to the cluster

Control plane

The control plane (sometimes referred to as master nodes) is a set of services that serve the Kubernetes API and reconcile desired state through the control-loops.

  • Machine based control planes are the most common deployment model and are used by tools like kubeadm and kubespray. Dedicated machines are provisioned running static pods for the control plane components such as kube-apiserver, kube-controller-manager and kube-scheduler.

  • Pod based deployments require an external hosting cluster; the control plane is deployed using standard Deployment and StatefulSet objects, and the API is then exposed using a Service.

  • External control planes are offered and controlled by some system other than Cluster API (e.g., GKE, AKS, EKS, IKS).

As of v1alpha2, Machine based is the only supported Cluster API control plane type.

Custom Resource Definitions (CRDs)

Machine

A “Machine” is the declarative spec for a Node, as represented in Kubernetes core. If a new Machine object is created, a provider-specific controller will handle provisioning and installing a new host to register as a new Node matching the Machine spec. If the Machine’s spec is updated, a provider-specific controller is responsible for updating the Node in-place or replacing the host with a new one matching the updated spec. If a Machine object is deleted, the corresponding Node should have its external resources released by the provider-specific controller, and should be deleted as well.

Fields like the kubelet version are modeled as fields on the Machine’s spec. Any other information that is provider-specific, though, is part of the InfraProviderRef and is not portable between different providers.
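
For illustration only, a minimal v1alpha3 Machine might look like the sketch below; the AWS provider and all resource names are assumptions, and the referenced KubeadmConfig and AWSMachine objects are hypothetical.

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-machine
spec:
  # The Cluster this Machine belongs to
  clusterName: my-cluster
  # Kubernetes (kubelet) version for the resulting Node
  version: v1.17.3
  # Bootstrap configuration used to turn the host into a Node
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfig
      name: my-machine-config
  # Provider-specific infrastructure for the host
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachine
    name: my-machine-aws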

Machine Immutability (In-place Upgrade vs. Replace)

From the perspective of Cluster API, all machines are immutable: once they are created, they are never updated (except for labels, annotations and status), only deleted.

For this reason, it is recommended to use MachineDeployments, which handle changes to machines by replacing them, in the same way regular Deployments handle changes to the podSpec.

MachineDeployment

MachineDeployments work similarly to regular Pod Deployments, reconciling changes to a machine spec by rolling out changes between two MachineSets, the old and the newly updated one.
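
A minimal sketch of a v1alpha3 MachineDeployment, assuming the AWS provider; the names and the referenced KubeadmConfigTemplate and AWSMachineTemplate objects are hypothetical:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
spec:
  clusterName: my-cluster
  replicas: 3
  # The selector must match the labels set on the Machine template below
  selector:
    matchLabels:
      nodepool: nodepool-0
  template:
    metadata:
      labels:
        nodepool: nodepool-0
    spec:
      clusterName: my-cluster
      version: v1.17.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: my-cluster-md-0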

MachineSet

MachineSets work similarly to regular Pod ReplicaSets. MachineSets are not meant to be used directly; they are the mechanism MachineDeployments use to reconcile desired state.

MachineHealthCheck

A “MachineHealthCheck” defines a set of conditions for Nodes which allow the user to specify when a Node should be considered unhealthy. If the Node matches the unhealthy conditions for a given user configured time, the MachineHealthCheck initiates remediation of the Node.

Remediation of Nodes is performed by deleting the Machine that created the Node. MachineHealthChecks will only remediate Nodes if they are owned by a MachineSet, this ensures that the Kubernetes cluster does not lose capacity, as the MachineSet will create a new Machine to replace the failed Machine.

BootstrapData

BootstrapData contains the machine or node role specific initialization data (usually cloud-init) used by the infrastructure provider to bootstrap a machine into a node.

Personas

This document describes the personas for the Cluster API 1.0 project as driven from use cases.

We are marking a “proposed priority for project at this time” per use case. This is not intended to say that these use cases aren’t awesome or important. They are intended to indicate where we, as a project, have received a great deal of interest, and as a result where we think we should invest right now to get the most users for our project. If interest grows in other areas, they will be elevated. And, since this is an open source project, if you want to drive feature development for a less-prioritized persona, we absolutely encourage you to join us and do that.

Use-case driven personas

Service Provider: Managed Kubernetes

Managed Kubernetes is an offering in which a provider is automating the lifecycle management of Kubernetes clusters, including full control planes that are available to, and used directly by, the customer.

Proposed priority for project at this time: High

There are several projects from several companies that are building out proposed managed Kubernetes offerings (Project Pacific’s Kubernetes Service from VMware, Microsoft Azure, Google Cloud, Red Hat) and they have all expressed a desire to use Cluster API. This looks like a good place to make sure Cluster API works well, and then expand to other use cases.

Feature matrix

Is Cluster API exposed to this user? Yes
Are control plane nodes exposed to this user? Yes
How many clusters are being managed via this user? Many
Who is the CAPI admin in this scenario? Platform Operator
Cloud / On-Prem: Both
Upgrade strategies desired? Need to gather data from users
How does this user interact with Cluster API? API
ETCD deployment: Need to gather data from users
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Service Provider: Kubernetes-as-a-Service

Examples of a Kubernetes-as-a-Service provider include services such as Red Hat’s hosted OpenShift, AKS, GKE, and EKS. The cloud services manage the control plane, often giving those cloud resources away “for free,” and the customers spin up and down their own worker nodes.

Proposed priority for project at this time: Medium

Existing Kubernetes as a Service providers, e.g. AKS, GKE have indicated interest in replacing their off-tree automation with Cluster API, however since they already had to build their own automation and it is currently “getting the job done,” switching to Cluster API is not a top priority for them, although it is desirable.

Feature matrix

Is Cluster API exposed to this user? Need to gather data from users
Are control plane nodes exposed to this user? No
How many clusters are being managed via this user? Many
Who is the CAPI admin in this scenario? Platform itself (AKS, GKE, etc.)
Cloud / On-Prem: Cloud
Upgrade strategies desired? Tear down/replace (need confirmation from platforms)
How does this user interact with Cluster API? API
ETCD deployment: Need to gather data from users
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Cluster API Developer

The Cluster API developer is a developer of Cluster API who needs tools and services to make their development experience more productive and pleasant. It’s also important to look at the on-boarding experience for new developers, to make sure we’re building a project to which other people can easily submit patches and features, encouraging inclusivity and welcoming new contributors.

Proposed priority for project at this time: Low

We think we’re in a good place right now, and while we welcome contributions to improve the development experience of the project, it should not be the primary product focus of the open source development team to make development better for ourselves.

Feature matrix

Is Cluster API exposed to this user? Yes
Are control plane nodes exposed to this user? Yes
How many clusters are being managed via this user? Many
Who is the CAPI admin in this scenario? Platform Operator
Cloud / On-Prem: Both
Upgrade strategies desired? Need to gather data from users
How does this user interact with Cluster API? API
ETCD deployment: Need to gather data from users
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Raw API Consumers

Examples of raw API consumers include tools like Prow, a customized enterprise platform built on top of Cluster API, or an advanced “give me a Kubernetes cluster” button exposing some customization that is built using Cluster API.

Proposed priority for project at this time: Low

Feature matrix

Is Cluster API exposed to this user? Yes
Are control plane nodes exposed to this user? Yes
How many clusters are being managed via this user? Many
Who is the CAPI admin in this scenario? Platform Operator
Cloud / On-Prem: Both
Upgrade strategies desired? Need to gather data from users
How does this user interact with Cluster API? API
ETCD deployment: Need to gather data from users
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Tooling: Provisioners

Examples of this use case, in which a tooling provisioner is using Cluster API to automate behavior, include tools such as kops and kubicorn.

Proposed priority for project at this time: Low

Maintainers of tools such as kops have indicated interest in using Cluster API, but they have also indicated they do not have much time to take on the work. If this changes, this use case would increase in priority.

Feature matrix

Is Cluster API exposed to this user? Need to gather data from tooling maintainers
Are control plane nodes exposed to this user? Yes
How many clusters are being managed via this user? One (per execution)
Who is the CAPI admin in this scenario? Kubernetes Platform Consumer
Cloud / On-Prem: Cloud
Upgrade strategies desired? Need to gather data from users
How does this user interact with Cluster API? CLI
ETCD deployment: (Stacked or external) AND new
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Service Provider: End User/Consumer

This user would be an end user or consumer who is given direct access to Cluster API via their service provider to manage Kubernetes clusters. While there are some commercial projects who plan on doing this (Project Pacific, others), they are doing this as a “super user” feature behind the backdrop of a “Managed Kubernetes” offering.

Proposed priority for project at this time: Low

This is a use case we should keep an eye on to see how people use Cluster API directly, but we think the more relevant use case is people building managed offerings on top at this time.

Feature matrix

Is Cluster API exposed to this user? Yes
Are control plane nodes exposed to this user? Yes
How many clusters are being managed via this user? Many
Who is the CAPI admin in this scenario? Platform Operator
Cloud / On-Prem: Both
Upgrade strategies desired? Need to gather data from users
How does this user interact with Cluster API? API
ETCD deployment: Need to gather data from users
Does this user have a preference for the control plane running on pods vs. vm vs. something else? Need to gather data from users

Using Custom Certificates

Cluster API expects certificates and keys used for bootstrapping to follow the below convention. CAPBK generates new certificates using this convention if they do not already exist.

Each certificate must be stored in a single secret named one of:

  • [cluster name]-ca (type: CA). Example: openssl req -x509 -subj "/CN=Kubernetes API" -new -newkey rsa:2048 -nodes -keyout tls.key -sha256 -days 3650 -out tls.crt
  • [cluster name]-etcd (type: CA). Example: openssl req -x509 -subj "/CN=ETCD CA" -new -newkey rsa:2048 -nodes -keyout tls.key -sha256 -days 3650 -out tls.crt
  • [cluster name]-proxy (type: CA). Example: openssl req -x509 -subj "/CN=Front-End Proxy" -new -newkey rsa:2048 -nodes -keyout tls.key -sha256 -days 3650 -out tls.crt
  • [cluster name]-sa (type: Key Pair). Example: openssl genrsa -out tls.key 2048 && openssl rsa -in tls.key -pubout -out tls.crt

Example

apiVersion: v1
kind: Secret
metadata:
  name: cluster1-ca
type: kubernetes.io/tls
data:
  tls.crt: <base 64 encoded PEM>
  tls.key: <base 64 encoded PEM>
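
If you generated the files with the openssl examples above, one way to create such a secret is with kubectl, which produces a kubernetes.io/tls secret from the certificate and key files (an illustrative sketch; the secret should live in the same namespace as the Cluster object):

kubectl create secret tls cluster1-ca --cert=tls.crt --key=tls.key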

Generating a Kubeconfig with your own CA

  1. Create a new Certificate Signing Request (CSR) for the system:masters Kubernetes role, or specify any other role under CN.
openssl req -subj "/CN=system:masters" -new -newkey rsa:2048 -nodes -keyout admin.key -out admin.csr
  2. Sign the CSR using the [cluster-name]-ca key:
openssl x509 -req -in admin.csr -CA tls.crt -CAkey tls.key -CAcreateserial -out admin.crt -days 5 -sha256
  3. Update your kubeconfig with the signed certificate and key:
kubectl config set-credentials cluster-admin --client-certificate=admin.crt --client-key=admin.key --embed-certs=true
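
To make the new credentials usable, you typically also point them at the workload cluster’s API server and create a context; a sketch with illustrative names and server address:

kubectl config set-cluster my-cluster --server=https://<apiserver-endpoint>:6443 \
  --certificate-authority=tls.crt --embed-certs=true
kubectl config set-context my-cluster-admin --cluster=my-cluster --user=cluster-admin
kubectl config use-context my-cluster-admin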

Upgrading from Cluster API v1alpha2 to Cluster API v1alpha3

We will be using the clusterctl init command to upgrade an existing management cluster from v1alpha2 to v1alpha3.

For detailed information about the changes from v1alpha2 to v1alpha3, please refer to the Cluster API v1alpha2 compared to v1alpha3 section.

Prerequisites

There are a few preliminary steps needed to be able to run clusterctl init on a management cluster with v1alpha2 components installed.

Delete the cabpk-system namespace

Delete the cabpk-system namespace by running:

kubectl delete namespace cabpk-system

Delete the core and infrastructure provider controller-manager deployments

Delete the capi-controller-manager deployment from the capi-system namespace:

kubectl delete deployment capi-controller-manager -n capi-system 

Depending on your infrastructure provider, delete the controller-manager deployment.

For example, if you are using the AWS provider, delete the capa-controller-manager deployment from the capa-system namespace:

kubectl delete deployment capa-controller-manager -n capa-system 

Optional: Ensure preserveUnknownFields is set to ‘false’ for the infrastructure provider CRDs Spec

This should be the case for all infrastructure providers using conversion webhooks to allow upgrading from v1alpha2 to v1alpha3.

This can be verified by running kubectl get crd <crd name>.infrastructure.cluster.x-k8s.io -o yaml for all the infrastructure provider CRDs.
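
For example, for the AWS provider (the CRD name below is just one of the provider’s CRDs and is illustrative), you could check and, if needed, patch the field; this is a sketch, not an official procedure:

kubectl get crd awsclusters.infrastructure.cluster.x-k8s.io \
  -o jsonpath='{.spec.preserveUnknownFields}'

# If this prints "true", set it to false so the conversion webhooks can work:
kubectl patch crd awsclusters.infrastructure.cluster.x-k8s.io \
  --type=merge -p '{"spec":{"preserveUnknownFields":false}}'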

Upgrade the management cluster using clusterctl

Run clusterctl init with the relevant infrastructure flag. For the AWS provider you would run:

clusterctl init --infrastructure aws

You should now be able to manage your resources using the v1alpha3 version of the Cluster API components.

Configure a MachineHealthCheck

Prerequisites

Before attempting to configure a MachineHealthCheck, you should have a working management cluster with at least one MachineDeployment or MachineSet deployed.

What is a MachineHealthCheck?

A MachineHealthCheck is a resource within the Cluster API which allows users to define conditions under which Machines within a Cluster should be considered unhealthy.

When defining a MachineHealthCheck, users specify a timeout for each of the conditions that they define to check on the Machine’s Node. If any of these conditions is met for the duration of the timeout, the Machine will be remediated. The action of remediating a Machine should trigger a new Machine to be created, to replace the failed one.

Creating a MachineHealthCheck

Use the following example as a basis for creating a MachineHealthCheck:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: capi-quickstart-node-unhealthy-5m
spec:
  # clusterName is required to associate this MachineHealthCheck with a particular cluster
  clusterName: capi-quickstart
  # (Optional) maxUnhealthy prevents further remediation if the cluster is already partially unhealthy
  maxUnhealthy: 40%
  # (Optional) nodeStartupTimeout determines how long a MachineHealthCheck should wait for
  # a Node to join the cluster, before considering a Machine unhealthy
  nodeStartupTimeout: 10m
  # selector is used to determine which Machines should be health checked
  selector:
    matchLabels:
      nodepool: nodepool-0
  # Conditions to check on Nodes for matched Machines; if any condition is matched for the duration of its timeout, the Machine is considered unhealthy
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s

Remediation short-circuiting

To ensure that MachineHealthChecks only remediate Machines when the cluster is healthy, short-circuiting is implemented to prevent further remediation via the maxUnhealthy field within the MachineHealthCheck spec.

If the user defines a value for the maxUnhealthy field (either an absolute number or a percentage of the total Machines checked by this MachineHealthCheck), before remediating any Machines, the MachineHealthCheck will compare the value of maxUnhealthy with the number of Machines it has determined to be unhealthy. If the number of unhealthy Machines exceeds the limit set by maxUnhealthy, remediation will not be performed.

With an absolute value

If maxUnhealthy is set to 2:

  • If 2 or fewer nodes are unhealthy, remediation will be performed
  • If 3 or more nodes are unhealthy, remediation will not be performed

These values are independent of how many Machines are being checked by the MachineHealthCheck.

With percentages

If maxUnhealthy is set to 40% and there are 25 Machines being checked:

  • If 10 or fewer nodes are unhealthy, remediation will be performed
  • If 11 or more nodes are unhealthy, remediation will not be performed

If maxUnhealthy is set to 40% and there are 6 Machines being checked:

  • If 2 or fewer nodes are unhealthy, remediation will be performed
  • If 3 or more nodes are unhealthy, remediation will not be performed

Note, when the percentage is not a whole number, the allowed number is rounded down.

Limitations and Caveats of a MachineHealthCheck

Before deploying a MachineHealthCheck, please familiarise yourself with the following limitations and caveats:

  • Only Machines owned by a MachineSet will be remediated by a MachineHealthCheck
  • Control Plane Machines are currently not supported and will not be remediated if they are unhealthy
  • If the Node for a Machine is removed from the cluster, a MachineHealthCheck will consider this Machine unhealthy and remediate it immediately
  • If no Node joins the cluster for a Machine after the NodeStartupTimeout, the Machine will be remediated
  • If a Machine fails for any reason (if the FailureReason is set), the Machine will be remediated immediately

Kubeadm control plane

Using the Kubeadm control plane type to manage a control plane provides several ways to upgrade control plane machines.

Upgrading workload clusters

The high level steps to fully upgrading a workload cluster are to first upgrade the control plane and then upgrade the worker machines.

Upgrading the control plane machines

How to upgrade the underlying machine image

To upgrade the control plane machines’ underlying machine images, the MachineTemplate resource referenced by the KubeadmControlPlane must be changed. Since MachineTemplate resources are immutable, the recommended approach is to:

  1. Copy the existing MachineTemplate.
  2. Modify the values that need changing, such as instance type or image ID.
  3. Create the new MachineTemplate on the management cluster.
  4. Modify the existing KubeadmControlPlane resource to reference the new MachineTemplate resource.

The final step will trigger a rolling update of the control plane using the new values found in the MachineTemplate.
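
A sketch of these steps for the AWS provider, reusing the quick start names (the new template name and the fields you change are illustrative):

# 1-3. Copy the existing template, give it a new name (e.g. capi-quickstart-control-plane-v2),
#      strip server-set metadata and status, change the instance type or image ID, then create it:
kubectl get awsmachinetemplate capi-quickstart-control-plane -o yaml > control-plane-template-v2.yaml
# ... edit control-plane-template-v2.yaml ...
kubectl apply -f control-plane-template-v2.yaml

# 4. Point the KubeadmControlPlane at the new template; this triggers the rolling update
kubectl patch kubeadmcontrolplane capi-quickstart-control-plane --type=merge \
  -p '{"spec":{"infrastructureTemplate":{"name":"capi-quickstart-control-plane-v2"}}}'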

How to upgrade the Kubernetes control plane version

To upgrade the Kubernetes control plane version, which will likely, depending on the provider, also upgrade the underlying machine image, make a modification to the KubeadmControlPlane resource’s Spec.Version field. This will trigger a rolling upgrade of the control plane.

Some infrastructure providers, such as CAPA, require that if a specific machine image is specified, it has to match the Kubernetes version specified in the KubeadmControlPlane spec. In order to only trigger a single upgrade, the new MachineTemplate should be created first and then both the Version and InfrastructureTemplate should be modified in a single transaction.
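
For example, a single kubectl merge patch can change both fields at once (names, version, and template are illustrative):

kubectl patch kubeadmcontrolplane capi-quickstart-control-plane --type=merge \
  -p '{"spec":{"version":"v1.17.4","infrastructureTemplate":{"name":"capi-quickstart-control-plane-v2"}}}'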

Upgrading workload machines managed by a MachineDeployment

Upgrades are not limited to just the control plane. This section is not related to Kubeadm control plane specifically, but is the final step in fully upgrading a Cluster API managed cluster.

It is recommended to manage workload machines with one or more MachineDeployments. MachineDeployments will transparently manage MachineSets and Machines to allow for a seamless scaling experience. A modification to the MachineDeployment’s spec will begin a rolling update of the workload machines.
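
For example, bumping the workers’ Kubernetes version could look like this (names and version are illustrative):

kubectl patch machinedeployment capi-quickstart-md-0 --type=merge \
  -p '{"spec":{"template":{"spec":{"version":"v1.17.4"}}}}'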

For a more in-depth look at how MachineDeployments manage scaling events, take a look at the MachineDeployment controller documentation and the MachineSet controller documentation.

Overview of clusterctl

The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.

The clusterctl command line interface is specifically designed for providing a simple “day 1 experience” and a quick start with Cluster API; it automates fetching the YAML files defining provider components and installing them.

Additionally, it encodes a set of best practices for managing providers, which helps the user avoid misconfigurations and manage day 2 operations such as upgrades.

clusterctl Commands

clusterctl init

The clusterctl init command installs the Cluster API components and transforms the Kubernetes cluster into a management cluster.

This document provides more detail on how clusterctl init works and on the supported options for customizing your management cluster.

Defining the management cluster

The clusterctl init command accepts as input a list of providers to install.

Automatically installed providers

The clusterctl init command automatically adds the cluster-api core provider, the kubeadm bootstrap provider, and the kubeadm control-plane provider to the list of providers to install. This allows users to use a concise command syntax for initializing a management cluster. For example, to get a fully operational management cluster with the aws infrastructure provider, the cluster-api core provider, the kubeadm bootstrap, and the kubeadm control-plane provider, use the command:

clusterctl init --infrastructure aws

Provider version

The clusterctl init command by default installs the latest version available for each selected provider.
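
If a specific version is needed instead, it can be appended to the provider name, e.g.:

clusterctl init --infrastructure aws:v0.5.0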

Target namespace

The clusterctl init command by default installs each provider in the default target namespace defined by each provider, e.g. capi-system for the Cluster API core provider.
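
The --target-namespace flag can be used to override the default; for example (the namespace name is illustrative):

clusterctl init --infrastructure aws --target-namespace my-capa-system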

See the provider documentation for more details.

Watching namespace

The clusterctl init command by default installs each provider configured for watching objects in all namespaces.
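
The --watching-namespace flag restricts this to a single namespace; for example (the namespace name is illustrative):

clusterctl init --infrastructure aws --watching-namespace my-namespace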

Multi-tenancy

Multi-tenancy for Cluster API means a management cluster where multiple instances of the same provider are installed.

The user can achieve multi-tenancy configurations with clusterctl by a combination of:

  • Multiple calls to clusterctl init;
  • Usage of the --target-namespace flag;
  • Usage of the --watching-namespace flag;

The clusterctl command officially supports the following multi-tenancy configurations:

A management cluster with n (n>1) instances of an infrastructure provider, and only one instance of Cluster API core provider, bootstrap provider and control plane provider (optional).

For example:

  • Cluster API core provider installed in the capi-system namespace, watching objects in all namespaces;
  • The kubeadm bootstrap provider in capbpk-system, watching all namespaces;
  • The kubeadm control plane provider in cacpk-system, watching all namespaces;
  • The aws infrastructure provider in aws-system1, watching objects in aws-system1 only;
  • The aws infrastructure provider in aws-system2, watching objects in aws-system2 only;
  • etc. (more instances of the aws provider)

A management cluster with n (n>1) instances of the Cluster API core provider, each one with a dedicated instance of infrastructure provider, bootstrap provider, and control plane provider (optional).

For example:

  • A Cluster API core provider installed in the capi-system1 namespace, watching objects in capi-system1 only, and with:
    • The kubeadm bootstrap provider in capi-system1, watching capi-system1;
    • The kubeadm control plane provider in capi-system1, watching capi-system1;
    • The aws infrastructure provider in capi-system1, watching objects in capi-system1;
  • A Cluster API core provider installed in the capi-system2 namespace, watching objects in capi-system2 only, and with:
    • The kubeadm bootstrap provider in capi-system2, watching capi-system2;
    • The kubeadm control plane provider in capi-system2, watching capi-system2;
    • The aws infrastructure provider in capi-system2, watching objects in capi-system2;
  • etc. (more instances of the Cluster API core provider and the dedicated providers)

Provider repositories

To access provider specific information, such as the components YAML to be used for installing a provider, clusterctl init accesses the provider repositories, which are well-known places where the release assets for a provider are published.

See clusterctl configuration for more info about provider repository configurations.

Variable substitution

Providers can use variables in the components YAML published in the provider’s repository.

During clusterctl init, those variables are replaced with environment variables or with variables read from the clusterctl configuration.
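
For example, a provider’s components YAML may reference ${AWS_B64ENCODED_CREDENTIALS} (used earlier in this guide); exporting the variable before running clusterctl init is enough for the substitution to happen:

export AWS_B64ENCODED_CREDENTIALS=<base64-encoded-credentials>
clusterctl init --infrastructure aws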

Additional information

When installing a provider, the clusterctl init command executes a set of steps to simplify the lifecycle management of the provider’s components.

  • All the provider’s components are labeled, so they can be easily identified at subsequent stages of the provider’s lifecycle, e.g. upgrades.
labels:
  clusterctl.cluster.x-k8s.io: ""
  cluster.x-k8s.io/provider: "<provider-name>"
  • An additional Provider object is created in the target namespace where the provider is installed. This object keeps track of the provider version, the watching namespace, and other useful information for the inventory of the providers currently installed in the management cluster.

clusterctl config cluster

The clusterctl config cluster command returns a YAML template for creating a workload cluster.

For example

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 --control-plane-machine-count=3 --worker-machine-count=3 > my-cluster.yaml

Creates a YAML file named my-cluster.yaml with a predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc., to be deployed in the current namespace (if needed, use the --target-namespace flag to specify a different target namespace).

Then, the file can be modified using your editor of choice; when ready, run the following command to apply the cluster manifest.

kubectl apply -f my-cluster.yaml

Selecting the infrastructure provider to use

The clusterctl config cluster command uses smart defaults in order to simplify the user experience; in the example above, it detects that there is only an aws infrastructure provider in the current management cluster and so it automatically selects a cluster template from the aws provider’s repository.

In case there is more than one infrastructure provider, the following syntax can be used to select which infrastructure provider to use for the workload cluster:

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
    --infrastructure aws > my-cluster.yaml

or

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
    --infrastructure aws:v0.4.1 > my-cluster.yaml

Flavors

Infrastructure provider authors can provide different types of cluster templates, or flavors; use the --flavor flag to specify which flavor to use; e.g.

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
    --flavor high-availability > my-cluster.yaml

Please refer to the providers documentation for more info about available flavors.

Alternative source for cluster templates

clusterctl uses the provider’s repository as a primary source for cluster templates; the following alternative sources for cluster templates can be used as well:

ConfigMaps

Use the --from-config-map flag to read cluster templates stored in a Kubernetes ConfigMap; e.g.

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
    --from-config-map my-templates > my-cluster.yaml

The following flags are also available: --from-config-map-namespace (defaults to the current namespace) and --from-config-map-key (defaults to template).
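
A sketch of creating such a ConfigMap from a local template file, using the default template key (names are illustrative):

kubectl create configmap my-templates --from-file=template=my-template.yaml

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
    --from-config-map my-templates > my-cluster.yaml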

GitHub or local file system folder

Use the --from flag to read cluster templates stored in a GitHub repository or in a local file system folder; e.g.

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
   --from https://github.com/my-org/my-repository/blob/master/my-template.yaml > my-cluster.yaml

or

clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
   --from ~/my-template.yaml > my-cluster.yaml

Variables

If the selected cluster template expects some environment variables, the user should ensure those variables are set in advance.

e.g. if the AWS_CREDENTIALS variable is expected by a cluster template targeting the aws infrastructure, you should ensure the corresponding environment variable is set before executing clusterctl config cluster.

Please refer to the providers documentation for more info about the required variables or use the clusterctl config cluster --list-variables flag to get a list of variable names required by a cluster template.

The clusterctl configuration file can be used as an alternative to environment variables.

clusterctl move

The clusterctl move command allows you to move the Cluster API objects defining workload clusters (e.g. Cluster, Machines, MachineDeployments, etc.) from one management cluster to another management cluster.

You can use:

clusterctl move --to-kubeconfig="path-to-target-kubeconfig.yaml"

To move the Cluster API objects existing in the current namespace of the source management cluster; if you want to move the Cluster API objects defined in another namespace, you can use the --namespace flag.

Pivot

Pivoting is a process for moving the provider components and declared Cluster API resources from a source management cluster to a target management cluster.

This can now be achieved with the following procedure:

  1. Use clusterctl init to install the provider components into the target management cluster
  2. Use clusterctl move to move the cluster-api resources from a Source Management cluster to a Target Management cluster

Bootstrap & Pivot

The pivot process can be combined with the creation of a temporary bootstrap cluster used to provision a target management cluster.

This can now be achieved with the following procedure:

  1. Create a temporary bootstrap cluster, e.g. using Kind or Minikube
  2. Use clusterctl init to install the provider components
  3. Use clusterctl config cluster ... | kubectl apply -f - to provision a target management cluster
  4. Wait for the target management cluster to be up and running
  5. Get the kubeconfig for the new target management cluster
  6. Use clusterctl init with the new cluster’s kubeconfig to install the provider components
  7. Use clusterctl move to move the Cluster API resources from the bootstrap cluster to the target management cluster
  8. Delete the bootstrap cluster
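
A hedged command sketch of the procedure above, assuming kind, the AWS provider and illustrative names (exact flags may vary by clusterctl version):

kind create cluster                                                        # 1. temporary bootstrap cluster
clusterctl init --infrastructure aws                                       # 2. provider components on the bootstrap cluster
clusterctl config cluster target-mgmt --kubernetes-version v1.17.3 | kubectl apply -f -   # 3. provision the target cluster
# 4. wait for the target management cluster to be up and running
kubectl get secret/target-mgmt-kubeconfig -o jsonpath={.data.value} \
  | base64 --decode > target-mgmt.kubeconfig                               # 5. retrieve its kubeconfig
clusterctl init --kubeconfig target-mgmt.kubeconfig --infrastructure aws   # 6. provider components on the target cluster
clusterctl move --to-kubeconfig target-mgmt.kubeconfig                     # 7. move the Cluster API resources
kind delete cluster                                                        # 8. delete the bootstrap cluster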

Note: It’s required to have at least one worker node to schedule Cluster API workloads (i.e. controllers). A cluster with a single control plane node won’t be sufficient due to the NoSchedule taint. If a worker node isn’t available, clusterctl init will timeout.

clusterctl upgrade

The clusterctl upgrade command can be used to upgrade the version of the Cluster API providers (CRDs, controllers) installed into a management cluster.

Background info: management groups

The upgrade procedure is designed to ensure all the providers in a management group use the same API Version of Cluster API (contract), e.g. the v1alpha3 Cluster API contract.

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure providers watching objects in the same namespace.

Usually there is only one management group in a management cluster, but in the case of n-core multi-tenancy there can be more than one.

upgrade plan

The clusterctl upgrade plan command can be used to identify possible targets for upgrades.

clusterctl upgrade plan

Produces an output similar to this:

Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):

NAME                NAMESPACE                       TYPE                     CURRENT VERSION   TARGET VERSION
cluster-api         capi-system                     CoreProvider             v0.3.0            v0.3.1
kubeadm             capi-kubeadm-bootstrap-system   BootstrapProvider        v0.3.0            v0.3.1
docker              capd-system                     InfrastructureProvider   v0.3.0            v0.3.1


You can now apply the upgrade by executing the following command:

   clusterctl upgrade apply --management-group capi-system/cluster-api  --contract v1alpha3

The output contains the latest release available for each management group in the cluster and for each API Version of Cluster API (contract) available at the moment.

upgrade apply

After choosing the desired option for the upgrade, you can run the provided command.

clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3

The upgrade process is composed of two steps:

  • Delete the current version of the provider components, while preserving the namespace where the provider components are hosted and the provider’s CRDs.
  • Install the new version of the provider components.

Please note that clusterctl does not upgrade Cluster API objects (Clusters, MachineDeployments, Machines etc.); upgrading such objects is the responsibility of the provider’s controllers.

clusterctl delete

The clusterctl delete command deletes the provider components from the management cluster.

The operation is designed to prevent accidental deletion of user created objects. For example:

clusterctl delete --infrastructure aws

Deletes the AWS infrastructure provider components, while preserving the namespace where the provider components are hosted and the provider’s CRDs.

If you want to delete all the providers in a single operation, you can use the --all flag.

clusterctl delete --all

clusterctl Configuration File

The clusterctl config file is located at $HOME/.cluster-api/clusterctl.yaml and it can be used to:

  • Customize the list of providers and provider repositories.
  • Provide configuration values to be used for variable substitution when installing providers or creating clusters.
  • Define image overrides for air-gapped environments.

Provider repositories

The clusterctl CLI is designed to work with providers implementing the clusterctl Provider Contract.

Each provider is expected to define a provider repository, a well-known place where release assets are published.

By default, clusterctl ships with providers sponsored by SIG Cluster Lifecycle. Use clusterctl config repositories to get a list of supported providers and their repository configuration.

Users can customize the list of available providers using the clusterctl configuration file, as shown in the following example:

providers:
  # add a custom provider
  - name: "my-infra-provider"
    url: "https://github.com/myorg/myrepo/releases/latest/infrastructure_components.yaml"
    type: "InfrastructureProvider"
  # override a pre-defined provider
  - name: "cluster-api"
    url: "https://github.com/myorg/myforkofclusterapi/releases/latest/core_components.yaml"
    type: "CoreProvider"

See provider contract for instructions about how to set up a provider repository.

Variables

When installing a provider, clusterctl reads a YAML file that is published in the provider repository; while executing this operation, clusterctl can substitute certain variables with values provided by the user.

The same mechanism also applies when clusterctl reads the cluster templates YAML published in the repository, e.g. when injecting the Kubernetes version to use, or the number of worker machines to create.

The user can provide values using OS environment variables, but it is also possible to add variables in the clusterctl config file:

# Values for environment variable substitution
AWS_B64ENCODED_CREDENTIALS: XXXXXXXX

If a variable is defined both in the config file and as an OS environment variable, the environment variable takes precedence.

Overrides Layer

clusterctl uses an overrides layer to read in injected provider components, cluster templates and metadata. By default, it reads the files from $HOME/.cluster-api/overrides.

The directory structure under the overrides directory should follow the template

<providerType-providerName>/<version>/<fileName>

For example,

├── bootstrap-kubeadm
│   └── v0.3.0
│       └── bootstrap-components.yaml
├── cluster-api
│   └── v0.3.0
│       └── core-components.yaml
├── control-plane-kubeadm
│   └── v0.3.0
│       └── control-plane-components.yaml
└── infrastructure-aws
    └── v0.5.0
            ├── cluster-template-dev.yaml
            └── infrastructure-components.yaml

For developers who want to generate the overrides layer, see Run the local-overrides hack!.

Once these overrides are specified, clusterctl will use them instead of getting the values from the default or specified providers.

One example usage of the overrides layer is that it allows you to deploy clusters with custom templates that may not be available from the official provider repositories. For example, you can now do

clusterctl config cluster mycluster --flavor dev --infrastructure aws:v0.5.0 -v5

The -v5 provides verbose logging which will confirm the usage of the override file.

Using Override="cluster-template-dev.yaml" Provider="infrastructure-aws" Version="v0.5.0"

Another example, if you would like to deploy a custom version of CAPA, you can make changes to infrastructure-components.yaml in the overrides folder and run,

clusterctl init --infrastructure aws:v0.5.0 -v5
...
Using Override="infrastructure-components.yaml" Provider="infrastructure-aws" Version="v0.5.0"
...

If you prefer to have the overrides directory at a different location (e.g. /Users/foobar/workspace/dev-releases) you can specify the overrides directory in the clusterctl config file as

overridesFolder: /Users/foobar/workspace/dev-releases

Image overrides

When working in air-gapped environments, it’s necessary to alter the manifests to be installed in order to pull images from a local/custom image repository instead of public ones (e.g. gcr.io, or quay.io).

The clusterctl configuration file can be used to instruct clusterctl to override images automatically.

This can be achieved by adding an images configuration entry as shown in the example:

images:
  all:
    repository: myorg.io/local-repo

Please note that the image override feature allows for more fine-grained configuration, allowing you to set image overrides for specific components, for example:

images:
  all:
    repository: myorg.io/local-repo
  cert-manager:
    tag: v0.11.1

In this example we are overriding the image repository for all the components and the image tag for all the images in the cert-manager component.

Cert-Manager timeout override

For situations when resources are limited or the network is slow, the time clusterctl waits for cert-manager to be running can be customized by adding a field to the clusterctl config file, for example:

  cert-manager-timeout: 15m 

The value string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.

If no value is specified or the format is invalid, the default value of 10 minutes will be used.

clusterctl Provider Contract

The clusterctl command is designed to work with all the providers compliant with the following rules.

Provider Repositories

Each provider MUST define a provider repository, that is a well-known place where the release assets for a provider are published.

The provider repository MUST contain the following files:

  • The metadata YAML
  • The components YAML

Additionally, the provider repository SHOULD contain the following files:

  • Workload cluster templates

Creating a provider repository on GitHub

You can use a GitHub release to package your provider artifacts for other people to use.

A GitHub release can be used as a provider repository if:

  • The release tag is a valid semantic version number
  • The components YAML, the metadata YAML and, optionally, the workload cluster templates are included in the release assets.

See the GitHub help for more information about how to create a release.

Creating a local provider repository

clusterctl supports reading from a repository defined on the local file system.

A local repository can be defined by creating a <provider-label> folder with a <version> sub-folder for each hosted release; the sub-folder name MUST be a valid semantic version number. e.g.

~/local-repository/infrastructure-aws/v0.5.2

Each version sub-folder MUST contain the corresponding components YAML, the metadata YAML and, optionally, the workload cluster templates.
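As an illustrative sketch (the aws provider, the version and the paths are examples, and the metadata file is assumed to be named metadata.yaml), a local repository could look like:

~/local-repository/infrastructure-aws/v0.5.2/
├── infrastructure-components.yaml
├── metadata.yaml
└── cluster-template.yaml

with a matching entry in the clusterctl config file pointing at the components file (replace the path with your actual home directory):

providers:
  - name: "aws"
    url: "/home/user/local-repository/infrastructure-aws/v0.5.2/infrastructure-components.yaml"
    type: "InfrastructureProvider"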

Metadata YAML

The provider is required to generate a metadata YAML file and publish it to the provider’s repository.

The metadata YAML file documents the release series of each provider and maps each release series to an API Version of Cluster API (contract).

For example, for Cluster API:

apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
- major: 0
  minor: 3
  contract: v1alpha3
- major: 0
  minor: 2
  contract: v1alpha2

Components YAML

The provider is required to generate a components YAML file and publish it to the provider’s repository. This file is a single YAML with all the components required for installing the provider itself (CRDs, Controller, RBAC etc.).

The following rules apply:

Naming conventions

It is strongly recommended that:

  • The core provider releases a file called core-components.yaml
  • Infrastructure providers release a file called infrastructure-components.yaml
  • Bootstrap providers release a file called bootstrap-components.yaml
  • Control plane providers release a file called control-plane-components.yaml

Shared and instance components

The objects contained in a components YAML file can be divided into two sets:

  • Instance-specific objects, like the Deployment for the controller, the ServiceAccount used for running the controller, and the related RBAC rules.
  • Objects that are shared among all the provider instances, e.g. CRDs, ValidatingWebhookConfiguration, or the Deployment implementing the webhook servers and the related Service and Certificates.

As per the Cluster API contract, all the shared objects are expected to be deployed in a namespace named capi-webhook-system (if applicable).

clusterctl implements a different lifecycle for shared resources e.g.

  • ensuring that the version of the shared objects for each provider matches the latest version installed in the cluster.
  • ensuring that deleting an instance of a provider does not destroy shared resources unless explicitly requested by the user.

Target namespace

The instance components should contain one Namespace object, which will be used as the default target namespace when creating the provider components.

All the objects in the components YAML MUST belong to the target namespace, with the exception of objects that are not namespaced, like ClusterRoles/ClusterRoleBinding and CRD objects.

Controllers & Watching namespace

Each provider is expected to deploy controllers using a Deployment.

While defining the Deployment Spec, the container that executes the controller binary MUST be called manager.

The manager MUST support a --namespace flag for specifying the namespace where the controller will look for objects to reconcile.
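A trimmed-down sketch (not a complete components YAML; the provider name, namespace and image are placeholders) that satisfies the target namespace and manager container rules above could look like:

apiVersion: v1
kind: Namespace
metadata:
  name: myprovider-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myprovider-controller-manager
  namespace: myprovider-system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      # the container running the controller binary MUST be called "manager"
      - name: manager
        image: myorg.io/myprovider-controller:v0.1.0
        args:
        # the manager MUST support the --namespace flag
        - "--namespace="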

Variables

The components YAML can contain environment variables matching the regexp ${\s*([A-Z0-9_]+)\s*}; it is highly recommended to prefix the variable name with the provider name e.g. ${ AWS_CREDENTIALS }

Additionally, each provider should create user facing documentation with the list of required variables and with all the additional notes that are required to assist the user in defining the value for each variable.

Labels

Each object in the components YAML should be labeled with cluster.x-k8s.io/provider and the name of the provider. This will enable an easier transition from kubectl apply to clusterctl.

As a reference you can consider the labels applied to the following providers.

Provider Name | Label
------------- | -----
CAPI | cluster.x-k8s.io/provider=cluster-api
CABPK | cluster.x-k8s.io/provider=bootstrap-kubeadm
CACPK | cluster.x-k8s.io/provider=control-plane-kubeadm
CAPA | cluster.x-k8s.io/provider=infrastructure-aws
CAPV | cluster.x-k8s.io/provider=infrastructure-vsphere
CAPD | cluster.x-k8s.io/provider=infrastructure-docker
CAPM3 | cluster.x-k8s.io/provider=infrastructure-metal3
CAPZ | cluster.x-k8s.io/provider=infrastructure-azure

Workload cluster templates

An infrastructure provider could publish a cluster templates file to be used by clusterctl config cluster. This is a single YAML file with all the objects required to create a new workload cluster.

The following rules apply:

Naming conventions

Cluster templates MUST be stored in the same folder as the component YAML and follow this naming convention:

  1. The default cluster template should be named cluster-template.yaml.
  2. Additional cluster templates should be named cluster-template-{flavor}.yaml, e.g. cluster-template-prod.yaml

{flavor} is the name the user can pass to the clusterctl config cluster --flavor flag to identify the specific template to use.

Each provider SHOULD create user facing documentation with the list of available cluster templates.
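For example, assuming a provider publishes a cluster-template-prod.yaml in its repository, a user could select that flavor with:

clusterctl config cluster my-cluster --flavor prod --kubernetes-version v1.17.3 > my-cluster.yaml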

Target namespace

The cluster template YAML MUST assume the target namespace already exists.

All the objects in the cluster template YAML MUST be deployed in the same namespace.

Variables

The cluster templates YAML can also contain environment variables (as can the components YAML).

Additionally, each provider should create user facing documentation with the list of required variables and with all the additional notes that are required to assist the user in defining the value for each variable.

Common variables

The clusterctl config cluster command allows the user to set a small set of common variables via CLI flags or command arguments.

Templates writers should use the common variables to ensure consistency across providers and a simpler user experience (if compared to the usage of OS environment variables or the clusterctl config file).

CLI flag | Variable name | Note
-------- | ------------- | ----
--target-namespace | ${ NAMESPACE } | The namespace where the workload cluster should be deployed
--kubernetes-version | ${ KUBERNETES_VERSION } | The Kubernetes version to use for the workload cluster
--controlplane-machine-count | ${ CONTROL_PLANE_MACHINE_COUNT } | The number of control plane machines to be added to the workload cluster
--worker-machine-count | ${ WORKER_MACHINE_COUNT } | The number of worker machines to be added to the workload cluster

Additionally, the value of the <cluster-name> argument to clusterctl config cluster <cluster-name> will be applied to every occurrence of the ${ CLUSTER_NAME } variable.
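A trimmed-down template fragment (the objects are incomplete and shown only to illustrate variable placement) using the common variables could look like this:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: ${ CLUSTER_NAME }
  namespace: ${ NAMESPACE }
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: ${ CLUSTER_NAME }-md-0
  namespace: ${ NAMESPACE }
spec:
  clusterName: ${ CLUSTER_NAME }
  replicas: ${ WORKER_MACHINE_COUNT }
  template:
    spec:
      clusterName: ${ CLUSTER_NAME }
      version: ${ KUBERNETES_VERSION }

which a user could then render with, for example:

clusterctl config cluster my-cluster --target-namespace ns1 --kubernetes-version v1.17.3 --worker-machine-count 3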

OwnerReferences chain

Each provider is responsible for ensuring that all of its resources (e.g. VSphereCluster, VSphereMachine, VSphereVM for the vSphere provider) MUST have a Metadata.OwnerReferences entry that links directly or indirectly to a Cluster object.

Please note that all the provider-specific resources that are referenced by the Cluster API core objects will get the OwnerReference set by the Cluster API core controllers, e.g.:

  • The Cluster controller ensures that all the objects referenced in Cluster.Spec.InfrastructureRef get an OwnerReference that links directly to the corresponding Cluster.
  • The Machine controller ensures that all the objects referenced in Machine.Spec.InfrastructureRef get an OwnerReference that links to the corresponding Machine, and the Machine is linked to the Cluster through its own OwnerReference chain.

That means that, practically speaking, provider implementers are responsible for ensuring that the OwnerReferences are set only for objects that are not directly referenced by Cluster API core objects, e.g.:

  • All the VSphereVM instances should get an OwnerReference that links to the corresponding VSphereMachine, and the VSphereMachine is linked to the Cluster through its own OwnerReference chain.
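As an illustration (the names and the UID are placeholders), the metadata of such a VSphereVM could look like:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereVM
metadata:
  name: my-cluster-md-0-abcde
  namespace: default
  ownerReferences:
  - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereMachine
    name: my-cluster-md-0-abcde
    uid: <uid of the owning VSphereMachine>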

Additional notes

Components YAML transformations

Provider authors should be aware of the following transformations that clusterctl applies during component installation:

  • Variable substitution;
  • Enforcement of target namespace:
    • The name of the namespace object is set;
    • The namespace field of all the objects is set (with exception of cluster wide objects like e.g. ClusterRoles);
    • ClusterRole and ClusterRoleBinding are renamed by adding a “${namespace}-” prefix to the name; this change reduces the risk of conflicts between several instances of the same provider in case of multi-tenancy;
  • Enforcement of watching namespace;
  • All components are labeled;

Cluster template transformations

Provider authors should be aware of the following transformations that clusterctl applies when processing a cluster template:

  • Variable substitution;
  • Enforcement of target namespace:
    • The namespace field of all the objects is set;

Links to external objects

The clusterctl command requires that both the components YAML and the cluster templates contain all the required objects.

If, for any reason, the provider authors/YAML designers decide not to comply with this recommendation and, for example, to

  • implement links to external objects from a components YAML (e.g. secrets, aggregated ClusterRoles NOT included in the components YAML)
  • implement links to external objects from a cluster template (e.g. secrets, configMaps NOT included in the cluster template)

then the provider authors/YAML designers should be aware that it is their responsibility to ensure the proper functioning of all the clusterctl features in both single-tenancy and multi-tenancy scenarios, and/or to document known limitations.

Move

Provider authors should be aware that clusterctl move command implements a discovery mechanism that considers:

  • All the objects of Kind defined in one of the CRDs installed by clusterctl using clusterctl init.
  • Secret and ConfigMap objects.
  • the OwnerReference chain of the above objects.

clusterctl move does NOT consider any objects:

  • Not included in the set of objects defined above.
  • Included in the set of objects defined above, but not directly or indirectly linked to a Cluster object through the OwnerReference chain.

If moving some of the excluded objects is required, the provider authors should create documentation describing the exact move sequence to be executed by the user.

Additionally, provider authors should be aware that clusterctl move assumes all the provider’s Controllers respect the Cluster.Spec.Paused field introduced in the v1alpha3 Cluster API specification.
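For reference, a typical move invocation (assuming the kubeconfig of the target management cluster is saved in target.kubeconfig) looks like:

clusterctl move --to-kubeconfig=target.kubeconfig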

clusterctl for Developers

This document describes how to use clusterctl during the development workflow.

Prerequisites

  • A Cluster API development setup (go, git, etc.)
  • A local clone of the Cluster API GitHub repository
  • A local clone of the GitHub repositories for the providers you want to install

Getting started

Build clusterctl

From the root of the local copy of Cluster API, you can build the clusterctl binary by running:

make clusterctl

The output of the build is saved in the bin/ folder; in order to use it you have to specify the full path, create an alias, or copy it into a folder under your $PATH.

Create a clusterctl-settings.json file

Next, create a clusterctl-settings.json file and place it in your local copy of Cluster API. Here is an example:

{
  "providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm", "infrastructure-aws"],
  "provider_repos": ["../cluster-api-provider-aws"]
}

providers (Array[]String, default=[]): A list of the providers to enable. See available providers for more details.

provider_repos (Array[]String, default=[]): A list of paths to all the providers you want to use. Each provider must have a clusterctl-settings.json file describing how to build the provider assets.

Run the local-overrides hack!

You can now run the local-overrides hack from the root of the local copy of Cluster API:

cmd/clusterctl/hack/local-overrides.py

The script reads from the local repositories of the providers you want to install, builds the providers’ assets, and places them in a local override folder located under $HOME/.cluster-api/overrides/. Additionally, the command output provides you the clusterctl init command with all the necessary flags.

clusterctl local overrides generated from local repositories for the cluster-api, bootstrap-kubeadm, control-plane-kubeadm, infrastructure-aws providers.
in order to use them, please run:

clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure aws:v0.5.0

See Overrides Layer for more information on the purpose of overrides.

Available providers

The following providers are currently defined in the script:

  • cluster-api
  • bootstrap-kubeadm
  • control-plane-kubeadm
  • infrastructure-docker

More providers can be added by editing the clusterctl-settings.json in your local copy of Cluster API; please note that each provider_repo should have its own clusterctl-settings.json describing how to build the provider assets, e.g.

{
  "name": "infrastructure-aws",
  "config": {
    "componentsFile": "infrastructure-components.yaml",
    "nextVersion": "v0.5.0",
  }
}

Additional steps in order to use the docker provider

Before running the local-overrides hack:

  • Run make -C test/infrastructure/docker docker-build REGISTRY=gcr.io/k8s-staging-capi-docker to build the docker provider image using a specific REGISTRY (you can choose your own).

  • Run make -C test/infrastructure/docker generate-manifests REGISTRY=gcr.io/k8s-staging-capi-docker to generate the docker provider manifest using the above registry/image.

Run the local-overrides hack and save the clusterctl init command provided in the command output to be used later.

Edit the clusterctl config file located at ~/.cluster-api/clusterctl.yaml and configure the docker provider by adding the following lines (replace $HOME with your home path):

providers:
  - name: docker
    url: $HOME/.cluster-api/overrides/infrastructure-docker/latest/infrastructure-components.yaml
    type: InfrastructureProvider

If you are using Kind for creating the management cluster, you should:

  • Run the following command to create a kind config file that allows the Docker provider to access Docker on the host:

cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
EOF

  • Run kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi to create the management cluster using the above file, and kubectl cluster-info --context kind-clusterapi to verify you can reach it.

  • Run kind load docker-image gcr.io/k8s-staging-capi-docker/capd-manager-amd64:dev to make the docker provider image available for the kubelet in the management cluster.

Run clusterctl init command provided as output of the local-overrides hack.

Connecting to a workload cluster on docker

The command for getting the kubeconfig file for connecting to a workload cluster is the following:

kubectl --namespace=default get secret/capi-quickstart-kubeconfig -o jsonpath={.data.value} \
  | base64 --decode \
  > ./capi-quickstart.kubeconfig

When using docker-for-mac on macOS, you will need to take a couple of additional steps to get the correct kubeconfig for a workload cluster created with the Docker provider:

# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig

# Ignore the CA, because it is not signed for 127.0.0.1
sed -i -e "s/certificate-authority-data:.*/insecure-skip-tls-verify: true/g" ./capi-quickstart.kubeconfig

Known issues

A known issue affects Calico with the Docker provider v0.2.0. After you deploy Calico, apply this patch to work around the issue:

kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  -n kube-system patch daemonset calico-node \
  --type=strategic --patch='
spec:
  template:
    spec:
      containers:
      - name: calico-node
        env:
        - name: FELIX_IGNORELOOSERPF
          value: "true"
'

Developer Guide

Pieces of Cluster API

Cluster API is made up of many components, all of which need to be running for correct operation. For example, if you wanted to use Cluster API with AWS, you’d need to install both the cluster-api manager and the aws manager.

Cluster API includes a built-in provisioner, the Docker provider, that is suitable for testing and development. This guide will walk you through getting that provider, known as CAPD, up and running.

Other providers may have additional steps you need to follow to get up and running.

Prerequisites

Docker

Iterating on Cluster API involves repeatedly building Docker containers. You’ll need the Docker daemon available.

A Cluster

You’ll likely want an existing cluster as your management cluster. The easiest way to do this is with kind, as explained in the quick start.

Make sure your cluster is set as the default for kubectl. If it’s not, you will need to modify subsequent kubectl commands below.

A container registry

If you’re using kind, you’ll need a way to push your images to a registry so they can be pulled. You can instead side-load all images, but the registry workflow is lower-friction.

Most users test with GCR, but you could also use something like Docker Hub. If you choose not to use GCR, you’ll need to set the REGISTRY environment variable.
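For example, assuming a personal Docker Hub account (the value is a placeholder):

export REGISTRY=docker.io/your-username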

Kustomize

You’ll need to install kustomize. There is a version of kustomize built into kubectl, but it does not have all the features of kustomize v3 and will not work.

Kubebuilder

You’ll need to install kubebuilder.

Development

Option 1: Tilt

Tilt is a tool for quickly building, pushing, and reloading Docker containers as part of a Kubernetes deployment. Many of the Cluster API engineers use it for quick iteration. Please see our Tilt instructions to get started.

Option 2: The Old-fashioned way

Building everything

You’ll need to build two docker images, one for Cluster API itself and one for the Docker provider (CAPD).

make docker-build
make -C test/infrastructure/docker docker-build

Push both images

$ make docker-push
docker push gcr.io/cluster-api-242700/cluster-api-controller-amd64:dev
The push refers to repository [gcr.io/cluster-api-242700/cluster-api-controller-amd64]
90a39583ad5f: Layer already exists
932da5156413: Layer already exists
dev: digest: sha256:263262cfbabd3d1add68172a5a1d141f6481a2bc443672ce80778dc122ee6234 size: 739
$ make -C test/infrastructure/docker docker-push
make: Entering directory '/home/liz/src/sigs.k8s.io/cluster-api/test/infrastructure/docker'
docker push gcr.io/cluster-api-242700/manager:dev
The push refers to repository [gcr.io/cluster-api-242700/manager]
5b1e744b2bae: Pushed
932da5156413: Layer already exists
dev: digest: sha256:35670a049372ae063dad910c267a4450758a139c4deb248c04c3198865589ab2 size: 739
make: Leaving directory '/home/liz/src/sigs.k8s.io/cluster-api/test/infrastructure/docker'

Make a note of the URLs and the digests. You’ll need them for the next step. In this case, they’re...

gcr.io/cluster-api-242700/manager@sha256:35670a049372ae063dad910c267a4450758a139c4deb248c04c3198865589ab2

and

gcr.io/cluster-api-242700/cluster-api-controller-amd64@sha256:263262cfbabd3d1add68172a5a1d141f6481a2bc443672ce80778dc122ee6234

Edit the manifests

$EDITOR config/manager/manager_image_patch.yaml
$EDITOR

In both cases, change the - image: value to the digest URL noted above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - image: gcr.io/cluster-api-242700/manager@sha256:35670a049372ae063dad910c267a4450758a139c4deb248c04c3198865589ab2
        name: manager

Apply the manifests

$ kustomize build config/ | kubectl apply -f -
namespace/capi-system configured
customresourcedefinition.apiextensions.k8s.io/clusters.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/kubeadmconfigs.bootstrap.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/machinedeployments.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/machines.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/machinesets.cluster.x-k8s.io configured
role.rbac.authorization.k8s.io/capi-leader-election-role configured
clusterrole.rbac.authorization.k8s.io/capi-manager-role configured
rolebinding.rbac.authorization.k8s.io/capi-leader-election-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/capi-manager-rolebinding configured
deployment.apps/capi-controller-manager created
$ kustomize build test/infrastructure/docker/config | kubectl apply -f -
namespace/capd-system configured
customresourcedefinition.apiextensions.k8s.io/dockerclusters.infrastructure.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/dockermachines.infrastructure.cluster.x-k8s.io configured
customresourcedefinition.apiextensions.k8s.io/dockermachinetemplates.infrastructure.cluster.x-k8s.io configured
role.rbac.authorization.k8s.io/capd-leader-election-role configured
clusterrole.rbac.authorization.k8s.io/capd-manager-role configured
clusterrole.rbac.authorization.k8s.io/capd-proxy-role configured
rolebinding.rbac.authorization.k8s.io/capd-leader-election-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/capd-manager-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/capd-proxy-rolebinding configured
service/capd-controller-manager-metrics-service created
deployment.apps/capd-controller-manager created

Check the status of the clusters

$ kubectl get po -n capd-system
NAME                                       READY   STATUS    RESTARTS   AGE
capd-controller-manager-7568c55d65-ndpts   2/2     Running   0          71s
$ kubectl get po -n capi-system
NAME                                      READY   STATUS    RESTARTS   AGE
capi-controller-manager-bf9c6468c-d6msj   1/1     Running   0          2m9s

Testing

Cluster API has a number of test suites available for you to run. Please visit the testing page for more information on each suite.

That’s it!

Now you can create CAPI objects! To test another iteration, you’ll need to follow the steps to build, push, update the manifests, and apply.

Repository Layout

This page is still being written - stay tuned!

Developing Cluster API with Tilt

Overview

This document describes how to use kind and Tilt for a simplified workflow that offers easy deployments and rapid iterative builds.

Prerequisites

  1. Docker
  2. kind v0.6 or newer (other clusters can be used if preload_images_for_kind is set to false)
  3. kustomize standalone (kubectl kustomize does not work because it is missing some features of kustomize v3)
  4. Tilt v0.12.0 or newer
  5. Clone the Cluster API repository locally
  6. Clone the provider(s) you want to deploy locally as well

Getting started

Create a kind cluster

First, make sure you have a kind cluster and that your KUBECONFIG is set up correctly:

kind create cluster

Create a tilt-settings.json file

Next, create a tilt-settings.json file and place it in your local copy of cluster-api. Here is an example:

{
  "default_registry": "gcr.io/your-project-name-here",
  "provider_repos": ["../cluster-api-provider-aws"],
  "enable_providers": ["aws", "docker", "kubeadm-bootstrap", "kubeadm-control-plane"]
}

tilt-settings.json fields

allowed_contexts (Array, default=[]): A list of kubeconfig contexts Tilt is allowed to use. See the Tilt documentation on allow_k8s_contexts for more details.

default_registry (String, default=””): The image registry to use if you need to push images. See the Tilt documentation for more details.

provider_repos (Array[]String, default=[]): A list of paths to all the providers you want to use. Each provider must have a tilt-provider.json file describing how to build the provider.

enable_providers (Array[]String, default=[‘docker’]): A list of the providers to enable. See available providers for more details.

kind_cluster_name (String, default=”kind”): The name of the kind cluster to use when preloading images.

kustomize_substitutions (Map{String: String}, default={}): An optional map of substitutions for ${}-style placeholders in the provider’s yaml.

For example, if the yaml contains ${AWS_B64ENCODED_CREDENTIALS}, you could do the following:

"kustomize_substitutions": {
  "AWS_B64ENCODED_CREDENTIALS": "your credentials here"
}

An Azure Service Principal is needed for populating the controller manifests. This utilizes environment-based authentication.

  1. Save your Subscription ID
AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
az account set --subscription $AZURE_SUBSCRIPTION_ID
  2. Set the Service Principal name
AZURE_SERVICE_PRINCIPAL_NAME=ServicePrincipalName
  3. Save your Tenant ID, Client ID, Client Secret
AZURE_TENANT_ID=$(az account show --query tenantId --output tsv)
AZURE_CLIENT_SECRET=$(az ad sp create-for-rbac --name http://$AZURE_SERVICE_PRINCIPAL_NAME --query password --output tsv)
AZURE_CLIENT_ID=$(az ad sp show --id http://$AZURE_SERVICE_PRINCIPAL_NAME --query appId --output tsv)

Add the output of the following as a section in your tilt-settings.json:

cat <<EOF
"kustomize_substitutions": {
   "AZURE_SUBSCRIPTION_ID_B64": "$(echo "${AZURE_SUBSCRIPTION_ID}" | tr -d '\n' | base64 | tr -d '\n')",
   "AZURE_TENANT_ID_B64": "$(echo "${AZURE_TENANT_ID}" | tr -d '\n' | base64 | tr -d '\n')",
   "AZURE_CLIENT_SECRET_B64": "$(echo "${AZURE_CLIENT_SECRET}" | tr -d '\n' | base64 | tr -d '\n')",
   "AZURE_CLIENT_ID_B64": "$(echo "${AZURE_CLIENT_ID}" | tr -d '\n' | base64 | tr -d '\n')"
  }
EOF

You can generate a base64 version of your GCP json credentials file using:

base64 -i ~/path/to/gcp/credentials.json
"kustomize_substitutions": {
  "GCP_B64ENCODED_CREDENTIALS": "your credentials here"
}

deploy_cert_manager (Boolean, default=true): Deploys cert-manager into the cluster for use for webhook registration.

preload_images_for_kind (Boolean, default=true): Uses kind load docker-image to preload images into a kind cluster.

trigger_mode (String, default=auto): Optional setting to configure if tilt should automatically rebuild on changes. Set to manual to disable auto-rebuilding and require users to trigger rebuilds of individual changed components through the UI.
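Putting several of the optional fields together, a tilt-settings.json could look like this (all values are illustrative):

{
  "default_registry": "gcr.io/your-project-name-here",
  "provider_repos": ["../cluster-api-provider-aws"],
  "enable_providers": ["aws", "docker", "kubeadm-bootstrap", "kubeadm-control-plane"],
  "kind_cluster_name": "capi-dev",
  "deploy_cert_manager": true,
  "preload_images_for_kind": true,
  "trigger_mode": "manual"
}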

Run Tilt!

To launch your development environment, run

tilt up

This will open the command-line HUD as well as a web browser interface. You can monitor Tilt’s status in either location. After a brief amount of time, you should have a running development environment, and you should now be able to create a cluster. Please see the Usage section in the Quick Start for more information on creating workload clusters.

Available providers

The following providers are currently defined in the Tiltfile:

  • core: cluster-api itself (Cluster/Machine/MachineDeployment/MachineSet/KubeadmConfig/KubeadmControlPlane)
  • docker: Docker provider (DockerCluster/DockerMachine)

tilt-provider.json

A provider must supply a tilt-provider.json file describing how to build it. Here is an example:

{
    "name": "aws",
    "config": {
        "image": "gcr.io/k8s-staging-cluster-api-aws/cluster-api-aws-controller",
        "live_reload_deps": [
            "main.go", "go.mod", "go.sum", "api", "cmd", "controllers", "pkg"
        ]
    }
}

config fields

image: the image for this provider, as referenced in the kustomize files. This must match; otherwise, Tilt won’t build it.

live_reload_deps: a list of files/directories to watch. If any of them changes, Tilt rebuilds the manager binary for the provider and performs a live update of the running container.

additional_docker_helper_commands (String, default=””): Additional commands to be run in the helper image docker build. e.g.

RUN wget -qO- https://dl.k8s.io/v1.14.4/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://get.docker.com | sh

additional_docker_build_commands (String, default=””): Additional commands to be appended to the dockerfile. The manager image will use docker-slim, so to download files, use additional_docker_helper_commands (see above). e.g.

COPY --from=tilt-helper /usr/bin/docker /usr/bin/docker
COPY --from=tilt-helper /go/kubernetes/client/bin/kubectl /usr/bin/kubectl

Customizing Tilt

If you need to customize Tilt’s behavior, you can create files in cluster-api’s tilt.d directory. These files are ignored by git, so you can be assured that any files you place here will never be checked in to source control.

These files are included after the providers map has been defined and after all the helper function definitions. This is immediately before the “real work” happens.

Under the covers, a.k.a “the real work”

At a high level, the Tiltfile performs the following actions:

  1. Read tilt-settings.json
  2. Configure the allowed Kubernetes contexts
  3. Set the default registry
  4. Define the providers map
  5. Include user-defined Tilt files
  6. Deploy cert-manager
  7. Enable providers (core + what is listed in tilt-settings.json)
    1. Build the manager binary locally as a local_resource
    2. Invoke docker_build for the provider
    3. Invoke kustomize for the provider’s config/ directory

Live updates

Each provider in the providers map has a live_reload_deps list. This defines the files and/or directories that Tilt should monitor for changes. When a dependency is modified, Tilt rebuilds the provider’s manager binary on your local machine, copies the binary to the running container, and executes a restart script. This is significantly faster than rebuilding the container image for each change. It also helps keep the size of each development image as small as possible (the container images do not need the entire go toolchain, source code, module dependencies, etc.).

Testing Cluster API

Basic tests

Unit tests

Unit tests run very quickly. They don’t require any additional services to execute. They focus on individual pieces of logic, which means integration bugs can slip through. They are fast and great for getting the initial implementation worked out.

envtest

envtest is a testing environment that is provided by kubebuilder. This environment spins up a local instance of etcd and the kube-apiserver, which allows reconcilers to behave very much as they would in a real environment. These tests provide integration testing between reconcilers and Kubernetes.

Running unit and envtest tests

Using the test target through make will run all of the unit and envtest tests.
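For example, from the root of the repository:

make test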

Integration tests

Integration tests use a real cluster and real dependencies to run tests. The dependencies are managed manually and are not meant to be run locally. This is used during CI to ensure basic functionality works as expected.

See scripts/ci-integration.sh for more details.

End-to-end tests

The end-to-end tests are similar to the integration tests except they are designed to manage dependencies for you and have a more complete test using a docker provider to test the Cluster API mechanisms. The end-to-end tests can run locally without modifying the host machine or on a CI platform.

Running the end-to-end tests

Environment variables

The test run can be controlled to some degree by environment variables.

  • SKIP_RESOURCE_CLEANUP: Will prevent the end-to-end tests from cleaning up the local resources they create. This is useful when debugging an error that requires cluster inspection. After the error occurs and the tests fail, the cluster remains. You can then docker exec into a running container to get information to assist in debugging. This parameter is also useful if you simply want a local cluster to experiment with. After the tests run and succeed you are left with a cluster in a known working state for experimentation.

  • FOCUS: The FOCUS variable allows running of certain tests. The default on CI is FOCUS='Basic'. In order to run all the tests, set FOCUS='Basic|Full'. This will run the basic test and then run all other tests.

Types of end-to-end runs

make test-capd-e2e-full will build all manifests and provider images then use those manifests and images in the test run. This is great if you’re modifying API types in any of the providers.

make test-capd-e2e-images will only build the images for all providers then use those images during the e2e tests and use whatever manifests exist on disk. This is good if you’re only updating controller code.

make test-capd-e2e will only build the docker provider image and use whatever provider images already exist on disk. This is good if you’re working on the test framework itself. You’ll likely want to build the images at least once and then use this for a faster test run.

Examples

  • make test-capd-e2e-full SKIP_RESOURCE_CLEANUP=true
  • make test-capd-e2e-images
  • make test-capd-e2e FOCUS='Basic|Full'

Controllers

This page is still being written - stay tuned!

Bootstrap Controller

Bootstrapping is the process in which:

  1. A cluster is bootstrapped
  2. A machine is bootstrapped and takes on a role within a cluster

CAPBK is the reference bootstrap provider and is based on kubeadm. CAPBK codifies the steps for creating a cluster in multiple configurations.

See proposal for the full details on how the bootstrap process works.

Implementations

  • Kubeadm (Reference Implementation)

Cluster Controller

The Cluster controller’s main responsibilities are:

  • Setting an OwnerReference on the infrastructure object referenced in Cluster.Spec.InfrastructureRef.
  • Cleanup of all owned objects so that nothing is dangling after deletion.
  • Keeping the Cluster’s status in sync with the infrastructure Cluster’s status.
  • Creating a kubeconfig secret for workload clusters.

Contracts

Infrastructure Provider

The general expectation of an infrastructure provider is to provision the necessary infrastructure components needed to run a Kubernetes cluster. As an example, the AWS infrastructure provider, specifically the AWSCluster reconciler, will provision a VPC, some security groups, an ELB, a bastion instance and some other components all with AWS best practices baked in. Once that infrastructure is provisioned and ready to be used the AWSMachine reconciler takes over and provisions EC2 instances that will become a Kubernetes cluster through some bootstrap mechanism.

Required status fields

The InfrastructureCluster object must have a status object.

The spec object must have the following fields defined:

  • controlPlaneEndpoint - identifies the endpoint used to connect to the target’s cluster apiserver.

The status object must have the following fields defined:

  • ready - a boolean field that is true when the infrastructure is ready to be used.

Optional status fields

The status object may define several fields that do not affect functionality if missing:

  • failureReason - is a string that explains why a fatal error has occurred, if possible.
  • failureMessage - is a string that holds the message contained by the error.

Example:

kind: MyProviderCluster
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
spec:
  controlPlaneEndpoint:
    host: example.com
    port: 6443
status:
    ready: true

Secrets

If you are using the kubeadm bootstrap provider you do not have to provide Cluster API any secrets. It will generate all necessary CAs (certificate authorities) for you.

However, if you provide a CA for the cluster then Cluster API will be able to generate a kubeconfig secret. This is useful if you have a custom CA or do not want to use the bootstrap provider’s generated self-signed CA.

Secret name | Field name | Content
----------- | ---------- | -------
<cluster-name>-ca | tls.crt | base64 encoded TLS certificate in PEM format
<cluster-name>-ca | tls.key | base64 encoded TLS private key in PEM format
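A sketch of such a secret for a cluster named my-cluster (the namespace and the base64 data are placeholders; the secret is assumed to live in the same namespace as the Cluster object):

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-ca
  namespace: default
data:
  tls.crt: <base64 encoded TLS certificate in PEM format>
  tls.key: <base64 encoded TLS private key in PEM format>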

Alternatively, you can bypass Cluster API generating a kubeconfig entirely if you provide a kubeconfig secret formatted as described below.

Secret name | Field name | Content
----------- | ---------- | -------
<cluster-name>-kubeconfig | value | base64 encoded kubeconfig

Machine Controller

The Machine controller’s main responsibilities are:

  • Setting an OwnerReference on:
    • Each Machine object to the Cluster object.
    • The associated BootstrapConfig object.
    • The associated InfrastructureMachine object.
  • Copying data from BootstrapConfig.Status.BootstrapData to Machine.Spec.Bootstrap.Data if Machine.Spec.Bootstrap.Data is empty.
  • Setting NodeRefs to be able to associate machines and kubernetes nodes.
  • Deleting Nodes in the target cluster when the associated machine is deleted.
  • Cleanup of related objects.
  • Keeping the Machine’s Status object up to date with the InfrastructureMachine’s Status object.

Contracts

Cluster API

Cluster associations are made via labels.

Expected labels

What | Label | Value | Meaning
---- | ----- | ----- | -------
Machine | cluster.x-k8s.io/cluster-name | <cluster-name> | Identifies a machine as belonging to a cluster with the name <cluster-name>
Machine | cluster.x-k8s.io/control-plane | true | Identifies a machine as a control-plane node
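For illustration, a control plane Machine belonging to a cluster named my-cluster would carry labels like the following (the object name is a placeholder):

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-cluster-controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
    cluster.x-k8s.io/control-plane: "true"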

Bootstrap provider

The BootstrapConfig object must have a status object.

To override the bootstrap provider, a user (or external system) can directly set the Machine.Spec.Bootstrap.Data field. This will mark the machine as ready for bootstrapping and no bootstrap data will be copied from the BootstrapConfig object.

Required status fields

The status object must have several fields defined:

  • ready - a boolean field indicating the bootstrap config data is generated and ready for use.
  • dataSecretName - a string field referencing the name of the secret that stores the generated bootstrap data.

Optional status fields

The status object may define several fields that do not affect functionality if missing:

  • failureReason - a string field explaining why a fatal error has occurred, if possible.
  • failureMessage - a string field that holds the message contained by the error.

Example:

kind: MyBootstrapProviderConfig
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
status:
    ready: true
    dataSecretName: "MyBootstrapSecret"

Infrastructure provider

The InfrastructureMachine object must have a status object.

Required status fields

The status object must have several fields defined:

  • ready - a boolean field indicating if the infrastructure is ready to be used or not.
  • providerID - a cloud provider ID identifying the machine. This is often set by the cloud-provider-controller.

Optional status fields

The status object may define several fields that do not affect functionality if missing:

  • failureReason - is a string that explains why a fatal error has occurred, if possible.
  • failureMessage - is a string that holds the message contained by the error.

Example:

kind: MyMachine
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
status:
    ready: true
    providerID: cloud:////my-cloud-provider-id

Secrets

The Machine controller will create a secret or use an existing secret in the following format:

Secret name | Field name | Content
----------- | ---------- | -------
<cluster-name>-kubeconfig | value | base64 encoded kubeconfig that is authenticated with the child cluster

MachineSet

A MachineSet is an immutable abstraction over Machines.

Its main responsibilities are:

  • Adopting unowned Machines that aren’t assigned to a MachineSet
  • Adopting unmanaged Machines that aren’t assigned a Cluster
  • Booting a group of N machines
    • Monitoring the status of those booted machines

MachineDeployment

A MachineDeployment orchestrates deployments over a fleet of MachineSets.

Its main responsibilities are:

  • Adopting matching MachineSets not assigned to a MachineDeployment
  • Adopting matching MachineSets not assigned to a Cluster
  • Managing the Machine deployment process
    • Scaling up new MachineSets when changes are made
    • Scaling down old MachineSets when newer MachineSets replace them
  • Updating the status of MachineDeployment objects

MachineHealthCheck

A MachineHealthCheck is responsible for remediating unhealthy Machines.

Its main responsibilities are:

  • Checking the health of Nodes in the workload clusters against a list of unhealthy conditions
  • Remediating Machines whose Nodes have been determined to be unhealthy

Control Plane Controller

The Control Plane controller’s main responsibilities are:

  • Managing a set of machines that represent a Kubernetes control plane.
  • Providing information about the state of the control plane to downstream consumers.

A reference implementation is managed within the core Cluster API project as the Kubeadm control plane controller (KubeadmControlPlane). In this document, we refer to an example ImplementationControlPlane where not otherwise specified.

Contracts

Control Plane Provider

The general expectation of a control plane controller is to instantiate a Kubernetes control plane consisting of the following services:

Required Control Plane Services

  • etcd
  • Kubernetes API Server
  • Kubernetes Controller Manager
  • Kubernetes Scheduler

Optional Control Plane Services

  • Cloud controller manager
  • Cluster DNS (e.g. CoreDNS)
  • Service proxy (e.g. kube-proxy)

Prohibited Services

  • CNI - should be left to the user to apply once the control plane is instantiated.

Relationship to other Cluster API types

The ImplementationControlPlane must rely on the existence of spec.controlPlaneEndpoint in its parent Cluster object.

CRD contracts

Required spec fields for implementations using replicas

  • replicas - is an integer representing the number of desired replicas. In the KubeadmControlPlane, this represents the desired number of control plane machines.

  • scale subresource with the following signature:

scale:
  labelSelectorPath: .status.selector
  specReplicasPath: .spec.replicas
  statusReplicasPath: .status.replicas
status: {}

More information about the scale subresource can be found in the Kubernetes documentation.

Required status fields

The ImplementationControlPlane object must have a status object.

The status object must have the following fields defined:

Field | Type | Description | Implementation in Kubeadm Control Plane Controller
----- | ---- | ----------- | ---------------------------------------------------
initialized | Boolean | A boolean field that is true when the target cluster has completed initialization such that at least once, the target's control plane has been contactable. | Transitions to initialized when the controller detects that kubeadm has uploaded a kubeadm-config configmap, which occurs at the end of kubeadm provisioning.
ready | Boolean | Ready denotes that the target API Server is ready to receive requests. |

Required status fields for implementations using replicas

Where the ImplementationControlPlane has a concept of replicas, e.g. most high availability control planes, then the status object must have the following fields defined:

Field | Type | Description | Implementation in Kubeadm Control Plane Controller
----- | ---- | ----------- | ---------------------------------------------------
readyReplicas | Integer | Total number of fully running and ready control plane instances. | Is equal to the number of fully running and ready control plane machines
replicas | Integer | Total number of non-terminated control plane instances, i.e. the state machine for this instance of the control plane is able to transition to ready. | Is equal to the number of non-terminated control plane machines
selector | String | `selector` is the label selector in string format to avoid introspection by clients, and is used to provide the CRD-based integration for the scale subresource and additional integrations for things like kubectl describe. The string will be in the same format as the query-param syntax. More info about label selectors: http://kubernetes.io/docs/user-guide/labels#label-selectors |
unavailableReplicas | Integer | Total number of unavailable control plane instances targeted by this control plane, equal to the desired number of control plane instances - ready instances. | Total number of unavailable machines targeted by this control plane. This is the total number of machines that are still required for the deployment to have 100% available capacity. They may either be machines that are running but not yet ready or machines that still have not been created.
updatedReplicas | Integer | Total number of non-terminated machines targeted by this control plane that have the desired template spec. | Total number of non-terminated machines targeted by this control plane that have the desired template spec.

Optional status fields

The status object may define several fields that do not affect functionality if missing:

  • failureReason - is a string that explains why an error has occurred, if possible.
  • failureMessage - is a string that holds the message contained by the error.

Example usage

kind: KubeadmControlPlane
apiVersion: cluster.x-k8s.io/v1alpha3
metadata:
  name: kcp-1
  namespace: default
spec:
  infrastructureTemplate:
    name: kcp-infra-template
    namespace: default
  kubeadmConfigSpec:
    clusterConfiguration:
  version: v1.16.2

Provider Implementers

Cluster API v1alpha1 compared to v1alpha2

Providers

v1alpha1

Providers in v1alpha1 wrap behavior specific to an environment to create the infrastructure and bootstrap instances into Kubernetes nodes. Examples of environments that have integrated with Cluster API v1alpha1 include AWS, GCP, OpenStack, Azure, vSphere and others. The provider vendors in Cluster API’s controllers, registers its own actuators with the Cluster API controllers and runs a single manager to complete a Cluster API management cluster.

v1alpha2

v1alpha2 introduces two new providers and changes how the Cluster API is consumed. This means that in order to have a complete management cluster that is ready to build clusters you will need three managers.

  • Core (Cluster API)
  • Bootstrap (kubeadm)
  • Infrastructure (aws, gcp, azure, vsphere, etc)

Cluster API’s controllers are no longer vendored by providers. Instead, Cluster API offers its own independent controllers that are responsible for the core types:

  • Cluster
  • Machine
  • MachineSet
  • MachineDeployment

Bootstrap providers are an entirely new concept aimed at reducing the amount of kubeadm boilerplate that every provider reimplemented in v1alpha1. The Bootstrap provider is responsible for running a controller that generates data necessary to bootstrap an instance into a Kubernetes node (cloud-init, bash, etc).

v1alpha1 “providers” have become Infrastructure providers. The Infrastructure provider is responsible for generating actual infrastructure (networking, load balancers, instances, etc) for a particular environment that can consume bootstrap data to turn the infrastructure into a Kubernetes cluster.

Actuators

v1alpha1

Actuators are interfaces that the Cluster API controller calls. A provider pulls in the generic Cluster API controller and then registers actuators to run specific infrastructure logic (calls to the provider cloud).

v1alpha2

Actuators are not used in this version. Cluster API’s controllers are no longer shared across providers and therefore they do not need to know about the actuator interface. Instead, providers communicate to each other via Cluster API’s central objects, namely Machine and Cluster. When a user modifies a Machine with a reference, each provider will notice that update and respond in some way. For example, the Bootstrap provider may attach BootstrapData to a BootstrapConfig which will then be attached to the Machine object via Cluster API’s controllers or the Infrastructure provider may create a cloud instance for Kubernetes to run on.

clusterctl

v1alpha1

clusterctl was a command line tool packaged with v1alpha1 providers. The goal of this tool was to go from nothing to a running management cluster in whatever environment the provider was built for. For example, Cluster-API-Provider-AWS packaged a clusterctl that created a Kubernetes cluster in EC2 and installed the necessary controllers to respond to Cluster API’s APIs.

v1alpha2

clusterctl is likely becoming provider-agnostic, meaning one clusterctl is bundled with Cluster API and can be reused across providers. Work here is still being figured out, but providers will not be packaging their own clusterctl anymore.

Cluster API v1alpha2 compared to v1alpha3

Minimum Go version

  • The Go version used by Cluster API is now Go 1.13+

In-Tree bootstrap provider

  • Cluster API now ships with the Kubeadm Bootstrap provider (CABPK).
  • Update import paths from sigs.k8s.io/cluster-api-bootstrap-provider-kubeadm to sigs.k8s.io/cluster-api/bootstrap/kubeadm.

Machine spec.metadata field has been removed

  • The field has been unused for quite some time and didn’t have any function.
  • If you have been using this field to set up MachineSet or MachineDeployment, switch to MachineTemplate’s metadata instead.

Set spec.clusterName on Machine, MachineSet, MachineDeployments

  • The field is now required on all Cluster-dependent objects.
  • The cluster.x-k8s.io/cluster-name label is created automatically by each respective controller.

Context is now required for external.CloneTemplate function.

  • Pass a context as the first argument to calls to external.CloneTemplate.

Context is now required for external.Get function.

  • Pass a context as the first argument to calls to external.Get.

Cluster and Machine Status.Phase field values now start with an uppercase letter

  • To be consistent with Pod phases in k/k.
  • More details in https://github.com/kubernetes-sigs/cluster-api/pull/1532/files.

MachineClusterLabelName is renamed to ClusterLabelName

  • The variable name is renamed as this label isn’t applied only to machines anymore.
  • This label is also applied to external objects (bootstrap provider, infrastructure provider)

Cluster and Machine controllers now set cluster.x-k8s.io/cluster-name to external objects.

  • In addition to the OwnerReference back to the Cluster, a label is now added as well to any external objects, for example objects such as KubeadmConfig (bootstrap provider), AWSCluster (infrastructure provider), AWSMachine (infrastructure provider), etc.

The util/restmapper package has been removed

  • Controller runtime has native support for a DynamicRESTMapper, which is used by default when creating a new Manager.

Generated kubeconfig admin username changed from kubernetes-admin to <cluster-name>-admin

  • The kubeconfig secret shipped with Cluster API now uses the cluster name as prefix to the username field.

Changes to sigs.k8s.io/cluster-api/controllers/remote

  • The ClusterClient interface has been removed.
  • remote.NewClusterClient now returns a sigs.k8s.io/controller-runtime/pkg/client Client. It also requires client.ObjectKey instead of a cluster reference. The signature changed:
    • From: func NewClusterClient(c client.Client, cluster *clusterv1.Cluster) (ClusterClient, error)
    • To: func NewClusterClient(c client.Client, cluster client.ObjectKey, scheme *runtime.Scheme) (client.Client, error)
  • Same for the remote client ClusterClientGetter interface:
    • From: type ClusterClientGetter func(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, scheme *runtime.Scheme) (client.Client, error)
    • To: type ClusterClientGetter func(ctx context.Context, c client.Client, cluster client.ObjectKey, scheme *runtime.Scheme) (client.Client, error)
  • remote.RESTConfig now uses client.ObjectKey instead of a cluster reference. Signature change:
    • From: func RESTConfig(ctx context.Context, c client.Client, cluster *clusterv1.Cluster) (*restclient.Config, error)
    • To: func RESTConfig(ctx context.Context, c client.Client, cluster client.ObjectKey) (*restclient.Config, error)

Related changes to sigs.k8s.io/cluster-api/util

  • A helper function util.ObjectKey could be used to get client.ObjectKey for a Cluster, Machine etc.
  • The returned client is no longer configured for lazy discovery. Any consumers that attempt to create a client prior to the server being available will now see an error.
  • The getter for a cluster’s kubeconfig secret now requires client.ObjectKey instead of a cluster reference. Signature change:
    • From: func Get(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, purpose Purpose) (*corev1.Secret, error)
    • To: func Get(ctx context.Context, c client.Client, cluster client.ObjectKey, purpose Purpose) (*corev1.Secret, error)

A Machine is now considered a control plane if it has cluster.x-k8s.io/control-plane set, regardless of value

  • Previously examples and tests were setting/checking for the label to be set to true.
  • The function util.IsControlPlaneMachine was previously checking for any value other than empty string, while now we only check if the associated label exists.

Machine Status.Phase field set to Provisioned if a NodeRef is set but infrastructure is not ready

  • The machine Status.Phase is set back to Provisioned if the infrastructure is not ready. This is only applicable if the infrastructure node status does not have any errors set.

Metrics

  • The cluster and machine controllers expose the following prometheus metrics.

    • capi_cluster_control_plane_ready: Cluster control plane is ready if set to 1 and not if 0.
    • capi_cluster_infrastructure_ready: Cluster infrastructure is ready if set to 1 and not if 0.
    • capi_cluster_kubeconfig_ready: Cluster kubeconfig is ready if set to 1 and not if 0.
    • capi_cluster_failure_set: Cluster FailureMessage or FailureReason is set if metric is 1.
    • capi_machine_bootstrap_ready: Machine Bootstrap is ready if set to 1 and not if 0.
    • capi_machine_infrastructure_ready: Machine InfrastructureRef is ready if set to 1 and not if 0.
    • capi_machine_node_ready: Machine NodeRef is ready if set to 1 and not if 0.

    They can be accessed by default via the 8080 metrics port on the Cluster API controller manager.

Cluster Status.Phase transition to Provisioned additionally needs at least one APIEndpoint to be available.

  • Previously, the sole requirement to transition a Cluster’s Status.Phase to Provisioned was a true value of Status.InfrastructureReady. Now, there are two requirements: a true value of Status.InfrastructureReady and at least one entry in Status.APIEndpoints.
  • See https://github.com/kubernetes-sigs/cluster-api/pull/1721/files.

Status.ErrorReason and Status.ErrorMessage fields, populated to signal a fatal error has occurred, have been renamed in Cluster, Machine and MachineSet

  • Status.ErrorReason has been renamed to Status.FailureReason
  • Status.ErrorMessage has been renamed to Status.FailureMessage

The external.ErrorsFrom function has been renamed to external.FailuresFrom

  • The function has been modified to reflect the rename of Status.ErrorReason to Status.FailureReason and Status.ErrorMessage to Status.FailureMessage.

External objects will need to rename Status.ErrorReason and Status.ErrorMessage

  • As a follow up to the changes mentioned above - for the external.FailuresFrom function to retain its functionality, external objects (e.g., AWSCluster, AWSMachine, etc.) will need to rename the fields as well.
  • Status.ErrorReason should be renamed to Status.FailureReason
  • Status.ErrorMessage should be renamed to Status.FailureMessage

The field Cluster.Status.APIEndpoints is removed in favor of Cluster.Spec.ControlPlaneEndpoint

  • The slice in Cluster.Status has been removed and replaced by a single APIEndpoint field under Spec.
  • Infrastructure providers MUST expose a ControlPlaneEndpoint field in their cluster infrastructure resource at Spec.ControlPlaneEndpoint. They may optionally remove the Status.APIEndpoints field (Cluster API no longer uses it).

Data generated from a bootstrap provider is now stored in a secret.

  • The Cluster API Machine Controller no longer reconciles the bootstrap provider status.bootstrapData field, but instead looks at status.dataSecretName.
  • The Machine.Spec.Bootstrap.Data field is deprecated and will be removed in a future version.
  • Bootstrap providers must create a Secret in the bootstrap resource’s namespace and store the name in the bootstrap resource’s status.dataSecretName field.
    • The secret created by the bootstrap provider is of type cluster.x-k8s.io/secret.
    • On reconciliation, we suggest migrating from the deprecated field to a secret reference.
  • Infrastructure providers must look for the bootstrap data secret name in Machine.Spec.Bootstrap.DataSecretName and fall back to the deprecated Machine.Spec.Bootstrap.Data field if it is not set.
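
For illustration, an infrastructure provider's lookup might follow this order (a hedged sketch only; decoding the deprecated field as base64 mirrors common v1alpha2 provider behavior):

import (
	"context"
	"encoding/base64"
	"errors"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// getBootstrapData prefers the new DataSecretName reference and falls back to
// the deprecated Data field.
func getBootstrapData(ctx context.Context, c client.Client, machine *clusterv1.Machine) ([]byte, error) {
	if machine.Spec.Bootstrap.DataSecretName != nil {
		s := &corev1.Secret{}
		key := client.ObjectKey{Namespace: machine.Namespace, Name: *machine.Spec.Bootstrap.DataSecretName}
		if err := c.Get(ctx, key, s); err != nil {
			return nil, err
		}
		// The bootstrap secret stores its payload under the "value" key.
		return s.Data["value"], nil
	}
	if machine.Spec.Bootstrap.Data != nil {
		// Deprecated field, assumed to be base64-encoded.
		return base64.StdEncoding.DecodeString(*machine.Spec.Bootstrap.Data)
	}
	return nil, errors.New("bootstrap data is not yet available")
}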

The cloudinit module under the Kubeadm bootstrap provider has been made private

The cloudinit module has been moved to an internal directory as it is not designed to be a public interface consumed outside of the existing module.

Interface for Bootstrap Provider Consumers

  • Consumers of bootstrap configuration, Machine and eventually MachinePool, must adhere to a contract that defines a set of required fields used for coordination with the kubeadm bootstrap provider.
    • apiVersion to check for supported version/kind.
    • kind to check for supported version/kind.
    • metadata.labels["cluster.x-k8s.io/control-plane"] only present in the case of a control plane Machine.
    • spec.clusterName to retrieve the owning Cluster status.
    • spec.bootstrap.dataSecretName to know where to put bootstrap data with sensitive information. Consumers must also verify the secret type matches cluster.x-k8s.io/secret.
    • status.infrastructureReady to understand the state of the configuration consumer so the bootstrap provider can take appropriate action (e.g. renew bootstrap token).

Support the cluster.x-k8s.io/paused annotation and Cluster.Spec.Paused field.

  • A new annotation cluster.x-k8s.io/paused provides the ability to pause reconciliation on specific objects.
  • A new field Cluster.Spec.Paused provides the ability to pause reconciliation on a Cluster and all associated objects.
  • A helper function util.IsPaused can be used on any Kubernetes object associated with a Cluster and can be used during a Reconcile loop:
    // Return early if the object or Cluster is paused.
    if util.IsPaused(cluster, <object>) {
      logger.Info("Reconciliation is paused for this object")
      return ctrl.Result{}, nil
    }
    
  • Unless your controller is already watching Clusters, add a Watch to get notifications when Cluster.Spec.Paused field changes. In most cases, util.WatchOnClusterPaused and util.ClusterToObjectsMapper can be used like in the example below:
    // Add a watch on clusterv1.Cluster object for paused notifications.
    clusterToObjectFunc, err := util.ClusterToObjectsMapper(mgr.GetClient(), <List object here>, mgr.GetScheme())
    if err != nil {
      return err
    }
    if err := util.WatchOnClusterPaused(controller, clusterToObjectFunc); err != nil {
      return err
    }
    
    NB: You need to have cluster.x-k8s.io/cluster-name applied to all your objects for the mapper to function.

[OPTIONAL] Support failure domains.

An infrastructure provider may or may not implement the failure domains feature. Failure domains give Cluster API just enough information to spread machines out, reducing the risk of a target cluster failing due to a domain outage. This is particularly useful for control plane providers, which are now able to place control plane nodes in different domains.

An infrastructure provider can implement this by setting the InfraCluster.Status.FailureDomains field with a map of unique keys to failureDomainSpecs as well as respecting a set Machine.Spec.FailureDomain field when creating instances.
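
For illustration (fooCluster is a made-up infrastructure cluster resource and the zone names are hypothetical), a provider's reconciler might populate the map and honor the chosen domain like this:

// Advertise the domains the provider discovered; ControlPlane marks domains
// suitable for control plane instances.
fooCluster.Status.FailureDomains = clusterv1.FailureDomains{
	"us-east-1a": clusterv1.FailureDomainSpec{ControlPlane: true},
	"us-east-1b": clusterv1.FailureDomainSpec{ControlPlane: true},
	"us-east-1c": clusterv1.FailureDomainSpec{ControlPlane: false},
}

// When creating an instance for a Machine, honor the domain chosen by Cluster API.
zone := ""
if machine.Spec.FailureDomain != nil {
	zone = *machine.Spec.FailureDomain
}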

To support migration from failure domains that were previously specified through provider-specific resources, the Machine controller will update the Machine.Spec.FailureDomain field if spec.failureDomain is present and defined on the provider's infrastructure machine resource.

Please see the cluster and machine infrastructure provider specifications for more detail.

Refactor kustomize config/ folder to support multi-tenancy when using webhooks.

Pre-Requisites: Upgrade to CRD v1.

More details and background can be found in Issue #2275 and PR #2279.

Goals:

  • Have all webhook related components in the capi-webhook-system namespace.
    • Achieves multi-tenancy and guarantees that both CRD and webhook resources can live globally and can be patched in future iterations.
  • Run a new manager instance that ONLY runs webhooks and doesn’t install any reconcilers.

Steps:

  • In config/certmanager/

    • Patch
      • certificate.yaml: The secretName value MUST be set to $(SERVICE_NAME)-cert.
      • kustomization.yaml: Add the following to varReference
        - kind: Certificate
          group: cert-manager.io
          path: spec/secretName
        
  • In config/

    • Create
      • kustomization.yaml: This file is going to function as the new entrypoint to run kustomize build. PROVIDER_NAME is the name of your provider, e.g. aws. PROVIDER_TYPE is the type of your provider, e.g. control-plane, bootstrap, infrastructure.
        namePrefix: {{e.g. capa-, capi-, etc.}}
        
        commonLabels:
          cluster.x-k8s.io/provider: "{{PROVIDER_TYPE}}-{{PROVIDER_NAME}}"
        
        bases:
        - crd
        - webhook # Disable this if you're not using the webhook functionality.
        - default
        
        patchesJson6902:
        - target: # NOTE: This patch needs to be repeated for EACH CustomResourceDefinition you have under crd/bases.
            group: apiextensions.k8s.io
            version: v1
            kind: CustomResourceDefinition
            name: {{CRD_NAME_HERE}}
          path: patch_crd_webhook_namespace.yaml
        
      • patch_crd_webhook_namespace.yaml: This patch sets the conversion webhook namespace to capi-webhook-system.
        - op: replace
          path: "/spec/conversion/webhook/clientConfig/service/namespace"
          value: capi-webhook-system
        
  • In config/default

    • Create
      • namespace.yaml
        apiVersion: v1
        kind: Namespace
        metadata:
          name: system
        
    • Move
      • manager_image_patch.yaml to config/manager
      • manager_pull_policy.yaml to config/manager
      • manager_auth_proxy_patch.yaml to config/manager
      • manager_webhook_patch.yaml to config/webhook
      • webhookcainjection_patch.yaml to config/webhook
      • manager_label_patch.yaml to trash.
    • Patch
      • kustomization.yaml
        • Add under resources:
          resources:
          - namespace.yaml
          
        • Replace bases with:
          bases:
          - ../rbac
          - ../manager
          
        • Add under patchesStrategicMerge:
          patchesStrategicMerge:
          - manager_role_aggregation_patch.yaml
          
        • Remove ../crd from bases (now in config/kustomization.yaml).
        • Remove namePrefix (now in config/kustomization.yaml).
        • Remove commonLabels (now in config/kustomization.yaml).
        • Remove from patchesStrategicMerge:
          • manager_image_patch.yaml
          • manager_pull_policy.yaml
          • manager_auth_proxy_patch.yaml
          • manager_webhook_patch.yaml
          • webhookcainjection_patch.yaml
          • manager_label_patch.yaml
        • Remove from vars:
          • CERTIFICATE_NAMESPACE
          • CERTIFICATE_NAME
          • SERVICE_NAMESPACE
          • SERVICE_NAME
  • In config/manager

    • Patch
      • manager.yaml: Remove the Namespace object.
      • kustomization.yaml:
        • Add under patchesStrategicMerge:
          patchesStrategicMerge:
          - manager_image_patch.yaml
          - manager_pull_policy.yaml
          - manager_auth_proxy_patch.yaml
          
  • In config/webhook

    • Patch
      • kustomizeconfig.yaml
        • Add the following to varReference
          - kind: Deployment
            path: spec/template/spec/volumes/secret/secretName
          
      • kustomization.yaml
        • Add namespace: capi-webhook-system at the top of the file.
        • Under resources, add ../certmanager and ../manager.
        • Add at the bottom of the file:
          patchesStrategicMerge:
          - manager_webhook_patch.yaml
          - webhookcainjection_patch.yaml # Disable this value if you don't have any defaulting or validation webhook. If you don't know, you can check if the manifests.yaml file in the same directory has any contents.
          
          vars:
          - name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
            objref:
              kind: Certificate
              group: cert-manager.io
              version: v1alpha2
              name: serving-cert # this name should match the one in certificate.yaml
            fieldref:
              fieldpath: metadata.namespace
          - name: CERTIFICATE_NAME
            objref:
              kind: Certificate
              group: cert-manager.io
              version: v1alpha2
              name: serving-cert # this name should match the one in certificate.yaml
          - name: SERVICE_NAMESPACE # namespace of the service
            objref:
              kind: Service
              version: v1
              name: webhook-service
            fieldref:
              fieldpath: metadata.namespace
          - name: SERVICE_NAME
            objref:
              kind: Service
              version: v1
              name: webhook-service
          
      • manager_webhook_patch.yaml
        • Under containers find manager and add after name
          - "--metrics-addr=127.0.0.1:8080"
          - "--webhook-port=9443"
          
        • Under volumes find cert and replace secretName's value with $(SERVICE_NAME)-cert.
      • service.yaml
        • Remove the selector map, if any. The control-plane label is not needed anymore; a unique label is applied using commonLabels under config/kustomization.yaml.

In main.go

  • Default the webhook-port flag to 0
    flag.IntVar(&webhookPort, "webhook-port", 0,
    	"Webhook Server port, disabled by default. When enabled, the manager will only work as webhook server, no reconcilers are installed.")
    
  • The controller MUST register reconcilers if and only if webhookPort == 0.
  • The controller MUST register webhooks if and only if webhookPort != 0.
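
A minimal sketch of that branching (setupReconcilers and setupWebhooks are hypothetical helpers wrapping your SetupWithManager and SetupWebhookWithManager calls):

if webhookPort == 0 {
	// Reconciler-only instance: register controllers, no webhooks.
	setupReconcilers(mgr)
} else {
	// Webhook-only instance: register webhooks, no controllers.
	setupWebhooks(mgr)
}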

After all the changes above are performed, kustomize build MUST target config/, rather than config/default. Using your favorite editor, search for config/default in your repository and change the paths accordingly.

In addition, the Makefile often contains a sed replacement for manager_image_patch.yaml; this file has been moved from config/default to config/manager. Using your favorite editor, search for manager_image_patch in your repository and change the paths accordingly.

Apply the contract version label cluster.x-k8s.io/<version>: version1_version2_version3 to your CRDs

  • Providers MUST set cluster.x-k8s.io/<version> labels on all Custom Resource Definitions related to Cluster API starting with v1alpha3.
  • The label is a map from an API Version of Cluster API (contract) to your Custom Resource Definition versions.
    • The value is an underscore-delimited (_) list of versions.
    • Each value MUST point to an available version in your CRD Spec.
  • The label allows Cluster API controllers to perform automatic conversions for object references; the controllers will pick the last available version in the list if multiple versions are found.
  • To apply the label to CRDs it’s possible to use commonLabels in your kustomization.yaml file, usually in config/crd.

In this example we show how to map a particular Cluster API contract version to your own CRD using Kustomize’s commonLabels feature:

commonLabels:
  cluster.x-k8s.io/v1alpha2: v1alpha1
  cluster.x-k8s.io/v1alpha3: v1alpha2
  cluster.x-k8s.io/v1beta1: v1alphaX_v1beta1

Upgrade to CRD v1

  • Providers should upgrade their CRDs to v1
  • Minimum Kubernetes version supporting CRDv1 is v1.16
  • In the Makefile target generate-manifests, add the crdVersions=v1 option to the crd flag:
generate-manifests: $(CONTROLLER_GEN) ## Generate manifests e.g. CRD, RBAC etc.
  $(CONTROLLER_GEN) \
    paths=./api/... \
    crd:crdVersions=v1 \
    output:crd:dir=$(CRD_ROOT) \
    output:webhook:dir=$(WEBHOOK_ROOT) \
    webhook
  $(CONTROLLER_GEN) \
    paths=./controllers/... \
    output:rbac:dir=$(RBAC_ROOT) \
    rbac:roleName=manager-role
  • For all the CRDs in config/crd/bases, change the apiVersion of the CustomResourceDefinition to v1:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition

to

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
  • In the config/crd/kustomizeconfig.yaml file, change the path of the webhook
path: spec/conversion/webhookClientConfig/service/name

to

spec/conversion/webhook/clientConfig/service/name
  • Make the same v1beta1 to v1 apiVersion change in the patches under config/crd/patches
  • In the config/crd/patches/webhook_in_******.yaml file, add the conversionReviewVersions property to the CRD:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  conversion:
    strategy: Webhook
    webhookClientConfig:
    ...

to

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  ...
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
      ...

Add matchPolicy=Equivalent kubebuilder marker in webhooks

  • All providers should set “matchPolicy=Equivalent” kubebuilder marker for webhooks on all Custom Resource Definitions related to Cluster API starting with v1alpha3.
  • Specifying Equivalent ensures that webhooks continue to intercept the resources they expect when upgrades enable new versions of the resource in the API server.
  • E.g., matchPolicy is added to AWSMachine (/api/v1alpha3/awsmachine_webhook.go)
    // +kubebuilder:webhook:verbs=create;update,path=/validate-infrastructure-cluster-x-k8s-io-v1alpha3-awsmachine,mutating=false,failurePolicy=fail,matchPolicy=Equivalent,groups=infrastructure.cluster.x-k8s.io,resources=awsmachines,versions=v1alpha3,name=validation.awsmachine.infrastructure.cluster.x-k8s.io
    
  • Support for the matchPolicy marker was added in kubernetes-sigs/controller-tools. Providers need to update their controller-tools dependency to make use of it, usually in hack/tools/go.mod.

[OPTIONAL] Implement --feature-gates flag in main.go

  • Cluster API now ships with a new experimental package that lives under exp/ containing both API types and controllers.
  • Controllers and types should always live behind a gate defined under the feature/ package.
  • If you’re planning to support experimental features or API types in your provider or codebase, you can add feature.MutableGates.AddFlag(fs) to your main.go when initializing command line flags. For a full example, you can refer to the main.go in Cluster API or under bootstrap/kubeadm/.
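
A hedged sketch of that wiring (Cluster API's main.go uses pflag; initFlags is an illustrative helper, and MachinePool is just one example gate):

import (
	flag "github.com/spf13/pflag"

	"sigs.k8s.io/cluster-api/feature"
)

func initFlags(fs *flag.FlagSet) {
	// Exposes --feature-gates, e.g. --feature-gates=MachinePool=true.
	feature.MutableGates.AddFlag(fs)
}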

NOTE: To enable experimental features, users are required to set the same --feature-gates flag across providers. For example, if you want to enable MachinePool, you’ll have to enable it in both the Cluster API deployment and the Kubeadm Bootstrap Provider. In the future, we’ll revisit this user experience and provide a centralized way to configure gates across all Cluster API controllers (including providers).

clusterctl

clusterctl is now bundled with Cluster API, is provider-agnostic, and can be reused across providers. It is the recommended way to set up a management cluster, and it implements best practices to avoid common misconfigurations and to manage the life-cycle of deployed providers, e.g. upgrades.

See the clusterctl provider contract for more details.

Cluster Infrastructure Provider Specification

Overview

A cluster infrastructure provider supplies whatever prerequisites are necessary for running machines. Examples might include networking, load balancers, firewall rules, and so on.

Data Types

A cluster infrastructure provider must define an API type for “infrastructure cluster” resources. The type:

  1. Must belong to an API group served by the Kubernetes apiserver
  2. May be implemented as a CustomResourceDefinition, or as part of an aggregated apiserver
  3. Must be namespace-scoped
  4. Must have the standard Kubernetes “type metadata” and “object metadata”
  5. Must have a spec field with the following:
    1. Required fields:
      1. controlPlaneEndpoint (apiEndpoint): the endpoint for the cluster’s control plane. apiEndpoint is defined as:
        • host (string): DNS name or IP address
        • port (int32): TCP port
  6. Must have a status field with the following:
    1. Required fields:
      1. ready (boolean): indicates the provider-specific infrastructure has been provisioned and is ready
    2. Optional fields:
      1. failureReason (string): indicates there is a fatal problem reconciling the provider’s infrastructure; meant to be suitable for programmatic interpretation
      2. failureMessage (string): indicates there is a fatal problem reconciling the provider’s infrastructure; meant to be a more descriptive value than failureReason
      3. failureDomains (failureDomains): the failure domains that machines should be placed in. failureDomains is a map, defined as map[string]FailureDomainSpec. A unique key must be used for each FailureDomainSpec. FailureDomainSpec is defined as:
        • controlPlane (bool): indicates if failure domain is appropriate for running control plane instances.
        • attributes (map[string]string): arbitrary attributes for users to apply to a failure domain.
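
Putting the contract together, a provider's types might look roughly like the following (FooCluster is illustrative only; the failure reason type is simplified to a string):

// FooClusterSpec follows the required spec fields above.
type FooClusterSpec struct {
	// ControlPlaneEndpoint represents the endpoint used to communicate with the control plane.
	// +optional
	ControlPlaneEndpoint clusterv1.APIEndpoint `json:"controlPlaneEndpoint"`
}

// FooClusterStatus follows the required and optional status fields above.
type FooClusterStatus struct {
	// Ready indicates the provider-specific infrastructure has been provisioned.
	Ready bool `json:"ready"`

	// +optional
	FailureReason *string `json:"failureReason,omitempty"`
	// +optional
	FailureMessage *string `json:"failureMessage,omitempty"`
	// +optional
	FailureDomains clusterv1.FailureDomains `json:"failureDomains,omitempty"`
}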

Behavior

A cluster infrastructure provider must respond to changes to its “infrastructure cluster” resources. This process is typically called reconciliation. The provider must watch for new, updated, and deleted resources and respond accordingly.

The following diagram shows the typical logic for a cluster infrastructure provider:

Cluster infrastructure provider activity diagram

Normal resource

  1. If the resource does not have a Cluster owner, exit the reconciliation
    1. The Cluster API Cluster reconciler populates this based on the value in the Cluster's spec.infrastructureRef field.
  2. Add the provider-specific finalizer, if needed
  3. Reconcile provider-specific cluster infrastructure
    1. If any errors are encountered, exit the reconciliation
  4. If the provider created a load balancer for the control plane, record its hostname or IP in spec.controlPlaneEndpoint
  5. Set status.ready to true
  6. Set status.failureDomains based on available provider failure domains (optional)
  7. Patch the resource to persist changes

Deleted resource

  1. If the resource has a Cluster owner
    1. Perform deletion of provider-specific cluster infrastructure
    2. If any errors are encountered, exit the reconciliation
  2. Remove the provider-specific finalizer from the resource
  3. Patch the resource to persist changes
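
A hedged sketch of the finalizer handling common to both flows above, using controller-runtime's controllerutil helpers (infrav1.ClusterFinalizer and r.reconcileDelete are hypothetical names):

if fooCluster.ObjectMeta.DeletionTimestamp.IsZero() {
	// Normal flow: make sure our finalizer is present before creating infrastructure.
	controllerutil.AddFinalizer(fooCluster, infrav1.ClusterFinalizer)
} else {
	// Deletion flow: tear down provider infrastructure, then release the finalizer.
	if err := r.reconcileDelete(ctx, fooCluster); err != nil {
		return ctrl.Result{}, err
	}
	controllerutil.RemoveFinalizer(fooCluster, infrav1.ClusterFinalizer)
}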

RBAC

Provider controller

A cluster infrastructure provider must have RBAC permissions for the types it defines. If you are using kubebuilder to generate new API types, these permissions should be configured for you automatically. For example, the AWS provider has the following configuration for its AWSCluster type:

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsclusters/status,verbs=get;update;patch

A cluster infrastructure provider may also need RBAC permissions for other types, such as Cluster. If you need read-only access, you can limit the permissions to get, list, and watch. The AWS provider has the following configuration for retrieving Cluster resources:

// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch

Cluster API controllers

The Cluster API controller for Cluster resources is configured with full read/write RBAC permissions for all resources in the infrastructure.cluster.x-k8s.io API group. This group represents all cluster infrastructure providers for SIG Cluster Lifecycle-sponsored provider subprojects. If you are writing a provider not sponsored by the SIG, you must grant full read/write RBAC permissions for the “infrastructure cluster” resource in your API group to the Cluster API manager’s ServiceAccount. ClusterRoles can be granted using the aggregation label cluster.x-k8s.io/aggregate-to-manager: "true". The following is an example ClusterRole for a FooCluster resource:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capi-foo-clusters
  labels:
    cluster.x-k8s.io/aggregate-to-manager: "true"
rules:
- apiGroups:
  - infrastructure.foo.com
  resources:
  - fooclusters
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

Note, the write permissions allow the Cluster controller to set owner references and labels on the “infrastructure cluster” resources; they are not used for general mutations of these resources.

Machine Infrastructure Provider Specification

Overview

A machine infrastructure provider is responsible for managing the lifecycle of provider-specific machine instances. These may be physical or virtual instances, and they represent the infrastructure for Kubernetes nodes.

Data Types

A machine infrastructure provider must define an API type for “infrastructure machine” resources. The type:

  1. Must belong to an API group served by the Kubernetes apiserver
  2. May be implemented as a CustomResourceDefinition, or as part of an aggregated apiserver
  3. Must be namespace-scoped
  4. Must have the standard Kubernetes “type metadata” and “object metadata”
  5. Must have a spec field with the following:
    1. Required fields:
      1. providerID (string): the identifier for the provider’s machine instance
    2. Optional fields:
      1. failureDomain (string): the string identifier of the failure domain the instance is running in, kept for backwards compatibility and for migrating to the v1alpha3 FailureDomain support (where the failure domain is specified in Machine.Spec.FailureDomain). This field is meant to be temporary, to aid in the migration of data that was previously defined on the provider type; providers are expected to remove it in the next version that introduces breaking API changes, favoring the value defined on Machine.Spec.FailureDomain instead. If supporting conversions from previous types, the provider will need to convert the previously used provider-specific field into failureDomain to support the automated migration path.
  6. Must have a status field with the following:
    1. Required fields:
      1. ready (boolean): indicates the provider-specific infrastructure has been provisioned and is ready
    2. Optional fields:
      1. failureReason (string): indicates there is a fatal problem reconciling the provider’s infrastructure; meant to be suitable for programmatic interpretation
      2. failureMessage (string): indicates there is a fatal problem reconciling the provider’s infrastructure; meant to be a more descriptive value than failureReason
      3. addresses (MachineAddress): a list of the host names, external IP addresses, internal IP addresses, external DNS names, and/or internal DNS names for the provider’s machine instance. MachineAddress is defined as: - type (string): one of Hostname, ExternalIP, InternalIP, ExternalDNS, InternalDNS - address (string)
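
Mirroring the cluster example, an illustrative (and simplified) FooMachine shape:

// FooMachineSpec follows the required and optional spec fields above.
type FooMachineSpec struct {
	// ProviderID is the unique identifier of the machine instance; the exact format is provider-specific.
	// +optional
	ProviderID *string `json:"providerID,omitempty"`
	// +optional
	FailureDomain *string `json:"failureDomain,omitempty"`
}

// FooMachineStatus follows the required and optional status fields above.
type FooMachineStatus struct {
	// Ready indicates the provider-specific infrastructure has been provisioned.
	Ready bool `json:"ready"`

	// +optional
	Addresses []clusterv1.MachineAddress `json:"addresses,omitempty"`
	// +optional
	FailureReason *string `json:"failureReason,omitempty"`
	// +optional
	FailureMessage *string `json:"failureMessage,omitempty"`
}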

Behavior

A machine infrastructure provider must respond to changes to its “infrastructure machine” resources. This process is typically called reconciliation. The provider must watch for new, updated, and deleted resources and respond accordingly.

The following diagram shows the typical logic for a machine infrastructure provider:

Machine infrastructure provider activity diagram

Normal resource

  1. If the resource does not have a Machine owner, exit the reconciliation
    1. The Cluster API Machine reconciler populates this based on the value in the Machine's spec.infrastructureRef field
  2. If the resource has status.failureReason or status.failureMessage set, exit the reconciliation
  3. If the Cluster to which this resource belongs cannot be found, exit the reconciliation
  4. Add the provider-specific finalizer, if needed
  5. If the associated Cluster's status.infrastructureReady is false, exit the reconciliation
  6. If the associated Machine's spec.bootstrap.dataSecretName is nil, exit the reconciliation
  7. Reconcile provider-specific machine infrastructure
    1. If any errors are encountered:
      1. If they are terminal failures, set status.failureReason and status.failureMessage
      2. Exit the reconciliation
    2. If this is a control plane machine, register the instance with the provider’s control plane load balancer (optional)
  8. Set spec.providerID to the provider-specific identifier for the provider’s machine instance
  9. Set status.ready to true
  10. Set status.addresses to the provider-specific set of instance addresses (optional)
  11. Set spec.failureDomain to the provider-specific failure domain the instance is running in (optional)
  12. Patch the resource to persist changes
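
A hedged sketch of the early exits in steps 5 and 6 above (machine, cluster, and logger are assumed to have been set up earlier in the Reconcile function):

// Wait for the cluster infrastructure before creating machine infrastructure.
if !cluster.Status.InfrastructureReady {
	logger.Info("Cluster infrastructure is not ready yet")
	return ctrl.Result{}, nil
}

// Wait for the bootstrap provider to publish its data secret.
if machine.Spec.Bootstrap.DataSecretName == nil {
	logger.Info("Bootstrap data secret is not yet available")
	return ctrl.Result{}, nil
}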

Deleted resource

  1. If the resource has a Machine owner
    1. Perform deletion of provider-specific machine infrastructure
    2. If this is a control plane machine, deregister the instance from the provider’s control plane load balancer (optional)
    3. If any errors are encountered, exit the reconciliation
  2. Remove the provider-specific finalizer from the resource
  3. Patch the resource to persist changes

RBAC

Provider controller

A machine infrastructure provider must have RBAC permissions for the types it defines. If you are using kubebuilder to generate new API types, these permissions should be configured for you automatically. For example, the AWS provider has the following configuration for its AWSMachine type:

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsmachines,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsmachines/status,verbs=get;update;patch

A machine infrastructure provider may also need RBAC permissions for other types, such as Cluster and Machine. If you need read-only access, you can limit the permissions to get, list, and watch. You can use the following configuration for retrieving Cluster and Machine resources:

// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=machines;machines/status,verbs=get;list;watch

Cluster API controllers

The Cluster API controller for Machine resources is configured with full read/write RBAC permissions for all resources in the infrastructure.cluster.x-k8s.io API group. This group represents all machine infrastructure providers for SIG Cluster Lifecycle-sponsored provider subprojects. If you are writing a provider not sponsored by the SIG, you must grant full read/write RBAC permissions for the “infrastructure machine” resource in your API group to the Cluster API manager’s ServiceAccount. ClusterRoles can be granted using the aggregation label cluster.x-k8s.io/aggregate-to-manager: "true". The following is an example ClusterRole for a FooMachine resource:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capi-foo-machines
  labels:
    cluster.x-k8s.io/aggregate-to-manager: "true"
rules:
- apiGroups:
  - infrastructure.foo.com
  resources:
  - foomachines
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

Note, the write permissions allow the Machine controller to set owner references and labels on the “infrastructure machine” resources; they are not used for general mutations of these resources.

Bootstrap Provider Specification

Overview

A bootstrap provider generates bootstrap data that is used to bootstrap a Kubernetes node.

Data Types

Bootstrap API resource

A bootstrap provider must define an API type for bootstrap resources. The type:

  1. Must belong to an API group served by the Kubernetes apiserver
  2. May be implemented as a CustomResourceDefinition, or as part of an aggregated apiserver
  3. Must be namespace-scoped
  4. Must have the standard Kubernetes “type metadata” and “object metadata”
  5. Should have a spec field containing fields relevant to the bootstrap provider
  6. Must have a status field with the following:
    1. Required fields:
      1. ready (boolean): indicates the bootstrap data has been generated and is ready
      2. dataSecretName (string): the name of the secret that stores the generated bootstrap data
    2. Optional fields:
      1. failureReason (string): indicates there is a fatal problem reconciling the bootstrap data; meant to be suitable for programmatic interpretation
      2. failureMessage (string): indicates there is a fatal problem reconciling the bootstrap data; meant to be a more descriptive value than failureReason

Note: because the dataSecretName is part of status, this value must be deterministically recreatable from the data in the Cluster, Machine, and/or bootstrap resource. If the name is randomly generated, it is not always possible to move the resource and its associated secret from one management cluster to another.

Bootstrap Secret

The Secret containing bootstrap data must:

  1. Use the API resource’s status.dataSecretName for its name
  2. Have the label cluster.x-k8s.io/cluster-name set to the name of the cluster
  3. Have a controller owner reference to the API resource
  4. Have a single key, value, containing the bootstrap data
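
A hedged sketch of creating that Secret (config is the bootstrap resource, cluster the owning Cluster, bootstrapData the generated payload; bootstrapv1.GroupVersion and the FooConfig kind are made-up names):

secret := &corev1.Secret{
	ObjectMeta: metav1.ObjectMeta{
		// Deterministic name derived from the bootstrap resource.
		Name:      config.Name,
		Namespace: config.Namespace,
		Labels: map[string]string{
			clusterv1.ClusterLabelName: cluster.Name,
		},
		OwnerReferences: []metav1.OwnerReference{
			*metav1.NewControllerRef(config, bootstrapv1.GroupVersion.WithKind("FooConfig")),
		},
	},
	Type: corev1.SecretType("cluster.x-k8s.io/secret"),
	Data: map[string][]byte{
		"value": bootstrapData,
	},
}
if err := r.Client.Create(ctx, secret); err != nil {
	return ctrl.Result{}, err
}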

Behavior

A bootstrap provider must respond to changes to its bootstrap resources. This process is typically called reconciliation. The provider must watch for new, updated, and deleted resources and respond accordingly.

The following diagram shows the typical logic for a bootstrap provider:

Bootstrap provider activity diagram

  1. If the resource does not have a Machine owner, exit the reconciliation
    1. The Cluster API Machine reconciler populates this based on the value in the Machine's spec.bootstrap.configRef field.
  2. If the resource has status.failureReason or status.failureMessage set, exit the reconciliation
  3. If the Cluster to which this resource belongs cannot be found, exit the reconciliation
  4. Deterministically generate the name for the bootstrap data secret
  5. Try to retrieve the Secret with the name from the previous step
    1. If it does not exist, generate bootstrap data and create the Secret
  6. Set status.dataSecretName to the generated name
  7. Set status.ready to true
  8. Patch the resource to persist changes

RBAC

Provider controller

A bootstrap provider must have RBAC permissions for the types it defines, as well as for the bootstrap data Secret resources it manages. If you are using kubebuilder to generate new API types, these permissions should be configured for you automatically. For example, the Kubeadm bootstrap provider has the following configuration for its KubeadmConfig type:

// +kubebuilder:rbac:groups=bootstrap.cluster.x-k8s.io,resources=kubeadmconfigs;kubeadmconfigs/status,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch;create;update;patch;delete

A bootstrap provider may also need RBAC permissions for other types, such as Cluster. If you need read-only access, you can limit the permissions to get, list, and watch. The following configuration can be used for retrieving Cluster resources:

// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch

Cluster API controllers

The Cluster API controller for Machine resources is configured with full read/write RBAC permissions for all resources in the bootstrap.cluster.x-k8s.io API group. This group represents all bootstrap providers for SIG Cluster Lifecycle-sponsored provider subprojects. If you are writing a provider not sponsored by the SIG, you must add new RBAC permissions for the Cluster API manager-role role, granting it full read/write access to the bootstrap resource in your API group.

Note, the write permissions allow the Machine controller to set owner references and labels on the bootstrap resources; they are not used for general mutations of these resources.

Overview

In order to demonstrate how to develop a new Cluster API provider we will use kubebuilder to create an example provider. For more information on kubebuilder and CRDs in general we highly recommend reading the Kubebuilder Book. Much of the information here was adapted directly from it.

This is an infrastructure provider - tasked with managing provider-specific resources for clusters and machines. There are also bootstrap providers, which turn machines into Kubernetes nodes.

Prerequisites

tl;dr

# Install kubectl (macOS, via Homebrew)
brew install kubernetes-cli

# Install kustomize (macOS, via Homebrew)
brew install kustomize

# Install kubectl (Linux)
KUBECTL_VERSION=$(curl -sf https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -fLO https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl

# Install kustomize (Linux)
OS_TYPE=linux
curl -sf https://api.github.com/repos/kubernetes-sigs/kustomize/releases/latest |\
  grep browser_download |\
  grep ${OS_TYPE} |\
  cut -d '"' -f 4 |\
  xargs curl -f -O -L
mv kustomize_*_${OS_TYPE}_amd64 /usr/local/bin/kustomize
chmod u+x /usr/local/bin/kustomize
# Install Kubebuilder
os=$(go env GOOS)
arch=$(go env GOARCH)

# download kubebuilder and extract it to tmp
curl -sL https://go.kubebuilder.io/dl/2.1.0/${os}/${arch} | tar -xz -C /tmp/

# move to a long-term location and put it on your path
# (you'll need to set the KUBEBUILDER_ASSETS env var if you put it somewhere else)
sudo mv /tmp/kubebuilder_2.1.0_${os}_${arch} /usr/local/kubebuilder
export PATH=$PATH:/usr/local/kubebuilder/bin

Repository Naming

The naming convention for new Cluster API provider repositories is generally of the form cluster-api-provider-${env}, where ${env} is a, possibly short, name for the environment in question. For example, cluster-api-provider-gcp is an implementation for Google Cloud Platform, and cluster-api-provider-aws is one for Amazon Web Services. Note that an environment may refer to a cloud, bare metal, virtual machines, or any other infrastructure hosting Kubernetes. Finally, a single environment may include more than one variant. For example, cluster-api-provider-aws may include both an implementation based on EC2 and one based on their hosted EKS solution.

A note on Acronyms

Because these names end up being so long, developers of Cluster API frequently refer to providers by acronyms. Cluster API itself becomes CAPI, pronounced “Cappy.” cluster-api-provider-aws is CAPA, pronounced “KappA.” cluster-api-provider-gcp is CAPG, pronounced “Cap Gee,” and so on.

Resource Naming

For the purposes of this guide we will create a provider for a service named mailgun. Therefore the name of the repository will be cluster-api-provider-mailgun.

Every Kubernetes resource has a Group, Version and Kind that uniquely identifies it.

  • The resource Group is similar to package in a language. It disambiguates different APIs that may happen to have identically named Kinds. Groups often contain a domain name, such as k8s.io. The domain for Cluster API resources is cluster.x-k8s.io, and infrastructure providers generally use infrastructure.cluster.x-k8s.io.
  • The resource Version defines the stability of the API and its backward compatibility guarantees. Examples include v1alpha1, v1beta1, v1, etc. and are governed by the Kubernetes API Deprecation Policy [1]. Your provider should expect to abide by the same policies.
  • The resource Kind is the name of the objects we’ll be creating and modifying. In this case it’s MailgunMachine and MailgunCluster.

For example, our cluster object will be:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MailgunCluster

[1] https://kubernetes.io/docs/reference/using-api/deprecation-policy/

Create a repository

mkdir cluster-api-provider-mailgun
cd cluster-api-provider-mailgun
git init

You’ll then need to set up go modules

$ go mod init github.com/liztio/cluster-api-provider-mailgun
go: creating new go.mod: module github.com/liztio/cluster-api-provider-mailgun

Generate scaffolding

kubebuilder init --domain cluster.x-k8s.io

kubebuilder init will create the basic repository layout, including a simple containerized manager. It will also initialize the external go libraries that will be required to build your project.

Commit your changes so far:

git commit -m "Generate scaffolding."

Generate provider resources for Clusters and Machines

Here you will be asked if you want to generate resources and controllers. You’ll want both of them:

kubebuilder create api --group infrastructure --version v1alpha3 --kind MailgunCluster
kubebuilder create api --group infrastructure --version v1alpha3 --kind MailgunMachine
Create Resource under pkg/apis [y/n]?
y
Create Controller under pkg/controller [y/n]?
y

Add Status subresource

The status subresource lets Spec and Status requests for custom resources be addressed separately so requests don’t conflict with each other. It also lets you split RBAC rules between Spec and Status. It’s stable in Kubernetes as of v1.16, but you will have to manually enable it in Kubebuilder.

Add the subresource:status annotation to your <provider>cluster_types.go and <provider>machine_types.go files:

// +kubebuilder:subresource:status
// +kubebuilder:object:root=true

// MailgunCluster is the Schema for the mailgunclusters API
type MailgunCluster struct {
// +kubebuilder:subresource:status
// +kubebuilder:object:root=true

// MailgunMachine is the Schema for the mailgunmachines API
type MailgunMachine struct {

And regenerate the CRDs:

make manifests

Commit your changes

git add .
git commit -m "Generate Cluster and Machine resources."

Defining your API

The API generated by Kubebuilder is just a shell; your actual API will likely have more fields defined on it.

Kubernetes has a lot of conventions and requirements around API design. The Kubebuilder docs have some helpful hints on how to design your types.

Let’s take a look at what was generated for us:

// MailgunClusterSpec defines the desired state of MailgunCluster
type MailgunClusterSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

// MailgunClusterStatus defines the observed state of MailgunCluster
type MailgunClusterStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

Our API is based on Mailgun, so we’re going to have some email based fields:

type Priority string

const (
	// PriorityUrgent means do this right away
	PriorityUrgent = Priority("Urgent")

	// PriorityExtremelyUrgent means do this immediately
	PriorityExtremelyUrgent = Priority("ExtremelyUrgent")

	// PriorityBusinessCritical means you absolutely need to do this now
	PriorityBusinessCritical = Priority("BusinessCritical")
)

// MailgunClusterSpec defines the desired state of MailgunCluster
type MailgunClusterSpec struct {
	// Priority is how quickly you need this cluster
	Priority Priority `json:"priority"`
	// Request is where you ask extra nicely
	Request string `json:"request"`
	// Requester is the email of the person sending the request
	Requester string `json:"requester"`
}

// MailgunClusterStatus defines the observed state of MailgunCluster
type MailgunClusterStatus struct {
	// MessageID is set to the message ID from Mailgun when our message has been sent
	MessageID *string `json:"response"`
}

As the deleted comments request, run make manager manifests to regenerate some of the generated data files afterwards.

git add .
git commit -m "Added cluster types"

Controllers and Reconciliation

From the kubebuilder book:

Controllers are the core of Kubernetes, and of any operator.

It’s a controller’s job to ensure that, for any given object, the actual state of the world (both the cluster state, and potentially external state like running containers for Kubelet or loadbalancers for a cloud provider) matches the desired state in the object. Each controller focuses on one root Kind, but may interact with other Kinds.

We call this process reconciling.

Right now, we can create objects in our API but we won’t do anything about it. Let’s fix that.

Let’s see the Code

Kubebuilder has created our first controller in controllers/mailguncluster_controller.go. Let’s take a look at what got generated:

// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Log logr.Logger
}

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch

func (r *MailgunClusterReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	_ = context.Background()
	_ = r.Log.WithValues("mailguncluster", req.NamespacedName)

	// your logic here

	return ctrl.Result{}, nil
}

RBAC Roles

Those // +kubebuilder... lines tell kubebuilder to generate RBAC roles so the manager we’re writing can access its own managed resources. We also need to add roles that will let it retrieve (but not modify) Cluster API objects. So we’ll add another annotation for that:

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters;clusters/status,verbs=get;list;watch

Make sure to add the annotation to both MailgunClusterReconciler and MailgunMachineReconciler.

Regenerate the RBAC roles after you are done:

make manifests

State

Let's focus on that struct first. A word of warning: no guarantees are made about parallel access, whether on one machine or across multiple machines. That means you should not store any important state in memory: if you need it, write it into a Kubernetes object and store it.

We’re going to be sending mail, so let’s add a few extra fields:

// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Log       logr.Logger
	Mailgun   mailgun.Mailgun
	Recipient string
}

Reconciliation

Now it’s time for our Reconcile function. Reconcile is only passed a name, not an object, so let’s retrieve ours.

Here’s a naive example:

func (r *MailgunClusterReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	_ = r.Log.WithValues("mailguncluster", req.NamespacedName)

	var mgCluster infrastructurev1alpha3.MailgunCluster
	if err := r.Get(ctx, req.NamespacedName, &mgCluster); err != nil {
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
}

By returning an error, we ask the controller to call Reconcile() again. That may not always be what we want - what if the object has been deleted? So let's check for that:

var mgCluster infrastructurev1alpha3.MailgunCluster
if err := r.Get(ctx, req.NamespacedName, &mgCluster); err != nil {
    // 	import apierrors "k8s.io/apimachinery/pkg/api/errors"
    if apierrors.IsNotFound(err) {
        return ctrl.Result{}, nil
    }
    return ctrl.Result{}, err
}

Now, if this were any old kubebuilder project we'd be done, but in our case we have one more object to retrieve. Cluster API splits a cluster into two objects: the provider-specific infrastructure cluster (our MailgunCluster) and the Cluster defined by Cluster API itself. We already have the former, so we'll want to retrieve the owning Cluster as well. Luckily, Cluster API provides a helper for us.

cluster, err := util.GetOwnerCluster(ctx, r.Client, mgCluster.ObjectMeta)
if err != nil {
    return ctrl.Result{}, err
}

client-go versions

At the time this document was written, kubebuilder pulls client-go version 1.14.1 into go.mod (it looks like k8s.io/client-go v11.0.1-0.20190409021438-1a26190bd76a+incompatible).

If you encounter an error when compiling like:

../pkg/mod/k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
    have (*versioned.Decoder)
    want (watch.Decoder, watch.Reporter)`

You may need to bump client-go. At time of writing, that means the release corresponding to Kubernetes 1.15.

The fun part

More Documentation: The Kubebuilder Book has some excellent documentation on many things, including how to write good controllers!

Now that we have our objects, it's time to do something with them! This is where your provider really comes into its own. In our case, let's try sending some mail:

subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)

msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
_, _, err = r.Mailgun.Send(msg)
if err != nil {
    return ctrl.Result{}, err
}

Idempotency

But wait, this isn’t quite right. Reconcile() gets called periodically for updates, and any time any updates are made. That would mean we’re potentially sending an email every few minutes! This is an important thing about controllers: they need to be idempotent.

So in our case, we’ll store the result of sending a message, and then check to see if we’ve sent one before.

if mgCluster.Status.MessageID != nil {
    // We already sent a message, so skip reconciliation
    return ctrl.Result{}, nil
}

subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)

msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
_, msgID, err := r.Mailgun.Send(msg)
if err != nil {
    return ctrl.Result{}, err
}

// patch from sigs.k8s.io/cluster-api/util/patch
helper, err := patch.NewHelper(&mgCluster, r.Client)
if err != nil {
    return ctrl.Result{}, err
}
mgCluster.Status.MessageID = &msgID
if err := helper.Patch(ctx, &mgCluster); err != nil {
    return ctrl.Result{}, errors.Wrapf(err, "couldn't patch cluster %q", mgCluster.Name)
}

return ctrl.Result{}, nil

A note about the status

Usually, the Status field should only contain fields that can be computed from existing state. Things like whether a machine is running can be retrieved from an API, and cluster status can be queried by a healthcheck. The message ID can't be recomputed from observed state, so strictly speaking it should go in the Spec part of the object. Anything that can't be recreated, either with some sort of deterministic generation method or by querying/observing actual state, needs to be in Spec. This is to support proper disaster recovery of resources. If you have a backup of your cluster and you want to restore it, Kubernetes doesn't let you restore both spec & status together.

We use the MessageID as a Status here to illustrate how one might issue status updates in a real application.

Update main.go with your new fields

If you added fields to your reconciler, you’ll need to update main.go.

Right now, it probably looks like this:

if err = (&controllers.MailgunClusterReconciler{
    Client: mgr.GetClient(),
    Log:    ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "MailgunCluster")
    os.Exit(1)
}

Let’s add our configuration. We’re going to use environment variables for this:

domain := os.Getenv("MAILGUN_DOMAIN")
if domain == "" {
    setupLog.Info("missing required env MAILGUN_DOMAIN")
    os.Exit(1)
}

apiKey := os.Getenv("MAILGUN_API_KEY")
if apiKey == "" {
    setupLog.Info("missing required env MAILGUN_API_KEY")
    os.Exit(1)
}

recipient := os.Getenv("MAIL_RECIPIENT")
if recipient == "" {
    setupLog.Info("missing required env MAIL_RECIPIENT")
    os.Exit(1)
}

mg := mailgun.NewMailgun(domain, apiKey)

if err = (&controllers.MailgunClusterReconciler{
    Client:    mgr.GetClient(),
    Log:       ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
    Mailgun:   mg,
    Recipient: recipient,
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "MailgunCluster")
    os.Exit(1)
}

If you have some other state, you’ll want to initialize it here!

Building, Running, Testing

Docker Image Name

The patch in config/manager/manager_image_patch.yaml will be applied to the manager pod. Right now there is a placeholder IMAGE_URL, which you will need to change to your actual image.

Development Images

It's likely that you will want one image location and tag for releases, and another during development.

The approach most Cluster API projects take is a Makefile that uses sed to replace the image URL on demand during development.

Deployment

cert-manager

Cluster API uses cert-manager to manage the certificates it needs for its webhooks. Before you apply Cluster API's YAML, you should install cert-manager:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/<version>/cert-manager.yaml

Cluster API

Before you can deploy the infrastructure controller, you’ll need to deploy Cluster API itself.

You can use a precompiled manifest from the release page, or clone cluster-api and apply its manifests using kustomize:

cd cluster-api
kustomize build config/ | kubectl apply -f-

Check the status of the manager to make sure it's running properly:

$ kubectl describe -n capi-system pod | grep -A 5 Conditions
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True

Your provider

Now you can apply your provider as well:

$ cd cluster-api-provider-mailgun
$ kustomize build config/ | envsubst | kubectl apply -f-
$ kubectl describe -n cluster-api-provider-mailgun-system pod | grep -A 5 Conditions
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True

Tiltfile

Cluster API development requires a lot of iteration, and the “build, tag, push, update deployment” workflow can be very tedious. Tilt makes this process much simpler by watching for updates, then automatically building and deploying them.

You can visit some example repositories, but you can get started by writing out a YAML manifest (kustomize build config/ | envsubst > capm.yaml) and using the following Tiltfile:

allow_k8s_contexts('kubernetes-admin@kind')

k8s_yaml('capm.yaml')

docker_build('<your docker username or repository url>/cluster-api-controller-mailgun-amd64', '.')

You can then use Tilt to watch the logs coming off your container.

Your first Cluster

Let’s try our cluster out. We’ll make some simple YAML:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: hello-mailgun
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MailgunCluster
    name: hello-mailgun
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MailgunCluster
metadata:
  name: hello-mailgun
spec:
  priority: "ExtremelyUrgent"
  request: "Please make me a cluster, with sugar on top?"
  requester: "cluster-admin@example.com"

We apply it as normal with kubectl apply -f <filename>.yaml.

If all goes well, you should be getting an email to the address you configured when you set up your management cluster:

An email from mailgun urgently requesting a cluster

Conclusion

Obviously, this is only the first step. We need to implement our Machine object too, and log events, handle updates, and many more things.

Hopefully you feel empowered to go out and create your own provider now. The world is your Kubernetes-based oyster!

Troubleshooting

Labeling nodes with reserved labels such as node-role.kubernetes.io fails with kubeadm error during bootstrap

Self-assigning Node labels such as node-role.kubernetes.io using the kubelet --node-labels flag (see kubeletExtraArgs in the CABPK examples) is not possible due to a security measure imposed by the NodeRestriction admission controller that kubeadm enables by default.

Assigning such labels to Nodes must be done after the bootstrap process has completed:

kubectl label nodes <name> node-role.kubernetes.io/worker=""

For convenience, here is an example one-liner to do this post-installation:

kubectl get nodes --no-headers -l '!node-role.kubernetes.io/master' -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I{} kubectl label node {} node-role.kubernetes.io/worker=''

Reference

This section contains various resources that define the Cluster API project.

Table of Contents

A | B | C | D | H | I | K | M | N | O | P | S | T | W

A


Add-ons

Services beyond the fundamental components of Kubernetes.

  • Core Add-ons: Addons that are required to deploy a Kubernetes-conformant cluster: DNS, kube-proxy, CNI.
  • Additional Add-ons: Addons that are not required for a Kubernetes-conformant cluster (e.g. metrics/Heapster, Dashboard).

B


Bootstrap

The process of turning a server into a Kubernetes node. This may involve assembling data to provide when creating the server that backs the Machine, as well as runtime configuration of the software running on that server.

Bootstrap cluster

A temporary cluster that is used to provision a Target Management cluster.

C


CAEP

Cluster API Enhancement Proposal - patterned after KEP. See template

CAPI

Core Cluster API

CAPA

Cluster API Provider AWS

CABPK

Cluster API Bootstrap Provider Kubeadm

CAPD

Cluster API Provider Docker

CAPG

Cluster API Google Cloud Provider

CAPIBM

Cluster API Provider IBM Cloud

CAPO

Cluster API Provider OpenStack

CAPV

Cluster API Provider vSphere

CAPZ

Cluster API Provider Azure

Cluster

A full Kubernetes deployment. See Management Cluster and Workload Cluster

Cluster API

Or Cluster API project

The Cluster API sub-project of the SIG-cluster-lifecycle. It is also used to refer to the software components, APIs, and community that produce them.

Control plane

The set of Kubernetes services that form the basis of a cluster. See also https://kubernetes.io/docs/concepts/#kubernetes-control-plane There are two variants:

  • Self-provisioned: A Kubernetes control plane consisting of pods or machines wholly managed by a single Cluster API deployment.
  • External: A control plane offered and controlled by some system other than Cluster API (e.g., GKE, AKS, EKS, IKS).

D


Default implementation

A feature implementation offered as part of the Cluster API project, which infrastructure providers can swap out for an alternative one.

H


Horizontal Scaling

The ability to add more machines based on policy and well defined metrics. For example, add a machine to a cluster when CPU load average > (X) for a period of time (Y).

Host

see Server

I


Infrastructure provider

A source of computational resources (e.g. machines, networking, etc.). Examples for cloud include AWS, Azure, Google, etc.; for bare metal include VMware, MAAS, metal3.io, etc. When there is more than one way to obtain resources from the same infrastructure provider (e.g. EC2 vs. EKS) each way is referred to as a variant.

Instance

see Server

Immutability

A resource that does not mutate. In Kubernetes we often say that a running Pod is immutable: it does not change once it is running, and to make a change a new Pod is run instead. In the context of Cluster API, a running instance of a Machine is likewise treated as immutable.

K


Kubernetes-conformant

Or Kubernetes-compliant

A cluster that passes the Kubernetes conformance tests.

k/k

Refers to the main Kubernetes git repository or the main Kubernetes project.

M


Machine

Or Machine Resource

The Custom Resource for Kubernetes that represents a request to have a place to run kubelet.

See also: Server

Manage a cluster

Perform create, scale, upgrade, or destroy operations on the cluster.

Management cluster

The cluster where one or more Infrastructure Providers run, and where resources (e.g. Machines) are stored. Typically referred to when you are provisioning multiple workload clusters.

Management group

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure providers watching objects in the same namespace. For example, a management group can be used during upgrades to ensure that all the providers in the group support the same Cluster API version.

N


Node pools

A node pool is a group of nodes within a cluster that all have the same configuration.

O


Operating system

Or OS

A generically understood combination of a kernel and system-level userspace interface, such as Linux or Windows, as opposed to a particular distribution.

P


Pivot

Pivot is a process for moving the provider components and declared cluster-api resources from a Source Management cluster to a Target Management cluster.

The pivot process is also used for deleting a management cluster and could also be used during an upgrade of the management cluster.
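As an illustration (assuming clusterctl v0.3 or newer, providers already installed on the target cluster, and a kubeconfig for the target cluster available locally), a pivot can be driven with clusterctl move:

# Move Cluster API objects (Clusters, Machines, and related resources) from the
# current management cluster to the target management cluster. The kubeconfig
# path below is an example value.
clusterctl move --to-kubeconfig=./target-management.kubeconfig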

Provider

See Infrastructure Provider

Provider components

Refers to the YAML artifact a provider publishes as part of its releases; it is required in order to use the provider and usually contains Custom Resource Definitions (CRDs), Deployments (to run the controller manager), RBAC, etc.

Provider implementation

Existing Cluster API implementations consist of generic and infrastructure provider-specific logic. The infrastructure provider-specific logic is currently maintained in infrastructure provider repositories.

S


Scaling

Unless otherwise specified, this refers to horizontal scaling.

Stacked control plane

A control plane node where etcd is colocated with the Kubernetes API server, and is running as a static pod.

Server

The infrastructure that backs a Machine Resource, typically either a cloud instance, virtual machine, or physical host.

W


Workload Cluster

A cluster created by a Cluster API controller, which is not a bootstrap cluster, and is meant to be used by end users, as opposed to by CAPI tooling.

Provider Implementations

The code in this repository is independent of any specific deployment environment. Provider specific code is being developed in separate repositories, some of which are also sponsored by SIG Cluster Lifecycle. Providers marked in bold are known to support v1alpha3 API types.

Bootstrap

Infrastructure

API Adopters

The following are implementations managed by third parties adopting the standard cluster-api and/or machine-api being developed here.

Ports used by Cluster API

  • metrics (port 8080): Exposes the metrics endpoint. Can be customized by setting the --metrics-addr flag when starting the manager.
  • webhook (port 9443): Webhook server port. To disable it, set the --webhook-port flag to 0.
  • health (port 9440): Exposes the health endpoint. Can be customized by setting the --health-addr flag when starting the manager.
  • profiler (no port by default): Exposes the pprof profiler. Not configured by default; can be enabled by setting the --profiler-address flag, e.g. --profiler-address 6060.

Note: external providers (e.g. infrastructure, bootstrap, or control-plane) might allocate ports differently, please refer to the respective documentation.
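As a sketch of where these flags are set, the snippet below shows the args of a controller manager container in its Deployment; the container name and flag values are illustrative, not defaults to rely on:

spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --metrics-addr=:8080               # metrics endpoint
        - --webhook-port=9443                # set to 0 to disable the webhook server
        - --health-addr=:9440                # health endpoint
        - --profiler-address=localhost:6060  # enables pprof; not set by default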

Kubernetes Community Code of Conduct

Please refer to our Kubernetes Community Code of Conduct

Contributing Guidelines

Read the following guide if you’re interested in contributing to cluster-api.

Contributor License Agreements

We’d love to accept your patches! Before we can take them, we have to jump a couple of legal hurdles.

Please fill out either the individual or corporate Contributor License Agreement (CLA). More information about the CLA and instructions for signing it can be found here.

NOTE: Only original source code from you and other people that have signed the CLA can be accepted into the repository.

Finding Things That Need Help

If you’re new to the project and want to help, but don’t know where to start, we have a semi-curated list of issues that should not need deep knowledge of the system. Have a look and see if anything sounds interesting. Alternatively, read some of the docs on other controllers and try to write your own, file and fix any/all issues that come up, including gaps in documentation!

Contributing a Patch

  1. If you haven’t already done so, sign a Contributor License Agreement (see details above).
  2. Fork the desired repo, develop and test your code changes.
  3. Submit a pull request.
    1. All code PRs must be labeled with one of the following (see the example title after this list):
      • ⚠️ (:warning:, major or breaking changes)
      • ✨ (:sparkles:, minor or feature additions)
      • 🐛 (:bug:, patch and bugfixes)
      • 📖 (:book:, documentation or proposals)
      • 🏃 (:running:, other)
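
For example, assuming the common convention of placing the emoji at the start of the PR title (the title below is purely illustrative):

🐛 Fix nil pointer dereference in the MachineSet controller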

All changes must be code reviewed. Coding conventions and standards are explained in the official developer docs. Expect reviewers to request that you avoid common Go style mistakes in your PRs.

Backporting a Patch

Cluster API maintains older versions through release-X.Y branches. We accept backports of bug fixes to the most recent release branch. For example, if the most recent branch is release-0.2, and the master branch is under active development for v0.3.0, a bug fix that merged to master that also affects v0.2.x may be considered for backporting to release-0.2. We generally do not accept PRs against older release branches.

Breaking Changes

Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next minor release of Cluster API.

There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new minor version.

Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These versions have consumers who expect the APIs, behaviors, etc. to remain stable during the lifetime of the patch stream for the minor release.

Examples of breaking changes include:

  • Removing or renaming a field in a CRD
  • Removing or renaming a CRD
  • Removing or renaming an exported constant, variable, type, or function
  • Updating the version of critical libraries such as controller-runtime, client-go, apimachinery, etc.
    • Some version updates may be acceptable, for picking up bug fixes, but maintainers must exercise caution when reviewing.

There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project’s maintainers, and must be carefully considered before merging. An example of an allowed breaking change might be a fix for a behavioral bug that was released in an initial minor version (such as v0.3.0).

Merge Approval

Please see the Kubernetes community document on pull requests for more information about the merge process.

Google Doc Viewing Permissions

To gain viewing permissions to Google Docs in this project, please join either the kubernetes-dev or kubernetes-sig-cluster-lifecycle Google Group.

Issue and Pull Request Management

Anyone may comment on issues and submit reviews for pull requests. However, in order to be assigned an issue or pull request, you must be a member of the Kubernetes SIGs GitHub organization.

If you are a Kubernetes GitHub organization member, you are eligible for membership in the Kubernetes SIGs GitHub organization and can request membership by opening an issue against the kubernetes/org repo.

However, if you are a member of any of the related Kubernetes GitHub organizations but not of the Kubernetes org, you will need explicit sponsorship for your membership request. You can read more about Kubernetes membership and sponsorship here.

Cluster API maintainers can assign you an issue or pull request by leaving a /assign <your GitHub ID> comment on the issue or pull request.

Cluster API Roadmap

This roadmap is a constant work in progress, subject to frequent revision. Dates are approximations.

Ongoing

  • Documentation improvements

v0.3 (v1alpha3) ~ March 2020

v0.4 (v1alpha4) ~ July 2020

v0.5 (v1alpha5? v1beta1?) ~ November 2020

  • ?

TBD