Cluster API v1.10 compared to v1.11

This document provides an overview of the relevant changes between Cluster API v1.10 and v1.11 for all Cluster API users.

Any feedback or contributions to improve the following documentation are welcome!

We also recommend (re)reading the "Improving status in CAPI resources" proposal, because most of the changes described below are a consequence of the work to implement it.

Go version

  • The minimum Go version required to build Cluster API is v1.24.x
  • The Go version used by Cluster API is v1.24.x

Dependencies

  • The Controller Runtime version used by Cluster API is v0.21.x
  • The version of the Kubernetes libraries used by Cluster API is v1.33.x

API Changes

  • APIs have been moved to the top-level api folder (https://github.com/kubernetes-sigs/cluster-api/pull/12262). If you keep using the v1alpha1 / v1beta1 APIs, imports have to be adjusted accordingly (see the sketch after this list):
    • sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1 => sigs.k8s.io/cluster-api/api/bootstrap/kubeadm/v1beta1 (apiGroup: bootstrap.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1 => sigs.k8s.io/cluster-api/api/controlplane/kubeadm/v1beta1 (apiGroup: controlplane.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/v1beta1 => sigs.k8s.io/cluster-api/api/core/v1beta1 (apiGroup: cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/exp/api/v1beta1 => sigs.k8s.io/cluster-api/api/core/v1beta1 (apiGroup: cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/exp/ipam/api/v1alpha1 => sigs.k8s.io/cluster-api/api/ipam/v1alpha1 (apiGroup: ipam.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/exp/ipam/api/v1beta1 => sigs.k8s.io/cluster-api/api/ipam/v1beta1 (apiGroup: ipam.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/exp/runtime/api/v1alpha1 => sigs.k8s.io/cluster-api/api/runtime/v1alpha1 (apiGroup: runtime.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/exp/runtime/hooks/api/v1alpha1 => sigs.k8s.io/cluster-api/api/runtime/hooks/v1alpha1 (apiGroup: hooks.runtime.cluster.x-k8s.io)
  • The v1beta2 API version has been introduced; considering the awesome amount of improvements, it marks an important step in the journey towards graduating our API to v1. See the following paragraphs for more details. The new API version has been added in the following packages:
    • sigs.k8s.io/cluster-api/api/bootstrap/kubeadm/v1beta2 (apiGroup: bootstrap.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/controlplane/kubeadm/v1beta2 (apiGroup: controlplane.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/addons/v1beta2 (apiGroup: addons.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/core/v1beta2 (apiGroup: cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/ipam/v1beta2 (apiGroup: ipam.cluster.x-k8s.io)
    • sigs.k8s.io/cluster-api/api/runtime/v1beta2 (apiGroup: runtime.cluster.x-k8s.io)
  • The v1beta1 API version is now deprecated; it will tentatively be removed in August 2026
  • If you are using the runtime.cluster.x-k8s.io API group, please be aware that
    • ExtensionConfig v1beta2 has been created (thus aligning with other Cluster API resources)
    • ExtensionConfig v1alpha1 has been deprecated, and it will be removed in a future release.
  • controllers/remote.ClusterCacheTracker and corresponding types have been removed
  • The unused ClusterStatus struct in the kubeadm bootstrap apiGroup has been removed
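
For example, a minimal sketch of the import adjustment for code that keeps consuming the v1beta1 APIs after updating to Cluster API v1.11 (the aliases are illustrative; use whatever matches your codebase):

package main

import (
	// Before (Cluster API <= v1.10):
	//   clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	//   bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
	// After (Cluster API v1.11), the same API versions live under the top-level api folder:
	bootstrapv1 "sigs.k8s.io/cluster-api/api/bootstrap/kubeadm/v1beta1"
	clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta1"
)

func main() {
	_ = clusterv1.Cluster{}         // apiGroup: cluster.x-k8s.io
	_ = bootstrapv1.KubeadmConfig{} // apiGroup: bootstrap.cluster.x-k8s.io
}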

All CRDs

When looking at the API changes introduced in the v1beta2 API version for each CRD, it helps to keep a few high-level themes in mind:

  • Improve status:
    • The transition to the new Kubernetes-aligned conditions using metav1.Condition types and the new condition semantics has been completed.
    • Replica counters are now consistent with new conditions and across all resources; new replica counters have been added at cluster level.
    • The semantics of contract fields in status have been improved and are now consistent across all resources.
    • The confusing FailureReason and FailureMessage fields have been dropped.
  • Support ClusterClass (CC) across namespaces:
    • API changes planned for this feature have been implemented.
  • Improve object references:
    • Unnecessary fields have been dropped from object reference.
    • Object references are now GitOps friendly (API version is not overwritten anymore by controllers).
  • KubeadmConfig and KubeadmControlPlane APIs have been aligned with kubeadm v1beta4 API.
    • Additionally, fields inferred from top level objects have been removed, thus getting rid of a common source of confusion/issues.
  • Compliance with K8s API guidelines:
    • Thanks to the adoption of the KAL linter, compliance with K8s API guidelines has been greatly improved.
    • All metav1.Duration fields (e.g. nodeDeletionTimeout) are now represented as *int32 fields with units being part of the field name (e.g. nodeDeletionTimeoutSeconds).
      • A side effect of this is that durations specified with sub-second granularity are truncated to seconds during conversion. E.g., if you apply a v1beta1 object with nodeDeletionTimeout: 1s5ms, only 1s will be stored and returned on reads.
    • All bool fields have been changed to *bool to preserve user intent.
    • Extensive work has been done to ensure required and optional are explicitly set in the API, and that both serialization and validation work accordingly (see the sketch after this list):
      • Stop rendering empty structs (review of all occurrences of omitempty and introduction of omitzero)
      • Do not allow "" when it is not semantically different from value not set (either you have to provide a non-empty string value or not set the field at all).
      • Do not allow 0 when it is not semantically different from value not set (either you have to provide a non-0 int value or not set the field at all).
      • Do not allow {} when it is not semantically different from value not set (either you have to set at least one property in the object or not set the field at all).
      • Do not allow [] when it is not semantically different from value not set (either you have to set at least one item in the list or not set the field at all).
      • Ensure validation for all enum types.
    • Missing list markers have been added for SSA (Server-Side Apply).
    • Note: For the sake of simplicity, changes to omitempty, required and optional markers or other related validation markers have not been documented in the following paragraphs. Please look at the CRD definitions if you are interested in these changes.
  • Drop unnecessary pointers:
    • After fixing required and optional according to K8s API guidelines, extensive work has been done to drop unnecessary pointers thus improving the usability of the API’s Go structs.
  • Avoid embedding structs:
    • Coupling between API types has been reduced by limiting the usage of embedded structs.
  • Improve consistency:
    • Extensive work has been done to improve consistency across all resources:
      • Fields for Machine deletion are under a new deletion struct in all resources.
      • Settings about rollout have been logically grouped in all resources.
      • Settings about health checks and remediation have been logically grouped in all resources.
      • Etc.
  • Missing validations have been added where required.
  • Tech debt has been reduced by dropping deprecated fields.
    • Important! Please pay attention to field removals; e.g., if you are using GitOps tools, either migrate to v1beta2 or make sure to stop setting the removed fields. The removed fields won’t be preserved even if set via v1beta1 (as they don’t exist in v1beta2).
      • For example, we removed the clusterName field from KubeadmControlPlane.spec.kubeadmConfigSpec.clusterConfiguration
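
To make the conventions above concrete, here is a hypothetical spec written in the v1beta2 style; the type and field names are made up for illustration and do not exist in Cluster API:

package v1beta2

// FooMachineSpec is a hypothetical API type following the v1beta2 conventions.
type FooMachineSpec struct {
	// Durations carry the unit in the field name and use *int32 instead of
	// metav1.Duration; sub-second v1beta1 values are truncated to seconds on conversion.
	NodeDeletionTimeoutSeconds *int32 `json:"nodeDeletionTimeoutSeconds,omitempty"`

	// Booleans are pointers, so "unset" stays distinguishable from "false".
	Paused *bool `json:"paused,omitempty"`

	// "" is not allowed when it is not semantically different from unset: either
	// provide a non-empty value or leave the field out entirely.
	ProviderID string `json:"providerID,omitempty"`
}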

Cluster

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata: { ... }
spec:
  paused: true
  availabilityGates: [ ... ]
  clusterNetwork: { ... }
  controlPlaneEndpoint: { ... }
  controlPlaneRef:
    apiVersion: ""
    kind: ""
    name: ""
    namespace: "" # and also fieldPath, resourceVersion, uid
  infrastructureRef:
    apiVersion: ""
    kind: ""
    name: ""
    namespace: "" # and also fieldPath, resourceVersion, uid
  topology:
    class: ""
    classNamespace: ""
    version: ""
    controlPlane:
      metadata: { ... }
      variables:
        overrides:
          - name: ""
            value: { ... }
            definitionFrom: ""
      replicas: 5
      readinessGates: [ ... ]
      machineHealthCheck:
        enable: true
        nodeStartupTimeout: 300s
        unhealthyConditions: { ... }
        unhealthyRange: "[1-4]"
        maxUnhealthy: "80%"
        remediationTemplate:
          apiVersion: ""
          kind: ""
          name: ""
          namespace: "" # and also fieldPath, resourceVersion, uid
      nodeDeletionTimeout: 10s
      nodeDrainTimeout: 20s
      nodeVolumeDetachTimeout: 30s
    workers:
      machineDeployments:
        - name: ""
          class: ""
          failureDomain: ""
          metadata: { ... }
          variables:
            overrides:
              - name: ""
                value: { ... }
                definitionFrom: ""
          replicas: 5
          minReadySeconds: 15
          readinessGates: [ ... ]
          machineHealthCheck:
            enable: true
            nodeStartupTimeout: 300s
            unhealthyConditions: { ... }
            unhealthyRange: "[1-4]"
            maxUnhealthy: "80%"
            remediationTemplate:
              apiVersion: ""
              kind: ""
              name: ""
              namespace: "" # and also fieldPath, resourceVersion, uid
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxSurge: 1
              maxUnavailable: 0
              deletePolicy: Oldest
            remediation:
              maxInFlight: 3
          nodeDeletionTimeout: 10s
          nodeDrainTimeout: 20s
          nodeVolumeDetachTimeout: 30s
      machinePools:
        - name: ""
          class: ""
          failureDomains: [ ... ]
          metadata: { ... }
          variables:
            overrides:
              - name: ""
                value: { ... }
                definitionFrom: ""
          replicas: 5
          minReadySeconds: 15
          nodeDeletionTimeout: 10s
          nodeDrainTimeout: 20s
          nodeVolumeDetachTimeout: 30s
    variables:
      - definitionFrom: ""
        name: ""
        value: { ... }
    rolloutAfter: "2030-07-23T10:56:54Z"
status:
  conditions: { ... } # clusterv1beta1.Conditions
  controlPlaneReady: true
  infrastructureReady: true
  observedGeneration: 5
  phase: ""
  failureDomains:
    failureDomain1:
      controlPlane: true
      attributes: { ... }
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
    controlPlane:
      availableReplicas: 1
      desiredReplicas: 2
      readyReplicas: 3
      replicas: 4
      upToDateReplicas: 5
    workers:
      availableReplicas: 11
      desiredReplicas: 12
      readyReplicas: 13
      replicas: 14
      upToDateReplicas: 15
  failureMessage: ""
  failureReason: ""

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata: { ... }
spec:
  paused: true
  availabilityGates: [ ... ]
  clusterNetwork: { ... }
  controlPlaneEndpoint: { ... }
  controlPlaneRef:
    apiGroup: ""
    kind: ""
    name: ""
  infrastructureRef:
    apiGroup: ""
    kind: ""
    name: ""
  topology:
    classRef:
      name: ""
      namespace: ""
    version: ""
    controlPlane:
      metadata: { ... }
      variables:
        overrides:
          - name: ""
            value: { ... }
      replicas: 5
      readinessGates: [ ... ]
      healthCheck:
        enabled: true
        checks:
          nodeStartupTimeoutSeconds: 300
          unhealthyNodeConditions: [ ... ]
        remediation:
          triggerIf:
            unhealthyInRange: "[1-4]"
            unhealthyLessThanOrEqualTo: "80%"
          templateRef:
            apiVersion: ""
            kind: ""
            name: ""
      deletion:
        nodeDeletionTimeoutSeconds: 10
        nodeDrainTimeoutSeconds: 20
        nodeVolumeDetachTimeoutSeconds: 30
    workers:
      machineDeployments:
        - name: ""
          class: ""
          failureDomain: ""
          metadata: { ... }
          variables:
            overrides:
              - name: ""
                value: { ... }
          replicas: 5
          minReadySeconds: 15
          readinessGates: [ ... ]
          healthCheck:
            enabled: true
            checks:
              nodeStartupTimeoutSeconds: 300
              unhealthyNodeConditions: [ ... ]
            remediation:
              triggerIf:
                unhealthyInRange: "[1-4]"
                unhealthyLessThanOrEqualTo: "80%"
              templateRef:
                apiVersion: ""
                kind: ""
                name: ""
              maxInFlight: 3
          rollout:
            strategy:
              type: RollingUpdate
              rollingUpdate:
                maxSurge: 1
                maxUnavailable: 0
          deletion:
            order: Oldest
            nodeDeletionTimeoutSeconds: 10
            nodeDrainTimeoutSeconds: 20
            nodeVolumeDetachTimeoutSeconds: 30
      machinePools:
        - name: ""
          class: ""
          failureDomains: [ ... ]
          metadata: { ... }
          variables:
            overrides:
              - name: ""
                value: { ... }
          replicas: 5
          minReadySeconds: 15
          deletion:
            nodeDeletionTimeoutSeconds: 10
            nodeDrainTimeoutSeconds: 20
            nodeVolumeDetachTimeoutSeconds: 30
    variables:
      - name: ""
        value: { ... }
status:
  conditions: [ ... ] # metav1.Conditions
  initialization:
    controlPlaneInitialized: true
    infrastructureProvisioned: true
  observedGeneration: 5
  phase: ""
  failureDomains:
    - name: "failureDomain1"
      controlPlane: true
      attributes: { ... }
  controlPlane:
    availableReplicas: 1
    desiredReplicas: 2
    readyReplicas: 3
    replicas: 4
    upToDateReplicas: 5
  workers:
    availableReplicas: 11
    desiredReplicas: 12
    readyReplicas: 13
    replicas: 14
    upToDateReplicas: 15
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      failureMessage: ""
      failureReason: ""
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The type of the spec.paused field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The spec.controlPlaneRef and spec.infrastructureRef fields are now using ContractVersionedObjectReference type instead of corev1.ObjectReference (improve object references)
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • The spec.controlPlaneEndpoint.host field no longer accepts “” as a value (missing validation)
  • The spec.controlPlaneEndpoint.port field no longer accepts 0 as a value (missing validation)
  • The type of the spec.clusterNetwork.apiServerPort field has been changed from *int32 to int32 (drop unnecessary pointers)
    • This field no longer accepts 0 as a value (missing validation)
  • The type of the spec.clusterNetwork.services and spec.clusterNetwork.pods fields has been changed from *NetworkRanges to NetworkRanges (drop unnecessary pointers)
  • The spec.topology.class field has been renamed to spec.topology.classRef.name (support CC across namespaces)
  • The spec.topology.classNamespace field has been renamed to spec.topology.classRef.namespace (support CC across namespaces)
  • The spec.topology.rolloutAfter field has been removed because the corresponding functionality was never implemented (tech debt)
  • The definitionFrom field (deprecated since CAPI v1.8) has been removed from
    • spec.topology.variables
    • spec.topology.controlPlane.variables.overrides[]
    • spec.topology.workers.machineDeployments[].variables.overrides[]
    • spec.topology.workers.machinePools[].variables.overrides[]
  • The type of the spec.topology.workers field has been changed from *WorkersTopology to WorkersTopology (drop unnecessary pointers)
  • The type of the spec.topology.controlPlane.variables field has been changed from *ControlPlaneVariables to ControlPlaneVariables (drop unnecessary pointers)
  • The type of the spec.topology.workers.machineDeployments[].failureDomain field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.topology.workers.machineDeployments[].deletion.order field, previously spec.topology.workers.machineDeployments[].strategy.rollingUpdate.deletePolicy, has been changed from *string to MachineSetDeletionOrder (improve consistency)
  • A new spec.topology.workers.machineDeployments[].rollout field has been introduced; it contains the previous spec.topology.workers.machineDeployments[].strategy field. The Go structs have been modified accordingly. For more details see the YAML above (improve consistency)
  • The type of the spec.topology.workers.machineDeployments[].variables field has been changed from *MachineDeploymentVariables to MachineDeploymentVariables (drop unnecessary pointers)
  • The type of the spec.topology.workers.machinePools[].variables field has been changed from *MachinePoolVariables to MachinePoolVariables (drop unnecessary pointers)
  • All fields of type Duration in spec.topology.{controlPlane,workers.machineDeployments[],workers.machinePools[]} have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • The spec.topology.controlPlane.healthCheck field, previously spec.topology.controlPlane.machineHealthCheck, has been restructured and made consistent across all resources. Notably, fields for checks and remediation are now clearly identified under corresponding fields. The Go structs have been modified accordingly. For more details see the YAML above (improve consistency).
    • The same change has been applied to spec.topology.workers.machineDeployments[].healthCheck, previously spec.topology.workers.machineDeployments[].machineHealthCheck
    • Also the spec.topology.workers.machineDeployments[].healthCheck.remediation.maxInFlight field has been moved from spec.topology.workers.machineDeployments[].strategy.remediation.maxInFlight
  • All fields of type Duration in spec.topology.{controlPlane.healthCheck.checks,workers.machineDeployments[].healthCheck.checks}, previously spec.topology.{controlPlane.healthCheck,workers.machineDeployments[].machineHealthCheck} have been renamed by adding the Seconds suffix and their type was changed to *int32 (compliance with K8s API guidelines)
    • nodeStartupTimeout => nodeStartupTimeoutSeconds
    • unhealthyNodeConditions[].timeout => unhealthyNodeConditions[].timeoutSeconds
  • All the remediation.templateRef fields have been migrated from type corev1.ObjectReference to MachineHealthCheckRemediationTemplateReference:
    • spec.topology.controlPlane.healthCheck.remediation.templateRef, previously spec.topology.controlPlane.machineHealthCheck.remediationTemplate
    • spec.topology.workers.machineDeployments[].healthCheck.remediation.templateRef, previously spec.topology.workers.machineDeployments[].machineHealthCheck.remediationTemplate
    • For all the above, the following fields have been removed from remediation.templateRef: namespace, uid, resourceVersion, fieldPath
  • The type of the spec.topology.controlPlane.healthCheck.remediation.triggerIf.unhealthyInRange and spec.topology.workers.machineDeployments[].healthCheck.remediation.triggerIf.unhealthyInRange fields, previously spec.topology.controlPlane.machineHealthCheck.unhealthyRange and spec.topology.workers.machineDeployments[].machineHealthCheck.unhealthyRange, has been changed from *string to string (drop unnecessary pointers)
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status); see the sketch after this list
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • Information about the initial provisioning process is now surfaced under the new status.initialization field (improve status)
    • status.infrastructureReady has been replaced by status.initialization.infrastructureProvisioned
    • status.controlPlaneReady has been replaced by status.initialization.controlPlaneInitialized
  • The .status.failureDomains field has been changed from a map to an array
  • New fields for replica counters have been added to the cluster object (improve status):
    • status.controlPlane now reports replica counters surfaced from the control plane object
    • status.workers now reports replica counters from MachineDeployments and standalone MachineSets and Machines
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
    • The const value for the Failed phase has been deprecated in the enum type for status.phase (controllers no longer set this value)
  • The GetIPFamily method (deprecated since CAPI v1.8) has been removed from the Cluster struct.
  • The index.ByClusterClassName, index.ClusterByClusterClassClassName and index.ClusterClassNameField types have been removed in favor of index.ByClusterClassRef, index.ClusterByClusterClassRef and index.ClusterClassRefPath
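
A minimal sketch, assuming a controller-runtime client, of reading the reworked status of a v1beta2 Cluster; the object name and namespace are placeholders, the Available condition type is an assumption, and the initialization booleans are assumed to follow the *bool convention described above:

package example

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
)

func printClusterStatus(ctx context.Context, c client.Client) error {
	cluster := &clusterv1.Cluster{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: "default", Name: "my-cluster"}, cluster); err != nil {
		return err
	}

	// status.conditions is now a []metav1.Condition, so the standard apimachinery helpers apply.
	if available := meta.FindStatusCondition(cluster.Status.Conditions, "Available"); available != nil {
		fmt.Printf("Available: %s (%s)\n", available.Status, available.Reason)
	}

	// status.infrastructureReady and status.controlPlaneReady moved under status.initialization.
	fmt.Printf("infrastructure provisioned: %t\n", ptr.Deref(cluster.Status.Initialization.InfrastructureProvisioned, false))
	fmt.Printf("control plane initialized: %t\n", ptr.Deref(cluster.Status.Initialization.ControlPlaneInitialized, false))
	return nil
}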

MachineDeployment

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata: { ... }
spec:
  paused: true
  clusterName: ""
  selector: { ... }
  progressDeadlineSeconds: 5
  revisionHistoryLimit: 5
  machineNamingStrategy:
    template: ""
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      version: ""
      readinessGates: [ ... ]
      bootstrap:
        configRef:
          apiVersion: ""
          kind: ""
          name: ""
          namespace: "" # and also fieldPath, resourceVersion, uid
        dataSecretName: ""
      infrastructureRef:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: "" # and also fieldPath, resourceVersion, uid
      nodeDeletionTimeout: 10s
      nodeDrainTimeout: 20s
      nodeVolumeDetachTimeout: 30s
      providerID: ""
  minReadySeconds: 15
  rolloutAfter: "2030-07-23T10:56:54Z"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
      deletePolicy: Oldest
    remediation:
      maxInFlight: 3
status:
  conditions: { ... } # clusterv1beta1.Conditions
  observedGeneration: 5
  phase: ""
  selector: ""
  replicas: 5
  availableReplicas: 11
  readyReplicas: 12
  unavailableReplicas: 13
  updatedReplicas: 14
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
    availableReplicas: 1
    readyReplicas: 2
    upToDateReplicas: 3

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: MachineDeployment
metadata: { ... }
spec:
  paused: true
  clusterName: ""
  selector: { ... }
  machineNaming:
    template: ""
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      version: ""
      readinessGates: [ ... ]
      bootstrap:
        configRef:
          apiGroup: ""
          kind: ""
          name: ""
        dataSecretName: ""
      infrastructureRef:
        apiGroup: ""
        kind: ""
        name: ""
      deletion:
        nodeDeletionTimeoutSeconds: 10
        nodeDrainTimeoutSeconds: 20
        nodeVolumeDetachTimeoutSeconds: 30
      providerID: ""
      minReadySeconds: 15
  rollout:
    after: "2030-07-23T10:56:54Z"
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
  deletion:
    order: Oldest
  remediation:
    maxInFlight: 3
status:
  conditions: [ ... ] # metav1.Conditions
  observedGeneration: 5
  phase: ""
  selector: ""
  replicas: 5
  availableReplicas: 1
  readyReplicas: 2
  upToDateReplicas: 3
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      availableReplicas: 11
      readyReplicas: 12
      unavailableReplicas: 13
      updatedReplicas: 14
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The spec.machineNamingStrategy field was renamed to spec.machineNaming and is now using MachineNamingSpec type instead of *MachineNamingStrategy (improve consistency, drop unnecessary pointers)
  • The spec.template.spec.bootstrap.configRef and spec.template.spec.infrastructureRef fields are now using ContractVersionedObjectReference type instead of corev1.ObjectReference (improve object references)
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • The spec.progressDeadlineSeconds field (deprecated since CAPI v1.9) has been removed
  • All fields of type Duration in spec.template.spec have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • The type of the spec.paused field has been changed from bool to *bool (compliance with K8s API guidelines)
  • A new spec.rollout field has been introduced; it combines the previous spec.rolloutAfter and spec.strategy fields. The Go structs have been modified accordingly. For more details see the YAML above (improve consistency)
    • The type of the spec.rollout.after, previously spec.rolloutAfter, field has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
  • The type of the spec.remediation field, previously spec.strategy.remediation, has been changed from *RemediationStrategy to MachineDeploymentRemediationSpec (improve consistency, drop unnecessary pointers)
  • The type of the spec.deletion.order field, previously spec.strategy.rollingUpdate.deletePolicy, has been changed from *string to MachineSetDeletionOrder (improve consistency)
  • The spec.revisionHistoryLimit field has been removed. The MachineDeployment controller will now clean up all MachineSets without replicas (tech debt)
    • The corresponding machinedeployment.clusters.x-k8s.io/revision-history annotation has also been removed
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status)
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • Replica counters are now consistent with replica counters from other resources (improve status); see the sketch after this list:
    • status.replicas was made a pointer and omitempty was added
    • status.readyReplicas now has new semantics based on the machine’s Ready condition
    • status.availableReplicas now has new semantics based on the machine’s Available condition
    • status.upToDateReplicas now has new semantics (and a new name) based on the machine’s UpToDate condition
    • Temporarily, old replica counters will still be available under the status.deprecated.v1beta1 struct
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
    • The const value for the Failed phase has been deprecated in the enum type for status.phase (controllers no longer set this value)
  • The status.phase field is now computed using the same logic used for the ScalingUp and ScalingDown conditions (improve status)
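
A minimal sketch of reading the v1beta2 MachineDeployment replica counters, assuming all four counters are pointers per the omitempty work described above; ptr.Deref from k8s.io/utils/ptr supplies a zero default while a counter is still unset:

package main

import (
	"fmt"

	"k8s.io/utils/ptr"
	clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
)

// summarize renders the new, condition-based replica counters of a MachineDeployment.
func summarize(md *clusterv1.MachineDeployment) string {
	return fmt.Sprintf("replicas=%d ready=%d available=%d upToDate=%d",
		ptr.Deref(md.Status.Replicas, 0),
		ptr.Deref(md.Status.ReadyReplicas, 0),
		ptr.Deref(md.Status.AvailableReplicas, 0),
		ptr.Deref(md.Status.UpToDateReplicas, 0))
}

func main() {
	fmt.Println(summarize(&clusterv1.MachineDeployment{}))
}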

MachineSet

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineSet
metadata: { ... }
spec:
  clusterName: ""
  selector: { ... }
  machineNamingStrategy:
    template: ""
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      version: ""
      readinessGates: [ ... ]
      providerID: ""
      bootstrap:
        configRef:
          apiVersion: ""
          kind: ""
          name: ""
          namespace: "" # and also fieldPath, resourceVersion, uid
        dataSecretName: ""
      infrastructureRef:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: "" # and also fieldPath, resourceVersion, uid
      nodeDeletionTimeout: 10s
      nodeDrainTimeout: 20s
      nodeVolumeDetachTimeout: 30s
  minReadySeconds: 15
  deletePolicy: Oldest
status:
  conditions: { ... } # clusterv1beta1.Conditions
  observedGeneration: 5
  selector: ""
  replicas: 5
  availableReplicas: 11
  readyReplicas: 12
  fullyLabeledReplicas: 13
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
    availableReplicas: 1
    readyReplicas: 2
    upToDateReplicas: 3
  failureMessage: ""
  failureReason: ""

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: MachineSet
metadata: { ... }
spec:
  clusterName: ""
  selector: { ... }
  machineNaming:
    template: ""
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      version: ""
      readinessGates: [ ... ]
      providerID: ""
      bootstrap:
        configRef:
          apiGroup: ""
          kind: ""
          name: ""
        dataSecretName: ""
      infrastructureRef:
        apiGroup: ""
        kind: ""
        name: ""
      deletion:
        nodeDeletionTimeoutSeconds: 10
        nodeDrainTimeoutSeconds: 20
        nodeVolumeDetachTimeoutSeconds: 30
      minReadySeconds: 15
  deletion:
    order: Oldest
status:
  conditions: [ ... ] # metav1.Conditions
  observedGeneration: 5
  selector: ""
  replicas: 5
  availableReplicas: 1
  readyReplicas: 2
  upToDateReplicas: 3
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      availableReplicas: 11
      readyReplicas: 12
      fullyLabeledReplicas: 13
      failureMessage: ""
      failureReason: ""
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The spec.machineNamingStrategy field was renamed to spec.machineNaming and is now using MachineNamingSpec type instead of *MachineNamingStrategy (improve consistency, drop unnecessary pointers)
  • The spec.template.spec.bootstrap.configRef and spec.template.spec.infrastructureRef fields are now using ContractVersionedObjectReference type instead of corev1.ObjectReference (improve object references)
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • The type of the spec.deletion.order field, previously spec.deletePolicy, has been changed from string to MachineSetDeletionOrder (improve consistency)
  • All fields of type Duration in spec.template.spec have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status)
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • Replica counters are now consistent with replica counters from other resources (improve status):
    • status.replicas was made a pointer and omitempty was added
    • status.readyReplicas now has new semantics based on the machine’s Ready condition
    • status.availableReplicas now has new semantics based on the machine’s Available condition
    • status.upToDateReplicas now has new semantics (and a new name) based on the machine’s UpToDate condition
    • Temporarily, old replica counters will still be available under the status.deprecated.v1beta1 struct
  • Support for terminal errors has been dropped (improve status).
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1

MachinePool

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata: { ... }
spec:
  clusterName: ""
  failureDomains: [ ... ]
  providerIDList: [ ... ]
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      bootstrap:
        configRef:
          apiVersion: ""
          kind: ""
          name: ""
          namespace: "" # and also fieldPath, resourceVersion, uid
        dataSecretName: ""
      infrastructureRef:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: "" # and also fieldPath, resourceVersion, uid
      nodeDeletionTimeout: 10s
      nodeDrainTimeout: 20s
      nodeVolumeDetachTimeout: 30s
      providerID: ""
      readinessGates: [ ... ]
      version: ""
  minReadySeconds: 15
status:
  conditions: { ... } # clusterv1beta1.Conditions
  bootstrapReady: true
  infrastructureReady: true
  observedGeneration: 5
  phase: ""
  nodeRefs: [ ... ]
  replicas: 5
  availableReplicas: 11
  readyReplicas: 12
  unavailableReplicas: 13
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
    availableReplicas: 1
    readyReplicas: 2
    upToDateReplicas: 3
  failureMessage: ""
  failureReason: ""

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: MachinePool
metadata: { ... }
spec:
  clusterName: ""
  failureDomains: [ ... ]
  providerIDList: [ ... ]
  replicas: 5
  template:
    metadata: { ... }
    spec:
      clusterName: ""
      failureDomain: ""
      bootstrap:
        configRef:
          apiGroup: ""
          kind: ""
          name: ""
        dataSecretName: ""
      infrastructureRef:
        apiGroup: ""
        kind: ""
        name: ""
      deletion:
        nodeDeletionTimeoutSeconds: 10
        nodeDrainTimeoutSeconds: 20
        nodeVolumeDetachTimeoutSeconds: 30
      providerID: ""
      readinessGates: [ ... ]
      version: ""
      minReadySeconds: 15
status:
  conditions: [ ... ] # metav1.Conditions
  initialization:
    bootstrapDataSecretCreated: true
    infrastructureProvisioned: true
  observedGeneration: 5
  phase: ""
  nodeRefs: [ ... ]
  replicas: 5
  availableReplicas: 1
  readyReplicas: 2
  upToDateReplicas: 3
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      availableReplicas: 11
      readyReplicas: 12
      unavailableReplicas: 13
      failureMessage: ""
      failureReason: ""
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The spec.template.spec.bootstrap.configRef and spec.template.spec.infrastructureRef fields are now using ContractVersionedObjectReference type instead of corev1.ObjectReference (improve object references)
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • All fields of type Duration in spec.template.spec have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status)
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • status.replicas was made a pointer and omitempty was added (improve status)
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
    • The const value for the Failed phase has been deprecated in the enum type for status.phase because controllers no longer set this value

Machine

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata: { ... }
spec:
  clusterName: ""
  failureDomain: ""
  bootstrap:
    configRef:
      apiVersion: ""
      kind: ""
      name: ""
      namespace: "" # and also fieldPath, resourceVersion, uid
    dataSecretName: ""
  infrastructureRef:
    apiVersion: ""
    kind: ""
    name: ""
    namespace: ""  # and also fieldPath, resourceVersion, uid
  version: ""
  providerID: ""
  readinessGates:  [ ... ]
  nodeDeletionTimeout: 10s
  nodeDrainTimeout: 20s
  nodeVolumeDetachTimeout: 30s
status:
  conditions: { ... } # clusterv1beta1.Conditions
  bootstrapReady: true
  infrastructureReady: true
  observedGeneration: 5
  phase: ""
  addresses:  [ ... ]
  certificatesExpiryDate: ""
  deletion: { ... }
  lastUpdated: ""
  nodeInfo: { ... }
  nodeRef:
    apiVersion: ""
    kind: ""
    name: ""
    namespace: "" # and also fieldPath, resourceVersion, uid
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
  failureMessage: ""
  failureReason: ""

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: Machine
metadata: { ... }
spec:
  clusterName: ""
  failureDomain: ""
  bootstrap:
    configRef:
      apiGroup: ""
      kind: ""
      name: ""
    dataSecretName: ""
  infrastructureRef:
    apiGroup: ""
    kind: ""
    name: ""
  version: ""
  providerID: ""
  readinessGates:  [ ... ]
  minReadySeconds: 15
  deletion:
    nodeDeletionTimeoutSeconds: 10
    nodeDrainTimeoutSeconds: 20
    nodeVolumeDetachTimeoutSeconds: 30
status:
  conditions: [ ... ] # metav1.Conditions
  initialization:
    bootstrapDataSecretCreated: true
    infrastructureProvisioned: true
  observedGeneration: 5
  phase: ""
  addresses:  [ ... ]
  certificatesExpiryDate: ""
  deletion: { ... }
  lastUpdated: ""
  nodeInfo: { ... }
  nodeRef:
    name: ""
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      failureMessage: ""
      failureReason: ""
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The spec.bootstrap.configRef and spec.infrastructureRef fields are now using ContractVersionedObjectReference type instead of corev1.ObjectReference (improve object references); see the sketch after this list
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • All fields of type Duration in spec have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • The type of the spec.version field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.providerID field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.failureDomain field has been changed from *string to string (drop unnecessary pointers)
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status)
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • Information about the initial provisioning process is now surfaced under the new status.initialization field (improve status)
    • status.infrastructureReady has been replaced by status.initialization.infrastructureProvisioned
    • status.bootstrapReady has been replaced by status.initialization.bootstrapDataSecretCreated
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
    • The const value for the Failed phase has been deprecated in the enum type for status.phase (controllers no longer set this value)
  • The type of the status.nodeRef field has been changed from corev1.ObjectReference to MachineNodeReference (improve object references)
    • The following fields have been removed from status.nodeRef: kind, namespace, uid, apiVersion, resourceVersion, fieldPath
  • The type of the following fields has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
    • status.lastUpdated
    • status.certificatesExpiryDate
    • status.deletion.nodeDrainStartTime
    • status.deletion.waitForNodeVolumeDetachStartTime
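
A minimal sketch of populating the new reference type on a v1beta2 Machine: references now carry apiGroup instead of apiVersion, and the API version is resolved from the referenced CRD. All names are placeholders, and the reference fields are assumed to be values (not pointers) per the drop-unnecessary-pointers theme:

package main

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
)

func newMachineSpec() clusterv1.MachineSpec {
	return clusterv1.MachineSpec{
		ClusterName: "my-cluster",
		Bootstrap: clusterv1.Bootstrap{
			// No namespace, uid, resourceVersion, or fieldPath anymore.
			ConfigRef: clusterv1.ContractVersionedObjectReference{
				APIGroup: "bootstrap.cluster.x-k8s.io",
				Kind:     "KubeadmConfig",
				Name:     "my-machine-bootstrap",
			},
		},
		InfrastructureRef: clusterv1.ContractVersionedObjectReference{
			APIGroup: "infrastructure.cluster.x-k8s.io",
			Kind:     "DockerMachine",
			Name:     "my-machine-infra",
		},
	}
}

func main() { _ = newMachineSpec() }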

MachineHealthCheck

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata: { ... }
spec:
  clusterName: ""
  selector: { ... }
  nodeStartupTimeout: 300s
  unhealthyConditions: { ... }
  unhealthyRange: "[1-4]"
  maxUnhealthy: "80%"
  remediationTemplate:
    apiVersion: ""
    kind: ""
    name: ""
    namespace: "" # and also fieldPath, resourceVersion, uid
status:
  conditions: { ... } # clusterv1beta1.Conditions
  currentHealthy: 1
  expectedMachines: 2
  observedGeneration: 3
  remediationsAllowed: 4
  targets: [ ... ]
  v1beta2:
    conditions: [ ... ] # metav1.Conditions

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: MachineHealthCheck
metadata: { ... }
spec:
  clusterName: ""
  selector: { ... }
  checks:
    nodeStartupTimeoutSeconds: 300
    unhealthyNodeConditions: [ ... ]
  remediation:
    triggerIf:
      unhealthyInRange: "[1-4]"
      unhealthyLessThanOrEqualTo: "80%"
    templateRef:
      apiVersion: ""
      kind: ""
      name: ""
status:
  conditions: [ ... ] # metav1.Conditions
  currentHealthy: 1
  expectedMachines: 2
  observedGeneration: 3
  remediationsAllowed: 4
  targets: [ ... ]
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The spec has been restructured and made consistent across all resources. Notably, fields for checks and remediation are now clearly identified under corresponding fields. The Go structs have been modified accordingly. For more details see the YAML above and the sketch after this list (improve consistency).
  • The type of the spec.checks.nodeStartupTimeoutSeconds field, previously spec.nodeStartupTimeout, was changed to int32 (compliance with K8s API guidelines)
  • The spec.unhealthyConditions field has been renamed to spec.checks.unhealthyNodeConditions (improve consistency)
  • The type of the spec.checks.unhealthyNodeConditions[].timeoutSeconds field, previously spec.unhealthyConditions[].timeout, was changed to *int32 (compliance with K8s API guidelines)
  • The type of the spec.remediation.templateRef field, previously spec.remediationTemplate, was changed from corev1.ObjectReference to MachineHealthCheckRemediationTemplateReference (improve object references):
    • The following fields have been removed from templateRef: namespace, uid, resourceVersion, fieldPath
  • The type of the spec.remediation.triggerIf.unhealthyInRange field, previously spec.unhealthyRange, was changed from *string to string (drop unnecessary pointers)
  • The spec.maxUnhealthy field has been renamed to spec.remediation.triggerIf.unhealthyLessThanOrEqualTo (improve consistency)
  • status.conditions has been replaced by the conditions previously surfaced under status.v1beta2.conditions, which are based on metav1.Condition types (improve status)
    • The old status.conditions, based on custom Cluster API condition types, will temporarily continue to exist under status.deprecated.v1beta1.conditions for the sake of down conversions and to give users who want to keep using the old conditions a temporary option.
  • The type of the status.expectedMachines field has been changed from *int32 to int32 (drop unnecessary pointers)
  • The type of the status.currentHealthy field has been changed from *int32 to int32 (drop unnecessary pointers)
  • The type of the status.remediationsAllowed field has been changed from *int32 to int32 (drop unnecessary pointers)
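
A minimal sketch of the relocated MachineHealthCheck fields using unstructured access, which avoids any assumption about the Go struct names; the field paths are the ones shown in the YAML above:

package main

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	mhc := &unstructured.Unstructured{Object: map[string]interface{}{}}
	mhc.SetAPIVersion("cluster.x-k8s.io/v1beta2")
	mhc.SetKind("MachineHealthCheck")

	// spec.nodeStartupTimeout (e.g. "300s") is now spec.checks.nodeStartupTimeoutSeconds.
	_ = unstructured.SetNestedField(mhc.Object, int64(300), "spec", "checks", "nodeStartupTimeoutSeconds")

	// spec.maxUnhealthy is now spec.remediation.triggerIf.unhealthyLessThanOrEqualTo.
	_ = unstructured.SetNestedField(mhc.Object, "80%", "spec", "remediation", "triggerIf", "unhealthyLessThanOrEqualTo")

	// spec.unhealthyRange is now spec.remediation.triggerIf.unhealthyInRange.
	_ = unstructured.SetNestedField(mhc.Object, "[1-4]", "spec", "remediation", "triggerIf", "unhealthyInRange")
}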

ClusterClass

v1beta1:

apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata: { ... }
spec:
  availabilityGates: [ ... ]
  infrastructure:
    ref:
      apiVersion: ""
      kind: ""
      name: ""
      namespace: "" # and also fieldPath, resourceVersion, uid
  infrastructureNamingStrategy:
    template: ""
  controlPlane:
    ref:
      apiVersion: ""
      kind: ""
      name: ""
      namespace: "" # and also fieldPath, resourceVersion, uid
    namingStrategy:
      template: ""
    metadata: { ... }
    machineInfrastructure:
      ref:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: "" # and also fieldPath, resourceVersion, uid
    readinessGates: [ ... ]
    machineHealthCheck:
      nodeStartupTimeout: 300s
      unhealthyConditions: { ... }
      unhealthyRange: "[1-4]"
      maxUnhealthy: "80%"
      remediationTemplate:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: "" # and also fieldPath, resourceVersion, uid
    nodeDeletionTimeout: 10s
    nodeDrainTimeout: 20s
    nodeVolumeDetachTimeout: 30s
  workers:
    machineDeployments:
      - class: ""
        failureDomain: ""
        template:
          metadata: { ... }
          bootstrap:
            ref:
              apiVersion: ""
              kind: ""
              name: ""
              namespace: "" # and also fieldPath, resourceVersion, uid
          infrastructure:
            ref:
              apiVersion: ""
              kind: ""
              name: ""
              namespace: "" # and also fieldPath, resourceVersion, uid
        namingStrategy:
          template: ""
        minReadySeconds: 15
        readinessGates: [ ... ]
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 1
            maxUnavailable: 0
            deletePolicy: Oldest
          remediation:
            maxInFlight: 3
        machineHealthCheck:
          nodeStartupTimeout: 300s
          unhealthyConditions: { ... }
          unhealthyRange: "[1-4]"
          maxUnhealthy: "80%"
          remediationTemplate:
            apiVersion: ""
            kind: ""
            name: ""
            namespace: "" # and also fieldPath, resourceVersion, uid
        nodeDeletionTimeout: 10s
        nodeDrainTimeout: 20s
        nodeVolumeDetachTimeout: 30s
    machinePools:
      - class: ""
        failureDomains: [ ... ]
        minReadySeconds: 15
        template:
          metadata: { ... }
          bootstrap:
            ref:
              apiVersion: ""
              kind: ""
              name: ""
              namespace: "" # and also fieldPath, resourceVersion, uid
          infrastructure:
            ref:
              apiVersion: ""
              kind: ""
              name: ""
              namespace: "" # and also fieldPath, resourceVersion, uid
        namingStrategy:
          template: ""
        nodeDeletionTimeout: 10s
        nodeDrainTimeout: 20s
        nodeVolumeDetachTimeout: 30s
  patches:
    - definitions: { ... }
      description: ""
      enabledIf: ""
      external:
        discoverVariablesExtension: ""
        generateExtension: ""
        validateExtension: ""
        settings: { ... }
      name: ""
  variables:
    - name: ""
      schema: { ... }
      required: true
      metadata: { ... }
status:
  conditions: { ... } # clusterv1beta1.Conditions
  observedGeneration: 5
  variables:
    - definitions:
        - schema: { ... }
          required: true
          from: ""
          metadata: { ... }
      definitionsConflict: true
      name: ""
  v1beta2:
    conditions: [ ... ] # metav1.Conditions

v1beta2:

apiVersion: cluster.x-k8s.io/v1beta2
kind: ClusterClass
metadata: { ... }
spec:
  availabilityGates: [ ... ]
  infrastructure:
    templateRef:
      apiVersion: ""
      kind: ""
      name: ""
    naming:
      template: ""
  controlPlane:
    templateRef:
      apiVersion: ""
      kind: ""
      name: ""
    naming:
      template: ""
    metadata: { ... }
    machineInfrastructure:
      templateRef:
        apiVersion: ""
        kind: ""
        name: ""
    readinessGates: [ ... ]
    healthCheck:
      checks:
        nodeStartupTimeoutSeconds: 300
        unhealthyNodeConditions: [ ... ]
      remediation:
        triggerIf:
          unhealthyInRange: "[1-4]"
          unhealthyLessThanOrEqualTo: "80%"
        templateRef:
          apiVersion: ""
          kind: ""
          name: ""
    deletion:
      nodeDeletionTimeoutSeconds: 10
      nodeDrainTimeoutSeconds: 20
      nodeVolumeDetachTimeoutSeconds: 30
  workers:
    machineDeployments:
      - class: ""
        failureDomain: ""
        metadata: { ... }
        bootstrap:
          templateRef:
            apiVersion: ""
            kind: ""
            name: ""
        infrastructure:
          templateRef:
            apiVersion: ""
            kind: ""
            name: ""
        naming:
          template: ""
        minReadySeconds: 15
        readinessGates: [ ... ]
        rollout:
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxSurge: 1
              maxUnavailable: 0
        healthCheck:
          checks:
            nodeStartupTimeoutSeconds: 300
            unhealthyNodeConditions: [ ... ]
          remediation:
            maxInFlight: 3
            triggerIf:
              unhealthyInRange: "[1-4]"
              unhealthyLessThanOrEqualTo: "80%"
            templateRef:
              apiVersion: ""
              kind: ""
              name: ""
        deletion:
          order: Oldest
          nodeDeletionTimeoutSeconds: 10
          nodeDrainTimeoutSeconds: 20
          nodeVolumeDetachTimeoutSeconds: 30
    machinePools:
      - class: ""
        failureDomains: [ ... ]
        minReadySeconds: 15
        metadata: { ... }
        bootstrap:
          templateRef:
            apiVersion: ""
            kind: ""
            name: ""
        infrastructure:
          templateRef:
            apiVersion: ""
            kind: ""
            name: ""
        naming:
          template: ""
        deletion:
          nodeDeletionTimeoutSeconds: 10
          nodeDrainTimeoutSeconds: 20
          nodeVolumeDetachTimeoutSeconds: 30
  patches:
    - definitions: { ... }
      description: ""
      enabledIf: ""
      external:
        discoverVariablesExtension: ""
        generatePatchesExtension: ""
        validateTopologyExtension: ""
        settings: { ... }
      name: ""
  variables:
    - name: ""
      schema: { ... }
      required: true
      deprecatedV1Beta1Metadata: { ... }
status:
  conditions: [ ... ] # metav1.Conditions
  observedGeneration: 5
  variables:
    - definitions:
        - schema: { ... }
          required: true
          from: ""
          deprecatedV1Beta1Metadata: { ... }
      definitionsConflict: true
      name: ""
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • All fields of type Duration in spec.{controlPlane,workers.machineDeployments[],workers.machinePools[]} have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to int32 (compliance with K8s API guidelines)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • All fields of type Duration in spec.controlPlane.healthCheck and spec.workers.machineDeployments[].healthCheck, previously spec.controlPlane.machineHealthCheck and spec.workers.machineDeployments[].machineHealthCheck, have been renamed by adding the Seconds suffix and their type was changed to *int32 (compliance with K8s API guidelines)
    • nodeStartupTimeout => nodeStartupTimeoutSeconds
    • unhealthyNodeConditions[].timeout => unhealthyNodeConditions[].timeoutSeconds
  • All fields implementing or embedding a reference to a template are now using the ClusterClassTemplateReference type instead of corev1.ObjectReference; additionally, fields have been renamed and unnecessary nested structs dropped (improve object references):
    • spec.infrastructure.templateRef, previously spec.infrastructure.ref
    • spec.controlPlane.templateRef, previously spec.controlPlane.ref
    • spec.controlPlane.machineInfrastructure.templateRef, previously spec.controlPlane.machineInfrastructure.ref
    • spec.workers.machineDeployments[].bootstrap.templateRef, previously spec.workers.machineDeployments[].template.bootstrap.ref
    • spec.workers.machineDeployments[].infrastructure.templateRef, previously spec.workers.machineDeployments[].template.infrastructure.ref
    • spec.workers.machinePool[].bootstrap.templateRef, previously spec.workers.machinePool[].template.bootstrap.ref
    • spec.workers.machinePool[].infrastructure.templateRef, previously spec.workers.machinePool[].template.infrastructure.ref
    • For all the above, the following fields have been removed from *.ref: namespace, uid, resourceVersion, fieldPath
  • The spec.controlPlane.healthCheck field, previously spec.controlPlane.machineHealthCheck, has been restructured and made consistent across all resources. Notably, fields for checks and remediation are now clearly identified under corresponding fields. The Go structs have been modified accordingly. For more details see the YAML above (improve consistency).
    • The same change has been applied to spec.workers.machineDeployments[].healthCheck, previously spec.workers.machineDeployments[].machineHealthCheck
    • Also the spec.workers.machineDeployments[].healthCheck.remediation.maxInFlight field has been moved from spec.workers.machineDeployments[].strategy.remediation.maxInFlight
  • All fields of type Duration in spec.{controlPlane.healthCheck.checks,workers.machineDeployments[].healthCheck.checks}, previously spec.{controlPlane.healthCheck,workers.machineDeployments[].machineHealthCheck} have been renamed by adding the Seconds suffix and their type was changed to *int32 (compliance with K8s API guidelines)
    • nodeStartupTimeout => nodeStartupTimeoutSeconds
    • unhealthyNodeConditions[].timeout => unhealthyNodeConditions[].timeoutSeconds
  • All the remediation.templateRef fields have been migrated from type corev1.ObjectReference to MachineHealthCheckRemediationTemplateReference:
    • spec.controlPlane.healthCheck.remediation.templateRef, previously spec.controlPlane.machineHealthCheck.remediationTemplate
    • spec.workers.machineDeployments[].healthCheck.remediation.templateRef, previously spec.workers.machineDeployments[].machineHealthCheck.remediationTemplate
    • For all the above, the following fields have been removed from remediation.templateRef: namespace, uid, resourceVersion, fieldPath
  • The type of the spec.controlPlane.healthCheck.remediation.triggerIf.unhealthyInRange, spec.workers.machineDeployments[].healthCheck.remediation.triggerIf.unhealthyInRange fields, previously spec.controlPlane.machineHealthCheck.unhealthyRange, spec.workers.machineDeployments[].machineHealthCheck.unhealthyRange, has been changed from *string to string (drop unnecessary pointers)
  • The spec.workers.machineDeployments[].template.metadata field has been moved to spec.workers.machineDeployments[].metadata (drop unnecessary nested struct)
  • The spec.workers.machinePools[].template.metadata field has been moved to spec.workers.machinePools[].metadata (drop unnecessary nested struct)
  • The spec.infrastructureNamingStrategy field was renamed to spec.infrastructure.naming and is now using InfrastructureClassNamingSpec type (improve consistency, drop unnecessary pointers)
    • The type of the spec.infrastructure.naming.template field has been changed from *string to string (drop unnecessary pointers)
  • The spec.controlPlane.namingStrategy field was renamed to spec.controlPlane.naming and is now using ControlPlaneClassNamingSpec type (improve consistency, drop unnecessary pointers)
    • The type of the spec.controlPlane.naming.template field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.workers.machineDeployments[].failureDomain field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.workers.machineDeployments[].deletion.order field, previously spec.workers.machineDeployments[].strategy.rollingUpdate.deletePolicy, has been changed from *string to MachineSetDeletionOrder (improve consistency)
  • A new spec.workers.machineDeployments[].rollout field has been introduced; it contains the previous spec.workers.machineDeployments[].strategy field. The Go structs have been modified accordingly. For more details see YAML above (improve consistency)
  • The spec.workers.machineDeployments[].namingStrategy field was renamed to spec.workers.machineDeployments[].naming and is now using MachineDeploymentClassNamingSpec type (improve consistency, drop unnecessary pointers)
    • The type of the spec.workers.machineDeployments[].naming.template field has been changed from *string to string (drop unnecessary pointers)
  • The spec.workers.machinePools[].namingStrategy field was renamed to spec.workers.machinePools[].naming and is now using MachinePoolClassNamingSpec type (improve consistency, drop unnecessary pointers)
    • The type of the spec.workers.machinePools[].naming.template field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.patches[].enabledIf field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.patches[].definitions[].selector.matchResources.controlPlane, spec.patches[].definitions[].selector.matchResources.infrastructureCluster fields has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.patches[].definitions[].jsonPatches[].valueFrom.template, spec.patches[].definitions[].jsonPatches[].valueFrom.variable fields has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.patches[].external.generatePatchesExtension (previously generateExtension), spec.patches[].external.validateTopologyExtension (previously validateExtension), spec.patches[].external.discoverVariablesExtension fields has been changed from *string to string (drop unnecessary pointers)
  • The deprecated spec.variables[].metadata and .status.variables[].definitions[].metadata fields have been renamed to spec.variables[].deprecatedV1Beta1Metadata and .status.variables[].definitions[].deprecatedV1Beta1Metadata
    • These fields are deprecated and will be removed when support for v1beta1 is dropped
    • Please use XMetadata in JSONSchemaProps instead
  • The type of the spec.variables[].required, spec.variables[].schema.openAPIV3Schema.uniqueItems, spec.variables[].schema.openAPIV3Schema.exclusiveMaximum, spec.variables[].schema.openAPIV3Schema.exclusiveMinimum, spec.variables[].schema.openAPIV3Schema.x-kubernetes-preserve-unknown-fields, spec.variables[].schema.openAPIV3Schema.x-kubernetes-int-or-string, .status.variables[].definitions[].required fields has been changed from bool to *bool (compliance with K8s API guidelines)
    • Same applies to the corresponding fields under:
      • spec.variables.schema.openAPIV3Schema.properties[]
      • spec.variables.schema.openAPIV3Schema.additionalProperties
      • spec.variables.schema.openAPIV3Schema.allOf[]
      • spec.variables.schema.openAPIV3Schema.oneOf[]
      • spec.variables.schema.openAPIV3Schema.anyOf[]
      • spec.variables.schema.openAPIV3Schema.not
      • and all the corresponding fields under status.variables[]
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.
  • The type of the field status.patches[].definitionsConflict has been changed from bool to *bool (compliance with K8s API guidelines)
  • The builtin.cluster.classRef.name and builtin.cluster.classRef.namespace variables have been added (support CC across namespaces)
    • The builtin.cluster.class and builtin.cluster.classNamespace variables are deprecated and will be removed with the next apiVersion.
  • The deprecated builtin.cluster.network.ipFamily variable has been removed and can no longer be used in patches
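
For illustration, here is a minimal Go sketch of the new ClusterClassTemplateReference described above, assuming it carries exactly the fields left over from corev1.ObjectReference (apiVersion, kind, name); the kind and name values are placeholders:

    import (
        clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
    )

    // v1beta1: spec.infrastructure.ref was a corev1.ObjectReference.
    // v1beta2: spec.infrastructure.templateRef only carries apiVersion, kind and name.
    templateRef := clusterv1.ClusterClassTemplateReference{
        APIVersion: "infrastructure.cluster.x-k8s.io/v1beta2", // example apiVersion
        Kind:       "MyClusterTemplate",                       // placeholder kind
        Name:       "my-cluster-template",                     // placeholder name
    }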

KubeadmConfig

v1beta1:

apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfig
metadata: { ... }
spec:
  bootCommands: [ ... ]
  clusterConfiguration:
    apiVersion: ""
    kind: ""
    clusterName: ""
    kubernetesVersion: ""
    networking:
      dnsDomain: ""
      podSubnet: ""
      serviceSubnet: ""
    apiServer:
      certSANs: [ ... ]
      extraArgs:
      - "v": "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
      timeoutForControlPlane: "25s"
    certificatesDir: ""
    controlPlaneEndpoint: ""
    controllerManager:
      extraArgs:
      - "v": "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
    dns: { ... }
    etcd:
      external: { ... }
      local:
        dataDir: ""
        extraArgs:
        - "v": "5"
        extraEnvs: [ ... ]
        imageRepository: ""
        imageTag: ""
        peerCertSANs: [ ... ]
        serverCertSANs: [ ... ]
    featureGates: { ... }
    imageRepository: ""
    scheduler:
      extraArgs:
      - "v": "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
  diskSetup: { ... }
  files: [ ... ]
  format: ""
  ignition: { ... }
  initConfiguration:
    apiVersion: ""
    kind: ""
    bootstrapTokens:
    - description: ""
      expires: ""
      groups: [ ... ]
      token: ""
      usages: [ ... ]
      ttl: "45s"
    localAPIEndpoint: { ... }
    nodeRegistration:
      criSocket: ""
      ignorePreflightErrors: [ ... ]
      imagePullPolicy: ""
      imagePullSerial: true
      kubeletExtraArgs:
      - "v": "5"
      name: ""
      taints: [ ... ]
    patches: { ... }
    skipPhases: [ ... ]
  joinConfiguration:
    apiVersion: ""
    kind: ""
    caCertPath: ""
    controlPlane: { ... }
    discovery:
      bootstrapToken: { ... }
      file: { ... }
      tlsBootstrapToken: ""
      timeout: "35s"
    nodeRegistration:
      criSocket: ""
      ignorePreflightErrors: [ ... ]
      imagePullPolicy: ""
      imagePullSerial: true
      kubeletExtraArgs:
      - "v": "5"
      name: ""
      taints: [ ... ]
    patches: { ...}
    skipPhases: [ ... ]
  mounts: [ ... ]
  ntp: { ... }
  postKubeadmCommands: [ ... ]
  preKubeadmCommands: [ ... ]
  useExperimentalRetryJoin: true
  users: [ ... ]
status:
  conditions: { ... } # clusterv1beta1.Conditions
  ready: true
  dataSecretName: ""
  observedGeneration: 5
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
  failureMessage: ""
  failureReason: ""
v1beta2:

apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KubeadmConfig
metadata: { ... }
spec:
  bootCommands: [ ... ]
  clusterConfiguration:
    apiServer:
      certSANs: [ ... ]
      extraArgs:
      - name: "v"
        value: "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
    certificatesDir: ""
    controlPlaneEndpoint: ""
    controllerManager:
      extraArgs:
      - name: "v"
        value: "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
    dns: { ... }
    etcd:
      external: { ... }
      local:
        dataDir: ""
        extraArgs:
        - name: "v"
          value: "5"
        extraEnvs: [ ... ]
        imageRepository: ""
        imageTag: ""
        peerCertSANs: [ ... ]
        serverCertSANs: [ ... ]
    featureGates: { ... }
    imageRepository: ""
    certificateValidityPeriodDays: 365
    caCertificateValidityPeriodDays: 3650
    scheduler:
      extraArgs:
      - name: "v"
        value: "5"
      extraEnvs: [ ... ]
      extraVolumes: [ ... ]
  diskSetup: { ... }
  files: [ ... ]
  format: ""
  ignition: { ... }
  initConfiguration:
    bootstrapTokens:
    - description: ""
      expires: ""
      groups: [ ... ]
      token: ""
      usages: [ ... ]
      ttlSeconds: 45
    localAPIEndpoint: { ... }
    nodeRegistration:
      criSocket: ""
      ignorePreflightErrors: [ ... ]
      imagePullPolicy: ""
      imagePullSerial: true
      kubeletExtraArgs:
      - name: "v"
        value: "5"
      name: ""
      taints: [ ... ]
    patches: { ... }
    skipPhases: [ ... ]
    timeouts:
      controlPlaneComponentHealthCheckSeconds: 25
      discoverySeconds: 5
      etcdAPICallSeconds: 5
      kubeletHealthCheckSeconds: 5
      kubernetesAPICallSeconds: 5
      tlsBootstrapSeconds: 35
  joinConfiguration:
    caCertPath: ""
    controlPlane: { ... }
    discovery:
      bootstrapToken: { ... }
      file: { ... }
      tlsBootstrapToken: ""
    nodeRegistration:
      criSocket: ""
      ignorePreflightErrors: [ ... ]
      imagePullPolicy: ""
      imagePullSerial: true
      kubeletExtraArgs:
      - name: "v"
        value: "5"
      name: ""
      taints: [ ... ]
    patches: { ...}
    skipPhases: [ ... ]
    timeouts:
      controlPlaneComponentHealthCheckSeconds: 25
      discoverySeconds: 5
      etcdAPICallSeconds: 5
      kubeletHealthCheckSeconds: 5
      kubernetesAPICallSeconds: 5
      tlsBootstrapSeconds: 35
  mounts: [ ... ]
  ntp: { ... }
  postKubeadmCommands: [ ... ]
  preKubeadmCommands: [ ... ]
  users: [ ... ]
status:
  conditions: [ ... ] # metav1.Conditions
  initialization:
    dataSecretCreated: true
  dataSecretName: ""
  observedGeneration: 5
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      failureMessage: ""
      failureReason: ""
  • KubeadmConfig (and the entire CABPK provider) now implements the v1beta2 Cluster API contract
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 and #12560 for details (drop unnecessary pointers)
  • extraArg field types have been changed from map[string]string to []Arg, thus aligning with the kubeadm v1beta4 API; however, using multiple args with the same name will be enabled only when v1beta1 is removed, tentatively in August 2026 (see the Go sketch after this list)
    • spec.clusterConfiguration.apiServer.extraArgs type has been changed to []Arg
    • spec.clusterConfiguration.controllerManager.extraArgs type has been changed to []Arg
    • spec.clusterConfiguration.scheduler.extraArgs type has been changed to []Arg
    • spec.clusterConfiguration.etcd.local.extraArgs type has been changed to []Arg
    • spec.initConfiguration.nodeRegistration.kubeletExtraArgs type has been changed to []Arg
    • spec.joinConfiguration.nodeRegistration.kubeletExtraArgs type has been changed to []Arg
  • imagePullPolicy field types have been changed from string to corev1.PullPolicy, thus aligning with kubeadm v1beta4 API
    • spec.initConfiguration.nodeRegistration.imagePullPolicy type has been changed to corev1.PullPolicy
    • spec.joinConfiguration.nodeRegistration.imagePullPolicy type has been changed to corev1.PullPolicy
  • timeout fields have been aligned with kubeadm v1beta4 API, but field names and types have been adapted according to K8s API guidelines
    • spec.initConfiguration.timeouts struct has been added with the following fields:
      • controlPlaneComponentHealthCheckSeconds
      • kubeletHealthCheckSeconds
      • kubernetesAPICallSeconds
      • etcdAPICallSeconds
      • tlsBootstrapSeconds
      • discoverySeconds
    • spec.joinConfiguration.timeouts field has been added with the same set of timeouts listed above
    • spec.clusterConfiguration.apiServer.timeoutForControlPlane field has been removed. Use spec.initConfiguration.timeouts.controlPlaneComponentHealthCheckSeconds and spec.joinConfiguration.timeouts.controlPlaneComponentHealthCheckSeconds instead; however, using different timeouts for init and join will be enabled only when v1beta1 is removed
    • spec.joinConfiguration.discovery.timeout field has been removed. Use spec.joinConfiguration.timeouts.tlsBootstrapSeconds instead
  • The spec.clusterConfiguration.certificateValidityPeriodDays and spec.clusterConfiguration.caCertificateValidityPeriodDays fields have been added, thus aligning with the kubeadm v1beta4 API
  • The spec.clusterConfiguration.apiServer field does not embed ControlPlaneComponent anymore (avoid embedding structs)
    • extraArgs, extraVolumes, extraEnvs fields have been added to the spec.clusterConfiguration.apiServer struct
  • The type of the spec.clusterConfiguration.controllerManager field has been changed from ControlPlaneComponent to ControllerManager (avoid embedding structs)
  • The type of the spec.clusterConfiguration.scheduler field has been changed from ControlPlaneComponent to Scheduler (avoid embedding structs)
  • The type of the extraEnvs fields in spec.clusterConfiguration.apiServer, spec.clusterConfiguration.controllerManager, spec.clusterConfiguration.scheduler and spec.clusterConfiguration.etcd.local has been changed from []EnvVar to *[]EnvVar (compliance with K8s API guidelines)
  • The type of the spec.clusterConfiguration.apiServer.extraVolumes.readOnly, spec.clusterConfiguration.controllerManager.extraVolumes.readOnly, spec.clusterConfiguration.scheduler.extraVolumes.readOnly fields have been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.initConfiguration.bootstrapTokens[].token field has been changed from *BootstrapTokenString to BootstrapTokenString (drop unnecessary pointers)
  • The type of the spec.initConfiguration.nodeRegistration.taints, spec.joinConfiguration.nodeRegistration.taints fields have been changed from []corev1.Taint to *[]corev1.Taint (avoid custom serialization)
  • The type of the spec.joinConfiguration.discovery.bootstrapToken.unsafeSkipCAVerification field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.joinConfiguration.discovery.file.kubeConfig.cluster.insecureSkipTLSVerify field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.joinConfiguration.discovery.file.kubeConfig.user.exec.provideClusterInfo field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.files[].append field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.users[].gecos, spec.users[].groups, spec.users[].homeDir, spec.users[].shell, spec.users[].passwd, spec.users[].primaryGroup, spec.users[].sudo fields have been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.diskSetup.filesystems[].partition, spec.diskSetup.filesystems[].replaceFS fields have been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.diskSetup.partitions[].tableType field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.diskSetup.partitions[].layout field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.ignition.containerLinuxConfig.strict field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The spec.useExperimentalRetryJoin field (deprecated in CAPI v1.2!) has been removed
  • The following spec fields have been removed because they are not necessary (Cluster API automatically applies the right gvk when generating kubeadm config files):
    • clusterConfiguration.apiVersion, clusterConfiguration.kind
    • initConfiguration.apiVersion, initConfiguration.kind
    • joinConfiguration.apiVersion, joinConfiguration.kind
  • The following spec.clusterConfiguration fields have been removed because they are duplicates of fields that already exist in the Cluster object:
    • networking.serviceSubnet (can still be set via Cluster.spec.clusterNetwork.services.cidrBlocks)
    • networking.podSubnet (can still be set via Cluster.spec.clusterNetwork.pods.cidrBlocks)
    • networking.dnsDomain (can still be set via Cluster.spec.clusterNetwork.serviceDomain)
    • kubernetesVersion (can still be set via Machine.spec.version)
    • clusterName (can still be set via Cluster.metadata.name). Note: these ClusterConfiguration fields could previously be used to overwrite the corresponding fields from Cluster; now only the fields from Cluster are used
  • All fields of type Duration in spec.initConfiguration.bootstrapTokens[] have been renamed by adding the Seconds suffix and their type was changed to int32 (compliance with K8s API guidelines)
    • .ttl => .ttlSeconds
  • The type of the spec.initConfiguration.bootstrapTokens[].expires field has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.
  • The type of the field status.dataSecretName has been changed from *string to string (drop unnecessary pointers)
  • Information about the initial provisioning process is now surfacing under the new status.initialization field (improve status)
    • status.ready has been replaced by status.initialization.dataSecretCreated
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
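
As a rough Go sketch of the kubeadm v1beta4 alignments above (the list-based extraArgs and the new timeouts structs), assuming Go type and field names that mirror the YAML shown earlier; treat this as an illustration rather than the authoritative API surface:

    import (
        "k8s.io/utils/ptr"

        bootstrapv1 "sigs.k8s.io/cluster-api/api/bootstrap/kubeadm/v1beta2"
    )

    // v1beta1: extraArgs were a map[string]string, e.g. {"v": "5"}.
    // v1beta2: extraArgs are an ordered list of Arg entries.
    extraArgs := []bootstrapv1.Arg{
        {Name: "v", Value: "5"},
    }

    // v1beta1: spec.clusterConfiguration.apiServer.timeoutForControlPlane: "25s"
    //          spec.joinConfiguration.discovery.timeout: "35s"
    // v1beta2: both are expressed as *int32 seconds under the init/join timeouts structs.
    timeouts := bootstrapv1.Timeouts{
        ControlPlaneComponentHealthCheckSeconds: ptr.To[int32](25),
        TLSBootstrapSeconds:                     ptr.To[int32](35),
    }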

KubeadmConfigTemplate

KubeadmConfigTemplate spec.template.spec has been aligned with the changes to the KubeadmConfig spec struct

KubeadmControlPlane

v1beta1:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata: { ... }
spec:
  kubeadmConfigSpec:
    bootCommands: [ ... ]
    clusterConfiguration:
      apiVersion: ""
      kind: ""
      clusterName: ""
      kubernetesVersion: ""
      networking:
        dnsDomain: ""
        podSubnet: ""
        serviceSubnet: ""
      apiServer:
        certSANs: [ ... ]
        extraArgs:
          - "v": "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
        timeoutForControlPlane: "25s"
      certificatesDir: ""
      controlPlaneEndpoint: ""
      controllerManager:
        extraArgs:
          - "v": "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
      dns: { ...}
      etcd:
        external: { ...}
        local:
          dataDir: ""
          extraArgs:
            - "v": "5"
          extraEnvs: [ ... ]
          imageRepository: ""
          imageTag: ""
          peerCertSANs: [ ... ]
          serverCertSANs: [ ... ]
      featureGates: { ...}
      imageRepository: ""
      scheduler:
        extraArgs:
          - "v": "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
    diskSetup: { ...}
    files: [ ... ]
    format: ""
    ignition: { ...}
    initConfiguration:
      apiVersion: ""
      kind: ""
      bootstrapTokens:
        - description: ""
          expires: ""
          groups: [ ... ]
          token: ""
          usages: [ ... ]
          ttl: "45s"
      localAPIEndpoint: { ...}
      nodeRegistration:
        criSocket: ""
        ignorePreflightErrors: [ ... ]
        imagePullPolicy: ""
        imagePullSerial: true
        kubeletExtraArgs:
          - "v": "5"
        name: ""
        taints: [ ... ]
      patches: { ...}
      skipPhases: [ ... ]
    joinConfiguration:
      apiVersion: ""
      kind: ""
      caCertPath: ""
      controlPlane: { ...}
      discovery:
        bootstrapToken: { ...}
        file: { ...}
        tlsBootstrapToken: ""
        timeout: "35s"
      nodeRegistration:
        criSocket: ""
        ignorePreflightErrors: [ ... ]
        imagePullPolicy: ""
        imagePullSerial: true
        kubeletExtraArgs:
          - "v": "5"
        name: ""
        taints: [ ... ]
      patches: { ...}
      skipPhases: [ ... ]
    mounts: [ ... ]
    ntp:
      enabled: true
      servers: [ ... ]
    postKubeadmCommands: [ ... ]
    preKubeadmCommands: [ ... ]
    useExperimentalRetryJoin: true
    users: [ ... ]
  machineNamingStrategy:
    template: ""
  replicas: 5
  machineTemplate:
    metadata: { ... }
    readinessGates: [ ... ]
    infrastructureRef:
      apiVersion: ""
      kind: ""
      name: ""
      namespace: "" # and also fieldPath, resourceVersion, uid
    nodeDeletionTimeout: 10s
    nodeDrainTimeout: 20s
    nodeVolumeDetachTimeout: 30s
  rolloutAfter: "2030-07-23T10:56:54Z"
  rolloutBefore:
    certificatesExpiryDays: 5
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 1
    type: RollingUpdate
  remediationStrategy:
    maxRetry: 5
    minHealthyPeriod: "60s"
    retryPeriod: "90s"
  version: ""
status:
  conditions: { ... } # clusterv1beta1.Conditions
  ready: true
  initialized: true
  observedGeneration: 5
  selector: ""
  lastRemediation:
    machine: ""
    retryCount: 5
    timestamp: ""
  version: ""
  replicas: 5
  readyReplicas: 11
  updatedReplicas: 12
  unavailableReplicas: 13
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
    availableReplicas: 1
    readyReplicas: 2
    upToDateReplicas: 3
  failureMessage: ""
  failureReason: ""
v1beta2:

apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: KubeadmControlPlane
metadata: { ... }
spec:
  kubeadmConfigSpec:
    bootCommands: [ ... ]
    clusterConfiguration:
      apiServer:
        certSANs: [ ... ]
        extraArgs:
          - name: "v"
            value: "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
      certificatesDir: ""
      controlPlaneEndpoint: ""
      controllerManager:
        extraArgs:
          - name: "v"
            value: "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
      dns: { ...}
      etcd:
        external: { ...}
        local:
          dataDir: ""
          extraArgs:
            - name: "v"
              value: "5"
          extraEnvs: [ ... ]
          imageRepository: ""
          imageTag: ""
          peerCertSANs: [ ... ]
          serverCertSANs: [ ... ]
      featureGates: { ...}
      imageRepository: ""
      certificateValidityPeriodDays: 365
      caCertificateValidityPeriodDays: 3650
      scheduler:
        extraArgs:
          - name: "v"
            value: "5"
        extraEnvs: [ ... ]
        extraVolumes: [ ... ]
    diskSetup: { ...}
    files: [ ... ]
    format: ""
    ignition: { ...}
    initConfiguration:
      bootstrapTokens:
        - description: ""
          expires: ""
          groups: [ ... ]
          token: ""
          usages: [ ... ]
          ttlSeconds: 45
      localAPIEndpoint: { ...}
      nodeRegistration:
        criSocket: ""
        ignorePreflightErrors: [ ... ]
        imagePullPolicy: ""
        imagePullSerial: true
        kubeletExtraArgs:
          - name: "v"
            value: "5"
        name: ""
        taints: [ ... ]
      patches: { ...}
      skipPhases: [ ... ]
      timeouts:
        controlPlaneComponentHealthCheckSeconds: 25
        discoverySeconds: 5
        etcdAPICallSeconds: 5
        kubeletHealthCheckSeconds: 5
        kubernetesAPICallSeconds: 5
        tlsBootstrapSeconds: 35
    joinConfiguration:
      caCertPath: ""
      controlPlane: { ...}
      discovery:
        bootstrapToken: { ...}
        file: { ...}
        tlsBootstrapToken: ""
      nodeRegistration:
        criSocket: ""
        ignorePreflightErrors: [ ... ]
        imagePullPolicy: ""
        imagePullSerial: true
        kubeletExtraArgs:
          - name: "v"
            value: "5"
        name: ""
        taints: [ ... ]
      patches: { ...}
      skipPhases: [ ... ]
      timeouts:
        controlPlaneComponentHealthCheckSeconds: 25
        discoverySeconds: 5
        etcdAPICallSeconds: 5
        kubeletHealthCheckSeconds: 5
        kubernetesAPICallSeconds: 5
        tlsBootstrapSeconds: 35
    mounts: [ ... ]
    ntp:
      enabled: true
      servers: [ ... ]
    postKubeadmCommands: [ ... ]
    preKubeadmCommands: [ ... ]
    users: [ ... ]
  machineNaming:
    template: ""
  replicas: 5
  machineTemplate:
    metadata: { ... }
    spec:
      readinessGates: [ ... ]
      infrastructureRef:
        apiGroup: ""
        kind: ""
        name: ""
      deletion:
        nodeDeletionTimeoutSeconds: 10
        nodeDrainTimeoutSeconds: 20
        nodeVolumeDetachTimeoutSeconds: 30
  rollout:
    after: "2030-07-23T10:56:54Z"
    before:
      certificatesExpiryDays: 5
    strategy:
      rollingUpdate:
        maxSurge: 1
      type: RollingUpdate
  remediation:
    maxRetry: 5
    minHealthyPeriodSeconds: 60
    retryPeriodSeconds: 90
  version: ""
status:
  conditions: [ ... ] # metav1.Conditions
  initialization:
    controlPlaneInitialized: true
  observedGeneration: 5
  selector: ""
  lastRemediation:
    machine: ""
    retryCount: 5
    time: ""
  version: ""
  replicas: 5
  availableReplicas: 1
  readyReplicas: 2
  upToDateReplicas: 3
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
      readyReplicas: 11
      updatedReplicas: 12
      unavailableReplicas: 13
      failureMessage: ""
      failureReason: ""
  • KubeadmControlPlane (and the entire KCP provider) now implements the v1beta2 Cluster API contract
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 and #12560 for details (drop unnecessary pointers)
  • The spec.machineNamingStrategy field was renamed to spec.machineNaming and is now using MachineNamingSpec type instead of *MachineNamingStrategy (improve consistency, drop unnecessary pointers)
  • The spec.machineTemplate.infrastructureRef field was moved to spec.machineTemplate.spec.infrastructureRef and is now using ContractVersionedObjectReference type instead of corev1.ObjectReference
    • The following fields have been removed: namespace, uid, resourceVersion, fieldPath
    • apiVersion has been replaced with apiGroup. As before, the version will be read from the corresponding CRD
  • The spec.machineTemplate.readinessGates field was moved to spec.machineTemplate.spec.readinessGates.
  • extraArg field types have been changed from map[string]string to []Arg, thus aligning with the kubeadm v1beta4 API; however, using multiple args with the same name will be enabled only when v1beta1 is removed, tentatively in August 2026
    • spec.kubeadmConfigSpec.clusterConfiguration.apiServer.extraArgs type has been changed to []Arg
    • spec.kubeadmConfigSpec.clusterConfiguration.controllerManager.extraArgs type has been changed to []Arg
    • spec.kubeadmConfigSpec.clusterConfiguration.scheduler.extraArgs type has been changed to []Arg
    • spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.extraArgs type has been changed to []Arg
    • spec.kubeadmConfigSpec.initConfiguration.nodeRegistration.kubeletExtraArgs type has been changed to []Arg
    • spec.kubeadmConfigSpec.joinConfiguration.nodeRegistration.kubeletExtraArgs type has been changed to []Arg
  • imagePullPolicy field types have been changed from string to corev1.PullPolicy, thus aligning with kubeadm v1beta4 API
    • spec.kubeadmConfigSpec.initConfiguration.nodeRegistration.imagePullPolicy type has been changed to corev1.PullPolicy
    • spec.kubeadmConfigSpec.joinConfiguration.nodeRegistration.imagePullPolicy type has been changed to corev1.PullPolicy
  • timeout fields have been aligned with kubeadm v1beta4 API, but field names and types have been adapted according to API guidelines
    • spec.kubeadmConfigSpec.initConfiguration.timeouts struct has been added with the following fields:
      • controlPlaneComponentHealthCheckSeconds
      • kubeletHealthCheckSeconds
      • kubernetesAPICallSeconds
      • etcdAPICallSeconds
      • tlsBootstrapSeconds
      • discoverySeconds
    • spec.kubeadmConfigSpec.joinConfiguration.timeouts field has been added with the same set of timeouts listed above
    • spec.kubeadmConfigSpec.clusterConfiguration.apiServer.timeoutForControlPlane field has been removed. Use spec.kubeadmConfigSpec.initConfiguration.timeouts.controlPlaneComponentHealthCheckSeconds and spec.kubeadmConfigSpec.joinConfiguration.timeouts.controlPlaneComponentHealthCheckSeconds instead; however, using different timeouts for init and join will be enabled only when v1beta1 is removed
    • spec.kubeadmConfigSpec.joinConfiguration.discovery.timeout field has been removed. Use spec.kubeadmConfigSpec.joinConfiguration.timeouts.tlsBootstrapSeconds instead
  • The spec.kubeadmConfigSpec.clusterConfiguration.certificateValidityPeriodDays and spec.kubeadmConfigSpec.clusterConfiguration.caCertificateValidityPeriodDays fields have been added, thus aligning with the kubeadm v1beta4 API
  • The spec.kubeadmConfigSpec.clusterConfiguration.apiServer field does not embed ControlPlaneComponent anymore (avoid embedding structs)
    • extraArgs, extraVolumes, extraEnvs fields have been added to the spec.kubeadmConfigSpec.clusterConfiguration.apiServer struct
  • The type of the spec.kubeadmConfigSpec.clusterConfiguration.controllerManager field has been changed from ControlPlaneComponent to ControllerManager (avoid embedding structs)
  • The type of the spec.kubeadmConfigSpec.clusterConfiguration.scheduler field has been changed from ControlPlaneComponent to Scheduler (avoid embedding structs)
  • The type of the extraEnvs fields in spec.kubeadmConfigSpec.clusterConfiguration.apiServer, spec.kubeadmConfigSpec.clusterConfiguration.controllerManager, spec.kubeadmConfigSpec.clusterConfiguration.scheduler and spec.kubeadmConfigSpec.clusterConfiguration.etcd.local has been changed from []EnvVar to *[]EnvVar (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.clusterConfiguration.apiServer.extraVolumes.readOnly, spec.kubeadmConfigSpec.clusterConfiguration.controllerManager.extraVolumes.readOnly, spec.kubeadmConfigSpec.clusterConfiguration.scheduler.extraVolumes.readOnly fields have been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.initConfiguration.bootstrapTokens[].token field has been changed from *BootstrapTokenString to BootstrapTokenString (drop unnecessary pointers)
  • The type of the spec.kubeadmConfigSpec.initConfiguration.nodeRegistration.taints, spec.kubeadmConfigSpec.joinConfiguration.nodeRegistration.taints fields have been changed from []corev1.Taint to *[]corev1.Taint (avoid custom serialization)
  • The type of the spec.kubeadmConfigSpec.joinConfiguration.discovery.bootstrapToken.unsafeSkipCAVerification field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.joinConfiguration.discovery.file.kubeConfig.cluster.insecureSkipTLSVerify field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.joinConfiguration.discovery.file.kubeConfig.user.exec.provideClusterInfo field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.files[].append field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.users[].gecos, spec.kubeadmConfigSpec.users[].groups, spec.kubeadmConfigSpec.users[].homeDir, spec.kubeadmConfigSpec.users[].shell, spec.kubeadmConfigSpec.users[].passwd, spec.kubeadmConfigSpec.users[].primaryGroup, spec.kubeadmConfigSpec.users[].sudo fields have been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.kubeadmConfigSpec.diskSetup.filesystems[].partition, spec.kubeadmConfigSpec.diskSetup.filesystems[].replaceFS fields have been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.kubeadmConfigSpec.diskSetup.partitions[].tableType field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.kubeadmConfigSpec.diskSetup.partitions[].layout field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The type of the spec.kubeadmConfigSpec.ignition.containerLinuxConfig.strict field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The spec.kubeadmConfigSpec.useExperimentalRetryJoin field (deprecated in CAPI v1.2!) has been removed
  • The following spec.kubeadmConfigSpec fields have been removed because they are not necessary (Cluster API automatically applies the right gvk when generating kubeadm config files):
    • clusterConfiguration.apiVersion, clusterConfiguration.kind
    • initConfiguration.apiVersion, initConfiguration.kind
    • joinConfiguration.apiVersion, joinConfiguration.kind
  • The following spec.kubeadmConfigSpec.clusterConfiguration fields have been removed because they are duplicates of fields that already exist in the Cluster object:
    • networking.serviceSubnet (can still be set via Cluster.spec.clusterNetwork.services.cidrBlocks)
    • networking.podSubnet (can still be set via Cluster.spec.clusterNetwork.pods.cidrBlocks)
    • networking.dnsDomain (can still be set via Cluster.spec.clusterNetwork.serviceDomain)
    • kubernetesVersion (can still be set via KubeadmControlPlane.spec.version)
    • clusterName (can still be set via Cluster.metadata.name). Note: these ClusterConfiguration fields could previously be used to overwrite the corresponding fields from Cluster; now only the fields from Cluster are used
  • All fields of type Duration in spec.kubeadmConfigSpec.initConfiguration.bootstrapTokens[] have been renamed by adding the Seconds suffix and their type was changed to int32 (compliance with K8s API guidelines)
    • .ttl => .ttlSeconds
  • The type of the spec.kubeadmConfigSpec.initConfiguration.bootstrapTokens[].expires field has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
  • All fields of type Duration in spec.machineTemplate have been renamed by adding the Seconds suffix, moved into the deletion section and their type was changed to *int32 (compliance with K8s API guidelines; see the Go sketch after this list)
    • nodeDrainTimeout => deletion.nodeDrainTimeoutSeconds
    • nodeVolumeDetachTimeout => deletion.nodeVolumeDetachTimeoutSeconds
    • nodeDeletionTimeout => deletion.nodeDeletionTimeoutSeconds
  • All fields of type Duration in spec.remediationStrategy have been renamed by adding the Seconds suffix and their type was changed to *int32 (compliance with K8s API guidelines)
    • retryPeriod => retryPeriodSeconds
    • minHealthyPeriod => minHealthyPeriodSeconds
  • The type of the spec.version field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.remediation field (previously spec.remediationStrategy) has been changed from RemediationStrategy to KubeadmControlPlaneRemediationSpec (improve consistency)
  • A new spec.rollout field has been introduced; it combines the previous spec.rolloutBefore, spec.rolloutAfter and spec.rolloutStrategy fields. The Go structs have been modified accordingly. For more details see YAML above (improve consistency)
    • The type of the spec.rollout.after, previously spec.rolloutAfter, field has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.
  • Replica counters fields are now consistent with replica counters from other resources (improve status)
    • status.replicas was made a pointer and omitempty was added
    • status.readyReplicas now has new semantics based on the machine’s Ready condition
    • status.availableReplicas now has new semantics based on the machine’s Available condition
    • status.upToDateReplicas now has new semantics (and a new name) based on the machine’s UpToDate condition
    • Temporarily, old replica counters will still be available under the status.deprecated.v1beta1 struct
  • Information about the initial provisioning process is now surfacing under the new status.initialization field (improve status)
    • status.ready has been dropped
    • status.initialized has been replaced by status.initialization.controlPlaneInitialized
  • Support for terminal errors has been dropped (improve status)
    • status.failureReason and status.failureMessage will continue to exist temporarily under status.deprecated.v1beta1
  • The type of the status.version field has been changed from *string to string (drop unnecessary pointers)
  • The status.lastRemediation.timestamp field has been renamed to status.lastRemediation.time (compliance with K8s API guidelines)
  • The type of the status.lastRemediation.retryCount field has been changed from int32 to *int32 (compliance with K8s API guidelines)
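
As a sketch of the machineTemplate duration changes above, assuming Go struct names that mirror the YAML shown earlier (machineTemplate.spec.deletion); treat the exact field paths as an assumption:

    import (
        "k8s.io/utils/ptr"

        controlplanev1 "sigs.k8s.io/cluster-api/api/controlplane/kubeadm/v1beta2"
    )

    // v1beta1: spec.machineTemplate.nodeDrainTimeout: "20s" (metav1.Duration)
    // v1beta2: durations move under machineTemplate.spec.deletion as *int32 seconds.
    kcp := &controlplanev1.KubeadmControlPlane{}
    kcp.Spec.MachineTemplate.Spec.Deletion.NodeDrainTimeoutSeconds = ptr.To[int32](20)
    kcp.Spec.MachineTemplate.Spec.Deletion.NodeVolumeDetachTimeoutSeconds = ptr.To[int32](30)
    kcp.Spec.MachineTemplate.Spec.Deletion.NodeDeletionTimeoutSeconds = ptr.To[int32](10)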

KubeadmControlPlaneTemplate

KubeadmControlPlaneTemplate spec.template.spec has been aligned with the changes to the KubeadmControlPlane spec struct

ClusterResourceSet

v1beta1:

apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata: { ... }
spec:
  clusterSelector: { ... }
  resources: { ... }
  strategy: ""
status:
  conditions: { ... } # clusterv1beta1.Conditions
  observedGeneration: 5
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
v1beta2:

apiVersion: addons.cluster.x-k8s.io/v1beta2
kind: ClusterResourceSet
metadata: { ... }
spec:
  clusterSelector: { ... }
  resources: { ... }
  strategy: ""
status:
  conditions: [ ... ] # metav1.Conditions
  observedGeneration: 5
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
  • See changes that apply to all CRDs
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.

ClusterResourceSetBinding

v1beta1:

apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSetBinding
metadata: { ... }
spec:
  bindings: { ... }
  clusterName: ""
v1beta2:

apiVersion: addons.cluster.x-k8s.io/v1beta2
kind: ClusterResourceSetBinding
metadata: { ... }
spec:
  bindings: { ... }
  clusterName: ""
  • See changes that apply to all CRDs
  • The type of the spec.bindings field has been changed from []*ResourceSetBinding to []ResourceSetBinding (drop unnecessary pointers)
  • The type of the spec.bindings[].resources[].lastAppliedTime field has been changed from *metav1.Time to metav1.Time (drop unnecessary pointers)
  • The type of the spec.bindings[].resources[].applied field has been changed from bool to *bool (compliance with K8s API guidelines)
  • The deprecated ClusterResourceSetBinding.DeleteBinding function has been removed

ExtensionConfig

v1alpha1:

apiVersion: runtime.cluster.x-k8s.io/v1alpha1
kind: ExtensionConfig
metadata: { ... }
spec:
  clientConfig: { ... }
  namespaceSelector: { ... }
  settings: { ... }
status:
  conditions: { ... } # clusterv1beta1.Conditions
  handlers: { ... }
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
v1beta2:

apiVersion: runtime.cluster.x-k8s.io/v1beta2
kind: ExtensionConfig
metadata: { ... }
spec:
  clientConfig: { ... }
  namespaceSelector: { ... }
  settings: { ... }
status:
  conditions: [ ... ] # metav1.Conditions
  handlers: { ... }
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
  • ExtensionConfig v1beta2 has been created, thus aligning with other Cluster API resources
  • ExtensionConfig v1alpha1 has been deprecated, and it will be removed in a following release
  • See changes that apply to all CRDs
  • Pointers have been removed from various struct fields. See #12545 for details (drop unnecessary pointers)
  • The type of the spec.clientConfig.url field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.clientConfig.service.path field has been changed from *string to string (drop unnecessary pointers)
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.
  • The type of the status.handlers[].timeoutSeconds field has been changed from *int32 to int32 (drop unnecessary pointers)
  • The type of the status.handlers[].failurePolicy field has been changed from *FailurePolicy to FailurePolicy (drop unnecessary pointers)

IPAddress

v1beta1:

apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddress
metadata: { ... }
spec: { ... }
v1beta2:

apiVersion: ipam.cluster.x-k8s.io/v1beta2
kind: IPAddress
metadata: { ... }
spec: { ... }
  • See changes that apply to all CRDs
  • The type of the spec.claimRef field has been changed from corev1.LocalObjectReference to IPAddressClaimReference (improve object references)
  • The type of the spec.poolRef field has been changed from corev1.TypedLocalObjectReference to IPPoolReference (improve object references)
  • The type of the spec.poolRef.apiGroup field has been changed from *string to string (drop unnecessary pointers)
  • The type of the spec.prefix field has been changed from int32 to *int32 (compliance with K8s API guidelines)

IPAddressClaim

v1beta1:

apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata: { ... }
spec: { ... }
status:
  conditions: { ... } # clusterv1beta1.Conditions
  addressRef:
    name: ""
  v1beta2:
    conditions: [ ... ] # metav1.Conditions
v1beta2:

apiVersion: ipam.cluster.x-k8s.io/v1beta2
kind: IPAddressClaim
metadata: { ... }
spec: { ... }
status:
  conditions: [ ... ] # metav1.Conditions
  addressRef:
    name: ""
  deprecated:
    v1beta1:
      conditions: { ... } # clusterv1beta1.Conditions
  • See changes that apply to all CRDs
  • The type of the spec.poolRef field has been changed from corev1.TypedLocalObjectReference to IPPoolReference (improve object references; see the Go sketch after this list)
  • The type of the spec.poolRef.apiGroup field has been changed from *string to string (drop unnecessary pointers)
  • The type of the status.addressRef field has been changed from corev1.LocalObjectReference to IPAddressReference (improve object references)
  • status.conditions is now based on metav1 condition types (improve status)
    • the old status.conditions based on custom Cluster API condition types will continue to exist temporarily under status.deprecated.v1beta1.conditions, for the sake of down conversions and to provide a temporary option for users who want to keep using the old conditions.
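
A minimal Go sketch of the new IPPoolReference described above; the pool kind and name are placeholders:

    import (
        ipamv1 "sigs.k8s.io/cluster-api/api/ipam/v1beta2"
    )

    // v1beta1: spec.poolRef was a corev1.TypedLocalObjectReference (apiGroup *string).
    // v1beta2: spec.poolRef is an IPPoolReference with a plain-string apiGroup.
    poolRef := ipamv1.IPPoolReference{
        APIGroup: "ipam.cluster.x-k8s.io", // example apiGroup of the pool CRD
        Kind:     "ExamplePool",           // placeholder pool kind
        Name:     "my-pool",               // placeholder pool name
    }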

Cluster API Contract changes

  • v1beta1 version of the Cluster API contract is now deprecated

    • In order to ease the transition to the new v1beta2 version of the Cluster API contract, the v1beta2 version temporarily implements compatibility with the deprecated v1beta1 version of the Cluster API contract
      • Compatibility is only intended to ease the transition for providers, and it has some limitations; please read the details in the following paragraphs.
    • Compatibility support for the v1beta1 version of the Cluster API contract will be removed tentatively in August 2026
    • After compatibility support for the v1beta1 version of the Cluster API contract is removed, providers which are implementing the v1beta1 contract will stop working (they will only work with older versions of Cluster API).
  • v1beta2 version of the Cluster API contract has been introduced; see following paragraphs for more details

Contract rules for InfraCluster

The following rules have been changed or are no longer supported; please read the corresponding notes about compatibility for providers still implementing the v1beta1 contract.

Contract rules for InfraMachine

The following rules have been changed or are no longer supported; please read the corresponding notes about compatibility for providers still implementing the v1beta1 contract.

  • InfraMachine: provider ID
    • Type of the spec.providerID field was changed from *string to string.
  • InfraMachine: initialization completed
  • InfraMachine: conditions
    • Providers are still not required to implement conditions
    • In case a provider implements conditions, Cluster API no longer requires the usage of a specific condition type, although transitioning to metav1.Conditions is highly recommended.
  • InfraMachine: terminal failures
    • The Machine controller won’t consider the presence of status.failureReason and status.failureMessage as “terminal failures”
    • The MachineHealthCheck controller won’t consider the presence of status.failureReason and status.failureMessage to determine when a Machine needs remediation.

Contract rules for BootstrapConfig

The following rules have been changed or are no longer supported; please read the corresponding notes about compatibility for providers still implementing the v1beta1 contract.

  • BootstrapConfig: initialization completed
  • BootstrapConfig: data secret
    • Type of the status.dataSecretName field was changed from *string to string.
  • BootstrapConfig: conditions
    • Providers are still not required to implement conditions
    • In case a provider implements conditions, Cluster API no longer requires the usage of a specific condition type, although transitioning to metav1.Conditions is highly recommended.
  • BootstrapConfig: terminal failures
    • The Machine controller won’t consider the presence of status.failureReason and status.failureMessage as “terminal failures”
    • The MachineHealthCheck controller won’t consider the presence of status.failureReason and status.failureMessage to determine when a Machine needs remediation.

Contract rules for ControlPlane

The following rules have been changed or are no longer supported; please read the corresponding notes about compatibility for providers still implementing the v1beta1 contract.

Contract rules for IPAM provider

TODO

Contract rules for MachinePool

TODO

clusterctl

  • Stricter validation for provider metadata: clusterctl now enforces validation rules when reading provider metadata files to ensure they are properly formatted and contain the required information. These changes help surface malformed metadata early and make failures easier to troubleshoot. Providers with invalid metadata.yaml files will need to update them to comply with these validation rules. The following validation rules are now enforced:
    • apiVersion must be set to clusterctl.cluster.x-k8s.io/v1alpha3
    • kind must be set to Metadata
    • releaseSeries must contain at least one entry
  • The --v1beta2 flag in clusterctl describe now defaults to true; the flag will be removed as soon as the v1beta1 API is removed.

Deprecations

  • v1beta1 API version is deprecated and it will be removed tentatively in August 2026

    • All the fields under status.deprecated.v1beta1 in the new v1beta2 API are deprecated and they will be removed. This includes:
      • status.deprecated.v1beta1.conditions based on custom cluster API condition types
      • status.deprecated.v1beta1.failureReason
      • status.deprecated.v1beta1.failureMessage
      • status.deprecated.v1beta1.readyReplicas with the old semantics for MachineDeployment, MachineSet and KubeadmControlPlane
      • status.deprecated.v1beta1.availableReplicas with the old semantics for MachineDeployment, MachineSet
      • status.deprecated.v1beta1.unavailableReplicas with the old semantics for MachineDeployment, KubeadmControlPlane
      • status.deprecated.v1beta1.updatedReplicas with the old semantics (and name) for MachineDeployment, KubeadmControlPlane
      • status.deprecated.v1beta1.fullyLabeledReplicas for MachineSet
  • v1beta1 condition utils are now deprecated, and they will be removed as soon as the v1beta1 API is removed

  • v1beta1 support in the patch helper is now deprecated, and it will be removed as soon as the v1beta1 API is removed

  • As a consequence of dropping support for terminal errors from all Kinds, the Failed const value has been deprecated in the following enum types (controllers are not setting this value anymore):

    • ClusterPhase, used in cluster.status.phase
    • MachineDeploymentPhase, used in machineDeployment.status.phase
    • MachinePoolPhase, used in machinePool.status.phase
    • MachinePhase, used in machine.status.phase

clusterctl changes

  • The clusterctl alpha rollout undo command has been removed as the corresponding revision history feature has been removed from MachineDeployment

Removals scheduled for future releases

As documented in Suggested changes for providers, it is highly recommended to start planning for future removals:

  • v1beta1 API version will be removed tentatively in August 2026
  • Starting from the CAPI release when v1beta1 removal will happen (tentatively August 2026), the Cluster API project will remove the Cluster API condition type, the util/conditions/deprecated/v1beta1 package, the util/deprecated/v1beta1 package, the code handling old conditions in util/patch.Helper and everything related to the custom Cluster API condition type.
  • Compatibility support for the v1beta1 version of the Cluster API contract will be removed tentatively in August 2026

Recommendation for everyone

  • As a general recommendation we suggest upgrading to CAPI v1.10 first, before moving to CAPI v1.11 / v1beta2.
    • v1.10 introduced additional field validation on API fields.
    • Upgrading to v1.10 allows you to first address potential issues with existing invalid field values before picking up CAPI v1.11 / v1beta2, which will start converting v1beta1 objects to v1beta2 objects.

Suggested changes for clients using Cluster API Go types

  • We highly recommend providers to start using Cluster API v1beta2 types when bumping to CAPI v1.11. This requires changing the following import:

    import (
        ...
        clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta1"
    )
    

    into:

    import (
        ...
        clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
    )
    

    Please refer to API Changes for more details about the changes introduced by this release.

    Note: it is technically possible for providers to keep using v1beta1 types from CAPI v1.11, but this is not recommended because it will lead to additional conversion calls.

    Additionally, in v1.11 all CAPI utils, e.g. IsControlPlaneMachine, now use v1beta2 API types. Keeping v1beta1 types with CAPI v1.11 therefore means additional work for providers (you have to fork all utils the provider uses from an older CAPI release or replace them with something else).

    Last but not least, please be aware that given the issues above, this approach was not tested during the implementation, and we don’t know if there are blockers.

  • If you are using Conditions from Cluster API resources, e.g. by looking at the ControlPlaneInitialized condition on the Cluster object, we highly recommend clients to use new conditions instead of old ones. Utils for working with new conditions are available in the sigs.k8s.io/cluster-api/util/conditions package.

    • To keep using the old conditions from the Cluster object, temporarily present under status.deprecated.v1beta1.conditions, it is required to use the utils from the util/conditions/deprecated/v1beta1 package. Please note that status.deprecated.v1beta1.conditions will be removed tentatively in August 2026.
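
    For example, a minimal sketch reading the ControlPlaneInitialized condition with the new utils; the condition type string follows the example above (a typed constant may also exist in the API package):

    import (
        "sigs.k8s.io/cluster-api/util/conditions"
    )

    // cluster is a *clusterv1.Cluster using v1beta2 types; IsTrue reads the
    // metav1-based conditions from status.conditions.
    if conditions.IsTrue(cluster, "ControlPlaneInitialized") {
        // the cluster's control plane has been initialized
    }
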
  • References on Cluster, KubeadmControlPlane, MachineDeployment, MachinePool, MachineSet and Machine are now using the ContractVersionedObjectReference type instead of corev1.ObjectReference. These references can now be resolved with external.GetObjectFromContractVersionedRef instead of external.Get:

    external.GetObjectFromContractVersionedRef(ctx, r.Client, cluster.Spec.InfrastructureRef, cluster.Namespace)
    
  • Go clients writing the status of core Cluster API objects should use at least Cluster API v1.9 Go types. If that is not possible, avoid updating or patching the entire status field; instead, patch only individual fields. (Cluster API v1.9 introduced .status.v1beta2 fields that are necessary for lossless v1beta2 => v1beta1 => v1beta2 round trips)

  • Important! Please pay attention to field removals, e.g. if you are using Go types to write objects, either migrate to v1beta2 or make sure to stop setting the removed fields. The removed fields won’t be preserved even if setting them via v1beta1 Go types.

  • All metav1.Duration fields (e.g. nodeDeletionTimeout) are now represented as *int32 fields with units being part of the field name (e.g. nodeDeletionTimeoutSeconds).

    • A side effect of this is that if durations were specified on a sub-second granularity conversion will truncate to seconds. E.g. this means if you apply a v1beta1 object with nodeDeletionTimeout: 1s5ms only 1s will be stored and returned on reads.
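
    For instance, a small sketch of this truncation when converting such durations yourself:

    import (
        "time"

        "k8s.io/utils/ptr"
    )

    // "1s5ms" in a v1beta1 object ...
    d := 1*time.Second + 5*time.Millisecond
    // ... becomes 1 in the corresponding *Seconds field; the 5ms are truncated.
    nodeDeletionTimeoutSeconds := ptr.To(int32(d / time.Second))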

Suggested changes for providers

  • All recommendations in Suggested changes for clients using Cluster API Go types also apply to providers.

  • If providers are using Conditions from Cluster API resources, e.g. by looking at the ControlPlaneInitialized condition on the Cluster object, we highly recommend providers to use the new conditions instead of the old ones. Utils for working with the new conditions are available in the sigs.k8s.io/cluster-api/util/conditions package.

    • To keep using the old conditions from the Cluster object, temporarily present under status.deprecated.v1beta1.conditions, it is required to use the utils from the util/conditions/deprecated/v1beta1 package. Please note that status.deprecated.v1beta1.conditions will be removed tentatively in August 2026.
  • We highly recommend providers to start planning the move to the new v1beta2 version of the Cluster API contract for their own resources, e.g. in the AWSCluster or the AWSMachine resource; the transition MUST be completed before compatibility support for the v1beta1 version of the Cluster API contract is removed, tentatively in August 2026

  • Depending on which Cluster API contract version you are choosing to implement in the provider’s own CRDs, please refer to the corresponding contract documentation.

  • We highly recommend providers to define their future strategy for condition management for their own resources, e.g. in the AWSCluster or the AWSMachine resource; also in this case the transition to the new condition management strategy MUST be completed before compatibility support for the v1beta1 version of the Cluster API contract is removed, tentatively in August 2026. Available options are:

    • Migrate to metav1.Conditions like Cluster API (recommended)
    • Replace Cluster API’s v1beta1 Conditions with a custom condition implementation that is compliant with what is required by the v1beta2 Cluster API contract.
      • Starting from the CAPI release when v1beta1 removal will happen (tentatively August 2026), the Cluster API project will remove the Cluster API condition type, the util/conditions/deprecated/v1beta1 package, the util/deprecated/v1beta1 package, the code handling old conditions in util/patch.Helper and everything related to the custom Cluster API condition type.
  • Depending on which option you are choosing for condition management in the provider’s own CRDs, please refer to the corresponding documentation.

  • core Cluster API added the new CRD migrator component in the v1.9 release. For more details, see: https://github.com/kubernetes-sigs/cluster-api/issues/11894

    • CRD migration in clusterctl has been deprecated and will be removed in CAPI v1.13, so it’s recommended to adopt the CRD migrator in providers instead.
    • Please see the examples in https://github.com/kubernetes-sigs/cluster-api/pull/11889; the following high-level steps are required (see also the sketch after this list):
      • Add the --skip-crd-migration-phases command-line flag, which allows users to skip CRD migration phases.
      • Set up the CRDMigrator component with the manager.
      • Configure all CRDs owned by your provider; set UseCache only for the objects for which your provider already has an informer.
      • Add the following RBAC:
        • resources: customresourcedefinitions, verbs: get;list;watch
        • resources: customresourcedefinitions;customresourcedefinitions/status, resourceNames: <crd-name>, verbs: update;patch
          • Note: The CRD migrator will add the crd-migration.cluster.x-k8s.io/observed-generation annotation on the CRD object; please ensure that if these CRD objects are deployed with a tool like kapp / Argo / Flux the annotation is not continuously removed.
        • For all CRs that should be migrated by the CRDMigrator: verbs: get;list;watch;patch;update
        • For all CRs with UseStatusForStorageVersionMigration: true, verbs: update;patch on their /status resource (e.g. ipaddressclaims/status)
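
For illustration, a minimal sketch of wiring up the CRDMigrator in a provider’s main.go, loosely based on the examples linked above (infrav1, FooCluster/FooMachine, setupLog, and skipCRDMigrationPhases are placeholders for the provider’s own API package, logger, and flag value; exact field names may differ between releases):

import (
	"context"
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller"

	"sigs.k8s.io/cluster-api/controllers/crdmigrator"
)

func setupCRDMigrator(ctx context.Context, mgr ctrl.Manager, skipCRDMigrationPhases []string) {
	if err := (&crdmigrator.CRDMigrator{
		Client:                 mgr.GetClient(),
		APIReader:              mgr.GetAPIReader(),
		SkipCRDMigrationPhases: skipCRDMigrationPhases, // from the --skip-crd-migration-phases flag
		Config: map[client.Object]crdmigrator.ByObjectConfig{
			// Set UseCache only for objects the provider already has an informer for.
			&infrav1.FooCluster{}: {UseCache: true},
			&infrav1.FooMachine{}: {UseCache: true},
		},
		// Concurrency 1 avoids continuously updating the CRD objects.
	}).SetupWithManager(ctx, mgr, controller.Options{MaxConcurrentReconciles: 1}); err != nil {
		setupLog.Error(err, "Unable to create CRDMigrator controller")
		os.Exit(1)
	}
}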

How to bump to CAPI v1.11 but keep implementing the deprecated v1beta1 contract

CAPI v1.11 implements the v1beta2 version of the Cluster API contract.

However, in order to ease the transition for providers, the v1beta2 version of the Cluster API contract temporarily preserves compatibility with the deprecated v1beta1 contract; a few limitations apply, so please read Cluster API Contract changes for more details.

Providers implementing the deprecated v1beta1 contract can leverage compatibility support without any change, but it is crucial for them to start planning for the implementation of the new v1beta2 version of the Cluster API contract as soon as possible.

The implementation of the new v1beta2 version of the Cluster API contract MUST be completed before compatibility support for the v1beta1 version of the contract is removed, tentatively in August 2026.

After compatibility support for the v1beta1 version of the Cluster API contract is removed, providers still implementing the v1beta1 contract will stop working with new Cluster API releases (they will keep working only with older versions of Cluster API).

Please see Cluster API Contract changes and provider contracts for more details.

How to implement the new v1beta2 contract

We highly recommend providers to start planning the move to the new v1beta2 version of the Cluster API contract as soon as possible.

Implementing the new v1beta2 contract for providers is a multistep operation:

  1. Implement the changes defined for your specific provider type; see Cluster API Contract changes and provider contracts for more details.
     • In most cases, the v1beta2 contract introduces changes to the initialization completed, conditions, and terminal failures rules; the replicas rule also changed for control plane providers.
     • If providers start using api/core/v1beta2/clusterv1.ObjectMeta in their API types, please note that it is necessary to set the omitzero JSON marker (see the sketch after this list).
  2. While implementing the changes above, it is also highly recommended to check the implementation of all the other rules (documentation for contract rules has been improved in recent releases and is worth a look!).
  3. Change the CRD label that documents which Cluster API contract is implemented by your provider, e.g.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  labels:
    cluster.x-k8s.io/v1beta2: <your API version>
  name: <your CRD name>
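
As referenced in step 1, a minimal sketch of embedding clusterv1.ObjectMeta with the omitzero JSON marker (FooMachineTemplateResource and FooMachineSpec are hypothetical types):

import clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"

type FooMachineTemplateResource struct {
	// metadata is the standard object's metadata.
	// +optional
	ObjectMeta clusterv1.ObjectMeta `json:"metadata,omitempty,omitzero"`

	// spec is the desired state of the machine template resource.
	Spec FooMachineSpec `json:"spec"`
}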

How to bump to CAPI v1.11 but keep using deprecated v1beta1 conditions

A provider can continue to use deprecated v1beta1 conditions after bumping to CAPI v1.11, but to do so it is required to change the following imports:

import (
	...
    "sigs.k8s.io/cluster-api/util/conditions"
    "sigs.k8s.io/cluster-api/util/patch"
)

into:

import (
	...
    v1beta1conditions "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/conditions"
    v1beta1patch "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch"
)
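
Call sites keep working with the new aliases; for example (a sketch, where obj is a provider object implementing the relevant condition interfaces and clusterv1beta1 aliases sigs.k8s.io/cluster-api/api/core/v1beta1):

v1beta1conditions.MarkTrue(obj, clusterv1beta1.ReadyCondition)

patchHelper, err := v1beta1patch.NewHelper(obj, r.Client)
if err != nil {
	return err
}
if err := patchHelper.Patch(ctx, obj); err != nil {
	return err
}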

Important! Please pay special attention to using the sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch import everywhere, because using sigs.k8s.io/cluster-api/util/patch by mistake could in some cases lead to dropping conditions at runtime.

Also, please note that starting from the CAPI release in which the v1beta1 removal happens (tentatively August 2026), the Cluster API project will remove the Cluster API condition type, the util/conditions/deprecated/v1beta1 package, the util/deprecated/v1beta1 package, the code handling old conditions in util/patch.Helper, and everything else related to the custom Cluster API condition type.

This means that if a provider wants to keep using deprecated v1beta1 conditions after this date, they have to maintain their own custom copy of the types and related utils.

Also note that the v1beta2 contract is not going to require a specific condition type, so it will also be possible to use a custom condition type.

See Suggested changes for providers and Cluster API Contract changes for more details.

How to start using metav1.Conditions

If providers choose to migrate to metav1.Conditions, the process described in Improving status in CAPI resources can be used as a reference for how to implement a phased transition.

As a quick summary, the transition should go through the following stages:

Stage 1

Add status.v1beta2.conditions to your API (existing conditions will remain at status.conditions)

If you are at this stage, you must use the following util packages from CAPI v1.11 (or later releases; see note below):

import (
	...
    v1beta1conditions "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/conditions"
    v1beta2conditions "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/conditions/v1beta2"
    v1beta1patch "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch"
)
  • the v1beta1conditions package alias provides access to utils for managing clusterv1beta1.Conditions in status.conditions
  • the v1beta2conditions package alias provides access to utils for managing metav1.Conditions in status.v1beta2.conditions
  • the v1beta1patch package alias provides access to utils for patching objects in this phase.

Important!

  • Please pay special attention to using the sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch import everywhere, because using sigs.k8s.io/cluster-api/util/patch by mistake could in some cases lead to dropping conditions at runtime (note: sigs.k8s.io/cluster-api/util/patch is the package we assume you are using before stage 1).
  • All packages from the import above (all packages below sigs.k8s.io/cluster-api/util/deprecated/v1beta1) are going to be removed from CAPI when the v1beta1 removal will happen (tentative Aug 2026).
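
As an illustration, a minimal Stage 1 sketch keeping both condition flavors in sync (fooMachine is a placeholder object of your API type; clusterv1beta1 aliases sigs.k8s.io/cluster-api/api/core/v1beta1 and metav1 aliases k8s.io/apimachinery/pkg/apis/meta/v1):

// Keep the old clusterv1beta1.Conditions in status.conditions up to date...
v1beta1conditions.MarkTrue(fooMachine, clusterv1beta1.ReadyCondition)

// ...and mirror the state into the new metav1.Conditions in status.v1beta2.conditions.
v1beta2conditions.Set(fooMachine, metav1.Condition{
	Type:   "Ready",
	Status: metav1.ConditionTrue,
	Reason: "Ready",
})

// Use the stage 1 patch helper so both condition fields are written correctly.
patchHelper, err := v1beta1patch.NewHelper(fooMachine, r.Client)
if err != nil {
	return err
}
if err := patchHelper.Patch(ctx, fooMachine); err != nil {
	return err
}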

Stage 2

Create a new API version, swap old conditions and new conditions, implement conversions

  • Move old conditions from status.conditions to status.deprecated.v1beta1.conditions
  • Move new conditions from status.v1beta2.conditions to status.conditions

If you are at this stage, you must use the following util packages from CAPI v1.11 (or later releases; see note below):

import (
	...
    "sigs.k8s.io/cluster-api/util/conditions"
    deprecatedv1beta1conditions "sigs.k8s.io/cluster-api/util/conditions/deprecated/v1beta1"
    "sigs.k8s.io/cluster-api/util/patch"
)
  • the conditions package provides access to utils for managing metav1.Conditions in status.conditions
  • the deprecatedv1beta1conditions package alias provides access to utils for managing clusterv1.Conditions in status.deprecated.v1beta1.conditions
  • the patch package provides access to utils for patching objects in this phase

Important!

  • Please pay special attention to using the sigs.k8s.io/cluster-api/util/patch import everywhere, because using sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch by mistake could in some cases lead to dropping conditions at runtime (note: sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch is the package you were using in stage 1; it should not be used in stage 2).
  • The package sigs.k8s.io/cluster-api/util/conditions/deprecated/v1beta1 is going to be removed from CAPI when the v1beta1 removal happens (tentatively Aug 2026).
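
Similarly, a minimal Stage 2 sketch (same placeholders as in stage 1; note that the patch helper is now the one from util/patch):

// New metav1.Conditions now live in status.conditions.
conditions.Set(fooMachine, metav1.Condition{
	Type:   "Ready",
	Status: metav1.ConditionTrue,
	Reason: "Ready",
})

// Old conditions are still maintained, now under status.deprecated.v1beta1.conditions.
deprecatedv1beta1conditions.MarkTrue(fooMachine, "Ready")

patchHelper, err := patch.NewHelper(fooMachine, r.Client)
if err != nil {
	return err
}
if err := patchHelper.Patch(ctx, fooMachine); err != nil {
	return err
}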

Stage 3

When removing the old API version, remove status.deprecated.v1beta1

The following util packages from CAPI v1.11 (or later releases) should still be in use:

import (
	...
    "sigs.k8s.io/cluster-api/util/conditions"
    "sigs.k8s.io/cluster-api/util/patch"
)
  • the conditions package provides access to utils for managing metav1.Conditions in status.conditions
  • the patch package provides access to utils for patching objects in this phase

Important!

  • Please pay special attention to using the sigs.k8s.io/cluster-api/util/patch import everywhere, because using sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch by mistake could in some cases lead to dropping conditions at runtime (note: sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch is the package from stage 1; it should not be used in the following stages).

Annex

Imports for conditions and patch helper utils

In order to help users transition away from the CAPI conditions type, CAPI v1.11 supports different versions of the conditions and patch helper utils.

The following table should help to pick the right utils.

| Field to change | Import for condition util | Import for patch helper |
|---|---|---|
| status.conditions of type clusterv1beta1.Conditions | v1beta1conditions "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/conditions" | v1beta1patch "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch" |
| status.v1beta2.conditions of type []metav1.Condition | v1beta2conditions "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/conditions/v1beta2" | v1beta1patch "sigs.k8s.io/cluster-api/util/deprecated/v1beta1/patch" |
| status.deprecated.v1beta1.conditions of type clusterv1.Conditions | deprecatedv1beta1conditions "sigs.k8s.io/cluster-api/util/conditions/deprecated/v1beta1" | "sigs.k8s.io/cluster-api/util/patch" |
| status.conditions of type []metav1.Condition | "sigs.k8s.io/cluster-api/util/conditions" | "sigs.k8s.io/cluster-api/util/patch" |
Important!

  • Please pay special attention to using the correct patch helper import everywhere, because using a wrong one could in some cases lead to dropping conditions at runtime without causing compile errors.