Leases are a special kind of object in k8s with several use cases, the most common of which are:

  • Node heartbeats (see Node controller).
  • Leader election of control plane components in HA setups.
  • Custom workloads leader elections.

Node heartbeats

One of the heartbeat mechanisms the Node controller uses to monitor the health of nodes is the Lease object. Every node has an associated Lease with the same name as the node in the kube-node-lease namespace. Under the hood, every kubelet heartbeat is a request that updates the Lease object’s spec.renewTime field, which the Node controller uses as an indication of the node’s availability.
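A node’s Lease looks roughly like this (illustrative values; the field layout follows the coordination.k8s.io/v1 API):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                # same name as the node (illustrative)
  namespace: kube-node-lease
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2024-01-01T00:00:10.000000Z"  # bumped on every kubelet heartbeat
```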

Leader election

In HA setups, it is common to have multiple instances of the kube-apiserver, the kube-controller-manager, and the kube-scheduler. This requires coordination between the instances of each component to determine which one is currently operational (active). In HA setups that run multiple kube-controller-manager and kube-scheduler instances, a leader is elected by acquiring a Lease object, i.e. by setting its spec.holderIdentity field to the name of the leader.
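The resulting Lease lives in the kube-system namespace and looks roughly like this (illustrative values; the holderIdentity is typically the hostname plus a unique suffix):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  holderIdentity: control-plane-1_b3c9e1f0   # illustrative leader identity
  leaseDurationSeconds: 15
  acquireTime: "2024-01-01T00:00:00.000000Z"
  renewTime: "2024-01-01T00:00:12.000000Z"   # the leader renews periodically
```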

API server election

The kube-apiserver is a special case: every instance is active, so instead of electing a leader, each instance maintains its own Lease object. Other cluster components use these Leases to determine how many API server instances exist.

Every kube-apiserver Lease name contains the SHA-256 hash of its host’s hostname, ensuring uniqueness. If a new instance with the same hostname appears, the existing Lease object’s spec.holderIdentity is updated to the new instance instead of creating a fully new Lease (this is related to the behaviour above: there can be only one such instance on a single node).
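The hashing idea can be illustrated with a short sketch (hypothetical function name; the exact naming scheme is an apiserver-internal detail, this only shows why hashing the hostname yields a stable, unique identifier):

```python
import hashlib

def apiserver_lease_suffix(hostname: str) -> str:
    """Derive a stable, unique suffix from a hostname via SHA-256 (illustrative)."""
    return hashlib.sha256(hostname.encode()).hexdigest()

# Restarted instances on the same host derive the same suffix,
# so they reuse one Lease; different hosts get different suffixes.
print(apiserver_lease_suffix("control-plane-1") == apiserver_lease_suffix("control-plane-1"))  # True
print(apiserver_lease_suffix("control-plane-1") == apiserver_lease_suffix("control-plane-2"))  # False
```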

Custom workloads

You can also use Leases with custom controllers to support HA setups: multiple replicas compete for a Lease, and only the current holder performs work while the others stand by.
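In Go this is typically done with client-go’s leaderelection package; the core acquire-or-renew step it performs against the Lease can be sketched with an in-memory stand-in (hypothetical names, not the real API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Lease:
    """In-memory stand-in for a coordination.k8s.io/v1 Lease (illustrative)."""
    holder_identity: Optional[str] = None
    lease_duration: float = 15.0
    renew_time: float = 0.0

def try_acquire_or_renew(lease: Lease, candidate: str, now: float) -> bool:
    """Take the lease if it is free or expired; renew it if we already hold it."""
    expired = now - lease.renew_time > lease.lease_duration
    if lease.holder_identity in (None, candidate) or expired:
        lease.holder_identity = candidate
        lease.renew_time = now
        return True
    return False  # someone else holds a live lease; remain a follower

lease = Lease()
assert try_acquire_or_renew(lease, "replica-a", now=0.0)       # acquires leadership
assert not try_acquire_or_renew(lease, "replica-b", now=5.0)   # lease still live, stays follower
assert try_acquire_or_renew(lease, "replica-b", now=20.0)      # takes over after expiry
```

The real implementation does this with optimistic concurrency (resourceVersion) on the API server, so only one candidate’s update succeeds per round.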
