---
title: Leases
content_type: concept
weight: 30
---

Distributed systems often have a need for _leases_, which provide a mechanism to lock shared resources
and coordinate activity between members of a set. In Kubernetes, the lease concept is represented by
[Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects in the `coordination.k8s.io`
{{< glossary_tooltip text="API Group" term_id="api-group" >}}, which are used for system-critical
capabilities such as node heartbeats and component-level leader election.

## Node heartbeats {#node-heart-beats}

Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server.
For every `Node`, there is a `Lease` object with a matching name in the `kube-node-lease` namespace.
Under the hood, every kubelet heartbeat is an **update** request to this `Lease` object, updating
the `spec.renewTime` field for the Lease. The Kubernetes control plane uses the time stamp of this
field to determine the availability of this `Node`.

See [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats) for more details.

## Leader election

Kubernetes also uses Leases to ensure only one instance of a component is running at any given time.
This is used by control plane components like `kube-controller-manager` and `kube-scheduler` in
HA configurations, where only one instance of the component should be actively running while the
other instances are on stand-by.

## API server identity

{{< feature-state for_k8s_version="v1.26" state="beta" >}}

Starting in Kubernetes v1.26, each `kube-apiserver` uses the Lease API to publish its identity to the
rest of the system. While not particularly useful on its own, this provides a mechanism for clients to
discover how many instances of `kube-apiserver` are operating the Kubernetes control plane.
The existence of kube-apiserver leases enables future capabilities that may require coordination between
each kube-apiserver.

You can inspect the Leases owned by each kube-apiserver by checking for Lease objects in the
`kube-system` namespace with the name `kube-apiserver-<sha256-hash>`. Alternatively, you can use the
label selector `k8s.io/component=kube-apiserver`:

```shell
kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
```

```
NAME                                        HOLDER                                                                           AGE
kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a   kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4   5m33s
kube-apiserver-dz2dqprdpsgnm756t5rnov7yka   kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f   4m23s
kube-apiserver-fyloo45sdenffw2ugwaz3likua   kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e   4m43s
```

The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each
kube-apiserver should be configured to use a hostname that is unique within the cluster. New instances
of kube-apiserver that use the same hostname will take over existing Leases using a new holder identity,
as opposed to instantiating new Lease objects.
You can check the hostname used by kube-apiserver by checking the value of the
`kubernetes.io/hostname` label:

```shell
kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml
```

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2022-11-30T15:37:15Z"
  labels:
    k8s.io/component: kube-apiserver
    kubernetes.io/hostname: kind-control-plane
  name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a
  namespace: kube-system
  resourceVersion: "18171"
  uid: d6c68901-4ec5-4385-b1ef-2d783738da6c
spec:
  holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4
  leaseDurationSeconds: 3600
  renewTime: "2022-11-30T18:04:27.912073Z"
```

Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers
after 1 hour.

You can disable API server identity leases by disabling the `APIServerIdentity`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

## Workloads {#custom-workload}

Your own workload can define its own use of Leases. For example, you might run a custom
{{< glossary_tooltip term_id="controller" text="controller" >}} where a primary or leader member
performs operations that its peers do not. You define a Lease so that the controller replicas can
select or elect a leader, using the Kubernetes API for coordination.
If you do use a Lease, it's a good practice to define a name for the Lease that is obviously linked
to the product or component. For example, if you have a component named Example Foo, use a Lease
named `example-foo`.

If a cluster operator or another end user could deploy multiple instances of a component, select a
name prefix and pick a mechanism (such as a hash of the name of the Deployment) to avoid name
collisions for the Leases.

You can use another approach so long as it achieves the same outcome: different software products do
not conflict with one another.
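To illustrate what this can look like in practice, here is a minimal sketch of Lease-based leader
election using the `k8s.io/client-go/tools/leaderelection` package. The Lease name `example-foo`, the
`default` namespace, and the timing values are placeholder assumptions for a hypothetical component;
this is one possible approach, not the only way to coordinate on a Lease.

```go
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes the controller runs in-cluster with RBAC permissions to
	// get, create, and update Leases in its namespace.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each replica needs a distinct identity; the Pod name (hostname) works well.
	id, _ := os.Hostname()

	// The Lease that all replicas of this hypothetical component contend for.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-foo", // a name obviously linked to the component
			Namespace: "default",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // how long a lost leader's claim is honored
		RenewDeadline:   10 * time.Second, // leader must renew the Lease within this window
		RetryPeriod:     2 * time.Second,  // wait between acquire/renew attempts
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the elected leader runs this; do the primary-only work here
				// until the context is cancelled.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Leadership was lost or released; stop doing primary-only work.
			},
			OnNewLeader: func(identity string) {
				// Observes leadership changes, including on stand-by replicas.
			},
		},
	})
}
```

If you run several replicas with logic like this, only one of them at a time executes
`OnStartedLeading` while the others stay on stand-by, mirroring the behavior described for the
control plane components above.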