updated button_path

pranav-pandey0804 2024-03-09 17:48:29 +05:30
commit f8d1ef69f3
26 changed files with 109 additions and 90 deletions

View File

@@ -49,14 +49,15 @@ aliases:
 - windsonsea
 sig-docs-es-owners: # Admins for Spanish content
 - 92nqb
-- krol3
 - electrocucaracha
+- krol3
 - raelga
 - ramrodo
 sig-docs-es-reviews: # PR reviews for Spanish content
 - 92nqb
-- krol3
 - electrocucaracha
+- jossemarGT
+- krol3
 - raelga
 - ramrodo
 sig-docs-fr-owners: # Admins for French content

View File

@@ -64,7 +64,7 @@ time to review the [dockershim migration documentation](/docs/tasks/administer-c
 and consult your Kubernetes hosting vendor (if you have one) what container runtime options are available for you.
 Read up [container runtime documentation with instructions on how to use containerd and CRI-O](/docs/setup/production-environment/container-runtimes/#container-runtimes)
 to help prepare you when you're ready to upgrade to 1.24. CRI-O, containerd, and
-Docker with [Mirantis cri-dockerd](https://github.com/Mirantis/cri-dockerd) are
+Docker with [Mirantis cri-dockerd](https://mirantis.github.io/cri-dockerd/) are
 not the only container runtime options, we encourage you to explore the [CNCF landscape on container runtimes](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#runtime--container-runtime)
 in case another suits you better.

View File

@@ -109,7 +109,7 @@ Kubernetes clusters. Containers make this kind of interoperability possible.
 Mirantis and Docker have [committed][mirantis] to maintaining a replacement adapter for
 Docker Engine, and to maintain that adapter even after the in-tree dockershim is removed
-from Kubernetes. The replacement adapter is named [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
+from Kubernetes. The replacement adapter is named [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/).
 You can install `cri-dockerd` and use it to connect the kubelet to Docker Engine. Read [Migrate Docker Engine nodes from dockershim to cri-dockerd](/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/) to learn more.

View File

@@ -173,6 +173,8 @@ publishing packages to the Google-hosted repository in the future.
 curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
 ```
+_Update: In releases older than Debian 12 and Ubuntu 22.04, the folder `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command._
 3. Update the `apt` package index:
 ```shell

View File

@@ -11,11 +11,11 @@ using a client of the {{<glossary_tooltip term_id="kube-apiserver" text="API ser
 creates an `Eviction` object, which causes the API server to terminate the Pod.
 API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
 and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
 Using the API to create an Eviction object for a Pod is like performing a
 policy-controlled [`DELETE` operation](/docs/reference/kubernetes-api/workload-resources/pod-v1/#delete-delete-a-pod)
 on the Pod.
 ## Calling the Eviction API
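For orientation while reading this file's hunks: the `Eviction` object that the page describes is a small manifest that a client submits to the Pod's `eviction` subresource. A minimal sketch, with an illustrative Pod name and namespace:

```yaml
apiVersion: policy/v1
kind: Eviction
metadata:
  name: quux          # name of the Pod to evict (illustrative)
  namespace: default  # namespace of that Pod
```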
@@ -75,13 +75,13 @@ checks and responds in one of the following ways:
 * `429 Too Many Requests`: the eviction is not currently allowed because of the
   configured {{<glossary_tooltip term_id="pod-disruption-budget" text="PodDisruptionBudget">}}.
   You may be able to attempt the eviction again later. You might also see this
   response because of API rate limiting.
 * `500 Internal Server Error`: the eviction is not allowed because there is a
   misconfiguration, like if multiple PodDisruptionBudgets reference the same Pod.
 If the Pod you want to evict isn't part of a workload that has a
 PodDisruptionBudget, the API server always returns `200 OK` and allows the
 eviction.
 If the API server allows the eviction, the Pod is deleted as follows:
@@ -103,12 +103,12 @@ If the API server allows the eviction, the Pod is deleted as follows:
 ## Troubleshooting stuck evictions
 In some cases, your applications may enter a broken state, where the Eviction
 API will only return `429` or `500` responses until you intervene. This can
 happen if, for example, a ReplicaSet creates pods for your application but new
 pods do not enter a `Ready` state. You may also notice this behavior in cases
 where the last evicted Pod had a long termination grace period.
 If you notice stuck evictions, try one of the following solutions:
 * Abort or pause the automated operation causing the issue. Investigate the stuck
   application before you restart the operation.

View File

@@ -96,7 +96,7 @@ define. Some of the benefits of affinity and anti-affinity include:
 The affinity feature consists of two types of affinity:
 - *Node affinity* functions like the `nodeSelector` field but is more expressive and
   allows you to specify soft rules.
 - *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
   on other Pods.
@@ -254,13 +254,13 @@ the node label that the system uses to denote the domain. For examples, see
 [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).
 {{< note >}}
-Inter-pod affinity and anti-affinity require substantial amount of
+Inter-pod affinity and anti-affinity require substantial amounts of
 processing which can slow down scheduling in large clusters significantly. We do
 not recommend using them in clusters larger than several hundred nodes.
 {{< /note >}}
 {{< note >}}
-Pod anti-affinity requires nodes to be consistently labelled, in other words,
+Pod anti-affinity requires nodes to be consistently labeled, in other words,
 every node in the cluster must have an appropriate label matching `topologyKey`.
 If some or all nodes are missing the specified `topologyKey` label, it can lead
 to unintended behavior.
@@ -305,22 +305,22 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 The affinity rule specifies that the scheduler is allowed to place the example Pod
 on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
 where other Pods have been labeled with `security=S1`.
 For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
 consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
 assign the Pod to any node within Zone V, as long as there is at least one Pod within
 Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
 labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
 The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
 on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
 where other Pods have been labeled with `security=S2`.
 For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
 consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
 assigning the Pod to any node within Zone R, as long as there is at least one Pod within
 Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
 scheduling into Zone R if there are no Pods with `security=S2` labels.
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
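The manifest that this passage walks through is included from a sample file and is not visible in the diff; as a reading aid, here is a sketch consistent with the description above — a hard Pod affinity on `security=S1` and a soft anti-affinity on `security=S2`, both keyed on the zone label (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity      # illustrative name
spec:
  affinity:
    podAffinity:
      # "hard" rule: only place this Pod in a zone that already runs a security=S1 Pod
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # "soft" rule: prefer zones that do not already run a security=S2 Pod
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:2.0
```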
@@ -364,19 +364,19 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `matchLabelKeys` field is a alpha-level field and is disabled by default in
+The `matchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
 {{< /note >}}
 Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
 or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.
 The keys are used to look up values from the pod labels; those key-value labels are combined
 (using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
 filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
 A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
 managed as part of a Deployment, where the value is unique for each revision).
@@ -405,7 +405,7 @@ spec:
 # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
 # If you update the Deployment, the replacement Pods follow their own affinity rules
 # (if there are any defined in the new Pod template)
 matchLabelKeys:
 - pod-template-hash
 ```
@@ -415,14 +415,14 @@ spec:
 {{< note >}}
 <!-- UPDATE THIS WHEN PROMOTING TO BETA -->
-The `mismatchLabelKeys` field is a alpha-level field and is disabled by default in
+The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
 Kubernetes {{< skew currentVersion >}}.
 When you want to use it, you have to enable it via the
 `MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
 {{< /note >}}
 Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
 or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
 when satisfying the Pod (anti)affinity.
 One example use case is to ensure Pods go to the topology domain (node, zone, etc) where only Pods from the same tenant or team are scheduled in.
@@ -438,22 +438,22 @@ metadata:
 ...
 spec:
 affinity:
 podAffinity:
 requiredDuringSchedulingIgnoredDuringExecution:
 # ensure that pods associated with this tenant land on the correct node pool
 - matchLabelKeys:
 - tenant
 topologyKey: node-pool
 podAntiAffinity:
 requiredDuringSchedulingIgnoredDuringExecution:
 # ensure that pods associated with this tenant can't schedule to nodes used for another tenant
 - mismatchLabelKeys:
 - tenant # whatever the value of the "tenant" label for this Pod, prevent
 # scheduling to nodes in any pool where any Pod from a different
 # tenant is running.
 labelSelector:
 # We have to have the labelSelector which selects only Pods with the tenant label,
 # otherwise this Pod would hate Pods from daemonsets as well, for example,
 # which aren't supposed to have the tenant label.
 matchExpressions:
 - key: tenant
@@ -561,7 +561,7 @@ where each web server is co-located with a cache, on three separate nodes.
 | *webserver-1* | *webserver-2* | *webserver-3* |
 | *cache-1* | *cache-2* | *cache-3* |
-The overall effect is that each cache instance is likely to be accessed by a single client, that
+The overall effect is that each cache instance is likely to be accessed by a single client that
 is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
 You might have other reasons to use Pod anti-affinity.
@@ -589,7 +589,7 @@ Some of the limitations of using `nodeName` to select nodes are:
 {{< note >}}
 `nodeName` is intended for use by custom schedulers or advanced use cases where
 you need to bypass any configured schedulers. Bypassing the schedulers might lead to
-failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or a the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
+failed Pods if the assigned Nodes get oversubscribed. You can use the [node affinity](#node-affinity) or the [`nodeselector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
 {{</ note >}}
 Here is an example of a Pod spec using the `nodeName` field:
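The Pod spec that this line introduces sits outside the hunk; a sketch of such a spec (the Pod name, image, and node name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01   # place the Pod directly on this node, bypassing the scheduler
```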
@@ -633,13 +633,13 @@ The following operators can only be used with `nodeAffinity`.
 | Operator | Behaviour |
 | :------------: | :-------------: |
 | `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
 | `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
 {{<note>}}
 `Gt` and `Lt` operators will not work with non-integer values. If the given value
 doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
 are not available for `podAffinity`.
 {{</note>}}
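To make the note above concrete, a sketch of a node affinity term that uses `Gt` (the label key and threshold are hypothetical; the node label's value must parse as an integer):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-operator-example    # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/cpu-core-count   # hypothetical integer-valued node label
            operator: Gt
            values:
            - "4"                             # only nodes whose label value is greater than 4
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
```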

View File

@@ -41,14 +41,14 @@ ResourceClass
 driver.
 ResourceClaim
-: Defines a particular resource instances that is required by a
+: Defines a particular resource instance that is required by a
 workload. Created by a user (lifecycle managed manually, can be shared
 between different Pods) or for individual Pods by the control plane based on
 a ResourceClaimTemplate (automatic lifecycle, typically used by just one
 Pod).
 ResourceClaimTemplate
-: Defines the spec and some meta data for creating
+: Defines the spec and some metadata for creating
 ResourceClaims. Created by a user when deploying a workload.
 PodSchedulingContext

View File

@@ -62,7 +62,7 @@ kube-scheduler selects a node for the pod in a 2-step operation:
 The _filtering_ step finds the set of Nodes where it's feasible to
 schedule the Pod. For example, the PodFitsResources filter checks whether a
-candidate Node has enough available resource to meet a Pod's specific
+candidate Node has enough available resources to meet a Pod's specific
 resource requests. After this step, the node list contains any suitable
 Nodes; often, there will be more than one. If the list is empty, that
 Pod isn't (yet) schedulable.

View File

@@ -171,7 +171,7 @@ The kubelet has the following default hard eviction thresholds:
 - `nodefs.inodesFree<5%` (Linux nodes)
 These default values of hard eviction thresholds will only be set if none
-of the parameters is changed. If you changed the value of any parameter,
+of the parameters is changed. If you change the value of any parameter,
 then the values of other parameters will not be inherited as the default
 values and will be set to zero. In order to provide custom values, you
 should provide all the thresholds respectively.
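To make the guidance above concrete, a sketch of a KubeletConfiguration that overrides one hard eviction threshold and therefore restates the remaining ones explicitly (the memory value is the only deliberate change; treat the numbers as illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"   # custom value
  nodefs.available: "10%"     # defaults restated; leaving them out would reset them to zero
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```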

View File

@@ -64,7 +64,7 @@ and it cannot be prefixed with `system-`.
 A PriorityClass object can have any 32-bit integer value smaller than or equal
 to 1 billion. This means that the range of values for a PriorityClass object is
 from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
 built-in PriorityClasses that represent critical system Pods. A cluster
 admin should create one PriorityClass object for each such mapping that they want.
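For readers skimming this hunk, a PriorityClass object of the kind being described is short; a sketch with an illustrative name, value, and description:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative name
value: 1000000                 # any 32-bit integer up to 1 billion for user-defined classes
globalDefault: false
description: "Use for workloads that should preempt lower-priority Pods."
```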
@@ -182,8 +182,8 @@ When Pod priority is enabled, the scheduler orders pending Pods by
 their priority and a pending Pod is placed ahead of other pending Pods
 with lower priority in the scheduling queue. As a result, the higher
 priority Pod may be scheduled sooner than Pods with lower priority if
-its scheduling requirements are met. If such Pod cannot be scheduled,
-scheduler will continue and tries to schedule other lower priority Pods.
+its scheduling requirements are met. If such Pod cannot be scheduled, the
+scheduler will continue and try to schedule other lower priority Pods.
 ## Preemption
@@ -199,7 +199,7 @@ the Pods are gone, P can be scheduled on the Node.
 ### User exposed information
 When Pod P preempts one or more Pods on Node N, `nominatedNodeName` field of Pod
-P's status is set to the name of Node N. This field helps scheduler track
+P's status is set to the name of Node N. This field helps the scheduler track
 resources reserved for Pod P and also gives users information about preemptions
 in their clusters.
@@ -209,8 +209,8 @@ After victim Pods are preempted, they get their graceful termination period. If
 another node becomes available while scheduler is waiting for the victim Pods to
 terminate, scheduler may use the other node to schedule Pod P. As a result
 `nominatedNodeName` and `nodeName` of Pod spec are not always the same. Also, if
-scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
-arrives, scheduler may give Node N to the new higher priority Pod. In such a
+the scheduler preempts Pods on Node N, but then a higher priority Pod than Pod P
+arrives, the scheduler may give Node N to the new higher priority Pod. In such a
 case, scheduler clears `nominatedNodeName` of Pod P. By doing this, scheduler
 makes Pod P eligible to preempt Pods on another Node.
@@ -256,9 +256,9 @@ the Node is not considered for preemption.
 If a pending Pod has inter-pod {{< glossary_tooltip text="affinity" term_id="affinity" >}}
 to one or more of the lower-priority Pods on the Node, the inter-Pod affinity
 rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
 the scheduler does not preempt any Pods on the Node. Instead, it looks for another
 Node. The scheduler might find a suitable Node or it might not. There is no
 guarantee that the pending Pod can be scheduled.
 Our recommended solution for this problem is to create inter-Pod affinity only
@@ -288,7 +288,7 @@ enough demand and if we find an algorithm with reasonable performance.
 ## Troubleshooting
-Pod priority and pre-emption can have unwanted side effects. Here are some
+Pod priority and preemption can have unwanted side effects. Here are some
 examples of potential problems and ways to deal with them.
 ### Pods are preempted unnecessarily
@@ -361,7 +361,7 @@ to get evicted. The kubelet ranks pods for eviction based on the following facto
 1. Whether the starved resource usage exceeds requests
 1. Pod Priority
 1. Amount of resource usage relative to requests
 See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
 for more details.

View File

@@ -9,7 +9,7 @@ weight: 40
 {{< feature-state for_k8s_version="v1.27" state="beta" >}}
 Pods were considered ready for scheduling once created. Kubernetes scheduler
 does its due diligence to find nodes to place all pending Pods. However, in a
 real-world case, some Pods may stay in a "miss-essential-resources" state for a long period.
 These Pods actually churn the scheduler (and downstream integrators like Cluster AutoScaler)
 in an unnecessary manner.
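For context while reading this file's hunks: a Pod that opts into scheduling readiness carries a `schedulingGates` list in its spec and is reported as `SchedulingGated` until every gate is removed. A minimal sketch (the gate name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulingGates:
  - name: example.com/foo      # the scheduler skips this Pod while any gate remains
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```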
@@ -59,7 +59,7 @@ The output is:
 ```
 To inform scheduler this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
-by re-applying a modified manifest:
+by reapplying a modified manifest:
 {{% code_sample file="pods/pod-without-scheduling-gates.yaml" %}}
@@ -79,7 +79,7 @@ Given the test-pod doesn't request any CPU/memory resources, it's expected that
 transited from previous `SchedulingGated` to `Running`:
 ```none
 NAME READY STATUS RESTARTS AGE IP NODE
 test-pod 1/1 Running 0 15s 10.0.0.4 node-2
 ```
@@ -94,8 +94,8 @@ scheduling. You can use `scheduler_pending_pods{queue="gated"}` to check the met
 {{< feature-state for_k8s_version="v1.27" state="beta" >}}
 You can mutate scheduling directives of Pods while they have scheduling gates, with certain constraints.
 At a high level, you can only tighten the scheduling directives of a Pod. In other words, the updated
 directives would cause the Pods to only be able to be scheduled on a subset of the nodes that it would
 previously match. More concretely, the rules for updating a Pod's scheduling directives are as follows:
 1. For `.spec.nodeSelector`, only additions are allowed. If absent, it will be allowed to be set.
@@ -107,8 +107,8 @@ previously match. More concretely, the rules for updating a Pod's scheduling dir
 or `fieldExpressions` are allowed, and no changes to existing `matchExpressions`
 and `fieldExpressions` will be allowed. This is because the terms in
 `.requiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms`, are ORed
 while the expressions in `nodeSelectorTerms[].matchExpressions` and
 `nodeSelectorTerms[].fieldExpressions` are ANDed.
 4. For `.preferredDuringSchedulingIgnoredDuringExecution`, all updates are allowed.
 This is because preferred terms are not authoritative, and so policy controllers

View File

@@ -58,8 +58,8 @@ Within the `scoringStrategy` field, you can configure two parameters: `requested
 `resources`. The `shape` in the `requestedToCapacityRatio`
 parameter allows the user to tune the function as least requested or most
 requested based on `utilization` and `score` values. The `resources` parameter
-consists of `name` of the resource to be considered during scoring and `weight`
-specify the weight of each resource.
+comprises both the `name` of the resource to be considered during scoring and
+its corresponding `weight`, which specifies the weight of each resource.
 Below is an example configuration that sets
 the bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
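The example configuration that the sentence above introduces falls outside this hunk; as a reference, a sketch of what such a `scoringStrategy` can look like (the weights and shape points are illustrative, and the exact nesting should be checked against the kube-scheduler configuration reference):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: RequestedToCapacityRatio
        requestedToCapacityRatio:
          shape:                     # rising curve = most-requested (bin packing) behavior
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        resources:
        - name: intel.com/foo
          weight: 3
        - name: intel.com/bar
          weight: 3
```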

View File

@@ -77,7 +77,7 @@ If you don't specify a threshold, Kubernetes calculates a figure using a
 linear formula that yields 50% for a 100-node cluster and yields 10%
 for a 5000-node cluster. The lower bound for the automatic value is 5%.
-This means that, the kube-scheduler always scores at least 5% of your cluster no
+This means that the kube-scheduler always scores at least 5% of your cluster no
 matter how large the cluster is, unless you have explicitly set
 `percentageOfNodesToScore` to be smaller than 5.
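As a reminder of where the option discussed above lives, a sketch of a scheduler configuration that sets it explicitly (the value 50 is illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 50   # stop searching for feasible nodes once 50% of the cluster has been checked
```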

View File

@@ -83,7 +83,7 @@ the Pod is put into the active queue or the backoff queue
 so that the scheduler will retry the scheduling of the Pod.
 {{< note >}}
 QueueingHint evaluation during scheduling is a beta-level feature.
 The v1.28 release series initially enabled the associated feature gate; however, after the
 discovery of an excessive memory footprint, the Kubernetes project set that feature gate
 to be disabled by default. In Kubernetes {{< skew currentVersion >}}, this feature gate is
@@ -113,7 +113,7 @@ called for that node. Nodes may be evaluated concurrently.
 ### PostFilter {#post-filter}
-These plugins are called after Filter phase, but only when no feasible nodes
+These plugins are called after the Filter phase, but only when no feasible nodes
 were found for the pod. Plugins are called in their configured order. If
 any postFilter plugin marks the node as `Schedulable`, the remaining plugins
 will not be called. A typical PostFilter implementation is preemption, which

View File

@@ -84,7 +84,7 @@ An empty `effect` matches all effects with key `key1`.
 {{< /note >}}
-The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
+The above example used the `effect` of `NoSchedule`. Alternatively, you can use the `effect` of `PreferNoSchedule`.
 The allowed values for the `effect` field are:
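To make the alternative named above concrete, a sketch of a Pod that tolerates a taint with the softer `PreferNoSchedule` effect (the key/value pair mirrors the placeholders already used on this page; the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "PreferNoSchedule"   # soft variant of NoSchedule
```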
@@ -227,7 +227,7 @@ are true. The following taints are built in:
 * `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
 * `node.kubernetes.io/unschedulable`: Node is unschedulable.
 * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
-  with "external" cloud provider, this taint is set on a node to mark it
+  with an "external" cloud provider, this taint is set on a node to mark it
   as unusable. After a controller from the cloud-controller-manager initializes
   this node, the kubelet removes this taint.

View File

@@ -71,7 +71,7 @@ spec:
 ```
 You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or
-refer to [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
+refer to the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
 ### Spread constraint definition
@@ -99,7 +99,7 @@ your cluster. Those fields are:
 {{< note >}}
 The `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 enables `minDomains` for pod topology spread. Starting from v1.28,
 the `MinDomainsInPodTopologySpread` gate
 is enabled by default. In older Kubernetes clusters it might be explicitly
 disabled or the field might not be available.
 {{< /note >}}
@@ -254,7 +254,7 @@ follows the API definition of the field; however, the behavior is more likely to
 confusing and troubleshooting is less straightforward.
 You need a mechanism to ensure that all the nodes in a topology domain (such as a
-cloud provider region) are labelled consistently.
+cloud provider region) are labeled consistently.
 To avoid you needing to manually label nodes, most clusters automatically
 populate well-known labels such as `kubernetes.io/hostname`. Check whether
 your cluster supports this.
@@ -263,7 +263,7 @@ your cluster supports this.
 ### Example: one topology spread constraint {#example-one-topologyspreadconstraint}
-Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
+Suppose you have a 4-node cluster where 3 Pods labeled `foo: bar` are located in
 node1, node2 and node3 respectively:
 {{<mermaid>}}
@@ -290,7 +290,7 @@ can use a manifest similar to:
 {{% code_sample file="pods/topology-spread-constraints/one-constraint.yaml" %}}
 From that manifest, `topologyKey: zone` implies the even distribution will only be applied
-to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
+to nodes that are labeled `zone: <any value>` (nodes that don't have a `zone` label
 are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
 incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.
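The manifest itself is included from a sample file and is therefore not visible in this diff; as a reading aid, a sketch consistent with the surrounding description (one constraint keyed on `zone` with `whenUnsatisfiable: DoNotSchedule`; the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule   # keep the Pod pending rather than violate the constraint
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1
```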
@@ -494,7 +494,7 @@ There are some implicit conventions worth noting here:
 above example, if you remove the incoming Pod's labels, it can still be placed onto
 nodes in zone `B`, since the constraints are still satisfied. However, after that
 placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
-having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as
+having 2 Pods labeled as `foo: bar`, and zone `B` having 1 Pod labeled as
 `foo: bar`. If this is not what you expect, update the workload's
 `topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.
@@ -618,7 +618,7 @@ section of the enhancement proposal about Pod topology spread constraints.
 because, in this case, those topology domains won't be considered until there is
 at least one node in them.
-You can work around this by using an cluster autoscaling tool that is aware of
+You can work around this by using a cluster autoscaling tool that is aware of
 Pod topology spread constraints and is also aware of the overall set of topology
 domains.

View File

@@ -45,6 +45,14 @@ Each day in a week-long shift as PR Wrangler:
 - Using style fixups as good first issues is a good way to ensure a supply of easier tasks
   to help onboard new contributors.
+{{< note >}}
+PR wrangler duties do not apply to localization PRs (non-English PRs).
+Localization teams have their own processes and teams for reviewing their language PRs.
+However, it's often helpful to ensure language PRs are labeled correctly,
+review small non-language dependent PRs (like a link update),
+or tag reviewers or contributors in long-running PRs (ones opened more than 6 months ago and have not been updated in a month or more).
+{{< /note >}}
 ### Helpful GitHub queries for wranglers

View File

@@ -71,12 +71,19 @@ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
 kubectl config view
+# Show merged kubeconfig settings and raw certificate data and exposed secrets
+kubectl config view --raw
 # get the password for the e2e user
 kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
+# get the certificate for the e2e user
+kubectl config view --raw -o jsonpath='{.users[?(.name == 'e2e')].user.client-certificate-data}' | base64 -d
 kubectl config view -o jsonpath='{.users[].name}' # display the first user
 kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
 kubectl config get-contexts # display list of contexts
+kubectl config get-contexts -o name # get all context names
 kubectl config current-context # display the current-context
 kubectl config use-context my-cluster-name # set the default context to my-cluster-name

View File

@@ -60,7 +60,7 @@ When present, indicates that modifications should not be persisted. An invalid o
 ## fieldManager {#fieldManager}
-fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
+fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://pkg.go.dev/unicode#IsPrint.
 <hr>

View File

@@ -48,6 +48,6 @@ You can provide feedback via the GitHub issue [**Dockershim removal feedback & i
 * Mirantis blog: [The Future of Dockershim is cri-dockerd](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/) (published 2021/04/21)
-* Mirantis: [Mirantis/cri-dockerd](https://github.com/Mirantis/cri-dockerd) Git repository (on GitHub)
+* Mirantis: [Mirantis/cri-dockerd](https://mirantis.github.io/cri-dockerd/) Official Documentation
 * Tripwire: [How Dockershims Forthcoming Deprecation Affects Your Kubernetes](https://www.tripwire.com/state-of-security/security-data-protection/cloud/how-dockershim-forthcoming-deprecation-affects-your-kubernetes/) (published 2021/07/01)

View File

@@ -325,15 +325,14 @@ This config option supports live configuration reload to apply this change: `sys
 {{< note >}}
 These instructions assume that you are using the
-[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) adapter to integrate
+[`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) adapter to integrate
 Docker Engine with Kubernetes.
 {{< /note >}}
 1. On each of your nodes, install Docker for your Linux distribution as per
    [Install Docker Engine](https://docs.docker.com/engine/install/#server).
-2. Install [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd), following
-   the instructions in that source code repository.
+2. Install [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/usage/install), following the directions in the install section of the documentation.
 For `cri-dockerd`, the CRI socket is `/run/cri-dockerd.sock` by default.
@@ -343,7 +342,7 @@ For `cri-dockerd`, the CRI socket is `/run/cri-dockerd.sock` by default.
 available container runtime that was formerly known as Docker Enterprise Edition.
 You can use Mirantis Container Runtime with Kubernetes using the open source
-[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) component, included with MCR.
+[`cri-dockerd`](https://mirantis.github.io/cri-dockerd/) component, included with MCR.
 To learn more about how to install Mirantis Container Runtime,
 visit [MCR Deployment Guide](https://docs.mirantis.com/mcr/20.10/install.html).

View File

@@ -93,7 +93,7 @@ for more information.
 {{< note >}}
 Docker Engine does not implement the [CRI](/docs/concepts/architecture/cri/)
 which is a requirement for a container runtime to work with Kubernetes.
-For that reason, an additional service [cri-dockerd](https://github.com/Mirantis/cri-dockerd)
+For that reason, an additional service [cri-dockerd](https://mirantis.github.io/cri-dockerd/)
 has to be installed. cri-dockerd is a project based on the legacy built-in
 Docker Engine support that was [removed](/dockershim) from the kubelet in version 1.24.
 {{< /note >}}
@@ -186,7 +186,7 @@ These instructions are for Kubernetes {{< skew currentVersion >}}.
 # sudo mkdir -p -m 755 /etc/apt/keyrings
 curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
 ```
 {{< note >}}
 In releases older than Debian 12 and Ubuntu 22.04, folder `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command.
 {{< /note >}}

View File

@@ -47,7 +47,7 @@ to `cri-dockerd`.
 ## {{% heading "prerequisites" %}}
-* [`cri-dockerd`](https://github.com/mirantis/cri-dockerd#build-and-install)
+* [`cri-dockerd`](https://mirantis.github.io/cri-dockerd/usage/install)
   installed and started on each node.
 * A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).

View File

@@ -9,7 +9,7 @@ weight: 10
 <body>
 <link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
 <div class="layout" id="top">
@@ -65,13 +65,12 @@ weight: 10
 <br>
 <div class="row">
-<p class="col-md-8">
+<div class="col-md-8">
 <p>आप कुबेरनेट्स कमांड लाइन इंटरफेस, <b>kubectl</b> का उपयोग करके डिप्लॉयमेंट बना और प्रबंधित कर सकते हैं। kubectl क्लस्टर के साथ बातचीत करने के लिए कुबेरनेट्स एपीआई का उपयोग करता है। इस मॉड्यूल में, आप कुबेरनेट्स क्लस्टर पर आपके एप्लिकेशन चलाने वाले डिप्लॉयमेंट बनाने के लिए आवश्यक सबसे सामान्य kubectl कमांड सीखेंगे।</p>
 <p>जब आप कोई डिप्लॉयमेंट बनाते हैं, तो आपको अपने एप्लिकेशन के लिए कंटेनर इमेज और चलाने के लिए इच्छित प्रतिकृतियों की संख्या निर्दिष्ट करने की आवश्यकता होगी। आप अपने कामकाज को अपडेट करके बाद में उस जानकारी को बदल सकते हैं; बूटकैंप के मॉड्यूल <a href="/hi//docs/tutorials/kubernetes-basics/scale/scale-intro/">5</a> और <a href="/hi/docs/tutorials/kubernetes-basics/update/update-intro/" >6</a> चर्चा करते हैं कि आप अपने डिप्लॉयमेंट को कैसे स्केल और अपडेट कर सकते हैं।</p>
 </div>
 <div class="col-md-4">
 <div class="content__box content__box_fill">
View File

@@ -91,6 +91,10 @@
 <script defer src="{{ "js/release_binaries.js" | relURL }}"></script>
 {{- end -}}
+{{- if .HasShortcode "cncf-landscape" -}}
+<script src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.9/iframeResizer.min.js" integrity="sha384-hHTwgxzjpO1G1NI0wMHWQYUxnGtpWyDjVSZrFnDrlWa5OL+DFY57qnDWw/5WSJOl" crossorigin="anonymous"></script>
+{{- end -}}
 {{- if eq (lower .Params.cid) "community" -}}
 {{- if eq .Params.community_styles_migrated true -}}
 <link href="/css/community.css" rel="stylesheet"><!-- legacy styles -->

View File

@@ -57,7 +57,6 @@ document.addEventListener("DOMContentLoaded", function () {
 </script>
 {{- end -}}
 <div id="frameHolder">
-<script src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.9/iframeResizer.min.js"></script>
 {{ if ( .Get "category" ) }}
 <iframe id="iframe-landscape" src="https://landscape.cncf.io/embed/embed.html?key={{ .Get "category" }}&headers=false&style=shadowed&size=md&bg-color=%233371e3&fg-color=%23ffffff&iframe-resizer=true" style="width: 1px; min-width: 100%; min-height: 100px; border: 0;"></iframe>
 <script>