Merge remote-tracking branch 'upstream/main' into dev-1.24

Nate W 2022-01-31 11:57:45 -08:00
commit 2fa88c51b6
73 changed files with 2005 additions and 430 deletions

View File

@ -65,9 +65,9 @@ This will start the local Hugo server on port 1313. Open up your browser to <htt
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release (replace v1.20 in the following examples with the release to update to):
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull the `kubernetes-resources-reference` submodule:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
@ -75,9 +75,9 @@ To update the reference pages for a new Kubernetes release (replace v1.20 in the
2. Update the Swagger specification:
```
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, adapt the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.

View File

@ -317,6 +317,12 @@ footer {
padding-top: 1.5rem !important;
top: 5rem !important;
@supports (position: sticky) {
position: sticky !important;
height: calc(100vh - 10rem);
overflow-y: auto;
}
#TableOfContents {
padding-top: 1rem;
}

View File

@ -190,7 +190,8 @@ kubectl get configmap
No resources found in default namespace.
```
To sum things up, when there's an override owner reference from a child to a parent, deleting the parent deletes the children automatically. This is called `cascade`. The default for cascade is `true`, however, you can use the --cascade=orphan option for `kubectl delete` to delete an object and orphan its children.
To sum things up, when there's an override owner reference from a child to a parent, deleting the parent deletes the children automatically. This is called `cascade`. The default for cascade is `true`; however, you can use the `--cascade=orphan` option for `kubectl delete` to delete an object and orphan its children. *Update: starting with kubectl v1.20, the default for cascade is `background`.*
In the following example, there is a parent and a child. Notice the owner references are still included. If I delete the parent using --cascade=orphan, the parent is deleted but the child still exists:
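The example output itself isn't reproduced in this hunk. For illustration only (the object names and UID below are made up), the `metadata.ownerReferences` stanza that links a child object to its parent looks roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap-child              # hypothetical child object
  namespace: default
  ownerReferences:
  - apiVersion: v1
    kind: ConfigMap
    name: mymap-parent           # hypothetical parent object
    uid: 4c35b437-3a1c-4f04-8f67-1f2a3b4c5d6e   # UID of the parent, illustrative
```

Deleting `mymap-parent` without `--cascade=orphan` would then also remove `mymap-child`; with `--cascade=orphan`, the child is left behind.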

View File

@ -138,4 +138,5 @@ Stay tuned for what comes next, and if you have any questions, comments or sugge
* Chat with us on the Kubernetes [Slack](http://slack.k8s.io/):[#cluster-api](https://kubernetes.slack.com/archives/C8TSNPY4T)
* Join the SIG Cluster Lifecycle [Google Group](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle) to receive calendar invites and gain access to documents
* Join our [Zoom meeting](https://zoom.us/j/861487554), every Wednesday at 10:00 Pacific Time
* Check out the [ClusterClass tutorial](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-classes.html) in the Cluster API book.
* Check out the [ClusterClass quick-start](https://cluster-api.sigs.k8s.io/user/quick-start.html) for the Docker provider (CAPD) in the Cluster API book.
* _UPDATE_: Check out the [ClusterClass experimental feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html) documentation in the Cluster API book.

View File

@ -20,7 +20,7 @@ case_study_details:
<h2>Solution</h2>
<p>In 2016, the company began moving their code from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
<p>In 2016, the company began moving their code from Heroku to containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
<h2>Impact</h2>
@ -42,7 +42,7 @@ With the speed befitting a startup, Pear Deck delivered its first prototype to c
<p>On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.</p>
<p>The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to <a href="https://www.docker.com/">Docker</a> containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
<p>The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
{{< case-studies/quote image="/images/case-studies/peardeck/banner1.jpg" >}}
"When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.

View File

@ -70,7 +70,7 @@ If you haven't set foot in a school in awhile, you might be surprised by what yo
<p>Recently, the team launched a new single sign-on solution for use in an internal application. "Due to the resource based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment," says van den Bosch. "On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates."</p>
<p>Legacy applications are also being moved to Kubernetes. Not long ago, the team needed to set up a Java-based application for compiling and running a frontend. "On a traditional VM, it would have taken quite a bit of time to set it up and keep it up to date, not to mention maintenance for that setup down the line," says van den Bosch. Instead, it took less than half a day to Dockerize it and get it running on Kubernetes. "It was much easier, and we were able to save costs too because we didn't have to spin up new VMs specially for it."</p>
<p>Legacy applications are also being moved to Kubernetes. Not long ago, the team needed to set up a Java-based application for compiling and running a frontend. "On a traditional VM, it would have taken quite a bit of time to set it up and keep it up to date, not to mention maintenance for that setup down the line," says van den Bosch. Instead, it took less than half a day to containerize it and get it running on Kubernetes. "It was much easier, and we were able to save costs too because we didn't have to spin up new VMs specially for it."</p>
{{< case-studies/quote author="VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE" >}}
"We're really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places," says van den Bosch. And, says Haalstra, "We cannot do it without Kubernetes."

View File

@ -46,7 +46,7 @@ Since it was started in a dorm room in 2003, Squarespace has made it simple for
After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
{{< /case-studies/quote >}}
<p>Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects <a href="https://prometheus.io/">Prometheus</a> and <a href="https://www.fluentd.org/">fluentd</a> to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Docker file, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."</p>
<p>Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects <a href="https://prometheus.io/">Prometheus</a> and <a href="https://www.fluentd.org/">fluentd</a> to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Dockerfile, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."</p>
<p>And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. "From end to end that probably took half an hour, and that's not accounting for the fact that an infrastructure engineer would be responsible for doing that, so there's some business delay in there as well."</p>

View File

@ -58,9 +58,9 @@ How many people does it take to turn on a light bulb?
<p>In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service," says Klein. "We've always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker."</p>
<p>At the time just over two years ago Docker wasn't yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn't really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads."</p>
<p>In 2015, Docker wasn't yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn't really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads."</p>
<p>Once Wink's backend engineering team decided on a Dockerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."</p>
<p>Once Wink's backend engineering team decided on a containerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."</p>
{{< case-studies/quote image="/images/case-studies/wink/banner4.jpg" >}}
"Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."
@ -68,7 +68,7 @@ How many people does it take to turn on a light bulb?
<p>Wink considered building directly on a general purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed," he says. "We didn't have to futz around with trying to take something like a Linux distro and install everything. It's got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It's not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed."</p>
<p>Wink's hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the Dockerized CoreOS deployment. Since then, they've moved almost every other piece of their infrastructure from third-party cloud-to-cloud integrations to their customer service and payment portals onto CoreOS Container Linux clusters.</p>
<p>Wink's hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the containerized CoreOS deployment. Since then, they've moved almost every other piece of their infrastructure from third-party cloud-to-cloud integrations to their customer service and payment portals onto CoreOS Container Linux clusters.</p>
<p>Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn't take care of routing, sharing configurations, secrets, et cetera, among instances of a service," Klein says. "All of those layers of functionality can be implemented, of course, but if you don't want to spend a lot of time writing unit files manually which of course nobody does you need to create a tool to automate some of that, which we did."</p>

View File

@ -12,7 +12,7 @@ weight: 60
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
For example, you may want access your application's logs if a container crashes; a pod gets evicted; or a node dies.
For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
<!-- body -->

View File

@ -98,6 +98,24 @@ is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
As a result, all namespace names must be valid
[RFC 1123 DNS labels](/docs/concepts/overview/working-with-objects/names/#dns-label-names).
{{< warning >}}
By creating namespaces with the same name as [public top-level
domains](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), Services in these
namespaces can have short DNS names that overlap with public DNS records.
Workloads from any namespace performing a DNS lookup without a [trailing dot](https://datatracker.ietf.org/doc/html/rfc1034#page-8) will
be redirected to those services, taking precedence over public DNS.
To mitigate this, limit privileges for creating namespaces to trusted users. If
required, you could additionally configure third-party security controls, such
as [admission
webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/),
to block creating any namespace with the name of [public
TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt).
{{< /warning >}}
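As a sketch of that second mitigation, a cluster administrator could register a validating admission webhook scoped to namespace creation. The webhook backend itself (here hypothetically named `namespace-tld-guard`) would have to implement the check that rejects TLD names; the registration could look roughly like this:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: block-tld-namespaces            # hypothetical name
webhooks:
- name: namespaces.tld-guard.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["namespaces"]
    scope: Cluster
  clientConfig:
    service:
      namespace: kube-system             # wherever the hypothetical webhook backend runs
      name: namespace-tld-guard
      path: /validate-namespace
    # caBundle omitted in this sketch; a real configuration needs one
```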
## Not All Objects are in a Namespace
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are

View File

@ -11,7 +11,8 @@ weight: 30
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. For more information on the deprecation,
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. It has been replaced by
[Pod Security Admission](/docs/concepts/security/pod-security-admission/). For more information on the deprecation,
see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
Pod Security Policies enable fine-grained authorization of pod creation and

View File

@ -305,34 +305,22 @@ fail validation.
<tr>
<td style="white-space: nowrap">Volume Types</td>
<td>
<p>In addition to restricting HostPath volumes, the restricted policy limits usage of non-core volume types to those defined through PersistentVolumes.</p>
<p>The restricted policy only permits the following volume types.</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.volumes[*].hostPath</code></li>
<li><code>spec.volumes[*].gcePersistentDisk</code></li>
<li><code>spec.volumes[*].awsElasticBlockStore</code></li>
<li><code>spec.volumes[*].gitRepo</code></li>
<li><code>spec.volumes[*].nfs</code></li>
<li><code>spec.volumes[*].iscsi</code></li>
<li><code>spec.volumes[*].glusterfs</code></li>
<li><code>spec.volumes[*].rbd</code></li>
<li><code>spec.volumes[*].flexVolume</code></li>
<li><code>spec.volumes[*].cinder</code></li>
<li><code>spec.volumes[*].cephfs</code></li>
<li><code>spec.volumes[*].flocker</code></li>
<li><code>spec.volumes[*].fc</code></li>
<li><code>spec.volumes[*].azureFile</code></li>
<li><code>spec.volumes[*].vsphereVolume</code></li>
<li><code>spec.volumes[*].quobyte</code></li>
<li><code>spec.volumes[*].azureDisk</code></li>
<li><code>spec.volumes[*].portworxVolume</code></li>
<li><code>spec.volumes[*].scaleIO</code></li>
<li><code>spec.volumes[*].storageos</code></li>
<li><code>spec.volumes[*].photonPersistentDisk</code></li>
<li><code>spec.volumes[*]</code></li>
</ul>
<p><strong>Allowed Values</strong></p>
Every item in the <code>spec.volumes[*]</code> list must set one of the following fields to a non-null value:
<ul>
<li>Undefined/nil</li>
<li><code>spec.volumes[*].configMap</code></li>
<li><code>spec.volumes[*].csi</code></li>
<li><code>spec.volumes[*].downwardAPI</code></li>
<li><code>spec.volumes[*].emptyDir</code></li>
<li><code>spec.volumes[*].ephemeral</code></li>
<li><code>spec.volumes[*].persistentVolumeClaim</code></li>
<li><code>spec.volumes[*].projected</code></li>
<li><code>spec.volumes[*].secret</code></li>
</ul>
</td>
</tr>
@ -391,26 +379,6 @@ fail validation.
</ul>
</td>
</tr>
<tr>
<td style="white-space: nowrap">Non-root groups <em>(optional)</em></td>
<td>
<p>Containers should be forbidden from running with a root primary or supplementary GID.</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.securityContext.runAsGroup</code></li>
<li><code>spec.securityContext.supplementalGroups[*]</code></li>
<li><code>spec.securityContext.fsGroup</code></li>
<li><code>spec.containers[*].securityContext.runAsGroup</code></li>
<li><code>spec.initContainers[*].securityContext.runAsGroup</code></li>
<li><code>spec.ephemeralContainers[*].securityContext.runAsGroup</code></li>
</ul>
<p><strong>Allowed Values</strong></p>
<ul>
<li>Undefined/nil (except for <code>*.runAsGroup</code>)</li>
<li>Non-zero</li>
</ul>
</td>
</tr>
<tr>
<td style="white-space: nowrap">Seccomp (v1.19+)</td>
<td>

View File

@ -106,10 +106,9 @@ and the domain name for your cluster is `cluster.local`, then the Pod has a DNS
`172-17-0-3.default.pod.cluster.local`.
Any pods created by a Deployment or DaemonSet exposed by a Service have the
following DNS resolution available:
Any pods exposed by a Service have the following DNS resolution available:
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
### Pod's hostname and subdomain fields

View File

@ -88,6 +88,15 @@ has all the information needed to configure a load balancer or proxy server. Mos
contains a list of rules matched against all incoming requests. Ingress resource only supports rules
for directing HTTP(S) traffic.
If the `ingressClassName` is omitted, a [default Ingress class](#default-ingress-class)
should be defined.
There are some ingress controllers that work without the definition of a
default `IngressClass`. For example, the Ingress-NGINX controller can be
configured with the [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class`. However, it is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) to specify the
default `IngressClass` as shown [below](#default-ingress-class).
### Ingress rules
Each HTTP rule contains the following information:
@ -339,6 +348,14 @@ an `ingressClassName` specified. You can resolve this by ensuring that at most 1
IngressClass is marked as default in your cluster.
{{< /caution >}}
There are some ingress controllers that work without the definition of a
default `IngressClass`. For example, the Ingress-NGINX controller can be
configured with the [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
`--watch-ingress-without-class`. However, it is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) to specify the
default `IngressClass`:
{{< codenew file="service/networking/default-ingressclass.yaml" >}}
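The referenced `default-ingressclass.yaml` manifest isn't shown in this diff; a minimal sketch of an IngressClass marked as the cluster default (the controller value below is just an example) looks like:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-example
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # marks this class as the default
spec:
  controller: k8s.io/ingress-nginx     # example controller; use your controller's own value
```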
## Types of Ingress
### Ingress backed by a single Service {#single-service-ingress}
@ -468,9 +485,7 @@ web traffic to the IP address of your Ingress controller can be matched without
virtual host being required.
For example, the following Ingress routes traffic
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic
to the IP address without a hostname defined in request (that is, without a request header being
presented) to `service3`.
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic whose request host header doesn't match `first.bar.com` or `second.bar.com` to `service3`.
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}

View File

@ -422,7 +422,7 @@ Helper programs relating to the volume type may be required for consumption of a
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`.
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. Read the glossary term [Quantity](/docs/reference/glossary/?all=true#term-quantity) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
@ -535,19 +535,19 @@ Not all Persistent Volume types support mount options.
The following volume types support mount options:
* AWSElasticBlockStore
* AzureDisk
* AzureFile
* CephFS
* Cinder (OpenStack block storage)
* GCEPersistentDisk
* Glusterfs
* NFS
* Quobyte Volumes
* RBD (Ceph Block Device)
* StorageOS
* VsphereVolume
* iSCSI
* `awsElasticBlockStore`
* `azureDisk`
* `azureFile`
* `cephfs`
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk`
* `glusterfs`
* `iscsi`
* `nfs`
* `quobyte` (**deprecated** in v1.22)
* `rbd`
* `storageos` (**deprecated** in v1.22)
* `vsphereVolume`
Mount options are not validated. If a mount option is invalid, the mount fails.
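As an illustrative sketch (the server and export path are placeholders), a PersistentVolume that sets both `capacity` and `mountOptions` for an NFS volume might look like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example               # placeholder name
spec:
  capacity:
    storage: 10Gi                    # units follow the Quantity format
  accessModes:
  - ReadWriteMany
  mountOptions:                      # passed through to the mount; not validated by Kubernetes
  - hard
  - nfsvers=4.1
  nfs:
    server: nfs.example.internal     # placeholder server
    path: /exports/data              # placeholder export path
```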

View File

@ -26,7 +26,7 @@ Currently, the following types of volume sources can be projected:
* `serviceAccountToken`
All sources are required to be in the same namespace as the Pod. For more details,
see the [all-in-one volume design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/all-in-one-volume.md).
see the [all-in-one volume](https://github.com/kubernetes/design-proposals-archive/blob/main/node/all-in-one-volume.md) design document.
### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap}
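The full example manifest isn't included in this hunk; a sketch of a projected volume that combines the three sources (the Secret and ConfigMap names below are placeholders) could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo        # placeholder name
spec:
  containers:
  - name: app
    image: busybox:1.35
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret             # placeholder Secret
      - downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
      - configMap:
          name: myconfigmap          # placeholder ConfigMap
          items:
          - key: config
            path: my-group/my-config
```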
@ -71,9 +71,7 @@ volume mount will not receive updates for those volume sources.
## SecurityContext interactions
The [proposal for file permission handling in projected service account volume](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#token-volume-projection)
enhancement introduced the projected files having the the correct owner
permissions set.
The [proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in the projected service account volume enhancement introduced projected files having the correct owner permissions set.
### Linux

View File

@ -842,6 +842,13 @@ Kubernetes marks a Deployment as _progressing_ when one of the following tasks i
* The Deployment is scaling down its older ReplicaSet(s).
* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).
When the rollout becomes “progressing”, the Deployment controller adds a condition with the following
attributes to the Deployment's `.status.conditions`:
* `type: Progressing`
* `status: "True"`
* `reason: NewReplicaSetCreated` | `reason: FoundNewReplicaSet` | `reason: ReplicaSetUpdated`
You can monitor the progress for a Deployment by using `kubectl rollout status`.
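For illustration only (the ReplicaSet name, message, and timestamps are made up), such a condition as it appears in the Deployment's `.status.conditions` might look like:

```yaml
status:
  conditions:
  - type: Progressing
    status: "True"
    reason: NewReplicaSetCreated
    message: Created new replica set "nginx-deployment-66b6c48dd5"   # illustrative
    lastUpdateTime: "2022-01-31T00:00:00Z"                           # illustrative
    lastTransitionTime: "2022-01-31T00:00:00Z"                       # illustrative
```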
### Complete Deployment
@ -853,6 +860,17 @@ updates you've requested have been completed.
* All of the replicas associated with the Deployment are available.
* No old replicas for the Deployment are running.
When the rollout becomes “complete”, the Deployment controller sets a condition with the following
attributes to the Deployment's `.status.conditions`:
* `type: Progressing`
* `status: "True"`
* `reason: NewReplicaSetAvailable`
This `Progressing` condition will retain a status value of `"True"` until a new rollout
is initiated. The condition holds even when availability of replicas changes (which
does instead affect the `Available` condition).
You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
successfully, `kubectl rollout status` returns a zero exit code.
@ -890,7 +908,7 @@ number of seconds the Deployment controller waits before indicating (in the Depl
Deployment progress has stalled.
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
lack of progress for a Deployment after 10 minutes:
lack of progress of a rollout for a Deployment after 10 minutes:
```shell
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
@ -902,15 +920,18 @@ deployment.apps/nginx-deployment patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
attributes to the Deployment's `.status.conditions`:
* Type=Progressing
* Status=False
* Reason=ProgressDeadlineExceeded
* `type: Progressing`
* `status: "False"`
* `reason: ProgressDeadlineExceeded`
This condition can also fail early and is then set to a status value of `"False"` due to reasons such as `ReplicaSetCreateError`.
Also, the deadline is not taken into account anymore once the Deployment rollout completes.
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
{{< note >}}
Kubernetes takes no action on a stalled Deployment other than to report a status condition with
`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
`reason: ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
example, rollback the Deployment to its previous version.
{{< /note >}}
@ -984,7 +1005,7 @@ Conditions:
You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
conditions and the Deployment controller then completes the Deployment rollout, you'll see the
Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).
Deployment's status update with a successful condition (`status: "True"` and `reason: NewReplicaSetAvailable`).
```
Conditions:
@ -994,11 +1015,11 @@ Conditions:
Progressing True NewReplicaSetAvailable
```
`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
`type: Available` with `status: "True"` means that your Deployment has minimum availability. Minimum availability is dictated
by the parameters specified in the deployment strategy. `type: Progressing` with `status: "True"` means that your Deployment
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
required new replicas are available (see the Reason of the condition for the particulars - in our case
`Reason=NewReplicaSetAvailable` means that the Deployment is complete).
`reason: NewReplicaSetAvailable` means that the Deployment is complete).
You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
returns a non-zero exit code if the Deployment has exceeded the progression deadline.
@ -1155,8 +1176,8 @@ total number of Pods running at any time during the update is at most 130% of de
`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
to wait for your Deployment to progress before the system reports back that the Deployment has
[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`.
and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
[failed progressing](#failed-deployment) - surfaced as a condition with `type: Progressing`, `status: "False"`,
and `reason: ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
retrying the Deployment. This defaults to 600. In the future, once automatic rollback will be implemented, the Deployment
controller will roll back a Deployment as soon as it observes such a condition.

View File

@ -313,7 +313,7 @@ ensures that a desired number of Pods with a matching label selector are availab
When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to
prioritize scaling down pods based on the following general algorithm:
1. Pending (and unschedulable) pods are scaled down first
2. If controller.kubernetes.io/pod-deletion-cost annotation is set, then
2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently

View File

@ -562,7 +562,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
- `APIServerIdentity`: Assign each API server an ID in a cluster.
- `APIServerTracing`: Add support for distributed tracing in the API server.
- `Accelerators`: Enable Nvidia GPU support when using Docker
- `Accelerators`: Provided an early form of plugin to enable Nvidia GPU support when using
Docker Engine; no longer available. See
[Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) for
an alternative.
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`: Enable setting
[Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
@ -571,7 +574,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
kubelets on Pod log requests.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
{{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `AppArmor`: Enable AppArmor based mandatory access control on Linux nodes when using Docker.
- `AppArmor`: Enable use of AppArmor mandatory access control for Pods running on Linux nodes.
See [AppArmor Tutorial](/docs/tutorials/clusters/apparmor/) for more details.
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
that can be attached to a node.

View File

@ -0,0 +1,25 @@
---
title: Event
id: event
date: 2022-01-16
full_link: /docs/reference/kubernetes-api/cluster-resources/event-v1/
short_description: >
A report of an event somewhere in the cluster. It generally denotes some state change in the system.
aka:
tags:
- core-object
- fundamental
---
Each Event is a report of an event somewhere in the {{< glossary_tooltip text="cluster" term_id="cluster" >}}.
It generally denotes some state change in the system.
<!--more-->
Events have a limited retention time, and triggers and messages may evolve with time.
Event consumers should not rely on the timing of an event with a given reason reflecting a consistent underlying trigger,
or the continued existence of events with that reason.
Events should be treated as informative, best-effort, supplemental data.
In Kubernetes, [auditing](/docs/tasks/debug-application-cluster/audit/) generates a different kind of
Event record (API group `audit.k8s.io`).
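For illustration only, a core/v1 Event reporting a Pod scheduling decision might look roughly like this (the names and message below are made up):

```yaml
apiVersion: v1
kind: Event
metadata:
  name: web-1.16d5fb4d8c5a3f2e       # made-up name
  namespace: default
type: Normal
reason: Scheduled
message: Successfully assigned default/web-1 to node-1   # made-up message
involvedObject:
  apiVersion: v1
  kind: Pod
  name: web-1
  namespace: default
```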

View File

@ -159,6 +159,20 @@ The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure tha
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
## volume.beta.kubernetes.io/storage-provisioner (deprecated)
Example: `volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath`
Used on: PersistentVolumeClaim
This annotation has been deprecated.
## volume.kubernetes.io/storage-provisioner
Used on: PersistentVolumeClaim
This annotation is added to PersistentVolumeClaims that require dynamic provisioning.
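As an illustrative sketch (the claim name and provisioner value are examples only, reusing the provisioner shown for the deprecated annotation above), the annotation appears on a PVC like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim                # example name
  annotations:
    volume.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath   # example provisioner
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```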
## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}
Example: `node.kubernetes.io/windows-build=10.0.17763`

View File

@ -73,21 +73,23 @@ knows how to convert between them in both directions. Additionally, any new
field added in v2 must be able to round-trip to v1 and back, which means v1
might have to add an equivalent field or represent it as an annotation.
**Rule #3: An API version in a given track may not be deprecated until a new
API version at least as stable is released.**
**Rule #3: An API version in a given track may not be deprecated in favor of a less stable API version.**
GA API versions can replace GA API versions as well as beta and alpha API
versions. Beta API versions *may not* replace GA API versions.
* GA API versions can replace beta and alpha API versions.
* Beta API versions can replace earlier beta and alpha API versions, but *may not* replace GA API versions.
* Alpha API versions can replace earlier alpha API versions, but *may not* replace GA or beta API versions.
**Rule #4a: Other than the most recent API versions in each track, older API
versions must be supported after their announced deprecation for a duration of
no less than:**
**Rule #4a: minimum API lifetime is determined by the API stability level**
* **GA: 12 months or 3 releases (whichever is longer)**
* **Beta: 9 months or 3 releases (whichever is longer)**
* **Alpha: 0 releases**
* **GA API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes**
* **Beta API versions must be supported for 9 months or 3 releases (whichever is longer) after deprecation**
* **Alpha API versions may be removed in any release without prior deprecation notice**
This covers the [maximum supported version skew of 2 releases](/docs/setup/release/version-skew-policy/).
This ensures beta API support covers the [maximum supported version skew of 2 releases](/docs/setup/release/version-skew-policy/).
{{< note >}}
There are no current plans for a major version revision of Kubernetes that removes GA APIs.
{{< /note >}}
{{< note >}}
Until [#52185](https://github.com/kubernetes/kubernetes/issues/52185) is
@ -237,7 +239,7 @@ API versions are supported in a series of subsequent releases.
<td>
<ul>
<li>v2beta2 is deprecated, "action required" relnote</li>
<li>v1 is deprecated, "action required" relnote</li>
<li>v1 is deprecated in favor of v2, but will not be removed</li>
</ul>
</td>
</tr>
@ -267,22 +269,6 @@ API versions are supported in a series of subsequent releases.
</ul>
</td>
</tr>
<tr>
<td>X+16</td>
<td>v2, v1 (deprecated)</td>
<td>v2</td>
<td></td>
</tr>
<tr>
<td>X+17</td>
<td>v2</td>
<td>v2</td>
<td>
<ul>
<li>v1 is removed, "action required" relnote</li>
</ul>
</td>
</tr>
</tbody>
</table>

View File

@ -2,14 +2,18 @@
title: Configure Minimum and Maximum CPU Constraints for a Namespace
content_type: task
weight: 40
description: >-
Define a range of valid CPU resource limits for a namespace, so that every new Pod
in that namespace falls within the range you configure.
---
<!-- overview -->
This page shows how to set minimum and maximum values for the CPU resources used by Containers
and Pods in a namespace. You specify minimum and maximum CPU values in a
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)
This page shows how to set minimum and maximum values for the CPU resources used by containers
and Pods in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. You specify minimum
and maximum CPU values in a
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created
in the namespace.
@ -19,11 +23,13 @@ in the namespace.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Your cluster must have at least 1 CPU available for use to run the task examples.
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
Your cluster must have at least 1.0 CPU available for use to run the task examples.
See [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
to learn what Kubernetes means by “1 CPU”.
<!-- steps -->
@ -39,7 +45,7 @@ kubectl create namespace constraints-cpu-example
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange:
Here's an example manifest for a LimitRange:
{{< codenew file="admin/resource/cpu-constraints.yaml" >}}
@ -72,15 +78,15 @@ limits:
type: Container
```
Now whenever a Container is created in the constraints-cpu-example namespace, Kubernetes
performs these steps:
Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client
of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:
* If the Container does not specify its own CPU request and limit, assign the default
CPU request and limit to the Container.
* If any container in that Pod does not specify its own CPU request and limit, the control plane
assigns the default CPU request and limit to that container.
* Verify that the Container specifies a CPU request that is greater than or equal to 200 millicpu.
* Verify that every container in that Pod specifies a CPU request that is greater than or equal to 200 millicpu.
* Verify that the Container specifies a CPU limit that is less than or equal to 800 millicpu.
* Verify that every container in that Pod specifies a CPU limit that is less than or equal to 800 millicpu.
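The referenced `cpu-constraints.yaml` isn't reproduced in this diff; based on the constraints described above (minimum 200m, maximum 800m), a sketch of such a LimitRange would be:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr          # placeholder name
spec:
  limits:
  - max:
      cpu: 800m                      # maximum CPU limit per container
    min:
      cpu: 200m                      # minimum CPU request per container
    type: Container
```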
{{< note >}}
When creating a `LimitRange` object, you can specify limits on huge-pages
@ -88,7 +94,7 @@ or GPUs as well. However, when both `default` and `defaultRequest` are specified
on these resources, the two values must be the same.
{{< /note >}}
Here's the configuration file for a Pod that has one Container. The Container manifest
Here's a manifest for a Pod that has one container. The container manifest
specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the
minimum and maximum CPU constraints imposed by the LimitRange.
@ -100,7 +106,7 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example
```
Verify that the Pod's Container is running:
Verify that the Pod is running and that its container is healthy:
```shell
kubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example
@ -112,7 +118,7 @@ View detailed information about the Pod:
kubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example
```
The output shows that the Container has a CPU request of 500 millicpu and CPU limit
The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit
of 800 millicpu. These satisfy the constraints imposed by the LimitRange.
```yaml
@ -131,7 +137,7 @@ kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example
## Attempt to create a Pod that exceeds the maximum CPU constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a
Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 500 millicpu and a CPU limit of 1.5 CPU.
{{< codenew file="admin/resource/cpu-constraints-pod-2.yaml" >}}
@ -142,8 +148,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example
```
The output shows that the Pod does not get created, because the Container specifies a CPU limit that is
too large:
The output shows that the Pod does not get created, because it defines an unacceptable container.
That container is not acceptable because it specifies a CPU limit that is too large:
```
Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-2.yaml":
@ -152,7 +158,7 @@ pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 8
## Attempt to create a Pod that does not meet the minimum CPU request
Here's the configuration file for a Pod that has one Container. The Container specifies a
Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 100 millicpu and a CPU limit of 800 millicpu.
{{< codenew file="admin/resource/cpu-constraints-pod-3.yaml" >}}
@ -163,8 +169,9 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example
```
The output shows that the Pod does not get created, because the Container specifies a CPU
request that is too small:
The output shows that the Pod does not get created, because it defines an unacceptable container.
That container is not acceptable because it specifies a CPU request that is lower than the
enforced minimum:
```
Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-3.yaml":
@ -173,8 +180,8 @@ pods "constraints-cpu-demo-3" is forbidden: minimum cpu usage per Container is 2
## Create a Pod that does not specify any CPU request or limit
Here's the configuration file for a Pod that has one Container. The Container does not
specify a CPU request, and it does not specify a CPU limit.
Here's a manifest for a Pod that has one container. The container does not
specify a CPU request, nor does it specify a CPU limit.
{{< codenew file="admin/resource/cpu-constraints-pod-4.yaml" >}}
@ -190,8 +197,9 @@ View detailed information about the Pod:
kubectl get pod constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml
```
The output shows that the Pod's Container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu.
How did the Container get those values?
The output shows that the Pod's single container has a CPU request of 800 millicpu and a
CPU limit of 800 millicpu.
How did that container get those values?
```yaml
resources:
@ -201,11 +209,12 @@ resources:
cpu: 800m
```
Because your Container did not specify its own CPU request and limit, it was given the
Because that container did not specify its own CPU request and limit, the control plane
applied the
[default CPU request and limit](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
from the LimitRange.
from the LimitRange for this namespace.
At this point, your Container might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request.
At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request.
Delete your Pod:

View File

@ -2,23 +2,36 @@
title: Configure Default CPU Requests and Limits for a Namespace
content_type: task
weight: 20
description: >-
Define a default CPU resource limit for a namespace, so that every new Pod
in that namespace has a CPU resource limit configured.
---
<!-- overview -->
This page shows how to configure default CPU requests and limits for a namespace.
A Kubernetes cluster can be divided into namespaces. If a Container is created in a namespace
that has a default CPU limit, and the Container does not specify its own CPU limit, then
the Container is assigned the default CPU limit. Kubernetes assigns a default CPU request
under certain conditions that are explained later in this topic.
This page shows how to configure default CPU requests and limits for a
{{< glossary_tooltip text="namespace" term_id="namespace" >}}.
A Kubernetes cluster can be divided into namespaces. If you create a Pod within a
namespace that has a default CPU
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits), and any container in that Pod does not specify
its own CPU limit, then the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
CPU limit to that container.
Kubernetes assigns a default CPU
[request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
but only under certain conditions that are explained later in this page.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
If you're not already familiar with what Kubernetes means by 1.0 CPU,
read [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).
<!-- steps -->
@ -33,8 +46,8 @@ kubectl create namespace default-cpu-example
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies
a default CPU request and a default CPU limit.
Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
The manifest specifies a default CPU request and a default CPU limit.
{{< codenew file="admin/resource/cpu-defaults.yaml" >}}
@ -44,12 +57,12 @@ Create the LimitRange in the default-cpu-example namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example
```
Now if a Container is created in the default-cpu-example namespace, and the
Container does not specify its own values for CPU request and CPU limit,
the Container is given a default CPU request of 0.5 and a default
Now if you create a Pod in the default-cpu-example namespace, and any container
in that Pod does not specify its own values for CPU request and CPU limit,
then the control plane applies default values: a CPU request of 0.5 and a default
CPU limit of 1.
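The `cpu-defaults.yaml` manifest itself isn't shown in this diff; a LimitRange expressing those defaults (a 0.5 CPU request and a 1 CPU limit) would be roughly:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range              # placeholder name
spec:
  limits:
  - default:                         # default CPU limit applied to containers
      cpu: 1
    defaultRequest:                  # default CPU request applied to containers
      cpu: 0.5
    type: Container
```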
Here's the configuration file for a Pod that has one Container. The Container
Here's a manifest for a Pod that has one container. The container
does not specify a CPU request and limit.
{{< codenew file="admin/resource/cpu-defaults-pod.yaml" >}}
@ -66,8 +79,9 @@ View the Pod's specification:
kubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example
```
The output shows that the Pod's Container has a CPU request of 500 millicpus and
a CPU limit of 1 cpu. These are the default values specified by the LimitRange.
The output shows that the Pod's only container has a CPU request of 500m `cpu`
(which you can read as “500 millicpu”), and a CPU limit of 1 `cpu`.
These are the default values specified by the LimitRange.
```shell
containers:
@ -81,9 +95,9 @@ containers:
cpu: 500m
```
## What if you specify a Container's limit, but not its request?
## What if you specify a container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container
Here's a manifest for a Pod that has one container. The container
specifies a CPU limit, but not a request:
{{< codenew file="admin/resource/cpu-defaults-pod-2.yaml" >}}
@ -95,14 +109,15 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example
```
View the Pod specification:
View the [specification](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
of the Pod that you created:
```
kubectl get pod default-cpu-demo-2 --output=yaml --namespace=default-cpu-example
```
The output shows that the Container's CPU request is set to match its CPU limit.
Notice that the Container was not assigned the default CPU request value of 0.5 cpu.
The output shows that the container's CPU request is set to match its CPU limit.
Notice that the container was not assigned the default CPU request value of 0.5 `cpu`:
```
resources:
@ -112,9 +127,9 @@ resources:
cpu: "1"
```
## What if you specify a Container's request, but not its limit?
## What if you specify a container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container
Here's an example manifest for a Pod that has one container. The container
specifies a CPU request, but not a limit:
{{< codenew file="admin/resource/cpu-defaults-pod-3.yaml" >}}
@ -125,15 +140,16 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example
```
View the Pod specification:
View the specification of the Pod that you created:
```
kubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example
```
The output shows that the Container's CPU request is set to the value specified in the
Container's configuration file. The Container's CPU limit is set to 1 cpu, which is the
default CPU limit for the namespace.
The output shows that the container's CPU request is set to the value you specified at
the time you created the Pod (in other words: it matches the manifest).
However, the same container's CPU limit is set to 1 `cpu`, which is the default CPU limit
for that namespace.
```
resources:
@ -145,16 +161,22 @@ resources:
## Motivation for default CPU limits and requests
If your namespace has a
[resource quota](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/),
If your namespace has a CPU {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
configured,
it is helpful to have a default value in place for CPU limit.
Here are two of the restrictions that a resource quota imposes on a namespace:
Here are two of the restrictions that a CPU resource quota imposes on a namespace:
* Every Container that runs in the namespace must have its own CPU limit.
* The total amount of CPU used by all Containers in the namespace must not exceed a specified limit.
* For every Pod that runs in the namespace, each of its containers must have a CPU limit.
* CPU limits apply a resource reservation on the node where the Pod in question is scheduled.
The total amount of CPU that is reserved for use by all Pods in the namespace must not
exceed a specified limit.
When you add a LimitRange:
If any container in a Pod in that namespace does not specify its own CPU limit,
the control plane applies the default CPU limit to that container, and the Pod can then be
allowed to run in a namespace that is restricted by a CPU ResourceQuota.
If a Container does not specify its own CPU limit, it is given the default limit, and then
it can be allowed to run in a namespace that is restricted by a quota.
## Clean up

View File

@ -2,12 +2,15 @@
title: Configure Minimum and Maximum Memory Constraints for a Namespace
content_type: task
weight: 30
description: >-
Define a range of valid memory resource limits for a namespace, so that every new Pod
in that namespace falls within the range you configure.
---
<!-- overview -->
This page shows how to set minimum and maximum values for memory used by Containers
This page shows how to set minimum and maximum values for memory used by containers
running in a namespace. You specify minimum and maximum memory values in a
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)
object. If a Pod does not meet the constraints imposed by the LimitRange,
@ -15,16 +18,14 @@ it cannot be created in the namespace.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Each node in your cluster must have at least 1 GiB of memory.
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 1 GiB of memory available for Pods.
<!-- steps -->
@ -39,7 +40,7 @@ kubectl create namespace constraints-mem-example
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange:
Here's an example manifest for a LimitRange:
{{< codenew file="admin/resource/memory-constraints.yaml" >}}
@ -72,18 +73,19 @@ file for the LimitRange, they were created automatically.
type: Container
```
Now whenever a Container is created in the constraints-mem-example namespace, Kubernetes
Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes
performs these steps:
* If the Container does not specify its own memory request and limit, assign the default
memory request and limit to the Container.
* If any container in that Pod does not specify its own memory request and limit, assign
the default memory request and limit to that container.
* Verify that the Container has a memory request that is greater than or equal to 500 MiB.
* Verify that every container in that Pod requests at least 500 MiB of memory.
* Verify that the Container has a memory limit that is less than or equal to 1 GiB.
* Verify that every container in that Pod specifies a memory limit, and that this limit
  is no more than 1024 MiB (1 GiB).
Here's the configuration file for a Pod that has one Container. The Container manifest
specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the
Here's a manifest for a Pod that has one container. Within the Pod spec, the sole
container specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the
minimum and maximum memory constraints imposed by the LimitRange.
{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
@ -94,7 +96,7 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example
```
Verify that the Pod's Container is running:
Verify that the Pod is running and that its container is healthy:
```shell
kubectl get pod constraints-mem-demo --namespace=constraints-mem-example
@ -106,8 +108,9 @@ View detailed information about the Pod:
kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example
```
The output shows that the Container has a memory request of 600 MiB and a memory limit
of 800 MiB. These satisfy the constraints imposed by the LimitRange.
The output shows that the container within that Pod has a memory request of 600 MiB and
a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange for
this namespace:
```yaml
resources:
@ -125,7 +128,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
## Attempt to create a Pod that exceeds the maximum memory constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a
Here's a manifest for a Pod that has one container. The container specifies a
memory request of 800 MiB and a memory limit of 1.5 GiB.
{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
@ -136,8 +139,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example
```
The output shows that the Pod does not get created, because the Container specifies a memory limit that is
too large:
The output shows that the Pod does not get created, because it defines a container whose
memory limit is higher than the enforced maximum:
```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-2.yaml":
@ -146,7 +149,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i
## Attempt to create a Pod that does not meet the minimum memory request
Here's the configuration file for a Pod that has one Container. The Container specifies a
Here's a manifest for a Pod that has one container. That container specifies a
memory request of 100 MiB and a memory limit of 800 MiB.
{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
@ -157,8 +160,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example
```
The output shows that the Pod does not get created, because the Container specifies a memory
request that is too small:
The output shows that the Pod does not get created, because it defines a container
that requests less memory than the enforced minimum:
```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-3.yaml":
@ -167,9 +170,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i
## Create a Pod that does not specify any memory request or limit
Here's the configuration file for a Pod that has one Container. The Container does not
Here's a manifest for a Pod that has one container. The container does not
specify a memory request, and it does not specify a memory limit.
{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
@ -182,12 +183,12 @@ kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4
View detailed information about the Pod:
```
```shell
kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml
```
The output shows that the Pod's Container has a memory request of 1 GiB and a memory limit of 1 GiB.
How did the Container get those values?
The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB.
How did that container get those values?
```
resources:
@ -197,11 +198,20 @@ resources:
memory: 1Gi
```
Because your Container did not specify its own memory request and limit, it was given the
Because your Pod did not define any memory request and limit for that container, the cluster
applied a
[default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite
This means that the definition of that Pod shows those values. You can check it using
`kubectl describe`:
```shell
# Look for the "Requests:" section of the output
kubectl describe pod constraints-mem-demo-4 --namespace=constraints-mem-example
```
At this point, your Pod might be running or it might not be running. Recall that a prerequisite
for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only
1 GiB of memory, then there is not enough allocatable memory on any Node to accommodate a memory
request of 1 GiB. If you happen to be using Nodes with 2 GiB of memory, then you probably have
@ -209,7 +219,7 @@ enough space to accommodate the 1 GiB request.
Delete your Pod:
```
```shell
kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example
```
@ -224,12 +234,12 @@ Pods that were created previously.
As a cluster administrator, you might want to impose restrictions on the amount of memory that Pods can use.
For example:
* Each Node in a cluster has 2 GB of memory. You do not want to accept any Pod that requests
more than 2 GB of memory, because no Node in the cluster can support the request.
* Each Node in a cluster has 2 GiB of memory. You do not want to accept any Pod that requests
more than 2 GiB of memory, because no Node in the cluster can support the request.
* A cluster is shared by your production and development departments.
You want to allow production workloads to consume up to 8 GB of memory, but
you want development workloads to be limited to 512 MB. You create separate namespaces
You want to allow production workloads to consume up to 8 GiB of memory, but
you want development workloads to be limited to 512 MiB. You create separate namespaces
for production and development, and you apply memory constraints to each namespace, as sketched below.
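As a hedged sketch of that second scenario (namespace names and exact values are assumptions), you might apply one LimitRange per namespace:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: prod-mem-constraints       # hypothetical
  namespace: production            # hypothetical namespace
spec:
  limits:
  - type: Container
    max:
      memory: 8Gi                  # production containers may use up to 8 GiB
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-mem-constraints        # hypothetical
  namespace: development           # hypothetical namespace
spec:
  limits:
  - type: Container
    max:
      memory: 512Mi                # development containers are limited to 512 MiB
```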
## Clean up
@ -241,7 +251,6 @@ kubectl delete namespace constraints-mem-example
```
## {{% heading "whatsnext" %}}

View File

@ -2,21 +2,35 @@
title: Configure Default Memory Requests and Limits for a Namespace
content_type: task
weight: 10
description: >-
Define a default memory resource limit for a namespace, so that every new Pod
in that namespace has a memory resource limit configured.
---
<!-- overview -->
This page shows how to configure default memory requests and limits for a namespace.
If a Container is created in a namespace that has a default memory limit, and the Container
does not specify its own memory limit, then the Container is assigned the default memory limit.
This page shows how to configure default memory requests and limits for a
{{< glossary_tooltip text="namespace" term_id="namespace" >}}.
A Kubernetes cluster can be divided into namespaces. Once you have a namespace that
has a default memory
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
and you then try to create a Pod with a container that does not specify its own memory
limit, then the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
memory limit to that container.
Kubernetes assigns a default memory request under certain conditions that are explained later in this topic.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 2 GiB of memory.
@ -35,8 +49,9 @@ kubectl create namespace default-mem-example
## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies
a default memory request and a default memory limit.
Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
The manifest specifies a default memory
request and a default memory limit.
{{< codenew file="admin/resource/memory-defaults.yaml" >}}
@ -46,12 +61,13 @@ Create the LimitRange in the default-mem-example namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example
```
Now if a Container is created in the default-mem-example namespace, and the
Container does not specify its own values for memory request and memory limit,
the Container is given a default memory request of 256 MiB and a default
memory limit of 512 MiB.
Now if you create a Pod in the default-mem-example namespace, and any container
within that Pod does not specify its own values for memory request and memory limit,
then the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
applies default values: a memory request of 256 MiB and a memory limit of 512 MiB.
Here's the configuration file for a Pod that has one Container. The Container
Here's an example manifest for a Pod that has one container. The container
does not specify a memory request and limit.
{{< codenew file="admin/resource/memory-defaults-pod.yaml" >}}
@ -68,7 +84,7 @@ View detailed information about the Pod:
kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example
```
The output shows that the Pod's Container has a memory request of 256 MiB and
The output shows that the Pod's container has a memory request of 256 MiB and
a memory limit of 512 MiB. These are the default values specified by the LimitRange.
```shell
@ -89,9 +105,9 @@ Delete your Pod:
kubectl delete pod default-mem-demo --namespace=default-mem-example
```
## What if you specify a Container's limit, but not its request?
## What if you specify a container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container
Here's a manifest for a Pod that has one container. The container
specifies a memory limit, but not a request:
{{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}}
@ -109,8 +125,8 @@ View detailed information about the Pod:
kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example
```
The output shows that the Container's memory request is set to match its memory limit.
Notice that the Container was not assigned the default memory request value of 256Mi.
The output shows that the container's memory request is set to match its memory limit.
Notice that the container was not assigned the default memory request value of 256Mi.
```
resources:
@ -120,9 +136,9 @@ resources:
memory: 1Gi
```
## What if you specify a Container's request, but not its limit?
## What if you specify a container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container
Here's a manifest for a Pod that has one container. The container
specifies a memory request, but not a limit:
{{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}}
@ -139,9 +155,9 @@ View the Pod's specification:
kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example
```
The output shows that the Container's memory request is set to the value specified in the
Container's configuration file. The Container's memory limit is set to 512Mi, which is the
default memory limit for the namespace.
The output shows that the container's memory request is set to the value specified in the
container's manifest. The container is limited to use no more than 512MiB of
memory, which matches the default memory limit for the namespace.
```
resources:
@ -153,15 +169,23 @@ resources:
## Motivation for default memory limits and requests
If your namespace has a resource quota,
If your namespace has a memory {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
configured,
it is helpful to have a default value in place for memory limit.
Here are two of the restrictions that a resource quota imposes on a namespace:
* Every Container that runs in the namespace must have its own memory limit.
* The total amount of memory used by all Containers in the namespace must not exceed a specified limit.
* For every Pod that runs in the namespace, the Pod and each of its containers must have a memory limit.
(If you specify a memory limit for every container in a Pod, Kubernetes can infer the Pod-level memory
limit by adding up the limits for its containers).
* Memory limits apply a resource reservation on the node where the Pod in question is scheduled.
The total amount of memory reserved for all Pods in the namespace must not exceed a specified limit.
* The total amount of memory actually used by all Pods in the namespace must also not exceed a specified limit.
If a Container does not specify its own memory limit, it is given the default limit, and then
it can be allowed to run in a namespace that is restricted by a quota.
When you add a LimitRange:
If any container in a Pod in that namespace does not specify its own memory limit,
the control plane applies the default memory limit to that container, so that the Pod can be
allowed to run in a namespace that is restricted by a memory ResourceQuota.
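As a hedged illustration, a memory ResourceQuota of that kind might look like this (the object name and values are assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota-example          # hypothetical name
  namespace: default-mem-example
spec:
  hard:
    requests.memory: 1Gi           # total memory requested by all Pods in the namespace
    limits.memory: 2Gi             # total memory limits across all Pods in the namespace
```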
## Clean up

View File

@ -2,14 +2,17 @@
title: Configure Memory and CPU Quotas for a Namespace
content_type: task
weight: 50
description: >-
Define overall memory and CPU resource limits for a namespace.
---
<!-- overview -->
This page shows how to set quotas for the total amount of memory and CPU that
can be used by all Containers running in a namespace. You specify quotas in a
[ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core)
can be used by all Pods running in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
You specify quotas in a
[ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object.
@ -17,14 +20,13 @@ object.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 1 GiB of memory.
<!-- steps -->
## Create a namespace
@ -38,7 +40,7 @@ kubectl create namespace quota-mem-cpu-example
## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object:
Here is a manifest for an example ResourceQuota:
{{< codenew file="admin/resource/quota-mem-cpu.yaml" >}}
@ -56,15 +58,18 @@ kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --outpu
The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:
* Every Container must have a memory request, memory limit, cpu request, and cpu limit.
* The memory request total for all Containers must not exceed 1 GiB.
* The memory limit total for all Containers must not exceed 2 GiB.
* The CPU request total for all Containers must not exceed 1 cpu.
* The CPU limit total for all Containers must not exceed 2 cpu.
* For every Pod in the namespace, each container must have a memory request, memory limit, cpu request, and cpu limit.
* The memory request total for all Pods in that namespace must not exceed 1 GiB.
* The memory limit total for all Pods in that namespace must not exceed 2 GiB.
* The CPU request total for all Pods in that namespace must not exceed 1 cpu.
* The CPU limit total for all Pods in that namespace must not exceed 2 cpu.
See [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
to learn what Kubernetes means by “1 CPU”.
## Create a Pod
Here is the configuration file for a Pod:
Here is a manifest for an example Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod.yaml" >}}
@ -75,15 +80,15 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example
```
Verify that the Pod's Container is running:
Verify that the Pod is running and that its (only) container is healthy:
```
```shell
kubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example
```
Once again, view detailed information about the ResourceQuota:
```
```shell
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
```
@ -105,15 +110,22 @@ status:
requests.memory: 600Mi
```
If you have the `jq` tool, you can also query (using [JSONPath](/docs/reference/kubectl/jsonpath/))
for just the `used` values, **and** pretty-print that part of the output. For example:
```shell
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }' | jq .
```
## Attempt to create a second Pod
Here is the configuration file for a second Pod:
Here is a manifest for a second Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" >}}
In the configuration file, you can see that the Pod has a memory request of 700 MiB.
In the manifest, you can see that the Pod has a memory request of 700 MiB.
Notice that the sum of the used memory request and this new memory
request exceeds the memory request quota. 600 MiB + 700 MiB > 1 GiB.
request exceeds the memory request quota: 600 MiB + 700 MiB > 1 GiB.
Attempt to create the Pod:
@ -133,11 +145,12 @@ requested: requests.memory=700Mi,used: requests.memory=600Mi, limited: requests.
## Discussion
As you have seen in this exercise, you can use a ResourceQuota to restrict
the memory request total for all Containers running in a namespace.
the memory request total for all Pods running in a namespace.
You can also restrict the totals for memory limit, cpu request, and cpu limit.
If you want to restrict individual Containers, instead of totals for all Containers, use a
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
Instead of managing total resource use within a namespace, you might want to restrict
individual Pods, or the containers in those Pods. To achieve that kind of limiting, use a
[LimitRange](/docs/concepts/policy/limit-range/).
## Clean up

View File

@ -2,14 +2,16 @@
title: Configure a Pod Quota for a Namespace
content_type: task
weight: 60
description: >-
Restrict how many Pods you can create within a namespace.
---
<!-- overview -->
This page shows how to set a quota for the total number of Pods that can run
in a namespace. You specify quotas in a
[ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core)
in a {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. You specify quotas in a
[ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object.
@ -18,10 +20,9 @@ object.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
<!-- steps -->
@ -36,7 +37,7 @@ kubectl create namespace quota-pod-example
## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object:
Here is an example manifest for a ResourceQuota:
{{< codenew file="admin/resource/quota-pod.yaml" >}}
@ -66,11 +67,12 @@ status:
pods: "0"
```
Here is the configuration file for a Deployment:
Here is an example manifest for a {{< glossary_tooltip term_id="deployment" >}}:
{{< codenew file="admin/resource/quota-pod-deployment.yaml" >}}
In the configuration file, `replicas: 3` tells Kubernetes to attempt to create three Pods, all running the same application.
In that manifest, `replicas: 3` tells Kubernetes to attempt to create three new Pods, all
running the same application.
Create the Deployment:
@ -85,7 +87,7 @@ kubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yam
```
The output shows that even though the Deployment specifies three replicas, only two
Pods were created because of the quota.
Pods were created because of the quota you defined earlier:
```yaml
spec:
@ -95,11 +97,18 @@ spec:
status:
availableReplicas: 2
...
lastUpdateTime: 2017-07-07T20:57:05Z
lastUpdateTime: 2021-04-02T20:57:05Z
message: 'unable to create pods: pods "pod-quota-demo-1650323038-" is forbidden:
exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'
```
### Choice of resource
In this task you have defined a ResourceQuota that limits the total number of Pods, but
you could also limit the total number of other kinds of objects. For example, you
might decide to limit how many {{< glossary_tooltip text="CronJobs" term_id="cronjob" >}}
can exist in a single namespace.
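As a hedged sketch of that idea (the object name and the value are assumptions), an object count quota for CronJobs could look like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cronjob-count-demo         # hypothetical name
spec:
  hard:
    count/cronjobs.batch: "5"      # at most five CronJobs in this namespace
```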
## Clean up
Delete your namespace:

View File

@ -8,37 +8,35 @@ weight: 70
<!-- overview -->
With Kubernetes 1.20 dockershim was deprecated. From the
[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
you might already know that most apps do not have a direct dependency on runtime hosting
containers. However, there are still a lot of telemetry and security agents
that has a dependency on docker to collect containers metadata, logs and
metrics. This document aggregates information on how to detect these
dependencies and links on how to migrate these agents to use generic tools or
alternative runtimes.
Kubernetes' support for direct integration with Docker Engine is deprecated and will be removed. Most apps do not depend directly on the runtime that hosts their containers. However, there are still a lot of telemetry and monitoring agents that have a dependency on Docker Engine to collect container metadata, logs, and metrics. This document aggregates information on how to detect these dependencies and provides links on how to migrate these agents to use generic tools or alternative runtimes.
## Telemetry and security agents
There are a few ways agents may run on Kubernetes cluster. Agents may run on
nodes directly or as DaemonSets.
Within a Kubernetes cluster there are a few different ways to run telemetry or security agents.
Some agents have a direct dependency on Docker Engine when they run as DaemonSets or
directly on nodes.
### Why do telemetry agents rely on Docker?
### Why do some telemetry agents communicate with Docker Engine?
Historically, Kubernetes was built on top of Docker. Kubernetes is managing
networking and scheduling, Docker was placing and operating containers on a
node. So you can get scheduling-related metadata like a pod name from Kubernetes
and containers state information from Docker. Over time more runtimes were
created to manage containers. Also there are projects and Kubernetes features
that generalize container status information extraction across many runtimes.
Historically, Kubernetes was written to work specifically with Docker Engine.
Kubernetes took care of networking and scheduling, relying on Docker Engine for launching
and running containers (within Pods) on a node. Some information that is relevant to telemetry,
such as a pod name, is only available from Kubernetes components. Other data, such as container
metrics, is not the responsibility of the container runtime. Early telemetry agents needed to query the
container runtime **and** Kubernetes to report an accurate picture. Over time, Kubernetes gained
the ability to support multiple runtimes, and now supports any runtime that is compatible with
the container runtime interface.
Some agents are tied specifically to the Docker tool. The agents may run
commands like [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
Some telemetry agents rely specifically on Docker Engine tooling. For example, an agent
might run a command such as
[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list
containers and processes or [docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
to subscribe on docker logs. With the deprecating of Docker as a container runtime,
containers and processes or [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/)
to receive streamed logs. If nodes in your existing cluster use
Docker Engine, and you switch to a different container runtime,
these commands will not work any longer.
### Identify DaemonSets that depend on Docker {#identify-docker-dependency}
### Identify DaemonSets that depend on Docker Engine {#identify-docker-dependency}
If a pod wants to make calls to the `dockerd` running on the node, the pod must either:

View File

@ -67,7 +67,7 @@ transient slices for resources that are supported by that init system.
Depending on the configuration of the associated container runtime,
operators may have to choose a particular cgroup driver to ensure
proper system behavior. For example, if operators use the `systemd`
cgroup driver provided by the `docker` runtime, the `kubelet` must
cgroup driver provided by the `containerd` runtime, the `kubelet` must
be configured to use the `systemd` cgroup driver.
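For example, when you manage the kubelet through a configuration file, selecting the `systemd` cgroup driver might look like this sketch (a minimal excerpt; other fields omitted):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # must match the cgroup driver used by the container runtime
```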
### Kube Reserved

View File

@ -174,7 +174,7 @@ The output shows that the Container was killed because it is out of memory (OOM)
```shell
lastState:
terminated:
containerID: docker://65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f
containerID: 65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f
exitCode: 137
finishedAt: 2017-06-20T20:52:19Z
reason: OOMKilled

View File

@ -7,8 +7,10 @@ weight: 100
<!-- overview -->
This page shows how to create a Pod that uses a
{{< glossary_tooltip text="Secret" term_id="secret" >}} to pull an image from a
private container image registry or repository.
{{< glossary_tooltip text="Secret" term_id="secret" >}} to pull an image
from a private container image registry or repository. There are many private
registries in use. This task uses [Docker Hub](https://www.docker.com/products/docker-hub)
as an example registry.
{{% thirdparty-content single="true" %}}
@ -18,6 +20,8 @@ private container image registry or repository.
* To do this exercise, you need the `docker` command line tool, and a
[Docker ID](https://docs.docker.com/docker-id/) for which you know the password.
* If you are using a different private container registry, you need the command
line tool for that registry and any login information for the registry.
<!-- steps -->
@ -59,12 +63,13 @@ The output contains a section similar to this:
If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value.
{{< /note >}}
## Create a Secret based on existing Docker credentials {#registry-secret-existing-credentials}
## Create a Secret based on existing credentials {#registry-secret-existing-credentials}
A Kubernetes cluster uses the Secret of `kubernetes.io/dockerconfigjson` type to authenticate with
a container registry to pull a private image.
If you already ran `docker login`, you can copy that credential into Kubernetes:
If you already ran `docker login`, you can copy
that credential into Kubernetes:
```shell
kubectl create secret generic regcred \
@ -77,7 +82,7 @@ secret) then you can customise the Secret before storing it.
Be sure to:
- set the name of the data item to `.dockerconfigjson`
- base64 encode the docker file and paste that string, unbroken
- base64 encode the Docker configuration file and then paste that string, unbroken
as the value for field `data[".dockerconfigjson"]`
- set `type` to `kubernetes.io/dockerconfigjson`
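A hedged sketch of what such a customised Secret might look like (the base64 payload is a placeholder, not real data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  # placeholder: replace with the base64-encoded contents of your Docker configuration file
  .dockerconfigjson: <base64-encoded-docker-config-json>
```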
@ -213,4 +218,3 @@ kubectl get pod private-reg
* Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
* See [kubectl create secret docker-registry](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-).
* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod

View File

@ -42,7 +42,7 @@ The `spec` of a static Pod cannot refer to other API objects
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
This page assumes you're using {{< glossary_tooltip term_id="docker" >}} to run Pods,
This page assumes you're using {{< glossary_tooltip term_id="cri-o" >}} to run Pods,
and that your nodes are running the Fedora operating system.
Instructions for other distributions or Kubernetes installations may vary.
@ -156,15 +156,20 @@ already be running.
You can view running containers (including static Pods) by running (on the node):
```shell
# Run this command on the node where the kubelet is running
docker ps
crictl ps
```
The output might be something like:
```console
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
129fd7d382018 docker.io/library/nginx@sha256:... 11 minutes ago Running web 0 34533c6729106
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```
{{< note >}}
`crictl` outputs the image URI and SHA-256 checksum. `IMAGE` will look more like:
`docker.io/library/nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31`.
{{< /note >}}
You can see the mirror Pod on the API server:
@ -172,8 +177,8 @@ You can see the mirror Pod on the API server:
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
static-web-my-node1 1/1 Running 0 2m
NAME READY STATUS RESTARTS AGE
static-web 1/1 Running 0 2m
```
{{< note >}}
@ -181,7 +186,6 @@ Make sure the kubelet has permission to create the mirror Pod in the API server.
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/).
{{< /note >}}
{{< glossary_tooltip term_id="label" text="Labels" >}} from the static Pod are
propagated into the mirror Pod. You can use those labels as normal via
{{< glossary_tooltip term_id="selector" text="selectors" >}}, etc.
@ -190,34 +194,33 @@ If you try to use `kubectl` to delete the mirror Pod from the API server,
the kubelet _doesn't_ remove the static Pod:
```shell
kubectl delete pod static-web-my-node1
kubectl delete pod static-web
```
```
pod "static-web-my-node1" deleted
pod "static-web" deleted
```
You can see that the Pod is still running:
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
static-web-my-node1 1/1 Running 0 12s
NAME READY STATUS RESTARTS AGE
static-web 1/1 Running 0 4s
```
Back on your node where the kubelet is running, you can try to stop the Docker
container manually.
Back on your node where the kubelet is running, you can try to stop the container manually.
You'll see that, after a time, the kubelet will notice and will restart the Pod
automatically:
```shell
# Run these commands on the node where the kubelet is running
docker stop f6d05272b57e # replace with the ID of your container
crictl stop 129fd7d382018 # replace with the ID of your container
sleep 20
docker ps
crictl ps
```
```
CONTAINER ID IMAGE COMMAND CREATED ...
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ...
```console
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
```
## Dynamic addition and removal of static pods
@ -230,14 +233,13 @@ The running kubelet periodically scans the configured directory (`/etc/kubelet.d
#
mv /etc/kubelet.d/static-web.yaml /tmp
sleep 20
docker ps
crictl ps
# You see that no nginx container is running
mv /tmp/static-web.yaml /etc/kubelet.d/
sleep 20
docker ps
crictl ps
```
```console
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
```
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```
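For reference, the static Pod manifest that is being moved in and out of `/etc/kubelet.d/` in this example might look roughly like this sketch (the label is an assumption; the authoritative manifest appears earlier on this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole                   # label assumed
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
```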

View File

@ -39,7 +39,7 @@ You may want to set
(default to 1),
[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds)
(default to 0) and
[`.spec.maxSurge`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge)
[`.spec.updateStrategy.rollingUpdate.maxSurge`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
(a beta feature and defaults to 0) as well.
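Put together, those fields sit in the DaemonSet spec roughly as in this sketch (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset          # hypothetical name
spec:
  selector:
    matchLabels:
      name: example-daemonset
  minReadySeconds: 0               # defaults to 0
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # defaults to 1
      maxSurge: 0                  # beta field; defaults to 0
  template:
    metadata:
      labels:
        name: example-daemonset
    spec:
      containers:
      - name: example
        image: nginx               # placeholder image
```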
### Creating a DaemonSet with `RollingUpdate` update strategy

View File

@ -70,14 +70,9 @@ spec:
Patch your Deployment:
{{< tabs name="kubectl_patch_example" >}}
{{{< tab name="Bash" codelang="bash" >}}
kubectl patch deployment patch-demo --patch "$(cat patch-file.yaml)"
{{< /tab >}}
{{< tab name="PowerShell" codelang="posh" >}}
kubectl patch deployment patch-demo --patch $(Get-Content patch-file.yaml -Raw)
{{< /tab >}}}
{{< /tabs >}}
```shell
kubectl patch deployment patch-demo --patch-file patch-file.yaml
```
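Based on the equivalent inline patch listed near the end of this page, `patch-file.yaml` presumably contains a strategic merge patch along these lines:

```yaml
# adds a second container named patch-demo-ctr-2 to the Deployment's Pod template
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-2
        image: redis
```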
View the patched Deployment:
@ -183,7 +178,7 @@ spec:
Patch your Deployment:
```shell
kubectl patch deployment patch-demo --patch "$(cat patch-file-tolerations.yaml)"
kubectl patch deployment patch-demo --patch-file patch-file-tolerations.yaml
```
View the patched Deployment:
@ -249,7 +244,7 @@ spec:
In your patch command, set `type` to `merge`:
```shell
kubectl patch deployment patch-demo --type merge --patch "$(cat patch-file-2.yaml)"
kubectl patch deployment patch-demo --type merge --patch-file patch-file-2.yaml
```
View the patched Deployment:
@ -308,14 +303,9 @@ spec:
Patch your Deployment:
{{< tabs name="kubectl_retainkeys_example" >}}
{{{< tab name="Bash" codelang="bash" >}}
kubectl patch deployment retainkeys-demo --type merge --patch "$(cat patch-file-no-retainkeys.yaml)"
{{< /tab >}}
{{< tab name="PowerShell" codelang="posh" >}}
kubectl patch deployment retainkeys-demo --type merge --patch $(Get-Content patch-file-no-retainkeys.yaml -Raw)
{{< /tab >}}}
{{< /tabs >}}
```shell
kubectl patch deployment retainkeys-demo --type merge --patch-file patch-file-no-retainkeys.yaml
```
In the output, you can see that it is not possible to set `type` as `Recreate` when a value is defined for `spec.strategy.rollingUpdate`:
@ -339,14 +329,9 @@ With this patch, we indicate that we want to retain only the `type` key of the `
Patch your Deployment again with this new patch:
{{< tabs name="kubectl_retainkeys2_example" >}}
{{{< tab name="Bash" codelang="bash" >}}
kubectl patch deployment retainkeys-demo --type merge --patch "$(cat patch-file-retainkeys.yaml)"
{{< /tab >}}
{{< tab name="PowerShell" codelang="posh" >}}
kubectl patch deployment retainkeys-demo --type merge --patch $(Get-Content patch-file-retainkeys.yaml -Raw)
{{< /tab >}}}
{{< /tabs >}}
```shell
kubectl patch deployment retainkeys-demo --type merge --patch-file patch-file-retainkeys.yaml
```
Examine the content of the Deployment:
@ -425,10 +410,10 @@ The following commands are equivalent:
```shell
kubectl patch deployment patch-demo --patch "$(cat patch-file.yaml)"
kubectl patch deployment patch-demo --patch-file patch-file.yaml
kubectl patch deployment patch-demo --patch 'spec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis'
kubectl patch deployment patch-demo --patch "$(cat patch-file.json)"
kubectl patch deployment patch-demo --patch-file patch-file.json
kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo-ctr-2","image": "redis"}]}}}}'
```

View File

@ -173,7 +173,10 @@ automatically responds to changes in the number of replicas of the corresponding
## Create the PDB object
You can create or update the PDB object with a command like `kubectl apply -f mypdb.yaml`.
You can create or update the PDB object using kubectl.
```shell
kubectl apply -f mypdb.yaml
```
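A hedged sketch of what `mypdb.yaml` might contain (the name, selector, and `minAvailable` value are all assumptions):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                     # hypothetical name
spec:
  minAvailable: 2                  # assumed value
  selector:
    matchLabels:
      app: zookeeper               # assumed label
```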
## Check the status of the PDB

View File

@ -85,7 +85,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then add ~/.local/bin/kubectl to $PATH
# and then append (or prepend) ~/.local/bin to $PATH
```
{{< /note >}}

View File

@ -59,7 +59,7 @@ The following methods exist for installing kubectl on Windows:
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
```
1. Add the binary in to your `PATH`.
1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
1. Test to ensure the version of `kubectl` is the same as downloaded:
@ -172,7 +172,7 @@ Below are the procedures to set up autocompletion for PowerShell.
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
```
1. Add the binary in to your `PATH`.
1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
1. Verify plugin is successfully installed

View File

@ -174,6 +174,15 @@ of security defaults while preserving the functionality of the workload. It is
possible that the default profiles differ between container runtimes and their
release versions, for example when comparing those from CRI-O and containerd.
{{< note >}}
Enabling the feature will neither change the Kubernetes
`securityContext.seccompProfile` API field nor add the deprecated annotations of
the workload. This provides users the possibility to rollback anytime without
actually changing the workload configuration. Tools like
[`crictl inspect`](https://github.com/kubernetes-sigs/cri-tools) can be used to
verify which seccomp profile is being used by a container.
{{< /note >}}
Some workloads may require fewer syscall restrictions than others.
This means that they can fail during runtime even with the `RuntimeDefault`
profile. To mitigate such a failure, you can:
@ -203,6 +212,51 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
SeccompDefault: true
nodes:
- role: control-plane
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
seccomp-default: "true"
- role: worker
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
feature-gates: SeccompDefault=true
seccomp-default: "true"
```
If the cluster is ready, then running a pod:
```shell
kubectl run --rm -it --restart=Never --image=alpine alpine -- sh
```
Should now have the default seccomp profile attached. This can be verified by
using `docker exec` to run `crictl inspect` for the container on the kind
worker:
```shell
docker exec -it kind-worker bash -c \
'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'
```
```json
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
"syscalls": [
{
"names": ["..."]
}
]
}
```
## Create a Pod with a seccomp profile for syscall auditing

View File

@ -0,0 +1,10 @@
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
name: nginx-example
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: k8s.io/ingress-nginx

View File

@ -5,6 +5,7 @@ metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx-example
rules:
- http:
paths:

View File

@ -1,4 +1,4 @@
You need to either have a dynamic PersistentVolume provisioner with a default
You need to either have a [dynamic PersistentVolume provisioner](/docs/concepts/storage/dynamic-provisioning/) with a default
[StorageClass](/docs/concepts/storage/storage-classes/),
or [statically provision PersistentVolumes](/docs/user-guide/persistent-volumes/#provisioning)
yourself to satisfy the [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)

View File

@ -92,6 +92,7 @@ End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) |
| 1.23.2 | 2022-01-14 | 2022-01-19 | |
| 1.23.1 | 2021-12-14 | 2021-12-16 | |

View File

@ -169,9 +169,9 @@ of each minor (1.Y) and patch (1.Y.Z) release
GitHub team: [@kubernetes/build-admins](https://github.com/orgs/kubernetes/teams/build-admins)
- Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp))
- Amit Watve ([@amwat](https://github.com/amwat))
- Benjamin Elder ([@BenTheElder](https://github.com/BenTheElder))
- Grant McCloskey ([@MushuEE](https://github.com/MushuEE))
- Juan Escobar ([@juanfescobar](https://github.com/juanfescobar))
## SIG Release Leads

View File

@ -0,0 +1,102 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Aprovisionamiento Dinámico de volumen
content_type: concept
weight: 40
---
<!-- overview -->
El aprovisionamiento dinámico de volúmenes permite crear volúmenes de almacenamiento bajo demanda. Sin el aprovisionamiento dinámico, los administradores de clústeres tienen que realizar llamadas manualmente a su proveedor de almacenamiento o nube para crear nuevos volúmenes de almacenamiento y luego crear [objetos de `PersistentVolume`](/docs/concepts/storage/persistent-volumes/)
para representarlos en Kubernetes. La función de aprovisionamiento dinámico elimina la necesidad de que los administradores del clúster aprovisionen previamente el almacenamiento. En cambio, el aprovisionamiento ocurre automáticamente cuando los usuarios lo solicitan.
<!-- body -->
## Antecedentes
La implementación del aprovisionamiento dinámico de volúmenes se basa en el objeto API `StorageClass`
del grupo API `storage.k8s.io`. Un administrador de clúster puede definir tantos objetos
`StorageClass` como sea necesario, cada uno especificando un _volume plugin_ (aka
_provisioner_) que aprovisiona un volumen y el conjunto de parámetros para pasar a ese aprovisionador. Un administrador de clúster puede definir y exponer varios tipos de almacenamiento (del mismo o de diferentes sistemas de almacenamiento) dentro de un clúster, cada uno con un conjunto personalizado de parámetros. Este diseño también garantiza que los usuarios finales no tengan que preocuparse por la complejidad y los matices de cómo se aprovisiona el almacenamiento, pero que aún tengan la capacidad de seleccionar entre múltiples opciones de almacenamiento.
Puede encontrar más información sobre las clases de almacenamiento
[aqui](/docs/concepts/storage/storage-classes/).
## Habilitación del aprovisionamiento dinámico
Para habilitar el aprovisionamiento dinámico, un administrador de clúster debe crear previamente uno o más objetos StorageClass para los usuarios. Los objetos StorageClass definen qué aprovisionador se debe usar y qué parámetros se deben pasar a ese aprovisionador cuando se invoca el aprovisionamiento dinámico.
El nombre de un objeto StorageClass debe ser un
[nombre de subdominio de DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
El siguiente manifiesto crea una clase de almacenamiento llamada "slow" que aprovisiona discos persistentes estándar similares a discos.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
```
El siguiente manifiesto crea una clase de almacenamiento llamada "fast" que aprovisiona discos persistentes similares a SSD.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
```
## Usar Aprovisionamiento Dinámico
Los usuarios solicitan almacenamiento aprovisionado dinámicamente al incluir una clase de almacenamiento en su `PersistentVolumeClaim`. Antes de Kubernetes v1.6, esto se hacía a través de la anotación
`volume.beta.kubernetes.io/storage-class`. Sin embargo, esta anotación está obsoleta desde v1.9. Los usuarios ahora pueden y deben usar el campo
`storageClassName` del objeto `PersistentVolumeClaim`. El valor de este campo debe coincidir con el nombre de un `StorageClass` configurada por el administrador
(ver [documentación](#habilitación-del-aprovisionamiento-dinámico)).
Para seleccionar la clase de almacenamiento llamada "fast", por ejemplo, un usuario crearía el siguiente PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim1
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast
resources:
requests:
storage: 30Gi
```
Esta afirmación da como resultado que se aprovisione automáticamente un disco persistente similar a SSD. Cuando se elimina la petición, se destruye el volumen.
## Comportamiento Predeterminado
El aprovisionamiento dinámico se puede habilitar en un clúster de modo que todas las peticiones se aprovisionen dinámicamente si no se especifica una clase de almacenamiento. Un administrador de clúster puede habilitar este comportamiento al:
- Marcar un objeto `StorageClass` como _default_;
- Asegúrese de que el [controlador de admisión `DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) esté habilitado en el servidor de API.
Un administrador puede marcar un `StorageClass` específico como predeterminada agregando la anotación
`storageclass.kubernetes.io/is-default-class`.
Cuando existe un `StorageClass` predeterminado en un clúster y un usuario crea un
`PersistentVolumeClaim` con `storageClassName` sin especificar, el controlador de admisión
`DefaultStorageClass` agrega automáticamente el campo
`storageClassName` que apunta a la clase de almacenamiento predeterminada.
Tenga en cuenta que puede haber como máximo una clase de almacenamiento _default_ en un clúster; de lo contrario, no se podrá crear un `PersistentVolumeClaim` que no especifique explícitamente `storageClassName`.
## Conocimiento de la Topología
En los clústeres [Multi-Zone](/docs/setup/multiple-zones), los Pods se pueden distribuir en zonas de una región. Los backends de almacenamiento de zona única deben aprovisionarse en las zonas donde se programan los Pods. Esto se puede lograr configurando el [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).

View File

@ -0,0 +1,74 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Capacidad de Almacenamiento
content_type: concept
weight: 45
---
<!-- overview -->
La capacidad de almacenamiento es limitada y puede variar según el nodo en el que un Pod se ejecuta: es posible que no todos los nodos puedan acceder al almacenamiento conectado a la red o que, para empezar, el almacenamiento sea local en un nodo.
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
Esta página describe cómo Kubernetes realiza un seguimiento de la capacidad de almacenamiento y cómo el planificador usa esa información para programar Pods en nodos que tienen acceso a suficiente capacidad de almacenamiento para los volúmenes restantes que faltan. Sin el seguimiento de la capacidad de almacenamiento, el planificador puede elegir un nodo que no tenga suficiente capacidad para aprovisionar un volumen y se necesitarán varios reintentos de planificación.
El seguimiento de la capacidad de almacenamiento es compatible con los controladores de la {{< glossary_tooltip
text="Interfaz de Almacenamiento de Contenedores" term_id="csi" >}} (CSI) y
[necesita estar habilitado](#habilitación-del-seguimiento-de-la-capacidad-de-almacenamiento) al instalar un controlador CSI.
<!-- body -->
## API
Hay dos extensiones de API para esta función:
- Los objetos CSIStorageCapacity:
son producidos por un controlador CSI en el Namespace donde está instalado el controlador. Cada objeto contiene información de capacidad para una clase de almacenamiento y define qué nodos tienen acceso a ese almacenamiento.
- [El campo `CSIDriverSpec.StorageCapacity`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io):
cuando se establece en `true`, el [Planificador de Kubernetes](/docs/concepts/scheduling-eviction/kube-scheduler/) considerará la capacidad de almacenamiento para los volúmenes que usan el controlador CSI.
## Planificación
El planificador de Kubernetes utiliza la información sobre la capacidad de almacenamiento si:
- la Feature gate de `CSIStorageCapacity` es `true`,
- un Pod usa un volumen que aún no se ha creado,
- ese volumen usa un {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} que hace referencia a un controlador CSI y usa el [modo de enlace de volumen](/docs/concepts/storage/storage-classes/#volume-binding-mode) `WaitForFirstConsumer`,
y
- el objeto `CSIDriver` para el controlador tiene `StorageCapacity` establecido en `true`.
En ese caso, el planificador sólo considera los nodos para el Pod que tienen suficiente almacenamiento disponible. Esta verificación es muy simplista y solo compara el tamaño del volumen con la capacidad indicada en los objetos `CSIStorageCapacity` con una topología que incluye el nodo.
Para los volúmenes con el modo de enlace de volumen `Immediate`, el controlador de almacenamiento decide dónde crear el volumen, independientemente de los pods que usarán el volumen.
Luego, el planificador programa los pods en los nodos donde el volumen está disponible después de que se haya creado.
Para los [volúmenes efímeros de CSI](/docs/concepts/storage/volumes/#csi),
la planificación siempre ocurre sin considerar la capacidad de almacenamiento. Esto se basa en la suposición de que este tipo de volumen solo lo utilizan controladores CSI especiales que son locales a un nodo y no necesitan allí recursos importantes.
## Replanificación
Cuando se selecciona un nodo para un Pod con volúmenes `WaitForFirstConsumer`, esa decisión sigue siendo tentativa. El siguiente paso es que se le pide al controlador de almacenamiento CSI que cree el volumen con una pista de que el volumen está disponible en el nodo seleccionado.
Debido a que Kubernetes pudo haber elegido un nodo basándose en información de capacidad desactualizada, es posible que el volumen no se pueda crear realmente. Luego, la selección de nodo se restablece y el planificador de Kubernetes intenta nuevamente encontrar un nodo para el Pod.
## Limitaciones
El seguimiento de la capacidad de almacenamiento aumenta las posibilidades de que la planificación funcione en el primer intento, pero no puede garantizarlo porque el planificador tiene que decidir basándose en información potencialmente desactualizada. Por lo general, el mismo mecanismo de reintento que para la planificación sin información de capacidad de almacenamiento es manejado por los errores de planificación.
Una situación en la que la planificación puede fallar de forma permanente es cuando un pod usa varios volúmenes: es posible que un volumen ya se haya creado en un segmento de topología que luego no tenga suficiente capacidad para otro volumen. La intervención manual es necesaria para recuperarse de esto, por ejemplo, aumentando la capacidad o eliminando el volumen que ya se creó. Hay [trabajo adicional](https://github.com/kubernetes/enhancements/pull/1703) en curso para manejar esto automáticamente.
## Habilitación del seguimiento de la capacidad de almacenamiento
El seguimiento de la capacidad de almacenamiento es una función beta y está habilitada de forma predeterminada en un clúster de Kubernetes desde Kubernetes 1.21. Además de tener la función habilitada en el clúster, un controlador CSI también tiene que admitirlo. Consulte la documentación del controlador para obtener más detalles.
## {{% heading "whatsnext" %}}
- Para obtener más información sobre el diseño, consulte las
[Restricciones de Capacidad de Almacenamiento para la Planificación de Pods KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md).
- Para obtener más información sobre un mayor desarrollo de esta función, consulte [problema de seguimiento de mejoras #1472](https://github.com/kubernetes/enhancements/issues/1472).
- Aprender sobre [Planificador de Kubernetes](/docs/concepts/scheduling-eviction/kube-scheduler/)

View File

@ -0,0 +1,66 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Clonación de volumen CSI
content_type: concept
weight: 30
---
<!-- overview -->
Este documento describe el concepto para clonar volúmenes CSI existentes en Kubernetes. Se sugiere estar familiarizado con [Volúmenes](/docs/concepts/storage/volumes).
<!-- body -->
## Introducción
La función de clonación de volumen {{< glossary_tooltip text="CSI" term_id="csi" >}} agrega soporte para especificar {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s existentes en el campo `dataSource` para indicar que un usuario desea clonar un {{< glossary_tooltip term_id="volume" >}}.
Un Clon se define como un duplicado de un volumen de Kubernetes existente que se puede consumir como lo sería cualquier volumen estándar. La única diferencia es que al aprovisionar, en lugar de crear un "nuevo" Volumen vacío, el dispositivo de backend crea un duplicado exacto del Volumen especificado.
La implementación de la clonación, desde la perspectiva de la API de Kubernetes, agrega la capacidad de especificar un PVC existente como dataSource durante la creación de un nuevo PVC. El PVC de origen debe estar vinculado y disponible (no en uso).
Los usuarios deben tener en cuenta lo siguiente cuando utilicen esta función:
- El soporte de clonación (`VolumePVCDataSource`) sólo está disponible para controladores CSI.
- El soporte de clonación sólo está disponible para aprovisionadores dinámicos.
- Los controladores CSI pueden haber implementado o no la funcionalidad de clonación de volúmenes.
- Sólo puede clonar un PVC cuando existe en el mismo Namespace que el PVC de destino (el origen y el destino deben estar en el mismo Namespace).
- La clonación sólo se admite dentro de la misma Clase de Almacenamiento.
- El volumen de destino debe ser de la misma clase de almacenamiento que el origen
- Se puede utilizar la clase de almacenamiento predeterminada y se puede omitir storageClassName en la especificación
- La clonación sólo se puede realizar entre dos volúmenes que usan la misma configuración de VolumeMode (si solicita un volumen en modo de bloqueo, la fuente DEBE también ser en modo de bloqueo)
## Aprovisionamiento
Los clones se aprovisionan como cualquier otro PVC con la excepción de agregar un origen de datos que hace referencia a un PVC existente en el mismo Namespace.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: clone-of-pvc-1
namespace: myns
spec:
accessModes:
- ReadWriteOnce
storageClassName: cloning
resources:
requests:
storage: 5Gi
dataSource:
kind: PersistentVolumeClaim
name: pvc-1
```
{{< note >}}
Debe especificar un valor de capacidad para `spec.resources.requests.storage` y el valor que especifique debe ser igual o mayor que la capacidad del volumen de origen.
{{< /note >}}
El resultado es un nuevo PVC con el nombre `clone-of-pvc-1` que tiene exactamente el mismo contenido que la fuente especificada `pvc-1`.
## Uso
Una vez disponible el nuevo PVC, el PVC clonado se consume igual que el resto de PVC. También se espera en este punto que el PVC recién creado sea un objeto independiente. Se puede consumir, clonar, tomar snapshots, o eliminar de forma independiente y sin tener en cuenta sus datos originales. Esto también implica que la fuente no está vinculada de ninguna manera al clon recién creado, también puede modificarse o eliminarse sin afectar al clon recién creado.

View File

@ -97,7 +97,7 @@ Comme pour toutes les autres ressources Kubernetes, un Ingress (une entrée) a b
 est l'annotation [rewrite-target](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
 Différents [Ingress controller](/docs/concepts/services-networking/ingress-controllers) prennent en charge différentes annotations. Consultez la documentation du contrôleur Ingress de votre choix pour savoir quelles annotations sont prises en charge.
La [spécification de la ressource Ingress](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) dispose de toutes les informations nécessaires pour configurer un loadbalancer ou un serveur proxy. Plus important encore, il
La [spécification de la ressource Ingress](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) dispose de toutes les informations nécessaires pour configurer un loadbalancer ou un serveur proxy. Plus important encore, il
contient une liste de règles d'appariement de toutes les demandes entrantes. La ressource Ingress ne supporte que les règles pour diriger le trafic HTTP.

View File

@ -59,7 +59,7 @@ Lors de la récupération d'un seul pod par son nom, par exemple `kubectl get po
Le formatage peut être contrôlé davantage en utilisant l'opération `range` pour parcourir les éléments individuellement.
```shell
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
sort
```
@ -69,7 +69,7 @@ Pour cibler uniquement les pods correspondant à un label spécifique, utilisez
Les éléments suivants correspondent uniquement aux pods avec les labels `app=nginx`.
```shell
kubectl get pods --all-namespaces -o=jsonpath="{.items[*].spec.containers[*].image}" -l app=nginx
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" -l app=nginx
```
## Filtrage des images de conteneur de liste par namespace de pod

View File

@ -76,7 +76,7 @@ Format dapat dikontrol lebih lanjut dengan menggunakan operasi `range` untuk
melakukan iterasi untuk setiap elemen secara individual.
```sh
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
sort
```
@ -86,7 +86,7 @@ Untuk menargetkan hanya Pod yang cocok dengan label tertentu saja, gunakan tanda
dibawah ini akan menghasilkan Pod dengan label yang cocok dengan `app=nginx`.
```sh
kubectl get pods --all-namespaces -o=jsonpath="{.items[*].spec.containers[*].image}" -l app=nginx
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" -l app=nginx
```
## Membuat daftar _image_ Container yang difilter berdasarkan Namespace Pod

View File

@ -41,12 +41,12 @@ Kubernetesはオープンソースなので、オンプレミスやパブリッ
<button id="desktopShowVideoButton" onclick="kub.showVideo()">ビデオを見る</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna20" button id="desktopKCButton">2020年11月17日-20日のKubeCon NAバーチャルに参加する</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">2022年5月16日〜20日のKubeCon EUバーチャルに参加する</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">2021年5月4日-7日のKubeCon EUバーチャルに参加する</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">2022年10月24日-28日のKubeCon NAバーチャルに参加する</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -0,0 +1,46 @@
---
layout: blog
title: "Don't Panic: Kubernetes and Docker"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
---
**著者:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
Kubernetesはv1.20より新しいバージョンで、コンテナランタイムとして[Dockerをサポートしません](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)。
**パニックを起こす必要はありません。これはそれほど抜本的なものではないのです。**
概要: ランタイムとしてのDockerは、Kubernetesのために開発された[Container Runtime Interface(CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)を利用するランタイムを優先する形で、サポートされなくなります。しかし、Dockerによって生成されたイメージはこれからも、今までもそうだったように、みなさんのクラスターで使用可能です。
もし、あなたがKubernetesのエンドユーザーであるならば、多くの変化はないでしょう。これはDockerの死を意味するものではありませんし、開発ツールとして今後Dockerを使用するべきでない、使用することは出来ないと言っているのでもありません。Dockerはコンテナを作成するのに便利なツールですし、docker buildコマンドで作成されたイメージはKubernetesクラスタ上でこれからも動作可能なのです。
もし、GKE、EKS、AKSといったマネージドKubernetesサービス(それらはデフォルトで[containerdを使用しています](https://github.com/Azure/AKS/releases/tag/2020-11-16))を使っているのなら、ワーカーノードがサポート対象のランタイムを使用しているか、Dockerのサポートが将来のK8sバージョンで切れる前に確認しておく必要があるでしょう。
もし、ノードをカスタマイズしているのなら、環境やRuntimeの仕様に合わせて更新する必要があるでしょう。サービスプロバイダーと確認し、アップグレードのための適切なテストと計画を立ててください。
もし、ご自身でClusterを管理しているのなら、やはり問題が発生する前に必要な対応を行う必要があります。v1.20の時点で、Dockerの使用についての警告メッセージが表示されるようになります。将来のKubernetesリリース(現在の計画では2021年下旬のv1.22)でDockerのRuntimeとしての使用がサポートされなくなれば、containerdやCRI-Oといった他のサポート対象のRuntimeに切り替える必要があります。切り替える際、そのRuntimeが現在使用しているDocker Daemonの設定をサポートすることを確認してください。(Loggingなど)
## では、なぜ混乱が生じ、誰もが恐怖に駆られているのか。
ここで議論になっているのは2つの異なる場面についてであり、それが混乱の原因になっています。Kubernetesクラスターの内部では、Container runtimeと呼ばれるものがあり、それはImageをPullし起動する役目を持っています。Dockerはその選択肢として人気があります(他にはcontainerdやCRI-Oが挙げられます)が、しかしDockerはそれ自体がKubernetesの一部として設計されているわけではありません。これが問題の原因となっています。
お分かりかと思いますが、ここで”Docker”と呼んでいるものは、ある1つのものではなく、その技術的な体系の全体であり、その一部には"containerd"と呼ばれるものもあり、これはそれ自体がハイレベルなContainer runtimeとなっています。Dockerは素晴らしいもので、便利です。なぜなら、多くのUXの改善がされており、それは人間が開発を行うための操作を簡単にしているのです。しかし、それらはKubernetesに必要なものではありません。Kubernetesは人間ではないからです。
このhuman-friendlyな抽象化レイヤが作られたために、結果としてはKubernetesクラスタはDockershimと呼ばれるほかのツールを使い、本当に必要な機能、つまりcontainerdを利用してきました。これは素晴らしいとは言えません。なぜなら、我々がメンテする必要のあるものが増えますし、それは問題が発生する要因ともなります。今回の変更で実際に行われることというのは、Dockershimを最も早い場合でv1.23のリリースでkubeletから除外することです。その結果として、Dockerのサポートがなくなるということなのです。
ここで、containerdがDockerに含まれているなら、なぜDockershimが必要なのかと疑問に思われる方もいるでしょう。
DockerはCRI([Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/))に準拠していません。もしそうであればshimは必要ないのですが、現実はそうでありません。
しかし、これは世界の終わりでありません、心配しないでください。みなさんはContainer runtimeをDockerから他のサポート対象であるContainer runtimeに切り替えるだけでよいのです。
1つ注意すべきことは、クラスターで行われる処理のなかでDocker socket(`/var/run/docker.sock`)に依存する部分がある場合、他のRuntimeへ切り替えるとこの部分が働かなくなるでしょう。このパターンはしばしばDocker in Dockerと呼ばれます。このような場合の対応方法はたくさんあります。[kaniko](https://github.com/GoogleContainerTools/kaniko)、[img](https://github.com/genuinetools/img)、[buildah](https://github.com/containers/buildah)などです。
## では開発者にとって、この変更は何を意味するのか。これからもDockerfileを使ってよいのか。これからもDockerでビルドを行ってよいのか。
この変更は、Dockerを直接操作している多くのみなさんとは別の場面に影響を与えるでしょう。
みなさんが開発を行う際に使用しているDockerと、Kubernetesクラスタの内部で使われているDocker runtimeは関係ありません。これがわかりにくいことは理解しています。開発者にとって、Dockerはこれからも便利なものであり、このアナウンスがあった前と変わらないでしょう。DockerでビルドされたImageは、決してDockerでだけ動作するというわけではありません。それはOCI([Open Container Initiative](https://opencontainers.org/)) Imageと呼ばれるものです。あらゆるOCI準拠のImageは、それを何のツールでビルドしたかによらず、Kubernetesから見れば同じものなのです。[containerd](https://containerd.io/)も[CRI-O](https://cri-o.io/)も、そのようなImageをPullし、起動することが出来ます。
これがコンテナの仕様について、共通の仕様を策定している理由なのです。
さて、この変更は決定しています。いくつかの問題は発生するかもしれませんが、決して壊滅的なものではなく、ほとんどの場合は良い変化となるでしょう。Kubernetesをどのように使用しているかによりますが、この変更が特に何の影響も及ぼさない人もいるでしょうし、影響がとても少ない場合もあります。長期的に見れば、物事を簡単にするのに役立つものです。
もし、この問題がまだわかりにくいとしても、心配しないでください。Kubernetesでは多くのものが変化しており、その全てに完璧に精通している人など存在しません。
経験の多寡や難易度にかかわらず、どんなことでも質問してください。我々の目標は、全ての人が将来の変化について、可能な限りの知識と理解を得られることです。
このブログが多くの質問の答えとなり、不安を和らげることができればと願っています。
別の情報をお探しであれば、[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)を参照してください。

View File

@ -18,7 +18,7 @@ weight: 10
- JSONではなくYAMLを使って設定ファイルを書いてください。これらのフォーマットはほとんどすべてのシナリオで互換的に使用できますが、YAMLはよりユーザーフレンドリーになる傾向があります。
- 意味がある場合は常に、関連オブジェクトを単一ファイルにグループ化します。多くの場合、1つのファイルの方が管理が簡単です。例として[guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml)ファイルを参照してください。
- 意味がある場合は常に、関連オブジェクトを単一ファイルにグループ化します。多くの場合、1つのファイルの方が管理が簡単です。例として[guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml)ファイルを参照してください。
- 多くの`kubectl`コマンドがディレクトリに対しても呼び出せることも覚えておきましょう。たとえば、設定ファイルのディレクトリで `kubectl apply`を呼び出すことができます。

View File

@ -0,0 +1,57 @@
---
title: サービス内部トラフィックポリシー
content_type: concept
weight: 45
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
*サービス内部トラフィックポリシー*を使用すると、内部トラフィックを、それが発信されたノード内のエンドポイントにのみルーティングするように制限できます。
ここでの「内部」トラフィックとは、現在のクラスターのPodから発信されたトラフィックを指します。これは、コストを削減し、パフォーマンスを向上させるのに役立ちます。
<!-- body -->
## ServiceInternalTrafficPolicyの使用
`ServiceInternalTrafficPolicy` [フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にすると、`.spec.internalTrafficPolicy`を`Local`に設定して、{{< glossary_tooltip text="Service" term_id="service" >}}内部のみのトラフィックポリシーを有効にすることができます。
これにより、kube-proxyは、クラスター内部トラフィックにノードローカルエンドポイントのみを使用するようになります。
{{< note >}}
特定のServiceのエンドポイントがないノード上のPodの場合、Serviceに他のノードのエンドポイントがある場合でも、Serviceは(このノード上のPodの)エンドポイントがゼロであるかのように動作します。
{{< /note >}}
次の例は、`.spec.internalTrafficPolicy`を`Local`に設定した場合のServiceの様子を示しています
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
internalTrafficPolicy: Local
```
## 使い方
kube-proxyは、`spec.internalTrafficPolicy`の設定に基づいて、ルーティング先のエンドポイントをフィルタリングします。
`spec.internalTrafficPolicy`が`Local`であれば、ノードのローカルエンドポイントにのみルーティングできるようにします。`Cluster`または未設定であればすべてのエンドポイントにルーティングできるようにします。
`ServiceInternalTrafficPolicy`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が有効な場合、`spec.internalTrafficPolicy`のデフォルトは`Cluster`です。
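設定が意図通りに反映されているかどうかは、例えば次のように確認できます(`my-service`は上記の例のService名です)。
```shell
kubectl get service my-service -o jsonpath='{.spec.internalTrafficPolicy}'
```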
## 制約
* Serviceで`externalTrafficPolicy`が`Local`に設定されている場合、サービス内部トラフィックポリシーは使用されません。同じServiceだけではなく、同じクラスター内の異なるServiceで両方の機能を使用することができます。
## {{% heading "whatsnext" %}}
* [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints)を読む
* [Service External Traffic Policy](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)を読む
* [サービスとアプリケーションの接続](/ja/docs/concepts/services-networking/connect-applications-service/)を読む

View File

@ -0,0 +1,58 @@
---
title: kubectlの使用規則
content_type: concept
---
<!-- overview -->
`kubectl`の推奨される使用規則です。
<!-- body -->
## 再利用可能なスクリプトでの`kubectl`の使用
スクリプトでの安定した出力のために:
* `-o name`, `-o json`, `-o yaml`, `-o go-template`, `-o jsonpath` などの機械処理向けの出力形式のいずれかを指定します。
* バージョンを完全に指定します。例えば、`jobs.v1.batch/myjob`のようにします。これにより、kubectlが時間とともに変化する可能性のあるデフォルトのバージョンを使用しないようにします。
* コンテキストや設定、その他の暗黙的な状態に頼ってはいけません。
## ベストプラクティス
### `kubectl run`
`kubectl run`でInfrastructure as Code(インフラのコード化)の要件を満たすためには:
* イメージにバージョン固有のタグを付けて、そのタグを新しいバージョンに移さない。例えば、`:latest`ではなく、`:v1234`、`v1.2.3`、`r03062016-1-4`を使用してください(詳細は、[Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images)を参照してください)。
* パラメーターを多用するイメージについては、そのスクリプトをソースコントロールにチェックインします。
* `kubectl run` フラグでは表現できない機能が必要な場合は、ソースコントロールにチェックインした設定ファイルに切り替えます。
`--dry-run=client`フラグを使用すると、実際に送信することなく、クラスターに送信されるオブジェクトを確認することができます。
{{< note >}}
すべての`kubectl run`ジェネレーターは非推奨です。ジェネレーターの[リスト](https://v1-17.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators)とその使用方法については、Kubernetes v1.17のドキュメントを参照してください。
{{< /note >}}
#### Generators
`kubectl create --dry-run=client -o yaml`というkubectlコマンドで以下のリソースを生成することができます。
* `clusterrole`: ClusterRoleを作成します。
* `clusterrolebinding`: 特定のClusterRoleに対するClusterRoleBindingを作成します。
* `configmap`: ローカルファイル、ディレクトリ、またはリテラル値からConfigMapを作成します。
* `cronjob`: 指定された名前のCronJobを作成します。
* `deployment`: 指定された名前でDeploymentを作成します。
* `job`: 指定された名前でJobを作成します。
* `namespace`: 指定された名前でNamespaceを作成します。
* `poddisruptionbudget`: 指定された名前でPodDisruptionBudgetを作成します。
* `priorityclass`: 指定された名前でPriorityClassを作成します。
* `quota`: 指定された名前でQuotaを作成します。
* `role`: 1つのルールでRoleを作成します。
* `rolebinding`: 特定のロールやClusterRoleに対するRoleBindingを作成します。
* `secret`: 指定されたサブコマンドを使用してSecretを作成します。
* `service`: 指定されたサブコマンドを使用してServiceを作成します。
* `serviceaccount`: 指定された名前でServiceAccountを作成します。
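例えば、Deploymentのマニフェストをクラスターへ送信せずに生成するには、次のように実行します(`my-dep`という名前と`nginx`イメージはあくまで例示です)。
```shell
kubectl create deployment my-dep --image=nginx --dry-run=client -o yaml
```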
### `kubectl apply`
* リソースの作成や更新には `kubectl apply` を使用できます。kubectl applyを使ったリソースの更新については、[Kubectl Book](https://kubectl.docs.kubernetes.io)を参照してください。

View File

@ -0,0 +1,114 @@
---
title: API概要
content_type: concept
weight: 10
no_list: true
card:
name: reference
weight: 50
title: API概要
---
<!-- overview -->
このセクションでは、Kubernetes APIのリファレンス情報を提供します。
REST APIはKubernetesの基本的な構造です。
すべての操作、コンポーネント間の通信、および外部ユーザーのコマンドは、REST API呼び出しとしてAPIサーバーが処理します。
その結果、Kubernetesプラットフォーム内のすべてのものは、APIオブジェクトとして扱われ、[API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)に対応するエントリーがあります。
[Kubernetes APIリファレンス](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)は、Kubernetesバージョン{{< param "version" >}}のAPI一覧を提供します。
一般的な背景情報を知るには、[The Kubernetes API](/docs/concepts/overview/kubernetes-api/)、
[Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/)を読んでください。
それらはKubernetes APIサーバーがクライアントを認証する方法とリクエストを認可する方法を説明します。
## APIバージョニング
JSONとProtobufなどのシリアル化スキーマの変更については同じガイドラインに従います。
以下の説明は、両方のフォーマットをカバーしています。
APIのバージョニングとソフトウェアのバージョニングは間接的に関係しています。
[API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)は、APIバージョニングとソフトウェアバージョニングの関係を説明しています。
APIのバージョンが異なると、安定性やサポートのレベルも異なります。
各レベルの基準については、[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)で詳しく説明しています。
各レベルの概要は以下の通りです:
- Alpha:
- バージョン名に「alpha」が含まれています「v1alpha1」
- バグが含まれている可能性があります。
機能を有効にするとバグが露呈する可能性があります。
機能がデフォルトで無効になっている可能性があります。
- ある機能のサポートは、予告なしにいつでも中止される可能性があります。
- 後にリリースされるソフトウェアで、互換性のない方法で予告なく変更される可能性があります。
- バグのリスクが高く、長期的なサポートが得られないため、短期間のテストクラスターのみでの使用を推奨します。
- Beta:
- バージョン名には `beta` が含まれています(例:`v2beta3`)。
- ソフトウェアは十分にテストされています。
機能を有効にすることは安全であると考えられています。
機能はデフォルトで有効になっています。
- 機能のサポートが打ち切られることはありませんが、詳細は変更される可能性があります。
- オブジェクトのスキーマやセマンティクスは、その後のベータ版や安定版のリリースで互換性のない方法で変更される可能性があります。
このような場合には、移行手順が提供されます。
スキーマの変更に伴い、APIオブジェクトの削除、編集、再作成が必要になる場合があります。
編集作業は単純ではないかもしれません。
移行に伴い、その機能に依存しているアプリケーションのダウンタイムが必要になる場合があります。
- 本番環境での使用は推奨しません。
後続のリリースは、互換性のない変更を導入する可能性があります。
独立してアップグレード可能な複数のクラスターがある場合、この制限を緩和できる可能性があります。
{{< note >}}
ベータ版の機能をお試しいただき、ご意見をお寄せください。
ベータ版の機能が終了した後はこれ以上の変更ができない場合があります。
{{< /note >}}
- Stable:
- バージョン名は `vX` であり、`X` は整数です。
- 安定版の機能は、リリースされたソフトウェアの中で、その後の多くのバージョンに登場します。
## APIグループ
[API groups](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)で、KubernetesのAPIを簡単に拡張することができます。
APIグループは、RESTパスとシリアル化されたオブジェクトの`apiVersion`フィールドで指定されます。
KubernetesにはいくつかのAPIグループがあります:
* *core*(*legacy*とも呼ばれる)グループは、RESTパス `/api/v1` にあります。
コアグループは`apiVersion`フィールドにグループ名を含めず、例えば`apiVersion: v1`のように指定します。
* 名前付きのグループは、RESTパス `/apis/$GROUP_NAME/$VERSION` にあり、`apiVersion: $GROUP_NAME/$VERSION`の形で指定します(例:`apiVersion: batch/v1`)。
サポートされているAPIグループの完全なリストは以下にあります。
[Kubernetes API reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#strong-api-groups-strong-)。
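例えば、名前付きグループ`batch/v1`を使用する最小限のマニフェストは次のようになります(名前やイメージはあくまで例示です)。
```yaml
apiVersion: batch/v1 # 名前付きグループbatchのバージョンv1
kind: Job
metadata:
  name: example-job # 例示用の名前
spec:
  template:
    spec:
      containers:
        - name: example
          image: busybox # 例示用のイメージ
          command: ["echo", "hello"]
      restartPolicy: Never
```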
## APIグループの有効化と無効化 {#enabling-or-disabling}
一部のリソースやAPIグループはデフォルトで有効になっています。
APIサーバー上で`--runtime-config`を設定することで、有効にしたり無効にしたりすることができます。
また`--runtime-config`フラグには、APIサーバーのランタイム構成を記述したコンマ区切りの`<key>[=<value>]`ペアを指定します。
もし`=<value>`の部分が省略された場合には、`=true`が指定されたものとして扱われます。
例えば:
- `batch/v1`を無効にするには、`--runtime-config=batch/v1=false`を設定する
- `batch/v2alpha1`を有効にするには、`--runtime-config=batch/v2alpha1`を設定する
{{< note >}}
グループやリソースを有効または無効にした場合、
APIサーバーとコントローラマネージャーを再起動して、`--runtime-config`の変更を反映させる必要があります。
{{< /note >}}
## 永続化
Kubernetesはシリアライズされた状態を、APIリソースとして{{< glossary_tooltip term_id="etcd" >}}に書き込んで保存します。
## {{% heading "whatsnext" %}}
- [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)をもっと知る
- [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)の設計ドキュメントを読む

View File

@ -0,0 +1,399 @@
---
content_type: concept
title: アプリケーションの自己観察とデバッグ
---
<!-- overview -->
アプリケーションが稼働すると、必然的にその問題をデバッグする必要が出てきます。
先に、`kubectl get pods`を使って、Podの簡単なステータス情報を取得する方法を説明しました。
しかし、アプリケーションに関するより多くの情報を取得する方法がいくつかあります。
<!-- body -->
## `kubectl describe pod`を使ってpodの詳細を取得
この例では、先ほどの例と同様に、Deploymentを使用して2つのpodを作成します。
{{< codenew file="application/nginx-with-request.yaml" >}}
以下のコマンドを実行して、Deploymentを作成します:
```shell
kubectl apply -f https://k8s.io/examples/application/nginx-with-request.yaml
```
```none
deployment.apps/nginx-deployment created
```
以下のコマンドでPodの状態を確認します:
```shell
kubectl get pods
```
```none
NAME READY STATUS RESTARTS AGE
nginx-deployment-1006230814-6winp 1/1 Running 0 11s
nginx-deployment-1006230814-fmgu3 1/1 Running 0 11s
```
`kubectl describe pod`を使うと、これらのPodについてより多くの情報を得ることができます。
例えば:
```shell
kubectl describe pod nginx-deployment-1006230814-6winp
```
```none
Name: nginx-deployment-1006230814-6winp
Namespace: default
Node: kubernetes-node-wul5/10.240.0.9
Start Time: Thu, 24 Mar 2016 01:39:49 +0000
Labels: app=nginx,pod-template-hash=1006230814
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-1956810328","uid":"14e607e7-8ba1-11e7-b5cb-fa16" ...
Status: Running
IP: 10.244.0.6
Controllers: ReplicaSet/nginx-deployment-1006230814
Containers:
nginx:
Container ID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
Image: nginx
Image ID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
Port: 80/TCP
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
cpu: 500m
memory: 128Mi
Requests:
memory: 128Mi
cpu: 500m
State: Running
Started: Thu, 24 Mar 2016 01:39:51 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5kdvl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
54s 54s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-1006230814-6winp to kubernetes-node-wul5
54s 54s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulling pulling image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulled Successfully pulled image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Created Created container with docker id 90315cc9f513
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Started Started container with docker id 90315cc9f513
```
ここでは、コンテナ(複数可)とPodに関する構成情報(ラベル、リソース要件など)や、コンテナ(複数可)とPodに関するステータス情報(状態、準備状況、再起動回数、イベントなど)を確認できます。
コンテナの状態は、Waiting(待機中)、Running(実行中)、Terminated(終了)のいずれかです。状態に応じて、追加の情報が提供されます。ここでは、Running状態のコンテナについて、コンテナがいつ開始されたかが表示されています。
Readyは、コンテナが最後のReadiness Probeに合格したかどうかを示します。(この場合、コンテナにはReadiness Probeが設定されていません。Readiness Probeが設定されていない場合、コンテナは準備が完了した状態であるとみなされます。)
Restart Countは、コンテナが何回再起動されたかを示します。この情報は、再起動ポリシーが「always」に設定されているコンテナのクラッシュループを検出するのに役立ちます。
現在、Podに関連する条件は、二値のReady条件のみです。これは、Podがリクエストに対応可能であり、マッチングするすべてのサービスのロードバランシングプールに追加されるべきであることを示します。
最後に、Podに関連する最近のイベントのログが表示されます。このシステムでは、複数の同一イベントを圧縮して、最初に見られた時刻と最後に見られた時刻、そして見られた回数を示します。"From"はイベントを記録しているコンポーネントを示し、"SubobjectPath"はどのオブジェクト(例: Pod内のコンテナ)が参照されているかを示し、"Reason"と "Message"は何が起こったかを示しています。
## 例: Pending Podsのデバッグ
イベントを使って検出できる一般的なシナリオは、どのノードにも収まらないPodを作成した場合です。例えば、Podがどのノードでも空いている以上のリソースを要求したり、どのノードにもマッチしないラベルセレクターを指定したりする場合です。例えば、各(仮想)マシンが1つのCPUを持つ4ノードのクラスター上で、(2つではなく)5つのレプリカを持ち、500ではなく600ミリコアを要求する前のDeploymentを作成したとします。この場合、Podの1つがスケジュールできなくなります。(なお、各ノードではfluentdやskydnsなどのクラスターアドオンPodが動作しているため、もし1000ミリコアを要求した場合、どのPodもスケジュールできなくなります)
```shell
kubectl get pods
```
```none
NAME READY STATUS RESTARTS AGE
nginx-deployment-1006230814-6winp 1/1 Running 0 7m
nginx-deployment-1006230814-fmgu3 1/1 Running 0 7m
nginx-deployment-1370807587-6ekbw 1/1 Running 0 1m
nginx-deployment-1370807587-fg172 0/1 Pending 0 1m
nginx-deployment-1370807587-fz9sd 0/1 Pending 0 1m
```
nginx-deployment-1370807587-fz9sdのPodが実行されていない理由を調べるには、保留中のPodに対して`kubectl describe pod`を使用し、そのイベントを見てみましょう
```shell
kubectl describe pod nginx-deployment-1370807587-fz9sd
```
```none
Name: nginx-deployment-1370807587-fz9sd
Namespace: default
Node: /
Labels: app=nginx,pod-template-hash=1370807587
Status: Pending
IP:
Controllers: ReplicaSet/nginx-deployment-1370807587
Containers:
nginx:
Image: nginx
Port: 80/TCP
QoS Tier:
memory: Guaranteed
cpu: Guaranteed
Limits:
cpu: 1
memory: 128Mi
Requests:
cpu: 1
memory: 128Mi
Environment Variables:
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 48s 7 {default-scheduler } Warning FailedScheduling pod (nginx-deployment-1370807587-fz9sd) failed to fit in any node
fit failure on node (kubernetes-node-6ta5): Node didn't have enough resource: CPU, requested: 1000, used: 1420, capacity: 2000
fit failure on node (kubernetes-node-wul5): Node didn't have enough resource: CPU, requested: 1000, used: 1100, capacity: 2000
```
ここでは、理由 `FailedScheduling` (およびその他の理由)でPodのスケジュールに失敗したという、スケジューラーによって生成されたイベントを見ることができます。このメッセージは、どのノードでもPodに十分なリソースがなかったことを示しています。
この状況を修正するには、`kubectl scale`を使用して、4つ以下のレプリカを指定するようにDeploymentを更新します。(あるいは、1つのPodを保留にしたままにしておいても害はありません。)
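例えば、次のように実行します(Deployment名は本例の`nginx-deployment`を想定しています)。
```shell
kubectl scale deployment/nginx-deployment --replicas=4
```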
`kubectl describe pod`の最後に出てきたようなイベントは、etcdに永続化され、クラスターで何が起こっているかについての高レベルの情報を提供します。
すべてのイベントをリストアップするには、次のようにします:
```shell
kubectl get events
```
しかし、イベントは名前空間に所属することを忘れてはいけません。つまり、名前空間で管理されているオブジェクトのイベントに興味がある場合(例: 名前空間 `my-namespace`のPods で何が起こったか)、コマンドに名前空間を明示的に指定する必要があります。
```shell
kubectl get events --namespace=my-namespace
```
すべての名前空間からのイベントを見るには、`--all-namespaces` 引数を使用できます。
`kubectl describe pod`に加えて、(`kubectl get pod` で提供される以上の)Podに関する追加情報を得るためのもう一つの方法は、`-o yaml`出力形式フラグを `kubectl get pod`に渡すことです。これにより、`kubectl describe pod`よりもさらに多くの情報、つまりシステムが持っているPodに関するすべての情報をYAML形式で得ることができます。ここでは、アノテーション(Kubernetesのシステムコンポーネントが内部的に使用している、ラベル制限のないキーバリューのメタデータ)、再起動ポリシー、ポート、ボリュームなどが表示されます。
```shell
kubectl get pod nginx-deployment-1006230814-6winp -o yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-1006230814","uid":"4c84c175-f161-11e5-9a78-42010af00005","apiVersion":"extensions","resourceVersion":"133434"}}
creationTimestamp: 2016-03-24T01:39:50Z
generateName: nginx-deployment-1006230814-
labels:
app: nginx
pod-template-hash: "1006230814"
name: nginx-deployment-1006230814-6winp
namespace: default
resourceVersion: "133447"
uid: 4c879808-f161-11e5-9a78-42010af00005
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 500m
memory: 128Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-4bcbi
readOnly: true
dnsPolicy: ClusterFirst
nodeName: kubernetes-node-wul5
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-4bcbi
secret:
secretName: default-token-4bcbi
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-03-24T01:39:51Z
status: "True"
type: Ready
containerStatuses:
- containerID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
image: nginx
imageID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2016-03-24T01:39:51Z
hostIP: 10.240.0.9
phase: Running
podIP: 10.244.0.6
startTime: 2016-03-24T01:39:49Z
```
## 例: ダウン/到達不可能なノードのデバッグ
例えば、ノード上で動作しているPodのおかしな挙動に気付いたり、Podがノード上でスケジュールされない原因を探ったりと、デバッグ時にノードのステータスを見ることが有用な場合があります。Podと同様に、`kubectl describe node`や`kubectl get node -o yaml`を使ってノードの詳細情報を取得することができます。例えば、ノードがダウンした場合(ネットワークから切断された、またはkubeletが死んで再起動しないなど)に表示される内容は以下の通りです。ノードがNotReadyであることを示すイベントに注目してください。また、Podが実行されなくなっていることにも注目してください(NotReady状態が5分続くと、Podは退避されます)。
```shell
kubectl get nodes
```
```none
NAME STATUS ROLES AGE VERSION
kubernetes-node-861h NotReady <none> 1h v1.13.0
kubernetes-node-bols Ready <none> 1h v1.13.0
kubernetes-node-st6x Ready <none> 1h v1.13.0
kubernetes-node-unaj Ready <none> 1h v1.13.0
```
```shell
kubectl describe node kubernetes-node-861h
```
```none
Name: kubernetes-node-861h
Role
Labels: kubernetes.io/arch=amd64
kubernetes.io/os=linux
kubernetes.io/hostname=kubernetes-node-861h
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Mon, 04 Sep 2017 17:13:23 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk Unknown Fri, 08 Sep 2017 16:04:28 +0800 Fri, 08 Sep 2017 16:20:58 +0800 NodeStatusUnknown Kubelet stopped posting node status.
MemoryPressure Unknown Fri, 08 Sep 2017 16:04:28 +0800 Fri, 08 Sep 2017 16:20:58 +0800 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Fri, 08 Sep 2017 16:04:28 +0800 Fri, 08 Sep 2017 16:20:58 +0800 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Fri, 08 Sep 2017 16:04:28 +0800 Fri, 08 Sep 2017 16:20:58 +0800 NodeStatusUnknown Kubelet stopped posting node status.
Addresses: 10.240.115.55,104.197.0.26
Capacity:
cpu: 2
hugePages: 0
memory: 4046788Ki
pods: 110
Allocatable:
cpu: 1500m
hugePages: 0
memory: 1479263Ki
pods: 110
System Info:
Machine ID: 8e025a21a4254e11b028584d9d8b12c4
System UUID: 349075D1-D169-4F25-9F2A-E886850C47E3
Boot ID: 5cd18b37-c5bd-4658-94e0-e436d3f110e0
Kernel Version: 4.4.0-31-generic
OS Image: Debian GNU/Linux 8 (jessie)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.12.5
Kubelet Version: v1.6.9+a3d1dfa6f4335
Kube-Proxy Version: v1.6.9+a3d1dfa6f4335
ExternalID: 15233045891481496305
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
......
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
900m (60%) 2200m (146%) 1009286400 (66%) 5681286400 (375%)
Events: <none>
```
```shell
kubectl get node kubernetes-node-861h -o yaml
```
```yaml
apiVersion: v1
kind: Node
metadata:
creationTimestamp: 2015-07-10T21:32:29Z
labels:
kubernetes.io/hostname: kubernetes-node-861h
name: kubernetes-node-861h
resourceVersion: "757"
uid: 2a69374e-274b-11e5-a234-42010af0d969
spec:
externalID: "15233045891481496305"
podCIDR: 10.244.0.0/24
providerID: gce://striped-torus-760/us-central1-b/kubernetes-node-861h
status:
addresses:
- address: 10.240.115.55
type: InternalIP
- address: 104.197.0.26
type: ExternalIP
capacity:
cpu: "1"
memory: 3800808Ki
pods: "100"
conditions:
- lastHeartbeatTime: 2015-07-10T21:34:32Z
lastTransitionTime: 2015-07-10T21:35:15Z
reason: Kubelet stopped posting node status.
status: Unknown
type: Ready
nodeInfo:
bootID: 4e316776-b40d-4f78-a4ea-ab0d73390897
containerRuntimeVersion: docker://Unknown
kernelVersion: 3.16.0-0.bpo.4-amd64
kubeProxyVersion: v0.21.1-185-gffc5a86098dc01
kubeletVersion: v0.21.1-185-gffc5a86098dc01
machineID: ""
osImage: Debian GNU/Linux 7 (wheezy)
systemUUID: ABE5F6B4-D44B-108B-C46A-24CCE16C8B6E
```
## {{% heading "whatsnext" %}}
以下のような追加のデバッグツールについて学びます:
* [Logging](/docs/concepts/cluster-administration/logging/)
* [Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* [Getting into containers via `exec`](/docs/tasks/debug-application-cluster/get-shell-running-container/)
* [Connecting to containers via proxies](/docs/tasks/extend-kubernetes/http-proxy-access-api/)
* [Connecting to containers via port forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* [Inspect Kubernetes node with crictl](/docs/tasks/debug-application-cluster/crictl/)

View File

@ -0,0 +1,142 @@
---
title: アプリケーションのトラブルシューティング
content_type: concept
---
<!-- overview -->
このガイドは、Kubernetesにデプロイされ、正しく動作しないアプリケーションをユーザーがデバッグするためのものです。
これは、自分のクラスターをデバッグしたい人のためのガイドでは *ありません*
そのためには、[debug-cluster](/docs/tasks/debug-application-cluster/debug-cluster)を確認する必要があります。
<!-- body -->
## 問題の診断
トラブルシューティングの最初のステップは切り分けです。何が問題なのでしょうか?
Podなのか、レプリケーションコントローラーなのか、それともサービスなのか
* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
* [Debugging Services](#debugging-services)
### Podのデバッグ
デバッグの第一歩は、Podを見てみることです。
以下のコマンドで、Podの現在の状態や最近のイベントを確認します。
```shell
kubectl describe pods ${POD_NAME}
```
Pod内のコンテナの状態を見てください。
すべて`Running`ですか? 最近、再起動がありましたか?
Podの状態に応じてデバッグを続けます。
#### PodがPendingのまま
Podが`Pending`で止まっている場合、それはノードにスケジュールできないことを意味します。
一般に、これはある種のリソースが不十分で、スケジューリングできないことが原因です。
上の`kubectl describe ...`コマンドの出力を見てください。
なぜあなたのPodをスケジュールできないのか、スケジューラーからのメッセージがあるはずです。
理由は以下の通りです。
* **リソースが不足しています。** クラスターのCPUまたはメモリーを使い果たしている可能性があります。Podを削除するか、リソースの要求値を調整するか、クラスターに新しいノードを追加する必要があります。詳しくは[Compute Resources document](/ja/docs/concepts/configuration/manage-resources-containers/)を参照してください。
* **あなたが使用しているのは`hostPort`**です。Podを`hostPort`にバインドすると、そのPodがスケジュールできる場所が限定されます。ほとんどの場合、`hostPort`は不要なので、Serviceオブジェクトを使ってPodを公開するようにしてください。もし`hostPort` が必要な場合は、Kubernetesクラスターのノード数だけPodをスケジュールすることができます。
#### Podがwaitingのまま
Podが`Waiting`状態で止まっている場合、ワーカーノードにスケジュールされていますが、そのノード上で実行することができません。この場合も、`kubectl describe ...`の情報が参考になるはずです。`Waiting`状態のPodの最も一般的な原因は、コンテナイメージのプルに失敗することです。
確認すべきことは3つあります。
* イメージの名前が正しいかどうか確認してください。
* イメージをレジストリにプッシュしましたか?
* あなたのマシンで手動で`docker pull <image>`を実行し、イメージをプルできるかどうか確認してください。
#### Podがクラッシュするなどの不健全な状態
Podがスケジュールされると、[Debug Running Pods](/docs/tasks/debug-application-cluster/debug-running-pod/)で説明されている方法がデバッグに利用できるようになります。
#### Podが期待する通りに動きません
Podが期待した動作をしない場合、ポッドの記述(ローカルマシンの `mypod.yaml` ファイルなど)に誤りがあり、Pod作成時にその誤りが黙って無視された可能性があります。Pod記述のセクションのネストが正しくないか、キー名が間違って入力されていることがよくあり、そのようなとき、そのキーは無視されます。たとえば、`command`のスペルを`commnd`と間違えた場合、Podは作成されますが、あなたが意図したコマンドラインは使用されません。
まずPodを削除して、`--validate` オプションを付けて再度作成してみてください。
例えば、`kubectl apply --validate -f mypod.yaml`と実行します。
`command`のスペルを`commnd`に間違えると、以下のようなエラーになります。
```shell
I0805 10:43:25.129850 46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973 46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
```
<!-- TODO: Now that #11914 is merged, this advice may need to be updated -->
次に確認することは、apiserver上のPodが、作成しようとしたPod(例えば、ローカルマシンのyamlファイル)と一致しているかどうかです。
例えば、`kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` を実行して、元のポッドの説明である`mypod.yaml`とapiserverから戻ってきた`mypod-on-apiserver.yaml`を手動で比較してみてください。
通常、"apiserver" バージョンには、元のバージョンにはない行がいくつかあります。これは予想されることです。
しかし、もし元のバージョンにある行がapiserverバージョンにない場合、これはあなたのPod specに問題があることを示している可能性があります。
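例えば、次のように取得して比較できます(ファイル名は本例のものです)。
```shell
kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml
diff mypod.yaml mypod-on-apiserver.yaml
```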
### レプリケーションコントローラーのデバッグ
レプリケーションコントローラーはかなり単純なものです。
Podを作成できるか、できないかのどちらかです。
もしPodを作成できないのであれば、[上記の説明](#debugging-pods)を参照して、Podをデバッグしてください。
また、`kubectl describe rc ${CONTROLLER_NAME}`を使用すると、レプリケーションコントローラーに関連するイベントを確認することができます。
### Serviceのデバッグ
Serviceは、Podの集合全体でロードバランシングを提供します。
Serviceが正しく動作しない原因には、いくつかの一般的な問題があります。
以下の手順は、Serviceの問題をデバッグするのに役立つはずです。
まず、Serviceに対応するEndpointが存在することを確認します。
全てのServiceオブジェクトに対して、apiserverは `endpoints` リソースを利用できるようにします。
このリソースは次のようにして見ることができます。
```shell
kubectl get endpoints ${SERVICE_NAME}
```
EndpointがServiceのメンバーとして想定されるPod数と一致していることを確認してください。
例えば、3つのレプリカを持つnginxコンテナ用のServiceであれば、ServiceのEndpointには3つの異なるIPアドレスが表示されるはずです。
#### Serviceに対応するEndpointがありません
Endpointが見つからない場合は、Serviceが使用しているラベルを使用してPodをリストアップしてみてください。
例えば、次のようなラベルセレクターを持つServiceがあるとします。
```yaml
...
spec:
- selector:
name: nginx
type: frontend
```
セレクタに一致するPodを一覧表示するには、次のコマンドを使用します。
```shell
kubectl get pods --selector=name=nginx,type=frontend
```
リストがServiceを提供する予定のPodと一致することを確認します。
Podの`containerPort`がServiceの`targetPort`と一致することを確認します。
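それぞれの値は、例えば次のようなコマンドで確認できます(`${SERVICE_NAME}`と`${POD_NAME}`は実際の名前に置き換えてください)。
```shell
# ServiceのtargetPortを確認する
kubectl describe service ${SERVICE_NAME}
# PodのcontainerPortを確認する
kubectl get pod ${POD_NAME} -o jsonpath='{.spec.containers[*].ports[*].containerPort}'
```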
#### ネットワークトラフィックが転送されません
詳しくは[Serviceのデバッグ](/ja/docs/tasks/debug-application-cluster/debug-service/)を参照してください。
## {{% heading "whatsnext" %}}
上記のいずれの方法でも問題が解決しない場合は、以下の手順に従ってください。
[Debugging Service document](/docs/tasks/debug-application-cluster/debug-service/)で、`Service` が実行されていること、`Endpoints`があること、`Pods`が実際にサービスを提供していること、DNS が機能していること、IPtablesルールがインストールされていること、kube-proxyが誤作動を起こしていないことなどを確認してください。
[トラブルシューティングドキュメント](/docs/tasks/debug-application-cluster/troubleshooting/)に詳細が記載されています。

View File

@ -0,0 +1,62 @@
---
title: リソースメトリクスパイプライン
content_type: concept
---
<!-- overview -->
Kubernetesでは、コンテナのCPU使用率やメモリ使用率といったリソース使用量のメトリクスが、メトリクスAPIを通じて提供されています。これらのメトリクスは、ユーザーが`kubectl top`コマンドで直接アクセスするか、クラスター内のコントローラー(例えばHorizontal Pod Autoscaler)が判断するためにアクセスすることができます。
<!-- body -->
## メトリクスAPI
メトリクスAPIを使用すると、指定したノードやPodが現在使用しているリソース量を取得することができます。
このAPIはメトリックの値を保存しないので、例えば10分前に指定されたノードが使用したリソース量を取得することはできません。
メトリクスAPIは他のAPIと何ら変わりはありません。
- 他のKubernetes APIと同じエンドポイントを経由して、`/apis/metrics.k8s.io/`パスの下で発見できます。
- 同じセキュリティ、スケーラビリティ、信頼性の保証を提供します。
メトリクスAPIは[k8s.io/metrics](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go)リポジトリで定義されています。
メトリクスAPIについての詳しい情報はそちらをご覧ください。
{{< note >}}
メトリクスAPIを使用するには、クラスター内にメトリクスサーバーが配置されている必要があります。そうでない場合は利用できません。
{{< /note >}}
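メトリクスサーバーが配置されているクラスターでは、例えば次のようにメトリクスAPIへアクセスしたり、`kubectl top`で使用量を確認したりできます(出力内容は環境によって異なります)。
```shell
# メトリクスAPIをAPIサーバー経由で直接取得する
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
# ノードとPodのリソース使用量を表示する
kubectl top node
kubectl top pod
```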
## リソース使用量の測定
### CPU
CPUは、一定期間の平均使用量を[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)という単位で報告されます。
この値は、カーネルが提供する累積CPUカウンターの比率を取得することで得られます(LinuxとWindowsの両カーネルで)。
kubeletは、比率計算のためのウィンドウを選択します。
### メモリ
メモリは、測定値が収集された時点のワーキングセットとして、バイト単位で報告されます。
理想的な世界では、「ワーキングセット」は、メモリ不足で解放できない使用中のメモリ量です。
しかし、ワーキングセットの計算はホストOSによって異なり、一般に推定値を生成するために経験則を多用しています。
Kubernetesはスワップをサポートしていないため、すべての匿名(非ファイルバックアップ)メモリが含まれます。
ホストOSは常にそのようなページを回収できるわけではないため、メトリクスには通常、一部のキャッシュされた(ファイルバックされた)メモリも含まれます。
## メトリクスサーバー
[メトリクスサーバー](https://github.com/kubernetes-sigs/metrics-server)は、クラスター全体のリソース使用量データのアグリゲーターです。
デフォルトでは、`kube-up.sh`スクリプトで作成されたクラスターにDeploymentオブジェクトとしてデプロイされます。
別のKubernetesセットアップ機構を使用する場合は、提供される[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases)ファイルを使用してデプロイすることができます。
メトリクスサーバーは、Summary APIからメトリクスを収集します。
各ノードの[Kubelet](/docs/reference/command-line-tools-reference/kubelet/)から[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)経由でメインAPIサーバーに登録されるようになっています。
メトリクスサーバーについては、[Design proposals](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/metrics-server.md)で詳しく解説しています。
### Summary APIソース
[Kubelet](/docs/reference/command-line-tools-reference/kubelet/)は、ノード、ボリューム、Pod、コンテナレベルの統計情報を収集し、[Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)で要約して利用者が読めるようにします。
1.23以前は、これらのリソースは主に[cAdvisor](https://github.com/google/cadvisor)から収集されていました。しかし、1.23では`PodAndContainerStatsFromCRI`フィーチャーゲートの導入により、コンテナとPodレベルの統計情報をCRI実装で収集することができます。
注意: これはCRI実装によるサポートも必要です(containerd >= 1.6.0, CRI-O >= 1.23.0)。

View File

@ -0,0 +1,41 @@
---
content_type: concept
title: リソース監視のためのツール
---
<!-- overview -->
アプリケーションを拡張し、信頼性の高いサービスを提供するために、デプロイ時にアプリケーションがどのように動作するかを理解する必要があります。
コンテナ、[Pod](/docs/concepts/workloads/pods/)、[Service](/docs/concepts/services-networking/service/)、クラスター全体の特性を調べることにより、Kubernetesクラスターのアプリケーションパフォーマンスを調査することができます。
Kubernetesは、これらの各レベルでアプリケーションのリソース使用に関する詳細な情報を提供します。
この情報により、アプリケーションのパフォーマンスを評価し、ボトルネックを取り除くことで全体のパフォーマンスを向上させることができます。
<!-- body -->
Kubernetesでは、アプリケーションの監視は1つの監視ソリューションに依存することはありません。
新しいクラスターでは、[リソースメトリクス](#resource-metrics-pipeline)または[フルメトリクス](#full-metrics-pipeline)パイプラインを使用してモニタリング統計を収集することができます。
## リソースメトリクスパイプライン
リソースメトリックパイプラインは、[Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/)コントローラーなどのクラスターコンポーネントや、`kubectl top`ユーティリティに関連する限定的なメトリックセットを提供します。
これらのメトリクスは軽量、短期、インメモリーの[metrics-server](https://github.com/kubernetes-sigs/metrics-server)によって収集され、`metrics.k8s.io` APIを通じて公開されます。
metrics-serverはクラスター上のすべてのノードを検出し
各ノードの[kubelet](/docs/reference/command-line-tools-reference/kubelet/)にCPUとメモリーの使用量を問い合わせます。
kubeletはKubernetesマスターとノードの橋渡し役として、マシン上で動作するPodやコンテナを管理します。
kubeletは各Podを構成するコンテナに変換し、コンテナランタイムインターフェースを介してコンテナランタイムから個々のコンテナ使用統計情報を取得します。この情報は、レガシーDocker統合のための統合cAdvisorから取得されます。
そして、集約されたPodリソース使用統計情報を、metrics-server Resource Metrics APIを通じて公開します。
このAPIは、kubeletの認証済みおよび読み取り専用ポート上の `/metrics/resource/v1beta1` で提供されます。
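参考までに、kubeletが公開するリソースメトリクスは、例えば次のようにAPIサーバーのプロキシー経由で参照できます(`<ノード名>`は実際のノード名に置き換えてください。エンドポイントのパスはkubeletのバージョンによって異なる場合があります)。
```shell
kubectl get --raw /api/v1/nodes/<ノード名>/proxy/metrics/resource
```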
## フルメトリクスパイプライン
フルメトリクスパイプラインは、より豊富なメトリクスにアクセスすることができます。
Kubernetesは、Horizontal Pod Autoscalerなどのメカニズムを使用して、現在の状態に基づいてクラスターを自動的にスケールまたは適応させることによって、これらのメトリクスに対応することができます。
モニタリングパイプラインは、kubeletからメトリクスを取得し、`custom.metrics.k8s.io` または `external.metrics.k8s.io` APIを実装してアダプタ経由でKubernetesにそれらを公開します。
CNCFプロジェクトの[Prometheus](https://prometheus.io)は、Kubernetes、ノード、Prometheus自身をネイティブに監視することができます。
CNCFに属さない完全なメトリクスパイプラインのプロジェクトは、Kubernetesのドキュメントの範囲外です。

View File

@ -43,12 +43,12 @@ Google이 일주일에 수십억 개의 컨테이너들을 운영하게 해준
<button id="desktopShowVideoButton" onclick="kub.showVideo()">비디오 보기</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Attend KubeCon North America on October 11-15, 2021</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -15,7 +15,7 @@ W wyniku instalacji Kubernetesa otrzymujesz klaster.
{{< glossary_definition term_id="cluster" length="all" prepend="Klaster Kubernetes to">}}
W tym dokumencie opisujemy składniki niezbędne do zbudowania kompletnego, poprawnie działającego klastra Kubernetes.
W tym dokumencie opisujemy składniki niezbędne do zbudowania kompletnego, poprawnie działającego klastra Kubernetesa.
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="Składniki Kubernetesa" caption="Części składowe klastra Kubernetes" class="diagram-large" >}}

View File

@ -35,8 +35,9 @@ warto rozważyć użycie jednej z [bibliotek klienckich](/docs/reference/using-a
Pełną specyfikację API udokumentowano za pomocą [OpenAPI](https://www.openapis.org/).
Serwer API Kubernetes API udostępnia specyfikację OpenAPI poprzez ścieżkę `/openapi/v2`.
Aby wybrać format odpowiedzi, użyj nagłówków żądania zgodnie z tabelą:
Serwer API Kubernetesa udostępnia specyfikację OpenAPI poprzez
ścieżkę `/openapi/v2`. Aby wybrać format odpowiedzi,
użyj nagłówków żądania zgodnie z tabelą:
<table>
<caption style="display:none">Dopuszczalne wartości nagłówka żądania dla zapytań OpenAPI v2</caption>
@ -75,6 +76,55 @@ Więcej szczegółów znajduje się w dokumencie [Kubernetes Protobuf serializat
oraz w plikach *Interface Definition Language* (IDL) dla każdego ze schematów
zamieszczonych w pakietach Go, które definiują obiekty API.
### OpenAPI V3
{{< feature-state state="alpha" for_k8s_version="v1.23" >}}
Kubernetes v1.23 umożliwia (na razie we wczesnej wersji roboczej) publikowanie swojego API jako OpenAPI v3.
Ta funkcjonalność jest w wersji _alfa_ i jest domyślnie wyłączona.
Funkcjonalności w wersji _alfa_ można włączać poprzez
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) o nazwie `OpenAPIV3`
składnika kube-apiserver.
Po włączeniu tej funkcjonalności, serwer API Kubernetesa udostępnia
zagregowaną specyfikację OpenAPI v3 dla odpowiednich grup i wersji poprzez ścieżkę
`/openapi/v3/apis/<group>/<version>`. Tabela poniżej podaje dopuszczalne wartości
nagłówków żądania.
<table>
<caption style="display:none">Dopuszczalne wartości nagłówka żądania dla zapytań OpenAPI v3</caption>
<thead>
<tr>
<th>Nagłówek</th>
<th style="min-width: 50%;">Dopuszczalne wartości</th>
<th>Uwagi</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>Accept-Encoding</code></td>
<td><code>gzip</code></td>
<td><em>pominięcie tego nagłówka jest dozwolone</em></td>
</tr>
<tr>
<td rowspan="3"><code>Accept</code></td>
<td><code>application/com.github.proto-openapi.spec.v3@v1.0+protobuf</code></td>
<td><em>głównie do celu komunikacji wewnątrz klastra</em></td>
</tr>
<tr>
<td><code>application/json</code></td>
<td><em>domyślne</em></td>
</tr>
<tr>
<td><code>*</code></td>
<td><em>udostępnia </em><code>application/json</code></td>
</tr>
</tbody>
</table>
Poprzez ścieżkę `/openapi/v3` można wyświetlić pełną listę
dostępnych grup i wersji. Formatem odpowiedzi jest tylko JSON.
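Przykładowo, przy włączonej funkcjonalności `OpenAPIV3` listę grup i wersji oraz wybraną specyfikację można pobrać przy pomocy `kubectl` (poniższe polecenia to jedynie szkic poglądowy):
```shell
# Lista dostępnych grup i wersji (odpowiedź tylko w formacie JSON)
kubectl get --raw /openapi/v3
# Specyfikacja OpenAPI v3 dla grupy batch w wersji v1
kubectl get --raw /openapi/v3/apis/batch/v1
```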
## Przechowywanie stanu
Kubernetes przechowuje serializowany stan swoich obiektów w

View File

@ -12,7 +12,7 @@ card:
<!-- overview -->
Kubernetes zaprasza do współpracy wszystkich - zarówno nowicjuszy, jak i doświadczonych!
*Kubernetes zaprasza do współpracy wszystkich - zarówno nowicjuszy, jak i doświadczonych!*
{{< note >}}
Aby dowiedzieć się więcej ogólnych informacji o współpracy przy tworzeniu Kubernetesa, zajrzyj
@ -43,17 +43,94 @@ wymagana jest pewna biegłość w korzystaniu z
Aby zaangażować się w prace nad dokumentacją należy:
1. Podpisać [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md) CNCF.
1. Zapoznać się z [repozytorium dokumentacji](https://github.com/kubernetes/website)
2. Zapoznać się z [repozytorium dokumentacji](https://github.com/kubernetes/website)
i z [generatorem statycznej strony](https://gohugo.io) www.
1. Zrozumieć podstawowe procesy [otwierania *pull request*](/docs/contribute/new-content/new-content/) oraz
3. Zrozumieć podstawowe procesy [otwierania *pull request*](/docs/contribute/new-content/new-content/) oraz
[recenzowania zmian](/docs/contribute/review/reviewing-prs/).
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
{{< mermaid >}}
flowchart TB
subgraph third[Otwórz PR]
direction TB
U[ ] -.-
Q[Ulepsz zawartość] --- N[Dodaj nową]
N --- O[Przetłumacz dokumentację]
O --- P[Zarządzaj dokumentacją<br>przy kolejnych<br>wydaniach K8s]
end
subgraph second[Recenzuj]
direction TB
T[ ] -.-
D[Przejrzyj<br>repozytorium<br>K8s/website] --- E[Pobierz generator<br>stron statycznych<br>Hugo]
E --- F[Zrozum podstawowe<br>polecenia GitHub-a]
F --- G[Zrecenzuj otwarty PR<br>i zmień procesy<br>recenzji]
end
subgraph first[Zapisz się]
direction TB
S[ ] -.-
B[Podpisz CNCF<br>Contributor<br>License Agreement] --- C[Dołącz do Slack-a<br>sig-docs]
C --- V[Zapisz się na listę<br>kubernetes-sig-docs]
V --- M[Weź udział w cotygodniowych<br>spotkaniach sig-docs]
end
A([fa:fa-user Nowy<br>uczestnik]) --> first
A --> second
A --> third
A --> H[Zapytaj!!!]
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,B,C,D,E,F,G,H,M,Q,N,O,P,V grey
class S,T,U spacewhite
class first,second,third white
{{</ mermaid >}}
***Schemat - Jak zacząć współpracę***
To jest schemat postępowania dla osób, które chcą zacząć współtworzyć Kubernetesa. Wykonaj część lub wszystkie kroki opisane w częściach `Zapisz się` i `Recenzuj`. Teraz już możesz tworzyć nowe PR, zgodnie z sugestiami w `Otwórz PR`. I jak zawsze, pytania mile widziane!
Do realizacji niektórych zadań potrzeba wyższego poziomu zaufania i odpowiednich uprawnień w organizacji Kubernetes.
Zajrzyj do [Participating in SIG Docs](/docs/contribute/participate/) po więcej szczegółów dotyczących
ról i uprawnień.
## Pierwsze kroki
Zapoznaj się z krokami opisanymi poniżej, aby się lepiej przygotować.
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
{{< mermaid >}}
flowchart LR
subgraph second[Pierwszy wkład]
direction TB
S[ ] -.-
G[Obejrzyj PRy<br>innych uczestników K8s] -->
A[Przejrzyj listę zgłoszonych spraw<br>na K8s/website<br>po pomysł na nowy PR] --> B[Otwórz PR!!]
end
subgraph first[Sugerowane przygotowanie]
direction TB
T[ ] -.-
D[Przeczytaj wprowadzenie<br>dla współtwórców] -->E[Przeczytaj K8s content<br>and style guides]
E --> F[Poczytaj o typach zawartości<br>stron i skrótach Hugo]
end
first ----> second
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,B,D,E,F,G grey
class S,T spacewhite
class first,second white
{{</ mermaid >}}
***Schemat - Jak się przygotować***
- Przeczytaj [Contribution overview](/docs/contribute/new-content/overview/),
aby dowiedzieć się o różnych sposobach współpracy.
- Zajrzyj do [Contribute to kubernetes/website](https://github.com/kubernetes/website/contribute),
@ -89,10 +166,12 @@ Aby włączyć się w komunikację w ramach SIG Docs, możesz:
się przedstawić!
- [Zapisać się na listę `kubernetes-sig-docs`](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
na której prowadzone są dyskusje o szerszym zasięgu i zapisywane oficjalne decyzje.
- Dołączyć do [cotygodniowego spotkania wideo SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs). Spotkania są zawsze zapowiadane na `#sig-docs` i dodawane do [kalendarza spotkań społeczności Kubernetes](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). Będziesz potrzebował komunikatora [Zoom](https://zoom.us/download) lub telefonu, aby się wdzwonić.
- Dołączyć do [spotkania wideo SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) odbywającego się co dwa tygodnie. Spotkania są zawsze zapowiadane na `#sig-docs` i dodawane do [kalendarza spotkań społeczności Kubernetesa](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=Europe/Warsaw). Będziesz potrzebował komunikatora [Zoom](https://zoom.us/download) lub telefonu, aby się wdzwonić.
- Dołączyć do spotkania SIG Docs na Slacku organizowanego w tych tygodniach, kiedy nie ma spotkania na Zoomie. Informacja o spotkaniu zawsze ogłaszana jest na `#sig-docs`. W rozmowach prowadzonych w różnych wątkach na tym kanale można brać udział do 24 godzin od chwili ogłoszenia.
## Inne sposoby współpracy
- Odwiedź [stronę społeczności Kubernetes](/community/). Korzystaj z Twittera i Stack Overflow, dowiedz się o spotkaniach lokalnych grup Kubernetes, różnych wydarzeniach i nie tylko.
- Przeczytaj [ściągawkę dla współtwórców](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet), aby zaangażować się w dalszy rozwój Kubernetesa.
- Odwiedź [stronę społeczności Kubernetesa](/community/). Korzystaj z Twittera i Stack Overflow, dowiedz się o spotkaniach lokalnych grup Kubernetesa, różnych wydarzeniach i nie tylko.
- Przeczytaj [ściągawkę dla współtwórców](https://www.kubernetes.dev/docs/contributor-cheatsheet/contributor-cheatsheet/), aby zaangażować się w dalszy rozwój Kubernetesa.
- Odwiedź stronę [Kubernetes Contributors](https://www.kubernetes.dev/) i zajrzyj do [dodatkowych zasobów](https://www.kubernetes.dev/resources/).
- Przygotuj [wpis na blogu lub *case study*](/docs/contribute/new-content/blogs-case-studies/).

View File

@ -56,7 +56,7 @@ biblioteki to:
* [Scheduler Profiles](/docs/reference/scheduling/config#profiles)
* Spis [portów i protokołów](/docs/reference/ports-and-protocols/), które
muszą być otwarte dla warstwy sterowania i na węzłach roboczych
muszą być otwarte dla warstwy sterowania i na węzłach roboczych.
## API konfiguracji
@ -65,14 +65,15 @@ Kubernetesa lub innych narzędzi. Choć większość tych API nie jest udostępn
serwer API w trybie RESTful, są one niezbędne dla użytkowników i administratorów
w korzystaniu i zarządzaniu klastrem.
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/)
* [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
* [kube-scheduler configuration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/)
* [kube-scheduler policy reference (v1)](/docs/reference/config-api/kube-scheduler-policy-config.v1/)
* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) i
[kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) i
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
* [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
* [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) i
[Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/)
* [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/)
## API konfiguracji dla kubeadm

View File

@ -15,7 +15,7 @@ tags:
<!--more-->
Kubernetes obsługuje różne *container runtimes*: {{< glossary_tooltip term_id="docker">}},
Kubernetes obsługuje różne *container runtimes*:
{{< glossary_tooltip term_id="containerd" >}}, {{< glossary_tooltip term_id="cri-o" >}}
oraz każdą implementację zgodną z [Kubernetes CRI (Container Runtime
Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).

View File

@ -46,16 +46,17 @@ Przed zapoznaniem się z samouczkami warto stworzyć zakładkę do
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Klastry
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [seccomp](/docs/tutorials/clusters/seccomp/)
## Serwisy
* [Using Source IP](/docs/tutorials/services/source-ip/)
## Bezpieczeństwo
* [Apply Pod Security Standards at Cluster level](/docs/tutorials/security/cluster-level-pss/)
* [Apply Pod Security Standards at Namespace level](/docs/tutorials/security/ns-level-pss/)
* [AppArmor](/docs/tutorials/security/apparmor/)
* [seccomp](/docs/tutorials/security/seccomp/)
## {{% heading "whatsnext" %}}
Jeśli chciałbyś napisać nowy samouczek, odwiedź

View File

@ -2,3 +2,5 @@
title: Tworzenie klastra
weight: 10
---
Poznaj {{< glossary_tooltip text="klaster" term_id="cluster" length="all" >}} Kubernetesa i naucz się, jak stworzyć jego prostą wersję przy pomocy Minikube.

View File

@ -72,7 +72,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
<p><b>Warstwa sterowania odpowiada za zarządzanie klastrem.</b> Warstwa sterowania koordynuje wszystkie działania klastra, takie jak zlecanie uruchomienia aplikacji, utrzymywanie pożądanego stanu aplikacji, skalowanie aplikacji i instalowanie nowych wersji.</p>
<p><b>Węzeł to maszyna wirtualna (VM) lub fizyczny serwer, który jest maszyną roboczą w klastrze Kubernetes.</b> Na każdym węźle działa Kubelet, agent zarządzający tym węzłem i komunikujący się z warstwą sterowania Kubernetesa. Węzeł zawiera także narzędzia do obsługi kontenerów, takie jak containerd lub Docker. Klaster Kubernetes w środowisku produkcyjnym powinien składać się minimum z trzech węzłów.</p>
<p><b>Węzeł to maszyna wirtualna (VM) lub fizyczny serwer, który jest maszyną roboczą w klastrze Kubernetes.</b> Na każdym węźle działa Kubelet, agent zarządzający tym węzłem i komunikujący się z warstwą sterowania Kubernetesa. Węzeł zawiera także narzędzia do obsługi kontenerów, takie jak containerd lub Docker. Klaster Kubernetes w środowisku produkcyjnym powinien składać się minimum z trzech węzłów, ponieważ w przypadku awarii jednego węzła traci się zarówno element etcd, jak i warstwy sterowania i w ten sposób minimalną nadmiarowość (<em>redundancy</em>). Dodanie kolejnych węzłów warstwy sterowania może temu zapobiec.</p>
</div>
<div class="col-md-4">

View File

@ -7,7 +7,6 @@ feature:
<a href="https://aws.amazon.com/products/storage/">AWS</a>
之类公有云提供商所提供的存储或者诸如 NFS、iSCSI、Gluster、Ceph、Cinder
或 Flocker 这类网络存储系统。
content_type: concept
weight: 20
---
@ -23,7 +22,6 @@ feature:
title: Storage orchestration
description: >
Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as <a href="https://cloud.google.com/storage/">GCP</a> or <a href="https://aws.amazon.com/products/storage/">AWS</a>, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
content_type: concept
weight: 20
-->
@ -31,9 +29,9 @@ weight: 20
<!-- overview -->
<!--
This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
This document describes _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
-->
本文描述 Kubernetes 中 _持久卷Persistent Volume_ 的当前状态
本文描述 Kubernetes 中 _持久卷Persistent Volume_
建议先熟悉[卷Volume](/zh/docs/concepts/storage/volumes/)的概念。
<!-- body -->
@ -446,21 +444,21 @@ to `Retain`, including cases where you are reusing an existing PV.
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
<!--
Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand
Support for expanding PersistentVolumeClaims (PVCs) is enabled by default. You can expand
the following types of volumes:
-->
对扩充 PVC 申领的支持默认处于启用状态。你可以扩充以下类型的卷:
* gcePersistentDisk
* azureDisk
* azureFile
* awsElasticBlockStore
* Cinder
* cinder (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume (deprecated)
* gcePersistentDisk
* glusterfs
* rbd
* Azure File
* Azure Disk
* Portworx
* FlexVolumes
{{< glossary_tooltip text="CSI" term_id="csi" >}}
* portworxVolume
<!--
You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
@ -491,6 +489,24 @@ new PersistentVolume is never created to satisfy the claim. Instead, an existing
Kubernetes 不会创建新的 PV 卷来满足此申领的请求。
与之相反,现有的卷会被调整大小。
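例如,可以直接修改 PVC 的 `spec.resources.requests.storage` 来请求更大的空间。下面的命令仅作示意(假设 PVC 名为 `my-pvc`,且其存储类已设置 `allowVolumeExpansion: true`):
```shell
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```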
<!--
Directly editing the size of a PersistentVolume can prevent an automatic resize of that volume.
If you edit the capacity of a PersistentVolume, and then edit the `.spec` of a matching
PersistentVolumeClaim to make the size of the PersistentVolumeClaim match the PersistentVolume,
then no storage resize happens.
The Kubernetes control plane will see that the desired state of both resources matches,
conclude that the backing volume size has been manually
increased and that no resize is necessary.
-->
{{< warning >}}
直接编辑 PersistentVolume 的大小可以阻止该卷自动调整大小。
如果对 PersistentVolume 的容量进行编辑,然后又将其所对应的
PersistentVolumeClaim 的 `.spec` 进行编辑,使该 PersistentVolumeClaim
的大小匹配 PersistentVolume 的话,则不会发生存储大小的调整。
Kubernetes 控制平面将看到两个资源的所需状态匹配,并认为其后备卷的大小
已被手动增加,无需调整。
{{< /warning >}}
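To observe the mismatch the warning describes without editing either object, one option is to compare the size recorded on the PersistentVolume with the size the claim has actually been granted (object names are placeholders):

```bash
# Size recorded on the PersistentVolume object.
kubectl get pv my-pv -o jsonpath='{.spec.capacity.storage}{"\n"}'
# Size the claim has actually been granted so far.
kubectl get pvc my-pvc -o jsonpath='{.status.capacity.storage}{"\n"}'
```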
<!--
#### CSI Volume expansion
-->
@ -518,8 +534,8 @@ When a volume contains a file system, the file system is only resized when a new
the PersistentVolumeClaim in `ReadWrite` mode. File system expansion is either done when a Pod is starting up
or when a Pod is running and the underlying file system supports online expansion.
FlexVolumes allow resize if the driver is set with the `RequiresFSResize` capability to `true`.
The FlexVolume can be resized on Pod restart.
FlexVolumes (deprecated since Kubernetes v1.23) allow resize if the driver is configured with the
`RequiresFSResize` capability to `true`. The FlexVolume can be resized on Pod restart.
-->
当卷中包含文件系统时,只有在 Pod 使用 `ReadWrite` 模式来使用 PVC 申领的
情况下才能重设其文件系统的大小。
@ -527,7 +543,7 @@ The FlexVolume can be resized on Pod restart.
扩充的前提下在 Pod 运行期间完成。
如果 FlexVolumes 的驱动将 `RequiresFSResize` 能力设置为 `true`,则该
FlexVolume 卷可以在 Pod 重启期间调整大小。
FlexVolume 卷(于 Kubernetes v1.23 弃用)可以在 Pod 重启期间调整大小。
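When the file system can only be grown at Pod startup, the claim typically carries a `FileSystemResizePending` condition until a Pod remounts it. A hedged way to inspect this (claim name is a placeholder):

```bash
# A FileSystemResizePending condition means the backing volume has grown but
# the file system will only be resized once a Pod (re)mounts the claim.
kubectl get pvc my-pvc \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```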
<!--
#### Resizing an in-use PersistentVolumeClaim
@ -582,10 +598,20 @@ Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume
<!--
#### Recovering from Failure when Expanding Volumes
If expanding underlying storage fails, the cluster administrator can manually recover the Persistent Volume Claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention.
If a user specifies a new size that is too big to be satisfied by underlying storage system, expansion of PVC will be continuously retried until user or cluster administrator takes some action. This can be undesirable and hence Kubernetes provides following methods of recovering from such failures.
-->
#### 处理扩充卷过程中的失败 {#recovering-from-failure-when-expanding-volumes}
如果用户指定的新大小过大,底层存储系统无法满足,PVC 的扩展将不断重试,
直到用户或集群管理员采取一些措施。这种情况是不希望发生的,因此 Kubernetes
提供了以下从此类故障中恢复的方法。
{{< tabs name="recovery_methods" >}}
{{% tab name="集群管理员手动处理" %}}
<!--
If expanding underlying storage fails, the cluster administrator can manually recover the Persistent Volume Claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention.
-->
如果扩充下层存储的操作失败,集群管理员可以手动地恢复 PVC 申领的状态并
取消重设大小的请求。否则,在没有管理员干预的情况下,控制器会反复重试
重设大小的操作。
@ -605,6 +631,53 @@ If expanding underlying storage fails, the cluster administrator can manually re
这一操作将把新的 PVC 对象绑定到现有的 PV 卷。
5. 不要忘记恢复 PV 卷上设置的回收策略。
{{% /tab %}}
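A minimal sketch of the manual recovery this tab outlines (steps 4 and 5 are the ones visible in the hunk above); the PV/PVC names and the restored reclaim policy are placeholders:

```bash
# 1. Protect the data: switch the bound PV to the Retain reclaim policy.
kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Delete the PVC whose expansion keeps failing.
kubectl delete pvc my-pvc

# 3. Remove the claimRef from the PV so a fresh claim can bind to it.
kubectl patch pv my-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

# 4. Re-create the PVC with a smaller request and spec.volumeName set to
#    "my-pv" (manifest omitted here), then
# 5. restore the reclaim policy the PV had before step 1.
kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```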
{{% tab name="通过请求扩展为更小尺寸" %}}
{{% feature-state for_k8s_version="v1.23" state="alpha" %}}
<!--
Recovery from failing PVC expansion by users is available as an alpha feature since Kubernetes 1.23. The `RecoverVolumeExpansionFailure` feature must be enabled for this feature to work. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
-->
{{< note >}}
Kubernetes 从 1.23 版本开始将允许用户恢复失败的 PVC 扩展这一能力作为
alpha 特性支持。 `RecoverVolumeExpansionFailure` 必须被启用以允许使用此功能。
可参考[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
文档了解更多信息。
{{< /note >}}
<!--
If the feature gates `ExpandPersistentVolumes` and `RecoverVolumeExpansionFailure` are both
enabled in your cluster, and expansion has failed for a PVC, you can retry expansion with a
smaller size than the previously requested value. To request a new expansion attempt with a
smaller proposed size, edit `.spec.resources` for that PVC and choose a value that is less than the
value you previously tried.
This is useful if expansion to a higher value did not succeed because of capacity constraint.
If that has happened, or you suspect that it might have, you can retry expansion by specifying a
size that is within the capacity limits of underlying storage provider. You can monitor status of resize operation by watching `.status.resizeStatus` and events on the PVC.
-->
如果集群中的特性门控 `ExpandPersistentVolumes``RecoverVolumeExpansionFailure`
都已启用,在 PVC 的扩展发生失败时,你可以使用比先前请求的值更小的尺寸来重试扩展。
要使用一个更小的尺寸尝试请求新的扩展,请编辑该 PVC 的 `.spec.resources` 并选择
一个比你之前所尝试的值更小的值。
如果由于容量限制而无法成功扩展至更高的值,这将很有用。
如果发生了这种情况,或者你怀疑可能发生了这种情况,你可以通过指定一个在底层存储供应容量
限制内的尺寸来重试扩展。你可以通过查看 `.status.resizeStatus` 以及 PVC 上的事件
来监控调整大小操作的状态。
<!--
Note that,
although you can specify a lower amount of storage than what was requested previously,
the new value must still be higher than `.status.capacity`.
Kubernetes does not support shrinking a PVC to less than its current size.
-->
请注意,
尽管你可以指定比之前的请求更低的存储量,新值必须仍然高于 `.status.capacity`
Kubernetes 不支持将 PVC 缩小到小于其当前的尺寸。
{{% /tab %}}
{{% /tabs %}}
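Putting the retry-with-a-smaller-size path above into commands, a hedged sketch (claim name and sizes are placeholders; requires the feature gates named in the note):

```bash
# Retry with a smaller proposed size; it must still exceed .status.capacity.
kubectl patch pvc my-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'

# Follow the retry via the alpha resizeStatus field and the claim's events.
kubectl get pvc my-pvc -o jsonpath='{.status.resizeStatus}{"\n"}'
kubectl describe pvc my-pvc
```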
<!--
## Types of Persistent Volumes
@ -622,7 +695,6 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
* [`csi`](/docs/concepts/storage/volumes/#csi) - Container Storage Interface (CSI)
* [`fc`](/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) storage
* [`flexVolume`](/docs/concepts/storage/volumes/#flexVolume) - FlexVolume
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
* [`glusterfs`](/docs/concepts/storage/volumes/#glusterfs) - Glusterfs volume
* [`hostPath`](/docs/concepts/storage/volumes/#hostpath) - HostPath volume
@ -642,7 +714,6 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
* [`cephfs`](/zh/docs/concepts/storage/volumes/#cephfs) - CephFS volume
* [`csi`](/zh/docs/concepts/storage/volumes/#csi) - 容器存储接口 (CSI)
* [`fc`](/zh/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) 存储
* [`flexVolume`](/zh/docs/concepts/storage/volumes/#flexVolume) - FlexVolume
* [`gcePersistentDisk`](/zh/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘
* [`glusterfs`](/zh/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷
* [`hostPath`](/zh/docs/concepts/storage/volumes/#hostpath) - HostPath 卷
@ -660,6 +731,8 @@ The following types of PersistentVolume are deprecated. This means that support
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
(**deprecated** in v1.18)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
(**deprecated** in v1.23)
* [`flocker`](/docs/concepts/storage/volumes/#flocker) - Flocker storage
(**deprecated** in v1.22)
* [`quobyte`](/docs/concepts/storage/volumes/#quobyte) - Quobyte volume
@ -671,6 +744,7 @@ The following types of PersistentVolume are deprecated. This means that support
以下的持久卷已被弃用。这意味着当前仍是支持的,但是 Kubernetes 将来的发行版会将其移除。
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder(OpenStack 块存储)(于 v1.18 **弃用**)
* [`flexVolume`](/zh/docs/concepts/storage/volumes/#flexVolume) - FlexVolume(于 v1.23 **弃用**)
* [`flocker`](/docs/concepts/storage/volumes/#flocker) - Flocker 存储(于 v1.22 **弃用**)
* [`quobyte`](/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷
  (于 v1.22 **弃用**)

View File

@ -400,6 +400,6 @@ running the build targets, see the following guides:
要手动设置所需的构造仓库,执行构建目标,以生成各个参考文档,可参考下面的指南:
* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/)
* [为 kubeclt 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/)
* [为 kubectl 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/)
* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/)

View File

@ -3,6 +3,6 @@ The topics in the [User Guide](/docs/user-guide/) section of the Kubernetes docs
are being moved to the [Tasks](/docs/tasks/), [Tutorials](/docs/tutorials/), and
[Concepts](/docs/concepts) sections. The content in this topic has moved to:
-->
Kubernetes文档中[使用手册](/zh/docs/user-guide/)部分中的主题被移动到
Kubernetes文档中[用户指南](/zh/docs/user-guide/)部分中的主题被移动到
[任务](/zh/docs/tasks/)、[教程](/zh/docs/tutorials/)和[概念](/zh/docs/concepts)节。
本主题内容已移至:

View File

@ -1,11 +1,18 @@
schedules:
- release: 1.23
  releaseDate: 2021-12-07
  next: 1.23.2
  cherryPickDeadline: 2022-01-14
  targetDate: 2022-01-19
  next: 1.23.4
  cherryPickDeadline: 2022-02-11
  targetDate: 2022-02-16
  endOfLifeDate: 2023-02-28
  previousPatches:
  - release: 1.23.3
    cherryPickDeadLine: 2022-01-24
    targetDate: 2022-01-25
    note: "Out-of-Bound Release https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY"
  - release: 1.23.2
    cherryPickDeadline: 2022-01-14
    targetDate: 2022-01-19
  - release: 1.23.1
    cherryPickDeadline: 2021-12-14
    targetDate: 2021-12-16