Merge master into dev-1.22 to keep in sync

Victor Palade 2021-07-27 18:47:27 +03:00
commit 3c95e6a96b
159 changed files with 4134 additions and 2152 deletions

View File

@ -1,11 +1,9 @@
aliases:
sig-docs-blog-owners: # Approvers for blog content
- castrojo
- kbarnard10
- onlydole
- mrbobbytables
sig-docs-blog-reviewers: # Reviewers for blog content
- castrojo
- kbarnard10
- mrbobbytables
- onlydole
@ -31,9 +29,7 @@ aliases:
- reylejano
- savitharaghunathan
- sftim
- steveperry-53
- tengqm
- zparnold
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
- celestehorgan
@ -44,9 +40,7 @@ aliases:
- onlydole
- rajeshdeshpande02
- sftim
- steveperry-53
- tengqm
- zparnold
sig-docs-es-owners: # Admins for Spanish content
- raelga
- electrocucaracha
@ -138,10 +132,11 @@ aliases:
- ClaudiaJKang
- gochist
- ianychoi
- seokho-son
- ysyukr
- jihoon-seo
- pjhwa
- seokho-son
- yoonian
- ysyukr
sig-docs-leads: # Website chairs and tech leads
- irvifa
- jimangel
@ -163,6 +158,7 @@ aliases:
# zhangxiaoyu-zidif
sig-docs-zh-reviews: # PR reviews for Chinese content
- chenrui333
- chenxuc
- howieyuen
- idealhack
- pigletfly
@ -252,10 +248,11 @@ aliases:
release-engineering-reviewers:
- ameukam # Release Manager Associate
- jimangel # Release Manager Associate
- markyjackson-taulia # Release Manager Associate
- mkorbi # Release Manager Associate
- palnabarun # Release Manager Associate
- onlydole # Release Manager Associate
- sethmccombs # Release Manager Associate
- thejoycekung # Release Manager Associate
- verolop # Release Manager Associate
- wilsonehusin # Release Manager Associate
- wilsonehusin # Release Manager Associate

View File

@ -1,6 +1,6 @@
# The Kubernetes documentation (Japanese)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
This repository contains all the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for your interest in contributing!

View File

@ -1,6 +1,6 @@
# The Kubernetes documentation (Korean)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). Thank you for contributing!

View File

@ -1,6 +1,6 @@
# The Kubernetes project documentation (Polish)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
In this repository you will find everything you need to build the [Kubernetes website and documentation](https://kubernetes.io/). We are delighted that you want to take part in creating it!

View File

@ -1,6 +1,6 @@
# The Kubernetes documentation (Portuguese)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository contains all the resources needed to build the [Kubernetes website and documentation](https://kubernetes.io/). We are very pleased that you want to contribute!

View File

@ -26,7 +26,7 @@ Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenf
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, Cisco-SDN/ACI) for various use cases as well as a comprehensive policy framework. The Contiv project is fully [open source](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installations.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestrators such as Kubernetes, OpenShift, OpenStack, and Mesos, and provide isolation modes for virtual machines, containers (or Pods), and bare metal workloads.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that enables multiple networks in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a multi-plugin for multiple network support, working with all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK, and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.

View File

@ -104,7 +104,7 @@ Master and Worker nodes should be protected from overload and resource exhaustio
Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Having these too high can lead to request limit exceeded and out of memory errors.
On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/concepts/scheduling-eviction/node-pressure-eviction/) conditions can be configured.
## Security

View File

@ -0,0 +1,275 @@
---
layout: blog
title: "Kubernetes API and Feature Removals In 1.22: Here's What You Need To Know"
date: 2021-07-14
slug: upcoming-changes-in-kubernetes-1-22
---
**Authors**: Krishna Kilari (Amazon Web Services), Tim Bannister (The Scale Factory)
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old APIs they replace are deprecated, and eventually removed.
See [Kubernetes API removals](#kubernetes-api-removals) to read more about Kubernetes'
policy on removing APIs.
We want to make sure you're aware of some upcoming removals. These are
beta APIs that you can use in current, supported Kubernetes versions,
and they are already deprecated. The reason for all of these removals
is that they have been superseded by a newer, stable (“GA”) API.
Kubernetes 1.22, due for release in August 2021, will remove a number of deprecated
APIs.
[Kubernetes 1.22 Release Information](https://www.kubernetes.dev/resources/release/)
has details on the schedule for the v1.22 release.
## API removals for Kubernetes v1.22 {#api-changes}
The **v1.22** release will stop serving the API versions we've listed immediately below.
These are all beta APIs that were previously deprecated in favor of newer and more stable
API versions.
<!-- sorted by API group -->
* Beta versions of the `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` API (the **admissionregistration.k8s.io/v1beta1** API versions)
* The beta `CustomResourceDefinition` API (**apiextensions.k8s.io/v1beta1**)
* The beta `APIService` API (**apiregistration.k8s.io/v1beta1**)
* The beta `TokenReview` API (**authentication.k8s.io/v1beta1**)
* Beta API versions of `SubjectAccessReview`, `LocalSubjectAccessReview`, `SelfSubjectAccessReview` (API versions from **authorization.k8s.io/v1beta1**)
* The beta `CertificateSigningRequest` API (**certificates.k8s.io/v1beta1**)
* The beta `Lease` API (**coordination.k8s.io/v1beta1**)
* All beta `Ingress` APIs (the **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions)
The Kubernetes documentation covers these
[API removals for v1.22](/docs/reference/using-api/deprecation-guide/#v1-22) and explains
how each of those APIs change between beta and stable.
## What to do
We're going to run through each of the resources that are affected by these removals
and explain the steps you'll need to take.
`Ingress`
: Migrate to use the **networking.k8s.io/v1**
[Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API,
[available since v1.19](/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/#ingress-graduates-to-general-availability).
The related API [IngressClass](/docs/reference/kubernetes-api/service-resources/ingress-class-v1/)
is designed to complement the [Ingress](/docs/concepts/services-networking/ingress/)
concept, allowing you to configure multiple kinds of Ingress within one cluster.
If you're currently using the deprecated
[`kubernetes.io/ingress.class`](https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-ingress-class-deprecated)
annotation, plan to switch to using the `.spec.ingressClassName` field instead.
On any cluster running Kubernetes v1.19 or later, you can use the v1 API to
retrieve or update existing Ingress objects, even if they were created using an
older API version.
When you convert an Ingress to the v1 API, you should review each rule in that Ingress.
Older Ingresses use the legacy `ImplementationSpecific` path type. Instead of `ImplementationSpecific`, switch [path matching](/docs/concepts/services-networking/ingress/#path-types) to either `Prefix` or `Exact`. One of the benefits of moving to these alternative path types is that it becomes easier to migrate between different Ingress classes.
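For reference, here is a minimal sketch of what an Ingress looks like once it targets the v1 API. All names below (`example-ingress`, `example-service`, the `example-nginx` IngressClass) are placeholders for illustration only, not part of any real cluster:
```bash
# A sketch of a networking.k8s.io/v1 Ingress applied inline; all names are placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: example-nginx   # replaces the kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix            # prefer Prefix or Exact over ImplementationSpecific
        backend:
          service:
            name: example-service
            port:
              number: 80
EOF
```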
**ⓘ** As well as upgrading _your_ own use of the Ingress API as a client, make sure that
every ingress controller that you use is compatible with the v1 Ingress API.
Read [Ingress Prerequisites](/docs/concepts/services-networking/ingress/#prerequisites)
for more context about Ingress and ingress controllers.
`ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration`
: Migrate to use the **admissionregistration.k8s.io/v1** API versions of
[ValidatingWebhookConfiguration](/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/)
and [MutatingWebhookConfiguration](/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/),
available since v1.16.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
`CustomResourceDefinition`
: Migrate to use the [CustomResourceDefinition](/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/)
**apiextensions.k8s.io/v1** API, available since v1.16.
You can use the v1 API to retrieve or update existing objects, even if they were created
using an older API version. If you defined any custom resources in your cluster, those
are still served after you upgrade.
If you're using external CustomResourceDefinitions, you can use
[`kubectl convert`](#kubectl-convert) to translate existing manifests to use the newer API.
Because there are some functional differences between beta and stable CustomResourceDefinitions,
our advice is to test out each one to make sure it works how you expect after the upgrade.
`APIService`
: Migrate to use the **apiregistration.k8s.io/v1** [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/)
API, available since v1.10.
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
If you already have API aggregation using an APIService object, this aggregation continues
to work after you upgrade.
`TokenReview`
: Migrate to use the **authentication.k8s.io/v1** [TokenReview](/docs/reference/kubernetes-api/authentication-resources/token-review-v1/)
API, available since v1.10.
As well as serving this API via HTTP, the Kubernetes API server uses the same format to
[send](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
TokenReviews to webhooks. The v1.22 release continues to use the v1beta1 API for TokenReviews
sent to webhooks by default. See [Looking ahead](#looking-ahead) for some specific tips about
switching to the stable API.
`SubjectAccessReview`, `SelfSubjectAccessReview` and `LocalSubjectAccessReview`
: Migrate to use the **authorization.k8s.io/v1** versions of those
[authorization APIs](/docs/reference/kubernetes-api/authorization-resources/), available since v1.6.
`CertificateSigningRequest`
: Migrate to use the **certificates.k8s.io/v1**
[CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/)
API, available since v1.19.
You can use the v1 API to retrieve or update existing objects, even if they were created
using an older API version. Existing issued certificates retain their validity when you upgrade.
`Lease`
: Migrate to use the **coordination.k8s.io/v1** [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)
API, available since v1.14.
You can use the v1 API to retrieve or update existing objects, even if they were created
using an older API version.
### `kubectl convert`
There is a plugin to `kubectl` that provides the `kubectl convert` subcommand.
It's an official plugin that you can download as part of Kubernetes.
See [Download Kubernetes](/releases/download/) for more details.
You can use `kubectl convert` to update manifest files to use a different API
version. For example, if you have a manifest in source control that uses the beta
Ingress API, you can check that definition out,
and run
`kubectl convert -f <manifest> --output-version <group>/<version>`.
You can use the `kubectl convert` command to automatically convert an
existing manifest.
For example, to convert an older Ingress definition to
`networking.k8s.io/v1`, you can run:
```bash
kubectl convert -f ./legacy-ingress.yaml --output-version networking.k8s.io/v1
```
The automatic conversion uses a similar technique to how the Kubernetes control plane
updates objects that were originally created using an older API version. Because it's
a mechanical conversion, you might need to go in and change the manifest to adjust
defaults etc.
### Rehearse for the upgrade
If you manage your cluster's API server component, you can try out these API
removals before you upgrade to Kubernetes v1.22.
To do that, add the following to the kube-apiserver command line arguments:
`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1=false,apiregistration.k8s.io/v1beta1=false,authentication.k8s.io/v1beta1=false,authorization.k8s.io/v1beta1=false,certificates.k8s.io/v1beta1=false,coordination.k8s.io/v1beta1=false,extensions/v1beta1/ingresses=false,networking.k8s.io/v1beta1=false`
(as a side effect, this also turns off v1beta1 of EndpointSlice - watch out for
that when you're testing).
Once you've switched all the kube-apiservers in your cluster to use that setting,
those beta APIs are removed. You can test that API clients (`kubectl`, deployment
tools, custom controllers etc) still work how you expect, and you can revert if
you need to without having to plan a more disruptive downgrade.
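One way to check that the rehearsal worked is to query the API server directly. This is a sketch that assumes `kubectl` access to the cluster and uses the beta Ingress API as the example:
```bash
# Confirm the beta APIs are no longer served after restarting kube-apiserver
# with the --runtime-config shown above.
kubectl api-versions | grep networking.k8s.io
# expect to see only networking.k8s.io/v1 in the output

kubectl get ingresses.v1beta1.networking.k8s.io --all-namespaces
# expect an error such as "the server doesn't have a resource type ..."
```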
### Advice for software authors
Maybe you're reading this because you're a developer of an addon or other
component that integrates with Kubernetes?
If you develop an Ingress controller, webhook authenticator, an API aggregation, or
any other tool that relies on these deprecated APIs, you should already have started
to switch your software over.
You can use the tips in
[Rehearse for the upgrade](#rehearse-for-the-upgrade) to run your own Kubernetes
cluster that only uses the new APIs, and make sure that your code works OK.
For your documentation, make sure readers are aware of any steps they should take
for the Kubernetes v1.22 upgrade.
Where possible, give your users a hand to adopt the new APIs early - perhaps in a
test environment - so they can give you feedback about any problems.
There are some [more deprecations](#looking-ahead) coming in Kubernetes v1.25,
so plan to have those covered too.
## Kubernetes API removals
Here's some background about why Kubernetes removes some APIs, and also a promise
about _stable_ APIs in Kubernetes.
Kubernetes follows a defined
[deprecation policy](/docs/reference/using-api/deprecation-policy/) for its
features, including the Kubernetes API. That policy allows for replacing stable
(“GA”) APIs from Kubernetes. Importantly, this policy means that a stable API can only
be deprecated when a newer stable version of that same API is available.
That stability guarantee matters: if you're using a stable Kubernetes API, there
won't ever be a new version released that forces you to switch to an alpha or beta
feature.
Earlier stages are different. Alpha features are under test and potentially
incomplete. Almost always, alpha features are disabled by default.
Kubernetes releases can and do remove alpha features that haven't worked out.
After alpha comes beta. These features are typically enabled by default; if the
testing works out, the feature can graduate to stable. If not, it might need
a redesign.
Last year, Kubernetes officially
[adopted](/blog/2020/08/21/moving-forward-from-beta/#avoiding-permanent-beta)
a policy for APIs that have reached their beta phase:
> For Kubernetes REST APIs, when a new feature's API reaches beta, that starts
> a countdown. The beta-quality API now has three releases &hellip;
> to either:
>
> * reach GA, and deprecate the beta, or
> * have a new beta version (and deprecate the previous beta).
_At the time of that article, three Kubernetes releases equated to roughly nine
calendar months. Later that same month, Kubernetes
adopted a new
release cadence of three releases per calendar year, so the countdown period is
now roughly twelve calendar months._
Whether an API removal is because of a beta feature graduating to stable, or
because that API hasn't proved successful, Kubernetes will continue to remove
APIs by following its deprecation policy and making sure that migration options
are documented.
### Looking ahead
There's a setting that's relevant if you use webhook authentication checks.
A future Kubernetes release will switch to sending TokenReview objects
to webhooks using the `authentication.k8s.io/v1` API by default. At the moment,
the default is to send `authentication.k8s.io/v1beta1` TokenReviews to webhooks,
and that's still the default for Kubernetes v1.22.
However, you can switch over to the stable API right now if you want:
add `--authentication-token-webhook-version=v1` to the command line options for
the kube-apiserver, and check that webhooks for authentication still work how you
expected.
Once you're happy it works OK, you can leave the `--authentication-token-webhook-version=v1`
option set across your control plane.
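If your control plane runs kube-apiserver as a static Pod, a quick check might look like the sketch below; the manifest path is an assumption based on kubeadm defaults, so adjust it to your setup:
```bash
# Check whether the flag is already set on a control plane node.
grep -- '--authentication-token-webhook-version' /etc/kubernetes/manifests/kube-apiserver.yaml \
  || echo "flag not set; authentication webhooks still receive v1beta1 TokenReviews"
```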
The **v1.25** release that's planned for next year will stop serving beta versions of
several Kubernetes APIs that are stable right now and have been for some time.
The same v1.25 release will **remove** PodSecurityPolicy, which is deprecated and won't
graduate to stable. See
[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)
for more information.
The official [list of API removals](/docs/reference/using-api/deprecation-guide/#v1-25)
planned for Kubernetes 1.25 is:
* The beta `CronJob` API (**batch/v1beta1**)
* The beta `EndpointSlice` API (**discovery.k8s.io/v1beta1**)
* The beta `PodDisruptionBudget` API (**policy/v1beta1**)
* The beta `PodSecurityPolicy` API (**policy/v1beta1**)
## Want to know more?
Deprecations are announced in the Kubernetes release notes. You can see the announcements
of pending deprecations in the release notes for
[1.19](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecations),
[1.20](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation),
and [1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation).
For information on the process of deprecation and removal, check out the official Kubernetes
[deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api)
document.

View File

@ -0,0 +1,65 @@
---
layout: blog
title: "Spotlight on SIG Usability"
date: 2021-07-15
slug: sig-usability-spotlight-2021
---
**Author:** Kunal Kushwaha, Civo
## Introduction
Are you interested in learning about what [SIG Usability](https://github.com/kubernetes/community/tree/master/sig-usability) does and how you can get involved? Well, you're at the right place. SIG Usability is all about making Kubernetes more accessible to new folks, and its main activity is conducting user research for the community. In this blog, we have summarized our conversation with [Gaby Moreno](https://twitter.com/morengab), who walks us through the various aspects of being a part of the SIG and shares some insights about how others can get involved.
Gaby is a co-lead for SIG Usability. She works as a Product Designer at IBM and enjoys working on the user experience of open, hybrid cloud technologies like Kubernetes, OpenShift, Terraform, and Cloud Foundry.
## A summary of our conversation
### Q. Could you tell us a little about what SIG Usability does?
A. SIG Usability at a high level started because there was no dedicated user experience team for Kubernetes. The extent of SIG Usability is focussed on the end-client ease of use of the Kubernetes project. The main activity is user research for the community, which includes speaking to Kubernetes users.
This covers points like user experience and accessibility. The objectives of the SIG are to ensure that the Kubernetes project is maximally usable by people from a wide range of backgrounds and abilities, for example by incorporating internationalization and ensuring the accessibility of documentation.
### Q. Why should new and existing contributors consider joining SIG Usability?
A. There are plenty of areas where new contributors can begin. For example:
- User research projects, where people can help understand the usability of the end-user experiences, including error messages, end-to-end tasks, etc.
- Accessibility guidelines for Kubernetes community artifacts, examples include: internationalization of documentation, color choices for people with color blindness, ensuring compatibility with screen reader technology, user interface design for core components with user interfaces, and more.
### Q. What do you do to help new contributors get started?
A. New contributors can get started by shadowing one of the user interviews, going through user interview transcripts, analyzing them, and designing surveys.
SIG Usability is also open to new project ideas. If you have an idea, we'll do what we can to support it. There are regular SIG Meetings where people can ask their questions live. These meetings are also recorded for those who may not be able to attend. As always, you can reach out to us on Slack as well.
### Q. What does the survey include?
A. In simple terms, the survey gathers information about how people use Kubernetes, such as trends in learning to deploy a new system, error messages they receive, and workflows.
One of our goals is to standardize the responses accordingly. The ultimate goal is to analyze survey responses for important user stories whose needs aren't being met.
### Q. Are there any particular skills you'd like to recruit for? What skills are contributors to SIG Usability likely to learn?
A. Although contributing to SIG Usability does not have any prerequisites as such, experience with user research, qualitative research, or conducting interviews is a definite plus. Quantitative research, like survey design and screening, is also helpful and something that we expect contributors to learn.
### Q. What are you getting positive feedback on, and what's coming up next for SIG Usability?
A. We have had new members joining and coming to monthly meetings regularly, showing interest in becoming contributors and helping the community. We have also had a lot of people reach out to us via Slack to express their interest in the SIG.
Currently, we are focused on finishing the study mentioned in our [talk](https://www.youtube.com/watch?v=Byn0N_ZstE0), also our project for this year. We are always happy to have new contributors join us.
### Q: Any closing thoughts/resources you'd like to share?
A. We love meeting new contributors and assisting them in investigating different Kubernetes project spaces. We will work and team up with other SIGs to facilitate engagement with end-users, run studies, and help them integrate accessible design practices into their development practices.
Here are some resources for you to get started:
- [GitHub](https://github.com/kubernetes/community/tree/master/sig-usability)
- [Mailing list](https://groups.google.com/g/kubernetes-sig-usability)
- [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fusability)
- [Slack](https://slack.k8s.io/)
- [Slack channel #sig-usability](https://kubernetes.slack.com/archives/CLC5EF63T)
## Wrap Up
SIG Usability hosted a [KubeCon talk](https://www.youtube.com/watch?v=Byn0N_ZstE0) about studying Kubernetes users' experiences. The talk focuses on updates to the user study projects, understanding who is using Kubernetes, what they are trying to achieve, how the project is addressing their needs, and where we need to improve the project and the client experience. Join the SIG's update to find out about the most recent research results, what the plans are for the forthcoming year, and how to get involved in the upstream usability team as a contributor!

View File

@ -0,0 +1,83 @@
---
layout: blog
title: "Kubernetes Release Cadence Change: Here's What You Need To Know"
date: 2021-07-20
slug: new-kubernetes-release-cadence
---
**Authors**: Celeste Horgan, Adolfo García Veytia, James Laverack, Jeremy Rickard
On April 23, 2021, the Release Team merged a Kubernetes Enhancement Proposal (KEP) changing the Kubernetes release cycle from four releases a year (once a quarter) to three releases a year.
This blog post provides a high level overview about what this means for the Kubernetes community's contributors and maintainers.
## What's changing and when
Starting with the [Kubernetes 1.22 release](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.22), a lightweight policy will drive the creation of each release schedule. This policy states:
* The first Kubernetes release of a calendar year should start at the second or third
week of January, to give contributors more time to return from the
end-of-year holidays.
* The last Kubernetes release of a calendar year should be finished by the middle of
December.
* A Kubernetes release cycle has a length of approximately 15 weeks.
* The week of KubeCon + CloudNativeCon is not considered a 'working week' for SIG Release. The Release Team will not hold meetings or make decisions in this period.
* An explicit SIG Release break of at least two weeks between each cycle will
be enforced.
As a result, Kubernetes will follow a three releases per year cadence. Kubernetes 1.23 will be the final release of the 2021 calendar year. This new policy results in a very predictable release schedule, allowing us to forecast upcoming release dates:
*Proposed Kubernetes Release Schedule for the remainder of 2021*
| Week Number in Year | Release Number | Release Week | Note |
| -------- | -------- | -------- | -------- |
| 35 | 1.23 | 1 (August 23) | |
| 50 | 1.23 | 16 (December 07) | KubeCon + CloudNativeCon NA Break (Oct 11-15) |
*Proposed Kubernetes Release Schedule for 2022*
| Week Number in Year | Release Number | Release Week | Note |
| -------- | -------- | -------- | -------- |
| 1 | 1.24 | 1 (January 03) | |
| 15 | 1.24 | 15 (April 12) | |
| 17 | 1.25 | 1 (April 26) | KubeCon + CloudNativeCon EU likely to occur |
| 32 | 1.25 | 15 (August 09) | |
| 34 | 1.26 | 1 (August 22) | KubeCon + CloudNativeCon NA likely to occur |
| 49 | 1.26 | 14 (December 06) | |
These proposed dates reflect only the start and end dates, and they are subject to change. The Release Team will select dates for enhancement freeze, code freeze, and other milestones at the start of each release. For more information on these milestones, please refer to the [release phases](https://www.k8s.dev/resources/release/#phases) documentation. Feedback from prior releases will feed into this process.
## What this means for end users
The major change end users will experience is a slower release cadence and a slower rate of enhancement graduation. Kubernetes release artifacts, release notes, and all other aspects of any given release will stay the same.
Prior to this change, an enhancement could graduate from alpha to stable in 9 months. With the change in cadence, this will stretch to 12 months. Additionally, graduation of features over the last few releases has in some part been driven by release team activities.
With fewer releases, users can expect to see the rate of feature graduation slow. Users can also expect releases to contain a larger number of enhancements that they need to be aware of during upgrades. However, with fewer releases to consume per year, it's intended that end user organizations will spend less time on upgrades and gain more time on supporting their Kubernetes clusters. It also means that Kubernetes releases are in support for a slightly longer period of time, so bug fixes and security patches will be available for releases for a longer period of time.
## What this means for Kubernetes contributors
With a lower release cadence, contributors have more time for project enhancements, feature development, planning, and testing. A slower release cadence also provides more room for maintaining their mental health, preparing for events like KubeCon + CloudNativeCon, or working on downstream integrations.
## Why we decided to change the release cadence
The Kubernetes 1.19 cycle was far longer than usual. SIG Release extended it to lessen the burden on both Kubernetes contributors and end users due to the COVID-19 pandemic. Following this extended release, the Kubernetes 1.20 release became the third, and final, release for 2020.
As the Kubernetes project matures, the number of enhancements per cycle grows, along with the burden on contributors and the Release Engineering team. Downstream consumers and integrators also face increased challenges keeping up with [ever more feature-packed releases](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/). Wider project adoption means the complexity of supporting a rapidly evolving platform affects a bigger downstream chain of consumers.
Changing the release cadence from four to three releases per year balances a variety of factors for stakeholders: while it's not strictly an LTS policy, consumers and integrators will get longer support terms for each minor version as the extended release cycles lead to the [previous three releases being supported](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/) for a longer period. Contributors get more time to [mature enhancements](https://www.cncf.io/blog/2021/04/12/enhancing-the-kubernetes-enhancements-process/) and [get them ready for production](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md).
Finally, the management overhead for SIG Release and the Release Engineering team diminishes allowing the team to spend more time on improving the quality of the software releases and the tooling that drives them.
## How you can help
Join the [discussion](https://github.com/kubernetes/sig-release/discussions/1566) about communicating future release dates and be sure to be on the lookout for post release surveys.
## Where you can find out more
- Read the KEP [here](https://github.com/kubernetes/enhancements/tree/master/keps/sig-release/2572-release-cadence)
- Join the [kubernetes-dev](https://groups.google.com/g/kubernetes-dev) mailing list
- Join [Kubernetes Slack](https://slack.k8s.io) and follow the #announcements channel

View File

@ -0,0 +1,71 @@
---
layout: blog
title: 'Updating NGINX-Ingress to use the stable Ingress API'
date: 2021-07-26
slug: update-with-ingress-nginx
---
**Authors:** James Strong, Ricardo Katz
For all Kubernetes APIs, there is a process for creating, maintaining, and
ultimately deprecating them once they become GA. The networking.k8s.io API group is no
different. The upcoming Kubernetes 1.22 release will remove several deprecated APIs
that are relevant to networking:
- the `networking.k8s.io/v1beta1` API version of [IngressClass](/docs/concepts/services-networking/ingress/#ingress-class)
- all beta versions of [Ingress](/docs/concepts/services-networking/ingress/): `extensions/v1beta1` and `networking.k8s.io/v1beta1`
On a v1.22 Kubernetes cluster, you'll be able to access Ingress and IngressClass
objects through the stable (v1) APIs, but access via their beta APIs won't be possible.
This change has been in discussion since
[2017](https://github.com/kubernetes/kubernetes/issues/43214), again in
[2019](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) alongside the
1.16 Kubernetes API deprecations, and most recently in
KEP-1453:
[Graduate Ingress API to GA](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1453-ingress-api#122).
During community meetings, the networking Special Interest Group has decided to continue
supporting Kubernetes versions older than 1.22 with Ingress-NGINX version 0.47.0.
Support for Ingress-NGINX will continue for six months after Kubernetes 1.22
is released. Any additional bug fixes and CVEs for Ingress-NGINX will be
addressed on an as-needed basis.
The Ingress-NGINX project will maintain separate branches and releases to
support this model, mirroring the Kubernetes project process. Future
releases of the Ingress-NGINX project will track and support the latest
versions of Kubernetes.
{{< table caption="Ingress NGINX supported version with Kubernetes Versions" >}}
Kubernetes version | Ingress-NGINX version | Notes
:-------------------|:----------------------|:------------
v1.22 | v1.0.0-alpha.2 | New features, plus bug fixes.
v1.21 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced.
v1.20 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced.
v1.19 | v0.47.x | Bugfixes only, and just for security issues or crashes. Fixes only provided until 6 months after Kubernetes v1.22.0 is released.
{{< /table >}}
Because of the updates in Kubernetes 1.22, **v0.47.0** will not work with
Kubernetes 1.22.
## What you need to do
The team is currently in the process of upgrading ingress-nginx to support
the v1 migration; you can track the progress
[here](https://github.com/kubernetes/ingress-nginx/pull/7156).
We're not making feature improvements to `ingress-nginx` until after the support for
Ingress v1 is complete.
In the meantime, to ensure there are no compatibility issues (see the sketch after this list):
* Update to the latest version of Ingress-NGINX; currently
[v0.47.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.47.0)
* After Kubernetes 1.22 is released, ensure you are using the latest version of
Ingress-NGINX that supports the stable APIs for Ingress and IngressClass.
* Test Ingress-NGINX version v1.0.0-alpha.2 with cluster versions >= 1.19
and report any issues to the project's GitHub page.
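As a sanity check for the list above, the following sketch (adjust it to your environment) confirms that existing Ingress objects are readable through the v1 API and flags objects that still rely on the deprecated annotation:
```bash
# 1. Confirm existing Ingress objects can be read through the stable v1 API.
kubectl get ingresses.v1.networking.k8s.io --all-namespaces

# 2. Look for objects that still use the deprecated kubernetes.io/ingress.class
#    annotation instead of spec.ingressClassName.
kubectl get ingress --all-namespaces -o yaml | grep -n 'kubernetes.io/ingress.class'
```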
The community's feedback and support in this effort are welcome. The
Ingress-NGINX Sub-project regularly holds community meetings where we discuss
this and other issues facing the project. For more information on the sub-project,
please see [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network).

Binary file not shown (before: 13 KiB)

Binary file not shown (before: 30 KiB)

Binary file not shown (before: 28 KiB)

Binary file not shown (before: 19 KiB)

Binary file not shown (before: 18 KiB)

Binary file not shown (before: 19 KiB)

Binary file not shown (before: 20 KiB)

View File

@ -13,7 +13,7 @@ cid: community
<div class="intro">
<br class="mobile">
<p>The Kubernetes community -- users, contributors, and the culture we've built together -- is one of the biggest reasons for the meteoric rise of this open source project. Our culture and values continue to grow and change as the project itself grows and changes. We all work together toward constant improvement of the project and the ways we work on it.
<br><br>We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for it's adoption and innovation, run <code>kubectl get pods</code>, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.</p>
<br><br>We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for its adoption and innovation, run <code>kubectl get pods</code>, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.</p>
<br class="mobile">
</div>

View File

@ -210,7 +210,7 @@ To upgrade a HA control plane to use the cloud controller manager, see [Migrate
Want to know how to implement your own cloud controller manager, or extend an existing project?
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.17/cloud.go#L42-L62) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.21/cloud.go#L42-L69) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).
The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface.

View File

@ -0,0 +1,164 @@
---
title: Garbage Collection
content_type: concept
weight: 50
---
<!-- overview -->
{{<glossary_definition term_id="garbage-collection" length="short">}} This
allows the cleanup of resources like the following:
* [Failed pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
* [Completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
* [Objects without owner references](#owners-dependents)
* [Unused containers and container images](#containers-images)
* [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
* {{<glossary_tooltip text="Nodes" term_id="node">}} deleted in the following scenarios:
* On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
* On-premises when the cluster uses an addon similar to a cloud controller
manager
* [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats)
## Owners and dependents {#owners-dependents}
Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
Owner references tell the control plane which objects are dependent on others.
Kubernetes uses owner references to give the control plane, and other API
clients, the opportunity to clean up related resources before deleting an
object. In most cases, Kubernetes manages owner references automatically.
Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
mechanism that some resources also use. For example, consider a
{{<glossary_tooltip text="Service" term_id="service">}} that creates
`EndpointSlice` objects. The Service uses *labels* to allow the control plane to
determine which `EndpointSlice` objects are used for that Service. In addition
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don't control.
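As a quick illustration (a sketch, with `app=example` standing in for whatever label selects your Pods), you can list Pods together with the kind and name of their owners:
```bash
# Print each Pod alongside the controller that owns it via ownerReferences.
kubectl get pods -l app=example -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].kind}{"/"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
```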
## Cascading deletion {#cascading-deletion}
Kubernetes checks for and deletes objects that no longer have owner
references, like the pods left behind when you delete a ReplicaSet. When you
delete an object, you can control whether Kubernetes deletes the object's
dependents automatically, in a process called *cascading deletion*. There are
two types of cascading deletion, as follows:
* Foreground cascading deletion
* Background cascading deletion
You can also control how and when garbage collection deletes resources that have
owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
### Foreground cascading deletion {#foreground-deletion}
In foreground cascading deletion, the owner object you're deleting first enters
a *deletion in progress* state. In this state, the following happens to the
owner object:
* The Kubernetes API server sets the object's `metadata.deletionTimestamp`
field to the time the object was marked for deletion.
* The Kubernetes API server also sets the `metadata.finalizers` field to
`foregroundDeletion`.
* The object remains visible through the Kubernetes API until the deletion
process is complete.
After the owner object enters the deletion in progress state, the controller
deletes the dependents. After deleting all the dependent objects, the controller
deletes the owner object. At this point, the object is no longer visible in the
Kubernetes API.
During foreground cascading deletion, the only dependents that block owner
deletion are those that have the `ownerReference.blockOwnerDeletion=true` field.
See [Use foreground cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)
to learn more.
### Background cascading deletion {#background-deletion}
In background cascading deletion, the Kubernetes API server deletes the owner
object immediately and the controller cleans up the dependent objects in
the background. By default, Kubernetes uses background cascading deletion unless
you manually use foreground deletion or choose to orphan the dependent objects.
See [Use background cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)
to learn more.
### Orphaned dependents
When Kubernetes deletes an owner object, the dependents left behind are called
*orphan* objects. By default, Kubernetes deletes dependent objects. To learn how
to override this behaviour, see [Delete owner objects and orphan dependents](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy).
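For reference, `kubectl` (v1.20 and later) lets you pick the deletion propagation policy directly; `example` below is a placeholder Deployment name:
```bash
kubectl delete deployment example --cascade=foreground   # wait for dependents to be deleted first
kubectl delete deployment example --cascade=background   # the default behaviour
kubectl delete deployment example --cascade=orphan       # leave dependents behind as orphans
```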
## Garbage collection of unused containers and images {#containers-images}
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
collection on unused images every five minutes and on unused containers every
minute. You should avoid using external garbage collection tools, as these can
break the kubelet behavior and remove containers that should exist.
To configure options for unused container and image garbage collection, tune the
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
and change the parameters related to garbage collection using the
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
resource type.
### Container image lifecycle
Kubernetes manages the lifecycle of all images through its *image manager*,
which is part of the kubelet, with the cooperation of cadvisor. The kubelet
considers the following disk usage limits when making garbage collection
decisions:
* `HighThresholdPercent`
* `LowThresholdPercent`
Disk usage above the configured `HighThresholdPercent` value triggers garbage
collection, which deletes images in order based on the last time they were used,
starting with the oldest first. The kubelet deletes images
until disk usage reaches the `LowThresholdPercent` value.
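A minimal sketch of the corresponding fields in a `KubeletConfiguration` file is shown below; in practice you would merge these into your node's existing kubelet configuration rather than use this file as-is, and the values shown match the documented defaults:
```bash
# Write an illustrative kubelet configuration snippet to a local file.
cat <<EOF > kubelet-image-gc-example.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # disk usage (%) that triggers image garbage collection
imageGCLowThresholdPercent: 80    # kubelet deletes images until usage drops below this
EOF
```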
### Container garbage collection {#container-image-garbage-collection}
The kubelet garbage collects unused containers based on the following variables,
which you can define:
* `MinAge`: the minimum age at which the kubelet can garbage collect a
container. Disable by setting to `0`.
* `MaxPerPodContainer`: the maximum number of dead containers each Pod
can have. Disable by setting to less than `0`.
* `MaxContainers`: the maximum number of dead containers the cluster can have.
Disable by setting to less than `0`.
In addition to these variables, the kubelet garbage collects unidentified and
deleted containers, typically starting with the oldest first.
`MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other
in situations where retaining the maximum number of containers per Pod
(`MaxPerPodContainer`) would go outside the allowable total of global dead
containers (`MaxContainers`). In this situation, the kubelet adjusts
`MaxPerPodContainer` to address the conflict. A worst-case scenario would be to
downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
Additionally, containers owned by pods that have been deleted are removed once
they are older than `MinAge`.
{{<note>}}
The kubelet only garbage collects the containers it manages.
{{</note>}}
## Configuring garbage collection {#configuring-gc}
You can tune garbage collection of resources by configuring options specific to
the controllers managing those resources. The following pages show you how to
configure garbage collection:
* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
<!-- * [Configuring unused container and image garbage collection](/docs/tasks/administer-cluster/reconfigure-kubelet/) -->
## {{% heading "whatsnext" %}}
* Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).
* Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) (beta) that cleans up finished Jobs.

View File

@ -14,7 +14,7 @@ A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}
{{< glossary_tooltip text="Pods" term_id="pod" >}}.
Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have only one node.

View File

@ -1,98 +0,0 @@
---
title: Garbage collection for container images
content_type: concept
weight: 70
---
<!-- overview -->
Garbage collection is a helpful function of kubelet that will clean up unused
[images](/docs/concepts/containers/#container-images) and unused
[containers](/docs/concepts/containers/). Kubelet will perform garbage collection
for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially
break the behavior of kubelet by removing containers expected to exist.
<!-- body -->
## Image Collection
Kubernetes manages lifecycle of all images through imageManager, with the cooperation
of cadvisor.
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used images until the low
threshold has been met.
## Container Collection
The policy for garbage collecting containers considers three user-defined variables.
`MinAge` is the minimum age at which a container can be garbage collected.
`MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have.
`MaxContainers` is the maximum number of total dead containers.
These variables can be individually disabled by setting `MinAge` to zero and
setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
Kubelet will act on containers that are unidentified, deleted, or outside of
the boundaries set by the previously mentioned flags. The oldest containers
will generally be removed first. `MaxPerPodContainer` and `MaxContainers` may
potentially conflict with each other in situations where retaining the maximum
number of containers per pod (`MaxPerPodContainer`) would go outside the
allowable range of global dead containers (`MaxContainers`).
`MaxPerPodContainer` would be adjusted in this situation: A worst case
scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest
containers. Additionally, containers owned by pods that have been deleted are
removed once they are older than `MinAge`.
Containers that are not managed by kubelet are not subject to container garbage collection.
## User Configuration
You can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
You can customize the garbage collection policy through the following kubelet flags:
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is -1, which means there is no global limit.
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
## Deprecation
Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
Including:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
## {{% heading "whatsnext" %}}
See [Configuring Out Of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
for more details.

View File

@ -407,9 +407,9 @@ stringData:
There are several options to create a Secret:
- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- [create Secrets using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [create Secrets from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
## Editing a Secret
@ -1239,7 +1239,7 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
## {{% heading "whatsnext" %}}
- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- Learn how to [manage Secrets using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

View File

@ -330,4 +330,5 @@ Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.
## {{% heading "whatsnext" %}}
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md)
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
* Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).

View File

@ -118,7 +118,7 @@ Runtime handlers are configured through containerd's configuration at
`/etc/containerd/config.toml`. Valid handlers are configured under the runtimes section:
```
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
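For instance, a handler entry for a hypothetical sandboxed runtime might look like the sketch below (the handler name `gvisor` and the `runtime_type` value are assumptions for illustration, not taken from this page):

```toml
# /etc/containerd/config.toml (fragment): registers a runtime handler named
# "gvisor" that a RuntimeClass can reference by that handler name.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.gvisor]
  runtime_type = "io.containerd.runsc.v1"
```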
See containerd's config documentation for more details:

View File

@ -45,7 +45,7 @@ Containers have become popular because they provide extra benefits, such as:
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability not only surfaces OS-level information and metrics, but also application health and other signals.
* Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.

View File

@ -0,0 +1,80 @@
---
title: Finalizers
content_type: concept
weight: 60
---
<!-- overview -->
{{<glossary_definition term_id="finalizer" length="long">}}
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources by alerting {{<glossary_tooltip text="controllers" term_id="controller">}} to perform specific cleanup tasks before
deleting the target resource.
Finalizers don't usually specify the code to execute. Instead, they are
typically lists of keys on a specific resource similar to annotations.
Kubernetes specifies some finalizers automatically, but you can also specify
your own.
## How finalizers work
When you create a resource using a manifest file, you can specify finalizers in
the `metadata.finalizers` field. When you attempt to delete the resource, the
controller that manages it notices the values in the `finalizers` field and does
the following:
* Modifies the object to add a `metadata.deletionTimestamp` field with the
time you started the deletion.
* Marks the object as read-only until its `metadata.finalizers` field is empty.
The controller then attempts to satisfy the requirements of the finalizers
specified for that resource. Each time a finalizer condition is satisfied, the
controller removes that key from the resource's `finalizers` field. When the
field is empty, garbage collection continues. You can also use finalizers to
prevent deletion of unmanaged resources.
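For example, a manifest that declares a finalizer in `metadata.finalizers` might look like this minimal sketch (the finalizer key `example.com/block-deletion` is made up for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap
  finalizers:
    # deletion is not completed until a controller (or you) removes this key
    - example.com/block-deletion
data:
  key: value
```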
A common example of a finalizer is `kubernetes.io/pv-protection`, which prevents
accidental deletion of `PersistentVolume` objects. When a `PersistentVolume`
object is in use by a Pod, Kubernetes adds the `pv-protection` finalizer. If you
try to delete the `PersistentVolume`, it enters a `Terminating` status, but the
controller can't delete it because the finalizer exists. When the Pod stops
using the `PersistentVolume`, Kubernetes clears the `pv-protection` finalizer,
and the controller deletes the volume.
## Owner references, labels, and finalizers {#owners-labels-finalizers}
Like {{<glossary_tooltip text="labels" term_id="label">}}, [owner references](/docs/concepts/overview/working-with-objects/owners-dependents/)
describe the relationships between objects in Kubernetes, but are used for a
different purpose. When a
{{<glossary_tooltip text="controllers" term_id="controller">}} manages objects
like Pods, it uses labels to track changes to groups of related objects. For
example, when a {{<glossary_tooltip text="Job" term_id="job">}} creates one or
more Pods, the Job controller applies labels to those pods and tracks changes to
any Pods in the cluster with the same label.
The Job controller also adds *owner references* to those Pods, pointing at the
Job that created the Pods. If you delete the Job while these Pods are running,
Kubernetes uses the owner references (not labels) to determine which Pods in the
cluster need cleanup.
Kubernetes also processes finalizers when it identifies owner references on a
resource targeted for deletion.
In some situations, finalizers can block the deletion of dependent objects,
which can cause the targeted owner object to remain in a read-only state for
longer than expected without being fully deleted. In these situations, you
should check finalizers and owner references on the target owner and dependent
objects to troubleshoot the cause.
{{<note>}}
In cases where objects are stuck in a deleting state, try to avoid manually
removing finalizers to allow deletion to continue. Finalizers are usually added
to resources for a reason, so forcefully removing them can lead to issues in
your cluster.
{{</note>}}
## {{% heading "whatsnext" %}}
* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
on the Kubernetes blog.

View File

@ -42,7 +42,7 @@ Example labels:
* `"partition" : "customerA"`, `"partition" : "customerB"`
* `"track" : "daily"`, `"track" : "weekly"`
These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label Key must be unique for a given object.
These are examples of [commonly used labels](/docs/concepts/overview/working-with-objects/common-labels/); you are free to develop your own conventions. Keep in mind that the label key must be unique for a given object.
## Syntax and character set
@ -50,7 +50,7 @@ _Labels_ are key/value pairs. Valid label keys have two segments: an optional pr
If the prefix is omitted, the label Key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix.
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
The `kubernetes.io/` and `k8s.io/` prefixes are [reserved](/docs/reference/labels-annotations-taints/) for Kubernetes core components.
Valid label value:
* must be 63 characters or less (can be empty),

View File

@ -28,7 +28,7 @@ For non-unique user-provided attributes, Kubernetes provides [labels](/docs/conc
In cases when objects represent a physical entity, like a Node representing a physical host, when the host is re-created under the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to inconsistencies.
{{< /note >}}
Below are three types of commonly used name constraints for resources.
Below are four types of commonly used name constraints for resources.
### DNS Subdomain Names
@ -41,7 +41,7 @@ This means the name must:
- start with an alphanumeric character
- end with an alphanumeric character
### DNS Label Names
### RFC 1123 Label Names {#dns-label-names}
Some resource types require their names to follow the DNS
label standard as defined in [RFC 1123](https://tools.ietf.org/html/rfc1123).
@ -52,6 +52,17 @@ This means the name must:
- start with an alphanumeric character
- end with an alphanumeric character
### RFC 1035 Label Names
Some resource types require their names to follow the DNS
label standard as defined in [RFC 1035](https://tools.ietf.org/html/rfc1035).
This means the name must:
- contain at most 63 characters
- contain only lowercase alphanumeric characters or '-'
- start with an alphabetic character
- end with an alphanumeric character
### Path Segment Names
Some resource types require their names to be able to be safely encoded as a

View File

@ -0,0 +1,71 @@
---
title: Owners and Dependents
content_type: concept
weight: 60
---
<!-- overview -->
In Kubernetes, some objects are *owners* of other objects. For example, a
{{<glossary_tooltip text="ReplicaSet" term_id="replica-set">}} is the owner of a set of Pods. These owned objects are *dependents*
of their owner.
Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
mechanism that some resources also use. For example, consider a Service that
creates `EndpointSlice` objects. The Service uses labels to allow the control plane to
determine which `EndpointSlice` objects are used for that Service. In addition
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don't control.
## Owner references in object specifications
Dependent objects have a `metadata.ownerReferences` field that references their
owner object. A valid owner reference consists of the object name and a UID
within the same namespace as the dependent object. Kubernetes sets the value of
this field automatically for objects that are dependents of other objects like
ReplicaSets, DaemonSets, Deployments, Jobs and CronJobs, and ReplicationControllers.
You can also configure these relationships manually by changing the value of
this field. However, you usually don't need to and can allow Kubernetes to
automatically manage the relationships.
Dependent objects also have an `ownerReferences.blockOwnerDeletion` field that
takes a boolean value and controls whether specific dependents can block garbage
collection from deleting their owner object. Kubernetes automatically sets this
field to `true` if a {{<glossary_tooltip text="controller" term_id="controller">}}
(for example, the Deployment controller) sets the value of the
`metadata.ownerReferences` field. You can also set the value of the
`blockOwnerDeletion` field manually to control which dependents block garbage
collection.
A Kubernetes admission controller controls user access to change this field for
dependent resources, based on the delete permissions of the owner. This control
prevents unauthorized users from delaying owner object deletion.
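Put together, an owner reference on a dependent object looks roughly like this (the names and UID below are placeholders for illustration):

```yaml
# Fragment of a Pod's metadata showing an owner reference to a ReplicaSet
apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend
    uid: f120b2a1-2e5e-4cf6-9f43-2a1c6d5e1f00
    controller: true
    blockOwnerDeletion: true
```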
## Ownership and finalizers
When you tell Kubernetes to delete a resource, the API server allows the
managing controller to process any [finalizer rules](/docs/concepts/overview/working-with-objects/finalizers/)
for the resource. {{<glossary_tooltip text="Finalizers" term_id="finalizer">}}
prevent accidental deletion of resources your cluster may still need to function
correctly. For example, if you try to delete a `PersistentVolume` that is still
in use by a Pod, the deletion does not happen immediately because the
`PersistentVolume` has the `kubernetes.io/pv-protection` finalizer on it.
Instead, the volume remains in the `Terminating` status until Kubernetes clears
the finalizer, which only happens after the `PersistentVolume` is no longer
bound to a Pod.
Kubernetes also adds finalizers to an owner resource when you use either
[foreground or orphan cascading deletion](/docs/concepts/architecture/garbage-collection/#cascading-deletion).
In foreground deletion, it adds the `foreground` finalizer so that the
controller must delete dependent resources that also have
`ownerReferences.blockOwnerDeletion=true` before it deletes the owner. If you
specify an orphan deletion policy, Kubernetes adds the `orphan` finalizer so
that the controller ignores dependent resources after it deletes the owner
object.
## {{% heading "whatsnext" %}}
* Learn more about [Kubernetes finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about [garbage collection](/docs/concepts/architecture/garbage-collection).
* Read the API reference for [object metadata](/docs/reference/kubernetes-api/common-definitions/object-meta/#System).

View File

@ -353,7 +353,7 @@ the removal of the lowest priority Pods is not sufficient to allow the scheduler
to schedule the preemptor Pod, or if the lowest priority Pods are protected by
`PodDisruptionBudget`.
The kubelet uses Priority to determine pod order for [out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
The kubelet uses Priority to determine pod order for [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
You can use the QoS class to estimate the order in which pods are most likely
to get evicted. The kubelet ranks pods for eviction based on the following factors:
@ -361,10 +361,10 @@ to get evicted. The kubelet ranks pods for eviction based on the following facto
1. Pod Priority
1. Amount of resource usage relative to requests
See [evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
for more details.
kubelet out-of-resource eviction does not evict Pods when their
kubelet node-pressure eviction does not evict Pods when their
usage does not exceed their requests. If a Pod with lower priority is not
exceeding its requests, it won't be evicted. Another Pod with higher priority
that exceeds its requests may be evicted.

View File

@ -8,7 +8,7 @@ weight: 90
<!-- overview -->
{{< feature-state for_k8s_version="v1.15" state="alpha" >}}
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the

View File

@ -267,7 +267,7 @@ This ensures that DaemonSet pods are never evicted due to these problems.
## Taint Nodes by Condition
The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/pod-eviction#node-conditions).
automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions).
The scheduler checks taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
@ -298,7 +298,7 @@ arbitrary tolerations to DaemonSets.
## {{% heading "whatsnext" %}}
* Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it
* Read about [pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it
* Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@ -32,6 +32,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),

View File

@ -72,7 +72,7 @@ A Service in Kubernetes is a REST object, similar to a Pod. Like all of the
REST objects, you can `POST` a Service definition to the API server to create
a new instance.
The name of a Service object must be a valid
[DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names).
[RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).
For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app=MyApp`:
@ -188,7 +188,7 @@ selectors and uses DNS names instead. For more information, see the
[ExternalName](#externalname) section later in this document.
### Over Capacity Endpoints
If an Endpoints resource has more than 1000 endpoints then a Kubernetes v1.21 (or later)
If an Endpoints resource has more than 1000 endpoints, then a Kubernetes v1.21
cluster annotates that Endpoints with `endpoints.kubernetes.io/over-capacity: warning`.
This annotation indicates that the affected Endpoints object is over capacity.

View File

@ -76,7 +76,7 @@ for provisioning PVs. This field must be specified.
| Glusterfs | &#x2713; | [Glusterfs](#glusterfs) |
| iSCSI | - | - |
| Quobyte | &#x2713; | [Quobyte](#quobyte) |
| NFS | - | - |
| NFS | - | [NFS](#nfs) |
| RBD | &#x2713; | [Ceph RBD](#ceph-rbd) |
| VsphereVolume | &#x2713; | [vSphere](#vsphere) |
| PortworxVolume | &#x2713; | [Portworx Volume](#portworx-volume) |
@ -423,6 +423,29 @@ parameters:
`gluster-dynamic-<claimname>`. The dynamic endpoint and service are automatically
deleted when the persistent volume claim is deleted.
### NFS
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: example-nfs
provisioner: example.com/external-nfs
parameters:
server: nfs-server.example.com
path: /share
readOnly: false
```
* `server`: Server is the hostname or IP address of the NFS server.
* `path`: Path that is exported by the NFS server.
* `readOnly`: A flag indicating whether the storage will be mounted as read only (default false).
Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS.
Here are some examples:
* [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
* [NFS subdir external provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
### OpenStack Cinder
```yaml

View File

@ -1,184 +0,0 @@
---
title: Garbage Collection
content_type: concept
weight: 60
---
<!-- overview -->
The role of the Kubernetes garbage collector is to delete certain objects
that once had an owner, but no longer have an owner.
<!-- body -->
## Owners and dependents
Some Kubernetes objects are owners of other objects. For example, a ReplicaSet
is the owner of a set of Pods. The owned objects are called *dependents* of the
owner object. Every dependent object has a `metadata.ownerReferences` field that
points to the owning object.
Sometimes, Kubernetes sets the value of `ownerReference` automatically. For
example, when you create a ReplicaSet, Kubernetes automatically sets the
`ownerReference` field of each Pod in the ReplicaSet. In 1.8, Kubernetes
automatically sets the value of `ownerReference` for objects created or adopted
by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job
and CronJob.
You can also specify relationships between owners and dependents by manually
setting the `ownerReference` field.
Here's a configuration file for a ReplicaSet that has three Pods:
{{< codenew file="controllers/replicaset.yaml" >}}
If you create the ReplicaSet and then view the Pod metadata, you can see
OwnerReferences field:
```shell
kubectl apply -f https://k8s.io/examples/controllers/replicaset.yaml
kubectl get pods --output=yaml
```
The output shows that the Pod owner is a ReplicaSet named `my-repset`:
```yaml
apiVersion: v1
kind: Pod
metadata:
...
ownerReferences:
- apiVersion: apps/v1
controller: true
blockOwnerDeletion: true
kind: ReplicaSet
name: my-repset
uid: d9607e19-f88f-11e6-a518-42010a800195
...
```
{{< note >}}
Cross-namespace owner references are disallowed by design.
Namespaced dependents can specify cluster-scoped or namespaced owners.
A namespaced owner **must** exist in the same namespace as the dependent.
If it does not, the owner reference is treated as absent, and the dependent
is subject to deletion once all owners are verified absent.
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
You can check for that kind of Event by running
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
{{< /note >}}
## Controlling how the garbage collector deletes dependents
When you delete an object, you can specify whether the object's dependents are
also deleted automatically. Deleting dependents automatically is called *cascading
deletion*. There are two modes of *cascading deletion*: *background* and *foreground*.
If you delete an object without deleting its dependents
automatically, the dependents are said to be *orphaned*.
### Foreground cascading deletion
In *foreground cascading deletion*, the root object first
enters a "deletion in progress" state. In the "deletion in progress" state,
the following things are true:
* The object is still visible via the REST API
* The object's `deletionTimestamp` is set
* The object's `metadata.finalizers` contains the value "foregroundDeletion".
Once the "deletion in progress" state is set, the garbage
collector deletes the object's dependents. Once the garbage collector has deleted all
"blocking" dependents (objects with `ownerReference.blockOwnerDeletion=true`), it deletes
the owner object.
Note that in the "foregroundDeletion", only dependents with
`ownerReference.blockOwnerDeletion=true` block the deletion of the owner object.
Kubernetes version 1.7 added an [admission controller](/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) that controls user access to set
`blockOwnerDeletion` to true based on delete permissions on the owner object, so that
unauthorized dependents cannot delay deletion of an owner object.
If an object's `ownerReferences` field is set by a controller (such as Deployment or ReplicaSet),
blockOwnerDeletion is set automatically and you do not need to manually modify this field.
### Background cascading deletion
In *background cascading deletion*, Kubernetes deletes the owner object
immediately and the garbage collector then deletes the dependents in
the background.
### Setting the cascading deletion policy
To control the cascading deletion policy, set the `propagationPolicy`
field on the `deleteOptions` argument when deleting an Object. Possible values include "Orphan",
"Foreground", or "Background".
Here's an example that deletes dependents in background:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
Here's an example that deletes dependents in foreground:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
Here's an example that orphans dependents:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
kubectl also supports cascading deletion.
To delete dependents in the foreground using kubectl, set `--cascade=foreground`. To
orphan dependents, set `--cascade=orphan`.
The default behavior is to delete the dependents in the background which is the
behavior when `--cascade` is omitted or explicitly set to `background`.
Here's an example that orphans the dependents of a ReplicaSet:
```shell
kubectl delete replicaset my-repset --cascade=orphan
```
### Additional note on Deployments
Prior to 1.7, When using cascading deletes with Deployments you *must* use `propagationPolicy: Foreground`
to delete not only the ReplicaSets created, but also their Pods. If this type of _propagationPolicy_
is not used, only the ReplicaSets will be deleted, and the Pods will be orphaned.
See [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment-284766613) for more information.
## Known issues
Tracked at [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
## {{% heading "whatsnext" %}}
[Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
[Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)

View File

@ -283,6 +283,17 @@ on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
## Container probes
A _probe_ is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:
- `ExecAction` (performed with the help of the container runtime)
- `TCPSocketAction` (checked directly by the kubelet)
- `HTTPGetAction` (checked directly by the kubelet)
You can read more about [probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
in the Pod Lifecycle documentation.
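As a minimal sketch, an `HTTPGetAction` probe in a container spec might look like this (the path and port are assumptions for the example):

```yaml
# Fragment of a container spec: the kubelet performs an HTTP GET against
# /healthz on port 8080 every 10 seconds, after an initial 3-second delay.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 10
```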
## {{% heading "whatsnext" %}}
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).

View File

@ -31,7 +31,7 @@ an application. Examples are:
- cloud provider or hypervisor failure makes VM disappear
- a kernel panic
- the node disappears from the cluster due to cluster network partition
- eviction of a pod due to the node being [out-of-resources](/docs/tasks/administer-cluster/out-of-resource/).
- eviction of a pod due to the node being [out-of-resources](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
Except for the out-of-resources condition, all these conditions
should be familiar to most users; they are not specific

View File

@ -304,13 +304,23 @@ specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If your container needs to work on loading large data, configuration files, or
migrations during startup, specify a readiness probe.
If you want your container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
If your app has a strict dependency on back-end services, you can implement both
a liveness and a readiness probe. The liveness probe passes when the app itself
is healthy, but the readiness probe additionally checks that each required
back-end service is available. This helps you avoid directing traffic to Pods
that can only respond with error messages.
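One possible shape for such probes is sketched below; the endpoint paths are assumptions for the example, and the readiness endpoint is the one that would also check the required back-end services:

```yaml
# Fragment of a container spec using separate endpoints for liveness and readiness
livenessProbe:
  httpGet:
    path: /healthz/live    # the app process itself is healthy
    port: 8080
readinessProbe:
  httpGet:
    path: /healthz/ready   # the app is healthy AND required back ends are reachable
    port: 8080
```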
If your container needs to work on loading large data, configuration files, or
migrations during startup, you can use a
[startup probe](#when-should-you-use-a-startup-probe). However, if you want to
detect the difference between an app that has failed and an app that is still
processing its startup data, you might prefer a readiness probe.
{{< note >}}
If you want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself

View File

@ -2,7 +2,7 @@
title: API-initiated eviction
id: api-eviction
date: 2021-04-27
full_link: /docs/concepts/scheduling-eviction/pod-eviction/#api-eviction
full_link: /docs/concepts/scheduling-eviction/api-eviction/
short_description: >
API-initiated eviction is the process by which you use the Eviction API to create an
Eviction object that triggers graceful pod termination.

View File

@ -0,0 +1,31 @@
---
title: Finalizer
id: finalizer
date: 2021-07-07
full_link: /docs/concepts/overview/working-with-objects/finalizers/
short_description: >
A namespaced key that tells Kubernetes to wait until specific conditions are met
before it fully deletes an object marked for deletion.
aka:
tags:
- fundamental
- operation
---
Finalizers are namespaced keys that tell Kubernetes to wait until specific
conditions are met before it fully deletes resources marked for deletion.
Finalizers alert {{<glossary_tooltip text="controllers" term_id="controller">}}
to clean up resources the deleted object owned.
<!--more-->
When you tell Kubernetes to delete an object that has finalizers specified for
it, the Kubernetes API marks the object for deletion, putting it into a
read-only state. The target object remains in a terminating state while the
control plane, or other components, take the actions defined by the finalizers.
After these actions are complete, the controller removes the relevant finalizers
from the target object. When the `metadata.finalizers` field is empty,
Kubernetes considers the deletion complete.
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources. For example, you can define a finalizer to clean up related resources or
infrastructure before the controller deletes the target resource.

View File

@ -0,0 +1,24 @@
---
title: Garbage Collection
id: garbage-collection
date: 2021-07-07
full_link: /docs/concepts/workloads/controllers/garbage-collection/
short_description: >
A collective term for the various mechanisms Kubernetes uses to clean up cluster
resources.
aka:
tags:
- fundamental
- operation
---
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up
cluster resources.
<!--more-->
Kubernetes uses garbage collection to clean up resources like [unused containers and images](/docs/concepts/workloads/controllers/garbage-collection/#containers-images),
[failed Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection),
[objects owned by the targeted resource](/docs/concepts/overview/working-with-objects/owners-dependents/),
[completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/), and resources
that have expired or failed.

View File

@ -11,7 +11,7 @@ tags:
- architecture
- fundamental
---
Control Plane component that runs {{< glossary_tooltip text="controller" term_id="controller" >}} processes.
Control plane component that runs {{< glossary_tooltip text="controller" term_id="controller" >}} processes.
<!--more-->

View File

@ -2,7 +2,7 @@
title: kube-scheduler
id: kube-scheduler
date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
full_link: /docs/reference/command-line-tools-reference/kube-scheduler/
short_description: >
Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.

View File

@ -200,7 +200,7 @@ Used on: Service
The kube-proxy has this label for custom proxy, which delegates service control to custom proxy.
## experimental.windows.kubernetes.io/isolation-type
## experimental.windows.kubernetes.io/isolation-type (deprecated) {#experimental-windows-kubernetes-io-isolation-type}
Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"`
@ -210,6 +210,7 @@ The annotation is used to run Windows containers with Hyper-V isolation. To use
{{< note >}}
You can only set this annotation on Pods that have a single container.
Starting from v1.20, this annotation is deprecated. Experimental Hyper-V support was removed in 1.21.
{{< /note >}}
## ingressclass.kubernetes.io/is-default-class

View File

@ -126,8 +126,8 @@ The default configuration can be printed out using the
If your configuration is not using the latest version it is **recommended** that you migrate using
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
For more information on the fields and usage of the configuration you can navigate to our API reference
page and pick a version from [the list](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#section-directories).
For more information on the fields and usage of the configuration you can navigate to our
[API reference page](/docs/reference/config-api/kubeadm-config.v1beta2/).
### Adding kube-proxy parameters {#kube-proxy}

View File

@ -286,8 +286,8 @@ The default configuration can be printed out using the
If your configuration is not using the latest version it is **recommended** that you migrate using
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
For more information on the fields and usage of the configuration you can navigate to our API reference
page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories).
For more information on the fields and usage of the configuration you can navigate to our
[API reference](/docs/reference/config-api/kubeadm-config.v1beta2/).
## {{% heading "whatsnext" %}}

View File

@ -26,8 +26,8 @@ to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resou
## Helm
[`Kubernetes Helm`](https://github.com/kubernetes/helm) is a tool for managing packages of pre-configured
Kubernetes resources, aka Kubernetes charts.
[Helm](https://helm.sh/) is a tool for managing packages of pre-configured
Kubernetes resources. These packages are known as _Helm charts_.
Use Helm to:

View File

@ -208,6 +208,77 @@ more remaining items and the API server does not include a `remainingItemCount`
field in its response. The intended use of the `remainingItemCount` is estimating
the size of a collection.
## Lists
There are dozens of list types (such as `PodList`, `ServiceList`, and `NodeList`) defined in the Kubernetes API.
You can get more information about each list type from the [Kubernetes API](https://kubernetes.io/docs/reference/kubernetes-api/) documentation.
When you query the API for a particular type, all items returned by that query are of that type. For example, when you
ask for a list of services, the list type is shown as `kind: ServiceList` and each item in that list represents a single Service:
```console
GET /api/v1/services
---
{
"kind": "ServiceList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "2947301"
},
"items": [
{
"metadata": {
"name": "kubernetes",
"namespace": "default",
...
"metadata": {
"name": "kube-dns",
"namespace": "kube-system",
...
```
Some tools, such as `kubectl`, provide another way to query the Kubernetes API. Because the output of `kubectl` might include multiple list types, the list of items is represented as `kind: List`. For example:
```console
$ kubectl get services -A -o yaml
apiVersion: v1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
items:
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-06-03T14:54:12Z"
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
...
- apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
creationTimestamp: "2021-06-03T14:54:14Z"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: CoreDNS
name: kube-dns
namespace: kube-system
```
{{< note >}}
Keep in mind that the Kubernetes API does not have a `kind: List` type. `kind: List` is an internal mechanism type for lists of mixed resources and should not be depended upon.
{{< /note >}}
## Receiving resources as Tables

View File

@ -53,11 +53,11 @@ The **events.k8s.io/v1beta1** API version of Event will no longer be served in v
* Notable changes in **events.k8s.io/v1**:
* `type` is limited to `Normal` and `Warning`
* `involvedObject` is renamed to `regarding`
* `action`, `reason`, `reportingComponent`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
* `action`, `reason`, `reportingController`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
* use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
* use `reportingComponent` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
* use `reportingController` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
* use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
#### PodDisruptionBudget {#poddisruptionbudget-v125}

View File

@ -415,7 +415,7 @@ and make sure that the node is empty, then deconfigure the node.
Talking to the control-plane node with the appropriate credentials, run:
```bash
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```
Before removing the node, reset the state installed by `kubeadm`:

View File

@ -150,3 +150,4 @@ networking:
* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
* Read about [Dual-stack](/docs/concepts/services-networking/dual-stack/) cluster networking
* Learn more about the kubeadm [configuration format](/docs/reference/config-api/kubeadm-config.v1beta2/)

View File

@ -11,7 +11,7 @@ card:
<!-- overview -->
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">This page shows how to install the `kubeadm` toolbox.
For information how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
@ -240,8 +240,9 @@ Install CNI plugins (required for most pod network):
```bash
CNI_VERSION="v0.8.2"
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
```
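If you prefer not to hard-code `ARCH`, one possible approach (a convenience addition, not part of the official steps) is to derive it from `uname`:

```bash
# Map the machine architecture reported by uname to the names used in
# Kubernetes release artifacts.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)  ARCH="amd64" ;;
  aarch64) ARCH="arm64" ;;
  armv7l)  ARCH="arm" ;;
esac
```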
Define the directory to download command files
@ -260,15 +261,17 @@ Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)
```bash
CRICTL_VERSION="v1.17.0"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service:
```bash
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
sudo chmod +x {kubeadm,kubelet,kubectl}
RELEASE_VERSION="v0.4.0"
@ -314,4 +317,3 @@ If you are running into difficulties with kubeadm, please consult our [troublesh
## {{% heading "whatsnext" %}}
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)

View File

@ -163,7 +163,7 @@ services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetw
## Pods are not accessible via their Service IP
- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-fails-to-reach-itself-via-the-service-ip)
which allows pods to access themselves via their Service IP. This is an issue related to
[CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
add-on provider to get the latest status of their support for hairpin mode.
@ -258,7 +258,12 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```
The workaround is to tell `kubelet` which IP to use using `--node-ip`. When using DigitalOcean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) can be used for this.
The workaround is to tell `kubelet` which IP to use with the `--node-ip` flag.
When using DigitalOcean, it can be the public one (assigned to `eth0`) or
the private one (assigned to `eth1`) should you want to use the optional
private network. The `kubeletExtraArgs` section of the kubeadm
[`NodeRegistrationOptions` structure](/docs/reference/config-api/kubeadm-config.v1beta2/#kubeadm-k8s-io-v1beta2-NodeRegistrationOptions)
can be used for this.
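A minimal sketch of such a configuration, assuming the v1beta2 kubeadm API and a placeholder address, might look like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # passed to the kubelet as --node-ip=10.0.0.2
    node-ip: 10.0.0.2
```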
Then restart `kubelet`:
@ -331,7 +336,7 @@ Alternatively, you can try separating the `key=value` pairs like so:
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"`
but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`.
A known workaround is to use the kubeadm [configuration file](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags).
A known workaround is to use the kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta2/).
## kube-proxy scheduled before node is initialized by cloud-controller-manager

View File

@ -31,7 +31,7 @@ for database debugging.
1. Create a Deployment that runs MongoDB:
```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml
```
The output of a successful command verifies that the deployment was created:
@ -84,7 +84,7 @@ for database debugging.
2. Create a Service to expose MongoDB on the network:
```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml
```
The output of a successful command verifies that the Service was created:

View File

@ -0,0 +1,352 @@
---
title: Use Cascading Deletion in a Cluster
content_type: task
---
<!--overview-->
This page shows you how to specify the type of [cascading deletion](/docs/concepts/workloads/controllers/garbage-collection/#cascading-deletion)
to use in your cluster during {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
You also need to [create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment)
to experiment with the different types of cascading deletion. You will need to
recreate the Deployment for each type.
## Check owner references on your pods
Check that the `ownerReferences` field is present on your pods:
```shell
kubectl get pods -l app=nginx --output=yaml
```
The output has an `ownerReferences` field similar to this:
```
apiVersion: v1
...
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-deployment-6b474476c4
uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7
...
```
## Use foreground cascading deletion {#use-foreground-cascading-deletion}
By default, Kubernetes uses [background cascading deletion](/docs/concepts/workloads/controllers/garbage-collection/#background-deletion)
to delete dependents of an object. You can switch to foreground cascading deletion
using either `kubectl` or the Kubernetes API, depending on the Kubernetes
version your cluster runs. {{<version-check>}}
{{<tabs name="foreground_deletion">}}
{{% tab name="Kubernetes 1.20.x and later" %}}
You can delete objects using foreground cascading deletion using `kubectl` or the
Kubernetes API.
**Using kubectl**
Run the following command:
<!--TODO: verify release after which the --cascade flag is switched to a string in https://github.com/kubernetes/kubectl/commit/fd930e3995957b0093ecc4b9fd8b0525d94d3b4e-->
```shell
kubectl delete deployment nginx-deployment --cascade=foreground
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
like this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "nginx-deployment",
"namespace": "default",
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
"resourceVersion": "1363097",
"creationTimestamp": "2021-07-08T20:24:37Z",
"deletionTimestamp": "2021-07-08T20:27:39Z",
"finalizers": [
"foregroundDeletion"
]
...
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
You can delete objects using foreground cascading deletion by calling the
Kubernetes API.
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
like this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "nginx-deployment",
"namespace": "default",
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
"resourceVersion": "1363097",
"creationTimestamp": "2021-07-08T20:24:37Z",
"deletionTimestamp": "2021-07-08T20:27:39Z",
"finalizers": [
"foregroundDeletion"
]
...
```
{{% /tab %}}
{{</tabs>}}
## Use background cascading deletion {#use-background-cascading-deletion}
1. [Create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment).
1. Use either `kubectl` or the Kubernetes API to delete the Deployment,
depending on the Kubernetes version your cluster runs. {{<version-check>}}
{{<tabs name="background_deletion">}}
{{% tab name="Kubernetes version 1.20.x and later" %}}
You can delete objects using background cascading deletion using `kubectl`
or the Kubernetes API.
Kubernetes uses background cascading deletion by default, and does so
even if you run the following commands without the `--cascade` flag or the
`propagationPolicy` argument.
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=background
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
The output is similar to this:
```
"kind": "Status",
"apiVersion": "v1",
...
"status": "Success",
"details": {
"name": "nginx-deployment",
"group": "apps",
"kind": "deployments",
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
}
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
Kubernetes uses background cascading deletion by default, and does so
even if you run the following commands without the `--cascade` flag or the
`propagationPolicy: Background` argument.
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=true
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
The output is similar to this:
```
"kind": "Status",
"apiVersion": "v1",
...
"status": "Success",
"details": {
"name": "nginx-deployment",
"group": "apps",
"kind": "deployments",
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
}
```
{{% /tab %}}
{{</tabs>}}
## Delete owner objects and orphan dependents {#set-orphan-deletion-policy}
By default, when you tell Kubernetes to delete an object, the
{{<glossary_tooltip text="controller" term_id="controller">}} also deletes
dependent objects. You can make Kubernetes *orphan* these dependents using
`kubectl` or the Kubernetes API, depending on the Kubernetes version your
cluster runs. {{<version-check>}}
{{<tabs name="orphan_objects">}}
{{% tab name="Kubernetes version 1.20.x and later" %}}
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=orphan
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
The output contains `orphan` in the `finalizers` field, similar to this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"namespace": "default",
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
"creationTimestamp": "2021-07-09T16:46:37Z",
"deletionTimestamp": "2021-07-09T16:47:08Z",
"deletionGracePeriodSeconds": 0,
"finalizers": [
"orphan"
],
...
```
{{% /tab %}}
{{% tab name="Versions prior to Kubernetes 1.20.x" %}}
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
**Using kubectl**
Run the following command:
```shell
kubectl delete deployment nginx-deployment --cascade=false
```
**Using the Kubernetes API**
1. Start a local proxy session:
```shell
kubectl proxy --port=8080
```
1. Use `curl` to trigger deletion:
```shell
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
The output contains `orphan` in the `finalizers` field, similar to this:
```
"kind": "Deployment",
"apiVersion": "apps/v1",
"namespace": "default",
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
"creationTimestamp": "2021-07-09T16:46:37Z",
"deletionTimestamp": "2021-07-09T16:47:08Z",
"deletionGracePeriodSeconds": 0,
"finalizers": [
"orphan"
],
...
```
{{% /tab %}}
{{</tabs>}}
You can check that the Pods managed by the Deployment are still running:
```shell
kubectl get pods -l app=nginx
```
## {{% heading "whatsnext" %}}
* Learn about [owners and dependents](/docs/concepts/overview/working-with-objects/owners-dependents/) in Kubernetes.
* Learn about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about [garbage collection](/docs/concepts/workloads/controllers/garbage-collection/).

View File

@ -1,5 +1,5 @@
---
title: Managing Secret using Configuration File
title: Managing Secrets using Configuration File
content_type: task
weight: 20
description: Creating Secret objects using resource configuration file.
@ -193,6 +193,6 @@ kubectl delete secret mysecret
## {{% heading "whatsnext" %}}
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secret with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- Learn how to [manage Secrets with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

View File

@ -67,7 +67,7 @@ single quotes (`'`). For example, if your password is `S!B\*d$zDsb=`,
run the following command:
```shell
kubectl create secret generic dev-db-secret \
kubectl create secret generic db-user-pass \
--from-literal=username=devuser \
--from-literal=password='S!B\*d$zDsb='
```

View File

@ -1,5 +1,5 @@
---
title: Managing Secret using Kustomize
title: Managing Secrets using Kustomize
content_type: task
weight: 30
description: Creating Secret objects using kustomization.yaml file.
@ -135,6 +135,6 @@ kubectl delete secret db-user-pass-96mffmfh4k
## {{% heading "whatsnext" %}}
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
- Learn how to [manage Secret with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secrets with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secrets using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)

View File

@ -349,8 +349,11 @@ JSON Web Key Set (JWKS) at `/openid/v1/jwks`. The OpenID Provider Configuration
is sometimes referred to as the _discovery document_.
Clusters include a default RBAC ClusterRole called
`system:service-account-issuer-discovery`. No role bindings are provided
by default. Administrators may, for example, choose whether to bind the role to
`system:service-account-issuer-discovery`. A default RBAC ClusterRoleBinding
assigns this role to the `system:serviceaccounts` group, which all service
accounts implicitly belong to. This allows pods running on the cluster to access
the service account discovery document via their mounted service account token.
Administrators may, additionally, choose to bind the role to
`system:authenticated` or `system:unauthenticated` depending on their security
requirements and which external systems they intend to federate with.
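As a sketch of such an additional binding (the binding name is made up; the ClusterRole and group are the built-in names):
```shell
# Allow all authenticated principals to read the OIDC discovery endpoints.
# "oidc-discovery-authenticated" is only an example name for the binding.
kubectl create clusterrolebinding oidc-discovery-authenticated \
  --clusterrole=system:service-account-issuer-discovery \
  --group=system:authenticated
```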

View File

@ -11,6 +11,8 @@ This page shows how to perform a rolling update on a DaemonSet.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## DaemonSet Update Strategy
@ -191,4 +193,3 @@ kubectl delete ds fluentd-elasticsearch -n kube-system
* See [Performing a rollback on a DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/)
* See [Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)

View File

@ -198,14 +198,17 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe
## Autoscaling during rolling update
Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object, which manages the underlying replica sets for you.
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
Kubernetes lets you perform a rolling update on a Deployment. In that
case, the Deployment manages the underlying ReplicaSets for you.
When you configure autoscaling for a Deployment, you bind a
HorizontalPodAutoscaler to a single Deployment. The HorizontalPodAutoscaler
manages the `replicas` field of the Deployment. The deployment controller is responsible
for setting the `replicas` of the underlying ReplicaSets so that they add up to a suitable
number during the rollout and also afterwards.
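For example, a minimal sketch of binding a HorizontalPodAutoscaler to a Deployment (the Deployment name and thresholds are placeholders):
```shell
# The HPA targets the Deployment, not its ReplicaSets; the deployment controller
# then propagates the replica count during and after a rollout.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```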
Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update.
The reason this doesn't work is that when rolling update creates a new replication controller,
the Horizontal Pod Autoscaler will not be bound to the new replication controller.
If you perform a rolling update of a StatefulSet that has an autoscaled number of
replicas, the StatefulSet directly manages its set of Pods (there is no intermediate resource
similar to ReplicaSet).
## Support for cooldown/delay

View File

@ -379,7 +379,7 @@ This might impact other applications on the Node, so it's best to
**only do this in a test cluster**.
```shell
kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
```
Now you can watch as the Pod reschedules on a different Node:
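A sketch of one way to watch the rescheduling (add a label selector for your workload if you use one):
```shell
# Watch Pods with their node assignment until the replacement Pod is Running on another node.
kubectl get pods -o wide --watch
```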

View File

@ -0,0 +1,11 @@
---
title: "kubectl-convert overview"
description: >-
A kubectl plugin that allows you to convert manifests from one version
of a Kubernetes API to a different version.
headless: true
---
A plugin for the Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API
versions. This can be particularly helpful for migrating manifests to a non-deprecated API version with a newer Kubernetes release.
For more information, see [Migrate to non-deprecated APIs](/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis).
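A minimal usage sketch (the input file name and target version are placeholders):
```shell
# Rewrite a manifest to the requested API version and print the result to stdout.
kubectl convert -f ./deployment-old-api.yaml --output-version apps/v1
```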

View File

@ -172,7 +172,7 @@ kubectl version --client
{{< include "included/verify-kubectl.md" >}}
## Optional kubectl configurations
## Optional kubectl configurations and plugins
### Enable shell autocompletion
@ -185,6 +185,61 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}
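For instance, a minimal sketch of enabling Bash completion for the current user (the profile path is the conventional one; adjust for your shell setup):
```bash
# Generate the kubectl completion script and load it from your shell profile.
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```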
### Install `kubectl convert` plugin
{{< include "included/kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
```bash
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert
```
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
Validate the kubectl-convert binary against the checksum file:
```bash
echo "$(<kubectl-convert.sha256) kubectl-convert" | sha256sum --check
```
If valid, the output is:
```console
kubectl-convert: OK
```
If the check fails, `sha256sum` exits with a nonzero status and prints output similar to:
```bash
kubectl-convert: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum.
{{< /note >}}
1. Install kubectl-convert
```bash
sudo install -o root -g root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert
```
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
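You can also confirm that kubectl discovers the plugin on your `PATH`:
```shell
# Lists all kubectl plugins found on PATH; kubectl-convert should appear in the list.
kubectl plugin list
```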
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}

View File

@ -155,7 +155,7 @@ If you are on macOS and using [Macports](https://macports.org/) package manager,
{{< include "included/verify-kubectl.md" >}}
## Optional kubectl configurations
## Optional kubectl configurations and plugins
### Enable shell autocompletion
@ -168,6 +168,82 @@ Below are the procedures to set up autocompletion for Bash and Zsh.
{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}
### Install `kubectl convert` plugin
{{< include "included/kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
{{< tabs name="download_convert_binary_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert"
{{< /tab >}}
{{< /tabs >}}
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
{{< tabs name="download_convert_checksum_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert.sha256"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256"
{{< /tab >}}
{{< /tabs >}}
Validate the kubectl-convert binary against the checksum file:
```bash
echo "$(<kubectl-convert.sha256) kubectl-convert" | shasum -a 256 --check
```
If valid, the output is:
```console
kubectl-convert: OK
```
If the check fails, `shasum` exits with nonzero status and prints output similar to:
```bash
kubectl-convert: FAILED
shasum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum.
{{< /note >}}
1. Make the kubectl-convert binary executable
```bash
chmod +x ./kubectl-convert
```
1. Move the kubectl-convert binary to a file location on your system `PATH`.
```bash
sudo mv ./kubectl-convert /usr/local/bin/kubectl-convert
sudo chown root: /usr/local/bin/kubectl-convert
```
{{< note >}}
Make sure `/usr/local/bin` is in your PATH environment variable.
{{< /note >}}
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}

View File

@ -130,7 +130,7 @@ Edit the config file with a text editor of your choice, such as Notepad.
{{< include "included/verify-kubectl.md" >}}
## Optional kubectl configurations
## Optional kubectl configurations and plugins
### Enable shell autocompletion
@ -140,6 +140,49 @@ Below are the procedures to set up autocompletion for Zsh, if you are running th
{{< include "included/optional-kubectl-configs-zsh.md" >}}
### Install `kubectl convert` plugin
{{< include "included/kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
```powershell
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe
```
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
```powershell
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe.sha256
```
Validate the kubectl-convert binary against the checksum file:
- Using Command Prompt to manually compare `CertUtil`'s output to the downloaded checksum file:
```cmd
CertUtil -hashfile kubectl-convert.exe SHA256
type kubectl-convert.exe.sha256
```
- Using PowerShell to automate the verification with the `-eq` operator, which returns a `True` or `False` result:
```powershell
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
```
1. Add the binary to your `PATH`.
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}

View File

@ -348,6 +348,11 @@ node with the required profile.
### Restricting profiles with the PodSecurityPolicy
{{< note >}}
PodSecurityPolicy is deprecated in Kubernetes v1.21, and will be removed in v1.25.
See [PodSecurityPolicy documentation](/docs/concepts/policy/pod-security-policy/) for more information.
{{< /note >}}
If the PodSecurityPolicy extension is enabled, cluster-wide AppArmor restrictions can be applied. To
enable the PodSecurityPolicy, the following flag must be set on the `apiserver`:
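The flag referenced above is the admission-plugins flag; a sketch, with the cluster's existing plugin list abbreviated as `...`:
```bash
# Append PodSecurityPolicy to whatever admission plugins the cluster already enables.
kube-apiserver --enable-admission-plugins=...,PodSecurityPolicy
```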

View File

@ -37,7 +37,7 @@ weight: 10
<li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>
<li><i>NodePort</i> - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>
<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
<li><i>ExternalName</i> - Maps the Service to the contents of the <code>externalName</code> field (e.g. `foo.bar.example.com`), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of <code>kube-dns</code>, or CoreDNS version 0.0.8 or higher.</li>
<li><i>ExternalName</i> - Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of <code>kube-dns</code>, or CoreDNS version 0.0.8 or higher.</li>
</ul>
<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
<p>Additionally, note that there are some use cases with Services that involve not defining <code>selector</code> in the spec. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is you are strictly using <code>type: ExternalName</code>.</p>
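As an illustration of the ExternalName type described above, a minimal sketch (the Service name is a placeholder):
```shell
# Creates a Service that resolves to a CNAME record pointing at foo.bar.example.com.
kubectl create service externalname my-service --external-name foo.bar.example.com
```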

View File

@ -266,7 +266,7 @@ to also be deleted. Never assume you'll be able to access data if its volume cla
The Pods in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile)
image from Google's [container registry](https://cloud.google.com/container-registry/docs/).
The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)
The Docker image above is based on [debian-base](https://github.com/kubernetes/release/tree/master/images/build/debian-base)
and includes OpenJDK 8.
This image includes a standard Cassandra installation from the Apache Debian repo.

View File

@ -937,7 +937,7 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
drain the node on which the `zk-0` Pod is scheduled.
```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
```
@ -972,7 +972,7 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.
```shell
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
```
```
@ -1015,7 +1015,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
`zk-2` is scheduled.
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
```
@ -1101,7 +1101,7 @@ zk-1 1/1 Running 0 13m
Attempt to drain the node on which `zk-2` is scheduled.
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
The output:

View File

@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
spec:
selector:
matchLabels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
replicas: 1
template:
metadata:
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
spec:
containers:
- name: mongo
image: mongo:4.2
args:
- --bind_ip
- 0.0.0.0
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 27017

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
spec:
ports:
- port: 27017
targetPort: 27017
selector:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
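A sketch of applying the two manifests above (the file names are assumptions about how the manifests might be saved locally):
```shell
# Create the MongoDB backend Deployment and its Service, then check the Service.
kubectl apply -f mongo-deployment.yaml
kubectl apply -f mongo-service.yaml
kubectl get service mongo
```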

View File

@ -78,9 +78,9 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| July 2021 | 2021-07-10 | 2021-07-14 |
| August 2021 | 2021-08-07 | 2021-08-11 |
| September 2021 | 2021-09-11 | 2021-09-15 |
| October 2021 | 2021-10-09 | 2021-10-13 |
## Detailed Release History for Active Branches
@ -92,6 +92,7 @@ End of Life for **1.21** is **2022-06-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
| 1.21.4 | 2021-08-07 | 2021-08-11 | |
| 1.21.3 | 2021-07-10 | 2021-07-14 | |
| 1.21.2 | 2021-06-12 | 2021-06-16 | |
| 1.21.1 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
@ -104,6 +105,7 @@ End of Life for **1.20** is **2022-02-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
| 1.20.10 | 2021-08-07 | 2021-08-11 | |
| 1.20.9 | 2021-07-10 | 2021-07-14 | |
| 1.20.8 | 2021-06-12 | 2021-06-16 | |
| 1.20.7 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
@ -122,6 +124,7 @@ End of Life for **1.19** is **2021-10-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ------------------------------------------------------------------------- |
| 1.19.14 | 2021-08-07 | 2021-08-11 | |
| 1.19.13 | 2021-07-10 | 2021-07-14 | |
| 1.19.12 | 2021-06-12 | 2021-06-16 | |
| 1.19.11 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
@ -140,7 +143,7 @@ End of Life for **1.19** is **2021-10-28**
These releases are no longer supported.
| Minor Version | Final Patch Release | EOL date | NOTE |
| MINOR VERSION | FINAL PATCH RELEASE | EOL DATE | NOTE |
| ------------- | ------------------- | ---------- | ---------------------------------------------------------------------- |
| 1.18 | 1.18.20 | 2021-06-18 | Created to resolve regression introduced in 1.18.19 |
| 1.18 | 1.18.19 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |

View File

@ -10,6 +10,7 @@ and building/packaging Kubernetes.
The responsibilities of each role are described below.
- [Contact](#contact)
- [Security Embargo Policy](#security-embargo-policy)
- [Handbooks](#handbooks)
- [Release Managers](#release-managers)
- [Becoming a Release Manager](#becoming-a-release-manager)
@ -28,6 +29,10 @@ The responsibilities of each role are described below.
| [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) | N/A | Private | Private discussion for privileged Release Managers | Release Managers, SIG Release leadership |
| [security-release-team@kubernetes.io](mailto:security-release-team@kubernetes.io) | [#security-release-team](https://kubernetes.slack.com/archives/G0162T1RYHG) (channel) / @security-rel-team (user group) | Private | Security release coordination with the Product Security Committee | [security-discuss-private@kubernetes.io](mailto:security-discuss-private@kubernetes.io), [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) |
### Security Embargo Policy
Some information about releases is subject to embargo, and we have defined a policy for how those embargoes are set. Refer to the [Security Embargo Policy](https://github.com/kubernetes/security/blob/master/private-distributors-list.md#embargo-policy) for more information.
## Handbooks
**NOTE: The Patch Release Team and Branch Manager handbooks will be de-duplicated at a later date.**

View File

@ -16,6 +16,7 @@ spec:
spec:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: fluentd-elasticsearch

View File

@ -178,7 +178,7 @@ Lorsque le StatefulSet {{< glossary_tooltip term_id="controller" >}} crée un Po
il ajoute une étiquette, `statefulset.kubernetes.io/pod-name`, renseignée avec le nom du Pod.
Cette étiquette vous permet d'attacher un Service à un Pod spécifique du StatefulSet.
## Garanties de déploiment et de mise à l'échelle
## Garanties de déploiement et de mise à l'échelle
* Pour un StatefulSet avec N réplicas, lorsque les Pods sont déployés, ils sont créés de manière séquentielle, dans l'ordre {0..N-1}.
* Lorsque les Pods sont supprimés, ils sont terminés dans l'ordre inverse, {N-1..0}.

View File

@ -1,303 +0,0 @@
---
reviewers:
- yastij
title: Choisir la bonne solution
description: Panorama de solutions Kubernetes
weight: 10
content_type: concept
---
<!-- overview -->
Kubernetes peut fonctionner sur des plateformes variées: sur votre PC portable, sur des VMs d'un fournisseur de cloud, ou un rack
de serveurs bare-metal. L'effort demandé pour configurer un cluster varie de l'éxécution d'une simple commande à la création
de votre propre cluster personnalisé. Utilisez ce guide pour choisir la solution qui correspond le mieux à vos besoins.
Si vous voulez simplement jeter un coup d'oeil rapide, utilisez alors de préférence les [solutions locales basées sur Docker](#solutions-locales).
Lorsque vous êtes prêts à augmenter le nombre de machines et souhaitez bénéficier de la haute disponibilité, une
[solution hébergée](#solutions-hebergées) est la plus simple à déployer et à maintenir.
[Les solutions cloud clés en main](#solutions-clés-en-main) ne demandent que peu de commande pour déployer et couvrent un large panel de
fournisseurs de cloud. [Les solutions clés en main pour cloud privé](#solutions-on-premises-clés-en-main) possèdent la simplicité des solutions cloud clés en main combinées avec la sécurité de votre propre réseau privé.
Si vous avez déjà un moyen de configurer vos resources, utilisez [kubeadm](/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) pour facilement
déployer un cluster grâce à une seule ligne de commande par machine.
[Les solutions personnalisées](#solutions-personnalisées) varient d'instructions pas à pas, à des conseils relativement généraux pour déployer un
cluster Kubernetes en partant du début.
<!-- body -->
## Solutions locales
* [Minikube](/fr/docs/setup/learning-environment/minikube/) est une méthode pour créer un cluster Kubernetes local à noeud unique pour le développement et le test. L'installation est entièrement automatisée et ne nécessite pas de compte de fournisseur de cloud.
* [Docker Desktop](https://www.docker.com/products/docker-desktop) est une
application facile à installer pour votre environnement Mac ou Windows qui vous permet de
commencer à coder et déployer votre code dans des conteneurs en quelques minutes sur un nœud unique Kubernetes.
* [Minishift](https://docs.okd.io/latest/minishift/) installe la version communautaire de la plate-forme d'entreprise OpenShift
de Kubernetes pour le développement local et les tests. Il offre une VM tout-en-un (`minishift start`) pour Windows, macOS et Linux,
le `oc cluster up` containerisé (Linux uniquement) et [est livré avec quelques Add Ons faciles à installer](https://github.com/minishift/minishift-addons/tree/master/add-ons).
* [MicroK8s](https://microk8s.io/) fournit une commande unique d'installation de la dernière version de Kubernetes sur une machine locale
pour le développement et les tests. L'installation est rapide (~30 sec) et supporte de nombreux plugins dont Istio avec une seule commande.
* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) peut utiliser VirtualBox sur votre machine
pour déployer Kubernetes sur une ou plusieurs machines virtuelles afin de développer et réaliser des scénarios de test. Cette solution
peut créer un cluster multi-nœuds complet.
* [IBM Cloud Private-CE (Community Edition) sur Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) est un script IaC (Infrastructure as Code) basé sur Terraform/Packer/BASH pour créer un cluster LXD à sept nœuds (1 Boot, 1 Master, 1 Management, 1 Proxy et 3 Workers) sur une machine Linux.
* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) est un cluster Kubernetes multi-nœuds (tandis que minikube est
un nœud unique) qui ne nécessite qu'un docker-engine. Il utilise la technique du docker-in-docker pour déployer le cluster Kubernetes.
* [Ubuntu sur LXD](/docs/getting-start-guides/ubuntu/local/) supporte un déploiement de 9 instances sur votre machine locale.
## Solutions hebergées
* [AppsCode.com](https://appscode.com/products/cloud-deployment/) fournit des clusters Kubernetes managés pour divers clouds publics, dont AWS et Google Cloud Platform.
* [APPUiO](https://appuio.ch) propose une plate-forme de cloud public OpenShift, supportant n'importe quel workload Kubernetes. De plus, APPUiO propose des Clusters OpenShift privés et managés, fonctionnant sur n'importe quel cloud public ou privé.
* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offre un service managé de Kubernetes.
* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/) offre des clusters Kubernetes managés.
* [Containership Kubernetes Engine (CKE)](https://containership.io/containership-platform) Approvisionnement et gestion intuitive de clusters
Kubernetes sur GCP, Azure, AWS, Packet, et DigitalOcean. Mises à niveau transparentes, auto-scaling, métriques, création de
workloads, et plus encore.
* [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/) offre un service managé de Kubernetes.
* [Giant Swarm](https://giantswarm.io/product/) offre des clusters Kubernetes managés dans leur propre centre de données, on-premises ou sur des clouds public.
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offre des clusters Kubernetes managés.
* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-getting-started) offre des clusters Kubernetes managés avec choix d'isolation, des outils opérationnels, une vision intégrée de la sécurité des images et des conteneurs et une intégration avec Watson, IoT et les données.
* [Kubermatic](https://www.loodse.com) fournit des clusters Kubernetes managés pour divers clouds publics, y compris AWS et Digital Ocean, ainsi que sur site avec intégration OpenStack.
* [Kublr](https://kublr.com) offre des clusters Kubernetes sécurisés, évolutifs et hautement fiables sur AWS, Azure, GCP et on-premises,
de qualité professionnelle. Il inclut la sauvegarde et la reprise après sinistre prêtes à l'emploi, la journalisation et la surveillance centralisées multi-clusters, ainsi qu'une fonction d'alerte intégrée.
* [Madcore.Ai](https://madcore.ai) est un outil CLI orienté développement pour déployer l'infrastructure Kubernetes dans AWS. Les masters, un groupe d'autoscaling pour les workers sur des spot instances, les ingress-ssl-lego, Heapster, et Grafana.
* [Nutanix Karbon](https://www.nutanix.com/products/karbon/) est une plateforme de gestion et d'exploitation Kubernetes multi-clusters hautement disponibles qui simplifie l'approvisionnement, les opérations et la gestion du cycle de vie de Kubernetes.
* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offre des clusters Kubernetes gérés et optimisés par OpenShift.
* [OpenShift Online](https://www.openshift.com/features/) fournit un accès hébergé gratuit aux applications Kubernetes.
* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) est un service entièrement géré, évolutif et hautement disponible que vous pouvez utiliser pour déployer vos applications conteneurisées dans le cloud.
* [Platform9](https://platform9.com/products/kubernetes/) offre des Kubernetes gérés on-premises ou sur n'importe quel cloud public, et fournit une surveillance et des alertes de santé 24h/24 et 7j/7. (Kube2go, une plate-forme de service de déploiement de cluster Kubernetes pour le déploiement de l'interface utilisateur Web9, a été intégrée à Platform9 Sandbox.)
* [Stackpoint.io](https://stackpoint.io) fournit l'automatisation et la gestion de l'infrastructure Kubernetes pour plusieurs clouds publics.
* [SysEleven MetaKube](https://www.syseleven.io/products-services/managed-kubernetes/) offre un Kubernetes-as-a-Service sur un cloud public OpenStack. Il inclut la gestion du cycle de vie, les tableaux de bord d'administration, la surveillance, la mise à l'échelle automatique et bien plus encore.
* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) est une offre d'entreprise Kubernetes-as-a-Service faisant partie du catalogue de services Cloud VMware qui fournit des clusters Kubernetes faciles à utiliser, sécurisés par défaut, rentables et basés sur du SaaS.
## Solutions clés en main
Ces solutions vous permettent de créer des clusters Kubernetes sur une gamme de fournisseurs de Cloud IaaaS avec seulement
quelques commandes. Ces solutions sont activement développées et bénéficient du soutien actif de la communauté.
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
* [APPUiO](https://appuio.ch)
* [AWS](/docs/setup/turnkey/aws/)
* [Azure](/docs/setup/turnkey/azure/)
* [CenturyLink Cloud](/docs/setup/turnkey/clc/)
* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/)
* [Containership](https://containership.io/containership-platform)
* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
* [Gardener](https://gardener.cloud/)
* [Giant Swarm](https://giantswarm.io)
* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/)
* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer)
* [Kontena Pharos](https://kontena.io/pharos/)
* [Kubermatic](https://cloud.kubermatic.io)
* [Kublr](https://kublr.com/)
* [Madcore.Ai](https://madcore.ai/)
* [Nirmata](https://nirmata.com/)
* [Nutanix Karbon](https://www.nutanix.com/products/karbon/)
* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
* [Tectonic by CoreOS](https://coreos.com/tectonic)
* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks)
## Solutions On-Premises clés en main
Ces solutions vous permettent de créer des clusters Kubernetes sur votre cloud privé sécurisé avec seulement quelques commandes.
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [APPUiO](https://appuio.ch)
* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
* [Giant Swarm](https://giantswarm.io)
* [GKE On-Prem | Google Cloud](https://cloud.google.com/gke-on-prem/)
* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
* [Kontena Pharos](https://kontena.io/pharos/)
* [Kubermatic](https://www.loodse.com)
* [Kublr](https://kublr.com/)
* [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/)
* [Nirmata](https://nirmata.com/)
* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) by [Red Hat](https://www.redhat.com)
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
## Solutions personnalisées
Kubernetes peut fonctionner sur une large gamme de fournisseurs de Cloud et d'environnements bare-metal, ainsi qu'avec de nombreux
systèmes d'exploitation.
Si vous pouvez trouver un guide ci-dessous qui correspond à vos besoins, utilisez-le. C'est peut-être un peu dépassé, mais...
ce sera plus facile que de partir de zéro. Si vous voulez repartir de zéro, soit parce que vous avez des exigences particulières,
ou simplement parce que vous voulez comprendre ce qu'il y a à l'interieur de Kubernetes
essayez le guide [Getting Started from Scratch](/docs/setup/release/building-from-source/).
### Universel
Si vous avez déjà un moyen de configurer les ressources d'hébergement, utilisez
[kubeadm](/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) pour déployer facilement un cluster
avec une seule commande par machine.
### Cloud
Ces solutions sont des combinaisons de fournisseurs de cloud computing et de systèmes d'exploitation qui ne sont pas couverts par les solutions ci-dessus.
* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
* [Gardener](https://gardener.cloud/)
* [Kublr](https://kublr.com/)
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
* [Kubespray](/docs/setup/custom-cloud/kubespray/)
* [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke)
### VMs On-Premises
* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
* [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization/)
* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com)
* [oVirt](/docs/setup/on-premises-vm/ovirt/)
* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
### Bare Metal
* [CoreOS](/docs/setup/custom-cloud/coreos/)
* [Digital Rebar](/docs/setup/on-premises-metal/krib/)
* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com)
### Integrations
Ces solutions fournissent une intégration avec des orchestrateurs, des resources managers ou des plateformes tierces.
* [DCOS](/docs/setup/on-premises-vm/dcos/)
* Community Edition DCOS utilise AWS
* Enterprise Edition DCOS supporte l'hébergement cloud, les VMs on-premises, et le bare-metal
## Tableau des Solutions
Ci-dessous vous trouverez un tableau récapitulatif de toutes les solutions listées précédemment.
| Fournisseur de IaaS | Config. Mgmt. | OS | Réseau | Docs | Niveau de support |
|------------------------------------------------|------------------------------------------------------------------------------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| tous | tous | multi-support | tout les CNI | [docs](/fr/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle)) |
| Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial |
| Docker Enterprise | personnalisé | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial |
| IBM Cloud Private | Ansible | multi-support | multi-support | [docs](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html) | [Commercial](https://www.ibm.com/mysupport/s/topic/0TO500000001o0fGAA/ibm-cloud-private?language=en_US&productId=01t50000004X1PWAA0) and [Community](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/troubleshoot/support_types.html) |
| Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial |
| Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial |
| AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial |
| Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) |
| Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial |
| Kublr | personnalisé | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial |
| Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial |
| IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-getting-started) | Commercial |
| Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial |
| GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project |
| Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial |
| Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/setup/turnkey/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) |
| Bare-metal | personnalisé | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project |
| Bare-metal | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
| libvirt | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
| KVM | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
| DCOS | Marathon | CoreOS/Alpine | personnalisé | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) |
| AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community |
| GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) |
| Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) |
| CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) |
| VMware vSphere | tous | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) |
| Bare-metal | personnalisé | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) |
| lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| Rackspace | personnalisé | CoreOS | flannel/calico/canal | [docs](https://developer.rackspace.com/docs/rkaas/latest/) | [Commercial](https://www.rackspace.com/managed-kubernetes) |
| VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
| AWS | Saltstack | Debian | AWS | [docs](/docs/setup/turnkey/aws/) | Community ([@justinsb](https://github.com/justinsb)) |
| AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) |
| Bare-metal | personnalisé | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) |
| oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) |
| tous | tous | tous | tous | [docs](/docs/setup/release/building-from-source/) | Community ([@erictune](https://github.com/erictune)) |
| tous | tous | tous | tous | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community |
| tous | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher) |
| tous | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/) |
| Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial |
| Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial |
| IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial |
| Digital Rebar | kubeadm | tous | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar)) |
| VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial |
| Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial |
{{< note >}}
Le tableau ci-dessus est ordonné par versions testées et utilisées dans les noeuds, suivis par leur niveau de support.
{{< /note >}}
### Définition des colonnes
* **IaaS Provider** est le produit ou l'organisation qui fournit les machines virtuelles ou physiques (nœuds) sur lesquelles Kubernetes fonctionne.
* **OS** est le système d'exploitation de base des nœuds.
* **Config. Mgmt.** est le système de gestion de configuration qui permet d'installer et de maintenir Kubernetes sur les
nœuds.
* **Le réseau** est ce qui implémente le [modèle de réseau](/docs/concepts/cluster-administration/networking/). Ceux qui ont le type de réseautage
Aucun_ ne peut pas prendre en charge plus d'un nœud unique, ou peut prendre en charge plusieurs nœuds VM dans un nœud physique unique.
* **Conformité** indique si un cluster créé avec cette configuration a passé la conformité du projet.
pour le support de l'API et des fonctionnalités de base de Kubernetes v1.0.0.
* **Niveaux de soutien**
* **Projet** : Les contributeurs de Kubernetes utilisent régulièrement cette configuration, donc elle fonctionne généralement avec la dernière version.
de Kubernetes.
* **Commercial** : Une offre commerciale avec son propre dispositif d'accompagnement.
* **Communauté** : Soutenu activement par les contributions de la communauté. Peut ne pas fonctionner avec les versions récentes de Kubernetes.
* **Inactif** : Pas de maintenance active. Déconseillé aux nouveaux utilisateurs de Kubernetes et peut être retiré.
* **Note** contient d'autres informations pertinentes, telles que la version de Kubernetes utilisée.
<!-- reference style links below here -->
<!-- GCE conformance test result -->
[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
<!-- Vagrant conformance test result -->
[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
<!-- Google Kubernetes Engine conformance test result -->
[3]: https://gist.github.com/erictune/2f39b22f72565365e59b

View File

@ -27,7 +27,7 @@ Laman ini akan menjabarkan beberapa *add-ons* yang tersedia serta tautan instruk
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus.
* [Contiv](http://contiv.github.io) menyediakan jaringan yang dapat dikonfigurasi (*native* L3 menggunakan BGP, *overlay* menggunakan vxlan, klasik L2, dan Cisco-SDN/ACI) untuk berbagai penggunaan serta *policy framework* yang kaya dan beragam. Proyek Contiv merupakan proyek [open source](http://github.com/contiv). Laman [instalasi](http://github.com/contiv/install) ini akan menjabarkan cara instalasi, baik untuk klaster dengan kubeadm maupun non-kubeadm.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), yang berbasis dari [Tungsten Fabric](https://tungsten.io), merupakan sebuah proyek *open source* yang menyediakan virtualisasi jaringan *multi-cloud* serta platform manajemen *policy*. Contrail dan Tungsten Fabric terintegrasi dengan sistem orkestrasi lainnya seperti Kubernetes, OpenShift, OpenStack dan Mesos, serta menyediakan mode isolasi untuk mesin virtual (VM), kontainer/pod dan *bare metal*.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) merupakan solusi jaringan yang mendukung multipel jaringan pada Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift.

View File

@ -107,7 +107,7 @@ dapat menemukan kata-kata tersebut dalam bahasa Indonesia.
### Panduan untuk kata-kata API Objek Kubernetes
Gunakan gaya "CamelCase" untuk menulis objek API Kubernetes, lihat daftar
lengkapnya [di sini](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/).
lengkapnya [di sini](/docs/reference/kubernetes-api/).
Sebagai contoh:
* *Benar*: PersistentVolume. *Salah*: volume persisten, `PersistentVolume`,
@ -130,7 +130,7 @@ ditulis dalam huruf kapital pada halaman asli bahasa Inggris.
### Panduan untuk "Feature Gate" Kubernetes
Istilah [_functional gate_](https://kubernetes.io/ko/docs/reference/command-line-tools-reference/feature-gates/)
Istilah [_feature gate_](/docs/reference/command-line-tools-reference/feature-gates/)
Kubernetes tidak perlu diterjemahkan ke dalam bahasa Indonesia dan tetap
dipertahankan dalam bentuk aslinya.
@ -175,4 +175,4 @@ scale | | skala | |
process | kata kerja | memproses | https://kbbi.web.id/proses |
replica | kata benda | replika | https://kbbi.web.id/replika |
flag | | tanda, parameter, argumen | |
event | | _event_ | |
event | | _event_ | |

View File

@ -16,6 +16,7 @@ spec:
spec:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: fluentd-elasticsearch

View File

@ -87,10 +87,12 @@ kubectl edit SampleDB/example-database # 手動でいくつかの設定を変更
* [Custom Resources](/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources/)をより深く学びます
* ユースケースに合わせた、既製のオペレーターを[OperatorHub.io](https://operatorhub.io/)から見つけます
* 自前のオペレーターを書くために既存のツールを使います、例:
* [Charmed Operator Framework](https://juju.is/)
* [KUDO](https://kudo.dev/)Kubernetes Universal Declarative Operatorを使います
* [kubebuilder](https://book.kubebuilder.io/)を使います
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html)を自分で実装したWebHooksと一緒に使います
* [Operator Framework](https://operatorframework.io)を使います
* [shell-operator](https://github.com/flant/shell-operator)
* 自前のオペレーターを他のユーザーのために[公開](https://operatorhub.io/)します
* オペレーターパターンを紹介している[CoreOSオリジナル記事](https://coreos.com/blog/introducing-operators.html)を読みます
* Google Cloudが出したオペレーター作成のベストプラクティス[記事](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)を読みます

View File

@ -1,5 +1,8 @@
---
title: "Kubernetesのオブジェクトについて"
title: "Kubernetesオブジェクトを利用する"
weight: 40
description: >
Kubernetesオブジェクトは、Kubernetes上で永続的なエンティティです。Kubernetesはこれらのエンティティを使い、クラスターの状態を表現します。
Kubernetesオブジェクトモデルと、これらのオブジェクトの利用方法について学びます。
---

View File

@ -136,7 +136,8 @@ content_type: concept
| `TokenRequest` | `true` | Beta | 1.12 | |
| `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 |
| `TokenRequestProjection` | `true` | Beta | 1.12 | |
| `TTLAfterFinished` | `false` | Alpha | 1.12 | |
| `TTLAfterFinished` | `false` | Alpha | 1.12 | 1.20 |
| `TTLAfterFinished` | `true` | Beta | 1.21 | |
| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
| `TopologyManager` | `true` | Beta | 1.18 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |

View File

@ -457,8 +457,6 @@ error: one plugin warning was found
cat ./kubectl-whoami
```
次の例では、下記の内容を含んだ`kubectl-whoami`が既に作成済であることを前提としています。
The next few examples assume that you already made `kubectl-whoami` have
the following contents:
```shell
#!/bin/bash

View File

@ -134,7 +134,7 @@ Kubernetes v1.14以降、Windowsコンテナワークロードは、Group Manage
Podの仕様で`"kubernetes.io/os": windows`のようなnodeSelectorが指定されていない場合、PodをWindowsまたはLinuxの任意のホストでスケジュールすることができます。WindowsコンテナはWindowsでのみ実行でき、LinuxコンテナはLinuxでのみ実行できるため、これは問題になる可能性があります。ベストプラクティスは、nodeSelectorを使用することです。
ただし、多くの場合、ユーザーには既存の多数のLinuxコンテナのdepolyment、およびコミュニティHelmチャートのような既成構成のエコシステムやOperatorのようなプログラム的にPodを生成するケースがあることを理解しています。このような状況では、nodeSelectorsを追加するための構成変更をためらう可能性があります。代替策は、Taintsを使用することです。kubeletは登録中にTaintsを設定できるため、Windowsだけで実行する時に自動的にTaintを追加するように簡単に変更できます。
ただし、多くの場合、ユーザーには既存の多数のLinuxコンテナのdeployment、およびコミュニティHelmチャートのような既成構成のエコシステムやOperatorのようなプログラム的にPodを生成するケースがあることを理解しています。このような状況では、nodeSelectorsを追加するための構成変更をためらう可能性があります。代替策は、Taintsを使用することです。kubeletは登録中にTaintsを設定できるため、Windowsだけで実行する時に自動的にTaintを追加するように簡単に変更できます。
例:`--register-with-taints='os=windows:NoSchedule'`

View File

@ -119,6 +119,8 @@ kubectl get secret mysecret -o yaml
```yaml
apiVersion: v1
data:
config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
kind: Secret
metadata:
creationTimestamp: 2018-11-15T20:40:59Z
@ -127,8 +129,6 @@ metadata:
resourceVersion: "7225"
uid: c280ad2e-e916-11e8-98f2-025000000001
type: Opaque
data:
config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
```
`kubectl get`と`kubectl describe`コマンドはデフォルトではSecretの内容を表示しません。
@ -154,6 +154,8 @@ stringData:
```yaml
apiVersion: v1
data:
username: YWRtaW5pc3RyYXRvcg==
kind: Secret
metadata:
creationTimestamp: 2018-11-15T20:46:46Z
@ -162,8 +164,6 @@ metadata:
resourceVersion: "7579"
uid: 91460ecb-e917-11e8-98f2-025000000001
type: Opaque
data:
username: YWRtaW5pc3RyYXRvcg==
```
`YWRtaW5pc3RyYXRvcg==`をデコードすると`administrator`となります。

View File

@ -1,6 +1,6 @@
---
title: HostAliasesを使用してPodの/etc/hostsにエントリーを追加する
content_type: concept
content_type: task
weight: 60
min-kubernetes-server-version: 1.7
---
@ -13,7 +13,7 @@ Podの`/etc/hosts`ファイルにエントリーを追加すると、DNSやそ
HostAliasesを使用せずにファイルを修正することはおすすめできません。このファイルはkubeletが管理しており、Podの作成や再起動時に上書きされる可能性があるためです。
<!-- body -->
<!-- steps -->
## デフォルトのhostsファイルの内容

View File

@ -37,13 +37,13 @@ StatefulSet自体が削除された後で、関連するヘッドレスサービ
kubectl delete service <service-name>
```
kubectlを使ってStatefulSetを削除すると0にスケールダウンされ、すべてのPodが削除されます。PodではなくStatefulSetだけを削除したい場合は、`--cascade=false`を使用してください。
kubectlを使ってStatefulSetを削除すると0にスケールダウンされ、すべてのPodが削除されます。PodではなくStatefulSetだけを削除したい場合は、`--cascade=orphan`を使用してください。
```shell
kubectl delete -f <file.yaml> --cascade=false
kubectl delete -f <file.yaml> --cascade=orphan
```
`--cascade=false`を`kubectl delete`に渡すことで、StatefulSetオブジェクト自身が削除された後でも、StatefulSetによって管理されていたPodは残ります。Podに`app=myapp`というラベルが付いている場合は、次のようにして削除できます:
`--cascade=orphan`を`kubectl delete`に渡すことで、StatefulSetオブジェクト自身が削除された後でも、StatefulSetによって管理されていたPodは残ります。Podに`app=myapp`というラベルが付いている場合は、次のようにして削除できます:
```shell
kubectl delete pods -l app=myapp

View File

@ -246,7 +246,7 @@ StatefulSetに関連するすべてのリソースを自動的に破棄するよ
## Cassandraコンテナの環境変数
このチュートリアルのPodでは、Googleの[コンテナレジストリ](https://cloud.google.com/container-registry/docs/)の[`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile)イメージを使用しました。このDockerイメージは[debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)をベースにしており、OpenJDK 8が含まれています。
このチュートリアルのPodでは、Googleの[コンテナレジストリ](https://cloud.google.com/container-registry/docs/)の[`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile)イメージを使用しました。このDockerイメージは[debian-base](https://github.com/kubernetes/release/tree/master/images/build/debian-base)をベースにしており、OpenJDK 8が含まれています。
このイメージには、Apache Debianリポジトリの標準のCassandraインストールが含まれます。
環境変数を利用すると、`cassandra.yaml`に挿入された値を変更できます。

View File

@ -10,7 +10,7 @@ sitemap:
{{% blocks/feature image="flower" %}}
K8s라고도 알려진 [쿠버네티스]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})는 컨테이너화된 애플리케이션을 자동으로 배포, 스케일링 및 관리해주는 오픈소스 시스템입니다.
애플리케이션을 구성하는 컨테이너들의 쉬운 관리 및 발견을 위해서 컨테이너들을 논리적인 단위로 그룹화합니다. 쿠버네티스는 [Google에서 15년간 프로덕션 워크로드 운영한 경험](http://queue.acm.org/detail.cfm?id=2898444)을 토대로 구축되었으며, 커뮤니티에서 제공한 최상의 아이디어와 방법들이 결합되어 있습니다.
애플리케이션을 구성하는 컨테이너들의 쉬운 관리 및 발견을 위해서 컨테이너들을 논리적인 단위로 그룹화합니다. 쿠버네티스는 [Google에서 15년간 프로덕션 워크로드 운영한 경험](https://queue.acm.org/detail.cfm?id=2898444)을 토대로 구축되었으며, 커뮤니티에서 제공한 최상의 아이디어와 방법들이 결합되어 있습니다.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
@ -48,7 +48,7 @@ Google이 일주일에 수십억 개의 컨테이너들을 운영하게 해준
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">Revisit KubeCon EU 2021</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -23,7 +23,7 @@ case_study_details:
<h2>Solution</h2>
<p>Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, <a href="http://kubernetes.io/">Kubernetes</a> container orchestration. Kubernetes, Ghods says, has allowed Box's developers to "target a universal set of concepts that are portable across all clouds."</p>
<p>Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, <a href="https://kubernetes.io/">Kubernetes</a> container orchestration. Kubernetes, Ghods says, has allowed Box's developers to "target a universal set of concepts that are portable across all clouds."</p>
<h2>Impact</h2>
@ -37,7 +37,7 @@ case_study_details:
In the summer of 2014, Box was feeling the pain of a decade's worth of hardware and software infrastructure that wasn't keeping up with the company's needs.
{{< /case-studies/lead >}}
<p>A platform that allows its more than 50 million users (including governments and big businesses like <a href="https://www.ge.com/">General Electric</a>) to manage and share content in the cloud, Box was originally a <a href="http://php.net/">PHP</a> monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It's been a huge challenge thus far because of all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."</p>
<p>A platform that allows its more than 50 million users (including governments and big businesses like <a href="https://www.ge.com/">General Electric</a>) to manage and share content in the cloud, Box was originally a <a href="https://php.net/">PHP</a> monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It's been a huge challenge thus far because of all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."</p>
<p>Box's cloud native journey accelerated that June, when Ghods attended <a href="https://www.docker.com/events/dockercon">DockerCon</a>. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.</p>

View File

@ -1,4 +1,7 @@
---
title: Nodes
content_type: concept
weight: 10
@ -8,7 +11,8 @@ weight: 10
Kubernetes runs your workload by placing containers into Pods to run on _nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}.
@ -272,17 +276,18 @@ The kubelet is responsible for creating and updating the `NodeStatus` and a Lease object
#### Stability
In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy at the same time (the NodeReady condition is ConditionUnknown or
ConditionFalse):
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
  (default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has `--large-cluster-size-threshold` nodes or
  fewer - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to
  `--secondary-node-eviction-rate` (default 0.01) per second (see the flag sketch after this list).
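These thresholds are kube-controller-manager flags. As a hedged sketch only (the exact way flags are passed depends on how your control plane is deployed, for example via a static Pod manifest), an invocation that simply restates the documented defaults looks like this:
```shell
# Node-controller eviction tuning, shown with the default values quoted above.
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50
```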
These policies are implemented per availability zone because one zone might become partitioned from the master while the others remain connected.
@ -293,7 +298,7 @@ ConditionFalse).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy, the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster).
In such a case, the node controller assumes that there is some problem with connectivity
to the master, and stops all evictions until some connectivity is restored.
@ -347,7 +352,8 @@ During a node shutdown, the kubelet makes sure that pods follow the normal [pod termination process]
to delay node shutdown for a given duration, and therefore depends on systemd.
Graceful node shutdown is controlled with the `GracefulNodeShutdown`
[feature gate](/ko/docs/reference/command-line-tools-reference/feature-gates/),
which is enabled by default in 1.21.
By default, both configuration options described below, `ShutdownGracePeriod` and
`ShutdownGracePeriodCriticalPods`, are set to zero, thus not activating
the graceful node shutdown functionality.
@ -371,6 +377,20 @@ During a node shutdown, the kubelet makes sure that pods follow the normal [pod termination process]
is allotted to the graceful termination of normal pods, and the last 10 seconds is
reserved for the termination of
[critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
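The two settings live in the kubelet configuration file. A minimal sketch, assuming the 30s/10s split used in the example above (the values are illustrative, not a recommendation):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node shutdown is delayed so that pods can terminate.
shutdownGracePeriod: 30s
# Portion of shutdownGracePeriod reserved for critical pods.
shutdownGracePeriodCriticalPods: 10s
```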
{{< note >}}
Pods evicted during a graceful node shutdown are marked as `Failed`.
Running `kubectl get pods` shows the status of the evicted pods as `Shutdown`.
And `kubectl describe pod` indicates that the pod was evicted because of the node shutting down:
```
Status: Failed
Reason: Shutdown
Message: Node is shutting, evicting pods
```
Failed pod objects are preserved until explicitly deleted or [cleaned up by garbage collection](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
This is a change of behavior compared to an abrupt node termination.
{{< /note >}}
## {{% heading "whatsnext" %}}

View File

@ -50,7 +50,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
You can also specify a URL as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/nginx/nginx-deployment.yaml
```
```shell

View File

@ -17,6 +17,11 @@ Use kubeconfig files to organize information about clusters, users, namespaces,
does not mean that there is a file named `kubeconfig`.
{{< /note >}}
{{< warning >}}
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
{{< /warning>}}
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
You can specify other kubeconfig files by setting the `KUBECONFIG` environment
variable or by setting the
[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag.
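For instance, a minimal sketch of both options (the file path is a placeholder, not a file this page defines):
```shell
# Point kubectl at a non-default kubeconfig for the whole shell session
export KUBECONFIG=$HOME/.kube/dev-cluster.config

# ...or only for a single command
kubectl --kubeconfig=$HOME/.kube/dev-cluster.config get nodes
```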
@ -154,4 +159,3 @@ File and path references in a kubeconfig file are relative to the location of the kubeconfig file

View File

@ -77,6 +77,20 @@ weight: 10
If `imagePullPolicy` is defined without a specific value, it is set to `Always`.
### ImagePullBackOff
When a kubelet starts creating containers for a Pod using a container runtime,
it might be possible the container is in a
[Waiting](/ko/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting) state because of `ImagePullBackOff`.
The `ImagePullBackOff` status means that a container could not start because Kubernetes
could not pull the container image (for reasons such as an invalid image name, or pulling
from a private registry without an `imagePullSecret`). The `BackOff` part indicates
that Kubernetes will keep trying to pull the image, with an increasing back-off delay.
Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
which is 300 seconds (5 minutes).
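For illustration only, a Pod along these lines would typically end up in `ImagePullBackOff`; the registry, repository, and tag below are deliberately non-existent placeholders.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-pull-backoff-demo   # hypothetical name, for illustration
spec:
  containers:
  - name: app
    # This reference does not resolve, so every pull attempt fails and the
    # kubelet backs off with an increasing delay (capped at 300 seconds).
    image: registry.example.com/private/app:does-not-exist
```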
## Multi-architecture images with image indexes
As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) so that different systems can fetch the right binary image for the machine architecture they are using.

View File

@ -116,7 +116,7 @@ kubectl edit SampleDB/example-database # manually change some settings
* [Charmed Operator Framework](https://juju.is/)
* using [kubebuilder](https://book.kubebuilder.io/)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* implementing yourself, using [Metacontroller](https://metacontroller.app/) along with WebHooks
* implementing yourself, using [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks
* [오퍼레이터 프레임워크](https://operatorframework.io)
* [shell-operator](https://github.com/flant/shell-operator)
@ -124,6 +124,7 @@ kubectl edit SampleDB/example-database # manually change some settings
## {{% heading "whatsnext" %}}
* Read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md)
* Learn more about [custom resources](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
* [Publish](https://operatorhub.io/) your operator for other people to use

View File

@ -11,7 +11,8 @@ weight: 30
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25.
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. For more information on the deprecation, see
[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
Pod Security Policies enable fine-grained authorization of pod creation
and updates.
@ -48,10 +49,9 @@ A _Pod Security Policy_ controls security-sensitive aspects of the pod specification
## Enabling Pod Security Policies
Pod security policy control is implemented as an optional (but recommended)
[admission
controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy).
[Enabling the admission controller](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)
Pod security policy control is implemented as an optional [admission
controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy).
[Enabling the admission controller](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)
enforces PodSecurityPolicies, but enabling it without authorizing any policies
**will prevent any pods from being created** in the cluster.
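As a hedged sketch (flag placement depends on how your control plane is deployed, for example a static Pod manifest or a systemd unit), enabling it amounts to adding `PodSecurityPolicy` to the API server's admission plugin list:
```shell
# Add PodSecurityPolicy to kube-apiserver's enabled admission plugins;
# the other plugin shown is only an example of an existing list entry.
kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurityPolicy ...
```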
@ -110,11 +110,15 @@ roleRef:
name: <role name>
apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
# Authorize all service accounts in a namespace (recommended):
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts:<authorized namespace>
# Authorize specific service accounts (not recommended):
- kind: ServiceAccount
name: <authorized service account name>
namespace: <authorized pod namespace>
# Authorize specific users (not recommended):
- kind: User
apiGroup: rbac.authorization.k8s.io
name: <authorized user name>
@ -124,21 +128,55 @@ subjects:
grants access only to pods running in that namespace. This can be paired with the
system group to grant access to all pods run in the namespace:
```yaml
# Authorize all service accounts in a namespace:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts
# Or equivalently, all authenticated users in a namespace:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:authenticated
```
For a complete example of RBAC bindings, see
[Role Binding Examples](/docs/reference/access-authn-authz/rbac#role-binding-examples).
For a full example of authorizing a PodSecurityPolicy, see
[below](#예제).
### Recommended practice
PodSecurityPolicy is being replaced by a new, simplified `PodSecurity` {{< glossary_tooltip
text="admission controller" term_id="admission-controller" >}}.
For more details on this change, see
[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
Follow these guidelines to simplify the migration from PodSecurityPolicy to the new admission controller:
1. Limit your PodSecurityPolicies to the policies defined by the [Pod Security Standards](/docs/concepts/security/pod-security-standards/):
   - {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
   - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
   - {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}}
2. Only bind PodSecurityPolicies to entire namespaces, by using the `system:serviceaccounts:<namespace>` group
   (where `<namespace>` is the target namespace). For example:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This ClusterRoleBinding lets all pods in the "development" namespace use the baseline PodSecurityPolicy (PSP).
kind: ClusterRoleBinding
metadata:
name: psp-baseline-namespaces
roleRef:
kind: ClusterRole
name: psp-baseline
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
name: system:serviceaccounts:development
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: system:serviceaccounts:canary
apiGroup: rbac.authorization.k8s.io
```
### Troubleshooting
@ -567,7 +605,7 @@ spec:
Linux capabilities provide a finer-grained breakdown of the privileges traditionally
associated with the superuser. Some of these capabilities can be used to escalate
privileges or for container breakout, and may be restricted by the
PodSecurityPolicy. For more details on Linux capabilities, see
[capabilities(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html).
[capabilities(7)](https://man7.org/linux/man-pages/man7/capabilities.7.html).
The following fields take a list of capabilities, specified as the capability name in uppercase
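As a hedged sketch of such fields (the policy name and the capability choices are illustrative only), a PodSecurityPolicy can constrain capabilities along these lines:
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: capabilities-example      # hypothetical name, for illustration
spec:
  # Capabilities that containers may request in addition to the runtime defaults.
  allowedCapabilities:
  - NET_BIND_SERVICE
  # Capabilities that must be dropped from every container.
  requiredDropCapabilities:
  - ALL
  # Remaining required PodSecurityPolicy fields, kept permissive for brevity.
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```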
@ -661,5 +699,10 @@ spec:
## {{% heading "whatsnext" %}}
- Learn about the future of PodSecurityPolicy in [PodSecurityPolicy Deprecation: Past, Present, and
  Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
- See the [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
- Refer to the [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for API details.

Some files were not shown because too many files have changed in this diff