diff --git a/content/de/docs/concepts/containers/images.md b/content/de/docs/concepts/containers/images.md
index 8f41b0c2e1..03ec9b4a0e 100644
--- a/content/de/docs/concepts/containers/images.md
+++ b/content/de/docs/concepts/containers/images.md
@@ -5,7 +5,7 @@ weight: 10
 ---

{{% capture overview %}}
-Sie erstellen ihr Docker Image und laden es in eine Registry hoch bevor es in einem Kubernetes Pod referenziert werden kann.
+Sie erstellen ihr Docker Image und laden es in eine Registry hoch, bevor es in einem Kubernetes Pod referenziert werden kann.
Die `image` Eigenschaft eines Containers unterstüzt die gleiche Syntax wie die des `docker` Kommandos, inklusive privater Registries und Tags.

@@ -16,8 +16,8 @@ Die `image` Eigenschaft eines Containers unterstüzt die gleiche Syntax wie die

## Aktualisieren von Images

-Die Standardregel für das Herunterladen von Images ist `IfNotPresent`, dies führt dazu das dass Kubelet Images überspringt die bereits auf einem Node vorliegen.
-Wenn sie stattdessen möchten das ein Image immer forciert heruntergeladen wird, können sie folgendes tun:
+Die Standardregel für das Herunterladen von Images ist `IfNotPresent`, dies führt dazu, dass das Kubelet Images überspringt, die bereits auf einem Node vorliegen.
+Wenn Sie stattdessen möchten, dass ein Image immer forciert heruntergeladen wird, können Sie Folgendes tun:

- Die `imagePullPolicy` des Containers auf `Always` setzen.
@@ -25,7 +25,7 @@ Wenn sie stattdessen möchten, dass ein Image immer forciert heruntergeladen wird,
- Die `imagePullPolicy` und den Tag des Images auslassen.
- Den [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) Admission Controller aktivieren.

-Beachten Sie das die die Nutzung des `:latest` Tags vermeiden sollten, weitere Informationen siehe: [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images).
+Beachten Sie, dass Sie die Nutzung des `:latest` Tags vermeiden sollten. Für weitere Informationen siehe: [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images).

## Multi-Architektur Images mit Manifesten bauen

@@ -38,17 +38,17 @@
https://docs.docker.com/edge/engine/reference/commandline/manifest/

Hier einige Beispiele wie wir dies in unserem Build - Prozess nutzen:
https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=

-Diese Kommandos basieren rein auf dem Docker Kommandozeileninterface und werden auch damit ausgeführt. Sie sollten entweder die Datei `$HOME/.docker/config.json` bearbeiten und den `experimental` Schlüssel auf `enabled`setzen, oder einfach die Umgebungsvariable `DOCKER_CLI_EXPERIMENTAL` auf `enabled`setzen wenn Sie das Docker Kommandozeileninterface aufrufen.
+Diese Kommandos basieren rein auf dem Docker Kommandozeileninterface und werden auch damit ausgeführt. Sie sollten entweder die Datei `$HOME/.docker/config.json` bearbeiten und den `experimental` Schlüssel auf `enabled` setzen, oder einfach die Umgebungsvariable `DOCKER_CLI_EXPERIMENTAL` auf `enabled` setzen, wenn Sie das Docker Kommandozeileninterface aufrufen.

{{< note >}}
-Nutzen die bitte Docker *18.06 oder neuer*, ältere Versionen haben entweder bugs oder unterstützen die experimentelle Kommandozeilenoption nicht. Beispiel: https://github.com/docker/cli/issues/1135 verursacht Probleme unter containerd.
+Nutzen Sie bitte Docker *18.06 oder neuer*; ältere Versionen haben entweder Bugs oder unterstützen die experimentelle Kommandozeilenoption nicht.
Beispiel: https://github.com/docker/cli/issues/1135 verursacht Probleme unter containerd.
{{< /note >}}

-Wenn mit alten Manifesten Probleme auftreten können sie die alten Manifeste in `$HOME/.docker/manifests` entfernen um von vorne zu beginnen.
+Wenn mit alten Manifesten Probleme auftreten, können Sie die alten Manifeste in `$HOME/.docker/manifests` entfernen, um von vorne zu beginnen.

-Für Kubernetes selbst haben wir typischerweise Images mit dem Suffix `-$(ARCH)` genutzt. Um die Abwärtskompatibilität zu erhalten bitten wir Sie die älteren Images mit diesen Suffixen zu generieren. Die Idee dahinter ist z.B. das `pause` image zu generieren, welches das Manifest für alle Architekturen hat, `pause-amd64` wäre dann abwärtskompatibel zu älteren Konfigurationen, oder YAML - Dateien die ein Image mit Suffixen hart kodiert haben.
+Für Kubernetes selbst nutzen wir typischerweise Images mit dem Suffix `-$(ARCH)`. Um die Abwärtskompatibilität zu erhalten, bitten wir Sie, die älteren Images mit diesen Suffixen zu generieren. Die Idee dahinter ist z.B., das `pause` Image zu generieren, welches das Manifest für alle Architekturen hat; `pause-amd64` wäre dann abwärtskompatibel zu älteren Konfigurationen oder YAML - Dateien, die ein Image mit Suffixen hart kodiert enthalten.

## Nutzung einer privaten Registry

@@ -73,32 +73,32 @@ Authentifizierungsdaten können auf verschiedene Weisen hinterlegt werden:

  - Setzt die Konfiguration der Nodes durch einen Cluster - Aministrator voraus
- Im Voraus heruntergeladene Images
  - Alle Pods können jedes gecachte Image auf einem Node nutzen
-  - Setzt root - Zugriff auf allen Nodes zum einrichten voraus
-  - Spezifizieren eines ImagePullSecrets bei einem Pod
-  - Nur Pods die eigene Schlüssel vorhalten haben Zugriff auf eine private Registry
+  - Setzt root - Zugriff auf allen Nodes zum Einrichten voraus
+  - Spezifizieren eines ImagePullSecrets auf einem Pod
+  - Nur Pods, die eigene Secrets tragen, haben Zugriff auf eine private Registry

-Jeder Option wird im Folgenden im Detail beschrieben
+Jede Option wird im Folgenden im Detail beschrieben.

### Bei Nutzung der Google Container Registry

Kubernetes hat eine native Unterstützung für die
[Google Container Registry (GCR)](https://cloud.google.com/tools/container-registry/) wenn es auf der Google Compute
-Engine (GCE) läuft. Wenn Sie ihren Cluster auf GCE oder der Google Kubernetes Engine betreiben, genügt es einfach den vollen Image Namen zu nutzen (z.B. gcr.io/my_project/image:tag ).
+Engine (GCE) läuft. Wenn Sie ihren Cluster auf GCE oder der Google Kubernetes Engine betreiben, genügt es, einfach den vollen Image Namen zu nutzen (z.B. `gcr.io/my_project/image:tag`).

-Alle Pods in einem Cluster haben dann lesenden Zugriff auf Images in dieser Registry.
+Alle Pods in einem Cluster haben dann Lesezugriff auf Images in dieser Registry.

Das Kubelet authentifiziert sich bei GCR mit Nutzung des Google service Kontos der jeweiligen Instanz.
-Das Google service Konto der Instanz hat einen `https://www.googleapis.com/auth/devstorage.read_only`, so kann es vom GCR des Projektes hochladen, aber nicht herunterladen.
+Das Google Servicekonto der Instanz hat einen `https://www.googleapis.com/auth/devstorage.read_only` Scope, somit kann es Images aus der GCR des Projektes herunterladen, aber nicht hochladen.

### Bei Nutzung der Amazon Elastic Container Registry

-Kubernetes eine native Unterstützung für die [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) wenn Knoten AWS EC2 Instanzen sind.
+Kubernetes bietet native Unterstützung für die [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/), wenn Knoten AWS EC2 Instanzen sind.

Es muss einfach nur der komplette Image Name (z.B. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`) in der Pod - Definition genutzt werden.

-Alle Benutzer eines Clusters die Pods erstellen dürfen können dann jedes der Images in der ECR Registry zum Ausführen von Pods nutzen.
+Alle Benutzer eines Clusters, die Pods erstellen dürfen, können dann jedes der Images in der ECR Registry zum Ausführen von Pods nutzen.

Das Kubelet wird periodisch ECR Zugriffsdaten herunterladen und auffrischen, es benötigt hierfür die folgenden Berechtigungen:

@@ -126,12 +126,12 @@ Fehlerbehebung:

- `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`

### Bei Nutzung der Azure Container Registry (ACR)
-Bei Nutzung der [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) können sie sich entweder als ein administrativer Nutzer, oder als ein Service Principal authentifizieren
+Bei Nutzung der [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) können Sie sich entweder als administrativer Nutzer oder als Service Principal authentifizieren.
In jedem Fall wird die Authentifizierung über die Standard - Docker Authentifizierung ausgeführt. Diese Anleitung bezieht sich auf das [azure-cli](https://github.com/azure/azure-cli) Kommandozeilenwerkzeug.

Sie müssen zunächst eine Registry und Authentifizierungsdaten erstellen, eine komplette Dokumentation dazu finden sie hier: [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).

-Sobald sie ihre Container Registry erstelt haben, nutzen sie die folgenden Authentifizierungsdaten:
+Sobald Sie ihre Container Registry erstellt haben, nutzen Sie die folgenden Authentifizierungsdaten:

* `DOCKER_USER` : Service Principal oder Administratorbenutzername
* `DOCKER_PASSWORD`: Service Principal Password oder Administratorpasswort
@@ -139,37 +139,37 @@ Sobald sie ihre Container Registry erstellt haben, nutzen sie die folgenden Authe
* `DOCKER_EMAIL`: `${some-email-address}`

Wenn sie diese Variablen befüllt haben, können sie:
-[configure a Kubernetes Secret and use it to deploy a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
+[ein Kubernetes Secret konfigurieren und damit einen Pod deployen](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).

### Bei Nutzung der IBM Cloud Container Registry
-Die IBM Cloud Container Registry bietet eine mandantenfähige Private Image Registry die Sie nutzen können um ihre Docker Images sicher speichern und teilen zu können.
-Im Standard werden Images in ihrer Private Registry vom integrierten Schwachstellenscaner durchsucht um Sicherheitsprobleme und potentielle Schwachstellen zu finden. Benutzer können ihren IBM Cloud Account nutzen um Zugang zu ihren Images zu erhalten, oder um einen Token zu gernerieren, der Zugriff auf die Registry Namespaces erlaubt.
+Die IBM Cloud Container Registry bietet eine mandantenfähige Private Image Registry, die Sie nutzen können, um ihre Docker Images sicher zu speichern und zu teilen.
+Standardmäßig werden Images in ihrer Private Registry vom integrierten Schwachstellenscanner durchsucht, um Sicherheitsprobleme und potentielle Schwachstellen zu finden.
Benutzer können ihren IBM Cloud Account nutzen, um Zugang zu ihren Images zu erhalten, oder um einen Token zu generieren, der Zugriff auf die Registry Namespaces erlaubt.

Um das IBM Cloud Container Registry Kommandozeilenwerkzeug zu installieren und einen Namespace für ihre Images zu erstellen, folgen sie dieser Dokumentation [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started).

-Sie können die IBM Cloud Container Registry nutzen um Container aus [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) und ihren eigenen Images in den `default` Namespace ihres IBM Cloud Kubernetes Service Clusters zu deployen.
+Sie können die IBM Cloud Container Registry nutzen, um Container aus [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) und ihren eigenen Images in den `default` Namespace ihres IBM Cloud Kubernetes Service Clusters zu deployen.
Um einen Container in einen anderen Namespace, oder um ein Image aus einer anderen IBM Cloud Container Registry Region oder einem IBM Cloud account zu deployen, erstellen sie ein Kubernetes `imagePullSecret`.
Weitere Informationen finden sie unter: [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images).

### Knoten für die Nutzung einer Private Registry konfigurieren

{{< note >}}
-Wenn sie auf Google Kubernetes Engine laufen gibt es schon eine `.dockercfg` auf jedem Knoten die Zugriffsdaten für ihre Google Container Registry beinhaltet. Dann kann die folgende Vorgehensweise nicht angewendet werden.
+Wenn Sie Google Kubernetes Engine verwenden, gibt es schon eine `.dockercfg` auf jedem Knoten, die Zugriffsdaten für ihre Google Container Registry beinhaltet. Dann kann die folgende Vorgehensweise nicht angewendet werden.
{{< /note >}}

{{< note >}}
-Wenn sie auf AWS EC2 laufen und die EC2 Container Registry (ECR) nutzen, wird das Kubelet auf jedem Knoten die ECR Zugriffsdaten verwalten und aktualisieren. Dann kann die folgende Vorgehensweise nicht angewendet werden.
+Wenn Sie AWS EC2 verwenden und die EC2 Container Registry (ECR) nutzen, wird das Kubelet auf jedem Knoten die ECR Zugriffsdaten verwalten und aktualisieren. Dann kann die folgende Vorgehensweise nicht angewendet werden.
{{< /note >}}

{{< note >}}
-Diese Vorgehensweise ist anwendbar wenn sie ihre Knoten - Konfiguration ändern können, sie wird nicht zuverlässig auf GCE other einem anderen Cloud - Provider funktionieren der automatisch Knoten ersetzt.
+Diese Vorgehensweise ist anwendbar, wenn Sie ihre Knoten-Konfiguration ändern können; sie wird nicht zuverlässig auf GCE oder einem anderen Cloud - Provider funktionieren, der automatisch Knoten ersetzt.
{{< /note >}}

{{< note >}}
-Kubernetes unterstützt zur Zeit nur die `auths` und `HttpHeaders` Sektionen der Docker Konfiguration . Das bedeutet das die Hilfswerkzeuge (`credHelpers` ooderr `credsStore`) nicht unterstützt werden.
+Kubernetes unterstützt zurzeit nur die `auths` und `HttpHeaders` Abschnitte der Dockerkonfiguration. Das bedeutet, dass die Hilfswerkzeuge (`credHelpers` oder `credsStore`) nicht unterstützt werden.
{{< /note >}}

-Docker speichert Schlüssel für eigene Registries in der Datei `$HOME/.dockercfg` oder `$HOME/.docker/config.json`. Wenn sie die gleiche Datei in einen der unten aufgeführten Suchpfade ablegen wird Kubelet sie als Hilfswerkzeug für Zugriffsdaten nutzen wenn es Images bezieht.
+Docker speichert Schlüssel für eigene Registries entweder unter `$HOME/.dockercfg` oder `$HOME/.docker/config.json`. Wenn Sie die gleiche Datei in einen der unten aufgeführten Suchpfade speichern, wird Kubelet sie als Hilfswerkzeug für Zugriffsdaten beim Beziehen von Images nutzen.

* `{--root-dir:-/var/lib/kubelet}/config.json`
@@ -182,20 +182,21 @@ Docker speichert Schlüssel für eigene Registries in der Datei `$HOME/.dockercf
* `/.dockercfg`

{{< note >}}
-Eventuell müssen sie `HOME=/root` in ihrer Umgebungsvariablendatei setzen
+Eventuell müssen Sie `HOME=/root` in ihrer Umgebungsvariablendatei setzen.
{{< /note >}}

-Dies sind die empfohlenen Schritte um ihre Knoten für eine Nutzung einer eigenen Registry zu konfigurieren, in diesem Beispiel führen sie folgende Schritte auf ihrem Desktop/Laptop aus:
+Dies sind die empfohlenen Schritte, um ihre Knoten für die Nutzung einer eigenen Registry zu konfigurieren.
+In diesem Beispiel führen Sie folgende Schritte auf ihrem Desktop/Laptop aus:

 1. Führen sie `docker login [server]` für jeden Satz ihrer Zugriffsdaten aus. Dies aktualisiert `$HOME/.docker/config.json`.
- 2. Prüfen Sie `$HOME/.docker/config.json` in einem Editor darauf ob dort nur Zugriffsdaten enthalten sind die Sie nutzen möchten.
+ 2. Prüfen Sie `$HOME/.docker/config.json` in einem Editor darauf, ob dort nur Zugriffsdaten enthalten sind, die Sie nutzen möchten.
 3. Erhalten sie eine Liste ihrer Knoten:
    - Wenn sie die Namen benötigen: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
    - Wenn sie die IP - Adressen benötigen: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
 4. Kopieren sie ihre lokale `.docker/config.json` in einen der oben genannten Suchpfade.
    - Zum Beispiel: `for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done`

-Prüfen durch das Erstellen eines Pods der ein eigenes Image nutzt, z.B.:
+Prüfen durch das Erstellen eines Pods, der ein eigenes Image nutzt, z.B.:

```yaml
kubectl apply -f - <}}
-Wenn sie auf Google Kubernetes Engine laufen gibt es schon eine `.dockercfg` auf jedem Knoten die Zugriffsdaten für ihre Google Container Registry beinhaltet. Dann kann die folgende Vorgehensweise nicht angewendet werden.
+Wenn Sie Google Kubernetes Engine verwenden, gibt es schon eine `.dockercfg` auf jedem Knoten, die Zugriffsdaten für ihre Google Container Registry beinhaltet. Dann kann die folgende Vorgehensweise nicht angewendet werden.
{{< /note >}}

{{< note >}}
-Diese Vorgehensweise ist anwendbar wenn sie ihre Knoten - Konfiguration ändern können, sie wird nicht zuverlässig auf GCE other einem anderen Cloud - Provider funktionieren der automatisch Knoten ersetzt.
+Diese Vorgehensweise ist anwendbar, wenn Sie ihre Knoten-Konfiguration ändern können; sie wird nicht zuverlässig auf GCE oder einem anderen Cloud - Provider funktionieren, der automatisch Knoten ersetzt.
{{< /note >}}

-Im Standard wird das Kubelet versuchen jedes Image von der spezifizierten Registry herunterzuladen.
-Falls jedoch die `imagePullPolicy` Eigenschaft der Containers auf `IfNotPresent` oder `Never` gesetzt wurde, wird ein lokales Image genutzt (präferiert oder exklusiv, je nach dem).
+Standardmäßig wird das Kubelet versuchen, jedes Image von der spezifizierten Registry herunterzuladen.
+Falls jedoch die `imagePullPolicy` Eigenschaft des Containers auf `IfNotPresent` oder `Never` gesetzt wurde, wird ein lokales Image genutzt (präferiert oder exklusiv, je nachdem).
-Wenn Sie sich auf im Voraus heruntergeladene Images als Ersatz für eine Registry - Authentifizierung verlassen möchten, müssen sie sicherstellen das alle Knoten die gleichen im voraus heruntergeladenen Images aufweisen.
+Wenn Sie sich auf im Voraus heruntergeladene Images als Ersatz für eine Registry - Authentifizierung verlassen möchten, müssen Sie sicherstellen, dass alle Knoten die gleichen, im Voraus heruntergeladenen Images aufweisen.

-Diese Medthode kann dazu genutzt werden bestimmte Images aus Geschwindigkeitsgründen im Voraus zu laden, oder als Alternative zur Authentifizierung an einer eigenen Registry zu nutzen.
+Diese Methode kann dazu genutzt werden, bestimmte Images aus Geschwindigkeitsgründen im Voraus zu laden, oder als Alternative zur Authentifizierung an einer eigenen Registry.

Alle Pods haben Leserechte auf alle im Voraus geladenen Images.

@@ -259,24 +260,24 @@ Kubernetes unterstützt die Spezifikation von Registrierungsschlüsseln für ein

#### Erstellung eines Secrets mit einer Docker Konfiguration

-Führen sie foldenen Befehl mit Ersetzung der groß geschriebenen Werte aus:
+Führen Sie folgenden Befehl aus und ersetzen Sie dabei die großgeschriebenen Werte:

```shell
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```

-Wenn sie bereits eine Datei mit Docker Zugriffsdaten haben, könenn sie die Zugriffsdaten als ein Kubernetes Secret importieren:
+Wenn Sie bereits eine Datei mit Docker-Zugriffsdaten haben, können Sie die Zugriffsdaten als ein Kubernetes Secret importieren:
[Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) beschreibt die Erstellung.
-Dies ist insbesondere dann sinnvoll wenn sie mehrere eigene Container Registries nutzen, da `kubectl create secret docker-registry` ein Secret erstellt das nur mit einer einzelnen eigenen Registry funktioniert.
+Dies ist insbesondere dann sinnvoll, wenn Sie mehrere eigene Container Registries nutzen, da `kubectl create secret docker-registry` ein Secret erstellt, das nur mit einer einzelnen eigenen Registry funktioniert.

{{< note >}}
-Pods können nur eigene Image Pull Secret in ihrem eigenen Namespace referenzieren, somit muss dieser Prozess jedes mal einzeln für je Namespace angewendet werden.
+Pods können eigene Image Pull Secrets nur in ihrem eigenen Namespace referenzieren, somit muss dieser Prozess jedes Mal einzeln für jeden Namespace angewendet werden.
{{< /note >}}

#### Referenzierung eines imagePullSecrets bei einem Pod

-Nun können sie Pods erstellen die dieses Secret referenzieren indem sie eine `imagePullSecrets` Sektion zu ihrer Pod - Definition hinzufügen.
+Nun können Sie Pods erstellen, die dieses Secret referenzieren, indem Sie einen Abschnitt `imagePullSecrets` zu ihrer Pod - Definition hinzufügen.

```shell
cat <<EOF > pod.yaml
@@ -299,9 +300,9 @@ resources:
EOF
```

-Dies muss für jeden Pod getan werden der eine eigene Registry nutzt.
+Dies muss für jeden Pod getan werden, der eine eigene Registry nutzt.

-Die Erstellung dieser Sektion kann jedoch automatisiert werden indem man imagePullSecrets einer serviceAccount](/docs/user-guide/service-accounts) Ressource hinzufügt.
+Die Erstellung dieser Sektion kann jedoch automatisiert werden, indem man imagePullSecrets einer [serviceAccount](/docs/user-guide/service-accounts) Ressource hinzufügt.
[Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) bietet detaillierte Anweisungen hierzu.

Sie können dies in Verbindung mit einer auf jedem Knoten genutzten `.docker/config.json` benutzen, die Zugriffsdaten werden dann zusammengeführt. Diese Vorgehensweise wird in der Google Kubernetes Engine funktionieren.

diff --git a/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md b/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md
new file mode 100644
index 0000000000..42b6a3c5a0
--- /dev/null
+++ b/content/en/blog/_posts/2020-05-21-wsl2-dockerdesktop-k8s.md
@@ -0,0 +1,589 @@
---
layout: blog
title: "WSL+Docker: Kubernetes on the Windows Desktop"
date: 2020-05-21
slug: wsl-docker-kubernetes-on-the-windows-desktop
---

**Authors**: [Nuno do Carmo](https://twitter.com/nunixtech) Docker Captain and WSL Corsair; [Ihor Dvoretskyi](https://twitter.com/idvoretskyi), Developer Advocate, Cloud Native Computing Foundation

# Introduction

New to Windows 10 and WSL2, or new to Docker and Kubernetes? Welcome to this blog post where we will install Kubernetes in Docker ([KinD](https://kind.sigs.k8s.io/)) and [Minikube](https://minikube.sigs.k8s.io/docs/) from scratch.

# Why Kubernetes on Windows?

For the last few years, Kubernetes has become a de facto standard platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in cloud environments (public, private or hybrid) or on bare metal, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation.

Kubernetes was originally designed to be deployed and used in Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible.

Also, WSL brought the ability to run Kubernetes on Windows almost seamlessly!

Below, we will cover in brief how to install and use various solutions to run Kubernetes locally.

# Prerequisites

Since we will explain how to install KinD, we won't go into too much detail around the installation of KinD's dependencies.

However, here is the list of the prerequisites needed and their version/lane:

- OS: Windows 10 version 2004, Build 19041
- [WSL2 enabled](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install)
  - In order to install the distros as WSL2 by default, once WSL2 is installed, run the command `wsl.exe --set-default-version 2` in Powershell
- WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04
- [Docker Desktop for Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows), stable channel - the version used is 2.2.0.4
- [Optional] Microsoft Terminal installed from the Windows Store
  - Open the Windows store and type "Terminal" in the search box; it will (normally) be the first option

![Windows Store Terminal](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-windows-store-terminal.png)

And that's actually it. For Docker Desktop for Windows, there is no need to configure anything yet, as we will explain it in the next section.
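Before moving on, it can be worth double-checking that the distro really runs under WSL 2. A minimal sketch of that check from Powershell, using the standard `wsl.exe` flags (the distro name assumes the Ubuntu-18.04 install from the prerequisites):

```powershell
# List the installed distros, their state, and the WSL version they run under
wsl.exe --list --verbose

# Expected output - the VERSION column should read 2:
#   NAME            STATE           VERSION
# * Ubuntu-18.04    Running         2

# If a distro still shows VERSION 1, convert it to WSL 2
wsl.exe --set-version Ubuntu-18.04 2
```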
# WSL2: First contact

Once everything is installed, we can launch the WSL2 terminal from the Start menu, and type "Ubuntu" to search the applications and documents:

![Start Menu Search](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-start-menu-search.png)

Once found, click on the name and it will launch the default Windows console with the Ubuntu bash shell running.

As with any normal Linux distro, you need to create a user and set a password:

![User-Password](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-user-password.png)

## [Optional] Update the `sudoers`

As we are working, normally, on our local computer, it might be nice to update the `sudoers` and set the group `%sudo` to be password-less:

```bash
# Edit the sudoers with the visudo command
sudo visudo

# Change the %sudo group to be password-less
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL

# Press CTRL+X to exit
# Press Y to save
# Press Enter to confirm
```

![visudo](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-visudo.png)

## Update Ubuntu

Before we move to the Docker Desktop settings, let's update our system and ensure we start in the best conditions:

```bash
# Update the repositories and list of the packages available
sudo apt update
# Update the system based on the packages installed > the "-y" will approve the change automatically
sudo apt upgrade -y
```

![apt-update-upgrade](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-apt-update-upgrade.png)

# Docker Desktop: faster with WSL2

Before we move into the settings, let's run a small test; it really shows how cool the new integration with Docker Desktop is:

```bash
# Try to see if the docker cli and daemon are installed
docker version
# Same for kubectl
kubectl version
```

![kubectl-error](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-error.png)

You got an error? Perfect! It's actually good news, so let's now move on to the settings.

## Docker Desktop settings: enable WSL2 integration

First, let's start Docker Desktop for Windows if it's not already running. Open the Windows start menu and type "docker", click on the name to start the application:

![docker-start](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-start.png)

You should now see the Docker icon with the other taskbar icons near the clock:

![docker-taskbar](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-taskbar.png)

Now click on the Docker icon and choose Settings. A new window will appear:

![docker-settings-general](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-general.png)

By default, the WSL2 integration is not active, so click "Enable the experimental WSL 2 based engine" and click "Apply & Restart":

![docker-settings-wsl2](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-wsl2-activated.png)

What this feature did behind the scenes was to create two new distros in WSL2, containing and running all the needed backend sockets, daemons and also the CLI tools (read: the docker and kubectl commands).

However, this first setting is not enough to run the commands inside our distro. If we try, we will get the same error as before.
In order to fix it, and finally be able to use the commands, we need to tell Docker Desktop to "attach" itself to our distro as well:

![docker-resources-wsl](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-resources-wsl-integration.png)

Let's now switch back to our WSL2 terminal and see if we can (finally) launch the commands:

```bash
# Try to see if the docker cli and daemon are installed
docker version
# Same for kubectl
kubectl version
```

![docker-kubectl-success](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-success.png)

> Tip: if nothing happens, restart Docker Desktop and restart the WSL process in Powershell: `Restart-Service LxssManager` and launch a new Ubuntu session

And success! The basic settings are now done and we can move on to the installation of KinD.

# KinD: Kubernetes made easy in a container

Right now, we have Docker installed and configured, and the last test worked fine.

However, if we look carefully at the `kubectl` command, it found the "Client Version" (1.15.5), but it didn't find any server.

This is normal, as we didn't enable the Docker Kubernetes cluster. So let's install KinD and create our first cluster.

And as sources are always important to mention, we will follow (partially) the how-to on the [official KinD website](https://kind.sigs.k8s.io/docs/user/quick-start/):

```bash
# Download the latest version of KinD
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
# Make the binary executable
chmod +x ./kind
# Move the binary to your executable path
sudo mv ./kind /usr/local/bin/
```

![kind-install](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install.png)

## KinD: the first cluster

We are ready to create our first cluster:

```bash
# Check if the KUBECONFIG is not set
echo $KUBECONFIG
# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube
# Create the cluster and give it a name (optional)
kind create cluster --name wslkind
# Check if the .kube has been created and populated with files
ls $HOME/.kube
```

![kind-cluster-create](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create.png)

> Tip: as you can see, the Terminal was changed so the nice icons are all displayed

The cluster has been successfully created, and because we are using Docker Desktop, the network is all set for us to use "as is".

So we can open the `Kubernetes master` URL in our Windows browser:

![kind-browser-k8s-master](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-k8s-master.png)

And this is the real strength of Docker Desktop for Windows with the WSL2 backend. Docker really did an amazing integration.
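If `kubectl` ever ends up pointing at another cluster, you can target the KinD cluster explicitly. A minimal sketch, assuming the cluster name `wslkind` used above (KinD prefixes its kubeconfig contexts with `kind-`):

```bash
# List the clusters KinD currently manages
kind get clusters
# Point kubectl explicitly at the KinD cluster and verify the control plane answers
kubectl cluster-info --context kind-wslkind
```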
## KinD: counting 1 - 2 - 3

Our first cluster was created and it's the "normal" one-node cluster:

```bash
# Check how many nodes it created
kubectl get nodes
# Check the services for the whole cluster
kubectl get all --all-namespaces
```

![kind-list-nodes-services](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-nodes-services.png)

While this will be enough for most people, let's leverage one of the coolest features: multi-node clustering:

```bash
# Delete the existing cluster
kind delete cluster --name wslkind
# Create a config file for a 3 nodes cluster
cat << EOF > kind-3nodes.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
# Create a new cluster with the config file
kind create cluster --name wslkindmultinodes --config ./kind-3nodes.yaml
# Check how many nodes it created
kubectl get nodes
```

![kind-cluster-create-multinodes](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create-multinodes.png)

> Tip: depending on how fast we run the "get nodes" command, it can be that not all the nodes are ready yet; wait a few seconds and run it again, and everything should be ready

And that's it, we have created a three-node cluster, and if we look at the services one more time, we will see several that now have three replicas:

```bash
# Check the services for the whole cluster
kubectl get all --all-namespaces
```

![wsl2-kind-list-services-multinodes](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-services-multinodes.png)

## KinD: can I see a nice dashboard?

Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.

For that, the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) project was created. The installation and first connection test is quite fast, so let's do it:

```bash
# Install the Dashboard application into our cluster
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
# Check the resources it created based on the new namespace created
kubectl get all -n kubernetes-dashboard
```

![kind-install-dashboard](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install-dashboard.png)

As it created a service with a ClusterIP (read: internal network address), we cannot reach it if we type the URL in our Windows browser:

![kind-browse-dashboard-error](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-error.png)

That's because we need to create a temporary proxy:

```bash
# Start a kubectl proxy
kubectl proxy
# Enter the URL on your browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

![kind-browse-dashboard-success](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-success.png)

Finally, to log in, we can either enter a token, which we didn't create, or use the `kubeconfig` file from our cluster.

If we try to log in with the `kubeconfig`, we will get the error "Internal error (500): Not enough data to create auth info structure". This is due to the lack of credentials in the `kubeconfig` file.

So to avoid ending up with the same error, let's follow the [recommended RBAC approach](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md), as sketched below.
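In short, that guide creates a ServiceAccount, binds it to the `cluster-admin` ClusterRole, and reads the account's token to paste into the Dashboard login page. A minimal sketch of those steps (the `admin-user` name follows the linked guide):

```bash
# Create a ServiceAccount for the Dashboard in its namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Grant it cluster-admin rights via a ClusterRoleBinding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Print the ServiceAccount token to use on the Dashboard login page
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $2}')
```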
Let's open a new WSL2 session:

```bash
# Create a new ServiceAccount
kubectl apply -f - < Tip: as you can see, the Terminal was changed so the nice icons are all displayed

So let's fix the issue by installing the missing package:

```bash
# Install the conntrack package
sudo apt install -y conntrack
```

![minikube-install-conntrack](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install conntrack.png)

Let's try to launch it again:

```bash
# Create a minikube one node cluster
minikube start --driver=none
# We got a permissions error > try again with sudo
sudo minikube start --driver=none
```

![minikube-start-error-systemd](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error-systemd.png)

Ok, this error could be problematic ... in the past. Luckily for us, there's a solution.

## Minikube: enabling SystemD

In order to enable SystemD on WSL2, we will apply the [scripts](https://forum.snapcraft.io/t/running-snaps-on-wsl2-insiders-only-for-now/13033) from [Daniel Llewellyn](https://twitter.com/diddledan).

I invite you to read the full blog post to see how he came to the solution, and the various iterations he made to fix several issues.

So, in a nutshell, here are the commands:

```bash
# Install the needed packages
sudo apt install -yqq daemonize dbus-user-session fontconfig
```

![minikube-systemd-packages](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-packages.png)

```bash
# Create the start-systemd-namespace script
sudo vi /usr/sbin/start-systemd-namespace
#!/bin/bash

SYSTEMD_PID=$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')
if [ -z "$SYSTEMD_PID" ] || [ "$SYSTEMD_PID" != "1" ]; then
    export PRE_NAMESPACE_PATH="$PATH"
    (set -o posix; set) | \
        grep -v "^BASH" | \
        grep -v "^DIRSTACK=" | \
        grep -v "^EUID=" | \
        grep -v "^GROUPS=" | \
        grep -v "^HOME=" | \
        grep -v "^HOSTNAME=" | \
        grep -v "^HOSTTYPE=" | \
        grep -v "^IFS='.*"$'\n'"'" | \
        grep -v "^LANG=" | \
        grep -v "^LOGNAME=" | \
        grep -v "^MACHTYPE=" | \
        grep -v "^NAME=" | \
        grep -v "^OPTERR=" | \
        grep -v "^OPTIND=" | \
        grep -v "^OSTYPE=" | \
        grep -v "^PIPESTATUS=" | \
        grep -v "^POSIXLY_CORRECT=" | \
        grep -v "^PPID=" | \
        grep -v "^PS1=" | \
        grep -v "^PS4=" | \
        grep -v "^SHELL=" | \
        grep -v "^SHELLOPTS=" | \
        grep -v "^SHLVL=" | \
        grep -v "^SYSTEMD_PID=" | \
        grep -v "^UID=" | \
        grep -v "^USER=" | \
        grep -v "^_=" | \
    cat - > "$HOME/.systemd-env"
    echo "PATH='$PATH'" >> "$HOME/.systemd-env"
    exec sudo /usr/sbin/enter-systemd-namespace "$BASH_EXECUTION_STRING"
fi
if [ -n "$PRE_NAMESPACE_PATH" ]; then
    export PATH="$PRE_NAMESPACE_PATH"
fi
```

```bash
# Create the enter-systemd-namespace
sudo vi /usr/sbin/enter-systemd-namespace
#!/bin/bash

if [ "$UID" != 0 ]; then
    echo "You need to run $0 through sudo"
    exit 1
fi

SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
if [ -z "$SYSTEMD_PID" ]; then
    /usr/sbin/daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
    while [ -z "$SYSTEMD_PID" ]; do
        SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
    done
fi

if [ -n "$SYSTEMD_PID" ] && [ "$SYSTEMD_PID" != "1" ]; then
    if [ -n "$1" ] && [ "$1" != "bash --login" ] && [ "$1" != "/bin/bash --login" ]; then
        exec \
            /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /usr/bin/sudo -H -u "$SUDO_USER" \
            /bin/bash -c 'set -a; source "$HOME/.systemd-env"; set +a; exec bash -c '"$(printf "%q" "$@")"
    else
        exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /bin/login -p -f "$SUDO_USER" \
            $(/bin/cat "$HOME/.systemd-env" | grep -v "^PATH=")
    fi
    echo "Existential crisis"
fi
```

```bash
# Edit the permissions of the enter-systemd-namespace script
sudo chmod +x /usr/sbin/enter-systemd-namespace
# Edit the bash.bashrc file
sudo sed -i 2a"# Start or enter a PID namespace in WSL2\nsource /usr/sbin/start-systemd-namespace\n" /etc/bash.bashrc
```

![minikube-systemd-files](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-files.png)

Finally, exit and launch a new session. You **do not** need to stop WSL2, a new session is enough:

![minikube-systemd-enabled](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-enabled.png)

## Minikube: the first cluster

We are ready to create our first cluster:

```bash
# Check if the KUBECONFIG is not set
echo $KUBECONFIG
# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube
# Check if the .minikube directory is created > if yes, delete it
ls $HOME/.minikube
# Create the cluster with sudo
sudo minikube start --driver=none
```

In order to be able to use `kubectl` with our user, and not `sudo`, Minikube recommends running the `chown` command:

```bash
# Change the owner of the .kube and .minikube directories
sudo chown -R $USER $HOME/.kube $HOME/.minikube
# Check the access and if the cluster is running
kubectl cluster-info
# Check the resources created
kubectl get all --all-namespaces
```

![minikube-start-fixed](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-fixed.png)

The cluster has been successfully created, and Minikube used the WSL2 IP, which is great for several reasons, and one of them is that we can open the `Kubernetes master` URL in our Windows browser:

![minikube-browse-k8s-master](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master.png)

And here is the real strength of the WSL2 integration: once the port `8443` is open on the WSL2 distro, it is actually forwarded to Windows, so instead of having to remember the IP address, we can also reach the `Kubernetes master` URL via `localhost`:

![minikube-browse-k8s-master-localhost](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master-localhost.png)

## Minikube: can I see a nice dashboard?

Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.

For that, Minikube embeds the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard). Thanks to it, running and accessing the Dashboard is very simple:

```bash
# Enable the Dashboard service
sudo minikube dashboard
# Access the Dashboard from a browser on Windows side
```

![minikube-browse-dashboard](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard.png)

The command also creates a proxy, which means that once we end the command by pressing `CTRL+C`, the Dashboard will no longer be accessible.
Still, if we look at the namespace `kubernetes-dashboard`, we will see that the service is still created:

```bash
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard
```

![minikube-dashboard-get-all](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-get-all.png)

Let's edit the service and change its type to `LoadBalancer`:

```bash
# Edit the Dashboard service
kubectl edit service/kubernetes-dashboard --namespace kubernetes-dashboard
# Go to the very end and remove the last 2 lines
status:
  loadBalancer: {}
# Change the type from ClusterIP to LoadBalancer
  type: LoadBalancer
# Save the file
```

![minikube-dashboard-type-loadbalancer](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-type-loadbalancer.png)

Check the Dashboard service again and let's access the Dashboard via the LoadBalancer:

```bash
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard
# Access the Dashboard from a browser on Windows side with the URL: localhost:
```

![minikube-browse-dashboard-loadbalancer](/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard-loadbalancer.png)

# Conclusion

It's clear that we are far from done, as we could have some LoadBalancing implemented and/or other services (storage, ingress, registry, etc...).

Concerning Minikube on WSL2, as it needed SystemD to be enabled, we can consider it an intermediate level to implement.

So with two solutions, what could be the "best for you"? Both bring their own advantages and inconveniences, so here is an overview, solely from our point of view:

| Criteria             | KinD                          | Minikube |
| -------------------- | ----------------------------- | -------- |
| Installation on WSL2 | Very Easy                     | Medium   |
| Multi-node           | Yes                           | No       |
| Plugins              | Manual install                | Yes      |
| Persistence          | Yes, however not designed for | Yes      |
| Alternatives         | K3d                           | Microk8s |

We hope you got a real taste of the integration between the different components: WSL2 - Docker Desktop - KinD/Minikube. And that it gave you some ideas or, even better, some answers to your Kubernetes workflows with KinD and/or Minikube on Windows and WSL2.

See you soon for other adventures in the Kubernetes ocean.

[Nuno](https://twitter.com/nunixtech) & [Ihor](https://twitter.com/idvoretskyi)

diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index 62bb4da2f4..32274f5a3b 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -308,7 +308,7 @@ Node objects track information about the Node's resource capacity (for example:
of memory available, and the number of CPUs). Nodes that [self register](#self-registration-of-nodes)
report their capacity during registration. If you [manually](#manual-node-administration) add a Node, then
-you need to set the node's capacity informaton when you add it.
+you need to set the node's capacity information when you add it.

The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
there are enough resources for all the Pods on a Node.
The scheduler checks that the sum

diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md
index d07bf762bb..b7b7b829db 100644
--- a/content/en/docs/concepts/configuration/overview.md
+++ b/content/en/docs/concepts/configuration/overview.md
@@ -77,7 +77,7 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the

- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.

-- `imagePullPolicy: Always`: the image is pulled every time the pod is started.
+- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.

- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied.

diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 0fa5b13efa..c7b123cacc 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -580,7 +580,7 @@ spec:
   - name: foo
     secret:
       secretName: mysecret
-      defaultMode: 256
+      defaultMode: 0400
 ```

Then, the secret will be mounted on `/etc/foo` and all the files created by the
@@ -590,6 +590,38 @@
Note that the JSON spec doesn't support octal notation, so use the value 256 for
0400 permissions. If you use YAML instead of JSON for the Pod, you can use octal
notation to specify permissions in a more natural way.

+Note that if you `kubectl exec` into the Pod, you need to follow the symlink to find
+the expected file mode. For example:
+
+Check the secret's file mode on the Pod.
+```
+kubectl exec mypod -it sh
+
+cd /etc/foo
+ls -l
+```
+
+The output is similar to this:
+```
+total 0
+lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password
+lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username
+```
+
+Follow the symlink to find the correct file mode.
+
+```
+cd /etc/foo/..data
+ls -l
+```
+
+The output is similar to this:
+```
+total 8
+-r-------- 1 root root 12 May 18 00:18 password
+-r-------- 1 root root 5 May 18 00:18 username
+```
+
You can also use mapping, as in the previous example, and specify different
permissions for different files like this:

@@ -612,12 +644,12 @@ spec:
   items:
   - key: username
     path: my-group/my-username
-    mode: 511
+    mode: 0777
 ```

In this case, the file resulting in `/etc/foo/my-group/my-username` will have
-permission value of `0777`. Owing to JSON limitations, you must specify the mode
-in decimal notation.
+permission value of `0777`. If you use JSON, owing to JSON limitations, you
+must specify the mode in decimal notation, `511`.

Note that this permission value might be displayed in decimal notation if you read it later.
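To make the octal/decimal distinction concrete, here is a minimal sketch of the same secret volume with the mode written in octal (the pod and container names are illustrative; the secret name `mysecret` follows the docs' examples):

```yaml
# YAML accepts octal notation directly
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      defaultMode: 0400   # in a JSON manifest this must be written as the decimal value 256
```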
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index a9b76cdd51..2ff4ae2377 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -83,11 +83,13 @@ For example: #### Support traffic shaping +**Experimental Feature** + The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use the official [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) plugin offered by the CNI plugin team or use your own plugin with bandwidth control functionality. -If you want to enable traffic shaping support, you must add a `bandwidth` plugin to your CNI configuration file -(default `/etc/cni/net.d`). +If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file +(default `/etc/cni/net.d`) and ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`). ```json { diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index c4415e8a1b..8d6e907afd 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -51,11 +51,11 @@ You can list the current namespaces in a cluster using: kubectl get namespace ``` ``` -NAME STATUS AGE -default Active 1d -kube-system Active 1d -kube-public Active 1d -kube-node-lease Active 1d +NAME STATUS AGE +default Active 1d +kube-node-lease Active 1d +kube-public Active 1d +kube-system Active 1d ``` Kubernetes starts with three initial namespaces: diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index f482c5efb2..52aa593e6f 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -374,6 +374,8 @@ several security mechanisms. {{< codenew file="policy/restricted-psp.yaml" >}} +See [Pod Security Standards](/docs/concepts/security/pod-security-standards/#policy-instantiation) for more examples. + ## Policy Reference ### Privileged @@ -633,6 +635,8 @@ Refer to the [Sysctl documentation]( {{% capture whatsnext %}} +See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations. + Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. {{% /capture %}} diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index 0812b2ce05..c803676d3a 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -77,21 +77,10 @@ A toleration "matches" a taint if the keys are the same and the effects are the There are two special cases: -* An empty `key` with operator `Exists` matches all keys, values and effects which means this +An empty `key` with operator `Exists` matches all keys, values and effects which means this will tolerate everything. 
-```yaml
-tolerations:
-- operator: "Exists"
-```
-
-* An empty `effect` matches all effects with key `key`.
-
-```yaml
-tolerations:
-- key: "key"
-  operator: "Exists"
-```
+An empty `effect` matches all effects with key `key`.

{{< /note >}}

diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md
new file mode 100644
index 0000000000..1adf042c91
--- /dev/null
+++ b/content/en/docs/concepts/security/pod-security-standards.md
@@ -0,0 +1,300 @@
---
reviewers:
- tallclair
title: Pod Security Standards
content_template: templates/concept
weight: 10
---

{{% capture overview %}}

Security settings for Pods are typically applied by using [security contexts](/docs/tasks/configure-pod-container/security-context/). Security Contexts allow for the definition of privilege and access controls on a per-Pod basis.

The enforcement and policy-based definition of cluster requirements for security contexts has previously been achieved using [Pod Security Policy](/docs/concepts/policy/pod-security-policy/). A _Pod Security Policy_ is a cluster-level resource that controls security-sensitive aspects of the Pod specification.

However, numerous means of policy enforcement have arisen that augment or replace the use of PodSecurityPolicy. The intent of this page is to detail recommended Pod security profiles, decoupled from any specific instantiation.

{{% /capture %}}

{{% capture body %}}

## Policy Types

There is an immediate need for base policy definitions to broadly cover the security spectrum. These should range from highly restricted to highly flexible:

- **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations.
- **_Baseline/Default_** - Minimally restrictive policy while preventing known privilege escalations. Allows the default (minimally specified) Pod configuration.
- **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices.

## Policies

### Privileged

The Privileged policy is purposely open, and entirely unrestricted. This type of policy is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.

The privileged policy is defined by an absence of restrictions. For blacklist-oriented enforcement mechanisms (such as Gatekeeper), the privileged profile may be an absence of applied constraints rather than an instantiated policy. In contrast, for a whitelist-oriented mechanism (such as Pod Security Policy) the privileged policy should enable all controls (disable all restrictions).

### Baseline/Default

The Baseline/Default policy is aimed at ease of adoption for common containerized workloads while preventing known privilege escalations. This policy is targeted at application operators and developers of non-critical applications. The following listed controls should be enforced/disallowed:

<table>
	<caption style="display:none">Baseline policy specification</caption>
	<tbody>
		<tr>
			<td><strong>Control</strong></td>
			<td><strong>Policy</strong></td>
		</tr>
		<tr>
			<td>Host Namespaces</td>
			<td>
				Sharing the host namespaces must be disallowed.<br>
				<br><b>Restricted Fields:</b><br>
				spec.hostNetwork<br>
				spec.hostPID<br>
				spec.hostIPC<br>
				<br><b>Allowed Values:</b> false<br>
			</td>
		</tr>
		<tr>
			<td>Privileged Containers</td>
			<td>
				Privileged Pods disable most security mechanisms and must be disallowed.<br>
				<br><b>Restricted Fields:</b><br>
				spec.containers[*].securityContext.privileged<br>
				spec.initContainers[*].securityContext.privileged<br>
				<br><b>Allowed Values:</b> false, undefined/nil<br>
			</td>
		</tr>
		<tr>
			<td>Capabilities</td>
			<td>
				Adding additional capabilities beyond the default set must be disallowed.<br>
				<br><b>Restricted Fields:</b><br>
				spec.containers[*].securityContext.capabilities.add<br>
				spec.initContainers[*].securityContext.capabilities.add<br>
				<br><b>Allowed Values:</b> empty (optionally whitelisted defaults)<br>
			</td>
		</tr>
		<tr>
			<td>HostPath Volumes</td>
			<td>
				HostPath volumes must be forbidden.<br>
				<br><b>Restricted Fields:</b><br>
				spec.volumes[*].hostPath<br>
				<br><b>Allowed Values:</b> undefined/nil<br>
			</td>
		</tr>
		<tr>
			<td>Host Ports</td>
			<td>
				HostPorts should be disallowed, or at minimum restricted to a whitelist.<br>
				<br><b>Restricted Fields:</b><br>
				spec.containers[*].ports[*].hostPort<br>
				spec.initContainers[*].ports[*].hostPort<br>
				<br><b>Allowed Values:</b> 0, undefined, (whitelisted)<br>
			</td>
		</tr>
		<tr>
			<td>AppArmor (optional)</td>
			<td>
				On supported hosts, the `runtime/default` AppArmor profile is applied by default. The default policy should prevent overriding or disabling the policy, or restrict overrides to a whitelisted set of profiles.<br>
				<br><b>Restricted Fields:</b><br>
				metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
				<br><b>Allowed Values:</b> runtime/default, undefined<br>
			</td>
		</tr>
		<tr>
			<td>SELinux (optional)</td>
			<td>
				Setting custom SELinux options should be disallowed.<br>
				<br><b>Restricted Fields:</b><br>
				spec.securityContext.seLinuxOptions<br>
				spec.containers[*].securityContext.seLinuxOptions<br>
				spec.initContainers[*].securityContext.seLinuxOptions<br>
				<br><b>Allowed Values:</b> undefined/nil<br>
			</td>
		</tr>
	</tbody>
</table>
+ +### Restricted + +The Restricted policy is aimed at enforcing current Pod hardening best practices, at the expense of +some compatibility. It is targeted at operators and developers of security-critical applications, as +well as lower-trust users.The following listed controls should be enforced/disallowed: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+**Restricted policy specification**
+
+Everything from the default profile, plus:
+
+| Control | Policy |
+| ------- | ------ |
+| Volume Types | In addition to restricting HostPath volumes, the restricted profile limits usage of non-core volume types to those defined through PersistentVolumes.<br><br>**Restricted Fields:**<br>`spec.volumes[*].hostPath`<br>`spec.volumes[*].gcePersistentDisk`<br>`spec.volumes[*].awsElasticBlockStore`<br>`spec.volumes[*].gitRepo`<br>`spec.volumes[*].nfs`<br>`spec.volumes[*].iscsi`<br>`spec.volumes[*].glusterfs`<br>`spec.volumes[*].rbd`<br>`spec.volumes[*].flexVolume`<br>`spec.volumes[*].cinder`<br>`spec.volumes[*].cephFS`<br>`spec.volumes[*].flocker`<br>`spec.volumes[*].fc`<br>`spec.volumes[*].azureFile`<br>`spec.volumes[*].vsphereVolume`<br>`spec.volumes[*].quobyte`<br>`spec.volumes[*].azureDisk`<br>`spec.volumes[*].portworxVolume`<br>`spec.volumes[*].scaleIO`<br>`spec.volumes[*].storageos`<br>`spec.volumes[*].csi`<br><br>**Allowed Values:** undefined/nil |
+| Privilege Escalation | Privilege escalation to root should not be allowed.<br><br>**Restricted Fields:**<br>`spec.containers[*].securityContext.privileged`<br>`spec.initContainers[*].securityContext.privileged`<br><br>**Allowed Values:** false, undefined/nil |
+| Running as Non-root | Containers must be required to run as non-root users.<br><br>**Restricted Fields:**<br>`spec.securityContext.runAsNonRoot`<br>`spec.containers[*].securityContext.runAsNonRoot`<br>`spec.initContainers[*].securityContext.runAsNonRoot`<br><br>**Allowed Values:** true |
+| Non-root groups (optional) | Containers should be forbidden from running with a root primary or supplementary GID.<br><br>**Restricted Fields:**<br>`spec.securityContext.runAsGroup`<br>`spec.securityContext.supplementalGroups[*]`<br>`spec.securityContext.fsGroup`<br>`spec.containers[*].securityContext.runAsGroup`<br>`spec.containers[*].securityContext.supplementalGroups[*]`<br>`spec.containers[*].securityContext.fsGroup`<br>`spec.initContainers[*].securityContext.runAsGroup`<br>`spec.initContainers[*].securityContext.supplementalGroups[*]`<br>`spec.initContainers[*].securityContext.fsGroup`<br><br>**Allowed Values:**<br>non-zero<br>undefined / nil (except for `*.runAsGroup`) |
+| Seccomp | The runtime/default seccomp profile must be required, or allow additional whitelisted values.<br><br>**Restricted Fields:**<br>`metadata.annotations['seccomp.security.alpha.kubernetes.io/pod']`<br>`metadata.annotations['container.seccomp.security.alpha.kubernetes.io/*']`<br><br>**Allowed Values:**<br>runtime/default<br>undefined (container annotation) |
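As a complement to the table, here is a minimal sketch of a Pod that satisfies the restricted profile. The names, image, claim, and GID values are illustrative assumptions, not prescribed values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok                                       # hypothetical name
  annotations:
    # The required seccomp profile, per the Seccomp control above.
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  securityContext:
    runAsNonRoot: true           # required by "Running as Non-root"
    runAsGroup: 10001            # non-zero GID; the value is an assumption
    fsGroup: 10001               # non-zero supplementary GID
  containers:
  - name: app
    image: example.com/app:1.0   # assumed image that runs as a non-root user
    securityContext:
      runAsNonRoot: true
      privileged: false          # no privilege escalation to root
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:       # non-core storage must come through PersistentVolumes
      claimName: app-data        # hypothetical claim name
```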
+
+## Policy Instantiation
+
+Decoupling policy definition from policy instantiation allows for a common understanding and
+consistent language of policies across clusters, independent of the underlying enforcement
+mechanism.
+
+As mechanisms mature, they will be defined below on a per-policy basis. The methods of enforcement
+of individual policies are not defined here.
+
+[**PodSecurityPolicy**](/docs/concepts/policy/pod-security-policy/)
+
+- [Privileged](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/privileged-psp.yaml)
+- [Baseline](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/baseline-psp.yaml)
+- [Restricted](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml)
+
+## FAQ
+
+### Why isn't there a profile between privileged and default?
+
+The three profiles defined here have a clear linear progression from most secure (restricted) to least
+secure (privileged), and cover a broad set of workloads. Privileges required above the baseline
+policy are typically very application specific, so we do not offer a standard profile in this
+niche. This is not to say that the privileged profile should always be used in this case, but that
+policies in this space need to be defined on a case-by-case basis.
+
+SIG Auth may reconsider this position in the future, should a clear need for other profiles arise.
+
+### What's the difference between a security policy and a security context?
+
+[Security Contexts](/docs/tasks/configure-pod-container/security-context/) configure Pods and
+Containers at runtime. Security contexts are defined as part of the Pod and container specifications
+in the Pod manifest, and represent parameters to the container runtime.
+
+Security policies are control plane mechanisms to enforce specific settings in the Security Context,
+as well as other parameters outside the Security Context. As of February 2020, the current native
+solution for enforcing these security policies is [Pod Security
+Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
+policy on Pods across a cluster. Other alternatives for enforcing security policy are being
+developed in the Kubernetes ecosystem, such as [OPA
+Gatekeeper](https://github.com/open-policy-agent/gatekeeper).
+
+### What profiles should I apply to my Windows Pods?
+
+Windows in Kubernetes has some limitations and differentiators from standard Linux-based
+workloads. Specifically, the Pod SecurityContext fields [have no effect on
+Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As
+such, no standardized Pod Security profiles currently exist.
+
+### What about sandboxed Pods?
+
+There is not currently an API standard that controls whether a Pod is considered sandboxed or
+not. Sandboxed Pods may be identified by the use of a sandboxed runtime (such as gVisor or Kata
+Containers), but there is no standard definition of what a sandboxed runtime is.
+
+The protections necessary for sandboxed workloads can differ from others. For example, the need to
+restrict privileged permissions is lessened when the workload is isolated from the underlying
+kernel. This allows for workloads requiring heightened permissions to still be isolated.
+
+Additionally, the protection of sandboxed workloads is highly dependent on the method of
+sandboxing.
As such, no single policy is recommended for all sandboxed workloads.
+
+{{% /capture %}}
diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md
index bc17b74d15..50c012ffc6 100644
--- a/content/en/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/en/docs/concepts/services-networking/connect-applications-service.md
@@ -19,7 +19,7 @@ By default, Docker uses host-private networking, so containers can talk to other
 Coordinating port allocations across multiple developers or teams that provide containers is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
 
-This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes).
+This guide uses a simple nginx server to demonstrate proof of concept.
 
 {{% /capture %}}
 
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index b479d8c96a..9cba184168 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -254,7 +254,7 @@ options ndots:5
 
 ### Feature availability
 
-The availability of Pod DNS Config and DNS Policy "`None`"" is shown as below.
+The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
 
 | k8s version | Feature support |
 | :---------: |:-----------:|
diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md
index 92cd35953b..a5d0df82ce 100644
--- a/content/en/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -125,7 +125,7 @@ That introduces the following issues:
 scheduler instead of the DaemonSet controller, by adding the `NodeAffinity` term
 to the DaemonSet pods, instead of the `.spec.nodeName` term. The default
 scheduler is then used to bind the pod to the target host. If node affinity of
-the DaemonSet pod already exists, it is replaced. The DaemonSet controller only
+the DaemonSet pod already exists, it is replaced (the original node affinity was taken into account before selecting the target host). The DaemonSet controller only
 performs these operations when creating or modifying DaemonSet pods, and no
 changes are made to the `spec.template` of the DaemonSet.
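For reference, the `NodeAffinity` term described in the DaemonSet hunk above is equivalent in shape to the sketch below; the node name is illustrative, as the controller fills in the real target host for each pod:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - target-host-name   # illustrative; set per pod by the controller
```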
diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 8848774103..8c03d14268 100644 --- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -472,7 +472,7 @@ starts a Spark master controller (see [spark example](https://github.com/kuberne driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job -object, but complete control over what Pods are created and how work is assigned to them. +object, but maintains complete control over what Pods are created and how work is assigned to them. ## Cron Jobs {#cron-jobs} diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index eab71e4560..7034f9a18e 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -1079,37 +1079,37 @@ In order from most secure to least secure, the approaches are: 2. Grant a role to the "default" service account in a namespace - If an application does not specify a `serviceAccountName`, it uses the "default" service account. +If an application does not specify a `serviceAccountName`, it uses the "default" service account. - {{< note >}} - Permissions given to the "default" service account are available to any pod - in the namespace that does not specify a `serviceAccountName`. - {{< /note >}} +{{< note >}} +Permissions given to the "default" service account are available to any pod +in the namespace that does not specify a `serviceAccountName`. +{{< /note >}} - For example, grant read-only permission within "my-namespace" to the "default" service account: +For example, grant read-only permission within "my-namespace" to the "default" service account: - ```shell - kubectl create rolebinding default-view \ - --clusterrole=view \ - --serviceaccount=my-namespace:default \ - --namespace=my-namespace - ``` +```shell +kubectl create rolebinding default-view \ + --clusterrole=view \ + --serviceaccount=my-namespace:default \ + --namespace=my-namespace +``` - Many [add-ons](/docs/concepts/cluster-administration/addons/) run as the - "default" service account in the `kube-system` namespace. - To allow those add-ons to run with super-user access, grant cluster-admin - permissions to the "default" service account in the `kube-system` namespace. +Many [add-ons](/docs/concepts/cluster-administration/addons/) run as the +"default" service account in the `kube-system` namespace. +To allow those add-ons to run with super-user access, grant cluster-admin +permissions to the "default" service account in the `kube-system` namespace. - {{< caution >}} - Enabling this means the `kube-system` namespace contains Secrets - that grant super-user access to your cluster's API. - {{< /caution >}} +{{< caution >}} +Enabling this means the `kube-system` namespace contains Secrets +that grant super-user access to your cluster's API. +{{< /caution >}} - ```shell - kubectl create clusterrolebinding add-on-cluster-admin \ - --clusterrole=cluster-admin \ - --serviceaccount=kube-system:default - ``` +```shell +kubectl create clusterrolebinding add-on-cluster-admin \ + --clusterrole=cluster-admin \ + --serviceaccount=kube-system:default +``` 3. 
Grant a role to all service accounts in a namespace

diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
index 0069334214..88132f5218 100644
--- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
+++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
@@ -119,7 +119,7 @@ track=stable
 
 - **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/tasks/configure-pod-container/limit-range/) for the container. By default, Pods run with unbounded CPU and memory limits.
 
-- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.
+- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/tasks/inject-data-application/define-command-argument-container/). You can use the command options and arguments to override the default.
 
 - **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.
 
diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md
index afeae088aa..de6875b582 100644
--- a/content/en/docs/tutorials/hello-minikube.md
+++ b/content/en/docs/tutorials/hello-minikube.md
@@ -47,7 +47,9 @@ This tutorial provides a container image that uses NGINX to echo back all the re
 
    {{< kat-button >}}
 
-   {{< note >}}If you installed Minikube locally, run `minikube start`.{{< /note >}}
+{{< note >}}
+  If you installed Minikube locally, run `minikube start`.
+{{< /note >}}
 
 2. Open the Kubernetes dashboard in a browser:
 
@@ -113,7 +115,9 @@ Pod runs a Container based on the provided Docker image.
    kubectl config view
    ```
 
-   {{< note >}}For more information about `kubectl`commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}
+{{< note >}}
+  For more information about `kubectl` commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).
+{{< /note >}}
 
 ## Create a Service
 
diff --git a/content/en/examples/policy/baseline-psp.yaml b/content/en/examples/policy/baseline-psp.yaml
new file mode 100644
index 0000000000..36e440588b
--- /dev/null
+++ b/content/en/examples/policy/baseline-psp.yaml
@@ -0,0 +1,74 @@
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: baseline
+  annotations:
+    # Optional: Allow the default AppArmor profile, requires setting the default.
+    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
+    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
+    # Optional: Allow the default seccomp profile, requires setting the default.
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default,unconfined' + seccomp.security.alpha.kubernetes.io/defaultProfileName: 'unconfined' +spec: + privileged: false + # The moby default capability set, defined here: + # https://github.com/moby/moby/blob/0a5cec2833f82a6ad797d70acbf9cbbaf8956017/oci/caps/defaults.go#L6-L19 + allowedCapabilities: + - 'CHOWN' + - 'DAC_OVERRIDE' + - 'FSETID' + - 'FOWNER' + - 'MKNOD' + - 'NET_RAW' + - 'SETGID' + - 'SETUID' + - 'SETFCAP' + - 'SETPCAP' + - 'NET_BIND_SERVICE' + - 'SYS_CHROOT' + - 'KILL' + - 'AUDIT_WRITE' + # Allow all volume types except hostpath + volumes: + # 'core' volume types + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # Assume that persistentVolumes set up by the cluster admin are safe to use. + - 'persistentVolumeClaim' + # Allow all other non-hostpath volume types. + - 'awsElasticBlockStore' + - 'azureDisk' + - 'azureFile' + - 'cephFS' + - 'cinder' + - 'csi' + - 'fc' + - 'flexVolume' + - 'flocker' + - 'gcePersistentDisk' + - 'gitRepo' + - 'glusterfs' + - 'iscsi' + - 'nfs' + - 'photonPersistentDisk' + - 'portworxVolume' + - 'quobyte' + - 'rbd' + - 'scaleIO' + - 'storageos' + - 'vsphereVolume' + hostNetwork: false + hostIPC: false + hostPID: false + readOnlyRootFilesystem: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'RunAsAny' + fsGroup: + rule: 'RunAsAny' diff --git a/content/en/training/_index.html b/content/en/training/_index.html index f2f28f9713..6dc509ff5b 100644 --- a/content/en/training/_index.html +++ b/content/en/training/_index.html @@ -97,7 +97,7 @@ class: training

The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.


-          Go to Certification
+          Go to Certification

diff --git a/content/es/docs/home/_index.md b/content/es/docs/home/_index.md
index 0728e74ba7..56bb4ca94e 100644
--- a/content/es/docs/home/_index.md
+++ b/content/es/docs/home/_index.md
@@ -3,7 +3,7 @@ title: Documentación de Kubernetes
 noedit: true
 cid: docsHome
 layout: docsportal_home
-class: gridPage
+class: gridPage gridPageHome
 linkTitle: "Home"
 main_menu: true
 weight: 10
diff --git a/content/es/docs/tutorials/_index.md b/content/es/docs/tutorials/_index.md
index e7638f1344..7fed31f2f6 100644
--- a/content/es/docs/tutorials/_index.md
+++ b/content/es/docs/tutorials/_index.md
@@ -27,7 +27,7 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a
 
 * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
 
-* [Hello Minikube](/docs/tutorials/hello-minikube/)
+* [Hello Minikube](/es/docs/tutorials/hello-minikube/)
 
 ## Configuración
 
diff --git a/content/es/docs/tutorials/hello-minikube.md b/content/es/docs/tutorials/hello-minikube.md
new file mode 100644
index 0000000000..144256637b
--- /dev/null
+++ b/content/es/docs/tutorials/hello-minikube.md
@@ -0,0 +1,275 @@
+---
+title: Hello Minikube
+content_template: templates/tutorial
+weight: 5
+menu:
+  main:
+    title: "Get Started"
+    weight: 10
+    post: >

¿Listo para poner manos a la obra? Construye un clúster sencillo de Kubernetes que ejecuta un Hola Mundo para Node.js

+card: + name: tutorials + weight: 10 +--- + +{{% capture overview %}} + +Este tutorial muestra como ejecutar una aplicación Node.js Hola Mundo en Kubernetes utilizando +[Minikube](/docs/setup/learning-environment/minikube) y Katacoda. +Katacoda provee un ambiente de Kubernetes desde el navegador. + +{{< note >}} +También se puede seguir este tutorial si se ha instalado [Minikube localmente](/docs/tasks/tools/install-minikube/). +{{< /note >}} + +{{% /capture %}} + +{{% capture objectives %}} + +* Desplegar una aplicación Hola Mundo en Minikube. +* Ejecutar la aplicación. +* Ver los logs de la aplicación. + +{{% /capture %}} + +{{% capture prerequisites %}} + +Este tutorial provee una imagen de contenedor construida desde los siguientes archivos: + +{{< codenew language="js" file="minikube/server.js" >}} + +{{< codenew language="conf" file="minikube/Dockerfile" >}} + +Para más información sobre el comando `docker build`, lea la [documentación de Docker ](https://docs.docker.com/engine/reference/commandline/build/). + +{{% /capture %}} + +{{% capture lessoncontent %}} + +## Crear un clúster Minikube + +1. Haz clic en **Launch Terminal** + + {{< kat-button >}} + + {{< note >}}Si se tiene instalado Minikube local, ejecutar `minikube start`.{{< /note >}} + +2. Abrir el tablero de Kubernetes dashboard en un navegador: + + ```shell + minikube dashboard + ``` + +3. Solo en el ambiente de Katacoda: En la parte superior de la terminal, haz clic en el símbolo + y luego clic en **Select port to view on Host 1**. + +4. Solo en el ambiente de Katacoda: Escribir `30000`, y hacer clic en **Display Port**. + +## Crear un Deployment + +Un [*Pod*](/docs/concepts/workloads/pods/pod/) en Kubernetes es un grupo de uno o más contenedores, +asociados con propósitos de administración y redes. El Pod en este tutorial tiene solo un contenedor. +Un [*Deployment*](/docs/concepts/workloads/controllers/deployment/) en Kubernetes verifica la salud del Pod y reinicia su contenedor si este es eliminado. Los Deployments son la manera recomendada de manejar la creación y escalación. + +1. Ejecutar el comando `kubectl create` para crear un Deployment que maneje un Pod. El Pod ejecuta un contenedor basado en la imagen proveida por Docker. + + ```shell + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + ``` + +2. Ver el Deployment: + + ```shell + kubectl get deployments + ``` + + El resultado es similar a: + + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + hello-node 1/1 1 1 1m + ``` + +3. Ver el Pod: + + ```shell + kubectl get pods + ``` + + El resultado es similar a: + + ``` + NAME READY STATUS RESTARTS AGE + hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m + ``` + +4. Ver los eventos del clúster: + + ```shell + kubectl get events + ``` + +5. Ver la configuración `kubectl`: + + ```shell + kubectl config view + ``` + + {{< note >}} Para más información sobre el comando `kubectl`, ver [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}} + +## Crear un Service + +Por defecto, el Pod es accedido por su dirección IP interna dentro del clúster de Kubernetes, para hacer que el contenedor `hello-node` sea accesible desde afuera de la red virtual Kubernetes, se debe exponer el Pod como un + [*Service*](/docs/concepts/services-networking/service/) de Kubernetes. + +1. 
Exponer el Pod a la red pública de internet utilizando el comando `kubectl expose`: + + ```shell + kubectl expose deployment hello-node --type=LoadBalancer --port=8080 + ``` + + El flag `--type=LoadBalancer` indica que se quiere exponer el Service fuera del clúster. + +2. Ver el Service creado: + + ```shell + kubectl get services + ``` + + El resultado es similar a: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-node LoadBalancer 10.108.144.78 8080:30369/TCP 21s + kubernetes ClusterIP 10.96.0.1 443/TCP 23m + ``` + + Para los proveedores Cloud que soportan balanceadores de carga, una dirección IP externa será provisionada para acceder al servicio, en Minikube, el tipo `LoadBalancer` permite que el servicio sea accesible a través del comando `minikube service`. + +3. Ejecutar el siguiente comando: + + ```shell + minikube service hello-node + ``` + +4. Solo en el ambiente de Katacoda: Hacer clic sobre el símbolo +, y luego en **Select port to view on Host 1**. + +5. Solo en el ambiente de Katacoda: Anotar el puerto de 5 dígitos ubicado al lado del valor de `8080` en el resultado de servicios. Este número de puerto es generado aleatoriamente y puede ser diferente al indicado en el ejemplo. Escribir el número de puerto en el cuadro de texto y hacer clic en Display Port. Usando el ejemplo anterior, usted escribiría `30369`. + + Esto abre una ventana de navegador que contiene la aplicación y muestra el mensaje "Hello World". + +## Habilitar Extensiones + +Minikube tiene un conjunto de {{< glossary_tooltip text="Extensiones" term_id="addons" >}} que pueden ser habilitados y desahabilitados en el ambiente local de Kubernetes. + +1. Listar las extensiones soportadas actualmente: + + ```shell + minikube addons list + ``` + + El resultado es similar a: + + ``` + addon-manager: enabled + dashboard: enabled + default-storageclass: enabled + efk: disabled + freshpod: disabled + gvisor: disabled + helm-tiller: disabled + ingress: disabled + ingress-dns: disabled + logviewer: disabled + metrics-server: disabled + nvidia-driver-installer: disabled + nvidia-gpu-device-plugin: disabled + registry: disabled + registry-creds: disabled + storage-provisioner: enabled + storage-provisioner-gluster: disabled + ``` + +2. Habilitar una extensión, por ejemplo, `metrics-server`: + + ```shell + minikube addons enable metrics-server + ``` + + El resultado es similar a: + + ``` + metrics-server was successfully enabled + ``` + +3. Ver el Pod y Service creados: + + ```shell + kubectl get pod,svc -n kube-system + ``` + + El resultado es similar a: + + ``` + NAME READY STATUS RESTARTS AGE + pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m + pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m + pod/metrics-server-67fb648c5 1/1 Running 0 26s + pod/etcd-minikube 1/1 Running 0 34m + pod/influxdb-grafana-b29w8 2/2 Running 0 26s + pod/kube-addon-manager-minikube 1/1 Running 0 34m + pod/kube-apiserver-minikube 1/1 Running 0 34m + pod/kube-controller-manager-minikube 1/1 Running 0 34m + pod/kube-proxy-rnlps 1/1 Running 0 34m + pod/kube-scheduler-minikube 1/1 Running 0 34m + pod/storage-provisioner 1/1 Running 0 34m + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/metrics-server ClusterIP 10.96.241.45 80/TCP 26s + service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 34m + service/monitoring-grafana NodePort 10.99.24.54 80:30002/TCP 26s + service/monitoring-influxdb ClusterIP 10.111.169.94 8083/TCP,8086/TCP 26s + ``` + +4. 
Deshabilitar `metrics-server`: + + ```shell + minikube addons disable metrics-server + ``` + + El resultado es similar a: + + ``` + metrics-server was successfully disabled + ``` + +## Limpieza + +Ahora se puede eliminar los recursos creados en el clúster: + +```shell +kubectl delete service hello-node +kubectl delete deployment hello-node +``` + +Opcional, detener la máquina virtual de Minikube: + +```shell +minikube stop +``` + +Opcional, eliminar la máquina virtual de Minikube: + +```shell +minikube delete +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Leer más sobre [Deployments](/docs/concepts/workloads/controllers/deployment/). +* Leer más sobre [Desplegando aplicaciones](/docs/tasks/run-application/run-stateless-application-deployment/). +* Leer más sobre [Services](/docs/concepts/services-networking/service/). + +{{% /capture %}} diff --git a/content/es/examples/minikube/Dockerfile b/content/es/examples/minikube/Dockerfile new file mode 100644 index 0000000000..dd58cb7e75 --- /dev/null +++ b/content/es/examples/minikube/Dockerfile @@ -0,0 +1,4 @@ +FROM node:6.14.2 +EXPOSE 8080 +COPY server.js . +CMD [ "node", "server.js" ] diff --git a/content/es/examples/minikube/server.js b/content/es/examples/minikube/server.js new file mode 100644 index 0000000000..76345a17d8 --- /dev/null +++ b/content/es/examples/minikube/server.js @@ -0,0 +1,9 @@ +var http = require('http'); + +var handleRequest = function(request, response) { + console.log('Received request for URL: ' + request.url); + response.writeHead(200); + response.end('Hello World!'); +}; +var www = http.createServer(handleRequest); +www.listen(8080); diff --git a/content/fr/docs/concepts/containers/container-lifecycle-hooks.md b/content/fr/docs/concepts/containers/container-lifecycle-hooks.md index 3b400dbbfa..65aed32b62 100644 --- a/content/fr/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/fr/docs/concepts/containers/container-lifecycle-hooks.md @@ -100,7 +100,7 @@ Voici un exemple d'affichage d'événements lors de l'exécution de cette comman ``` Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message + FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" @@ -117,7 +117,7 @@ Events: {{% capture whatsnext %}} -* En savoir plus sur l'[Environnement d'un conteneur](/fr/docs/concepts/containers/container-environment-variables/). +* En savoir plus sur l'[Environnement d'un conteneur](/fr/docs/concepts/containers/container-environment/). * Entraînez-vous à [attacher des handlers de conteneurs à des événements de cycle de vie](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). 
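En complément du diff ci-dessus sur les hooks de cycle de vie, et à titre d'illustration seulement (le nom du Pod, l'image et les commandes sont des hypothèses), un Pod déclarant des handlers `postStart` et `preStop` pourrait ressembler à ceci :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                  # nom purement illustratif
spec:
  containers:
  - name: app
    image: nginx                        # image supposée pour l'exemple
    lifecycle:
      postStart:
        exec:
          # exécuté juste après la création du conteneur
          command: ["/bin/sh", "-c", "echo demarre > /tmp/started"]
      preStop:
        exec:
          # exécuté juste avant l'arrêt du conteneur
          command: ["/bin/sh", "-c", "nginx -s quit"]
```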
diff --git a/content/fr/docs/concepts/containers/images.md b/content/fr/docs/concepts/containers/images.md
index 8ec3644ddf..0e0160dd4b 100644
--- a/content/fr/docs/concepts/containers/images.md
+++ b/content/fr/docs/concepts/containers/images.md
@@ -60,12 +60,13 @@ Ces certificats peuvent être fournis de différentes manières :
   - automatiquement configuré dans Google Compute Engine ou Google Kubernetes Engine
   - tous les pods peuvent lire le registre privé du projet
 - En utilisant Amazon Elastic Container Registry (ECR)
-  - utilise des rôles et politiques IAM pour contrôler l'accès aux dépôts ECR
+  - utilise les rôles et politiques IAM pour contrôler l'accès aux dépôts ECR
   - rafraîchit automatiquement les certificats de login ECR
 - En utilisant Oracle Cloud Infrastructure Registry (OCIR)
-  - utilisez les rôles et politiques IAM pour contrôler l'accès aux dépôts OCIR
+  - utilise les rôles et politiques IAM pour contrôler l'accès aux dépôts OCIR
 - En utilisant Azure Container Registry (ACR)
 - En utilisant IBM Cloud Container Registry
+  - utilise les rôles et politiques IAM pour contrôler l'accès à l'IBM Cloud Container Registry
 - En configurant les nœuds pour s'authentifier auprès d'un registre privé
   - tous les pods peuvent lire les registres privés configurés
   - nécessite la configuration des nœuds par un administrateur du cluster
@@ -120,9 +121,9 @@ Dépannage :
 - Vérifiez toutes les exigences ci-dessus.
 - Copiez les certificats de $REGION (par ex. `us-west-2`) sur votre poste de travail. Connectez-vous en SSH sur l'hôte et exécutez Docker manuellement avec ces certificats. Est-ce que ça marche ?
 - Vérifiez que kubelet s'exécute avec `--cloud-provider=aws`.
-- Recherchez dans les logs de kubelet (par ex. `journalctl -u kubelet`) des lignes de logs ressemblant à :
-  - `plugins.go:56] Registering credential provider: aws-ecr-key`
-  - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
+- Augmentez la verbosité des logs de kubelet à au moins 3 et recherchez dans les logs de kubelet (par exemple avec `journalctl -u kubelet`) des lignes similaires à :
+  - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API`
+  - `aws_credentials.go:116] Got ECR credentials from ECR API for <account>.dkr.ecr.<region>.amazonaws.com`
 
 ### Utiliser Azure Container Registry (ACR)
 En utilisant [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)
@@ -143,11 +144,11 @@ Une fois que vous avez défini ces variables, vous pouvez
 
 ### Utiliser IBM Cloud Container Registry
 
-IBM Cloud Container Registry fournit un registre d'images multi-tenant privé que vous pouvez utiliser pour stocker et partager de manière sécurisée vos images Docker. Par défaut, les images de votre registre privé sont scannées par le Vulnerability Advisor intégré pour détecter des failles de sécurité et des vulnérabilités potentielles. Les utilisateurs de votre compte IBM Cloud peuvent accéder à vos images, ou vous pouvez créer un token pour garantir l'accès à des namespaces du registre.
+IBM Cloud Container Registry fournit un registre d'images multi-tenant privé que vous pouvez utiliser pour stocker et partager de manière sécurisée vos images. Par défaut, les images de votre registre privé sont scannées par le Vulnerability Advisor intégré pour détecter des failles de sécurité et des vulnérabilités potentielles.
Les utilisateurs de votre compte IBM Cloud peuvent accéder à vos images, ou vous pouvez utiliser les rôles et politiques IAM pour fournir l'accès aux namespaces de l'IBM Cloud Container Registry.
 
-Pour installer le plugin du CLI de IBM Cloud Container Registry et créer un namespace pour vos images, voir [Débuter avec IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started).
+Pour installer le plugin du CLI de IBM Cloud Container Registry et créer un namespace pour vos images, voir [Débuter avec IBM Cloud Container Registry](https://cloud.ibm.com/docs/Registry?topic=registry-getting-started).
 
-Vous pouvez utiliser le IBM Cloud Container Registry pour déployer des conteneurs depuis des [images publiques de IBM Cloud](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) et vos images privées dans le namespace `default` de votre cluster IBM Cloud Kubernetes Service. Pour déployer un conteneur dans d'autres namespaces, ou pour utiliser une image d'une autre région de IBM Cloud Container Registry ou d'un autre compte IBM Cloud, créez un `imagePullSecret` Kubernetes. Pour plus d'informations, voir [Construire des conteneurs à partir d'images](https://cloud.ibm.com/docs/containers?topic=containers-images#images).
+Si vous utilisez le même compte et la même région, vous pouvez déployer des images stockées dans IBM Cloud Container Registry vers le namespace `default` de votre cluster IBM Cloud Kubernetes Service sans configuration supplémentaire, voir [Construire des conteneurs à partir d'images](https://cloud.ibm.com/docs/containers?topic=containers-images). Pour les autres options de configuration, voir [Comprendre comment autoriser votre cluster à télécharger des images depuis un registre](https://cloud.ibm.com/docs/containers?topic=containers-registry#cluster_registry_auth).
 
 ### Configurer les nœuds pour s'authentifier auprès d'un registre privé
 
@@ -209,19 +210,33 @@ spec:
     imagePullPolicy: Always
     command: [ "echo", "SUCCESS" ]
 EOF
+```
+
+```
 pod/test-image-privee-1 created
 ```
 
-Si tout fonctionne, alors, après quelques instants, vous devriez voir :
+Si tout fonctionne, alors, après quelques instants, vous pouvez exécuter :
 
 ```shell
 kubectl logs test-image-privee-1
+```
+
+et voir que la commande affiche :
+
+```
 SUCCESS
 ```
 
-En cas de problèmes, vous verrez :
+Si vous suspectez que la commande a échoué, vous pouvez exécuter :
 
 ```shell
-kubectl describe pods/test-image-privee-1 | grep "Failed"
+kubectl describe pods/test-image-privee-1 | grep 'Failed'
+```
+
+En cas d'échec, l'affichage sera similaire à :
+
+```
 Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
 ```
 
@@ -338,7 +353,7 @@ Il y a plusieurs solutions pour configurer des registres privés. Voici quelques
 
 - Générez des certificats de registre pour chaque *tenant*, placez-les dans des secrets, et placez ces secrets dans les namespaces de chaque *tenant*.
   Le *tenant* ajoute ce secret dans les imagePullSecrets de chaque pod.
 
-{{% /capture %}}
-
 Si vous devez accéder à plusieurs registres, vous pouvez créer un secret pour chaque registre.
 Kubelet va fusionner tous les `imagePullSecrets` dans un unique `.docker/config.json` virtuel.
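Par exemple, et avec des noms de secrets et d'image purement illustratifs, un Pod utilisant plusieurs registres référence un secret par registre, et le kubelet fusionne l'ensemble :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-registres                            # nom illustratif
spec:
  containers:
  - name: app
    image: registre-a.example.com/equipe/app:v1    # image supposée
  imagePullSecrets:
  - name: secret-registre-a    # un secret par registre ; noms supposés
  - name: secret-registre-b
```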
+ +{{% /capture %}} diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md index 918debad70..aa39822757 100644 --- a/content/fr/docs/reference/kubectl/cheatsheet.md +++ b/content/fr/docs/reference/kubectl/cheatsheet.md @@ -43,7 +43,7 @@ complete -F __start_kubectl k ```bash source <(kubectl completion zsh) # active l'auto-complétion pour zsh dans le shell courant -echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # ajoute l'auto-complétion de manière permanente à votre shell zsh +echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # ajoute l'auto-complétion de manière permanente à votre shell zsh ``` ## Contexte et configuration de Kubectl @@ -87,7 +87,7 @@ kubectl config unset users.foo # Supprime l'utilisateur fo ## Création d'objets -Les manifests Kubernetes peuvent être définis en json ou yaml. Les extensions de fichier `.yaml`, +Les manifests Kubernetes peuvent être définis en YAML ou JSON. Les extensions de fichier `.yaml`, `.yml`, et `.json` peuvent être utilisés. ```bash @@ -145,7 +145,7 @@ EOF # Commandes Get avec un affichage basique kubectl get services # Liste tous les services d'un namespace kubectl get pods --all-namespaces # Liste tous les Pods de tous les namespaces -kubectl get pods -o wide # Liste tous les Pods du namespace, avec plus de détails +kubectl get pods -o wide # Liste tous les Pods du namespace courant, avec plus de détails kubectl get deployment my-dep # Liste un déploiement particulier kubectl get pods # Liste tous les Pods dans un namespace kubectl get pod my-pod -o yaml # Affiche le YAML du Pod @@ -154,20 +154,20 @@ kubectl get pod my-pod -o yaml # Affiche le YAML du Pod kubectl describe nodes my-node kubectl describe pods my-pod -# Liste des services triés par nom -kubectl get services --sort-by=.metadata.name # Liste les services classés par nom +# Liste les services triés par nom +kubectl get services --sort-by=.metadata.name # Liste les pods classés par nombre de redémarrages kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' -# Affiche les pods du namespace test classés par capacité de stockage -kubectl get pods -n test --sort-by=.spec.capacity.storage +# Affiche les volumes persistants classés par capacité de stockage +kubectl get pv --sort-by=.spec.capacity.storage # Affiche la version des labels de tous les pods ayant un label app=cassandra kubectl get pods --selector=app=cassandra -o \ jsonpath='{.items[*].metadata.labels.version}' -# Affiche tous les noeuds (en utilisant un sélecteur pour exclure ceux ayant un label +# Affiche tous les noeuds (en utilisant un sélecteur pour exclure ceux ayant un label # nommé 'node-role.kubernetes.io/master') kubectl get node --selector='!node-role.kubernetes.io/master' @@ -252,7 +252,7 @@ kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", ``` ## Édition de ressources -Ceci édite n'importe quelle ressource de l'API dans un éditeur. +Édite n'importe quelle ressource de l'API dans un éditeur. 
```bash
kubectl edit svc/docker-registry                      # Édite le service nommé docker-registry
@@ -274,7 +274,7 @@ kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale plusie
 kubectl delete -f ./pod.json                        # Supprime un pod en utilisant le type et le nom spécifiés dans pod.json
 kubectl delete pod,service baz foo                  # Supprime les pods et services ayant les mêmes noms "baz" et "foo"
 kubectl delete pods,services -l name=myLabel        # Supprime les pods et services ayant le label name=myLabel
-kubectl -n my-ns delete po,svc --all                # Supprime tous les pods et services dans le namespace my-ns
+kubectl -n my-ns delete pod,svc --all               # Supprime tous les pods et services dans le namespace my-ns
 # Supprime tous les pods correspondants à pattern1 ou pattern2 avec awk
 kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod
 ```
@@ -292,9 +292,9 @@ kubectl logs -f my-pod                       # Fait défiler (stream) les
 kubectl logs -f my-pod -c my-container       # Fait défiler (stream) les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs)
 kubectl logs -f -l name=myLabel --all-containers  # Fait défiler (stream) les logs de tous les pods ayant le label name=myLabel (stdout)
 kubectl run -i --tty busybox --image=busybox -- sh  # Exécute un pod comme un shell interactif
-kubectl run nginx --image=nginx --restart=Never -n
-mynamespace                                         # Run pod nginx in a specific namespace
-kubectl run nginx --image=nginx --restart=Never     # Run pod nginx and write its spec into a file called pod.yaml
+kubectl run nginx --image=nginx --restart=Never -n mynamespace  # Exécute le pod nginx dans un namespace spécifique
+kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml  # Simule l'exécution du pod nginx et écrit sa spécification dans le fichier pod.yaml
 kubectl attach my-pod -i                            # Attache à un conteneur en cours d'exécution
@@ -340,7 +340,7 @@ kubectl api-resources --api-group=extensions # Toutes les ressources dans le gro
 
 ### Formattage de l'affichage
 
-Pour afficher les détails sur votre terminal dans un format spécifique, vous pouvez utiliser une des options `-o` ou `--output` avec les commandes `kubectl` qui les prennent en charge.
+Pour afficher les détails sur votre terminal dans un format spécifique, utilisez l'option `-o` (ou `--output`) avec les commandes `kubectl` qui la prend en charge.
 
 Format d'affichage | Description
 --------------| -----------
@@ -353,6 +353,21 @@ Format d'affichage | Description
 `-o=wide` | Affiche dans le format texte avec toute information supplémentaire, et pour des pods, le nom du noeud est inclus
 `-o=yaml` | Affiche un objet de l'API formaté en YAML
 
+Exemples utilisant `-o=custom-columns` :
+
+```bash
+# Toutes les images s'exécutant dans un cluster
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'
+
+# Toutes les images excepté "k8s.gcr.io/coredns:1.6.2"
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
+
+# Tous les champs dans metadata quel que soit leur nom
+kubectl get pods -A -o=custom-columns='DATA:metadata.*'
+```
+
+Plus d'exemples dans la [documentation de référence](/fr/docs/reference/kubectl/overview/#colonnes-personnalisées) de kubectl.
+
 ### Verbosité de l'affichage de Kubectl et débogage
 
 La verbosité de Kubectl est contrôlée par une des options `-v` ou `--v` suivie d'un entier représentant le niveau de log.
Les conventions générales de logging de Kubernetes et les niveaux de log associés sont décrits [ici](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md). diff --git a/content/fr/docs/reference/kubectl/conventions.md b/content/fr/docs/reference/kubectl/conventions.md index ffd9a1a3ed..8b458871f6 100644 --- a/content/fr/docs/reference/kubectl/conventions.md +++ b/content/fr/docs/reference/kubectl/conventions.md @@ -16,7 +16,6 @@ Pour une sortie stable dans un script : * Demandez un des formats de sortie orienté machine, comme `-o name`, `-o json`, `-o yaml`, `-o go-template` ou `-o jsonpath`. * Spécifiez complètement la version. Par exemple, `jobs.v1.batch/monjob`. Cela va assurer que kubectl n'utilise pas sa version par défaut, qui risque d'évoluer avec le temps. -* Utilisez le flag `--generator` pour coller à un comportement spécifique lorsque vous utilisez les commandes basées sur un générateur, comme `kubectl run` ou `kubectl expose`. * Ne vous basez pas sur un contexte, des préférences ou tout autre état implicite. ## Bonnes pratiques @@ -26,48 +25,34 @@ Pour une sortie stable dans un script : Pour que `kubectl run` satisfasse l'infrastructure as code : * Taggez les images avec un tag spécifique à une version et n'utilisez pas ce tag pour une nouvelle version. Par exemple, utilisez `:v1234`, `v1.2.3`, `r03062016-1-4`, plutôt que `:latest` (Pour plus d'informations, voir [Bonnes pratiques pour la configuration](/docs/concepts/configuration/overview/#container-images)). -* Capturez les paramètres dans un script enregistré, ou tout au moins utilisez `--record` pour annoter les objets créés avec la ligne de commande correspondante pour une image peu paramétrée. * Capturez le script pour une image fortement paramétrée. * Passez à des fichiers de configuration enregistrés dans un système de contrôle de source pour des fonctionnalités désirées mais non exprimables avec des flags de `kubectl run`. -* Collez à une version spécifique de [générateur](#generators), comme `kubectl run --generator=deployment/v1beta1`. + +Vous pouvez utiliser l'option `--dry-run` pour prévisualiser l'objet qui serait envoyé à votre cluster, sans réellement l'envoyer. + +{{< note >}} +Tous les générateurs `kubectl` sont dépréciés. Voir la documentation de Kubernetes v1.17 pour une [liste](https://v1-17.docs.kubernetes.io/fr/docs/reference/kubectl/conventions/#g%C3%A9n%C3%A9rateurs) de générateurs et comment ils étaient utilisés. +{{< /note >}} #### Générateurs - -Vous pouvez créer les ressources suivantes en utilisant `kubectl run` avec le flag `--generator` : - -| Ressource | groupe api | commande kubectl | -|-----------------------------------|--------------------|---------------------------------------------------| -| Pod | v1 | `kubectl run --generator=run-pod/v1` | -| Replication controller (déprécié) | v1 | `kubectl run --generator=run/v1` | -| Deployment (déprécié) | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1` | -| Deployment (déprécié) | apps/v1beta1 | `kubectl run --generator=deployment/apps.v1beta1` | -| Job (déprécié) | batch/v1 | `kubectl run --generator=job/v1` | -| CronJob (déprécié) | batch/v1beta1 | `kubectl run --generator=cronjob/v1beta1` | -| CronJob (déprécié) | batch/v2alpha1 | `kubectl run --generator=cronjob/v2alpha1` | - -{{< note >}} -`kubectl run --generator` sauf pour `run-pod/v1` est déprécié depuis v1.12. 
-{{< /note >}} - -Si vous n'indiquez pas de flag de générateur, d'autres flags vous demandent d'utiliser un générateur spécifique. La table suivante liste les flags qui vous forcent à préciser un générateur spécifique, selon la version du cluster : - -| Ressource générée | Cluster v1.4 et suivants | Cluster v1.3 | Cluster v1.2 | Cluster v1.1 et précédents | -|:----------------------:|--------------------------|-----------------------|--------------------------------------------|--------------------------------------------| -| Pod | `--restart=Never` | `--restart=Never` | `--generator=run-pod/v1` | `--restart=OnFailure` OU `--restart=Never` | -| Replication Controller | `--generator=run/v1` | `--generator=run/v1` | `--generator=run/v1` | `--restart=Always` | -| Deployment | `--restart=Always` | `--restart=Always` | `--restart=Always` | N/A | -| Job | `--restart=OnFailure` | `--restart=OnFailure` | `--restart=OnFailure` OU `--restart=Never` | N/A | -| Cron Job | `--schedule=` | N/A | N/A | N/A | - -{{< note >}} -Ces flags utilisent un générateur par défaut uniquement lorsque vous n'avez utilisé aucun flag. -Cela veut dire que lorsque vous combinez `--generator` avec d'autres flags, le générateur que vous avez spécifié plus tard ne change pas. Par exemple, dans cluster v1.4, si vous spécifiez d'abord `--restart=Always`, un Deployment est créé ; si vous spécifiez ensuite `--restart=Always` et `--generator=run/v1`, alors un Replication Controller sera créé. -Ceci vous permet de coller à un comportement spécifique avec le générateur, même si le générateur par défaut est changé par la suite. -{{< /note >}} - -Les flags définissent le générateur dans l'ordre suivant : d'abord le flag `--schedule`, puis le flag `--restart`, et finalement le flag `--generator`. - -Pour vérifier la ressource qui a été finalement créée, utilisez le flag `--dry-run`, qui fournit l'objet qui sera soumis au cluster. +Vous pouvez générer les ressources suivantes avec une commande kubectl, `kubectl create --dry-run -o yaml`: +``` + clusterrole Crée un ClusterRole. + clusterrolebinding Crée un ClusterRoleBinding pour un ClusterRole particulier. + configmap Crée une configmap à partir d'un fichier local, un répertoire ou une valeur litérale. + cronjob Crée un cronjob avec le nom spécifié. + deployment Crée un deployment avec le nom spécifié. + job Crée un job avec le nom spécifié. + namespace Crée un namespace avec le nom spécifié. + poddisruptionbudget Crée un pod disruption budget avec le nom spécifié. + priorityclass Crée une priorityclass avec le nom spécifié. + quota Crée un quota avec le nom spécifié. + role Crée un role avec une unique règle. + rolebinding Crée un RoleBinding pour un Role ou ClusterRole particulier. + secret Crée un secret en utilisant la sous-commande spécifiée. + service Crée un service en utilisant la sous-commande spécifiée. + serviceaccount Crée un service account avec le nom spécifié. 
+``` ### `kubectl apply` diff --git a/content/fr/docs/reference/kubectl/jsonpath.md b/content/fr/docs/reference/kubectl/jsonpath.md index c77167e62e..427ae93516 100644 --- a/content/fr/docs/reference/kubectl/jsonpath.md +++ b/content/fr/docs/reference/kubectl/jsonpath.md @@ -71,10 +71,10 @@ Fonction | Description | Exemple --------------------|----------------------------|-----------------------------------------------------------------|------------------ `text` | le texte en clair | `le type est {.kind}` | `le type est List` `@` | l'objet courant | `{@}` | identique à l'entrée -`.` ou `[]` | opérateur fils | `{.kind}` ou `{['kind']}` | `List` +`.` ou `[]` | opérateur fils | `{.kind}`, `{['kind']}` ou `{['name\.type']}` | `List` `..` | descente récursive | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e` `*` | joker. Tous les objets | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]` -`[start:end :step]` | opérateur d'indice | `{.users[0].name}` | `myself` +`[start:end:step]` | opérateur d'indice | `{.users[0].name}` | `myself` `[,]` | opérateur d'union | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]` `?()` | filtre | `{.users[?(@.name=="e2e")].user.password}` | `secret` `range`, `end` | itération de liste | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` @@ -87,14 +87,18 @@ kubectl get pods -o json kubectl get pods -o=jsonpath='{@}' kubectl get pods -o=jsonpath='{.items[0]}' kubectl get pods -o=jsonpath='{.items[0].metadata.name}' +kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}" kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' ``` +{{< note >}} Sous Windows, vous devez utiliser des guillemets _doubles_ autour des modèles JSONPath qui contiennent des espaces (et non des guillemets simples comme ci-dessus pour bash). Ceci entraîne que vous devez utiliser un guillemet simple ou un double guillemet échappé autour des chaînes litérales dans le modèle. 
Par exemple : ```cmd -C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}" -C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}" +kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}" +kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}" ``` +{{< /note >}} + {{% /capture %}} diff --git a/content/fr/docs/reference/kubectl/kubectl.md b/content/fr/docs/reference/kubectl/kubectl.md index b8c5a3f3c3..ceaa94b6c5 100755 --- a/content/fr/docs/reference/kubectl/kubectl.md +++ b/content/fr/docs/reference/kubectl/kubectl.md @@ -510,6 +510,7 @@ kubectl [flags] {{% capture seealso %}} +* [kubectl alpha](/docs/reference/generated/kubectl/kubectl-commands#alpha) - Commandes pour fonctionnalités alpha * [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Met à jour les annotations d'une ressource * [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Affiche les ressources de l'API prises en charge sur le serveur * [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Affiche les versions de l'API prises en charge sur le serveur, sous la forme "groupe/version" @@ -545,7 +546,7 @@ kubectl [flags] * [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - Remplace une ressource par fichier ou stdin * [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - Gère le rollout d'une ressource * [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - Exécute une image donnée dans le cluster -* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Définit une nouvelle taille pour un Deployment, ReplicaSet, Replication Controller, ou Job +* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Définit une nouvelle taille pour un Deployment, ReplicaSet ou Replication Controller * [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - Définit des fonctionnalités spécifiques sur des objets * [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - Met à jour les marques (taints) sur un ou plusieurs nœuds * [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - Affiche l'utilisation de ressources matérielles (CPU/Memory/Storage) diff --git a/content/fr/docs/reference/kubectl/overview.md b/content/fr/docs/reference/kubectl/overview.md index 7ed5b714b1..01d36d469e 100644 --- a/content/fr/docs/reference/kubectl/overview.md +++ b/content/fr/docs/reference/kubectl/overview.md @@ -9,7 +9,7 @@ card: --- {{% capture overview %}} -Kubectl est une interface en ligne de commande qui permet d'exécuter des commandes sur des clusters Kubernetes. `kubectl` recherche un fichier appelé config dans le répertoire $HOME/.kube. Vous pouvez spécifier d'autres fichiers [kubeconfig](https://kube +Kubectl est un outil en ligne de commande pour contrôler des clusters Kubernetes. `kubectl` recherche un fichier appelé config dans le répertoire $HOME/.kube. Vous pouvez spécifier d'autres fichiers [kubeconfig](https://kube rnetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) en définissant la variable d'environnement KUBECONFIG ou en utilisant le paramètre [`--kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/). 
Cet aperçu couvre la syntaxe `kubectl`, décrit les opérations et fournit des exemples classiques. Pour des détails sur chaque commande, incluant toutes les options et sous-commandes autorisées, voir la documentation de référence de [kubectl](/docs/reference/generated/kubectl/kubectl-commands/). Pour des instructions d'installation, voir [installer kubectl](/docs/tasks/kubectl/install/). @@ -67,34 +67,51 @@ Si vous avez besoin d'aide, exécutez `kubectl help` depuis la fenêtre de termi Le tableau suivant inclut une courte description et la syntaxe générale pour chaque opération `kubectl` : -Opération | Syntaxe | Description --------------------- | -------------------- | -------------------- +Opération | Syntaxe | Description +----------------| ---------------------------------------------------------------------------------------------------------------------------------------------------------| -------------------- +`alpha` | `kubectl alpha SOUS-COMMANDE [flags]` | Liste les commandes disponibles qui correspondent à des fonctionnalités alpha, qui ne sont pas activées par défaut dans les clusters Kubernetes. `annotate` | kubectl annotate (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Ajoute ou modifie les annotations d'une ou plusieurs ressources. +`api-resources` | `kubectl api-resources [flags]` | Liste les ressources d'API disponibles. `api-versions` | `kubectl api-versions [flags]` | Liste les versions d'API disponibles. `apply` | `kubectl apply -f FICHIER [flags]` | Applique un changement de configuration à une ressource depuis un fichier ou stdin. `attach` | `kubectl attach POD -c CONTENEUR [-i] [-t] [flags]` | Attache à un conteneur en cours d'exécution soit pour voir la sortie standard soit pour interagir avec le conteneur (stdin). +`auth` | `kubectl auth [flags] [options]` | Inspecte les autorisations. `autoscale` | kubectl autoscale (-f FICHIER | TYPE NOM | TYPE/NOM) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | Scale automatiquement l'ensemble des pods gérés par un replication controller. +`certificate` | `kubectl certificate SOUS-COMMANDE [options]` | Modifie les ressources de type certificat. `cluster-info` | `kubectl cluster-info [flags]` | Affiche les informations des endpoints du master et des services du cluster. +`completion` | `kubectl completion SHELL [options]` | Affiche le code de complétion pour le shell spécifié (bash ou zsh). `config` | `kubectl config SOUS-COMMANDE [flags]` | Modifie les fichiers kubeconfig. Voir les sous-commandes individuelles pour plus de détails. +`convert` | `kubectl convert -f FICHIER [options]` | Convertit des fichiers de configuration entre différentes versions d'API. Les formats YAML et JSON sont acceptés. +`cordon` | `kubectl cordon NOEUD [options]` | Marque un nœud comme non programmable. +`cp` | `kubectl cp [options]` | Copie des fichiers et des répertoires vers et depuis des conteneurs. `create` | `kubectl create -f FICHIER [flags]` | Crée une ou plusieurs ressources depuis un fichier ou stdin. `delete` | kubectl delete (-f FICHIER | TYPE [NOM | /NOM | -l label | --all]) [flags] | Supprime des ressources soit depuis un fichier ou stdin, ou en indiquant des sélecteurs de label, des noms, des sélecteurs de ressources ou des ressources. `describe` | kubectl describe (-f FICHIER | TYPE [PREFIXE_NOM | /NOM | -l label]) [flags] | Affiche l'état détaillé d'une ou plusieurs ressources. 
-`diff` | `kubectl diff -f FICHIER [flags]` | Diff un fichier ou stdin par rapport à la configuration en cours (**BETA**) +`diff` | `kubectl diff -f FICHIER [flags]` | Diff un fichier ou stdin par rapport à la configuration en cours +`drain` | `kubectl drain NOEUD [options]` | Vide un nœud en préparation de sa mise en maintenance. `edit` | kubectl edit (-f FICHIER | TYPE NOM | TYPE/NOM) [flags] | Édite et met à jour la définition d'une ou plusieurs ressources sur le serveur en utilisant l'éditeur par défaut. `exec` | `kubectl exec POD [-c CONTENEUR] [-i] [-t] [flags] [-- COMMANDE [args...]]` | Exécute une commande à l'intérieur d'un conteneur dans un pod. `explain` | `kubectl explain [--recursive=false] [flags]` | Obtient des informations sur différentes ressources. Par exemple pods, nœuds, services, etc. `expose` | kubectl expose (-f FICHIER | TYPE NOM | TYPE/NOM) [--port=port] [--protocol=TCP|UDP] [--target-port=nombre-ou-nom] [--name=nom] [--external-ip=ip-externe-ou-service] [--type=type] [flags] | Expose un replication controller, service ou pod comme un nouveau service Kubernetes. `get` | kubectl get (-f FICHIER | TYPE [NOM | /NOM | -l label]) [--watch] [--sort-by=CHAMP] [[-o | --output]=FORMAT_AFFICHAGE] [flags] | Liste une ou plusieurs ressources. +`kustomize` | `kubectl kustomize [flags] [options]` | Liste un ensemble de ressources d'API généré à partir d'instructions d'un fichier kustomization.yaml. Le paramètre doit être le chemin d'un répertoire contenant ce fichier, ou l'URL d'un dépôt git incluant un suffixe de chemin par rapport à la racine du dépôt. `label` | kubectl label (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Ajoute ou met à jour les labels d'une ou plusieurs ressources. `logs` | `kubectl logs POD [-c CONTENEUR] [--follow] [flags]` | Affiche les logs d'un conteneur dans un pod. +`options` | `kubectl options` | Liste des options globales, s'appliquant à toutes commandes. `patch` | kubectl patch (-f FICHIER | TYPE NOM | TYPE/NOM) --patch PATCH [flags] | Met à jour un ou plusieurs champs d'une resource en utilisant le processus de merge patch stratégique. +`plugin` | `kubectl plugin [flags] [options]` | Fournit des utilitaires pour interagir avec des plugins. `port-forward` | `kubectl port-forward POD [PORT_LOCAL:]PORT_DISTANT [...[PORT_LOCAL_N:]PORT_DISTANT_N] [flags]` | Transfère un ou plusieurs ports locaux vers un pod. `proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Exécute un proxy vers un API server Kubernetes. `replace` | `kubectl replace -f FICHIER` | Remplace une ressource depuis un fichier ou stdin. -`rolling-update`| kubectl rolling-update ANCIEN_NOM_CONTROLEUR ([NOUVEAU_NOM_CONTROLEUR] --image=NOUVELLE_IMAGE_CONTENEUR | -f NOUVELLE_SPEC_CONTROLEUR) [flags] | Exécute un rolling update en remplaçant graduellement le replication controller indiqué et ses pods. -`run` | `kubectl run NOM --image=image [--env="cle=valeur"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Exécute dans le cluster l'image indiquée. +`rollout` | `kubectl rollout SOUS-COMMANDE [options]` | Gère le rollout d'une ressource. Les types de ressources valides sont : deployments, daemonsets et statefulsets. +`run` | `kubectl run NOM --image=image [--env="cle=valeur"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Exécute dans le cluster l'image indiquée. 
`scale` | kubectl scale (-f FICHIER | TYPE NOM | TYPE/NOM) --replicas=QUANTITE [--resource-version=version] [--current-replicas=quantité] [flags] | Met à jour la taille du replication controller indiqué.
+`set` | `kubectl set SOUS-COMMANDE [options]` | Configure les ressources de l'application.
+`taint` | `kubectl taint NOEUD NOM CLE_1=VAL_1:EFFET_TAINT_1 ... CLE_N=VAL_N:EFFET_TAINT_N [options]` | Met à jour les marques (taints) d'un ou plusieurs nœuds.
+`top` | `kubectl top [flags] [options]` | Affiche l'utilisation des ressources (CPU/Mémoire/Stockage).
+`uncordon` | `kubectl uncordon NOEUD [options]` | Marque un nœud comme programmable.
`version` | `kubectl version [--client] [flags]` | Affiche la version de Kubernetes du serveur et du client.
+`wait` | kubectl wait ([-f FICHIER] | ressource.groupe/ressource.nom | ressource.groupe [(-l label | --all)]) [--for=delete|--for condition=available] [options] | Expérimental : Attend une condition spécifique sur une ou plusieurs ressources.

Rappelez-vous : Pour tout savoir sur les opérations, voir la documentation de référence de [kubectl](/docs/user-guide/kubectl/).

@@ -105,7 +122,8 @@ Le tableau suivant inclut la liste de tous les types de ressources pris en charg
(cette sortie peut être obtenue depuis `kubectl api-resources`, et correspond à Kubernetes 1.13.3.)

| Nom de la ressource | Noms abrégés | Groupe API | Par namespace | Genre de la ressource |
-|---|---|---|---|---|
+|---------------------|--------------|------------|---------------|-----------------------|
+| `bindings` | | | true | Binding |
| `componentstatuses` | `cs` | | false | ComponentStatus |
| `configmaps` | `cm` | | true | ConfigMap |
| `endpoints` | `ep` | | true | Endpoints |
@@ -150,6 +168,8 @@ Le tableau suivant inclut la liste de tous les types de ressources pris en charg
| `rolebindings` | | rbac.authorization.k8s.io | true | RoleBinding |
| `roles` | | rbac.authorization.k8s.io | true | Role |
| `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass |
+| `csidrivers` | | storage.k8s.io | false | CSIDriver |
+| `csinodes` | | storage.k8s.io | false | CSINode |
| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass |
| `volumeattachments` | | storage.k8s.io | false | VolumeAttachment |

@@ -242,8 +262,8 @@ kubectl get pods --server-print=false
La sortie ressemble à :

```shell
-NAME      READY   STATUS    RESTARTS   AGE
-nom-pod   1/1     Running   0          1m
+NAME      AGE
+nom-pod   1m
```

### Ordonner les listes d'objets

@@ -297,8 +317,8 @@ $ kubectl get replicationcontroller <rc-name>
# Liste ensemble tous les replication controller et les services dans le format de sortie texte.
$ kubectl get rc,services

-# Liste tous les daemon sets, dont ceux non initialisés, dans le format de sortie texte.
-$ kubectl get ds --include-uninitialized
+# Liste tous les daemon sets dans le format de sortie texte.
+kubectl get ds

# Liste tous les pods s'exécutant sur le nœud serveur01
$ kubectl get pods --field-selector=spec.nodeName=serveur01

@@ -317,8 +337,8 @@ $ kubectl describe pods/<pod-name>
# Rappelez-vous : les noms des pods créés par un replication controller sont préfixés par le nom du replication controller.
$ kubectl describe pods <rc-name>

-# Décrit tous les pods, sans inclure les non initialisés
-$ kubectl describe pods --include-uninitialized=false
+# Décrit tous les pods
+$ kubectl describe pods
```

{{< note >}}
@@ -332,11 +352,8 @@ Vous pouvez utiliser les options `-w` ou `--watch` pour initier l'écoute des mo

# Supprime un pod en utilisant le type et le nom spécifiés dans le fichier pod.yaml.
$ kubectl delete -f pod.yaml

-# Supprime tous les pods et services ayant le label name=<label-name>.
-$ kubectl delete pods,services -l name=<label-name>
-
-# Supprime tous les pods et services ayant le label name=<label-name>, en incluant les non initialisés.
-$ kubectl delete pods,services -l name=<label-name> --include-uninitialized
+# Supprime tous les pods et services ayant le label <label-key>=<label-value>
+$ kubectl delete pods,services -l <label-key>=<label-value>

# Supprime tous les pods, en incluant les non initialisés.
$ kubectl delete pods --all

@@ -346,13 +363,13 @@
```shell
# Affiche la sortie de la commande 'date' depuis le pod <pod-name>. Par défaut, la sortie se fait depuis le premier conteneur.
-$ kubectl exec <pod-name> date
+$ kubectl exec <pod-name> -- date

# Affiche la sortie de la commande 'date' depuis le conteneur <container-name> du pod <pod-name>.
-$ kubectl exec <pod-name> -c <container-name> date
+$ kubectl exec <pod-name> -c <container-name> -- date

# Obtient un TTY interactif et exécute /bin/bash depuis le pod <pod-name>. Par défaut, la sortie se fait depuis le premier conteneur.
-$ kubectl exec -ti <pod-name> /bin/bash
+$ kubectl exec -ti <pod-name> -- /bin/bash
```

`kubectl logs` - Affiche les logs d'un conteneur dans un pod.

@@ -365,6 +382,16 @@ $ kubectl logs <pod-name>
$ kubectl logs -f <pod-name>
```

+`kubectl diff` - Affiche un diff des mises à jour proposées au cluster.
+
+```shell
+# Diff les ressources présentes dans "pod.json".
+kubectl diff -f pod.json
+
+# Diff les ressources présentes dans le fichier lu sur l'entrée standard.
+cat service.yaml | kubectl diff -f -
+```
+
## Exemples : Créer et utiliser des plugins

Utilisez les exemples suivants pour vous familiariser avec l'écriture et l'utilisation de plugins `kubectl` :

@@ -428,7 +455,7 @@ $ cat ./kubectl-whoami
# ce plugin utilise la commande `kubectl config` pour afficher
# l'information sur l'utilisateur courant, en se basant sur
# le contexte couramment sélectionné
-kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ .context.user }}{{ end }}{{ end }}'
+kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ printf "%s\n" .context.user }}{{ end }}{{ end }}'
```

Exécuter le plugin ci-dessus vous donne une sortie contenant l'utilisateur du contexte couramment sélectionné dans votre fichier KUBECONFIG :

diff --git a/content/fr/docs/tasks/_index.md b/content/fr/docs/tasks/_index.md
index 4fd48dcd55..63dc8f8e0f 100644
--- a/content/fr/docs/tasks/_index.md
+++ b/content/fr/docs/tasks/_index.md
@@ -16,7 +16,7 @@ Une page montre comment effectuer une seule chose, généralement en donnant une

{{% capture body %}}

-## Interface web (Dashboard) #{dashboard}
+## Interface web (Dashboard) {#dashboard}

Déployer et accéder au dashboard web de votre cluster pour vous aider à le gérer et administrer un cluster Kubernetes.

diff --git a/content/it/docs/concepts/overview/kubernetes-api.md b/content/it/docs/concepts/overview/kubernetes-api.md
new file mode 100644
index 0000000000..7f122bcd14
--- /dev/null
+++ b/content/it/docs/concepts/overview/kubernetes-api.md
@@ -0,0 +1,126 @@
+---
+title: Le API di Kubernetes
+content_template: templates/concept
+weight: 30
+card:
+  name: concepts
+  weight: 20
+---
+
+{{% capture overview %}}
+
+Le convenzioni generali seguite dalle API sono descritte in [API conventions doc](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
+
+Gli *endpoints* delle API, la lista delle risorse esposte ed i relativi esempi sono descritti in [API Reference](/docs/reference).
+
+L'accesso alle API da remoto è discusso in [Controllare l'accesso alle API](/docs/reference/access-authn-authz/controlling-access/).
+
+Le API di Kubernetes servono anche come riferimento per lo schema dichiarativo della configurazione del sistema stesso. Il comando [kubectl](/docs/reference/kubectl/overview/) può essere usato per creare, aggiornare, cancellare ed ottenere le istanze delle risorse esposte attraverso le API.
+
+Kubernetes assicura la persistenza del suo stato (al momento in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) usando la rappresentazione delle risorse implementata dalle API.
+
+Kubernetes stesso è diviso in differenti componenti, i quali interagiscono tra loro attraverso le stesse API.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Evoluzione delle API
+
+In base alla nostra esperienza, ogni sistema di successo ha bisogno di evolvere, ovvero deve estendersi aggiungendo funzionalità o modificando le esistenti per adattarle a nuovi casi d'uso. Le API di Kubernetes sono quindi destinate a cambiare e ad estendersi. In generale, ci si deve aspettare che nuove risorse vengano aggiunte di frequente, così come nuovi campi possano altresì essere aggiunti a risorse esistenti. L'eliminazione di risorse o di campi deve seguire la [politica di deprecazione delle API](/docs/reference/using-api/deprecation-policy/).
+
+In cosa consiste una modifica compatibile e come modificare le API è descritto dal [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md).
+
+## Definizioni OpenAPI e Swagger
+
+La documentazione completa e dettagliata delle API è fornita attraverso la specifica [OpenAPI](https://www.openapis.org/).
+
+Dalla versione 1.10 di Kubernetes, l'API server di Kubernetes espone le specifiche OpenAPI attraverso l'*endpoint* `/openapi/v2`. Attraverso i seguenti *headers* HTTP è possibile richiedere un formato specifico:
+
+Header | Possibili Valori
+------ | ---------------
+Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (il content-type di default è `application/json` per `*/*` ovvero questo header può anche essere omesso)
+Accept-Encoding | `gzip` (questo header è facoltativo)
+
+Prima della versione 1.14, gli *endpoints* che includono il nome del formato nel segmento del percorso (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)
+espongono le specifiche OpenAPI in formati differenti. Questi *endpoints* sono deprecati, e saranno rimossi nella versione 1.14 di Kubernetes.
+
+**Esempi per ottenere le specifiche OpenAPI**:
+
+Prima della 1.10 | Dalla versione 1.10 di Kubernetes
+---------------- | -----------------------------
+GET /swagger.json | GET /openapi/v2 **Accept**: application/json
+GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
+GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip
+
+Kubernetes implementa per le sue API anche una serializzazione alternativa basata sul formato Protobuf, pensata principalmente per la comunicazione intra-cluster e documentata nella seguente [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md); i file IDL per ciascuno schema si trovano nei *Go packages* che definiscono i tipi delle API.
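+A titolo di esempio (uno schizzo minimo, non presente nel testo originale: si assume che `kubectl` sia configurato per il cluster e che la porta locale `8001` sia libera), la specifica OpenAPI può essere recuperata così:
+
+```shell
+# avvia un proxy locale verso l'API server, poi richiedi la specifica OpenAPI in JSON
+kubectl proxy --port=8001 &
+curl -H "Accept: application/json" http://localhost:8001/openapi/v2
+```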
+
+Prima della versione 1.14, l'*apiserver* di Kubernetes espone anche un *endpoint*, `/swaggerapi`, che può essere usato per ottenere
+la documentazione per le API di Kubernetes secondo le specifiche [Swagger v1.2](http://swagger.io/).
+Questo *endpoint* è deprecato, ed è stato rimosso nella versione 1.14 di Kubernetes.
+
+## Versionamento delle API
+
+Per facilitare l'eliminazione di campi specifici o la modifica della rappresentazione di una data risorsa, Kubernetes supporta molteplici versioni della stessa API disponibili attraverso differenti indirizzi, come ad esempio `/api/v1` oppure
+`/apis/extensions/v1beta1`.
+
+Abbiamo deciso di versionare a livello di API piuttosto che a livello di risorsa o di campo per assicurare che una data API rappresenti una vista chiara e consistente delle risorse di sistema e dei suoi comportamenti, e per abilitare un controllo degli accessi sia per le API in via di decommissionamento che per quelle sperimentali.
+
+Si noti che il versionamento delle API ed il versionamento del Software sono indirettamente collegati. La [API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) descrive la relazione tra le versioni delle API e le versioni del Software.
+
+Differenti versioni delle API implicano differenti livelli di stabilità e supporto. I criteri per ciascun livello sono descritti in dettaglio nella [API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Queste modifiche sono qui ricapitolate:
+
+- Livello alpha:
+  - Il nome di versione contiene `alpha` (e.g. `v1alpha1`).
+  - Potrebbe contenere dei *bug*. Abilitare questa funzionalità potrebbe esporre al rischio di *bug*. Disabilitata di default.
+  - Il supporto di questa funzionalità potrebbe essere rimosso in ogni momento senza previa notifica.
+  - Questa API potrebbe cambiare in modo incompatibile in rilasci futuri del Software e senza previa notifica.
+  - Se ne raccomanda l'utilizzo solo in *cluster* di test creati per un breve periodo di vita, a causa di potenziali *bug* e della mancanza di un supporto di lungo periodo.
+- Livello beta:
+  - Il nome di versione contiene `beta` (e.g. `v2beta3`).
+  - Il codice è ben testato. Abilitare la funzionalità è considerato sicuro. Abilitata di default.
+  - Il supporto per la funzionalità nel suo complesso non sarà rimosso, tuttavia i dettagli potrebbero cambiare.
+  - Lo schema e/o la semantica delle risorse potrebbe cambiare in modo incompatibile in successivi rilasci beta o stabili. Nel caso questo dovesse verificarsi, verranno fornite istruzioni per la migrazione alla versione successiva. Questo potrebbe richiedere la cancellazione, la modifica e la ri-creazione degli oggetti supportati da questa API. Questo processo di modifica potrebbe richiedere una certa pianificazione. La modifica potrebbe richiedere un periodo di non disponibilità dell'applicazione che utilizza questa funzionalità.
+  - Raccomandata solo per applicazioni non critiche per la vostra impresa a causa dei potenziali cambiamenti incompatibili in rilasci successivi. Se avete più *cluster* che possono essere aggiornati separatamente, potreste essere in grado di gestire meglio questa limitazione.
+  - **Per favore utilizzate le nostre versioni beta e forniteci riscontri relativamente ad esse!
Una volta promosse a stabili, potrebbe non essere semplice apportare cambiamenti successivi.**
+- Livello stabile:
+  - Il nome di versione è `vX` dove `X` è un intero.
+  - Le funzionalità relative alle versioni stabili continueranno ad essere presenti per parecchie versioni successive.
+
+## API groups
+
+Per facilitare l'estendibilità delle API di Kubernetes, sono stati implementati gli [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
+L'*API group* è specificato nel percorso REST ed anche nel campo `apiVersion` di un oggetto serializzato.
+
+Al momento ci sono diversi *API groups* in uso:
+
+1. Il gruppo *core*, spesso referenziato come il *legacy group*, è disponibile al percorso REST `/api/v1` ed utilizza `apiVersion: v1`.
+
+1. I gruppi basati su un nome specifico sono disponibili attraverso il percorso REST `/apis/$GROUP_NAME/$VERSION`, ed usano `apiVersion: $GROUP_NAME/$VERSION` (e.g. `apiVersion: batch/v1`). La lista completa degli *API groups* supportati è descritta nel documento [Kubernetes API reference](/docs/reference/).
+
+Vi sono due modi supportati per estendere le API attraverso le [*custom resources*](/docs/concepts/api-extension/custom-resources/):
+
+1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
+   è pensato per utenti con esigenze CRUD basilari.
+1. Utenti che necessitano di un set di API completamente nuovo che utilizzi appieno la semantica di Kubernetes possono implementare il loro *apiserver* ed utilizzare l'[*aggregator*](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/)
+   per fornire ai propri utilizzatori la stessa esperienza a cui sono abituati con le API incluse nativamente in Kubernetes.
+
+
+## Abilitare o disabilitare gli *API groups*
+
+Alcune risorse ed *API groups* sono abilitati di default. Questi possono essere abilitati o disabilitati attraverso il flag `--runtime-config`
+applicato sull'*apiserver*. `--runtime-config` accetta valori separati da virgola. Per esempio: per disabilitare `batch/v1`, usa la configurazione `--runtime-config=batch/v1=false`; per abilitare `batch/v2alpha1`, usa `--runtime-config=batch/v2alpha1`.
+Il *flag* accetta set di coppie *chiave/valore* separati da virgola che descrivono la configurazione a *runtime* dell'*apiserver*.
+
+{{< note >}}Abilitare o disabilitare risorse o gruppi richiede il riavvio dell'*apiserver* e del *controller-manager* affinché le modifiche specificate attraverso il flag `--runtime-config` abbiano effetto.{{< /note >}}
+
+## Abilitare specifiche risorse nel gruppo extensions/v1beta1
+
+DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies e ReplicaSets presenti nel gruppo di API `extensions/v1beta1` sono disabilitate di default.
+Per esempio: per abilitare deployments e daemonsets, utilizza la seguente configurazione
+`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
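+A titolo illustrativo (uno schizzo minimo e ipotetico, non presente nel testo originale: il percorso del binario e le altre opzioni obbligatorie dipendono dalla vostra installazione), i valori possono essere combinati in un unico flag:
+
+```shell
+# esempio ipotetico: abilita batch/v2alpha1 e le risorse extensions/v1beta1 indicate sopra in un solo flag
+kube-apiserver --runtime-config=batch/v2alpha1=true,extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true
+```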
+ +{{< note >}}Abilitare/disabilitare una singola risorsa è supportato solo per il gruppo di API `extensions/v1beta1` per ragioni storiche.{{< /note >}} + +{{% /capture %}} diff --git a/content/it/docs/tutorials/_index.md b/content/it/docs/tutorials/_index.md new file mode 100644 index 0000000000..cdd1c473e8 --- /dev/null +++ b/content/it/docs/tutorials/_index.md @@ -0,0 +1,75 @@ +--- +title: Tutorials +main_menu: true +weight: 60 +content_template: templates/concept +--- + +{{% capture overview %}} + +Questa sezione della documentazione di Kubernetes contiene i tutorials. +Un tutorial mostra come raggiungere un obiettivo più complesso di un singolo +[task](/docs/tasks/). Solitamente un tutorial ha diverse sezioni, ognuna delle quali +consiste in una sequenza di più task. +Prima di procedere con vari tutorial, raccomandiamo di aggiungere il +[Glossario](/docs/reference/glossary/) ai tuoi bookmark per riferimenti successivi. + +{{% /capture %}} + +{{% capture body %}} + +## Per cominciare + +* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) è un approfondito tutorial che aiuta a capire cosa è Kubernetes e che permette di testare in modo interattivo alcune semplici funzionalità di Kubernetes. + +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) + +* [Hello Minikube](/docs/tutorials/hello-minikube/) + +## Configurazione + +* [Configurare Redis utilizzando una ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) + +## Stateless Applications + +* [Esporre un External IP Address per permettere l'accesso alle applicazioni nel Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) + +* [Esempio: Rilasciare l'applicazione PHP Guestbook con Redis](/docs/tutorials/stateless-application/guestbook/) + +## Stateful Applications + +* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) + +* [Esempio: WordPress e MySQL con i PersistentVolumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) + +* [Esempio: Rilasciare Cassandra con i StatefulSets](/docs/tutorials/stateful-application/cassandra/) + +* [Eseguire ZooKeeper, un sistema distribuito CP](/docs/tutorials/stateful-application/zookeeper/) + +## CI/CD Pipelines + +* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview) + +* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2) + +* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3) + +* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4) + +## Clusters + +* [AppArmor](/docs/tutorials/clusters/apparmor/) + +## Servizi + +* [Utilizzare Source IP](/docs/tutorials/services/source-ip/) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Se sei interessato a scrivere un tutorial, vedi +[Utilizzare i Page Templates](/docs/home/contribute/page-templates/) +per informazioni su come creare una tutorial page e sul tutorial template. 
+ +{{% /capture %}} diff --git a/content/it/docs/tutorials/hello-minikube.md b/content/it/docs/tutorials/hello-minikube.md new file mode 100644 index 0000000000..80b9b64b11 --- /dev/null +++ b/content/it/docs/tutorials/hello-minikube.md @@ -0,0 +1,280 @@ +--- +title: Hello Minikube +content_template: templates/tutorial +weight: 5 +menu: + main: + title: "Cominciamo!" + weight: 10 + post: > +

Sei pronto a cominciare con Kubernetes? Crea un Kubernetes cluster ed esegui un'applicazione di esempio.

+card:
+  name: tutorials
+  weight: 10
+---
+
+{{% capture overview %}}
+
+Questo tutorial mostra come eseguire una semplice applicazione in Kubernetes
+utilizzando [Minikube](/docs/setup/learning-environment/minikube) e Katacoda.
+Katacoda permette di operare su un'installazione di Kubernetes dal tuo browser.
+
+{{< note >}}
+In alternativa, è possibile eseguire questo tutorial [installando minikube](/docs/tasks/tools/install-minikube/) localmente.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+* Rilasciare una semplice applicazione su Minikube.
+* Eseguire l'applicazione.
+* Visualizzare i log dell'applicazione.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+Questo tutorial fornisce una container image che utilizza NGINX per rispondere a tutte le richieste
+con un echo che visualizza i dati della richiesta stessa.
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+## Crea un Minikube cluster
+
+1. Fai click su **Launch Terminal**
+
+    {{< kat-button >}}
+
+    {{< note >}}Se hai installato Minikube localmente, esegui `minikube start`.{{< /note >}}
+
+2. Apri la console di Kubernetes nel browser:
+
+    ```shell
+    minikube dashboard
+    ```
+
+3. Katacoda environment only: In alto alla finestra del terminale, fai click sul segno più, e a seguire fai click su **Select port to view on Host 1**.
+
+4. Katacoda environment only: Inserisci `30000`, a seguire fai click su **Display Port**.
+
+## Crea un Deployment
+
+Un Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) è un gruppo di uno o più Containers,
+che sono uniti tra loro dal punto di vista amministrativo e che condividono lo stesso network.
+Il Pod in questo tutorial ha un solo Container. Un Kubernetes
+[*Deployment*](/docs/concepts/workloads/controllers/deployment/) monitora lo stato del Pod ed
+eventualmente provvede a farlo ripartire nel caso questo termini. L'uso dei Deployments è la
+modalità raccomandata per gestire la creazione e lo scaling dei Pods.
+
+
+1. Usa il comando `kubectl create` per creare un Deployment che gestisce un singolo Pod. Il Pod
+eseguirà un Container basato sulla Docker image specificata.
+
+    ```shell
+    kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+    ```
+
+2. Visualizza il Deployment:
+
+    ```shell
+    kubectl get deployments
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
+    hello-node   1/1     1            1           1m
+    ```
+
+3. Visualizza il Pod creato dal Deployment:
+
+    ```shell
+    kubectl get pods
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    NAME                          READY     STATUS    RESTARTS   AGE
+    hello-node-5f76cf6ccf-br9b5   1/1       Running   0          1m
+    ```
+
+4. Visualizza gli eventi del cluster Kubernetes:
+
+    ```shell
+    kubectl get events
+    ```
+
+5. Visualizza la configurazione di `kubectl`:
+
+    ```shell
+    kubectl config view
+    ```
+
+{{< note >}}Per maggiori informazioni sui comandi di `kubectl`, vedi [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}
+
+## Crea un Service
+
+Con le impostazioni di default, un Pod è accessibile solamente dagli indirizzi IP interni
+al Kubernetes cluster. Per far sì che il Container `hello-node` sia accessibile dall'esterno
+del Kubernetes virtual network, è necessario esporre il Pod utilizzando un
+Kubernetes [*Service*](/docs/concepts/services-networking/service/).
+
+1.
Esponi il Pod su internet utilizzando il comando `kubectl expose`:
+
+    ```shell
+    kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+    ```
+
+    Il flag `--type=LoadBalancer` indica la volontà di esporre il Service
+    all'esterno del Kubernetes cluster.
+
+2. Visualizza il Servizio appena creato:
+
+    ```shell
+    kubectl get services
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+    hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
+    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
+    ```
+
+    Nei cloud providers che supportano i servizi di tipo load balancers,
+    viene fornito un indirizzo IP pubblico per permettere l'accesso al Service. Su Minikube,
+    il service type `LoadBalancer` rende il Service accessibile attraverso il comando `minikube service`.
+
+3. Esegui il comando:
+
+    ```shell
+    minikube service hello-node
+    ```
+
+4. Katacoda environment only: Fai click sul segno più, e a seguire fai click su **Select port to view on Host 1**.
+
+5. Katacoda environment only: Fai attenzione al numero di 5 cifre visualizzato a fianco di `8080` nell'output del comando. Questo port number è generato casualmente e può essere diverso nel tuo caso. Inserisci il tuo port number nella textbox, e a seguire fai click su Display Port. Nell'esempio precedente, avresti scritto `30369`.
+
+    Questo apre una finestra nel browser dove l'applicazione visualizza l'echo delle richieste ricevute.
+
+## Attiva gli addons
+
+Minikube include un set di {{< glossary_tooltip text="addons" term_id="addons" >}} che possono essere attivati, disattivati o eseguiti nell'ambiente Kubernetes locale.
+
+1. Elenca gli addons disponibili:
+
+    ```shell
+    minikube addons list
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    addon-manager: enabled
+    dashboard: enabled
+    default-storageclass: enabled
+    efk: disabled
+    freshpod: disabled
+    gvisor: disabled
+    helm-tiller: disabled
+    ingress: disabled
+    ingress-dns: disabled
+    logviewer: disabled
+    metrics-server: disabled
+    nvidia-driver-installer: disabled
+    nvidia-gpu-device-plugin: disabled
+    registry: disabled
+    registry-creds: disabled
+    storage-provisioner: enabled
+    storage-provisioner-gluster: disabled
+    ```
+
+2. Attiva un addon, per esempio, `metrics-server`:
+
+    ```shell
+    minikube addons enable metrics-server
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    metrics-server was successfully enabled
+    ```
+
+3. Visualizza i Pods ed i Service creati in precedenza:
+
+    ```shell
+    kubectl get pod,svc -n kube-system
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    NAME                                   READY     STATUS    RESTARTS   AGE
+    pod/coredns-5644d7b6d9-mh9ll           1/1       Running   0          34m
+    pod/coredns-5644d7b6d9-pqd2t           1/1       Running   0          34m
+    pod/metrics-server-67fb648c5           1/1       Running   0          26s
+    pod/etcd-minikube                      1/1       Running   0          34m
+    pod/influxdb-grafana-b29w8             2/2       Running   0          26s
+    pod/kube-addon-manager-minikube        1/1       Running   0          34m
+    pod/kube-apiserver-minikube            1/1       Running   0          34m
+    pod/kube-controller-manager-minikube   1/1       Running   0          34m
+    pod/kube-proxy-rnlps                   1/1       Running   0          34m
+    pod/kube-scheduler-minikube            1/1       Running   0          34m
+    pod/storage-provisioner                1/1       Running   0          34m
+
+    NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+    service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
+    service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
+    service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
+    service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
+    ```
+
+4.
Disabilita `metrics-server`:
+
+    ```shell
+    minikube addons disable metrics-server
+    ```
+
+    L'output del comando è simile a:
+
+    ```
+    metrics-server was successfully disabled
+    ```
+
+## Clean up
+
+Adesso puoi procedere a fare clean up delle risorse che hai creato nel tuo cluster:
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+
+Eventualmente, puoi stoppare la Minikube virtual machine (VM):
+
+```shell
+minikube stop
+```
+
+Eventualmente, puoi cancellare la Minikube VM:
+
+```shell
+minikube delete
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Approfondisci la tua conoscenza dei [Deployments](/docs/concepts/workloads/controllers/deployment/).
+* Approfondisci la tua conoscenza di [Rilasciare applicazioni](/docs/tasks/run-application/run-stateless-application-deployment/).
+* Approfondisci la tua conoscenza dei [Services](/docs/concepts/services-networking/service/).
+
+{{% /capture %}}
diff --git a/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index f24ae129ec..cede5188bb 100644
--- a/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -48,59 +48,59 @@ min-kubernetes-server-version: 1.18

## 업그레이드할 버전 결정

-1. 최신의 안정 버전인 1.18을 찾는다.
+최신의 안정 버전인 1.18을 찾는다.

-   {{< tabs name="k8s_install_versions" >}}
-   {{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
+{{< tabs name="k8s_install_versions" >}}
+{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
    apt update
    apt-cache madison kubeadm
    # 목록에서 최신 버전 1.18을 찾는다
    # 1.18.x-00과 같아야 한다. 여기서 x는 최신 패치이다.
-   {{% /tab %}}
-   {{% tab name="CentOS, RHEL 또는 Fedora" %}}
+{{% /tab %}}
+{{% tab name="CentOS, RHEL 또는 Fedora" %}}
    yum list --showduplicates kubeadm --disableexcludes=kubernetes
    # 목록에서 최신 버전 1.18을 찾는다
    # 1.18.x-0과 같아야 한다. 여기서 x는 최신 패치이다.
-   {{% /tab %}}
-   {{< /tabs >}}
+{{% /tab %}}
+{{< /tabs >}}

## 컨트롤 플레인 노드 업그레이드

### 첫 번째 컨트롤 플레인 노드 업그레이드

-1. 첫 번째 컨트롤 플레인 노드에서 kubeadm을 업그레이드한다.
+- 첫 번째 컨트롤 플레인 노드에서 kubeadm을 업그레이드한다.

-   {{< tabs name="k8s_install_kubeadm_first_cp" >}}
-   {{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
+{{< tabs name="k8s_install_kubeadm_first_cp" >}}
+{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
    # 1.18.x-00에서 x를 최신 패치 버전으로 바꾼다.
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
    apt-mark hold kubeadm
- + -
    # apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
    apt-get update && \
    apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
-   {{% /tab %}}
-   {{% tab name="CentOS, RHEL 또는 Fedora" %}}
+{{% /tab %}}
+{{% tab name="CentOS, RHEL 또는 Fedora" %}}
    # 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다.
    yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
-   {{% /tab %}}
-   {{< /tabs >}}
+{{% /tab %}}
+{{< /tabs >}}

-1. 다운로드하려는 버전이 잘 받아졌는지 확인한다.
+- 다운로드하려는 버전이 잘 받아졌는지 확인한다.

    ```shell
    kubeadm version
    ```

-1. 컨트롤 플레인 노드를 드레인(drain)한다.
+- 컨트롤 플레인 노드를 드레인(drain)한다.

    ```shell
    # <cp-node-name>을 컨트롤 플레인 노드 이름으로 바꾼다.
    kubectl drain <cp-node-name> --ignore-daemonsets
    ```

-1. 컨트롤 플레인 노드에서 다음을 실행한다.
+- 컨트롤 플레인 노드에서 다음을 실행한다.

    ```shell
    sudo kubeadm upgrade plan
    ```

@@ -143,13 +143,13 @@ min-kubernetes-server-version: 1.18

    이 명령은 클러스터를 업그레이드할 수 있는지를 확인하고, 업그레이드할 수 있는 버전을 가져온다.

-   {{< note >}}
-   또한 `kubeadm upgrade` 는 이 노드에서 관리하는 인증서를 자동으로 갱신한다.
-   인증서 갱신을 하지 않으려면 `--certificate-renewal=false` 플래그를 사용할 수 있다.
-   자세한 내용은 [인증서 관리 가이드](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)를 참고한다.
-   {{< /note >}}
+{{< note >}}
+또한 `kubeadm upgrade` 는 이 노드에서 관리하는 인증서를 자동으로 갱신한다.
+인증서 갱신을 하지 않으려면 `--certificate-renewal=false` 플래그를 사용할 수 있다.
+자세한 내용은 [인증서 관리 가이드](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)를 참고한다.
+{{< /note >}}

-1. 업그레이드할 버전을 선택하고, 적절한 명령을 실행한다. 예를 들면 다음과 같다.
+- 업그레이드할 버전을 선택하고, 적절한 명령을 실행한다. 예를 들면 다음과 같다.

    ```shell
    # 이 업그레이드를 위해 선택한 패치 버전으로 x를 바꾼다.

@@ -238,7 +238,7 @@ min-kubernetes-server-version: 1.18
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
    ```

-1. CNI 제공자 플러그인을 수동으로 업그레이드한다.
+- CNI 제공자 플러그인을 수동으로 업그레이드한다.

    CNI(컨테이너 네트워크 인터페이스) 제공자는 자체 업그레이드 지침을 따를 수 있다.
    [애드온](/docs/concepts/cluster-administration/addons/) 페이지에서
@@ -246,7 +246,7 @@ min-kubernetes-server-version: 1.18
    CNI 제공자가 데몬셋(DaemonSet)으로 실행되는 경우 추가 컨트롤 플레인 노드에는 이 단계가 필요하지 않다.

-1. 컨트롤 플레인 노드에 적용된 cordon을 해제한다.
+- 컨트롤 플레인 노드에 적용된 cordon을 해제한다.

    ```shell
    # <cp-node-name>을 컨트롤 플레인 노드 이름으로 바꾼다.
@@ -255,46 +255,46 @@ min-kubernetes-server-version: 1.18

### 추가 컨트롤 플레인 노드 업그레이드

-1. 첫 번째 컨트롤 플레인 노드와 동일하지만 다음을 사용한다.
+첫 번째 컨트롤 플레인 노드와 동일하지만 다음을 사용한다.

-   ```
-   sudo kubeadm upgrade node
-   ```
+```
+sudo kubeadm upgrade node
+```

-   아래 명령 대신 위의 명령을 사용한다.
+아래 명령 대신 위의 명령을 사용한다.

-   ```
-   sudo kubeadm upgrade apply
-   ```
+```
+sudo kubeadm upgrade apply
+```

-   또한 `sudo kubeadm upgrade plan` 은 필요하지 않다.
+또한 `sudo kubeadm upgrade plan` 은 필요하지 않다.

### kubelet과 kubectl 업그레이드

-1. 모든 컨트롤 플레인 노드에서 kubelet 및 kubectl을 업그레이드한다.
+모든 컨트롤 플레인 노드에서 kubelet 및 kubectl을 업그레이드한다.

-   {{< tabs name="k8s_install_kubelet" >}}
-   {{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
+{{< tabs name="k8s_install_kubelet" >}}
+{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
    # 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
    apt-mark hold kubelet kubectl
- + -
    # apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
    apt-get update && \
    apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
-   {{% /tab %}}
-   {{% tab name="CentOS, RHEL 또는 Fedora" %}}
+{{% /tab %}}
+{{% tab name="CentOS, RHEL 또는 Fedora" %}}
    # 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
    yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
-   {{% /tab %}}
-   {{< /tabs >}}
+{{% /tab %}}
+{{< /tabs >}}

-1. kubelet을 다시 시작한다.
+kubelet을 다시 시작한다.

-   ```shell
-   sudo systemctl restart kubelet
-   ```
+```shell
+sudo systemctl restart kubelet
+```

## 워커 노드 업그레이드

@@ -303,28 +303,28 @@ min-kubernetes-server-version: 1.18

### kubeadm 업그레이드

-1. 모든 워커 노드에서 kubeadm을 업그레이드한다.
+- 모든 워커 노드에서 kubeadm을 업그레이드한다.

-   {{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
-   {{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
+{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
+{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
    # 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
    apt-mark hold kubeadm
- + -
    # apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
    apt-get update && \
    apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
-   {{% /tab %}}
-   {{% tab name="CentOS, RHEL 또는 Fedora" %}}
+{{% /tab %}}
+{{% tab name="CentOS, RHEL 또는 Fedora" %}}
    # 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
    yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
-   {{% /tab %}}
-   {{< /tabs >}}
+{{% /tab %}}
+{{< /tabs >}}

### 노드 드레인

-1. 스케줄 불가능(unschedulable)으로 표시하고 워크로드를 축출하여 유지 보수할 노드를 준비한다.
+- 스케줄 불가능(unschedulable)으로 표시하고 워크로드를 축출하여 유지 보수할 노드를 준비한다.

    ```shell
    # <node-to-drain>을 드레이닝하려는 노드 이름으로 바꾼다.
@@ -349,26 +349,26 @@ min-kubernetes-server-version: 1.18

### kubelet과 kubectl 업그레이드

-1. 모든 워커 노드에서 kubelet 및 kubectl을 업그레이드한다.
+- 모든 워커 노드에서 kubelet 및 kubectl을 업그레이드한다.

-   {{< tabs name="k8s_kubelet_and_kubectl" >}}
-   {{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
+{{< tabs name="k8s_kubelet_and_kubectl" >}}
+{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
    # 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
    apt-mark hold kubelet kubectl
- + -
    # apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
    apt-get update && \
    apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
-   {{% /tab %}}
-   {{% tab name="CentOS, RHEL 또는 Fedora" %}}
+{{% /tab %}}
+{{% tab name="CentOS, RHEL 또는 Fedora" %}}
    # 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
    yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
-   {{% /tab %}}
-   {{< /tabs >}}
+{{% /tab %}}
+{{< /tabs >}}

-1. kubelet을 다시 시작한다.
+- kubelet을 다시 시작한다.

    ```shell
    sudo systemctl restart kubelet
@@ -376,7 +376,7 @@ min-kubernetes-server-version: 1.18

### 노드에 적용된 cordon 해제

-1. 스케줄 가능(schedulable)으로 표시하여 노드를 다시 온라인 상태로 만든다.
+- 스케줄 가능(schedulable)으로 표시하여 노드를 다시 온라인 상태로 만든다.

    ```shell
    # <node-to-drain>을 노드의 이름으로 바꾼다.
diff --git a/content/pt/docs/concepts/configuration/_index.md b/content/pt/docs/concepts/configuration/_index.md
new file mode 100644
index 0000000000..f833471a5c
--- /dev/null
+++ b/content/pt/docs/concepts/configuration/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Configuração"
+weight: 80
+---
+
diff --git a/content/pt/docs/concepts/configuration/pod-overhead.md b/content/pt/docs/concepts/configuration/pod-overhead.md
new file mode 100644
index 0000000000..5a18b11f09
--- /dev/null
+++ b/content/pt/docs/concepts/configuration/pod-overhead.md
@@ -0,0 +1,197 @@
+---
+reviewers:
+- dchen1107
+- egernst
+- tallclair
+title: Pod Overhead
+content_template: templates/concept
+weight: 50
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+Quando executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes
+recursos são adicionais aos recursos necessários para executar o(s) _container(s)_ dentro do Pod.
+Sobrecarga de Pod, do inglês _Pod Overhead_, é uma funcionalidade que serve para contabilizar os recursos consumidos pela
+infraestrutura do Pod para além das solicitações e limites do _container_.
+
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+No Kubernetes, a sobrecarga de _Pods_ é definida no momento da
+[admissão](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks),
+de acordo com a sobrecarga associada à
+[RuntimeClass](/docs/concepts/containers/runtime-class/) do _Pod_.
+
+Quando é ativada a Sobrecarga de Pod, a sobrecarga é considerada adicionalmente à soma das
+solicitações de recursos do _container_ ao agendar um Pod. Semelhantemente, o _kubelet_
+incluirá a sobrecarga do Pod ao dimensionar o cgroup do Pod e ao
+executar a classificação de despejo do Pod.
+
+## Possibilitando a Sobrecarga do Pod {#set-up}
+
+Terá de garantir que o [portão de funcionalidade](/docs/reference/command-line-tools-reference/feature-gates/)
+`PodOverhead` está ativo (está ativo por defeito a partir da versão 1.18)
+em todo o cluster, e que seja utilizada uma `RuntimeClass` que defina o campo `overhead`.
+
+## Exemplo de uso
+
+Para usar a funcionalidade PodOverhead, é necessária uma RuntimeClass que defina o campo `overhead`.
+Por exemplo, poderia usar a definição da RuntimeClass abaixo com um _container runtime_ virtualizado
+que usa cerca de 120MiB por Pod para a máquina virtual e o sistema operativo convidado:
+
+```yaml
+---
+kind: RuntimeClass
+apiVersion: node.k8s.io/v1beta1
+metadata:
+  name: kata-fc
+handler: kata-fc
+overhead:
+  podFixed:
+    memory: "120Mi"
+    cpu: "250m"
+```
+
+As cargas de trabalho que são criadas e que especificam o manipulador RuntimeClass `kata-fc` irão
+ter em conta a sobrecarga de memória e CPU nos cálculos da quota de recursos, no agendamento de nós,
+assim como no dimensionamento do cgroup do Pod.
+
+Considere executar a seguinte carga de trabalho de exemplo, test-pod:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pod
+spec:
+  runtimeClassName: kata-fc
+  containers:
+  - name: busybox-ctr
+    image: busybox
+    stdin: true
+    tty: true
+    resources:
+      limits:
+        cpu: 500m
+        memory: 100Mi
+  - name: nginx-ctr
+    image: nginx
+    resources:
+      limits:
+        cpu: 1500m
+        memory: 100Mi
+```
+
+No momento da admissão, o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass
+atualiza o _PodSpec_ da carga de trabalho de forma a incluir o `overhead` como descrito na RuntimeClass. Se o _PodSpec_ já tiver este campo definido,
+o _Pod_ será rejeitado. No exemplo dado, como apenas o nome da RuntimeClass é especificado, o controlador de admissão muda o _Pod_ de forma a
+incluir um `overhead`.
+
+Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atualizado:
+
+```bash
+kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
+```
+
+O output é:
+```
+map[cpu:250m memory:120Mi]
+```
+
+Se for definido um _ResourceQuota_, a soma dos pedidos dos _containers_ assim como o campo `overhead` são contados.
+
+Quando o kube-scheduler está a decidir que nó deve executar um novo _Pod_, o agendador considera o `overhead` do _Pod_,
+assim como a soma de pedidos aos _containers_ para esse _Pod_. Para este exemplo, o agendador adiciona os
+pedidos e a sobrecarga, e depois procura um nó com 2.25 CPU e 320 MiB de memória disponíveis.
+
+Assim que um _Pod_ é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
+para o _Pod_. É dentro deste _pod_ que o _container runtime_ subjacente vai criar _containers_.
+
+Se o recurso tiver um limite definido para cada _container_ (QoS garantida ou _Burstable QoS_ com limites definidos),
+o kubelet definirá um limite superior para o cgroup do _pod_ associado a esse recurso (cpu.cfs_quota_us para CPU
+e memory.limit_in_bytes para memória). Este limite superior é baseado na soma dos limites dos _containers_ mais o `overhead`
+definido no _PodSpec_.
+
+Para o CPU, se o _Pod_ for de QoS garantida ou _Burstable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos
+pedidos aos _containers_ mais o `overhead` definido no _PodSpec_.
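+Antes de verificar no cluster, vale a pena explicitar a aritmética aplicada neste exemplo (todos os valores vêm dos manifestos acima):
+
+```
+CPU:     500m  + 1500m (containers) + 250m  (overhead) = 2250m
+Memória: 100Mi + 100Mi (containers) + 120Mi (overhead) = 320Mi
+```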
+
+Olhando para o nosso exemplo, verifique os pedidos ao _container_ para a carga de trabalho:
+```bash
+kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
+```
+
+O total de pedidos ao _container_ são 2000m CPU e 200MiB de memória:
+```
+map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
+```
+
+Verifique isto contra o que é observado pelo nó:
+```bash
+kubectl describe node | grep test-pod -B2
+```
+
+O output mostra que 2250m CPU e 320MiB de memória são solicitados, o que inclui o _PodOverhead_:
+```
+  Namespace    Name       CPU Requests  CPU Limits   Memory Requests  Memory Limits  AGE
+  ---------    ----       ------------  ----------   ---------------  -------------  ---
+  default      test-pod   2250m (56%)   2250m (56%)  320Mi (1%)       320Mi (1%)     36m
+```
+
+## Verificar os limites cgroup do Pod
+
+Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
+é usado no nó, fornecendo uma CLI para _container runtimes_ compatíveis com CRI. Isto é um
+exemplo avançado para mostrar o comportamento do _PodOverhead_; não é esperado que os utilizadores precisem de verificar
+cgroups diretamente no nó.
+
+Primeiro, no nó em particular, determine o identificador do _Pod_:
+
+```bash
+# Execute no nó onde o Pod está agendado
+POD_ID="$(sudo crictl pods --name test-pod -q)"
+```
+
+A partir disto, pode determinar o caminho do cgroup para o _Pod_:
+```bash
+# Execute no nó onde o Pod está agendado
+sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
+```
+
+O caminho do cgroup resultante inclui o _container_ `pause` do _Pod_. O cgroup ao nível do _Pod_ está um diretório acima.
+```
+        "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
+```
+
+Neste caso específico, o caminho do cgroup do pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração do cgroup ao nível do _Pod_ para a memória:
+```bash
+# Execute no nó onde o Pod está agendado.
+# Mude também o nome do cgroup de forma a combinar com o cgroup alocado ao pod.
+ cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
+```
+
+Isto é 320 MiB, como esperado:
+```
+335544320
+```
+
+### Observabilidade
+
+Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
+para ajudar a identificar quando o _PodOverhead_ está a ser utilizado e para ajudar a observar a estabilidade das cargas de trabalho
+em execução com uma sobrecarga (_Overhead_) definida. Esta funcionalidade não está disponível na versão 1.9 do kube-state-metrics,
+mas é esperada num próximo _release_. Os utilizadores necessitarão, entretanto, de construir o kube-state-metrics a partir da fonte.
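+Como esboço mínimo e hipotético (os nomes das _labels_ abaixo são uma suposição e podem variar consoante a versão do kube-state-metrics), uma consulta PromQL poderia agregar a sobrecarga declarada por _namespace_:
+
+```
+# esboço hipotético: soma da sobrecarga de CPU declarada, agrupada por namespace
+sum(kube_pod_overhead{resource="cpu"}) by (namespace)
+```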
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [RuntimeClass](/docs/concepts/containers/runtime-class/)
+* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
+
+{{% /capture %}}
diff --git a/content/ru/docs/tasks/configure-pod-container/_index.md b/content/ru/docs/tasks/configure-pod-container/_index.md
new file mode 100755
index 0000000000..f4f9e8cb05
--- /dev/null
+++ b/content/ru/docs/tasks/configure-pod-container/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Настройка Pod'ов и контейнеров"
+weight: 20
+---
+
diff --git a/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
new file mode 100644
index 0000000000..7d29c427e8
--- /dev/null
+++ b/content/ru/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -0,0 +1,341 @@
+---
+title: Настройка Liveness, Readiness и Startup проб
+content_template: templates/task
+weight: 110
+---
+
+{{% capture overview %}}
+
+На этой странице рассказывается, как настроить liveness, readiness и startup пробы для контейнеров.
+
+[Kubelet](/docs/admin/kubelet/) использует liveness пробы, чтобы понять,
+когда контейнер нужно перезапустить.
+Например, liveness проба может обнаружить взаимоблокировку (deadlock),
+когда приложение запущено, но не может продолжать работу.
+Перезапуск контейнера в таком состоянии помогает сделать приложение
+более доступным, несмотря на баги.
+
+Kubelet использует readiness пробы, чтобы узнать,
+готов ли контейнер принимать трафик.
+Pod считается готовым, когда все его контейнеры готовы.
+
+Одно из применений такого сигнала - контроль, какие Pod'ы будут использованы
+в качестве бэкенда для сервиса.
+Пока Pod не в статусе ready, он будет исключён из балансировщиков нагрузки сервиса.
+
+Kubelet использует startup пробы, чтобы понять, когда приложение в контейнере было запущено.
+Если такая проба настроена, она отключает liveness и readiness проверки до тех пор, пока сама не завершится успешно, гарантируя, что эти проверки не помешают запуску приложения.
+Это может быть использовано для проверки работоспособности медленно стартующих контейнеров,
+чтобы kubelet не убил их прежде, чем они будут запущены.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Определение liveness команды
+
+Многие приложения, работающие в течение длительного времени, ломаются
+и могут быть восстановлены только перезапуском.
+Kubernetes предоставляет liveness пробы, чтобы обнаруживать и исправлять такие ситуации.
+
+В этом упражнении вы создадите Pod, который запускает контейнер, основанный на образе `k8s.gcr.io/busybox`. Конфигурационный файл для Pod'а:
+
+{{< codenew file="pods/probe/exec-liveness.yaml" >}}
+
+В конфигурационном файле вы можете видеть, что Pod состоит из одного `Container`.
+Поле `periodSeconds` определяет, что kubelet должен производить liveness
+пробы каждые 5 секунд. Поле `initialDelaySeconds` говорит kubelet'у, что он должен ждать 5 секунд перед первой пробой. Для проведения пробы
+kubelet исполняет команду `cat /tmp/healthy` в целевом контейнере.
+Если команда успешна, она возвращает 0, и kubelet считает контейнер живым и здоровым.
+Если команда возвращает ненулевое значение, kubelet убивает и перезапускает контейнер.
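+Минимальный набросок для проверки этой логики вручную (предполагается, что команда выполняется внутри контейнера, например через `kubectl exec`):
+
+```shell
+# та же команда, что исполняет kubelet, и её код возврата
+cat /tmp/healthy
+echo $?   # 0 - проба успешна; любое другое значение - контейнер будет перезапущен
+```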
+
+Когда контейнер запускается, он исполняет команду
+
+```shell
+/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
+```
+
+В течение первых 30 секунд жизни контейнера существует файл `/tmp/healthy`.
+Поэтому в течение первых 30 секунд команда `cat /tmp/healthy` возвращает код успеха. После 30 секунд `cat /tmp/healthy` возвращает код ошибки.
+
+Создание Pod:
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
+```
+
+В течение 30 секунд посмотрим события Pod:
+
+```shell
+kubectl describe pod liveness-exec
+```
+
+Вывод команды показывает, что пока ни одна liveness проба не провалилась:
+
+```
+FirstSeen LastSeen Count From            SubobjectPath               Type   Reason    Message
+--------- -------- ----- ----            -------------               ------ ------    -------
+24s       24s      1     {default-scheduler }                        Normal Scheduled Successfully assigned liveness-exec to worker0
+23s       23s      1     {kubelet worker0} spec.containers{liveness} Normal Pulling   pulling image "k8s.gcr.io/busybox"
+23s       23s      1     {kubelet worker0} spec.containers{liveness} Normal Pulled    Successfully pulled image "k8s.gcr.io/busybox"
+23s       23s      1     {kubelet worker0} spec.containers{liveness} Normal Created   Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
+23s       23s      1     {kubelet worker0} spec.containers{liveness} Normal Started   Started container with docker id 86849c15382e
+```
+
+После 35 секунд посмотрим события Pod снова:
+
+```shell
+kubectl describe pod liveness-exec
+```
+
+Внизу вывода появились сообщения, показывающие, что liveness
+проба провалилась, и контейнер был убит и пересоздан.
+
+```
+FirstSeen LastSeen Count From            SubobjectPath               Type    Reason    Message
+--------- -------- ----- ----            -------------               ------  ------    -------
+37s       37s      1     {default-scheduler }                        Normal  Scheduled Successfully assigned liveness-exec to worker0
+36s       36s      1     {kubelet worker0} spec.containers{liveness} Normal  Pulling   pulling image "k8s.gcr.io/busybox"
+36s       36s      1     {kubelet worker0} spec.containers{liveness} Normal  Pulled    Successfully pulled image "k8s.gcr.io/busybox"
+36s       36s      1     {kubelet worker0} spec.containers{liveness} Normal  Created   Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
+36s       36s      1     {kubelet worker0} spec.containers{liveness} Normal  Started   Started container with docker id 86849c15382e
+2s        2s       1     {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
+```
+
+Подождите ещё 30 секунд и убедитесь, что контейнер был перезапущен:
+
+```shell
+kubectl get pod liveness-exec
+```
+
+Вывод команды показывает, что значение `RESTARTS` увеличилось на 1:
+
+```
+NAME            READY     STATUS    RESTARTS   AGE
+liveness-exec   1/1       Running   1          1m
+```
+
+## Определение liveness HTTP запроса
+
+Другой вид liveness пробы использует запрос HTTP GET. Ниже представлен файл конфигурации для Pod, который запускает контейнер, основанный на образе `k8s.gcr.io/liveness`.
+
+{{< codenew file="pods/probe/http-liveness.yaml" >}}
+
+В конфигурационном файле вы можете видеть Pod с одним контейнером.
+Поле `periodSeconds` определяет, что kubelet должен производить liveness
+пробы каждые 3 секунды. Поле `initialDelaySeconds` сообщает kubelet'у, что он должен ждать 3 секунды перед проведением первой пробы. Для проведения пробы
+kubelet отправляет запрос HTTP GET на сервер, который запущен в контейнере и слушает порт 8080. Если обработчик пути `/healthz` на сервере
+возвращает код успеха, kubelet рассматривает контейнер как живой и здоровый.
Если обработчик возвращает код ошибки, kubelet убивает и перезапускает контейнер.
+
+Любой код, больший или равный 200 и меньший 400, означает успех. Любой другой код интерпретируется как ошибка.
+
+Вы можете посмотреть исходные коды сервера в
+[server.go](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/test/images/agnhost/liveness/server.go).
+
+В течение первых 10 секунд жизни контейнера обработчик `/healthz`
+возвращает статус 200. После этого обработчик возвращает статус 500.
+
+```go
+http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
+    duration := time.Now().Sub(started)
+    if duration.Seconds() > 10 {
+        w.WriteHeader(500)
+        w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
+    } else {
+        w.WriteHeader(200)
+        w.Write([]byte("ok"))
+    }
+})
+```
+
+Kubelet начинает выполнять health checks через 3 секунды после старта контейнера.
+Таким образом, первая пара проверок будет успешной. Но через 10 секунд health
+checks начнут проваливаться, и kubelet убьёт и перезапустит контейнер.
+
+Чтобы попробовать HTTP liveness проверку, создайте Pod:
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml
+```
+
+Через 10 секунд посмотрите события Pod, чтобы проверить, что liveness проба провалилась и контейнер перезапустился:
+
+```shell
+kubectl describe pod liveness-http
+```
+
+В релизах до v1.13 (включая v1.13), если переменная окружения
+`http_proxy` (или `HTTP_PROXY`) определена на node, где запущен Pod,
+HTTP liveness проба использует этот прокси.
+В версиях после v1.13 настройка локального HTTP-прокси через переменные окружения не влияет на HTTP liveness пробу.
+
+## Определение TCP liveness пробы
+
+Третий тип liveness проб использует TCP сокет. С этой конфигурацией
+kubelet будет пытаться открыть сокет к вашему контейнеру на определённый порт.
+Если он сможет установить соединение, контейнер считается здоровым; если нет, проба считается проваленной.
+
+{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
+
+Как вы можете видеть, конфигурация TCP проверок довольно похожа на HTTP проверки.
+Этот пример использует обе пробы - readiness и liveness. Kubelet будет отправлять первую readiness пробу через 5 секунд после старта контейнера. Он будет пытаться соединиться с `goproxy` контейнером на порт 8080. Если проба успешна, Pod
+будет помечен как ready. Kubelet будет продолжать запускать эту проверку каждые 10 секунд.
+
+В дополнение к readiness пробе, конфигурация включает liveness пробу.
+Kubelet запустит первую liveness пробу через 15 секунд после старта контейнера. Аналогично readiness пробе, он будет пытаться соединиться с контейнером `goproxy` на порт 8080. Если liveness проба провалится, контейнер будет перезапущен.
+
+Чтобы попробовать TCP liveness проверку, создайте Pod:
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
+```
+
+Через 15 секунд посмотрите события Pod'а, чтобы проверить liveness пробу:
+
+```shell
+kubectl describe pod goproxy
+```
+
+## Использование именованных портов
+
+Вы можете использовать именованный порт
+[ContainerPort](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerport-v1-core)
+для HTTP или TCP liveness проверок:
+
+```yaml
+ports:
+- name: liveness-port
+  containerPort: 8080
+  hostPort: 8080
+
+livenessProbe:
+  httpGet:
+    path: /healthz
+    port: liveness-port
+```
+
+## Защита медленно запускающихся контейнеров со startup пробами {#define-startup-probes}
+
+Иногда приходится иметь дело со старыми приложениями, которым может требоваться дополнительное время
+на первую инициализацию при запуске.
+В таких случаях бывает сложно настроить параметры liveness пробы без ущерба для скорости реакции на deadlock'и, для выявления которых как раз и нужна liveness проба.
+Хитрость заключается в том, чтобы настроить startup пробу с такой же командой, что и HTTP или TCP проверка, но `failureThreshold * periodSeconds` должно быть достаточным, чтобы покрыть наихудшее время старта.
+
+Итак, предыдущий пример будет выглядеть так:
+
+```yaml
+ports:
+- name: liveness-port
+  containerPort: 8080
+  hostPort: 8080
+
+livenessProbe:
+  httpGet:
+    path: /healthz
+    port: liveness-port
+  failureThreshold: 1
+  periodSeconds: 10
+
+startupProbe:
+  httpGet:
+    path: /healthz
+    port: liveness-port
+  failureThreshold: 30
+  periodSeconds: 10
+```
+
+Благодаря startup пробе, приложению дано максимум 5 минут
+(30 * 10 = 300 сек.) на завершение старта.
+Как только startup проба завершится успешно один раз, liveness проба начинает контролировать взаимоблокировки контейнера.
+Если startup проба так и не завершится успешно, контейнер будет убит через 300 секунд, и к нему будет применена `restartPolicy` pod'а.
+
+## Определение readiness проб
+
+Иногда приложения временно не могут обслуживать трафик.
+Например, приложение может требовать загрузки огромных данных
+или конфигурационных файлов во время старта, или зависит от внешних сервисов после старта.
+В таких случаях вы не хотите убивать приложение, но и
+отправлять ему клиентские запросы тоже не хотите.
+Kubernetes предоставляет
+readiness пробы для определения и нивелирования таких ситуаций. Pod, контейнеры которого сообщают о своей неготовности, не получает трафик через Kubernetes Services.
+
+{{< note >}}
+Readiness пробы запускаются на контейнере в течение всего его жизненного цикла.
+{{< /note >}}
+
+Readiness пробы настраиваются аналогично liveness пробам. Единственная разница в использовании поля `readinessProbe` вместо `livenessProbe`.
+
+```yaml
+readinessProbe:
+  exec:
+    command:
+    - cat
+    - /tmp/healthy
+  initialDelaySeconds: 5
+  periodSeconds: 5
+```
+
+Конфигурация HTTP и TCP readiness проб также идентична
+конфигурации liveness проб.
+
+Readiness и liveness пробы могут быть использованы одновременно на одном контейнере.
+Использование обеих проб гарантирует, что трафик не попадёт в контейнер, пока он не готов, и что контейнер будет перезапущен, если перестанет работать.
+
+## Конфигурация проб
+
+{{< comment >}}
+Eventually, some of this section could be moved to a concept topic.
+
+## Configure probes
+
+{{< comment >}}
+Eventually, some of this section could be moved to a concept topic.
+{{< /comment >}}
+
+[Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) have a number of fields that
+you can use to more precisely control the behavior of liveness and
+readiness checks:
+
+* `initialDelaySeconds`: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
+* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10
+seconds. Minimum value is 1.
+* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults to
+1 second. Minimum value is 1.
+* `successThreshold`: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1.
+Must be 1 for liveness. Minimum value is 1.
+* `failureThreshold`: When a Pod starts and the probe fails, Kubernetes will try `failureThreshold` times before giving up. Giving up in case of a liveness probe means restarting the container. In case of a readiness probe, the Pod will be marked Unready.
+Defaults to 3. Minimum value is 1.
+
+[HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
+have additional fields that can be set on `httpGet`:
+
+* `host`: Host name to connect to, defaults to the pod IP. You probably want to set the "Host" header in httpHeaders instead.
+* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
+* `path`: Path to access on the HTTP server.
+* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
+* `port`: Name or number of the port to access on the container. The number must be in the range 1 to 65535.
+
+For an HTTP probe, the kubelet sends an HTTP request to the specified path and
+port to perform the check. The kubelet sends the probe to the pod's IP address,
+unless the address is overridden by the optional `host` field in `httpGet`. If the `scheme` field is set to `HTTPS`, the kubelet sends an HTTPS request, skipping certificate verification. In most scenarios, you do not want to set the `host` field.
+Here's one scenario where you would. Suppose the container listens on 127.0.0.1 and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should be set to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.
+
+For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which means that you can not use a service name in the `host` parameter since the kubelet is unable to resolve it.
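+
+As a rough sketch of how these fields combine in a single probe (the endpoint, port, and timing values here are illustrative assumptions, not taken from the examples above):
+
+```yaml
+livenessProbe:
+  httpGet:
+    path: /healthz
+    port: 8080
+    scheme: HTTP
+  initialDelaySeconds: 10
+  periodSeconds: 5
+  timeoutSeconds: 2
+  successThreshold: 1
+  failureThreshold: 3
+```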
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Learn more about
+[Container probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
+
+You can also read the API references for:
+
+* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
+* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
+* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
+
+{{% /capture %}}

diff --git a/content/ru/examples/pods/probe/exec-liveness.yaml b/content/ru/examples/pods/probe/exec-liveness.yaml
new file mode 100644
index 0000000000..07bf75f85c
--- /dev/null
+++ b/content/ru/examples/pods/probe/exec-liveness.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    test: liveness
+  name: liveness-exec
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/busybox
+    args:
+    - /bin/sh
+    - -c
+    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
+    livenessProbe:
+      exec:
+        command:
+        - cat
+        - /tmp/healthy
+      initialDelaySeconds: 5
+      periodSeconds: 5
diff --git a/content/ru/examples/pods/probe/http-liveness.yaml b/content/ru/examples/pods/probe/http-liveness.yaml
new file mode 100644
index 0000000000..670af18399
--- /dev/null
+++ b/content/ru/examples/pods/probe/http-liveness.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    test: liveness
+  name: liveness-http
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/liveness
+    args:
+    - /server
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+        httpHeaders:
+        - name: Custom-Header
+          value: Awesome
+      initialDelaySeconds: 3
+      periodSeconds: 3
diff --git a/content/ru/examples/pods/probe/tcp-liveness-readiness.yaml b/content/ru/examples/pods/probe/tcp-liveness-readiness.yaml
new file mode 100644
index 0000000000..08fb77ff0f
--- /dev/null
+++ b/content/ru/examples/pods/probe/tcp-liveness-readiness.yaml
@@ -0,0 +1,22 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: goproxy
+  labels:
+    app: goproxy
+spec:
+  containers:
+  - name: goproxy
+    image: k8s.gcr.io/goproxy:0.1
+    ports:
+    - containerPort: 8080
+    readinessProbe:
+      tcpSocket:
+        port: 8080
+      initialDelaySeconds: 5
+      periodSeconds: 10
+    livenessProbe:
+      tcpSocket:
+        port: 8080
+      initialDelaySeconds: 15
+      periodSeconds: 20
diff --git a/content/ru/includes/task-tutorial-prereqs.md b/content/ru/includes/task-tutorial-prereqs.md
new file mode 100644
index 0000000000..0190567b09
--- /dev/null
+++ b/content/ru/includes/task-tutorial-prereqs.md
@@ -0,0 +1,8 @@
+You need to have a Kubernetes cluster, and the kubectl command-line tool must
+be configured to communicate with your cluster.
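+
+A quick way to verify that kubectl can reach your cluster (a minimal illustration):
+
+```shell
+kubectl cluster-info
+```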
+
+If you do not already have a cluster, you can create one by using
+[Minikube](/docs/setup/learning-environment/minikube/),
+or you can use one of these Kubernetes playgrounds:
+
+* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground)
+* [Play with Kubernetes](http://labs.play-with-k8s.com/)
diff --git a/content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md b/content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md
new file mode 100644
index 0000000000..02c0930bdd
--- /dev/null
+++ b/content/zh/blog/_posts/2015-08-00-Weekly-Kubernetes-Community-Hangout.md
@@ -0,0 +1,190 @@
+
+
+---
+title: " Kubernetes Community Weekly Hangout Notes - July 31, 2015 "
+date: 2015-08-04
+slug: weekly-kubernetes-community-hangout
+url: /blog/2015/08/Weekly-Kubernetes-Community-Hangout
+---
+
+
+
+Every week, the Kubernetes contributor community convenes virtually over a Google Hangout. We want anyone who is interested to know what this forum discusses.
+
+Here are the notes from today's meeting:
+
+
+* Private registry demo - Muhammed
+
+  * Run docker-registry as an RC/Pod/Service
+
+  * Run a proxy on every node
+
+  * Access it as localhost:5000
+
+  * Discussion:
+
+    * Should we back it by GCS or S3 when possible?
+
+    * Run a real registry backed by $object_store on each node
+
+    * DNS instead of localhost?
+
+    * Decompose the docker image string?
+
+    * More like a DNS policy?
+
+
+* Running large clusters - Joe
+
+  * Samsung is keen to see large scale, O(1000)
+
+  * Starting on AWS
+
+  * RH is also interested - a test plan is needed
+
+  * Plan for next week: discuss a working group
+
+  * If you are interested in joining the conversation on cluster scalability, email [joe@0xBEDA.com][4]
+
+
+* Resource API proposal - Clayton
+
+  * New stuff needs more resource information
+
+  * Proposal for a resource API - ask the apiserver for information about pods
+
+  * Send feedback to: #11951
+
+  * Discussion about snapshots, time series, and aggregation
+
+
+* Containerized kubelet - Clayton
+
+  * Open pull request
+
+  * Docker mount propagation - RH has patches
+
+  * Big issues around whole-system bootstrapping
+
+  * Dual: bootstrap docker / system docker
+
+  * Kube-in-docker is very nice, but may not be critical
+
+  * Do small things to make progress
+
+  * Put pressure on docker
+
+* Web UI (preilly)
+
+  * Where does the Web UI live?
+
+    * Decided to split it out
+
+  * Use it as a container image
+
+  * Build the image as part of the kube release process
+
+  * Vendor it back in? Maybe, maybe not.
+
+  * Will DNS be split out?
+
+    * Probably more tightly integrated instead
+
+  * Other potential spin-offs:
+
+    * apiserver
+
+    * clients
diff --git a/content/zh/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md b/content/zh/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md
new file mode 100644
index 0000000000..0411d844fe
--- /dev/null
+++ b/content/zh/blog/_posts/2017-10-00-Kubernetes-Community-Steering-Committee-Election-Results.md
@@ -0,0 +1,53 @@
+---
+title: " Kubernetes Community Steering Committee Election Results "
+date: 2017-10-05
+slug: kubernetes-community-steering-committee-election-results
+url: /blog/2017/10/Kubernetes-Community-Steering-Committee-Election-Results
+---
+
+
+Since the Kubernetes 1.0 release at OSCON in 2015, the community has been working together to share the leadership and responsibility of the Kubernetes community.
+
+
+Under the work of a bootstrap governance committee composed of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny, and Tim Hockin - long-standing leaders representing 5 different companies with heavy investments of talent and effort in the Kubernetes ecosystem - the initial [Steering Committee charter](https://github.com/kubernetes/steering/blob/master/charter.md) was written, and a community election was launched to elect the members of the Kubernetes Steering Committee.
+
+
+To quote the charter -
+
+
+_The initial role of the steering committee is to **instantiate the formal process for Kubernetes governance**. In addition to defining the initial governance process, the steering committee strongly believes that **it is important to provide a way to iterate on the methods the steering committee defines**. We do not believe that we will get it right the first time, or ever, and won't complete the governance development work in one pass. The role of the steering committee is to be a responsive body that can refactor and reform as necessary to adapt to a changing project and community._
+
+
+This is the largest step yet toward making our implicit governance structures explicit. The vision of Kubernetes has always been to be an inclusive and broad community, bringing users the convenience of containers with our software. The Steering Committee will be a strong leading voice guiding the project toward success.
+
+
+The Kubernetes community is pleased to announce the results of the 2017 Steering Committee elections. **Please congratulate Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole, and Timothy St.
Clair**, who will join the bootstrap governance committee members on the newly formed Kubernetes Steering Committee. Derek, Michelle, and Phillip will serve 2-year terms. Aaron, Quinton, and Timothy will serve 1-year terms.
+
+
+This group will meet regularly to clarify and streamline the structure and operation of the project. Early work will include electing a representative to the CNCF Governing Board, evolving project processes, refining and documenting the vision and scope of the project, and chartering and delegating to more topical community groups.
+
+
+See the [full Steering Committee backlog](https://github.com/kubernetes/steering/blob/master/backlog.md) for more details.
diff --git a/content/zh/docs/concepts/overview/kubernetes-api.md b/content/zh/docs/concepts/overview/kubernetes-api.md
index 2d53841443..35ca28fa56 100644
--- a/content/zh/docs/concepts/overview/kubernetes-api.md
+++ b/content/zh/docs/concepts/overview/kubernetes-api.md
@@ -226,14 +226,14 @@ There are two supported paths to extending the API with [custom resources](/docs
 to provide a seamless experience for clients.
@@ -244,22 +244,31 @@ to pick up the `--runtime-config` changes.
 For example: to disable batch/v1, set `--runtime-config=batch/v1=false`; to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
 The flag accepts a comma-separated set of key=value pairs describing the runtime configuration of the apiserver.
-Important: enabling or disabling groups or resources requires restarting the apiserver and controller manager to pick up the `--runtime-config` changes.
+{{< note >}}
+
+Enabling or disabling groups or resources requires restarting the apiserver and controller manager to pick up the `--runtime-config` changes.
+
+{{< /note >}}
-## Enabling resources in the groups
+## Enabling resources in the extensions/v1beta1 group
-DaemonSets, Deployments, HorizontalPodAutoscalers, Ingress, Jobs, and ReplicaSets are enabled by default.
-Other extension resources can be enabled by setting `--runtime-config` on the apiserver.
-`--runtime-config` accepts comma-separated values. For example: to disable Deployments and Ingress,
-set `--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingress=false`
+DaemonSets, Deployments, StatefulSets, NetworkPolicies, PodSecurityPolicies, and ReplicaSets in the `extensions/v1beta1` API group are disabled by default.
+For example: to enable deployments and daemonsets, set `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
+
+{{< note >}}
+
+For legacy reasons, enabling/disabling of individual resources is supported only in the `extensions/v1beta1` API group.
+
+{{< /note >}}
 {{% /capture %}}
diff --git a/content/zh/docs/concepts/policy/resource-quotas.md b/content/zh/docs/concepts/policy/resource-quotas.md
index 1e00115344..e2e3a9c2e5 100644
--- a/content/zh/docs/concepts/policy/resource-quotas.md
+++ b/content/zh/docs/concepts/policy/resource-quotas.md
@@ -2,6 +2,8 @@
 approvers:
 - derekwaynecarr
 title: Resource Quotas
+content_template: templates/concept
+weight: 10
 ---
-### Driver (#driver)
+### Driver {#driver}

 A volume snapshot class has a driver that determines what CSI volume plugin is used for provisioning VolumeSnapshots. This field must be specified.
diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh/docs/concepts/storage/volumes.md
index 797352e61b..ab64093e3a 100644
--- a/content/zh/docs/concepts/storage/volumes.md
+++ b/content/zh/docs/concepts/storage/volumes.md
@@ -1771,10 +1771,6 @@ Choose one of the following methods to create a VMDK.
 {{< tabs name="tabs_volumes" >}}
 {{% tab name="Create using vmkfstools" %}}
-
 First ssh into ESX, then use the following command to create a VMDK:
@@ -1783,10 +1779,6 @@
 vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
 ```
 {{% /tab %}}
 {{% tab name="Create using vmware-vdiskmanager" %}}
-
 Use the following command to create a VMDK:
@@ -2409,7 +2401,7 @@
 sudo systemctl daemon-reload
 sudo systemctl restart docker
 ```
-
+{{% /capture %}}

 {{% capture whatsnext %}}
@@ -2418,4 +2410,5 @@ sudo systemctl restart docker
 -->
 * See the [Deploying WordPress and MySQL with Persistent Volumes](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) example.
+
 {{% /capture %}}
diff --git a/content/zh/docs/contribute/intermediate.md b/content/zh/docs/contribute/intermediate.md
index 9150aeeb88..b7ae9e36f7 100644
--- a/content/zh/docs/contribute/intermediate.md
+++ b/content/zh/docs/contribute/intermediate.md
@@ -696,14 +696,15 @@ most up-to-date version of that branch.
git commit -m "Your commit message"
```
- {{< note >}}
- Do not reference a GitHub issue or PR (by ID or URL) in the commit message. If you do, the issue or PR gets a notification every time the commit shows up in a new Git branch. Later, you can link issues and pull requests together in the GitHub UI.
- {{< /note >}}
+{{< note >}}
+
+Do not reference a GitHub issue or PR (by ID or URL) in the commit message. If you do, the issue or PR gets a notification every time the commit shows up in a new Git branch. Later, you can link issues and pull requests together in the GitHub UI.
+{{< /note >}}
5.
- Path to the file containing Azure container registry configuration information.
-
-
+
+--azure-container-registry-config string
+
+
+
+
+Path to the file containing Azure container registry configuration information.
+
+
-
-
-
- --bind-address 0.0.0.0     Default: 0.0.0.0
-
-
-
-
-
- The IP address for the proxy server to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces)
-
-
+
+
+
+--bind-address 0.0.0.0     Default: 0.0.0.0
+
+
+
+
+
+The IP address for the proxy server to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces)
+
+
-
- --cleanup
-
-
-
-
- If true, clean up iptables and ipvs rules and exit.
-
-
+
+--cleanup
+
+
+
+
+If true, clean up iptables and ipvs rules and exit.
+
+
-
-
-
- --cleanup-ipvs     Default: true
-
-
-
-
-
- If true and --cleanup is specified, kube-proxy will also flush IPVS rules, in addition to the normal cleanup.
-
-
+
+
+
+--cleanup-ipvs     Default: true
+
+
+
+
+
+If true and --cleanup is specified, kube-proxy will also flush IPVS rules, in addition to the normal cleanup.
+
+
-
- --cluster-cidr string
-
-
-
-
- The CIDR range of Pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded, and traffic sent from Pods to an external LoadBalancer IP will be redirected to the respective cluster IP instead.
-
-
+
+--cluster-cidr string
+
+
+
+
+The CIDR range of Pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded, and traffic sent from Pods to an external LoadBalancer IP will be redirected to the respective cluster IP instead.
+
+
-
- --config string
-
-
-
-
- The path of the configuration file.
-
-
+
+--config string
+
+
+
+
+The path of the configuration file.
+
+
-
-
-
- --config-sync-period duration     Default: 15m0s
-
-
-
-
-
- How often configuration from the apiserver is refreshed. Must be greater than 0.
-
-
+
+
+
+--config-sync-period duration     Default: 15m0s
+
+
+
+
+
+How often configuration from the apiserver is refreshed. Must be greater than 0.
+
+
-
-
-
- --conntrack-max-per-core int32     Default: 32768
-
-
-
-
-
- Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
-
-
+
+
+
+--conntrack-max-per-core int32     Default: 32768
+
+
+
+
+
+Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
+
+
-
-
-
- --conntrack-min int32     Default: 131072
-
-
-
-
-
- Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core to 0 to leave the limit as-is).
-
-
+
+
+
+--conntrack-min int32     Default: 131072
+
+
+
+
+
+Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core to 0 to leave the limit as-is).
+
+
-
-
-
- --conntrack-tcp-timeout-close-wait duration     Default: 1h0m0s
-
-
-
-
-
- NAT timeout for TCP connections in the CLOSE_WAIT state
-
-
+
+
+
+--conntrack-tcp-timeout-close-wait duration     Default: 1h0m0s
+
+
+
+
+
+NAT timeout for TCP connections in the CLOSE_WAIT state
+
+
-
-
-
- --conntrack-tcp-timeout-established duration     Default: 24h0m0s
-
-
-
-
-
- Idle timeout for established TCP connections (0 to leave as-is)
-
-
+
+
+
+--conntrack-tcp-timeout-established duration     Default: 24h0m0s
+
+
+
+
+
+Idle timeout for established TCP connections (0 to leave as-is)
+
+
-
- --feature-gates mapStringBool
-
-
-
-
- A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
AttachVolumeLimit=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BlockVolume=true|false (BETA - default=true)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIBlockVolume=true|false (BETA - default=true)
CSIDriverRegistry=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (ALPHA - default=false)
CSIMigrationAWS=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (ALPHA - default=false)
CSINodeInfo=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceDefaulting=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MountContainers=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NodeLease=true|false (BETA - default=true)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodOverhead=true|false (ALPHA - default=false)
PodShareProcessNamespace=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
RequestManagement=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
ResourceQuotaScopeSelectors=true|false (BETA - default=true)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
ScheduleDaemonSetPods=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceLoadBalancerFinalizer=true|false (BETA - default=true)
ServiceNodeExclusion=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
StreamingProxyRedirects=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TaintBasedEvictions=true|false (BETA - default=true)
TaintNodesByCondition=true|false (BETA - default=true)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (ALPHA - default=false)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumePVCDataSource=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (ALPHA - default=false)
VolumeSubpathEnvExpansion=true|false (BETA - default=true)
WatchBookmark=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
WindowsGMSA=true|false (BETA - default=true)
WindowsRunAsUserName=true|false (ALPHA - default=false)
-
-
+
+--feature-gates mapStringBool
+
+
+
+
+A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
AttachVolumeLimit=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BlockVolume=true|false (BETA - default=true)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIBlockVolume=true|false (BETA - default=true)
CSIDriverRegistry=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (ALPHA - default=false)
CSIMigrationAWS=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (ALPHA - default=false)
CSINodeInfo=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceDefaulting=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MountContainers=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NodeLease=true|false (BETA - default=true)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodOverhead=true|false (ALPHA - default=false)
PodShareProcessNamespace=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
RequestManagement=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
ResourceQuotaScopeSelectors=true|false (BETA - default=true)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
ScheduleDaemonSetPods=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceLoadBalancerFinalizer=true|false (BETA - default=true)
ServiceNodeExclusion=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
StreamingProxyRedirects=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TaintBasedEvictions=true|false (BETA - default=true)
TaintNodesByCondition=true|false (BETA - default=true)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (ALPHA - default=false)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumePVCDataSource=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (ALPHA - default=false)
VolumeSubpathEnvExpansion=true|false (BETA - default=true)
WatchBookmark=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
WindowsGMSA=true|false (BETA - default=true)
WindowsRunAsUserName=true|false (ALPHA - default=false)
+
+
-
-
-
- --healthz-bind-address 0.0.0.0     Default: 0.0.0.0:10256
-
-
-
-
-
- The IP address and port for the health check server to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces)
-
-
+
+
+
+--healthz-bind-address 0.0.0.0     Default: 0.0.0.0:10256
+
+
+
+
+
+The IP address and port for the health check server to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces)
+
+
-
-
-
- --healthz-port int32     Default: 10256
-
-
-
-
-
- The port to bind the health check server to. Use 0 to disable.
-
-
+
+
+
+--healthz-port int32     Default: 10256
+
+
+
+
+
+The port to bind the health check server to. Use 0 to disable.
+
+
-
- -h, --help
-
-
-
-
- Help for kube-proxy
-
-
+
+-h, --help
+
+
+
+
+Help for kube-proxy
+
+
-
- --hostname-override string
-
-
-
-
- If non-empty, this string will be used as the identity instead of the actual hostname.
-
-
+
+--hostname-override string
+
+
+
+
+If non-empty, this string will be used as the identity instead of the actual hostname.
+
+
-
-
-
- --iptables-masquerade-bit int32     Default: 14
-
-
-
-
-
- If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
-
-
+
+
+
+--iptables-masquerade-bit int32     Default: 14
+
+
+
+
+
+If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
+
+
-
- --iptables-min-sync-period duration
-
-
-
-
- The minimum interval at which iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
-
-
+
+--iptables-min-sync-period duration
+
+
+
+
+The minimum interval at which iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
+
-
-
-
- --iptables-sync-period duration     Default: 30s
-
-
-
-
-
- The maximum interval at which iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
-
-
+
+
+
+--iptables-sync-period duration     Default: 30s
+
+
+
+
+
+The maximum interval at which iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
+
-
- --ipvs-exclude-cidrs stringSlice
-
-
-
-
- A comma-separated list of CIDRs that the ipvs proxier should not touch when cleaning up IPVS rules.
-
-
+
+--ipvs-exclude-cidrs stringSlice
+
+
+
+
+A comma-separated list of CIDRs that the ipvs proxier should not touch when cleaning up IPVS rules.
+
+
-
- --ipvs-min-sync-period duration
-
-
-
-
- The minimum interval at which ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
-
-
+
+--ipvs-min-sync-period duration
+
+
+
+
+The minimum interval at which ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
+
+
-
- --ipvs-scheduler string
-
-
-
-
- The ipvs scheduler type when the proxy mode is ipvs
-
-
+
+--ipvs-scheduler string
+
+
+
+
+The ipvs scheduler type when the proxy mode is ipvs
+
+
-
- --ipvs-strict-arp
-
-
-
-
- Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
-
-
+
+--ipvs-strict-arp
+
+
+
+
+Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
+
+
-
-
-
- --ipvs-sync-period duration     Default: 30s
-
-
-
-
-
- The maximum interval at which ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
-
-
+
+
+
+--ipvs-sync-period duration     Default: 30s
+
+
+
+
+
+The maximum interval at which ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
+
+
-
-
-
- --kube-api-burst int32     Default: 10
-
-
-
-
-
- Burst to use while talking with the kubernetes apiserver
-
-
+
+
+
+--kube-api-burst int32     Default: 10
+
+
+
+
+
+Burst to use while talking with the kubernetes apiserver
+
+
-
-
-
- --kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
-
-
-
-
-
- Content type of requests sent to the apiserver.
-
-
+
+
+
+--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
+
+
+
+
+
+Content type of requests sent to the apiserver.
+
+
-
-
-
- --kube-api-qps float32     Default: 5
-
-
-
-
-
- QPS to use while talking with the kubernetes apiserver
-
-
+
+
+
+--kube-api-qps float32     Default: 5
+
+
+
+
+
+QPS to use while talking with the kubernetes apiserver
+
+
-
- --kubeconfig string
-
-
-
-
- Path to a kubeconfig file containing authorization information (the master location is set by the master flag).
-
-
+
+--kubeconfig string
+
+
+
+
+Path to a kubeconfig file containing authorization information (the master location is set by the master flag).
+
+
-
-
-
- --log-flush-frequency duration     Default: 5s
-
-
-
-
-
- Maximum number of seconds between log flushes
-
-
+
+
+
+--log-flush-frequency duration     Default: 5s
+
+
+
+
+
+Maximum number of seconds between log flushes
+
+
-
- --masquerade-all
-
-
-
-
- If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this is not commonly needed)
-
-
+
+--masquerade-all
+
+
+
+
+If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this is not commonly needed)
+
+
-
- --master string
-
-
-
-
- The address of the Kubernetes API server (overrides any value in kubeconfig)
-
-
+
+--master string
+
+
+
+
+The address of the Kubernetes API server (overrides any value in kubeconfig)
+
+
-
-
-
- --metrics-bind-address 0.0.0.0     Default: 127.0.0.1:10249
-
-
-
-
-
- The IP address for the metrics server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces)
-
-
+
+
+
+--metrics-bind-address 0.0.0.0     Default: 127.0.0.1:10249
+
+
+
+
+
+The IP address for the metrics server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces)
+
+
-
-
-
- --metrics-port int32     Default: 10249
-
-
-
-
-
- The port to bind the metrics server to. Use 0 to disable.
-
-
+
+
+
+--metrics-port int32     Default: 10249
+
+
+
+
+
+The port to bind the metrics server to. Use 0 to disable.
+
+
-
- --nodeport-addresses stringSlice
-
-
-
-
- A string slice of values specifying the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
-
-
+
+--nodeport-addresses stringSlice
+
+
+
+
+A string slice of values specifying the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
+
+
-
-
-
- --oom-score-adj int32     Default: -999
-
-
-
-
-
- The oom-score-adj value for the kube-proxy process. Must be within the range [-1000, 1000]
-
-
+
+
+
+--oom-score-adj int32     Default: -999
+
+
+
+
+
+The oom-score-adj value for the kube-proxy process. Must be within the range [-1000, 1000]
+
+
-
- --profiling
-
-
-
-
- If true, enables profiling via the web interface at /debug/pprof.
-
-
+
+--profiling
+
+
+
+
+If true, enables profiling via the web interface at /debug/pprof.
+
+
-
- --proxy-mode ProxyMode
-
-
-
-
- Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' (experimental). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
-
-
+
+--proxy-mode ProxyMode
+
+
+
+
+Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs' (experimental). If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
+
+
-
- --proxy-port-range port-range
-
-
-
-
- Range of host ports (beginPort-endPort, single port, or beginPort+offset) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0), ports will be chosen randomly.
-
-
+
+--proxy-port-range port-range
+
+
+
+
+Range of host ports (beginPort-endPort, single port, or beginPort+offset) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0), ports will be chosen randomly.
+
+
-
-
-
- --udp-timeout duration     Default: 250ms
-
-
-
-
-
- How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
-
-
+
+--udp-timeout duration     Default: 250ms
+
+
+
+
+
+How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
+
+
-
- --version version[=true]
-
-
-
-
- Print version information and quit
-
-
+
+--version version[=true]
+
+
+
+
+Print version information and quit
+
+
-
- --write-config-to string
-
-
-
-
- If set, write the configuration values to this file and exit.
-
-
+
+--write-config-to string
+
+
+
+
+If set, write the configuration values to this file and exit.
+
+
-
+
diff --git a/i18n/pl.toml b/i18n/pl.toml
index 2230332382..9621301a49 100644
--- a/i18n/pl.toml
+++ b/i18n/pl.toml
@@ -175,6 +175,9 @@ other = "Cele"
 [prerequisites_heading]
 other = "Nim zaczniesz"
+[subscribe_button]
+other = "Subskrybuj"
+
 [ui_search_placeholder]
 other = "Szukaj"
diff --git a/i18n/zh.toml b/i18n/zh.toml
index 83e4399cde..c42c53fd70 100644
--- a/i18n/zh.toml
+++ b/i18n/zh.toml
@@ -172,6 +172,9 @@ other = "教程目标"
 [prerequisites_heading]
 other = "准备开始"
+[subscribe_button]
+other = "订阅"
+
 [ui_search_placeholder]
 other = "搜索"
diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-apt-update-upgrade.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-apt-update-upgrade.png
new file mode 100755
index 0000000000..50d2e48bc8
Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-apt-update-upgrade.png differ
diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-error.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-error.png
new file mode 100755
index 0000000000..7198c078a2
Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-error.png differ
diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-success.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-success.png new file mode 100755 index 0000000000..0b6a67b7be Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-kubectl-success.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-resources-wsl-integration.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-resources-wsl-integration.png new file mode 100755 index 0000000000..2376b719ae Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-resources-wsl-integration.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-general.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-general.png new file mode 100755 index 0000000000..4898cebb39 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-general.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-wsl2-activated.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-wsl2-activated.png new file mode 100755 index 0000000000..622b049c77 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-settings-wsl2-activated.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-start.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-start.png new file mode 100755 index 0000000000..8621132c22 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-start.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-taskbar.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-taskbar.png new file mode 100755 index 0000000000..52472ab570 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-docker-taskbar.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-browse-nodes.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-browse-nodes.png new file mode 100755 index 0000000000..d7d6428ba6 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-browse-nodes.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-config-selected.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-config-selected.png new file mode 100755 index 0000000000..ce60ed9dc6 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-config-selected.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-error.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-error.png new file mode 100755 index 0000000000..6159b1a6b2 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-error.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-kube-directory.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-kube-directory.png new file mode 100755 index 
0000000000..f5f0011a68 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-kube-directory.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-login-success.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-login-success.png new file mode 100755 index 0000000000..df5a95c74e Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-login-success.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-rbac-serviceaccount.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-rbac-serviceaccount.png new file mode 100755 index 0000000000..4971114f17 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-rbac-serviceaccount.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-serviceaccount.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-serviceaccount.png new file mode 100755 index 0000000000..5c4de5e9d2 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-serviceaccount.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-success.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-success.png new file mode 100755 index 0000000000..b93614dde5 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-success.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-wsl-distros.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-wsl-distros.png new file mode 100755 index 0000000000..5e21087d2b Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-dashboard-wsl-distros.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-k8s-master.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-k8s-master.png new file mode 100755 index 0000000000..6e9f959649 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-browse-k8s-master.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create-multinodes.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create-multinodes.png new file mode 100755 index 0000000000..3bade3996b Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create-multinodes.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create.png new file mode 100755 index 0000000000..4a057f4923 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-cluster-create.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install-dashboard.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install-dashboard.png new file mode 100755 index 0000000000..6886b5dbbf Binary files /dev/null and 
b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install-dashboard.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install.png new file mode 100755 index 0000000000..eca6793efa Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-install.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-nodes-services.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-nodes-services.png new file mode 100755 index 0000000000..0a68fcee21 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-nodes-services.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-services-multinodes.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-services-multinodes.png new file mode 100755 index 0000000000..d3818a7f38 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-kind-list-services-multinodes.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard-loadbalancer.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard-loadbalancer.png new file mode 100755 index 0000000000..5e57046f5c Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard-loadbalancer.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard.png new file mode 100755 index 0000000000..b3850bb40b Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-dashboard.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master-localhost.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master-localhost.png new file mode 100755 index 0000000000..e91a497649 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master-localhost.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master.png new file mode 100755 index 0000000000..8b96119d8d Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-browse-k8s-master.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-get-all.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-get-all.png new file mode 100755 index 0000000000..a1469c5a3a Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-get-all.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-type-loadbalancer.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-type-loadbalancer.png new file mode 100755 index 0000000000..e06fad7d50 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-dashboard-type-loadbalancer.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install 
conntrack.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install conntrack.png new file mode 100755 index 0000000000..1464a52b53 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install conntrack.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install.png new file mode 100755 index 0000000000..104fbe3cf8 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-install.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error-systemd.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error-systemd.png new file mode 100755 index 0000000000..bddbb74310 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error-systemd.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error.png new file mode 100755 index 0000000000..e4c5b046df Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-error.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-fixed.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-fixed.png new file mode 100755 index 0000000000..2009b67098 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-start-fixed.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-enabled.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-enabled.png new file mode 100755 index 0000000000..8f5062027e Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-enabled.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-files.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-files.png new file mode 100755 index 0000000000..81724ae5cb Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-files.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-packages.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-packages.png new file mode 100755 index 0000000000..42700bceee Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-minikube-systemd-packages.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-start-menu-search.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-start-menu-search.png new file mode 100755 index 0000000000..242b55e282 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-start-menu-search.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-user-password.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-user-password.png new file mode 100755 index 0000000000..b4f66c94ed Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-user-password.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-visudo.png 
b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-visudo.png new file mode 100755 index 0000000000..abd7c1edf5 Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-visudo.png differ diff --git a/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-windows-store-terminal.png b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-windows-store-terminal.png new file mode 100755 index 0000000000..1a78cc3acd Binary files /dev/null and b/static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-windows-store-terminal.png differ