From 2dfbdc2cd85094ceb8524089980d3a6f0e7e1c54 Mon Sep 17 00:00:00 2001
From: Maksym Vlasov
Date: Mon, 13 Jan 2020 18:51:38 +0200
Subject: [PATCH 001/105] Initial commit for Ukrainian localization (#18569)
* Initial commit for Ukrainian localization
* Fix misspell and crosslink
* Add Nikita Potapenko to PR reviewers
https://github.com/kubernetes/website/pull/18569#issuecomment-573014402
Co-authored-by: Anastasiya Kulyk <56824659+anastyakulyk@users.noreply.github.com>
---
OWNERS_ALIASES | 8 ++++++
README-uk.md | 71 +++++++++++++++++++++++++++++++++++++++++++++++
README.md | 2 +-
config.toml | 11 ++++++++
content/uk/OWNERS | 13 +++++++++
5 files changed, 104 insertions(+), 1 deletion(-)
create mode 100644 README-uk.md
create mode 100644 content/uk/OWNERS
diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index b6d0cfcd45..0e6ba8684a 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -216,3 +216,11 @@ aliases:
- aisonaku
- potapy4
- dianaabv
+ sig-docs-uk-owners: # Admins for Ukrainian content
+ - anastyakulyk
+ - MaxymVlasov
+ sig-docs-uk-reviews: # PR reviews for Ukrainian content
+ - anastyakulyk
+ - idvoretskyi
+ - MaxymVlasov
+ - Potapy4
diff --git a/README-uk.md b/README-uk.md
new file mode 100644
index 0000000000..68d3b0db0a
--- /dev/null
+++ b/README-uk.md
@@ -0,0 +1,71 @@
+# Документація Kubernetes
+
+[Build Status](https://travis-ci.org/kubernetes/website)
+[Latest release](https://github.com/kubernetes/website/releases/latest)
+
+Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [вебсайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок!
+
+## Внесок у документацію
+
+Ви можете створити копію цього репозиторія у своєму акаунті на GitHub, натиснувши на кнопку **Fork**, що розташована справа зверху. Ця копія називатиметься *fork* (відгалуження). Зробіть будь-які необхідні зміни у своєму відгалуженні. Коли ви будете готові надіслати їх нам, перейдіть до свого відгалуження і створіть новий pull request, щоб сповістити нас.
+
+Після того, як ви створили pull request, рецензент Kubernetes зобов’язується надати вам по ньому чіткий і конструктивний коментар. **Ваш обов’язок як творця pull request - відкоригувати його відповідно до зауважень рецензента Kubernetes.** Також, зауважте: може статися так, що ви отримаєте коментарі від декількох рецензентів Kubernetes або від іншого рецензента, ніж той, якого вам було призначено від початку. Крім того, за потреби один із ваших рецензентів може запросити технічну перевірку від одного з [технічних рецензентів Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers). Рецензенти намагатимуться відреагувати вчасно, проте час відповіді може відрізнятися в залежності від обставин.
+
+Більше інформації про внесок у документацію Kubernetes ви знайдете у наступних джерелах:
+
+* [Внесок: з чого почати](https://kubernetes.io/docs/contribute/start/)
+* [Візуалізація запропонованих змін до документації](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
+* [Використання шаблонів сторінок](http://kubernetes.io/docs/contribute/style/page-templates/)
+* [Керівництво зі стилю оформлення документації](http://kubernetes.io/docs/contribute/style/style-guide/)
+* [Переклад документації Kubernetes іншими мовами](https://kubernetes.io/docs/contribute/localization/)
+
+## Запуск сайту локально за допомогою Docker
+
+Для локального запуску вебсайту Kubernetes рекомендовано запустити спеціальний [Docker](https://docker.com)-образ, що містить генератор статичних сайтів [Hugo](https://gohugo.io).
+
+> Якщо ви працюєте під Windows, вам знадобиться ще декілька інструментів, які можна встановити за допомогою [Chocolatey](https://chocolatey.org). `choco install make`
+
+> Якщо ви вважаєте кращим запустити вебсайт локально без використання Docker, дивіться пункт нижче [Запуск сайту локально за допомогою Hugo](#запуск-сайту-локально-зa-допомогою-hugo).
+
+Якщо у вас вже [запущений](https://www.docker.com/get-started) Docker, зберіть локальний Docker-образ `kubernetes-hugo`:
+
+```bash
+make docker-image
+```
+
+Після того, як образ зібрано, ви можете запустити вебсайт локально:
+
+```bash
+make docker-serve
+```
+
+Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+
+## Запуск сайту локально зa допомогою Hugo
+
+Для інструкцій по установці Hugo дивіться [офіційну документацію](https://gohugo.io/getting-started/installing/). Обов’язково встановіть розширену версію Hugo, яка позначена змінною оточення `HUGO_VERSION` у файлі [`netlify.toml`](netlify.toml#L9).
+
+Після установки Hugo запустіть вебсайт локально командою:
+
+```bash
+make serve
+```
+
+Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+
+## Спільнота, обговорення, внесок і підтримка
+
+Дізнайтеся, як долучитися до спільноти Kubernetes на [сторінці спільноти](http://kubernetes.io/community/).
+
+Для зв’язку із супроводжуючими проекту скористайтеся:
+
+- [Slack](https://kubernetes.slack.com/messages/sig-docs)
+- [Поштова розсилка](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
+
+### Кодекс поведінки
+
+Участь у спільноті Kubernetes визначається правилами [Кодексу поведінки спільноти Kubernetes](code-of-conduct.md).
+
+## Дякуємо!
+
+Долучення до спільноти - запорука успішного розвитку Kubernetes. Ми цінуємо ваш внесок у наш вебсайт і документацію!
diff --git a/README.md b/README.md
index f403f9e9de..02919f3136 100644
--- a/README.md
+++ b/README.md
@@ -27,7 +27,7 @@ For more information about contributing to the Kubernetes documentation, see:
|[Hindi README](README-hi.md)|[Spanish README](README-es.md)|
|[Indonesian README](README-id.md)|[Chinese README](README-zh.md)|
|[Japanese README](README-ja.md)|[Vietnamese README](README-vi.md)|
-|[Russian README](README-ru.md)|
+|[Russian README](README-ru.md)|[Ukrainian README](README-uk.md)|
|||
## Running the website locally using Docker
diff --git a/config.toml b/config.toml
index 990392d659..399cb38e68 100644
--- a/config.toml
+++ b/config.toml
@@ -286,3 +286,14 @@ time_format_blog = "02.01.2006"
# A list of language codes to look for untranslated content, ordered from left to right.
language_alternatives = ["en"]
+[languages.uk]
+title = "Kubernetes"
+description = "Довершена система оркестрації контейнерів"
+languageName = "Українська"
+weight = 13
+contentDir = "content/uk"
+
+[languages.uk.params]
+time_format_blog = "02.01.2006"
+# A list of language codes to look for untranslated content, ordered from left to right.
+language_alternatives = ["en"]
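With this `[languages.uk]` block in place, the Ukrainian pages can be previewed using the repository's existing targets; a minimal sketch, assuming the default Hugo multilingual URL layout where non-default languages are served under their language code:

```bash
# Serve the site locally, then browse the Ukrainian content.
make serve
# Open http://localhost:1313/uk/ in a browser (non-default languages are
# served under their language code by Hugo's multilingual mode).
```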
diff --git a/content/uk/OWNERS b/content/uk/OWNERS
new file mode 100644
index 0000000000..09fbf1170c
--- /dev/null
+++ b/content/uk/OWNERS
@@ -0,0 +1,13 @@
+# See the OWNERS docs at https://go.k8s.io/owners
+
+# This is the directory for Ukrainian source content.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-uk-reviews
+
+approvers:
+- sig-docs-uk-owners
+
+labels:
+- language/uk
From dfd00c180e6222c49b5d75963647fbb6aed2018e Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Sun, 8 Dec 2019 20:08:01 +0000
Subject: [PATCH 002/105] Fix whitespace
The Kubernetes object is called ReplicationController.
---
content/ko/docs/reference/glossary/replication-controller.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md
index f5c19a33aa..2bcf7aefab 100755
--- a/content/ko/docs/reference/glossary/replication-controller.md
+++ b/content/ko/docs/reference/glossary/replication-controller.md
@@ -1,5 +1,5 @@
---
-title: 레플리케이션 컨트롤러(Replication Controller)
+title: 레플리케이션 컨트롤러(ReplicationController)
id: replication-controller
date: 2018-04-12
full_link:
From 7bd5562d2b1aaf1adea48585c154741ababe4072 Mon Sep 17 00:00:00 2001
From: MaxymVlasov
Date: Wed, 5 Feb 2020 14:53:39 +0200
Subject: [PATCH 003/105] Fix merge conflict resolution mistake
---
config.toml | 3 +++
1 file changed, 3 insertions(+)
diff --git a/config.toml b/config.toml
index be61230816..5c0328ec9e 100644
--- a/config.toml
+++ b/config.toml
@@ -295,6 +295,9 @@ contentDir = "content/pl"
[languages.pl.params]
time_format_blog = "01.02.2006"
+# A list of language codes to look for untranslated content, ordered from left to right.
+language_alternatives = ["en"]
+
[languages.uk]
title = "Kubernetes"
description = "Довершена система оркестрації контейнерів"
From 93a23a1bf4b7749bd7049f6081de1dff399b7178 Mon Sep 17 00:00:00 2001
From: Brent Klein
Date: Thu, 27 Feb 2020 13:50:15 -0500
Subject: [PATCH 004/105] Updated Deployment description for clarity.
---
.../tutorials/kubernetes-basics/deploy-app/deploy-intro.html | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
index 8f38960de7..af2d5eeed2 100644
--- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -31,7 +31,8 @@ weight: 10
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it.
To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes
how to create and update instances of your application. Once you've created a Deployment, the Kubernetes
- master schedules mentioned application instances onto individual Nodes in the cluster.
+ master schedules the application instances included in that Deployment to run on individual Nodes in the
+ cluster.
Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.
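A minimal, hedged command-line sketch of the Deployment workflow this paragraph describes; the name `hello-app` and the `nginx` image are placeholders, not part of the tutorial text:

```bash
# Create a Deployment; the control plane schedules its Pod(s) onto available Nodes.
kubectl create deployment hello-app --image=nginx

# The Deployment controller keeps the desired number of instances running,
# replacing Pods whose Node goes down or is removed.
kubectl get deployments
kubectl get pods -o wide
```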
From 2e8eb9bd4f906fdd9435476b359395a4ccb5d8d0 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Fri, 28 Feb 2020 09:26:54 -0500
Subject: [PATCH 005/105] Added endpoint-slices for language/fr
---
.../services-networking/endpoint-slices.md | 111 ++++++++++++++++++
1 file changed, 111 insertions(+)
create mode 100644 content/fr/docs/concepts/services-networking/endpoint-slices.md
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
new file mode 100644
index 0000000000..47f076d11b
--- /dev/null
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -0,0 +1,111 @@
+---
+reviewers:
+title: EndpointSlices
+feature:
+ title: EndpointSlices
+ description: >
+ Scalable tracking of network endpoints in a Kubernetes cluster.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+_EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Resource pour EndpointSlice {#endpointslice-resource}
+
+Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
+endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+
+Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`.
+
+```yaml
+apiVersion: discovery.k8s.io/v1beta1
+kind: EndpointSlice
+metadata:
+ name: example-abc
+ labels:
+ kubernetes.io/service-name: example
+addressType: IPv4
+ports:
+ - name: http
+ protocol: TCP
+ port: 80
+endpoints:
+ - addresses:
+ - "10.1.2.3"
+ conditions:
+ ready: true
+ hostname: pod-1
+ topology:
+ kubernetes.io/hostname: node-1
+ topology.kubernetes.io/zone: us-west2-a
+```
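For readers trying this out, the slices the controller creates for a Service can be listed by the `kubernetes.io/service-name` label shown in the manifest above; an illustrative sketch, assuming a cluster where the EndpointSlice API is enabled and the `example` Service exists:

```bash
# List the EndpointSlices the controller created for the "example" Service
# (assumes the EndpointSlice API is enabled and kubectl points at the cluster).
kubectl get endpointslices -l kubernetes.io/service-name=example

# Inspect a single slice, including its addresses, conditions and topology fields.
kubectl get endpointslice example-abc -o yaml
```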
+
+EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire.
+
+EnpointpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devront une amélioration de performance pour les services qui ont une grand quantité d'endpoints.
+
+### Types d'addresses
+
+EndpointSlices supporte trois type d'addresses:
+
+* IPv4
+* IPv6
+* FQDN (Fully Qualified Domain Name) - [serveur entièrement nommé]
+
+### Topologie
+
+Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes.
+Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
+
+* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe.
+* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
+* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe.
+
+Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
+
+### Capacité d'EndpointSlices
+
+Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip
+text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000.
+
+### Distribution d'EndpointSlices
+
+Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices.
+
+Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibrent pas activement. La logic du contrôlleur est assez simple:
+
+1. Itérer à travers les EnpointSlices existantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
+2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
+3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
+
+par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
+
+Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
+
+En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
+
+## Motivation
+
+Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau.
+
+Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
+* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
+
+{{% /capture %}}
\ No newline at end of file
From 86cf63495c48a67c9f9915d7c3e15602bfcc5707 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 2 Mar 2020 09:31:05 -0500
Subject: [PATCH 006/105] First round review changes
---
.../services-networking/endpoint-slices.md | 28 +++++++++----------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index 47f076d11b..460bc41f25 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -4,7 +4,7 @@ title: EndpointSlices
feature:
title: EndpointSlices
description: >
- Scalable tracking of network endpoints in a Kubernetes cluster.
+ Suivi évolutif des points de terminaison réseau dans un cluster Kubernetes.
content_template: templates/concept
weight: 10
@@ -24,17 +24,17 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése
## Resource pour EndpointSlice {#endpointslice-resource}
Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selecteur" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
-Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`.
+Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`.
```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
- name: example-abc
+ name: exemple-abc
labels:
- kubernetes.io/service-name: example
+ kubernetes.io/service-name: exemple
addressType: IPv4
ports:
- name: http
@@ -66,18 +66,18 @@ EndpointSlices supporte trois type d'addresses:
### Topologie
Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes.
-Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
+Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
-* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe.
+* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe.
* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe.
-Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
+Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selecteur" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
### Capacité d'EndpointSlices
-Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip
-text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000.
+Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip
+text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
### Distribution d'EndpointSlices
@@ -89,9 +89,9 @@ Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possib
2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
-par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
+Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
-Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
+Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
@@ -99,13 +99,13 @@ En pratique, cette distribution moins qu'idéale devrait être rare. La plupart
Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau.
-Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
{{% /capture %}}
{{% capture whatsnext %}}
* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
-* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
+* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
\ No newline at end of file
From 2e750fa39e706c26c994c15db6b0e9a3685830d3 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 2 Mar 2020 09:31:05 -0500
Subject: [PATCH 007/105] First round review changes
---
.../services-networking/endpoint-slices.md | 26 +++++++++----------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index 47f076d11b..6798eb2c20 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -4,7 +4,7 @@ title: EndpointSlices
feature:
title: EndpointSlices
description: >
- Scalable tracking of network endpoints in a Kubernetes cluster.
+ Suivi évolutif des points de terminaison réseau dans un cluster Kubernetes.
content_template: templates/concept
weight: 10
@@ -24,17 +24,17 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése
## Resource pour EndpointSlice {#endpointslice-resource}
Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
-Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`.
+Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`.
```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
- name: example-abc
+ name: exemple-abc
labels:
- kubernetes.io/service-name: example
+ kubernetes.io/service-name: exemple
addressType: IPv4
ports:
- name: http
@@ -66,9 +66,9 @@ EndpointSlices supporte trois type d'addresses:
### Topologie
Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes.
-Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
+Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
-* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe.
+* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe.
* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe.
@@ -76,8 +76,8 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que
### Capacité d'EndpointSlices
-Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip
-text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000.
+Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip
+text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
### Distribution d'EndpointSlices
@@ -89,9 +89,9 @@ Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possib
2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
-par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
+Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
-Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
+Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
@@ -99,13 +99,13 @@ En pratique, cette distribution moins qu'idéale devrait être rare. La plupart
Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau.
-Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
{{% /capture %}}
{{% capture whatsnext %}}
* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
-* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
+* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
\ No newline at end of file
From 48828fdc4a17ab872d7f3c9ae65c4c09e8934668 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 2 Mar 2020 11:39:54 -0500
Subject: [PATCH 008/105] keeping term_id but changing text
---
.../fr/docs/concepts/services-networking/endpoint-slices.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index 04a63aa6c8..6992428f57 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -24,7 +24,7 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése
## Resource pour EndpointSlice {#endpointslice-resource}
Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`.
@@ -72,7 +72,7 @@ Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les info
* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe.
-Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
+Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
### Capacité d'EndpointSlices
From 6eceb245a2335d1e1a15ebaa4086dfbb5d4610f4 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 2 Mar 2020 12:20:17 -0500
Subject: [PATCH 009/105] remove typo
---
content/fr/docs/concepts/services-networking/endpoint-slices.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index 6992428f57..a96dad2b98 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -106,6 +106,6 @@ Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans
{{% capture whatsnext %}}
* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
-* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/)
+* Lire [Connecté des Application aux Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
\ No newline at end of file
From 73f5b576d4bb9309037fbb41785cb937efc253a8 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 2 Mar 2020 12:39:39 -0500
Subject: [PATCH 010/105] Fix accent and verb accordance
---
.../services-networking/endpoint-slices.md | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index a96dad2b98..b25516b1a2 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -24,7 +24,7 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése
## Resource pour EndpointSlice {#endpointslice-resource}
Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references à n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`.
@@ -53,7 +53,7 @@ endpoints:
EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire.
-EnpointpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devront une amélioration de performance pour les services qui ont une grand quantité d'endpoints.
+EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'endpoints.
### Types d'addresses
@@ -66,7 +66,7 @@ EndpointSlices supporte trois type d'addresses:
### Topologie
Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes.
-Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
+Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les étiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe.
* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
@@ -76,16 +76,15 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que
### Capacité d'EndpointSlices
-Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip
-text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
+Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
### Distribution d'EndpointSlices
Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices.
-Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibrent pas activement. La logic du contrôlleur est assez simple:
+Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logic du contrôlleur est assez simple:
-1. Itérer à travers les EnpointSlices existantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
+1. Itérer à travers les EnpointSlices éxistantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
@@ -93,7 +92,7 @@ Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'E
Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
-En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
+En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
## Motivation
From 44e3358c7f3bec220966750b7b170e8bc379450b Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Fri, 6 Mar 2020 10:00:26 -0500
Subject: [PATCH 011/105] second round review
---
.../services-networking/endpoint-slices.md | 20 +++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index b25516b1a2..103a2bf377 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -4,7 +4,7 @@ title: EndpointSlices
feature:
title: EndpointSlices
description: >
- Suivi évolutif des réseau endpoints dans un cluster Kubernetes.
+ Suivi évolutif des réseaux endpoints dans un cluster Kubernetes.
content_template: templates/concept
weight: 10
@@ -15,7 +15,7 @@ weight: 10
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
-_EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints.
+_EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints.
{{% /capture %}}
@@ -76,29 +76,29 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que
### Capacité d'EndpointSlices
-Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
+Les EndpointSlices sont limités a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
### Distribution d'EndpointSlices
-Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices.
+Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices.
-Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logic du contrôlleur est assez simple:
+Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple:
-1. Itérer à travers les EnpointSlices éxistantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
+1. Itérer à travers les EnpointSlices existantes, retirer les endpoints qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
-Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
+Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
-En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
+En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
## Motivation
-Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau.
+Les Endpoints API fournissent une méthode simple et facile à suivre pour les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'endpoint d'un réseau.
-Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+Puisque tous les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
{{% /capture %}}
From ef2f47772844dc5cc8fb6c4d410d2d19dc3d5910 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Fri, 6 Mar 2020 10:05:13 -0500
Subject: [PATCH 012/105] Rewording sentences for a more accurate translation
---
content/fr/docs/concepts/services-networking/endpoint-slices.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index 103a2bf377..cf980fd340 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -92,7 +92,7 @@ Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'E
Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
-En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer.
+En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les endpoints correspondants qui se feront remplacer.
## Motivation
From 476598657b092e4ae2cd145d990115c0974ddf0f Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 9 Mar 2020 15:39:33 -0400
Subject: [PATCH 013/105] Add reviewed changes
---
.../services-networking/endpoint-slices.md | 52 +++++++++++--------
1 file changed, 30 insertions(+), 22 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index cf980fd340..bf22340339 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -4,7 +4,7 @@ title: EndpointSlices
feature:
title: EndpointSlices
description: >
- Suivi évolutif des réseaux endpoints dans un cluster Kubernetes.
+ Suivi évolutif des Endpoints réseau dans un cluster Kubernetes.
content_template: templates/concept
weight: 10
@@ -15,7 +15,7 @@ weight: 10
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
-_EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints.
+_EndpointSlices_ offrent une méthode simple pour suivre les Endpoints d'un réseau au sein d'un cluster Kubernetes. Ils constituent une alternative plus évolutive et extensible aux Endpoints.
{{% /capture %}}
@@ -24,7 +24,9 @@ _EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un rés
## Resource pour EndpointSlice {#endpointslice-resource}
Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references à n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports.
+Endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié.
+Ces EndpointSlices vont inclure des références à n'importe quels Pods qui correspondent au sélecteur de Service.
+Les EndpointSlices regroupent les Endpoints d'un réseau par combinaisons uniques de Services et de Ports.
Par exemple, voici un échantillon d'une ressource EndpointSlice pour le Service Kubernetes `exemple`.
@@ -51,13 +53,15 @@ endpoints:
topology.kubernetes.io/zone: us-west2-a
```
-EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire.
+Les EndpointSlices gérés par le contrôleur d'EndpointSlice n'auront, par défaut, pas plus de 100 Endpoints chacun.
+En dessous de cette échelle, les EndpointSlices devraient mapper 1:1 avec les Endpoints et les Services, et devraient avoir des performances similaires.
-EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'endpoints.
+Les EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand il s'agit du routage d'un trafic interne.
+Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les Services qui ont une grande quantité d'Endpoints.
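As an illustrative, version-dependent sketch (not part of this patch), opting kube-proxy into consuming EndpointSlices is done through a feature gate; the gate name below is an assumption and differs between Kubernetes releases:

```yaml
# Hypothetical kube-proxy configuration excerpt. The exact feature gate that
# makes kube-proxy read EndpointSlices depends on the Kubernetes version
# (EndpointSliceProxying in later releases); treat this as a sketch only.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  EndpointSliceProxying: true
```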
### Types d'adresses
-EndpointSlices supporte trois type d'addresses:
+Les EndpointSlices supportent 3 types d'adresses:
* IPv4
* IPv6
@@ -65,46 +69,50 @@ EndpointSlices supporte trois type d'addresses:
### Topologie
-Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes.
-Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les étiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice:
+Chaque Endpoint dans un EndpointSlice peut contenir des informations de topologie pertinentes.
+Ceci est utilisé pour indiquer où se trouve un Endpoint, avec les informations sur le Node, la zone et la région correspondants. Lorsque ces valeurs sont disponibles, les labels de topologie suivants seront définis par le contrôleur EndpointSlice:
-* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe.
-* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe.
-* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe.
+* `kubernetes.io/hostname` - Nom du Node sur lequel l'Endpoint se situe.
+* `topology.kubernetes.io/zone` - Zone dans laquelle l'Endpoint se situe.
+* `topology.kubernetes.io/region` - Région dans laquelle l'Endpoint se situe.
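For illustration (not part of this patch), a minimal EndpointSlice carrying these topology labels could look like the following; the names and addresses are hypothetical and mirror the `exemple` Service used earlier on the page:

```yaml
# Hypothetical EndpointSlice (discovery.k8s.io/v1beta1, the API version this
# page documents) showing the per-endpoint topology map described above.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: exemple-abc
  labels:
    kubernetes.io/service-name: exemple
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    hostname: pod-1
    topology:
      kubernetes.io/hostname: node-1
      topology.kubernetes.io/zone: us-west2-a
      topology.kubernetes.io/region: us-west2
```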
-Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur.
+Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que leurs correspondances avec les EndpointSlices sont à jour.
+Le contrôleur gère les EndpointSlices pour tous les Services qui ont un sélecteur - [référence: {{< glossary_tooltip text="sélecteur" term_id="selector" >}}] - spécifié. Ceux-ci représenteront les IPs des Pods qui correspondent au sélecteur.
### Capacité d'EndpointSlices
-Les EndpointSlices sont limités a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
+Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
### Distribution d'EndpointSlices
-Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices.
+Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les Endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices.
Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple:
-1. Itérer à travers les EnpointSlices existantes, retirer les endpoints qui ne sont plus voulues et mettre à jour les endpoints qui ont changées.
-2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire.
-3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles.
+1. Itérer à travers les EnpointSlices existantes, retirer les Endpoints qui ne sont plus voulues et mettre à jour les Endpoints qui ont changées.
+2. Itérer à travers les EndpointSlices qui ont été modifiés dans la première étape et les remplir avec n'importe quel Endpoint nécessaire.
+3. S'il reste encore des Endpoints neufs à ajouter, essayez de les mettre dans une slice qui n'a pas été changée et/ou en créer de nouvelles.
-Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice.
+Par-dessus tout, la troisième étape priorise la limitation de mises à jour d'EndpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, s'il y avait 10 nouveaux Endpoints à ajouter et 2 EndpointSlices qui peuvent contenir 5 Endpoints en plus chacun, cette approche créera un nouvel EndpointSlice au lieu de remplir les EndpointSlices existants.
+C'est-à-dire qu'une seule création d'EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlices.
Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
-En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les endpoints correspondants qui se feront remplacer.
+En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les Endpoints correspondants qui se feront remplacer.
## Motivation
-Les Endpoints API fournissent une méthode simple et facile à suivre pour les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'endpoint d'un réseau.
+L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints d'un réseau dans Kubernetes.
+Malheureusement, comme les clusters Kubernetes et Services sont devenus plus grands, les limitations de cette API sont devenues plus visibles.
+Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau.
-Puisque tous les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
{{% /capture %}}
{{% capture whatsnext %}}
* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
-* Lire [Connecté des Application aux Services](/docs/concepts/services-networking/connect-applications-service/)
+* Lire [Connecter des applications aux Services](/docs/concepts/services-networking/connect-applications-service/)
{{% /capture %}}
\ No newline at end of file
From f0b4ddf012217514a524e43b857910ab91f9f198 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 9 Mar 2020 15:44:36 -0400
Subject: [PATCH 014/105] Add last minute changes
---
.../fr/docs/concepts/services-networking/endpoint-slices.md | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index bf22340339..a4283fee91 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -81,11 +81,12 @@ Le contrôleur gère les EndpointSlices pour tous les Services qui ont un sélec
### Capacité d'EndpointSlices
-Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000.
+Les EndpointSlices sont limités à une capacité de 100 Endpoints chacun, par défaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` du {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}, jusqu'à un maximum de 1000.
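As a hedged sketch (not part of this patch), raising that limit means passing the flag to kube-controller-manager; the static Pod excerpt below is hypothetical and only illustrates where such a flag would go:

```yaml
# Hypothetical excerpt of a kube-controller-manager static Pod manifest.
# --max-endpoints-per-slice raises the per-slice limit from the default of 100;
# the flag is capped at 1000.
spec:
  containers:
    - name: kube-controller-manager
      image: k8s.gcr.io/kube-controller-manager:v1.17.0
      command:
        - kube-controller-manager
        - --max-endpoints-per-slice=500
```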
### Distribution d'EndpointSlices
-Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les Endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices.
+Chaque EndpointSlice a un ensemble de ports qui s'applique à tous les Endpoints dans la ressource.
+Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents ports cibles pour le même port nommé, nécessitant différents EndpointSlices.
Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple:
From c61213299d319d4916fe5d01ad7812ed31c2b445 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Mon, 9 Mar 2020 16:03:30 -0400
Subject: [PATCH 015/105] Add changes for missed reviews
---
.../services-networking/endpoint-slices.md | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index a4283fee91..c3fdef9a29 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -90,24 +90,27 @@ Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se re
Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple:
-1. Itérer à travers les EnpointSlices existantes, retirer les Endpoints qui ne sont plus voulues et mettre à jour les Endpoints qui ont changées.
+1. Itérer à travers les EndpointSlices existants, retirer les Endpoints qui ne sont plus voulus et mettre à jour les Endpoints qui ont changé.
2. Itérer à travers les EndpointSlices qui ont été modifiés dans la première étape et les remplir avec n'importe quel Endpoint nécessaire.
3. S'il reste encore des Endpoints neufs à ajouter, essayez de les mettre dans une slice qui n'a pas été changée et/ou en créer de nouvelles.
Par-dessus tout, la troisième étape priorise la limitation de mises à jour d'EndpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, s'il y avait 10 nouveaux Endpoints à ajouter et 2 EndpointSlices qui peuvent contenir 5 Endpoints en plus chacun, cette approche créera un nouvel EndpointSlice au lieu de remplir les EndpointSlices existants.
C'est-à-dire qu'une seule création d'EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlices.
-Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein.
+Avec kube-proxy exécuté sur chaque Node et surveillant les EndpointSlices, chaque changement d'un EndpointSlice devient relativement coûteux puisqu'il sera transmis à chaque Node du cluster.
+Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut causer plusieurs EndpointSlices non remplis.
-En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les Endpoints correspondants qui se feront remplacer.
+En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice seront suffisamment petits pour tenir dans un EndpointSlice existant, et sinon, un nouvel EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également une compaction naturelle des EndpointSlices avec tous leurs Pods et les Endpoints correspondants qui se feront remplacer.
## Motivation
-L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints d'un réseau dans Kubernetes.
+L'API des Endpoints fournit une méthode simple et directe pour suivre les Endpoints dans Kubernetes.
Malheureusement, comme les clusters Kubernetes et Services sont devenus plus grands, les limitations de cette API sont devenues plus visibles.
-Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau.
+Plus particulièrement, celles-ci comprenaient des limitations liées au dimensionnement vers un plus grand nombre d'Endpoints réseau.
-Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
+Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes.
+Cela a affecté les performances des composants Kubernetes (notamment le plan de contrôle) et a causé une grande quantité de trafic réseau et de traitements lorsque les Endpoints changent.
+Les EndpointSlices aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique.
{{% /capture %}}
From dae5f0585b49dd46c5e8f66b25d1760a051fa035 Mon Sep 17 00:00:00 2001
From: Harrison Razanajatovo
Date: Tue, 10 Mar 2020 09:15:51 -0400
Subject: [PATCH 016/105] Simplify for clarity
---
.../fr/docs/concepts/services-networking/endpoint-slices.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md
index c3fdef9a29..b06117cc00 100644
--- a/content/fr/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md
@@ -23,8 +23,8 @@ _EndpointSlices_ offrent une méthode simple pour suivre les Endpoints d'un rés
## Resource pour EndpointSlice {#endpointslice-resource}
-Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau
-Endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié.
+Dans Kubernetes, un EndpointSlice contient des références à un ensemble d'Endpoints.
+Le contrôleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié.
Ces EndpointSlices vont inclure des références à n'importe quels Pods qui correspondent au sélecteur de Service.
Les EndpointSlices regroupent les Endpoints d'un réseau par combinaisons uniques de Services et de Ports.
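To make the above concrete, here is a small, hypothetical Service (not part of this patch) for which the EndpointSlice controller would create EndpointSlices grouping the matching Pods' endpoints per unique Service/port combination:

```yaml
# Hypothetical Service with a selector; the EndpointSlice controller tracks the
# Pods matching app: exemple and records their endpoints in EndpointSlices.
apiVersion: v1
kind: Service
metadata:
  name: exemple
spec:
  selector:
    app: exemple
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
```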
From 2ca1c075ad2d6959ddced1661b9f6cc1cb1bb286 Mon Sep 17 00:00:00 2001
From: Niklas Hansson
Date: Fri, 20 Mar 2020 09:43:49 +0100
Subject: [PATCH 017/105] Update the Mac autocomplete to use bash_profile
On OS X, Terminal by default runs a login shell every time, thus I believe it is simpler for new users to not have to change the default behaviour or source the bashrc file every time. https://apple.stackexchange.com/questions/51036/what-is-the-difference-between-bash-profile-and-bashrc
---
content/en/docs/tasks/tools/install-kubectl.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 83c9d1761b..d2922be1a6 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -419,7 +419,7 @@ You can test if you have bash-completion v2 already installed with `type _init_c
brew install bash-completion@2
```
-As stated in the output of this command, add the following to your `~/.bashrc` file:
+As stated in the output of this command, add the following to your `~/.bash_profile` file:
```shell
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
@@ -432,10 +432,10 @@ Reload your shell and verify that bash-completion v2 is correctly installed with
You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
-- Source the completion script in your `~/.bashrc` file:
+- Source the completion script in your `~/.bash_profile` file:
```shell
- echo 'source <(kubectl completion bash)' >>~/.bashrc
+ echo 'source <(kubectl completion bash)' >>~/.bash_profile
```
@@ -448,8 +448,8 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
- If you have an alias for kubectl, you can extend shell completion to work with that alias:
```shell
- echo 'alias k=kubectl' >>~/.bashrc
- echo 'complete -F __start_kubectl k' >>~/.bashrc
+ echo 'alias k=kubectl' >>~/.bash_profile
+ echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
From 5f263770fdd907881eb73f21b24cdf2c775a385f Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 17 Mar 2020 23:39:20 +0000
Subject: [PATCH 018/105] Tweak link to partners
---
content/en/docs/setup/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index 880dd46024..7a99422dee 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -53,6 +53,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
When evaluating a solution for a production environment, consider which aspects of operating a Kubernetes cluster (or _abstractions_) you want to manage yourself or offload to a provider.
-For a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers, see "[Partners](https://kubernetes.io/partners/#conformance)".
+[Kubernetes Partners](https://kubernetes.io/partners/#conformance) includes a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers.
{{% /capture %}}
From a8032bd74e24a44fab7b8b738d5fa1e28c7a315a Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 17 Mar 2020 23:39:44 +0000
Subject: [PATCH 019/105] Drop k3s as a learning environment
"K3s is a highly available, certified Kubernetes distribution designed
for production workloads in unattended, resource-constrained, remote
locations or inside IoT appliances"
The product's own web page doesn't mention using it for learning.
---
content/en/docs/setup/_index.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index 7a99422dee..53c8737439 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -46,7 +46,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
| | [MicroK8s](https://microk8s.io/)|
| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
-| | [k3s](https://k3s.io)|
## Production environment
From 0bf066fa28d6e379d7a8850b96fe9d33542a3813 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 17 Mar 2020 23:43:13 +0000
Subject: [PATCH 020/105] Drop IBM Cloud Private-CE as learning environment
These products don't seem like a good fit for learners.
---
content/en/docs/setup/_index.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index 53c8737439..35882edc4f 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -44,8 +44,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
| | [Minishift](https://docs.okd.io/latest/minishift/)|
| | [MicroK8s](https://microk8s.io/)|
-| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
-| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
## Production environment
From 89c952b91ec0925ba7d387f301f7af481433bade Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 17 Mar 2020 23:47:12 +0000
Subject: [PATCH 021/105] Drop link to CDK on LXD
The page this linked to recommends considering microk8s, so let's omit
this one and leave microk8s.
Why leave microk8s? It's a certified Kubernetes distribution focused on
learning environments, and it's multiplatform (ish).
---
content/en/docs/setup/_index.md | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index 35882edc4f..16702b40f5 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -40,9 +40,8 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
|Community |Ecosystem |
| ------------ | -------- |
-| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) |
-| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
-| | [Minishift](https://docs.okd.io/latest/minishift/)|
+| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
+| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)|
| | [MicroK8s](https://microk8s.io/)|
From 512023837f9527dad6ff139ba105ca3b20f3fef9 Mon Sep 17 00:00:00 2001
From: Karoline Pauls <43616133+karolinepauls@users.noreply.github.com>
Date: Thu, 26 Mar 2020 00:45:01 +0000
Subject: [PATCH 022/105] api-concepts.md: Watch bookmarks
Replaces "an information" and fixes a lost plural.
---
content/en/docs/reference/using-api/api-concepts.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index fbdbf14f87..a0b41aa2d4 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -89,7 +89,7 @@ A given Kubernetes server will only preserve a historical list of changes for a
### Watch bookmarks
-To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to pass an information that all changes up to a given `resourceVersion` client is requesting has already been sent. Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.:
+To mitigate the impact of the short history window, we introduced the concept of a `bookmark` watch event. It is a special kind of event to mark that all changes up to a given `resourceVersion` the client is requesting have already been sent. The object returned in that event is of the type requested by the request, but only the `resourceVersion` field is set, e.g.:
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
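For readability, here is a sketch (expressed as YAML rather than the wire JSON, with an illustrative resourceVersion) of the kind of watch event the paragraph above describes:

```yaml
# A BOOKMARK watch event: only metadata.resourceVersion carries meaning;
# the value shown here is illustrative.
type: BOOKMARK
object:
  kind: Pod
  apiVersion: v1
  metadata:
    resourceVersion: "12746"
```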
---
From dfb8d40026ca64e467a1ddf3ee25fb5858d24f0c Mon Sep 17 00:00:00 2001
From: Rajesh Deshpande
Date: Fri, 20 Mar 2020 17:49:13 +0530
Subject: [PATCH 023/105] Adding example for DaemonSet Rolling Update task
Adding fluentd daemonset example
Creating fluentd daemonset for update
Adding proper description for YAML file
---
.../tasks/manage-daemon/update-daemon-set.md | 80 +++++++++++--------
.../controllers/fluentd-daemonset-update.yaml | 48 +++++++++++
.../controllers/fluentd-daemonset.yaml | 42 ++++++++++
3 files changed, 135 insertions(+), 35 deletions(-)
create mode 100644 content/en/examples/controllers/fluentd-daemonset-update.yaml
create mode 100644 content/en/examples/controllers/fluentd-daemonset.yaml
diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
index 8eec3c1781..ccbce55313 100644
--- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md
+++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
@@ -43,21 +43,43 @@ To enable the rolling update feature of a DaemonSet, you must set its
You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
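As a hedged illustration of those two knobs (the values are arbitrary and not part of this patch), a DaemonSet spec excerpt could look like:

```yaml
# Hypothetical DaemonSet spec excerpt: at most one Pod unavailable during the
# rollout, and a replacement Pod must stay Ready for 30s before the rollout proceeds.
spec:
  minReadySeconds: 30
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```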
+### Creating a DaemonSet with `RollingUpdate` update strategy
-### Step 1: Checking DaemonSet `RollingUpdate` update strategy
+This YAML file specifies a DaemonSet with the update strategy set to 'RollingUpdate':
-First, check the update strategy of your DaemonSet, and make sure it's set to
+{{< codenew file="controllers/fluentd-daemonset.yaml" >}}
+
+After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
+
+```shell
+kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
+
+Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
+update the DaemonSet with `kubectl apply`.
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
+```
+
+### Checking DaemonSet `RollingUpdate` update strategy
+
+Check the update strategy of your DaemonSet, and make sure it's set to
`RollingUpdate`:
```shell
-kubectl get ds/<daemonset-name> -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
```
If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:
```shell
-kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+<<<<<<< HEAD
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+=======
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+>>>>>>> Adding example for DaemonSet Rolling Update task
```
The output from both commands should be:
@@ -69,28 +91,13 @@ RollingUpdate
If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or
manifest accordingly.
-### Step 2: Creating a DaemonSet with `RollingUpdate` update strategy
-If you have already created the DaemonSet, you may skip this step and jump to
-step 3.
-
-After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
-
-```shell
-kubectl create -f ds.yaml
-```
-
-Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
-update the DaemonSet with `kubectl apply`.
-
-```shell
-kubectl apply -f ds.yaml
-```
-
-### Step 3: Updating a DaemonSet template
+### Updating a DaemonSet template
Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
-update. This can be done with several different `kubectl` commands.
+update. Here, you update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands.
+
+{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}}
#### Declarative commands
@@ -99,21 +106,17 @@ If you update DaemonSets using
use `kubectl apply`:
```shell
-kubectl apply -f ds-v2.yaml
+kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```
#### Imperative commands
If you update DaemonSets using
[imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/),
-use `kubectl edit` or `kubectl patch`:
+use `kubectl edit`:
```shell
-kubectl edit ds/<daemonset-name>
-```
-
-```shell
-kubectl patch ds/<daemonset-name> -p=<patch>
+kubectl edit ds/fluentd-elasticsearch -n kube-system
```
##### Updating only the container image
@@ -122,21 +125,21 @@ If you just need to update the container image in the DaemonSet template, i.e.
`.spec.template.spec.containers[*].image`, use `kubectl set image`:
```shell
-kubectl set image ds/<daemonset-name> <container-name>=<container-new-image>
+kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
```
-### Step 4: Watching the rolling update status
+### Watching the rolling update status
Finally, watch the rollout status of the latest DaemonSet rolling update:
```shell
-kubectl rollout status ds/<daemonset-name>
+kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```
When the rollout is complete, the output is similar to this:
```shell
-daemonset "" successfully rolled out
+daemonset "fluentd-elasticsearch" successfully rolled out
```
## Troubleshooting
@@ -156,7 +159,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled o
by comparing the output of `kubectl get nodes` and the output of:
```shell
-kubectl get pods -l = -o wide
+kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
```
Once you've found those nodes, delete some non-DaemonSet pods from the node to
@@ -183,6 +186,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between
master and nodes will make DaemonSet unable to detect the right rollout
progress.
+## Clean up
+
+Delete the DaemonSet from a namespace:
+
+```shell
+kubectl delete ds fluentd-elasticsearch -n kube-system
+```
{{% /capture %}}
diff --git a/content/en/examples/controllers/fluentd-daemonset-update.yaml b/content/en/examples/controllers/fluentd-daemonset-update.yaml
new file mode 100644
index 0000000000..dcf08d4fc9
--- /dev/null
+++ b/content/en/examples/controllers/fluentd-daemonset-update.yaml
@@ -0,0 +1,48 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: fluentd-elasticsearch
+ namespace: kube-system
+ labels:
+ k8s-app: fluentd-logging
+spec:
+ selector:
+ matchLabels:
+ name: fluentd-elasticsearch
+ updateStrategy:
+ type: RollingUpdate
+ rollingUpdate:
+ maxUnavailable: 1
+ template:
+ metadata:
+ labels:
+ name: fluentd-elasticsearch
+ spec:
+ tolerations:
+ # this toleration is to have the daemonset runnable on master nodes
+ # remove it if your masters can't run pods
+ - key: node-role.kubernetes.io/master
+ effect: NoSchedule
+ containers:
+ - name: fluentd-elasticsearch
+ image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
+ resources:
+ limits:
+ memory: 200Mi
+ requests:
+ cpu: 100m
+ memory: 200Mi
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: varlibdockercontainers
+ mountPath: /var/lib/docker/containers
+ readOnly: true
+ terminationGracePeriodSeconds: 30
+ volumes:
+ - name: varlog
+ hostPath:
+ path: /var/log
+ - name: varlibdockercontainers
+ hostPath:
+ path: /var/lib/docker/containers
diff --git a/content/en/examples/controllers/fluentd-daemonset.yaml b/content/en/examples/controllers/fluentd-daemonset.yaml
new file mode 100644
index 0000000000..0e1e7d3345
--- /dev/null
+++ b/content/en/examples/controllers/fluentd-daemonset.yaml
@@ -0,0 +1,42 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: fluentd-elasticsearch
+ namespace: kube-system
+ labels:
+ k8s-app: fluentd-logging
+spec:
+ selector:
+ matchLabels:
+ name: fluentd-elasticsearch
+ updateStrategy:
+ type: RollingUpdate
+ rollingUpdate:
+ maxUnavailable: 1
+ template:
+ metadata:
+ labels:
+ name: fluentd-elasticsearch
+ spec:
+ tolerations:
+ # this toleration is to have the daemonset runnable on master nodes
+ # remove it if your masters can't run pods
+ - key: node-role.kubernetes.io/master
+ effect: NoSchedule
+ containers:
+ - name: fluentd-elasticsearch
+ image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: varlibdockercontainers
+ mountPath: /var/lib/docker/containers
+ readOnly: true
+ terminationGracePeriodSeconds: 30
+ volumes:
+ - name: varlog
+ hostPath:
+ path: /var/log
+ - name: varlibdockercontainers
+ hostPath:
+ path: /var/lib/docker/containers
From f3d82cf167a7c9003302b8e124da089ce1176b46 Mon Sep 17 00:00:00 2001
From: Rajesh Deshpande
Date: Fri, 27 Mar 2020 12:25:05 +0530
Subject: [PATCH 024/105] Removing junk chars
---
content/en/docs/tasks/manage-daemon/update-daemon-set.md | 4 ----
1 file changed, 4 deletions(-)
diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
index ccbce55313..e4640e3ccd 100644
--- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md
+++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
@@ -75,11 +75,7 @@ If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:
```shell
-<<<<<<< HEAD
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
-=======
-kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
->>>>>>> Adding example for DaemonSet Rolling Update task
```
The output from both commands should be:
From 07fd1c617f534c174ed4f3e9482f45c71cdb7c9a Mon Sep 17 00:00:00 2001
From: Alpha
Date: Mon, 30 Mar 2020 11:51:12 +0800
Subject: [PATCH 025/105] add a yaml example for type nodeport
---
.../concepts/services-networking/service.md | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index a62faf1e0f..da9d922b92 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -534,6 +534,25 @@ to just expose one or more nodes' IPs directly.
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
+For example:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ type: NodePort
+ selector:
+ app: MyApp
+ ports:
+ - port: 80
+ # By default and for convenience, the targetPort is set to the same value as the port field.
+ targetPort: 80
+ # `targetPort` is the port of pod, you would like to expose
+ NodePort: 30007
+ # `NodePort` is the port of node, you would like to expose
+```
+
### Type LoadBalancer {#loadbalancer}
On cloud providers which support external load balancers, setting the `type`
From ff4ebc4fea0e2a22fa9c0224058eda62eeb5400b Mon Sep 17 00:00:00 2001
From: Alpha
Date: Mon, 30 Mar 2020 13:28:22 +0800
Subject: [PATCH 026/105] update the yaml example based on review
---
content/en/docs/concepts/services-networking/service.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index da9d922b92..7908f8836d 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -546,11 +546,11 @@ spec:
app: MyApp
ports:
- port: 80
- # By default and for convenience, the targetPort is set to the same value as the port field.
targetPort: 80
- # `targetPort` is the port of pod, you would like to expose
- NodePort: 30007
- # `NodePort` is the port of node, you would like to expose
+ # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
+ nodePort: 30007
+ # Optional field
+ # By default and for convenience, the Kubernetes control plane will allocates a port from a range (default: 30000-32767)
```
### Type LoadBalancer {#loadbalancer}
From 1fda1272a572f9643d7fe568f633d4fe48a751ed Mon Sep 17 00:00:00 2001
From: Alpha
Date: Mon, 30 Mar 2020 13:31:08 +0800
Subject: [PATCH 027/105] Update service.md
---
content/en/docs/concepts/services-networking/service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 7908f8836d..63f7fff957 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -550,7 +550,7 @@ spec:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
nodePort: 30007
# Optional field
- # By default and for convenience, the Kubernetes control plane will allocates a port from a range (default: 30000-32767)
+ # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
```
### Type LoadBalancer {#loadbalancer}
From ee6851e203f4e436cc057c2324d5376fc4373a56 Mon Sep 17 00:00:00 2001
From: Tsahi Duek
Date: Mon, 30 Mar 2020 09:35:24 +0300
Subject: [PATCH 028/105] Language en - fix link for pod-topology-spread
- markdown lint by vscode plugin
---
.../pods/pod-topology-spread-constraints.md | 65 ++++++++++---------
1 file changed, 33 insertions(+), 32 deletions(-)
diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
index 35a373473b..bb1db1907f 100644
--- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
@@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
### Enable Feature Gate
-The `EvenPodsSpread` [feature gate] (/docs/reference/command-line-tools-reference/feature-gates/)
+The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
must be enabled for the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and**
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}.
@@ -62,10 +62,10 @@ metadata:
name: mypod
spec:
topologySpreadConstraints:
- - maxSkew:
- topologyKey:
- whenUnsatisfiable:
- labelSelector:
+ - maxSkew:
+ topologyKey:
+ whenUnsatisfiable:
+ labelSelector:
```
You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
@@ -73,8 +73,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- - `DoNotSchedule` (default) tells the scheduler not to schedule it.
- - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
+ - `DoNotSchedule` (default) tells the scheduler not to schedule it.
+ - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
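As a concrete, hypothetical illustration of these fields (the label key `zone`, the `foo: bar` label, and the pause image follow the examples used elsewhere on this page; they are assumptions, not part of this patch):

```yaml
# Hypothetical Pod spreading replicas labelled foo: bar across zones: the
# difference in matching Pods between any two zones may not exceed 1, and the
# Pod stays Pending rather than violating that constraint.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
```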
@@ -160,29 +160,30 @@ There are some implicit conventions worth noting here:
- Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
- Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that:
- 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA".
- 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
+
+ 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA".
+ 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
- Be aware of what will happen if the incomingPod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels.
- If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.
- Suppose you have a 5-node cluster ranging from zoneA to zoneC:
+ Suppose you have a 5-node cluster ranging from zoneA to zoneC:
- ```
- +---------------+---------------+-------+
- | zoneA | zoneB | zoneC |
- +-------+-------+-------+-------+-------+
- | node1 | node2 | node3 | node4 | node5 |
- +-------+-------+-------+-------+-------+
- | P | P | P | | |
- +-------+-------+-------+-------+-------+
- ```
+ ```
+ +---------------+---------------+-------+
+ | zoneA | zoneB | zoneC |
+ +-------+-------+-------+-------+-------+
+ | node1 | node2 | node3 | node4 | node5 |
+ +-------+-------+-------+-------+-------+
+ | P | P | P | | |
+ +-------+-------+-------+-------+-------+
+ ```
- and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+ and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
+
+ {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
- {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
-
### Cluster-level default constraints
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
@@ -207,16 +208,16 @@ kind: KubeSchedulerConfiguration
profiles:
pluginConfig:
- - name: PodTopologySpread
- args:
- defaultConstraints:
- - maxSkew: 1
- topologyKey: failure-domain.beta.kubernetes.io/zone
- whenUnsatisfiable: ScheduleAnyway
+ - name: PodTopologySpread
+ args:
+ defaultConstraints:
+ - maxSkew: 1
+ topologyKey: failure-domain.beta.kubernetes.io/zone
+ whenUnsatisfiable: ScheduleAnyway
```
{{< note >}}
-The score produced by default scheduling constraints might conflict with the
+The score produced by default scheduling constraints might conflict with the
score produced by the
[`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins).
It is recommended that you disable this plugin in the scheduling profile when
@@ -229,14 +230,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are
scheduled - more packed or more scattered.
- For `PodAffinity`, you can try to pack any number of Pods into qualifying
-topology domain(s)
+ topology domain(s)
- For `PodAntiAffinity`, only one Pod can be scheduled into a
-single topology domain.
+ single topology domain.
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different
topology domains - to achieve high availability or cost-saving. This can also help on rolling update
workloads and scaling out replicas smoothly.
-See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details.
+See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details.
## Known Limitations
From 94ec519f6342c32acfb7306dfe5c8ea19e588435 Mon Sep 17 00:00:00 2001
From: Alpha
Date: Mon, 30 Mar 2020 23:21:25 +0800
Subject: [PATCH 029/105] Update service.md
---
content/en/docs/concepts/services-networking/service.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 63f7fff957..65891d3ccb 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -545,12 +545,12 @@ spec:
selector:
app: MyApp
ports:
+ # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
targetPort: 80
- # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- nodePort: 30007
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
+ nodePort: 30007
```
### Type LoadBalancer {#loadbalancer}
From 56c18bcc080f4f78a5e0a8845fa42f868e8aa440 Mon Sep 17 00:00:00 2001
From: Maksym Vlasov
Date: Tue, 31 Mar 2020 21:14:44 +0300
Subject: [PATCH 030/105] Add minimum requirement content (#3)
* Localize front page
* Localize heading and subheading URLs
Close kubernetes-i18n-ukrainian/website#6
Close kubernetes-i18n-ukrainian/website#11
* Uk translation for k8s basics
Close kubernetes-i18n-ukrainian/website#9
* Uk localization homepage
PR: kubernetes-i18n-ukrainian/website#102
* Buildable version of the Ukrainian website
* Adding localized content of the what-is-kubernetes.md
Close kubernetes-i18n-ukrainian/website#12
* Localizing site strings in toml file
* Localizing tutorials
Close kubernetes-i18n-ukrainian/website#32
* Localizing templates
* Localize glossary terms
Close kubernetes-i18n-ukrainian/website#2
* Create glossary for UK contributors
Close kubernetes-i18n-ukrainian/website#101
* Add butuzov to dream team
Co-authored-by: Anastasiya Kulyk
Co-Authored-By: Maksym Vlasov
Co-authored-by: Oleg Butuzov
---
OWNERS_ALIASES | 2 +
README-uk.md | 4 +-
content/uk/_common-resources/index.md | 3 +
content/uk/_index.html | 85 ++
content/uk/case-studies/_index.html | 13 +
content/uk/docs/_index.md | 3 +
content/uk/docs/concepts/_index.md | 123 ++
.../uk/docs/concepts/configuration/_index.md | 5 +
.../manage-compute-resources-container.md | 623 +++++++++
.../uk/docs/concepts/configuration/secret.md | 1054 +++++++++++++++
content/uk/docs/concepts/overview/_index.md | 4 +
.../concepts/overview/what-is-kubernetes.md | 185 +++
.../concepts/services-networking/_index.md | 4 +
.../services-networking/dual-stack.md | 109 ++
.../services-networking/endpoint-slices.md | 188 +++
.../services-networking/service-topology.md | 127 ++
.../concepts/services-networking/service.md | 1197 +++++++++++++++++
content/uk/docs/concepts/storage/_index.md | 4 +
.../concepts/storage/persistent-volumes.md | 736 ++++++++++
content/uk/docs/concepts/workloads/_index.md | 4 +
.../concepts/workloads/controllers/_index.md | 4 +
.../workloads/controllers/deployment.md | 1152 ++++++++++++++++
.../controllers/jobs-run-to-completion.md | 480 +++++++
.../controllers/replicationcontroller.md | 291 ++++
content/uk/docs/contribute/localization_uk.md | 123 ++
content/uk/docs/home/_index.md | 58 +
.../docs/reference/glossary/applications.md | 16 +
.../glossary/cluster-infrastructure.md | 17 +
.../reference/glossary/cluster-operations.md | 17 +
content/uk/docs/reference/glossary/cluster.md | 22 +
.../docs/reference/glossary/control-plane.md | 17 +
.../uk/docs/reference/glossary/data-plane.md | 17 +
.../uk/docs/reference/glossary/deployment.md | 23 +
content/uk/docs/reference/glossary/index.md | 17 +
.../docs/reference/glossary/kube-apiserver.md | 29 +
.../glossary/kube-controller-manager.md | 22 +
.../uk/docs/reference/glossary/kube-proxy.md | 33 +
.../docs/reference/glossary/kube-scheduler.md | 22 +
content/uk/docs/reference/glossary/kubelet.md | 23 +
content/uk/docs/reference/glossary/pod.md | 23 +
.../uk/docs/reference/glossary/selector.md | 22 +
content/uk/docs/reference/glossary/service.md | 24 +
content/uk/docs/setup/_index.md | 136 ++
.../horizontal-pod-autoscale.md | 293 ++++
.../uk/docs/templates/feature-state-alpha.txt | 7 +
.../uk/docs/templates/feature-state-beta.txt | 22 +
.../templates/feature-state-deprecated.txt | 4 +
.../docs/templates/feature-state-stable.txt | 11 +
content/uk/docs/templates/index.md | 15 +
content/uk/docs/tutorials/_index.md | 90 ++
content/uk/docs/tutorials/hello-minikube.md | 394 ++++++
.../tutorials/kubernetes-basics/_index.html | 138 ++
.../create-cluster/_index.md | 4 +
.../create-cluster/cluster-interactive.html | 37 +
.../create-cluster/cluster-intro.html | 152 +++
.../kubernetes-basics/deploy-app/_index.md | 4 +
.../deploy-app/deploy-interactive.html | 41 +
.../deploy-app/deploy-intro.html | 151 +++
.../kubernetes-basics/explore/_index.md | 4 +
.../explore/explore-interactive.html | 41 +
.../explore/explore-intro.html | 200 +++
.../kubernetes-basics/expose/_index.md | 4 +
.../expose/expose-interactive.html | 38 +
.../expose/expose-intro.html | 169 +++
.../kubernetes-basics/scale/_index.md | 4 +
.../scale/scale-interactive.html | 40 +
.../kubernetes-basics/scale/scale-intro.html | 145 ++
.../kubernetes-basics/update/_index.md | 4 +
.../update/update-interactive.html | 37 +
.../update/update-intro.html | 168 +++
content/uk/examples/controllers/job.yaml | 14 +
.../controllers/nginx-deployment.yaml | 21 +
.../uk/examples/controllers/replication.yaml | 19 +
content/uk/examples/minikube/Dockerfile | 4 +
content/uk/examples/minikube/server.js | 9 +
.../networking/dual-stack-default-svc.yaml | 11 +
.../networking/dual-stack-ipv4-svc.yaml | 12 +
.../networking/dual-stack-ipv6-lb-svc.yaml | 15 +
.../networking/dual-stack-ipv6-svc.yaml | 12 +
i18n/uk.toml | 247 ++++
80 files changed, 9640 insertions(+), 2 deletions(-)
create mode 100644 content/uk/_common-resources/index.md
create mode 100644 content/uk/_index.html
create mode 100644 content/uk/case-studies/_index.html
create mode 100644 content/uk/docs/_index.md
create mode 100644 content/uk/docs/concepts/_index.md
create mode 100644 content/uk/docs/concepts/configuration/_index.md
create mode 100644 content/uk/docs/concepts/configuration/manage-compute-resources-container.md
create mode 100644 content/uk/docs/concepts/configuration/secret.md
create mode 100644 content/uk/docs/concepts/overview/_index.md
create mode 100644 content/uk/docs/concepts/overview/what-is-kubernetes.md
create mode 100644 content/uk/docs/concepts/services-networking/_index.md
create mode 100644 content/uk/docs/concepts/services-networking/dual-stack.md
create mode 100644 content/uk/docs/concepts/services-networking/endpoint-slices.md
create mode 100644 content/uk/docs/concepts/services-networking/service-topology.md
create mode 100644 content/uk/docs/concepts/services-networking/service.md
create mode 100644 content/uk/docs/concepts/storage/_index.md
create mode 100644 content/uk/docs/concepts/storage/persistent-volumes.md
create mode 100644 content/uk/docs/concepts/workloads/_index.md
create mode 100644 content/uk/docs/concepts/workloads/controllers/_index.md
create mode 100644 content/uk/docs/concepts/workloads/controllers/deployment.md
create mode 100644 content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md
create mode 100644 content/uk/docs/concepts/workloads/controllers/replicationcontroller.md
create mode 100644 content/uk/docs/contribute/localization_uk.md
create mode 100644 content/uk/docs/home/_index.md
create mode 100644 content/uk/docs/reference/glossary/applications.md
create mode 100644 content/uk/docs/reference/glossary/cluster-infrastructure.md
create mode 100644 content/uk/docs/reference/glossary/cluster-operations.md
create mode 100644 content/uk/docs/reference/glossary/cluster.md
create mode 100644 content/uk/docs/reference/glossary/control-plane.md
create mode 100644 content/uk/docs/reference/glossary/data-plane.md
create mode 100644 content/uk/docs/reference/glossary/deployment.md
create mode 100644 content/uk/docs/reference/glossary/index.md
create mode 100644 content/uk/docs/reference/glossary/kube-apiserver.md
create mode 100644 content/uk/docs/reference/glossary/kube-controller-manager.md
create mode 100644 content/uk/docs/reference/glossary/kube-proxy.md
create mode 100644 content/uk/docs/reference/glossary/kube-scheduler.md
create mode 100644 content/uk/docs/reference/glossary/kubelet.md
create mode 100644 content/uk/docs/reference/glossary/pod.md
create mode 100644 content/uk/docs/reference/glossary/selector.md
create mode 100755 content/uk/docs/reference/glossary/service.md
create mode 100644 content/uk/docs/setup/_index.md
create mode 100644 content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md
create mode 100644 content/uk/docs/templates/feature-state-alpha.txt
create mode 100644 content/uk/docs/templates/feature-state-beta.txt
create mode 100644 content/uk/docs/templates/feature-state-deprecated.txt
create mode 100644 content/uk/docs/templates/feature-state-stable.txt
create mode 100644 content/uk/docs/templates/index.md
create mode 100644 content/uk/docs/tutorials/_index.md
create mode 100644 content/uk/docs/tutorials/hello-minikube.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/_index.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/_index.md
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html
create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html
create mode 100644 content/uk/examples/controllers/job.yaml
create mode 100644 content/uk/examples/controllers/nginx-deployment.yaml
create mode 100644 content/uk/examples/controllers/replication.yaml
create mode 100644 content/uk/examples/minikube/Dockerfile
create mode 100644 content/uk/examples/minikube/server.js
create mode 100644 content/uk/examples/service/networking/dual-stack-default-svc.yaml
create mode 100644 content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml
create mode 100644 content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml
create mode 100644 content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml
create mode 100644 i18n/uk.toml
diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index ed3ccdbb96..9046573658 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -226,9 +226,11 @@ aliases:
- kpucynski
sig-docs-uk-owners: # Admins for Ukrainian content
- anastyakulyk
+ - butuzov
- MaxymVlasov
sig-docs-uk-reviews: # PR reviews for Ukrainian content
- anastyakulyk
+ - butuzov
- idvoretskyi
- MaxymVlasov
- Potapy4
diff --git a/README-uk.md b/README-uk.md
index 68d3b0db0a..43d782e09a 100644
--- a/README-uk.md
+++ b/README-uk.md
@@ -39,7 +39,7 @@ make docker-image
make docker-serve
```
-Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
## Запуск сайту локально зa допомогою Hugo
@@ -51,7 +51,7 @@ make docker-serve
make serve
```
-Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
## Спільнота, обговорення, внесок і підтримка
diff --git a/content/uk/_common-resources/index.md b/content/uk/_common-resources/index.md
new file mode 100644
index 0000000000..ca03031f1e
--- /dev/null
+++ b/content/uk/_common-resources/index.md
@@ -0,0 +1,3 @@
+---
+headless: true
+---
diff --git a/content/uk/_index.html b/content/uk/_index.html
new file mode 100644
index 0000000000..02df4d395d
--- /dev/null
+++ b/content/uk/_index.html
@@ -0,0 +1,85 @@
+---
+title: "Довершена система оркестрації контейнерів"
+abstract: "Автоматичне розгортання, масштабування і управління контейнерами"
+cid: home
+---
+
+{{< announcement >}}
+
+{{< deprecationwarning >}}
+
+{{< blocks/section id="oceanNodes" >}}
+{{% blocks/feature image="flower" %}}
+
+### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - це система з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками.
+
+
+Вона об'єднує контейнери, що утворюють застосунок, у логічні елементи для легкого управління і виявлення. В основі Kubernetes - [15 років досвіду запуску і виконання застосунків у продуктивних середовищах Google](http://queue.acm.org/detail.cfm?id=2898444), поєднані з найкращими ідеями і практиками від спільноти.
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="scalable" %}}
+
+#### Глобальне масштабування
+
+
+Заснований на тих самих принципах, завдяки яким Google запускає мільярди контейнерів щотижня, Kubernetes масштабується без потреби збільшення вашого штату з експлуатації.
+
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="blocks" %}}
+
+#### Невичерпна функціональність
+
+
+Запущений для локального тестування чи у глобальній корпорації, Kubernetes динамічно зростатиме з вами, забезпечуючи регулярну і легку доставку ваших застосунків незалежно від рівня складності ваших потреб.
+
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="suitcase" %}}
+
+#### Працює всюди
+
+
+Kubernetes - проект з відкритим вихідним кодом. Він дозволяє скористатися перевагами локальної, гібридної чи хмарної інфраструктури, щоб легко переміщати застосунки туди, куди вам потрібно.
+
+{{% /blocks/feature %}}
+
+{{< /blocks/section >}}
+
+{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
+
+
+
+Проблеми міграції 150+ мікросервісів у Kubernetes
+
+
+Сара Уеллз, технічний директор з експлуатації і безпеки роботи, Financial Times
+
+Переглянути відео
+
+
+
+
+Відвідати KubeCon в Амстердамі, 30.03-02.04 2020
+
+
+
+
+
+Відвідати KubeCon у Шанхаї, 28-30 липня 2020
+
+
+
+
+
+{{< /blocks/section >}}
+
+{{< blocks/kubernetes-features >}}
+
+{{< blocks/case-studies >}}
diff --git a/content/uk/case-studies/_index.html b/content/uk/case-studies/_index.html
new file mode 100644
index 0000000000..6c9c75fc44
--- /dev/null
+++ b/content/uk/case-studies/_index.html
@@ -0,0 +1,13 @@
+---
+# title: Case Studies
+title: Приклади використання
+# linkTitle: Case Studies
+linkTitle: Приклади використання
+# bigheader: Kubernetes User Case Studies
+bigheader: Приклади використання Kubernetes від користувачів.
+# abstract: A collection of users running Kubernetes in production.
+abstract: Підбірка користувачів, що використовують Kubernetes для робочих навантажень.
+layout: basic
+class: gridPage
+cid: caseStudies
+---
diff --git a/content/uk/docs/_index.md b/content/uk/docs/_index.md
new file mode 100644
index 0000000000..a601666b67
--- /dev/null
+++ b/content/uk/docs/_index.md
@@ -0,0 +1,3 @@
+---
+title: Документація
+---
diff --git a/content/uk/docs/concepts/_index.md b/content/uk/docs/concepts/_index.md
new file mode 100644
index 0000000000..695068aa4a
--- /dev/null
+++ b/content/uk/docs/concepts/_index.md
@@ -0,0 +1,123 @@
+---
+title: Концепції
+main_menu: true
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+
+В розділі "Концепції" описані складові системи Kubernetes і абстракції, за допомогою яких Kubernetes реалізовує ваш {{< glossary_tooltip text="кластер" term_id="cluster" length="all" >}}. Цей розділ допоможе вам краще зрозуміти, як працює Kubernetes.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## Загальна інформація
+
+
+Для роботи з Kubernetes ви використовуєте *об'єкти API Kubernetes* для того, щоб описати *бажаний стан* вашого кластера: які застосунки або інші робочі навантаження ви плануєте запускати, які образи контейнерів вони використовують, кількість реплік, скільки ресурсів мережі та диску ви хочете виділити тощо. Ви задаєте бажаний стан, створюючи об'єкти в Kubernetes API, зазвичай через інтерфейс командного рядка `kubectl`. Ви також можете взаємодіяти із кластером, задавати або змінювати його бажаний стан безпосередньо через Kubernetes API.
+
+
+Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Генератора подій життєвого циклу Пода ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері:
+
+
+
+* **Kubernetes master** становить собою набір із трьох процесів, запущених на одному вузлі вашого кластера, що визначений як керівний (master). До цих процесів належать: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) і [kube-scheduler](/docs/admin/kube-scheduler/).
+* На кожному не-мастер вузлі вашого кластера виконуються два процеси:
+ * **[kubelet](/docs/admin/kubelet/)**, що обмінюється даними з Kubernetes master.
+ * **[kube-proxy](/docs/admin/kube-proxy/)**, мережевий проксі, що відображає мережеві сервіси Kubernetes на кожному вузлі.
+
+
+
+## Об'єкти Kubernetes
+
+
+Kubernetes оперує певною кількістю абстракцій, що відображають стан вашої системи: розгорнуті у контейнерах застосунки та робочі навантаження, пов'язані з ними ресурси мережі та диску, інша інформація щодо функціонування вашого кластера. Ці абстракції представлені як об'єкти Kubernetes API. Для більш детальної інформації ознайомтесь з [Об'єктами Kubernetes](/docs/concepts/overview/working-with-objects/kubernetes-objects/).
+
+
+До базових об'єктів Kubernetes належать:
+
+* [Под *(Pod)*](/docs/concepts/workloads/pods/pod-overview/)
+* [Сервіс *(Service)*](/docs/concepts/services-networking/service/)
+* [Volume](/docs/concepts/storage/volumes/)
+* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
+
+
+В Kubernetes є також абстракції вищого рівня, які надбудовуються над базовими об'єктами за допомогою [контролерів](/docs/concepts/architecture/controller/) і забезпечують додаткову функціональність і зручність. До них належать:
+
+* [Deployment](/docs/concepts/workloads/controllers/deployment/)
+* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
+* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
+* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
+
+
+
+## Площина управління Kubernetes (*Kubernetes Control Plane*)
+
+
+Різні частини площини управління Kubernetes, такі як Kubernetes Master і kubelet, регулюють, як Kubernetes спілкується з вашим кластером. Площина управління веде облік усіх об'єктів Kubernetes в системі та безперервно, в циклі перевіряє стан цих об'єктів. У будь-який момент часу контрольні цикли, запущені площиною управління, реагуватимуть на зміни у кластері і намагатимуться привести поточний стан об'єктів до бажаного, що заданий у конфігурації.
+
+
+Наприклад, коли за допомогою API Kubernetes ви створюєте Deployment, ви задаєте новий бажаний стан для системи. Площина управління Kubernetes фіксує створення цього об'єкта і виконує ваші інструкції шляхом запуску потрібних застосунків та їх розподілу між вузлами кластера. В такий спосіб досягається відповідність поточного стану бажаному.
+
+
+
+### Kubernetes Master
+
+
+Kubernetes Master відповідає за підтримку бажаного стану вашого кластера. Щоразу, як ви взаємодієте з Kubernetes, наприклад при використанні інтерфейсу командного рядка `kubectl`, ви обмінюєтесь даними із Kubernetes master вашого кластера.
+
+
+Слово "master" стосується набору процесів, які управляють станом кластера. Переважно всі ці процеси виконуються на одному вузлі кластера, який також називається master. Master-вузол можна реплікувати для забезпечення високої доступності кластера.
+
+
+
+### Вузли Kubernetes
+
+
+Вузлами кластера називають машини (ВМ, фізичні сервери тощо), на яких запущені ваші застосунки та хмарні робочі навантаження. Кожен вузол керується Kubernetes master; ви лише зрідка взаємодіятимете безпосередньо із вузлами.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+Якщо ви хочете створити нову сторінку у розділі Концепції, у статті
+[Використання шаблонів сторінок](/docs/home/contribute/page-templates/)
+ви знайдете інформацію щодо типу і шаблона сторінки.
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/configuration/_index.md b/content/uk/docs/concepts/configuration/_index.md
new file mode 100644
index 0000000000..588d144f6e
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Конфігурація"
+weight: 80
+---
+
diff --git a/content/uk/docs/concepts/configuration/manage-compute-resources-container.md b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md
new file mode 100644
index 0000000000..a90b224f8c
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md
@@ -0,0 +1,623 @@
+---
+title: Managing Compute Resources for Containers
+content_template: templates/concept
+weight: 20
+feature:
+ # title: Automatic bin packing
+ title: Автоматичне пакування у контейнери
+ # description: >
+ # Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
+ description: >
+ Автоматичне розміщення контейнерів з огляду на їхні потреби у ресурсах та інші обмеження, при цьому не поступаючись доступністю. Поєднання критичних і "найкращих з можливих" робочих навантажень для ефективнішого використання і більшого заощадження ресурсів.
+---
+
+{{% capture overview %}}
+
+When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how
+much CPU and memory (RAM) each Container needs. When Containers have resource
+requests specified, the scheduler can make better decisions about which nodes to
+place Pods on. And when Containers have their limits specified, contention for
+resources on a node can be handled in a specified manner. For more details about
+the difference between requests and limits, see
+[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Resource types
+
+*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
+CPU is specified in units of cores, and memory is specified in units of bytes.
+If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources.
+Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory
+that are much larger than the default page size.
+
+For example, on a system where the default page size is 4KiB, you could specify a limit,
+`hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a
+total of 80 MiB), that allocation fails.
+
+{{< note >}}
+You cannot overcommit `hugepages-*` resources.
+This is different from the `memory` and `cpu` resources.
+{{< /note >}}
+
+CPU and memory are collectively referred to as *compute resources*, or just
+*resources*. Compute
+resources are measurable quantities that can be requested, allocated, and
+consumed. They are distinct from
+[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and
+[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified
+through the Kubernetes API server.
+
+## Resource requests and limits of Pod and Container
+
+Each Container of a Pod can specify one or more of the following:
+
+* `spec.containers[].resources.limits.cpu`
+* `spec.containers[].resources.limits.memory`
+* `spec.containers[].resources.limits.hugepages-`
+* `spec.containers[].resources.requests.cpu`
+* `spec.containers[].resources.requests.memory`
+* `spec.containers[].resources.requests.hugepages-`
+
+Although requests and limits can only be specified on individual Containers, it
+is convenient to talk about Pod resource requests and limits. A
+*Pod resource request/limit* for a particular resource type is the sum of the
+resource requests/limits of that type for each Container in the Pod.
+
+
+## Meaning of CPU
+
+Limits and requests for CPU resources are measured in *cpu* units.
+One cpu, in Kubernetes, is equivalent to:
+
+- 1 AWS vCPU
+- 1 GCP Core
+- 1 Azure vCore
+- 1 IBM vCPU
+- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading
+
+Fractional requests are allowed. A Container with
+`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much
+CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the
+expression `100m`, which can be read as "one hundred millicpu". Some people say
+"one hundred millicores", and this is understood to mean the same thing. A
+request with a decimal point, like `0.1`, is converted to `100m` by the API, and
+precision finer than `1m` is not allowed. For this reason, the form `100m` might
+be preferred.
+
+CPU is always requested as an absolute quantity, never as a relative quantity;
+0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
+
+## Meaning of memory
+
+Limits and requests for `memory` are measured in bytes. You can express memory as
+a plain integer or as a fixed-point integer using one of these suffixes:
+E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following represent roughly the same value:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
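+
+As a quick sanity check of why these are roughly the same value, you can expand the suffixes yourself (a small illustrative calculation, not part of the original example):
+
+```shell
+# 123Mi = 123 * 2^20 bytes
+echo $((123 * 1024 * 1024))   # prints 128974848
+# 129M = 129 * 10^6 bytes
+echo $((129 * 1000 * 1000))   # prints 129000000
+```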
+
+Here's an example.
+The following Pod has two Containers. Each Container has a request of 0.25 cpu
+and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
+cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
+MiB of memory, and a limit of 1 cpu and 256MiB of memory.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: frontend
+spec:
+ containers:
+ - name: db
+ image: mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ value: "password"
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+ - name: wp
+ image: wordpress
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+```
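+
+If you want to confirm what was recorded for this Pod, one possible way (assuming the Pod above has already been created in the current namespace) is to query the containers' `resources` blocks:
+
+```shell
+# Print each container's name and its recorded resources block
+kubectl get pod frontend -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
+```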
+
+## How Pods with resource requests are scheduled
+
+When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
+run on. Each node has a maximum capacity for each of the resource types: the
+amount of CPU and memory it can provide for Pods. The scheduler ensures that,
+for each resource type, the sum of the resource requests of the scheduled
+Containers is less than the capacity of the node. Note that although actual memory
+or CPU resource usage on nodes is very low, the scheduler still refuses to place
+a Pod on a node if the capacity check fails. This protects against a resource
+shortage on a node when resource usage later increases, for example, during a
+daily peak in request rate.
+
+## How Pods with resource limits are run
+
+When the kubelet starts a Container of a Pod, it passes the CPU and memory limits
+to the container runtime.
+
+When using Docker:
+
+- The `spec.containers[].resources.requests.cpu` is converted to its core value,
+ which is potentially fractional, and multiplied by 1024. The greater of this number
+ or 2 is used as the value of the
+ [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint)
+ flag in the `docker run` command.
+
+- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and
+ multiplied by 100. The resulting value is the total amount of CPU time that a container can use
+ every 100ms. A container cannot use more than its share of CPU time during this interval.
+
+ {{< note >}}
+ The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
+ {{< /note >}}
+
+- The `spec.containers[].resources.limits.memory` is converted to an integer, and
+ used as the value of the
+ [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
+ flag in the `docker run` command.
+
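+As a rough worked example, using the request/limit values from the Pod above (250m request, 500m limit, 128Mi memory) and assuming the default 100ms CPU period, the values handed to Docker would come out to:
+
+```shell
+echo $((1024 / 4))            # --cpu-shares: 0.25 core * 1024 = 256
+echo $((500 * 100))           # CPU quota: 500 millicores * 100 = 50000 (microseconds per 100ms period)
+echo $((128 * 1024 * 1024))   # --memory: 128Mi = 134217728 bytes
+```
+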
+If a Container exceeds its memory limit, it might be terminated. If it is
+restartable, the kubelet will restart it, as with any other type of runtime
+failure.
+
+If a Container exceeds its memory request, it is likely that its Pod will
+be evicted whenever the node runs out of memory.
+
+A Container might or might not be allowed to exceed its CPU limit for extended
+periods of time. However, it will not be killed for excessive CPU usage.
+
+To determine whether a Container cannot be scheduled or is being killed due to
+resource limits, see the
+[Troubleshooting](#troubleshooting) section.
+
+## Monitoring compute resource usage
+
+The resource usage of a Pod is reported as part of the Pod status.
+
+If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md)
+is configured for your cluster, then Pod resource usage can be retrieved from
+the monitoring system.
+
+## Troubleshooting
+
+### My Pods are pending with event message failedScheduling
+
+If the scheduler cannot find any node where a Pod can fit, the Pod remains
+unscheduled until a place can be found. An event is produced each time the
+scheduler fails to find a place for the Pod, like this:
+
+```shell
+kubectl describe pod frontend | grep -A 3 Events
+```
+```
+Events:
+ FirstSeen LastSeen Count From Subobject PathReason Message
+ 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
+```
+
+In the preceding example, the Pod named "frontend" fails to be scheduled due to
+insufficient CPU resource on the node. Similar error messages can also suggest
+failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod
+is pending with a message of this type, there are several things to try:
+
+- Add more nodes to the cluster.
+- Terminate unneeded Pods to make room for pending Pods.
+- Check that the Pod is not larger than all the nodes. For example, if all the
+ nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will
+ never be scheduled.
+
+You can check node capacities and amounts allocated with the
+`kubectl describe nodes` command. For example:
+
+```shell
+kubectl describe nodes e2e-test-node-pool-4lw4
+```
+```
+Name: e2e-test-node-pool-4lw4
+[ ... lines removed for clarity ...]
+Capacity:
+ cpu: 2
+ memory: 7679792Ki
+ pods: 110
+Allocatable:
+ cpu: 1800m
+ memory: 7474992Ki
+ pods: 110
+[ ... lines removed for clarity ...]
+Non-terminated Pods: (5 in total)
+ Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
+ --------- ---- ------------ ---------- --------------- -------------
+ kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
+ kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)
+ kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)
+ kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)
+ kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
+Allocated resources:
+ (Total limits may be over 100 percent, i.e., overcommitted.)
+ CPU Requests CPU Limits Memory Requests Memory Limits
+ ------------ ---------- --------------- -------------
+ 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
+```
+
+In the preceding output, you can see that if a Pod requests more than 1120m
+CPUs or 6.23Gi of memory, it will not fit on the node.
+
+By looking at the `Pods` section, you can see which Pods are taking up space on
+the node.
+
+The amount of resources available to Pods is less than the node capacity, because
+system daemons use a portion of the available resources. The `allocatable` field in
+[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
+gives the amount of resources that are available to Pods. For more information, see
+[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md).
+
+The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
+to limit the total amount of resources that can be consumed. If used in conjunction
+with namespaces, it can prevent one team from hogging all the resources.
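+
+For instance, a minimal sketch of such a quota might look like the following (the namespace name and the numbers are made up for illustration, and the namespace is assumed to exist already):
+
+```shell
+# Illustrative ResourceQuota capping the total requests and limits in one namespace
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: compute-quota
+  namespace: team-a
+spec:
+  hard:
+    requests.cpu: "10"
+    requests.memory: 20Gi
+    limits.cpu: "20"
+    limits.memory: 40Gi
+EOF
+```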
+
+### My Container is terminated
+
+Your Container might get terminated because it is resource-starved. To check
+whether a Container is being killed because it is hitting a resource limit, call
+`kubectl describe pod` on the Pod of interest:
+
+```shell
+kubectl describe pod simmemleak-hra99
+```
+```
+Name: simmemleak-hra99
+Namespace: default
+Image(s): saadali/simmemleak
+Node: kubernetes-node-tf0f/10.240.216.66
+Labels: name=simmemleak
+Status: Running
+Reason:
+Message:
+IP: 10.244.2.75
+Replication Controllers: simmemleak (1/1 replicas created)
+Containers:
+ simmemleak:
+ Image: saadali/simmemleak
+ Limits:
+ cpu: 100m
+ memory: 50Mi
+ State: Running
+ Started: Tue, 07 Jul 2015 12:54:41 -0700
+ Last Termination State: Terminated
+ Exit Code: 1
+ Started: Fri, 07 Jul 2015 12:54:30 -0700
+ Finished: Fri, 07 Jul 2015 12:54:33 -0700
+ Ready: False
+ Restart Count: 5
+Conditions:
+ Type Status
+ Ready False
+Events:
+ FirstSeen LastSeen Count From SubobjectPath Reason Message
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
+```
+
+In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
+Container in the Pod was terminated and restarted five times.
+
+You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
+of previously terminated Containers:
+
+```shell
+kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
+```
+```
+Container Name: simmemleak
+LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
+```
+
+You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
+
+## Local ephemeral storage
+{{< feature-state state="beta" >}}
+
+Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_, for managing local ephemeral storage. On each Kubernetes node, the kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.
+
+This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope.
+
+{{< note >}}
+If an optional runtime partition is used, the root partition will not hold any image layers or writable layers.
+{{< /note >}}
+
+### Requests and limits setting for local ephemeral storage
+Each Container of a Pod can specify one or more of the following:
+
+* `spec.containers[].resources.limits.ephemeral-storage`
+* `spec.containers[].resources.requests.ephemeral-storage`
+
+Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as
+a plain integer or as a fixed-point integer using one of these suffixes:
+E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following represent roughly the same value:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
+
+For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: frontend
+spec:
+ containers:
+ - name: db
+ image: mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ value: "password"
+ resources:
+ requests:
+ ephemeral-storage: "2Gi"
+ limits:
+ ephemeral-storage: "4Gi"
+ - name: wp
+ image: wordpress
+ resources:
+ requests:
+ ephemeral-storage: "2Gi"
+ limits:
+ ephemeral-storage: "4Gi"
+```
+
+### How Pods with ephemeral-storage requests are scheduled
+
+When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
+run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
+
+The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node.
+
+### How Pods with ephemeral-storage limits run
+
+For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted.
+
+### Monitoring ephemeral-storage consumption
+
+When local ephemeral storage is used, it is monitored on an ongoing
+basis by the kubelet. The monitoring is performed by scanning each
+emptyDir volume, log directories, and writable layers on a periodic
+basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log
+directories or writable layers) may, at the cluster operator's option,
+be managed by use of [project
+quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html).
+Project quotas were originally implemented in XFS, and have more
+recently been ported to ext4fs. Project quotas can be used for both
+monitoring and enforcement; as of Kubernetes 1.16, they are available
+as alpha functionality for monitoring only.
+
+Quotas are faster and more accurate than directory scanning. When a
+directory is assigned to a project, all files created under a
+directory are created in that project, and the kernel merely has to
+keep track of how many blocks are in use by files in that project. If
+a file is created and deleted, but with an open file descriptor, it
+continues to consume space. This space will be tracked by the quota,
+but will not be seen by a directory scan.
+
+Kubernetes uses project IDs starting from 1048576. The IDs in use are
+registered in `/etc/projects` and `/etc/projid`. If project IDs in
+this range are used for other purposes on the system, those project
+IDs must be registered in `/etc/projects` and `/etc/projid` to prevent
+Kubernetes from using them.
+
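+As an illustration of what such registrations look like, the snippet below reserves a hypothetical project ID for a non-Kubernetes purpose (the path and project name are made up; the file formats follow projects(5) and projid(5)):
+
+```shell
+# Reserve project ID 1048600 for a non-Kubernetes directory so the kubelet skips it
+echo "1048600:/srv/backups" | sudo tee -a /etc/projects
+echo "backups:1048600"      | sudo tee -a /etc/projid
+```
+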
+To enable use of project quotas, the cluster operator must do the
+following:
+
+* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
+ feature gate in the kubelet configuration. This defaults to `false`
+ in Kubernetes 1.16, so must be explicitly set to `true`.
+
+* Ensure that the root partition (or optional runtime partition) is
+ built with project quotas enabled. All XFS filesystems support
+ project quotas, but ext4 filesystems must be built specially.
+
+* Ensure that the root partition (or optional runtime partition) is
+ mounted with project quotas enabled.
+
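+For the first item above, a minimal sketch of the kubelet side could look like this (the file path is only a common default; adjust it to however your kubelet is actually configured):
+
+```shell
+# Append the feature gate to a file-based kubelet configuration (common default path shown)
+cat <<EOF >>/var/lib/kubelet/config.yaml
+featureGates:
+  LocalStorageCapacityIsolationFSQuotaMonitoring: true
+EOF
+```
+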
+#### Building and mounting filesystems with project quotas enabled
+
+XFS filesystems require no special action when building; they are
+automatically built with project quotas enabled.
+
+Ext4fs filesystems must be built with quotas enabled, then they must
+be enabled in the filesystem:
+
+```
+% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device
+% sudo tune2fs -O project -Q prjquota /dev/block_device
+
+```
+
+To mount the filesystem, both ext4fs and XFS require the `prjquota`
+option set in `/etc/fstab`:
+
+```
+/dev/block_device /var/kubernetes_data defaults,prjquota 0 0
+```
+
+
+## Extended resources
+
+Extended resources are fully-qualified resource names outside the
+`kubernetes.io` domain. They allow cluster operators to advertise and users to
+consume the non-Kubernetes-built-in resources.
+
+There are two steps required to use Extended Resources. First, the cluster
+operator must advertise an Extended Resource. Second, users must request the
+Extended Resource in Pods.
+
+### Managing extended resources
+
+#### Node-level extended resources
+
+Node-level extended resources are tied to nodes.
+
+##### Device plugin managed resources
+See [Device
+Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
+for how to advertise device plugin managed resources on each node.
+
+##### Other resources
+To advertise a new node-level extended resource, the cluster operator can
+submit a `PATCH` HTTP request to the API server to specify the available
+quantity in the `status.capacity` for a node in the cluster. After this
+operation, the node's `status.capacity` will include a new resource. The
+`status.allocatable` field is updated automatically with the new resource
+asynchronously by the kubelet. Note that because the scheduler uses the node
+`status.allocatable` value when evaluating Pod fitness, there may be a short
+delay between patching the node capacity with a new resource and the first Pod
+that requests the resource to be scheduled on that node.
+
+**Example:**
+
+Here is an example showing how to use `curl` to form an HTTP request that
+advertises five "example.com/foo" resources on node `k8s-node-1` whose master
+is `k8s-master`.
+
+```shell
+curl --header "Content-Type: application/json-patch+json" \
+--request PATCH \
+--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \
+http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
+```
+
+{{< note >}}
+In the preceding request, `~1` is the encoding for the character `/`
+in the patch path. The operation path value in JSON-Patch is interpreted as a
+JSON-Pointer. For more details, see
+[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
+{{< /note >}}
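+
+Once the PATCH succeeds, you can verify that the node now advertises the new resource; one simple way, reusing the node and resource names from the example, is:
+
+```shell
+# The node's capacity should now list the extended resource
+kubectl describe node k8s-node-1 | grep example.com/foo
+```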
+
+#### Cluster-level extended resources
+
+Cluster-level extended resources are not tied to nodes. They are usually managed
+by scheduler extenders, which handle the resource consumption and resource quota.
+
+You can specify the extended resources that are handled by scheduler extenders
+in [scheduler policy
+configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
+
+**Example:**
+
+The following configuration for a scheduler policy indicates that the
+cluster-level extended resource "example.com/foo" is handled by the scheduler
+extender.
+
+- The scheduler sends a Pod to the scheduler extender only if the Pod requests
+ "example.com/foo".
+- The `ignoredByScheduler` field specifies that the scheduler does not check
+ the "example.com/foo" resource in its `PodFitsResources` predicate.
+
+```json
+{
+ "kind": "Policy",
+ "apiVersion": "v1",
+ "extenders": [
+ {
+      "urlPrefix":"<extender-endpoint>",
+ "bindVerb": "bind",
+ "managedResources": [
+ {
+ "name": "example.com/foo",
+ "ignoredByScheduler": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Consuming extended resources
+
+Users can consume extended resources in Pod specs just like CPU and memory.
+The scheduler takes care of the resource accounting so that no more than the
+available amount is simultaneously allocated to Pods.
+
+The API server restricts quantities of extended resources to whole numbers.
+Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
+_invalid_ quantities are `0.5` and `1500m`.
+
+{{< note >}}
+Extended resources replace Opaque Integer Resources.
+Users can use any domain name prefix other than `kubernetes.io` which is reserved.
+{{< /note >}}
+
+To consume an extended resource in a Pod, include the resource name as a key
+in the `spec.containers[].resources.limits` map in the container spec.
+
+{{< note >}}
+Extended resources cannot be overcommitted, so request and limit
+must be equal if both are present in a container spec.
+{{< /note >}}
+
+A Pod is scheduled only if all of the resource requests are satisfied, including
+CPU, memory and any extended resources. The Pod remains in the `PENDING` state
+as long as the resource request cannot be satisfied.
+
+**Example:**
+
+The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource).
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod
+spec:
+ containers:
+ - name: my-container
+ image: myimage
+ resources:
+ requests:
+ cpu: 2
+ example.com/foo: 1
+ limits:
+ example.com/foo: 1
+```
+
+
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+
+* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
+
+* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
+
+* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
+
+* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/configuration/secret.md b/content/uk/docs/concepts/configuration/secret.md
new file mode 100644
index 0000000000..6261650692
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/secret.md
@@ -0,0 +1,1054 @@
+---
+reviewers:
+- mikedanese
+title: Secrets
+content_template: templates/concept
+feature:
+ title: Управління секретами та конфігурацією
+ description: >
+ Розгортайте та оновлюйте секрети та конфігурацію застосунку без перезбирання образів, не розкриваючи секрети в конфігурацію стека.
+weight: 50
+---
+
+
+{{% capture overview %}}
+
+Kubernetes `secret` objects let you store and manage sensitive information, such
+as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
+is safer and more flexible than putting it verbatim in a
+{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Overview of Secrets
+
+A Secret is an object that contains a small amount of sensitive data such as
+a password, a token, or a key. Such information might otherwise be put in a
+Pod specification or in an image; putting it in a Secret object allows for
+more control over how it is used, and reduces the risk of accidental exposure.
+
+Users can create secrets, and the system also creates some secrets.
+
+To use a secret, a pod needs to reference the secret.
+A secret can be used with a pod in two ways: as files in a
+{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
+its containers, or used by kubelet when pulling images for the pod.
+
+### Built-in Secrets
+
+#### Service Accounts Automatically Create and Attach Secrets with API Credentials
+
+Kubernetes automatically creates secrets which contain credentials for
+accessing the API and it automatically modifies your pods to use this type of
+secret.
+
+The automatic creation and use of API credentials can be disabled or overridden
+if desired. However, if all you need to do is securely access the apiserver,
+this is the recommended workflow.
+
+See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more
+information on how Service Accounts work.
+
+### Creating your own Secrets
+
+#### Creating a Secret Using kubectl create secret
+
+Say that some pods need to access a database. The
+username and password that the pods should use are in the files
+`./username.txt` and `./password.txt` on your local machine.
+
+```shell
+# Create files needed for rest of example.
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+The `kubectl create secret` command
+packages these files into a Secret and creates
+the object on the Apiserver.
+
+```shell
+kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
+```
+```
+secret "db-user-pass" created
+```
+{{< note >}}
+Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
+
+```
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb'
+```
+
+ You do not need to escape special characters in passwords from files (`--from-file`).
+{{< /note >}}
+
+You can check that the secret was created like this:
+
+```shell
+kubectl get secrets
+```
+```
+NAME TYPE DATA AGE
+db-user-pass Opaque 2 51s
+```
+```shell
+kubectl describe secrets/db-user-pass
+```
+```
+Name: db-user-pass
+Namespace: default
+Labels:               <none>
+Annotations:          <none>
+
+Type: Opaque
+
+Data
+====
+password.txt: 12 bytes
+username.txt: 5 bytes
+```
+
+{{< note >}}
+`kubectl get` and `kubectl describe` avoid showing the contents of a secret by
+default.
+This is to protect the secret from being exposed accidentally to an onlooker,
+or from being stored in a terminal log.
+{{< /note >}}
+
+See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret.
+
+#### Creating a Secret Manually
+
+You can also create a Secret in a file first, in json or yaml format,
+and then create that object. The
+[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contains two maps:
+data and stringData. The data field is used to store arbitrary data, encoded using
+base64. The stringData field is provided for convenience, and allows you to provide
+secret data as unencoded strings.
+
+For example, to store two strings in a Secret using the data field, convert
+them to base64 as follows:
+
+```shell
+echo -n 'admin' | base64
+YWRtaW4=
+echo -n '1f2d1e2e67df' | base64
+MWYyZDFlMmU2N2Rm
+```
+
+Write a Secret that looks like this:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+```
+
+Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
+
+```shell
+kubectl apply -f ./secret.yaml
+```
+```
+secret "mysecret" created
+```
+
+For certain scenarios, you may wish to use the stringData field instead. This
+field allows you to put a non-base64 encoded string directly into the Secret,
+and the string will be encoded for you when the Secret is created or updated.
+
+A practical example of this might be where you are deploying an application
+that uses a Secret to store a configuration file, and you want to populate
+parts of that configuration file during your deployment process.
+
+If your application uses the following configuration file:
+
+```yaml
+apiUrl: "https://my.api.com/api/v1"
+username: "user"
+password: "password"
+```
+
+You could store this in a Secret using the following:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+stringData:
+ config.yaml: |-
+ apiUrl: "https://my.api.com/api/v1"
+ username: {{username}}
+ password: {{password}}
+```
+
+Your deployment tool could then replace the `{{username}}` and `{{password}}`
+template variables before running `kubectl apply`.
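+
+For example, a deployment pipeline might perform that substitution with something as simple as the following (the file name and credential values are made up for illustration):
+
+```shell
+# Substitute the template variables, then apply the rendered manifest
+sed -e "s/{{username}}/devuser/" -e "s/{{password}}/S0me-Passw0rd/" secret.yaml | kubectl apply -f -
+```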
+
+stringData is a write-only convenience field. It is never output when
+retrieving Secrets. For example, if you run the following command:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+
+The output will be similar to:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2018-11-15T20:40:59Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "7225"
+ uid: c280ad2e-e916-11e8-98f2-025000000001
+type: Opaque
+data:
+ config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
+```
+
+If a field is specified in both data and stringData, the value from stringData
+is used. For example, the following Secret definition:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+data:
+ username: YWRtaW4=
+stringData:
+ username: administrator
+```
+
+Results in the following secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2018-11-15T20:46:46Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "7579"
+ uid: 91460ecb-e917-11e8-98f2-025000000001
+type: Opaque
+data:
+ username: YWRtaW5pc3RyYXRvcg==
+```
+
+Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
+
+The keys of data and stringData must consist of alphanumeric characters,
+'-', '_' or '.'.
+
+**Encoding Note:** The serialized JSON and YAML values of secret data are
+encoded as base64 strings. Newlines are not valid within these strings and must
+be omitted. When using the `base64` utility on Darwin/macOS, users should avoid
+using the `-b` option to split long lines. Conversely, Linux users *should* add
+the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the
+`-w` option is not available.
+
+#### Creating a Secret from Generator
+Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
+since 1.14. With this new feature,
+you can also create a Secret from generators and then apply it to create the object on
+the Apiserver. The generators
+should be specified in a `kustomization.yaml` inside a directory.
+
+For example, to generate a Secret from files `./username.txt` and `./password.txt`
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+ files:
+ - username.txt
+ - password.txt
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-96mffmfh4k created
+```
+
+You can check that the secret was created like this:
+
+```shell
+$ kubectl get secrets
+NAME TYPE DATA AGE
+db-user-pass-96mffmfh4k Opaque 2 51s
+
+$ kubectl describe secrets/db-user-pass-96mffmfh4k
+Name: db-user-pass
+Namespace: default
+Labels:               <none>
+Annotations:          <none>
+
+Type: Opaque
+
+Data
+====
+password.txt: 12 bytes
+username.txt: 5 bytes
+```
+
+For example, to generate a Secret from literals `username=admin` and `password=secret`,
+you can specify the secret generator in `kustomization.yaml` as
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+$ cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+ literals:
+ - username=admin
+ - password=secret
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-dddghtt9b5 created
+```
+{{< note >}}
+The generated Secret's name has a suffix appended by hashing the contents. This ensures that a new
+Secret is generated each time the contents are modified.
+{{< /note >}}
+
+#### Decoding a Secret
+
+Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+```
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2016-01-22T18:41:56Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "164619"
+ uid: cfee02d6-c137-11e5-8d73-42010af00002
+type: Opaque
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+```
+
+Decode the password field:
+
+```shell
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+```
+1f2d1e2e67df
+```
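+
+If you only need a single key, you can also fetch and decode it in one step (same secret and key names as above):
+
+```shell
+kubectl get secret mysecret -o jsonpath='{.data.password}' | base64 --decode
+```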
+
+#### Editing a Secret
+
+An existing secret may be edited with the following command:
+
+```shell
+kubectl edit secrets mysecret
+```
+
+This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field:
+
+```
+# Please edit the object below. Lines beginning with a '#' will be ignored,
+# and an empty file will abort the edit. If an error occurs while saving this file will be
+# reopened with the relevant failures.
+#
+apiVersion: v1
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+kind: Secret
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: { ... }
+ creationTimestamp: 2016-01-22T18:41:56Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "164619"
+ uid: cfee02d6-c137-11e5-8d73-42010af00002
+type: Opaque
+```
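+
+If you prefer not to open an editor, a small sketch of a non-interactive alternative is to patch a single key with a freshly encoded value (the replacement password below is made up):
+
+```shell
+# Encode the new value, then patch only the "password" key of the secret
+NEW_PASSWORD=$(echo -n 's33msi4' | base64)
+kubectl patch secret mysecret -p "{\"data\":{\"password\":\"$NEW_PASSWORD\"}}"
+```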
+
+## Using Secrets
+
+Secrets can be mounted as data volumes or be exposed as
+{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}}
+to be used by a container in a pod. They can also be used by other parts of the
+system, without being directly exposed to the pod. For example, they can hold
+credentials that other parts of the system should use to interact with external
+systems on your behalf.
+
+### Using Secrets as Files from a Pod
+
+To consume a Secret in a volume in a Pod:
+
+1. Create a secret or use an existing one. Multiple pods can reference the same secret.
+1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the secret object.
+1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
+1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.
+
+This is an example of a pod that mounts a secret in a volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ readOnly: true
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+```
+
+Each secret you want to use needs to be referred to in `.spec.volumes`.
+
+If there are multiple containers in the pod, then each container needs its
+own `volumeMounts` block, but only one `.spec.volumes` is needed per secret.
+
+You can package many files into one secret, or use many secrets, whichever is convenient.
+
+**Projection of secret keys to specific paths**
+
+You can also control the paths within the volume where Secret keys are projected.
+You can use the `.spec.volumes[].secret.items` field to change the target path of each key:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ readOnly: true
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ items:
+ - key: username
+ path: my-group/my-username
+```
+
+What will happen:
+
+* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
+* `password` secret is not projected
+
+If `.spec.volumes[].secret.items` is used, only keys specified in `items` are projected.
+To consume all keys from the secret, all of them must be listed in the `items` field.
+All listed keys must exist in the corresponding secret. Otherwise, the volume is not created.
+
+**Secret files permissions**
+
+You can also specify the permission mode bits that the files holding the secret data will have.
+If you don't specify any, `0644` is used by default. You can specify a default
+mode for the whole secret volume and override it per key if needed.
+
+For example, you can specify a default mode like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ defaultMode: 256
+```
+
+Then, the secret will be mounted on `/etc/foo` and all the files created by the
+secret volume mount will have permission `0400`.
+
+Note that the JSON spec doesn't support octal notation, so use the value 256 for
+0400 permissions. If you use YAML instead of JSON for the Pod, you can use octal
+notation to specify permissions in a more natural way.
+
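+For illustration, a minimal sketch of the same secret volume declared with octal
+notation in a YAML manifest (equivalent to the decimal `256` above):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+  - name: mypod
+    image: redis
+    volumeMounts:
+    - name: foo
+      mountPath: "/etc/foo"
+  volumes:
+  - name: foo
+    secret:
+      secretName: mysecret
+      # YAML accepts octal literals, so this is the same as decimal 256
+      defaultMode: 0400
+```
+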
+You can also combine the key-to-path mapping shown earlier with per-file
+permissions, like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ items:
+ - key: username
+ path: my-group/my-username
+ mode: 511
+```
+
+In this case, the resulting file at `/etc/foo/my-group/my-username` will have a
+permission value of `0777`. Owing to JSON limitations, you must specify the mode
+in decimal notation.
+
+Note that this permission value might be displayed in decimal notation if you
+read it later.
+
+**Consuming Secret Values from Volumes**
+
+Inside the container that mounts a secret volume, the secret keys appear as
+files and the secret values are base-64 decoded and stored inside these files.
+This is the result of commands
+executed inside the container from the example above:
+
+```shell
+ls /etc/foo/
+```
+```
+username
+password
+```
+
+```shell
+cat /etc/foo/username
+```
+```
+admin
+```
+
+
+```shell
+cat /etc/foo/password
+```
+```
+1f2d1e2e67df
+```
+
+The program in a container is responsible for reading the secrets from the
+files.
+
+**Mounted Secrets are updated automatically**
+
+When a secret that is already consumed in a volume is updated, projected keys are eventually updated as well.
+The kubelet checks whether the mounted secret is fresh on every periodic sync.
+However, it uses its local cache to get the current value of the Secret.
+The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the
+[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
+A Secret update can be propagated either via watch (the default), ttl-based, or by redirecting
+all requests directly to the kube-apiserver.
+As a result, the total delay from the moment the Secret is updated to the moment
+new keys are projected to the Pod can be as long as the kubelet sync period plus the cache
+propagation delay, where the cache propagation delay depends on the chosen cache type
+(it equals the watch propagation delay, the TTL of the cache, or zero, respectively).
+
+{{< note >}}
+A container using a Secret as a
+[subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive
+Secret updates.
+{{< /note >}}
+
+### Using Secrets as Environment Variables
+
+To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
+in a pod:
+
+1. Create a secret or use an existing one. Multiple pods can reference the same secret.
+1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`.
+1. Modify your image and/or command line so that the program looks for values in the specified environment variables
+
+This is an example of a pod that uses secrets from environment variables:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-env-pod
+spec:
+ containers:
+ - name: mycontainer
+ image: redis
+ env:
+ - name: SECRET_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: mysecret
+ key: username
+ - name: SECRET_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: mysecret
+ key: password
+ restartPolicy: Never
+```
+
+**Consuming Secret Values from Environment Variables**
+
+Inside a container that consumes a secret via environment variables, the secret keys appear as
+normal environment variables containing the base-64 decoded values of the secret data.
+This is the result of commands executed inside the container from the example above:
+
+```shell
+echo $SECRET_USERNAME
+```
+```
+admin
+```
+```shell
+echo $SECRET_PASSWORD
+```
+```
+1f2d1e2e67df
+```
+
+### Using imagePullSecrets
+
+An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
+password to the Kubelet so it can pull a private image on behalf of your Pod.
+
+**Manually specifying an imagePullSecret**
+
+Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
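+
+For reference, a minimal sketch of a Pod that references a manually created
+registry secret; the secret name `myregistrykey` and the image are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: private-image-pod
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/myapp:v1
+  imagePullSecrets:
+  # the kubelet uses this secret to authenticate to the registry when pulling the image
+  - name: myregistrykey
+```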
+
+### Arranging for imagePullSecrets to be Automatically Attached
+
+You can manually create an imagePullSecret, and reference it from
+a serviceAccount. Any pods created with that serviceAccount
+or that default to use that serviceAccount, will get their imagePullSecret
+field set to that of the service account.
+See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
+ for a detailed explanation of that process.
+
+### Automatic Mounting of Manually Created Secrets
+
+Manually created secrets (e.g. one containing a token for accessing a GitHub account)
+can be automatically attached to pods based on their service account.
+See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process.
+
+## Details
+
+### Restrictions
+
+Secret volume sources are validated to ensure that the specified object
+reference actually points to an object of type `Secret`. Therefore, a secret
+needs to be created before any pods that depend on it.
+
+Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
+They can only be referenced by pods in that same namespace.
+
+Individual secrets are limited to 1MiB in size. This is to discourage creation
+of very large secrets which would exhaust apiserver and kubelet memory.
+However, creation of many smaller secrets could also exhaust memory. More
+comprehensive limits on memory usage due to secrets are a planned feature.
+
+Kubelet only supports use of secrets for Pods it gets from the API server.
+This includes any pods created using kubectl, or indirectly via a replication
+controller. It does not include pods created via the kubelet's
+`--manifest-url` flag, its `--config` flag, or its REST API (these are
+not common ways to create pods).
+
+Secrets must be created before they are consumed in pods as environment
+variables unless they are marked as optional. References to Secrets that do
+not exist will prevent the pod from starting.
+
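+For example, a minimal sketch (the Pod and container names are illustrative) that
+marks a single `secretKeyRef` as optional, so the Pod can start even if `mysecret`
+or its `username` key does not exist:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: optional-secret-env-pod
+spec:
+  containers:
+  - name: mycontainer
+    image: redis
+    env:
+    - name: SECRET_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: mysecret
+          key: username
+          # the reference is optional: a missing Secret or key does not block startup
+          optional: true
+  restartPolicy: Never
+```
+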
+References via `secretKeyRef` to keys that do not exist in a named Secret
+will prevent the pod from starting.
+
+Secrets used to populate environment variables via `envFrom` that have keys
+that are considered invalid environment variable names will have those keys
+skipped. The pod will be allowed to start. There will be an event whose
+reason is `InvalidEnvironmentVariableNames` and whose message contains the list of
+invalid keys that were skipped. The following example shows a pod which refers to
+the Secret `default/mysecret`, which contains two invalid keys: `1badkey` and `2alsobad`.
+
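+A minimal sketch of such a pod spec (the container image and command are
+illustrative; only the `envFrom` reference to `mysecret` matters here):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-test-pod
+spec:
+  containers:
+  - name: test-container
+    image: k8s.gcr.io/busybox
+    command: [ "/bin/sh", "-c", "env" ]
+    envFrom:
+    # every valid key in mysecret becomes an environment variable;
+    # invalid names such as 1badkey and 2alsobad are skipped
+    - secretRef:
+        name: mysecret
+  restartPolicy: Never
+```
+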
+```shell
+kubectl get events
+```
+```
+LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
+0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
+```
+
+### Secret and Pod Lifetime interaction
+
+When a pod is created via the API, there is no check whether a referenced
+secret exists. Once a pod is scheduled, the kubelet will try to fetch the
+secret value. If the secret cannot be fetched because it does not exist or
+because of a temporary lack of connection to the API server, kubelet will
+periodically retry. It will report an event about the pod explaining the
+reason it is not started yet. Once the secret is fetched, the kubelet will
+create and mount a volume containing it. None of the pod's containers will
+start until all the pod's volumes are mounted.
+
+## Use cases
+
+### Use-Case: Pod with ssh keys
+
+Create a secret containing some SSH keys:
+
+```shell
+kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
+```
+
+```
+secret "ssh-key-secret" created
+```
+
+{{< caution >}}
+Think carefully before sending your own SSH keys: other users of the cluster may have access to the secret. Use a service account that you are willing to share with everyone who uses your Kubernetes cluster, and that you can revoke if it is compromised.
+{{< /caution >}}
+
+
+Now we can create a pod which references the secret with the ssh key and
+consumes it in a volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-test-pod
+ labels:
+ name: secret-test
+spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: ssh-key-secret
+ containers:
+ - name: ssh-test-container
+ image: mySshImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+```
+
+When the container's command runs, the pieces of the key will be available in:
+
+```shell
+/etc/secret-volume/ssh-publickey
+/etc/secret-volume/ssh-privatekey
+```
+
+The container is then free to use the secret data to establish an ssh connection.
+
+### Use-Case: Pods with prod / test credentials
+
+This example illustrates a pod which consumes a secret containing prod
+credentials and another pod which consumes a secret with test environment
+credentials.
+
+Create the secrets:
+
+```shell
+kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
+```
+```
+secret "prod-db-secret" created
+```
+
+```shell
+kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
+```
+```
+secret "test-db-secret" created
+```
+{{< note >}}
+Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
+
+```
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb'
+```
+
+ You do not need to escape special characters in passwords from files (`--from-file`).
+{{< /note >}}
+
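+Alternatively, a minimal sketch of a `kustomization.yaml` with a `secretGenerator`
+that produces the same two secrets; `disableNameSuffixHash` is set here so the
+generated names match the `secretName` values used by the pods below:
+
+```yaml
+secretGenerator:
+- name: prod-db-secret
+  literals:
+  - username=produser
+  - password=Y4nys7f11
+- name: test-db-secret
+  literals:
+  - username=testuser
+  - password=iluvtests
+generatorOptions:
+  # keep the plain secret names so the pod specs can reference them directly
+  disableNameSuffixHash: true
+```
+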
+Now make the pods:
+
+```shell
+cat <<EOF > pod.yaml
+apiVersion: v1
+kind: List
+items:
+- kind: Pod
+ apiVersion: v1
+ metadata:
+ name: prod-db-client-pod
+ labels:
+ name: prod-db-client
+ spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: prod-db-secret
+ containers:
+ - name: db-client-container
+ image: myClientImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+- kind: Pod
+ apiVersion: v1
+ metadata:
+ name: test-db-client-pod
+ labels:
+ name: test-db-client
+ spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: test-db-secret
+ containers:
+ - name: db-client-container
+ image: myClientImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+EOF
+```
+
+Add the pods to the `kustomization.yaml`:
+```shell
+cat <<EOF >> kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Apply all of those objects on the API server by running:
+
+```shell
+kubectl apply -k .
+```
+
+Both containers will have the following files present on their filesystems with the values for each container's environment:
+
+```shell
+/etc/secret-volume/username
+/etc/secret-volume/password
+```
+
+Note how the specs for the two pods differ only in one field; this facilitates
+creating pods with different capabilities from a common pod config template.
+
+You could further simplify the base pod specification by using two Service Accounts:
+one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
+`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: prod-db-client-pod
+ labels:
+ name: prod-db-client
+spec:
+  serviceAccount: prod-user
+ containers:
+ - name: db-client-container
+ image: myClientImage
+```
+
+### Use-case: Dotfiles in secret volume
+
+To make a piece of data 'hidden' (that is, stored in a file whose name begins with a dot character),
+make that key begin with a dot. For example, when the following secret is mounted into a volume:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: dotfile-secret
+data:
+ .secret-file: dmFsdWUtMg0KDQo=
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-dotfiles-pod
+spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: dotfile-secret
+ containers:
+ - name: dotfile-test-container
+ image: k8s.gcr.io/busybox
+ command:
+ - ls
+ - "-l"
+ - "/etc/secret-volume"
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+```
+
+
+The `secret-volume` will contain a single file, called `.secret-file`, and
+the `dotfile-test-container` will have this file present at the path
+`/etc/secret-volume/.secret-file`.
+
+{{< note >}}
+Files beginning with dot characters are hidden from the output of `ls -l`;
+you must use `ls -la` to see them when listing directory contents.
+{{< /note >}}
+
+### Use-case: Secret visible to one container in a pod
+
+Consider a program that needs to handle HTTP requests, do some complex business
+logic, and then sign some messages with an HMAC. Because it has complex
+application logic, there might be an unnoticed remote file reading exploit in
+the server, which could expose the private key to an attacker.
+
+This could be divided into two processes in two containers: a frontend container
+which handles user interaction and business logic, but which cannot see the
+private key; and a signer container that can see the private key, and responds
+to simple signing requests from the frontend (e.g. over localhost networking).
+
+With this partitioned approach, an attacker now has to trick the application
+server into doing something rather arbitrary, which may be harder than getting
+it to read a file.
+
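+A minimal sketch of such a Pod, where only the signer container mounts the secret;
+the images and the secret name `signing-key` are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: partitioned-app
+spec:
+  volumes:
+  - name: signing-key
+    secret:
+      secretName: signing-key
+  containers:
+  - name: frontend
+    image: example.com/frontend:v1
+    # no volumeMounts: the frontend cannot read the private key
+  - name: signer
+    image: example.com/signer:v1
+    volumeMounts:
+    - name: signing-key
+      readOnly: true
+      mountPath: "/etc/signing-key"
+```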
+
+
+## Best practices
+
+### Clients that use the secrets API
+
+When deploying applications that interact with the secrets API, access should be
+limited using [authorization policies](
+/docs/reference/access-authn-authz/authorization/) such as [RBAC](
+/docs/reference/access-authn-authz/rbac/).
+
+Secrets often hold values that span a spectrum of importance, many of which can
+cause escalations within Kubernetes (e.g. service account tokens) and to
+external systems. Even if an individual app can reason about the power of the
+secrets it expects to interact with, other apps within the same namespace can
+render those assumptions invalid.
+
+For these reasons `watch` and `list` requests for secrets within a namespace are
+extremely powerful capabilities and should be avoided, since listing secrets allows
+the clients to inspect the values of all secrets that are in that namespace. The ability to
+`watch` and `list` all secrets in a cluster should be reserved for only the most
+privileged, system-level components.
+
+Applications that need to access the secrets API should perform `get` requests on
+the secrets they need. This lets administrators restrict access to all secrets
+while [white-listing access to individual instances](
+/docs/reference/access-authn-authz/rbac/#referring-to-resources) that
+the app needs.
+
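+For example, a minimal sketch of a namespaced Role that grants `get` on a single
+named Secret only (the Role name and namespace are illustrative):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: default
+  name: mysecret-reader
+rules:
+- apiGroups: [""]
+  resources: ["secrets"]
+  # restrict access to one specific Secret instance
+  resourceNames: ["mysecret"]
+  verbs: ["get"]
+```
+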
+For improved performance over a looping `get`, clients can design resources that
+reference a secret then `watch` the resource, re-requesting the secret when the
+reference changes. Additionally, a ["bulk watch" API](
+https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md)
+to let clients `watch` individual resources has also been proposed, and will likely
+be available in future releases of Kubernetes.
+
+## Security Properties
+
+
+### Protections
+
+Because `secret` objects can be created independently of the `pods` that use
+them, there is less risk of the secret being exposed during the workflow of
+creating, viewing, and editing pods. The system can also take additional
+precautions with `secret` objects, such as avoiding writing them to disk where
+possible.
+
+A secret is only sent to a node if a pod on that node requires it.
+Kubelet stores the secret into a `tmpfs` so that the secret is not written
+to disk storage. Once the Pod that depends on the secret is deleted, kubelet
+will delete its local copy of the secret data as well.
+
+There may be secrets for several pods on the same node. However, only the
+secrets that a pod requests are potentially visible within its containers.
+Therefore, one Pod does not have access to the secrets of another Pod.
+
+There may be several containers in a pod. However, each container in a pod has
+to request the secret volume in its `volumeMounts` for it to be visible within
+the container. This can be used to construct useful [security partitions at the
+Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
+
+On most Kubernetes-project-maintained distributions, communication between the user
+and the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS.
+Secrets are protected when transmitted over these channels.
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+for secret data, so that the secrets are not stored in the clear in {{< glossary_tooltip term_id="etcd" >}}.
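+
+As a rough sketch, an encryption configuration might look like the following,
+assuming you generate a base64-encoded 32-byte key and point the kube-apiserver
+at this file with `--encryption-provider-config`; see the linked task page for
+the authoritative procedure:
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+- resources:
+  - secrets
+  providers:
+  # aescbc encrypts Secret data before it is written to etcd
+  - aescbc:
+      keys:
+      - name: key1
+        secret: <BASE64-ENCODED 32-BYTE KEY>  # placeholder, not a real key
+  # identity allows reading values stored before encryption was enabled
+  - identity: {}
+```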
+
+### Risks
+
+ - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}};
+ therefore:
+ - Administrators should enable encryption at rest for cluster data (requires v1.13 or later)
+ - Administrators should limit access to etcd to admin users
+ - Administrators may want to wipe/shred disks used by etcd when no longer in use
+ - If running etcd in a cluster, administrators should make sure to use SSL/TLS
+ for etcd peer-to-peer communication.
+ - If you configure the secret through a manifest (JSON or YAML) file which has
+ the secret data encoded as base64, sharing this file or checking it in to a
+ source repository means the secret is compromised. Base64 encoding is _not_ an
+ encryption method and is considered the same as plain text.
+ - Applications still need to protect the value of the secret after reading it from the volume,
+ such as not accidentally logging it or transmitting it to an untrusted party.
+ - A user who can create a pod that uses a secret can also see the value of that secret. Even
+ if apiserver policy does not allow that user to read the secret object, the user could
+ run a pod which exposes the secret.
+ - Currently, anyone with root on any node can read _any_ secret from the apiserver,
+ by impersonating the kubelet. It is a planned feature to only send secrets to
+ nodes that actually require them, to restrict the impact of a root exploit on a
+ single node.
+
+
+{{% capture whatsnext %}}
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/overview/_index.md b/content/uk/docs/concepts/overview/_index.md
new file mode 100644
index 0000000000..efffaf0892
--- /dev/null
+++ b/content/uk/docs/concepts/overview/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Огляд"
+weight: 20
+---
diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md
new file mode 100644
index 0000000000..7d484d3bd6
--- /dev/null
+++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md
@@ -0,0 +1,185 @@
+---
+reviewers:
+- bgrant0607
+- mikedanese
+title: Що таке Kubernetes?
+content_template: templates/concept
+weight: 10
+card:
+ name: concepts
+ weight: 10
+---
+
+{{% capture overview %}}
+
+Ця сторінка являє собою узагальнений огляд Kubernetes.
+{{% /capture %}}
+
+{{% capture body %}}
+
+Kubernetes - це платформа з відкритим вихідним кодом для управління контейнеризованими робочими навантаженнями та супутніми службами. Її основні характеристики - кросплатформенність, розширюваність, успішне використання декларативної конфігурації та автоматизації. Вона має гігантську, швидкопрогресуючу екосистему.
+
+
+Назва Kubernetes походить з грецької та означає керманич або пілот. Google відкрив доступ до вихідного коду проекту Kubernetes у 2014 році. Kubernetes побудовано [на базі п'ятнадцятирічного досвіду, що Google отримав, оперуючи масштабними робочими навантаженнями](https://ai.google/research/pubs/pub43438) разом із найкращими у своєму класі ідеями та практиками, які може запропонувати спільнота.
+
+
+## Озираючись на першопричини
+
+
+Давайте повернемось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним.
+
+
+
+
+**Ера традиційного розгортання:** На початку організації запускали застосунки на фізичних серверах. Оскільки в такий спосіб не було можливості задати обмеження використання ресурсів, це спричиняло проблеми виділення та розподілення ресурсів на фізичних серверах. Наприклад: якщо багато застосунків було запущено на фізичному сервері, могли траплятись випадки, коли один застосунок забирав собі найбільше ресурсів, внаслідок чого інші програми просто не справлялись з обов'язками. Рішенням може бути запуск кожного застосунку на окремому фізичному сервері. Але такий підхід погано масштабується, оскільки ресурси не повністю використовуються; на додачу, це дорого, оскільки організаціям потрібно опікуватись багатьма фізичними серверами.
+
+
+**Ера віртуалізованого розгортання:** Як рішення - була представлена віртуалізація. Вона дозволяє запускати численні віртуальні машини (Virtual Machines або VMs) на одному фізичному ЦПУ сервера. Віртуалізація дозволила застосункам бути ізольованими у межах віртуальних машин та забезпечувала безпеку, оскільки інформація застосунку на одній VM не була доступна застосунку на іншій VM.
+
+
+Віртуалізація забезпечує краще використання ресурсів на фізичному сервері та кращу масштабованість, оскільки дозволяє легко додавати та оновлювати застосунки, зменшує витрати на фізичне обладнання тощо. З віртуалізацією ви можете представити ресурси у вигляді одноразових віртуальних машин.
+
+
+Кожна VM є повноцінною машиною з усіма компонентами, включно з власною операційною системою, що запущені поверх віртуалізованого апаратного забезпечення.
+
+
+**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саме тому контейнери вважаються легковісними. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем.
+
+Контейнери стали популярними, бо надавали додаткові переваги, такі як:
+
+
+
+* Створення та розгортання застосунків за методологією Agile: спрощене та більш ефективне створення образів контейнерів у порівнянні до використання образів віртуальних машин.
+* Безперервна розробка, інтеграція та розгортання: забезпечення надійних та безперервних збирань образів контейнерів, їх швидке розгортання та легкі відкатування (за рахунок незмінності образів).
+* Розподіл відповідальності команд розробки та експлуатації: створення образів контейнерів застосунків під час збирання/релізу на противагу часу розгортання, і як наслідок, вивільнення застосунків із інфраструктури.
+* Спостереження не лише за інформацією та метриками на рівні операційної системи, але й за станом застосунку та іншими сигналами.
+* Однорідність середовища для розробки, тестування та робочого навантаження: запускається так само як на робочому комп'ютері, так і у хмарного провайдера.
+* ОС та хмарна кросплатформність: запускається на Ubuntu, RHEL, CoreOS, у власному дата-центрі, у Google Kubernetes Engine і взагалі будь-де.
+* Керування орієнтоване на застосунки: підвищення рівня абстракції від запуску операційної системи у віртуальному апаратному забезпеченні до запуску застосунку в операційній системі, використовуючи логічні ресурси.
+* Нещільно зв'язані, розподілені, еластичні, вивільнені мікросервіси: застосунки розбиваються на менші, незалежні частини для динамічного розгортання та управління, на відміну від монолітної архітектури, що працює на одній великій виділеній машині.
+* Ізоляція ресурсів: передбачувана продуктивність застосунку.
+* Використання ресурсів: висока ефективність та щільність.
+
+
+## Чому вам потрібен Kubernetes і що він може робити
+
+
+Контейнери - це прекрасний спосіб упакувати та запустити ваші застосунки. У прод оточенні вам потрібно керувати контейнерами, в яких працюють застосунки, і стежити, щоб не було простою. Наприклад, якщо один контейнер припиняє роботу, інший має бути запущений йому на заміну. Чи не легше було б, якби цим керувала сама система?
+
+
+Ось де Kubernetes приходить на допомогу! Kubernetes надає вам каркас для еластичного запуску розподілених систем. Він опікується масштабуванням та аварійним відновленням вашого застосунку, пропонує шаблони розгортань тощо. Наприклад, Kubernetes дозволяє легко створювати розгортання за стратегією canary у вашій системі.
+
+
+Kubernetes надає вам:
+
+
+
+* **Виявлення сервісів та балансування навантаження**
+Kubernetes може надавати доступ до контейнера, використовуючи DNS-ім'я або його власну IP-адресу. Якщо контейнер зазнає завеликого мережевого навантаження, Kubernetes здатний збалансувати та розподілити його таким чином, щоб якість обслуговування залишалась стабільною.
+* **Оркестрація сховища інформації**
+Kubernetes дозволяє вам автоматично монтувати системи збереження інформації на ваш вибір: локальні сховища, рішення від хмарних провайдерів тощо.
+* **Автоматичне розгортання та відкатування**
+За допомогою Kubernetes ви можете описати бажаний стан контейнерів, що розгортаються, і він регульовано простежить за виконанням цього стану. Наприклад, ви можете автоматизувати в Kubernetes процеси створення нових контейнерів для розгортання, видалення існуючих контейнерів і передачу їхніх ресурсів на новостворені контейнери.
+* **Автоматичне розміщення задач**
+Ви надаєте Kubernetes кластер для запуску контейнеризованих задач і вказуєте, скільки ресурсів ЦПУ та пам'яті (RAM) необхідно для роботи кожного контейнера. Kubernetes розподіляє контейнери по вузлах кластера для максимально ефективного використання ресурсів.
+* **Самозцілення**
+Kubernetes перезапускає контейнери, що відмовили; заміняє контейнери; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності.
+* **Управління секретами та конфігурацією**
+Kubernetes дозволяє вам зберігати та керувати чутливою інформацією, такою як паролі, OAuth токени та SSH ключі. Ви можете розгортати та оновлювати секрети та конфігурацію без перезбирання образів ваших контейнерів, не розкриваючи секрети в конфігурацію стека.
+
+
+
+## Чим не є Kubernetes
+
+
+Kubernetes не є комплексною системою PaaS (Платформа як послуга) у традиційному розумінні. Оскільки Kubernetes оперує швидше на рівні контейнерів, аніж на рівні апаратного забезпечення, деяка загальнозастосована функціональність і справді є спільною з PaaS, як-от розгортання, масштабування, розподіл навантаження, логування і моніторинг. Водночас Kubernetes не є монолітним, а вищезазначені особливості підключаються і є опціональними. Kubernetes надає будівельні блоки для створення платформ для розробників, але залишає за користувачем право вибору у важливих питаннях.
+
+
+Kubernetes:
+
+
+
+* Не обмежує типи застосунків, що підтримуються. Kubernetes намагається підтримувати найрізноманітніші типи навантажень, включно із застосунками зі станом (stateful) та без стану (stateless), навантаження по обробці даних тощо. Якщо ваш застосунок можна контейнеризувати, він чудово запуститься під Kubernetes.
+* Не розгортає застосунки з вихідного коду та не збирає ваші застосунки. Процеси безперервної інтеграції, доставки та розгортання (CI/CD) визначаються на рівні організації, та в залежності від технічних вимог.
+* Не надає сервіси на рівні застосунків як вбудовані: програмне забезпечення проміжного рівня (наприклад, шина передачі повідомлень), фреймворки обробки даних (наприклад, Spark), бази даних (наприклад, MySQL), кеш, некластерні системи збереження інформації (наприклад, Ceph). Ці компоненти можуть бути запущені у Kubernetes та/або бути доступними для застосунків за допомогою спеціальних механізмів, наприклад [Open Service Broker](https://openservicebrokerapi.org/).
+* Не нав'язує використання інструментів для логування, моніторингу та сповіщень, натомість надає певні інтеграційні рішення як прототипи, та механізми зі збирання та експорту метрик.
+* Не надає та не змушує використовувати якусь конфігураційну мову/систему (як наприклад `Jsonnet`), натомість надає можливість використовувати API, що може бути використаний довільними формами декларативних специфікацій.
+* Не надає і не запроваджує жодних систем машинної конфігурації, підтримки, управління або самозцілення.
+* На додачу, Kubernetes - не просто система оркестрації. Власне кажучи, вона усуває потребу оркестрації як такої. Технічне визначення оркестрації - це запуск визначених процесів: спочатку A, за ним B, потім C. На противагу, Kubernetes складається з певної множини незалежних, складних процесів контролерів, що безперервно опрацьовують стан у напрямку, що заданий бажаною конфігурацією. Неважливо, як ви дістанетесь з пункту A до пункту C. Централізоване управління також не є вимогою. Все це виливається в систему, яку легко використовувати, яка є потужною, надійною, стійкою та здатною до легкого розширення.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Перегляньте [компоненти Kubernetes](/docs/concepts/overview/components/)
+* Готові [розпочати роботу](/docs/setup/)?
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/_index.md b/content/uk/docs/concepts/services-networking/_index.md
new file mode 100644
index 0000000000..634694311a
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Сервіси, балансування навантаження та мережа"
+weight: 60
+---
diff --git a/content/uk/docs/concepts/services-networking/dual-stack.md b/content/uk/docs/concepts/services-networking/dual-stack.md
new file mode 100644
index 0000000000..a4e7bf57af
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/dual-stack.md
@@ -0,0 +1,109 @@
+---
+reviewers:
+- lachie83
+- khenidak
+- aramase
+title: IPv4/IPv6 dual-stack
+feature:
+ title: Подвійний стек IPv4/IPv6
+ description: >
+ Призначення IPv4- та IPv6-адрес подам і сервісам.
+
+content_template: templates/concept
+weight: 70
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
+
+ IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}.
+
+If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Supported Features
+
+Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
+
+ * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod)
+ * IPv4 and IPv6 enabled Services (each Service must be for a single address family)
+ * Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces
+
+## Prerequisites
+
+The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:
+
+ * Kubernetes 1.16 or later
+ * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
+ * A network plugin that supports dual-stack (such as Kubenet or Calico)
+ * Kube-proxy running in mode IPVS
+
+## Enable IPv4/IPv6 dual-stack
+
+To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments:
+
+ * kube-controller-manager:
+ * `--feature-gates="IPv6DualStack=true"`
+ * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>` eg. `--cluster-cidr=10.244.0.0/16,fc00::/24`
+ * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
+ * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6
+ * kubelet:
+ * `--feature-gates="IPv6DualStack=true"`
+ * kube-proxy:
+ * `--proxy-mode=ipvs`
+ * `--cluster-cidrs=<IPv4 CIDR>,<IPv6 CIDR>`
+ * `--feature-gates="IPv6DualStack=true"`
+
+{{< caution >}}
+If you specify an IPv6 address block larger than a /24 via `--cluster-cidr` on the command line, that assignment will fail.
+{{< /caution >}}
+
+## Services
+
+If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service.
+You can only set this field when creating a new Service. Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field is not a requirement for [egress](#egress-traffic) traffic.
+
+{{< note >}}
+The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager.
+{{< /note >}}
+
+You can set `.spec.ipFamily` to either:
+
+ * `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4`
+ * `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6`
+
+The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+
+The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}
+
+For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}
+
+### Type LoadBalancer
+
+On cloud providers which support IPv6-enabled external load balancers, setting the `type` field to `LoadBalancer` in addition to setting the `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service.
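+
+A minimal sketch of such a Service (the selector and ports are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  # requires a cloud provider with IPv6-capable external load balancers
+  type: LoadBalancer
+  ipFamily: IPv6
+  selector:
+    app: MyApp
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+```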
+
+## Egress Traffic
+
+The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.
+
+## Known Issues
+
+ * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/endpoint-slices.md b/content/uk/docs/concepts/services-networking/endpoint-slices.md
new file mode 100644
index 0000000000..f6e918b13c
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/endpoint-slices.md
@@ -0,0 +1,188 @@
+---
+reviewers:
+- freehan
+title: EndpointSlices
+feature:
+ title: EndpointSlices
+ description: >
+ Динамічне відстеження мережевих вузлів у кластері Kubernetes.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+_EndpointSlices_ provide a simple way to track network endpoints within a
+Kubernetes cluster. They offer a more scalable and extensible alternative to
+Endpoints.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## EndpointSlice resources {#endpointslice-resource}
+
+In Kubernetes, an EndpointSlice contains references to a set of network
+endpoints. The EndpointSlice controller automatically creates EndpointSlices
+for a Kubernetes Service when a {{< glossary_tooltip text="selector"
+term_id="selector" >}} is specified. These EndpointSlices will include
+references to any Pods that match the Service selector. EndpointSlices group
+network endpoints together by unique Service and Port combinations.
+
+As an example, here's a sample EndpointSlice resource for the `example`
+Kubernetes Service.
+
+```yaml
+apiVersion: discovery.k8s.io/v1beta1
+kind: EndpointSlice
+metadata:
+ name: example-abc
+ labels:
+ kubernetes.io/service-name: example
+addressType: IPv4
+ports:
+ - name: http
+ protocol: TCP
+ port: 80
+endpoints:
+ - addresses:
+ - "10.1.2.3"
+ conditions:
+ ready: true
+ hostname: pod-1
+ topology:
+ kubernetes.io/hostname: node-1
+ topology.kubernetes.io/zone: us-west2-a
+```
+
+By default, EndpointSlices managed by the EndpointSlice controller will have no
+more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1
+with Endpoints and Services and have similar performance.
+
+EndpointSlices can act as the source of truth for kube-proxy when it comes to
+how to route internal traffic. When enabled, they should provide a performance
+improvement for services with large numbers of endpoints.
+
+### Address Types
+
+EndpointSlices support three address types:
+
+* IPv4
+* IPv6
+* FQDN (Fully Qualified Domain Name)
+
+### Topology
+
+Each endpoint within an EndpointSlice can contain relevant topology information.
+This is used to indicate where an endpoint is, containing information about the
+corresponding Node, zone, and region. When the values are available, the
+following Topology labels will be set by the EndpointSlice controller:
+
+* `kubernetes.io/hostname` - The name of the Node this endpoint is on.
+* `topology.kubernetes.io/zone` - The zone this endpoint is in.
+* `topology.kubernetes.io/region` - The region this endpoint is in.
+
+The values of these labels are derived from resources associated with each
+endpoint in a slice. The hostname label represents the value of the NodeName
+field on the corresponding Pod. The zone and region labels represent the value
+of the labels with the same names on the corresponding Node.
+
+### Management
+
+By default, EndpointSlices are created and managed by the EndpointSlice
+controller. There are a variety of other use cases for EndpointSlices, such as
+service mesh implementations, that could result in other entities or controllers
+managing additional sets of EndpointSlices. To ensure that multiple entities can
+manage EndpointSlices without interfering with each other, a
+`endpointslice.kubernetes.io/managed-by` label is used to indicate the entity
+managing an EndpointSlice. The EndpointSlice controller sets
+`endpointslice-controller.k8s.io` as the value for this label on all
+EndpointSlices it manages. Other entities managing EndpointSlices should also
+set a unique value for this label.
+
+### Ownership
+
+In most use cases, EndpointSlices will be owned by the Service that it tracks
+endpoints for. This is indicated by an owner reference on each EndpointSlice as
+well as a `kubernetes.io/service-name` label that enables simple lookups of all
+EndpointSlices belonging to a Service.
+
+## EndpointSlice Controller
+
+The EndpointSlice controller watches Services and Pods to ensure corresponding
+EndpointSlices are up to date. The controller will manage EndpointSlices for
+every Service with a selector specified. These will represent the IPs of Pods
+matching the Service selector.
+
+### Size of EndpointSlices
+
+By default, EndpointSlices are limited to a size of 100 endpoints each. You can
+configure this with the `--max-endpoints-per-slice` {{< glossary_tooltip
+text="kube-controller-manager" term_id="kube-controller-manager" >}} flag up to
+a maximum of 1000.
+
+### Distribution of EndpointSlices
+
+Each EndpointSlice has a set of ports that applies to all endpoints within the
+resource. When named ports are used for a Service, Pods may end up with
+different target port numbers for the same named port, requiring different
+EndpointSlices. This is similar to the logic behind how subsets are grouped
+with Endpoints.
+
+The controller tries to fill EndpointSlices as full as possible, but does not
+actively rebalance them. The logic of the controller is fairly straightforward:
+
+1. Iterate through existing EndpointSlices, remove endpoints that are no longer
+ desired and update matching endpoints that have changed.
+2. Iterate through EndpointSlices that have been modified in the first step and
+ fill them up with any new endpoints needed.
+3. If there are still new endpoints left to add, try to fit them into a previously
+ unchanged slice and/or create new ones.
+
+Importantly, the third step prioritizes limiting EndpointSlice updates over a
+perfectly full distribution of EndpointSlices. As an example, if there are 10
+new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
+this approach will create a new EndpointSlice instead of filling up the 2
+existing EndpointSlices. In other words, a single EndpointSlice creation is
+preferable to multiple EndpointSlice updates.
+
+With kube-proxy running on each Node and watching EndpointSlices, every change
+to an EndpointSlice becomes relatively expensive since it will be transmitted to
+every Node in the cluster. This approach is intended to limit the number of
+changes that need to be sent to every Node, even if it may result in multiple
+EndpointSlices that are not full.
+
+In practice, this less than ideal distribution should be rare. Most changes
+processed by the EndpointSlice controller will be small enough to fit in an
+existing EndpointSlice, and if not, a new EndpointSlice is likely going to be
+necessary soon anyway. Rolling updates of Deployments also provide a natural
+repacking of EndpointSlices with all pods and their corresponding endpoints
+getting replaced.
+
+## Motivation
+
+The Endpoints API has provided a simple and straightforward way of
+tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
+and Services have gotten larger, limitations of that API became more visible.
+Most notably, those included challenges with scaling to larger numbers of
+network endpoints.
+
+Since all network endpoints for a Service were stored in a single Endpoints
+resource, those resources could get quite large. That affected the performance
+of Kubernetes components (notably the master control plane) and resulted in
+significant amounts of network traffic and processing when Endpoints changed.
+EndpointSlices help you mitigate those issues as well as provide an extensible
+platform for additional features such as topological routing.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/service-topology.md b/content/uk/docs/concepts/services-networking/service-topology.md
new file mode 100644
index 0000000000..c1be99267b
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/service-topology.md
@@ -0,0 +1,127 @@
+---
+reviewers:
+- johnbelamaric
+- imroc
+title: Service Topology
+feature:
+ title: Топологія Сервісів
+ description: >
+ Маршрутизація трафіка Сервісом відповідно до топології кластера.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
+
+_Service Topology_ enables a service to route traffic based upon the Node
+topology of the cluster. For example, a service can specify that traffic be
+preferentially routed to endpoints that are on the same Node as the client, or
+in the same availability zone.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Introduction
+
+By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to
+any backend address for the Service. Since Kubernetes 1.7 it has been possible
+to route "external" traffic to the Pods running on the Node that received the
+traffic, but this is not supported for `ClusterIP` Services, and more complex
+topologies — such as routing zonally — have not been possible. The
+_Service Topology_ feature resolves this by allowing the Service creator to
+define a policy for routing traffic based upon the Node labels for the
+originating and destination Nodes.
+
+By using Node label matching between the source and destination, the operator
+may designate groups of Nodes that are "closer" and "farther" from one another,
+using whatever metric makes sense for that operator's requirements. For many
+operators in public clouds, for example, there is a preference to keep service
+traffic within the same zone, because interzonal traffic has a cost associated
+with it, while intrazonal traffic does not. Other common needs include being able
+to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to
+Nodes connected to the same top-of-rack switch for the lowest latency.
+
+## Prerequisites
+
+The following prerequisites are needed in order to enable topology aware service
+routing:
+
+ * Kubernetes 1.17 or later
+ * Kube-proxy running in iptables mode or IPVS mode
+ * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/)
+
+## Enable Service Topology
+
+To enable service topology, enable the `ServiceTopology` feature gate for
+kube-apiserver and kube-proxy:
+
+```
+--feature-gates="ServiceTopology=true"
+```
+
+## Using Service Topology
+
+If your cluster has Service Topology enabled, you can control Service traffic
+routing by specifying the `topologyKeys` field on the Service spec. This field
+is a preference-order list of Node labels which will be used to sort endpoints
+when accessing this Service. Traffic will be directed to a Node whose value for
+the first label matches the originating Node's value for that label. If there is
+no backend for the Service on a matching Node, then the second label will be
+considered, and so forth, until no labels remain.
+
+If no match is found, the traffic will be rejected, just as if there were no
+backends for the Service at all. That is, endpoints are chosen based on the first
+topology key with available backends. If this field is specified and all entries
+have no backends that match the topology of the client, the service has no
+backends for that client and connections should fail. The special value `"*"` may
+be used to mean "any topology". This catch-all value, if used, only makes sense
+as the last value in the list.
+
+If `topologyKeys` is not specified or empty, no topology constraints will be applied.
+
+Consider a cluster with Nodes that are labeled with their hostname, zone name,
+and region name. Then you can set the `topologyKeys` values of a service to direct
+traffic as follows.
+
+* Only to endpoints on the same node, failing if no endpoint exists on the node:
+ `["kubernetes.io/hostname"]`.
+* Preferentially to endpoints on the same node, falling back to endpoints in the
+ same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname",
+ "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`.
+ This may be useful, for example, in cases where data locality is critical.
+* Preferentially to the same zone, but fallback on any available endpoint if
+ none are available within this zone:
+ `["topology.kubernetes.io/zone", "*"]`.
+
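+For example, a minimal sketch of a Service that prefers endpoints on the same node,
+then the same zone, and finally falls back to any available endpoint:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+  - protocol: TCP
+    port: 80
+  topologyKeys:
+  - "kubernetes.io/hostname"
+  - "topology.kubernetes.io/zone"
+  # the catch-all value must come last
+  - "*"
+```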
+
+
+## Constraints
+
+* Service topology is not compatible with `externalTrafficPolicy=Local`, and
+ therefore a Service cannot use both of these features. It is possible to use
+ both features in the same cluster on different Services, just not on the same
+ Service.
+
+* Valid topology keys are currently limited to `kubernetes.io/hostname`,
+ `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will
+ be generalized to other node labels in the future.
+
+* Topology keys must be valid label keys and at most 16 keys may be specified.
+
+* The catch-all value, `"*"`, must be the last value in the topology keys, if
+ it is used.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/service.md b/content/uk/docs/concepts/services-networking/service.md
new file mode 100644
index 0000000000..d6a72fcc63
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/service.md
@@ -0,0 +1,1197 @@
+---
+reviewers:
+- bprashanth
+title: Service
+feature:
+ title: Виявлення Сервісів і балансування навантаження
+ description: >
+ Не потрібно змінювати ваш застосунок для використання незнайомого механізму виявлення Сервісів. Kubernetes призначає Подам власні IP-адреси, а набору Подів - єдине DNS-ім'я, і балансує навантаження між ними.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< glossary_definition term_id="service" length="short" >}}
+
+With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism.
+Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods,
+and can load-balance across them.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Motivation
+
+Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal.
+They are born and when they die, they are not resurrected.
+If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
+it can create and destroy Pods dynamically.
+
+Each Pod gets its own IP address, however in a Deployment, the set of Pods
+running in one moment in time could be different from
+the set of Pods running that application a moment later.
+
+This leads to a problem: if some set of Pods (call them “backends”) provides
+functionality to other Pods (call them “frontends”) inside your cluster,
+how do the frontends find out and keep track of which IP address to connect
+to, so that the frontend can use the backend part of the workload?
+
+Enter _Services_.
+
+## Service resources {#service-resource}
+
+In Kubernetes, a Service is an abstraction which defines a logical set of Pods
+and a policy by which to access them (sometimes this pattern is called
+a micro-service). The set of Pods targeted by a Service is usually determined
+by a {{< glossary_tooltip text="selector" term_id="selector" >}}
+(see [below](#services-without-selectors) for why you might want a Service
+_without_ a selector).
+
+For example, consider a stateless image-processing backend which is running with
+3 replicas. Those replicas are fungible—frontends do not care which backend
+they use. While the actual Pods that compose the backend set may change, the
+frontend clients should not need to be aware of that, nor should they need to keep
+track of the set of backends themselves.
+
+The Service abstraction enables this decoupling.
+
+### Cloud-native service discovery
+
+If you're able to use Kubernetes APIs for service discovery in your application,
+you can query the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
+for Endpoints, that get updated whenever the set of Pods in a Service changes.
+
+For non-native applications, Kubernetes offers ways to place a network port or load
+balancer in between your application and the backend Pods.
+
+## Defining a Service
+
+A Service in Kubernetes is a REST object, similar to a Pod. Like all of the
+REST objects, you can `POST` a Service definition to the API server to create
+a new instance.
+
+For example, suppose you have a set of Pods that each listen on TCP port 9376
+and carry a label `app=MyApp`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+This specification creates a new Service object named “my-service”, which
+targets TCP port 9376 on any Pod with the `app=MyApp` label.
+
+Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
+which is used by the Service proxies
+(see [Virtual IPs and service proxies](#virtual-ips-and-service-proxies) below).
+
+The controller for the Service selector continuously scans for Pods that
+match its selector, and then POSTs any updates to an Endpoint object
+also named “my-service”.
+
+{{< note >}}
+A Service can map _any_ incoming `port` to a `targetPort`. By default and
+for convenience, the `targetPort` is set to the same value as the `port`
+field.
+{{< /note >}}
+
+Port definitions in Pods have names, and you can reference these names in the
+`targetPort` attribute of a Service. This works even if there is a mixture
+of Pods in the Service using a single configured name, with the same network
+protocol available via different port numbers.
+This offers a lot of flexibility for deploying and evolving your Services.
+For example, you can change the port numbers that Pods expose in the next
+version of your backend software, without breaking clients.
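+
+For example, a minimal sketch of a Pod that names its port and a Service whose
+`targetPort` refers to that name (the names and image are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app: MyApp
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    ports:
+    # the Service below refers to this port by name
+    - name: http-web-svc
+      containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+  - protocol: TCP
+    port: 80
+    # refers to the named container port, not a number
+    targetPort: http-web-svc
+```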
+
+The default protocol for Services is TCP; you can also use any other
+[supported protocol](#protocol-support).
+
+As many Services need to expose more than one port, Kubernetes supports multiple
+port definitions on a Service object.
+Each port definition can have the same `protocol`, or a different one.
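+
+For example, a minimal sketch of a Service exposing two ports; port names are
+required when more than one port is defined:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+  - name: http
+    protocol: TCP
+    port: 80
+    targetPort: 9376
+  - name: https
+    protocol: TCP
+    port: 443
+    targetPort: 9377
+```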
+
+### Services without selectors
+
+Services most commonly abstract access to Kubernetes Pods, but they can also
+abstract other kinds of backends.
+For example:
+
+ * You want to have an external database cluster in production, but in your
+ test environment you use your own databases.
+ * You want to point your Service to a Service in a different
+ {{< glossary_tooltip term_id="namespace" >}} or on another cluster.
+ * You are migrating a workload to Kubernetes. Whilst evaluating the approach,
+ you run only a proportion of your backends in Kubernetes.
+
+In any of these scenarios you can define a Service _without_ a Pod selector.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+Because this Service has no selector, the corresponding Endpoint object is *not*
+created automatically. You can manually map the Service to the network address and port
+where it's running, by adding an Endpoint object manually:
+
+```yaml
+apiVersion: v1
+kind: Endpoints
+metadata:
+ name: my-service
+subsets:
+ - addresses:
+ - ip: 192.0.2.42
+ ports:
+ - port: 9376
+```
+
+{{< note >}}
+The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or
+link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
+
+Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services,
+because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs
+as a destination.
+{{< /note >}}
+
+Accessing a Service without a selector works the same as if it had a selector.
+In the example above, traffic is routed to the single endpoint defined in
+the YAML: `192.0.2.42:9376` (TCP).
+
+An ExternalName Service is a special case of Service that does not have
+selectors and uses DNS names instead. For more information, see the
+[ExternalName](#externalname) section later in this document.
+
+### EndpointSlices
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+EndpointSlices are an API resource that can provide a more scalable alternative
+to Endpoints. Although conceptually quite similar to Endpoints, EndpointSlices
+allow for distributing network endpoints across multiple resources. By default,
+an EndpointSlice is considered "full" once it reaches 100 endpoints, at which
+point additional EndpointSlices will be created to store any additional
+endpoints.
+
+EndpointSlices provide additional attributes and functionality which is
+described in detail in [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/).
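+
+As a rough sketch (assuming the `discovery.k8s.io/v1beta1` API, and using an
+illustrative generated name and Pod IP), an EndpointSlice for the `my-service`
+example above might look like this:
+
+```yaml
+apiVersion: discovery.k8s.io/v1beta1
+kind: EndpointSlice
+metadata:
+  name: my-service-abc              # hypothetical generated name
+  labels:
+    kubernetes.io/service-name: my-service
+addressType: IPv4
+ports:
+  - name: ""                        # matches the unnamed Service port
+    protocol: TCP
+    port: 9376
+endpoints:
+  - addresses:
+      - "10.1.2.3"                  # illustrative Pod IP
+    conditions:
+      ready: true
+```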
+
+## Virtual IPs and service proxies
+
+Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is
+responsible for implementing a form of virtual IP for `Services` of type other
+than [`ExternalName`](#externalname).
+
+### Why not use round-robin DNS?
+
+A question that pops up every now and then is why Kubernetes relies on
+proxying to forward inbound traffic to backends. What about other
+approaches? For example, would it be possible to configure DNS records that
+have multiple A values (or AAAA for IPv6), and rely on round-robin name
+resolution?
+
+There are a few reasons for using proxying for Services:
+
+ * There is a long history of DNS implementations not respecting record TTLs,
+ and caching the results of name lookups after they should have expired.
+ * Some apps do DNS lookups only once and cache the results indefinitely.
+ * Even if apps and libraries did proper re-resolution, the low or zero TTLs
+ on the DNS records could impose a high load on DNS that then becomes
+ difficult to manage.
+
+### User space proxy mode {#proxy-mode-userspace}
+
+In this mode, kube-proxy watches the Kubernetes master for the addition and
+removal of Service and Endpoint objects. For each Service it opens a
+port (randomly chosen) on the local node. Any connections to this "proxy port"
+are
+proxied to one of the Service's backend Pods (as reported via
+Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into
+account when deciding which backend Pod to use.
+
+Lastly, the user-space proxy installs iptables rules which capture traffic to
+the Service's `clusterIP` (which is virtual) and `port`. The rules
+redirect that traffic to the proxy port which proxies the backend Pod.
+
+By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
+
+
+
+### `iptables` proxy mode {#proxy-mode-iptables}
+
+In this mode, kube-proxy watches the Kubernetes control plane for the addition and
+removal of Service and Endpoint objects. For each Service, it installs
+iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
+and redirect that traffic to one of the Service's
+backend sets. For each Endpoint object, it installs iptables rules which
+select a backend Pod.
+
+By default, kube-proxy in iptables mode chooses a backend at random.
+
+Using iptables to handle traffic has a lower system overhead, because traffic
+is handled by Linux netfilter without the need to switch between userspace and the
+kernel space. This approach is also likely to be more reliable.
+
+If kube-proxy is running in iptables mode and the first Pod that's selected
+does not respond, the connection fails. This is different from userspace
+mode: in that scenario, kube-proxy would detect that the connection to the first
+Pod had failed and would automatically retry with a different backend Pod.
+
+You can use Pod [readiness probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
+to verify that backend Pods are working OK, so that kube-proxy in iptables mode
+only sees backends that test out as healthy. Doing this means you avoid
+having traffic sent via kube-proxy to a Pod that's known to have failed.
+
+
+
+### IPVS proxy mode {#proxy-mode-ipvs}
+
+{{< feature-state for_k8s_version="v1.11" state="stable" >}}
+
+In `ipvs` mode, kube-proxy watches Kubernetes Services and Endpoints,
+calls `netlink` interface to create IPVS rules accordingly and synchronizes
+IPVS rules with Kubernetes Services and Endpoints periodically.
+This control loop ensures that IPVS status matches the desired
+state.
+When accessing a Service, IPVS directs traffic to one of the backend Pods.
+
+The IPVS proxy mode is based on a netfilter hook function that is similar to
+iptables mode, but uses a hash table as the underlying data structure and works
+in the kernel space.
+That means kube-proxy in IPVS mode redirects traffic with lower latency than
+kube-proxy in iptables mode, with much better performance when synchronising
+proxy rules. Compared to the other proxy modes, IPVS mode also supports a
+higher throughput of network traffic.
+
+IPVS provides more options for balancing traffic to backend Pods;
+these are:
+
+- `rr`: round-robin
+- `lc`: least connection (smallest number of open connections)
+- `dh`: destination hashing
+- `sh`: source hashing
+- `sed`: shortest expected delay
+- `nq`: never queue
+
+{{< note >}}
+To run kube-proxy in IPVS mode, you must make IPVS available on
+the node before you start kube-proxy.
+
+When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS
+kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy
+falls back to running in iptables proxy mode.
+{{< /note >}}
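+
+For illustration, the proxy mode and the IPVS scheduler can be chosen in the
+kube-proxy configuration file. This is only a minimal sketch; the `lc` scheduler
+shown here is just one of the options listed above:
+
+```yaml
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+mode: "ipvs"
+ipvs:
+  scheduler: "lc"   # least connection; leave empty to use the default (rr)
+```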
+
+
+
+In these proxy models, the traffic bound for the Service’s IP:Port is
+proxied to an appropriate backend without the clients knowing anything
+about Kubernetes or Services or Pods.
+
+If you want to make sure that connections from a particular client
+are passed to the same Pod each time, you can select the session affinity based
+on the client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP"
+(the default is "None").
+You can also set the maximum session sticky time by setting
+`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately
+(the default value is 10800, which works out to be 3 hours).
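+
+For example, a sketch of the earlier Service with ClientIP session affinity and an
+explicit timeout (the timeout shown simply repeats the default value):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  sessionAffinity: ClientIP
+  sessionAffinityConfig:
+    clientIP:
+      timeoutSeconds: 10800   # maximum session sticky time (3 hours, the default)
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+```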
+
+## Multi-Port Services
+
+For some Services, you need to expose more than one port.
+Kubernetes lets you configure multiple port definitions on a Service object.
+When using multiple ports for a Service, you must give all of your ports names
+so that these are unambiguous.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 9376
+ - name: https
+ protocol: TCP
+ port: 443
+ targetPort: 9377
+```
+
+{{< note >}}
+As with Kubernetes {{< glossary_tooltip term_id="name" text="names">}} in general, names for ports
+must only contain lowercase alphanumeric characters and `-`. Port names must
+also start and end with an alphanumeric character.
+
+For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not.
+{{< /note >}}
+
+## Choosing your own IP address
+
+You can specify your own cluster IP address as part of a `Service` creation
+request. To do this, set the `.spec.clusterIP` field. You might do this, for example,
+if you already have an existing DNS entry that you wish to reuse, or legacy systems
+that are configured for a specific IP address and are difficult to re-configure.
+
+The IP address that you choose must be a valid IPv4 or IPv6 address from within the
+`service-cluster-ip-range` CIDR range that is configured for the API server.
+If you try to create a Service with an invalid clusterIP address value, the API
+server will return a 422 HTTP status code to indicate that there's a problem.
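+
+A minimal sketch, assuming a cluster whose configured service CIDR contains the
+illustrative address `10.96.0.100` (your cluster's range may differ):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  clusterIP: 10.96.0.100   # must lie within the configured service-cluster-ip-range
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+```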
+
+## Discovering services
+
+Kubernetes supports two primary modes of finding a Service: environment
+variables and DNS.
+
+### Environment variables
+
+When a Pod is run on a Node, the kubelet adds a set of environment variables
+for each active Service. It supports both [Docker links
+compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
+[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
+and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
+where the Service name is upper-cased and dashes are converted to underscores.
+
+For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
+allocated cluster IP address 10.0.0.11, produces the following environment
+variables:
+
+```shell
+REDIS_MASTER_SERVICE_HOST=10.0.0.11
+REDIS_MASTER_SERVICE_PORT=6379
+REDIS_MASTER_PORT=tcp://10.0.0.11:6379
+REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
+REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
+REDIS_MASTER_PORT_6379_TCP_PORT=6379
+REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
+```
+
+{{< note >}}
+When you have a Pod that needs to access a Service, and you are using
+the environment variable method to publish the port and cluster IP to the client
+Pods, you must create the Service *before* the client Pods come into existence.
+Otherwise, those client Pods won't have their environment variables populated.
+
+If you only use DNS to discover the cluster IP for a Service, you don't need to
+worry about this ordering issue.
+{{< /note >}}
+
+### DNS
+
+You can (and almost always should) set up a DNS service for your Kubernetes
+cluster using an [add-on](/docs/concepts/cluster-administration/addons/).
+
+A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new
+Services and creates a set of DNS records for each one. If DNS has been enabled
+throughout your cluster then all Pods should automatically be able to resolve
+Services by their DNS name.
+
+For example, if you have a Service called `"my-service"` in a Kubernetes
+Namespace `"my-ns"`, the control plane and the DNS Service acting together
+create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace
+should be able to find it by simply doing a name lookup for `my-service`
+(`"my-service.my-ns"` would also work).
+
+Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
+will resolve to the cluster IP assigned for the Service.
+
+Kubernetes also supports DNS SRV (Service) records for named ports. If the
+`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to
+`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover
+the port number for `"http"`, as well as the IP address.
+
+The Kubernetes DNS server is the only way to access `ExternalName` Services.
+You can find more information about `ExternalName` resolution in
+[DNS Pods and Services](/docs/concepts/services-networking/dns-pod-service/).
+
+## Headless Services
+
+Sometimes you don't need load-balancing and a single Service IP. In
+this case, you can create what are termed “headless” Services, by explicitly
+specifying `"None"` for the cluster IP (`.spec.clusterIP`).
+
+You can use a headless Service to interface with other service discovery mechanisms,
+without being tied to Kubernetes' implementation.
+
+For headless `Services`, a cluster IP is not allocated, kube-proxy does not handle
+these Services, and there is no load balancing or proxying done by the platform
+for them. How DNS is automatically configured depends on whether the Service has
+selectors defined:
+
+### With selectors
+
+For headless Services that define selectors, the endpoints controller creates
+`Endpoints` records in the API, and modifies the DNS configuration to return
+records (addresses) that point directly to the `Pods` backing the `Service`.
+
+### Without selectors
+
+For headless Services that do not define selectors, the endpoints controller does
+not create `Endpoints` records. However, the DNS system looks for and configures
+either:
+
+ * CNAME records for [`ExternalName`](#externalname)-type Services.
+ * A records for any `Endpoints` that share a name with the Service, for all
+ other types.
+
+## Publishing Services (ServiceTypes) {#publishing-services-service-types}
+
+For some parts of your application (for example, frontends) you may want to expose a
+Service onto an external IP address that's outside of your cluster.
+
+Kubernetes `ServiceTypes` allow you to specify what kind of Service you want.
+The default is `ClusterIP`.
+
+`Type` values and their behaviors are:
+
+ * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
+ makes the Service only reachable from within the cluster. This is the
+ default `ServiceType`.
+ * [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
+ (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
+ routes, is automatically created. You'll be able to contact the `NodePort` Service,
+ from outside the cluster,
+ by requesting `<NodeIP>:<NodePort>`.
+ * [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
+ provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
+ load balancer routes, are automatically created.
+ * [`ExternalName`](#externalname): Maps the Service to the contents of the
+ `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
+ with its value. No proxying of any kind is set up.
+ {{< note >}}
+ You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type.
+ {{< /note >}}
+
+You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
+
+### Type NodePort {#nodeport}
+
+If you set the `type` field to `NodePort`, the Kubernetes control plane
+allocates a port from a range specified by `--service-node-port-range` flag (default: 30000-32767).
+Each node proxies that port (the same port number on every Node) into your Service.
+Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.
+
+
+If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
+This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
+
+For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases).
+
+If you want a specific port number, you can specify a value in the `nodePort`
+field. The control plane will either allocate you that port or report that
+the API transaction failed.
+This means that you need to take care of possible port collisions yourself.
+You also have to use a valid port number, one that's inside the range configured
+for NodePort use.
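+
+A minimal sketch of a NodePort Service with an explicitly requested node port
+(the value `30007` is only an example and must fall inside the configured range):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  type: NodePort
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+      nodePort: 30007   # optional; omit it to let the control plane pick a port
+```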
+
+Using a NodePort gives you the freedom to set up your own load balancing solution,
+to configure environments that are not fully supported by Kubernetes, or even
+to just expose one or more nodes' IPs directly.
+
+Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
+and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, `<NodeIP>` would be one of the filtered node IP(s).)
+
+### Type LoadBalancer {#loadbalancer}
+
+On cloud providers which support external load balancers, setting the `type`
+field to `LoadBalancer` provisions a load balancer for your Service.
+The actual creation of the load balancer happens asynchronously, and
+information about the provisioned balancer is published in the Service's
+`.status.loadBalancer` field.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ clusterIP: 10.0.171.239
+ type: LoadBalancer
+status:
+ loadBalancer:
+ ingress:
+ - ip: 192.0.2.127
+```
+
+Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
+
+For LoadBalancer type of Services, when there is more than one port defined, all
+ports must have the same protocol and the protocol must be one of `TCP`, `UDP`,
+and `SCTP`.
+
+Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
+with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
+the loadBalancer is set up with an ephemeral IP address. If you specify a `loadBalancerIP`
+but your cloud provider does not support the feature, the `loadBalancerIP` field that you
+set is ignored.
+
+{{< note >}}
+If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the
+`LoadBalancer` Service type.
+{{< /note >}}
+
+{{< note >}}
+
+On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need
+to create a static type public IP address resource. This public IP address resource should
+be in the same resource group as the other automatically created resources of the cluster.
+For example, `MC_myResourceGroup_myAKSCluster_eastus`.
+
+Specify the assigned IP address as `loadBalancerIP`. Ensure that you have updated the `securityGroupName` in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues, see [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
+
+{{< /note >}}
+
+#### Internal load balancer
+In a mixed environment it is sometimes necessary to route traffic from Services inside the same
+(virtual) network address block.
+
+In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
+
+You can achieve this by adding one of the following annotations to a Service.
+The annotation to add depends on the cloud Service provider you're using.
+
+{{< tabs name="service_tabs" >}}
+{{% tab name="Default" %}}
+Select one of the tabs.
+{{% /tab %}}
+{{% tab name="GCP" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ cloud.google.com/load-balancer-type: "Internal"
+[...]
+```
+{{% /tab %}}
+{{% tab name="AWS" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-internal: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Azure" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="OpenStack" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Baidu Cloud" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Tencent Cloud" %}}
+```yaml
+[...]
+metadata:
+ annotations:
+ service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
+[...]
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+#### TLS support on AWS {#ssl-support-on-aws}
+
+For partial TLS / SSL support on clusters running on AWS, you can add three
+annotations to a `LoadBalancer` service:
+
+```yaml
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
+```
+
+The first specifies the ARN of the certificate to use. It can be either a
+certificate from a third party issuer that was uploaded to IAM or one created
+within AWS Certificate Manager.
+
+```yaml
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp)
+```
+
+The second annotation specifies which protocol a Pod speaks. For HTTPS and
+SSL, the ELB expects the Pod to authenticate itself over the encrypted
+connection, using a certificate.
+
+HTTP and HTTPS select layer 7 proxying: the ELB terminates
+the connection with the user, parses headers, and injects the `X-Forwarded-For`
+header with the user's IP address (Pods only see the IP address of the
+ELB at the other end of its connection) when forwarding requests.
+
+TCP and SSL select layer 4 proxying: the ELB forwards traffic without
+modifying the headers.
+
+In a mixed-use environment where some ports are secured and others are left unencrypted,
+you can use the following annotations:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
+ service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
+```
+
+In the above example, if the Service contained three ports, `80`, `443`, and
+`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
+be proxied HTTP.
+
+From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
+To see which policies are available for use, you can use the `aws` command line tool:
+
+```bash
+aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
+```
+
+You can then specify any one of those policies using the
+"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`"
+annotation; for example:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
+```
+
+#### PROXY protocol support on AWS
+
+To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
+support for clusters running on AWS, you can use the following service
+annotation:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
+```
+
+Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB
+and cannot be configured otherwise.
+
+#### ELB Access Logs on AWS
+
+There are several annotations to manage access logs for ELB Services on AWS.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`
+controls whether access logs are enabled.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`
+controls the interval in minutes for publishing the access logs. You can specify
+an interval of either 5 or 60 minutes.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
+controls the name of the Amazon S3 bucket where load balancer access logs are
+stored.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
+specifies the logical hierarchy you created for your Amazon S3 bucket.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
+ # Specifies whether access logs are enabled for the load balancer
+ service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
+ # The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
+ service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
+ # The name of the Amazon S3 bucket where the access logs are stored
+ service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
+ # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
+```
+
+#### Connection Draining on AWS
+
+Connection draining for Classic ELBs can be managed with the annotation
+`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
+to the value of `"true"`. The annotation
+`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
+also be used to set the maximum time, in seconds, to keep the existing connections open before deregistering the instances.
+
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
+ service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
+```
+
+#### Other ELB annotations
+
+There are other annotations to manage Classic Elastic Load Balancers that are described below.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
+ # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer
+
+ service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
+ # Specifies whether cross-zone load balancing is enabled for the load balancer
+
+ service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
+ # A comma-separated list of key-value pairs which will be recorded as
+ # additional tags in the ELB.
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
+ # The number of successive successful health checks required for a backend to
+ # be considered healthy for traffic. Defaults to 2, must be between 2 and 10
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
+ # The number of unsuccessful health checks required for a backend to be
+ # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
+ # The approximate interval, in seconds, between health checks of an
+ # individual instance. Defaults to 10, must be between 5 and 300
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
+ # The amount of time, in seconds, during which no response means a failed
+ # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
+ # value. Defaults to 5, must be between 2 and 60
+
+ service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
+ # A list of additional security groups to be added to the ELB
+```
+
+#### Network Load Balancer support on AWS {#aws-nlb-support}
+
+{{< feature-state for_k8s_version="v1.15" state="beta" >}}
+
+To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
+```
+
+{{< note >}}
+NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
+on Elastic Load Balancing for a list of supported instance types.
+{{< /note >}}
+
+Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the
+client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy`
+is set to `Cluster`, the client's IP address is not propagated to the end
+Pods.
+
+By setting `.spec.externalTrafficPolicy` to `Local`, the client IP addresses are
+propagated to the end Pods, but this could result in uneven distribution of
+traffic. Nodes without any Pods for a particular LoadBalancer Service will fail
+the NLB Target Group's health check on the auto-assigned
+`.spec.healthCheckNodePort` and not receive any traffic.
+
+To achieve evenly distributed traffic, either use a DaemonSet or specify a
+[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
+so that the backend Pods are not placed on the same node.
+
+You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
+annotation.
+
+In order for client traffic to reach instances behind an NLB, the Node security
+groups are modified with the following IP rules:
+
+| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
+|------|----------|---------|------------|---------------------|
+| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
+| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
+| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
+
+To limit which client IPs can access the Network Load Balancer,
+specify `loadBalancerSourceRanges`.
+
+```yaml
+spec:
+ loadBalancerSourceRanges:
+ - "143.231.0.0/16"
+```
+
+{{< note >}}
+If `.spec.loadBalancerSourceRanges` is not set, Kubernetes
+allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have
+public IP addresses, be aware that non-NLB traffic can also reach all instances
+in those modified security groups.
+
+{{< /note >}}
+
+#### Other CLB annotations on Tencent Kubernetes Engine (TKE)
+
+There are other annotations for managing Cloud Load Balancers on TKE as shown below.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ # Bind Loadbalancers with specified nodes
+ service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2)
+
+ # ID of an existing load balancer
+ service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
+
+ # Custom parameters for the load balancer (LB), does not support modification of LB type yet
+ service.kubernetes.io/service.extensiveParameters: ""
+
+ # Custom parameters for the LB listener
+ service.kubernetes.io/service.listenerParameters: ""
+
+ # Specifies the type of Load balancer;
+ # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
+ service.kubernetes.io/loadbalance-type: xxxxx
+
+ # Specifies the public network bandwidth billing method;
+ # valid values: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth).
+ service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
+
+ # Specifies the bandwidth value (value range: [1,2000] Mbps).
+ service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
+
+ # When this annotation is set, the load balancers will only register nodes
+ # that have Pods of this Service running on them; otherwise, all nodes will be registered.
+ service.kubernetes.io/local-svc-only-bind-node-with-pod: true
+```
+
+### Type ExternalName {#externalname}
+
+Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
+`my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter.
+
+This Service definition, for example, maps
+the `my-service` Service in the `prod` namespace to `my.database.example.com`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ namespace: prod
+spec:
+ type: ExternalName
+ externalName: my.database.example.com
+```
+{{< note >}}
+ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
+is intended to specify a canonical DNS name. To hardcode an IP address, consider using
+[headless Services](#headless-services).
+{{< /note >}}
+
+When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service
+returns a `CNAME` record with the value `my.database.example.com`. Accessing
+`my-service` works in the same way as other Services but with the crucial
+difference that redirection happens at the DNS level rather than via proxying or
+forwarding. Should you later decide to move your database into your cluster, you
+can start its Pods, add appropriate selectors or endpoints, and change the
+Service's `type`.
+
+{{< warning >}}
+You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
+
+For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
+{{< /warning >}}
+
+{{< note >}}
+This section is indebted to the [Kubernetes Tips - Part
+1](https://akomljen.com/kubernetes-tips-part-1/) blog post from [Alen Komljen](https://akomljen.com/).
+{{< /note >}}
+
+### External IPs
+
+If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those
+`externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port,
+will be routed to one of the Service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility
+of the cluster administrator.
+
+In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`.
+In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`)
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 9376
+ externalIPs:
+ - 80.11.12.10
+```
+
+## Shortcomings
+
+Using the userspace proxy for VIPs works at small to medium scale, but does
+not scale to very large clusters with thousands of Services. The [original
+design proposal for portals](http://issue.k8s.io/1107) has more details on
+this.
+
+Using the userspace proxy obscures the source IP address of a packet accessing
+a Service.
+This makes some kinds of network filtering (firewalling) impossible. The iptables
+proxy mode does not
+obscure in-cluster source IPs, but it does still impact clients coming through
+a load balancer or node-port.
+
+The `Type` field is designed as nested functionality - each level adds to the
+previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
+not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
+but the current API requires it.
+
+## Virtual IP implementation {#the-gory-details-of-virtual-ips}
+
+The previous information should be sufficient for many people who just want to
+use Services. However, there is a lot going on behind the scenes that may be
+worth understanding.
+
+### Avoiding collisions
+
+One of the primary philosophies of Kubernetes is that you should not be
+exposed to situations that could cause your actions to fail through no fault
+of your own. For the design of the Service resource, this means not making
+you choose your own port number if that choice might collide with
+someone else's choice. That is an isolation failure.
+
+In order to allow you to choose a port number for your Services, we must
+ensure that no two Services can collide. Kubernetes does that by allocating each
+Service its own IP address.
+
+To ensure each Service receives a unique IP, an internal allocator atomically
+updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}}
+prior to creating each Service. The map object must exist in the registry for
+Services to get IP address assignments, otherwise creations will
+fail with a message indicating an IP address could not be allocated.
+
+In the control plane, a background controller is responsible for creating that
+map (needed to support migrating from older versions of Kubernetes that used
+in-memory locking). Kubernetes also uses controllers to check for invalid
+assignments (e.g. due to administrator intervention) and for cleaning up allocated
+IP addresses that are no longer used by any Services.
+
+### Service IP addresses {#ips-and-vips}
+
+Unlike Pod IP addresses, which actually route to a fixed destination,
+Service IPs are not actually answered by a single host. Instead, kube-proxy
+uses iptables (packet processing logic in Linux) to define _virtual_ IP addresses
+which are transparently redirected as needed. When clients connect to the
+VIP, their traffic is automatically transported to an appropriate endpoint.
+The environment variables and DNS for Services are actually populated in
+terms of the Service's virtual IP address (and port).
+
+kube-proxy supports three proxy modes—userspace, iptables and IPVS—which
+each operate slightly differently.
+
+#### Userspace
+
+As an example, consider the image processing application described above.
+When the backend Service is created, the Kubernetes master assigns a virtual
+IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
+Service is observed by all of the kube-proxy instances in the cluster.
+When a proxy sees a new Service, it opens a new random port, establishes an
+iptables redirect from the virtual IP address to this new port, and starts accepting
+connections on it.
+
+When a client connects to the Service's virtual IP address, the iptables
+rule kicks in, and redirects the packets to the proxy's own port.
+The “Service proxy” chooses a backend, and starts proxying traffic from the client to the backend.
+
+This means that Service owners can choose any port they want without risk of
+collision. Clients can simply connect to an IP and port, without being aware
+of which Pods they are actually accessing.
+
+#### iptables
+
+Again, consider the image processing application described above.
+When the backend Service is created, the Kubernetes control plane assigns a virtual
+IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
+Service is observed by all of the kube-proxy instances in the cluster.
+When a proxy sees a new Service, it installs a series of iptables rules which
+redirect from the virtual IP address to per-Service rules. The per-Service
+rules link to per-Endpoint rules which redirect traffic (using destination NAT)
+to the backends.
+
+When a client connects to the Service's virtual IP address the iptables rule kicks in.
+A backend is chosen (either based on session affinity or randomly) and packets are
+redirected to the backend. Unlike the userspace proxy, packets are never
+copied to userspace, the kube-proxy does not have to be running for the virtual
+IP address to work, and Nodes see traffic arriving from the unaltered client IP
+address.
+
+This same basic flow executes when traffic comes in through a node-port or
+through a load-balancer, though in those cases the client IP does get altered.
+
+#### IPVS
+
+iptables operations slow down dramatically in large-scale clusters, for example those with 10,000 Services.
+IPVS is designed for load balancing and is based on in-kernel hash tables, so an IPVS-based kube-proxy gives you consistent performance even with a large number of Services. IPVS-based kube-proxy also supports more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
+
+## API Object
+
+Service is a top-level resource in the Kubernetes REST API. You can find more details
+about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
+
+## Supported protocols {#protocol-support}
+
+### TCP
+
+You can use TCP for any kind of Service, and it's the default network protocol.
+
+### UDP
+
+You can use UDP for most Services. For type=LoadBalancer Services, UDP support
+depends on the cloud provider offering this facility.
+
+### HTTP
+
+If your cloud provider supports it, you can use a Service in LoadBalancer mode
+to set up external HTTP / HTTPS reverse proxying, forwarded to the Endpoints
+of the Service.
+
+{{< note >}}
+You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
+to expose HTTP / HTTPS Services.
+{{< /note >}}
+
+### PROXY protocol
+
+If your cloud provider supports it (e.g., [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)),
+you can use a Service in LoadBalancer mode to configure a load balancer outside
+of Kubernetes itself, that will forward connections prefixed with
+[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
+
+The load balancer will send an initial series of octets describing the
+incoming connection, similar to this example:
+
+```
+PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
+```
+followed by the data from the client.
+
+### SCTP
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `--feature-gates=SCTPSupport=true,…`.
+
+When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoint, NetworkPolicy or Pod to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections.
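+
+For illustration, a sketch of a Service that uses SCTP. This assumes the
+`SCTPSupport` feature gate is enabled, and the port numbers are hypothetical:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-sctp-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+    - protocol: SCTP
+      port: 9260        # hypothetical port numbers
+      targetPort: 9260
+```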
+
+#### Warnings {#caveat-sctp-overview}
+
+##### Support for multihomed SCTP associations {#caveat-sctp-multihomed}
+
+{{< warning >}}
+The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.
+
+NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.
+{{< /warning >}}
+
+##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type}
+
+{{< warning >}}
+You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP.
+{{< /warning >}}
+
+##### Windows {#caveat-sctp-windows-os}
+
+{{< warning >}}
+SCTP is not supported on Windows based nodes.
+{{< /warning >}}
+
+##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace}
+
+{{< warning >}}
+The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
+{{< /warning >}}
+
+## Future work
+
+In the future, the proxy policy for Services can become more nuanced than
+simple round-robin balancing, for example master-elected or sharded. We also
+envision that some Services will have "real" load balancers, in which case the
+virtual IP address will simply transport the packets there.
+
+The Kubernetes project intends to improve support for L7 (HTTP) Services.
+
+The Kubernetes project intends to have more flexible ingress modes for Services
+that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+* Read about [Ingress](/docs/concepts/services-networking/ingress/)
+* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/storage/_index.md b/content/uk/docs/concepts/storage/_index.md
new file mode 100644
index 0000000000..23108a421c
--- /dev/null
+++ b/content/uk/docs/concepts/storage/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Сховища інформації"
+weight: 70
+---
diff --git a/content/uk/docs/concepts/storage/persistent-volumes.md b/content/uk/docs/concepts/storage/persistent-volumes.md
new file mode 100644
index 0000000000..e348abb931
--- /dev/null
+++ b/content/uk/docs/concepts/storage/persistent-volumes.md
@@ -0,0 +1,736 @@
+---
+reviewers:
+- jsafrane
+- saad-ali
+- thockin
+- msau42
+title: Persistent Volumes
+feature:
+ title: Оркестрація сховищем
+ description: >
+ Автоматично монтує систему збереження даних на ваш вибір: з локального носія даних, із хмарного сховища від провайдера публічних хмарних сервісів, як-от GCP чи AWS , або з мережевого сховища, такого як: NFS, iSCSI, Gluster, Ceph, Cinder чи Flocker.
+
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Introduction
+
+Managing storage is a distinct problem from managing compute instances. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`.
+
+A `PersistentVolume` (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
+
+A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
+
+While `PersistentVolumeClaims` allow a user to consume abstract storage
+resources, it is common that users need `PersistentVolumes` with varying
+properties, such as performance, for different problems. Cluster administrators
+need to be able to offer a variety of `PersistentVolumes` that differ in more
+ways than just size and access modes, without exposing users to the details of
+how those volumes are implemented. For these needs, there is the `StorageClass`
+resource.
+
+See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
+
+
+## Lifecycle of a volume and claim
+
+PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:
+
+### Provisioning
+
+There are two ways PVs may be provisioned: statically or dynamically.
+
+#### Static
+A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
+
+#### Dynamic
+When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`,
+the cluster may try to dynamically provision a volume specially for the PVC.
+This provisioning is based on `StorageClasses`: the PVC must request a
+[storage class](/docs/concepts/storage/storage-classes/) and
+the administrator must have created and configured that class for dynamic
+provisioning to occur. Claims that request the class `""` effectively disable
+dynamic provisioning for themselves.
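+
+For illustration, a sketch of a claim that requests dynamic provisioning from a
+hypothetical `fast` storage class; the class itself must already have been created
+by an administrator:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-dynamic-claim
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: fast    # hypothetical, pre-created StorageClass
+  resources:
+    requests:
+      storage: 30Gi
+```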
+
+To enable dynamic storage provisioning based on storage class, the cluster administrator
+needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
+on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
+among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
+the API server component. For more information on API server command-line flags,
+check [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
+
+### Binding
+
+A user creates, or in the case of dynamic provisioning, has already created, a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, `PersistentVolumeClaim` binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping.
+
+Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
+
+### Using
+
+Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod.
+
+Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
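+
+As a quick sketch (the claim name `my-pvc`, the image, and the mount path are
+hypothetical), a Pod consumes a claim like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-app
+spec:
+  containers:
+    - name: app
+      image: nginx                   # hypothetical image
+      volumeMounts:
+        - mountPath: "/usr/share/nginx/html"
+          name: my-volume
+  volumes:
+    - name: my-volume
+      persistentVolumeClaim:
+        claimName: my-pvc            # hypothetical, must reference an existing PVC
+```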
+
+### Storage Object in Use Protection
+The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod and PersistentVolumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.
+
+{{< note >}}
+PVC is in active use by a Pod when a Pod object exists that is using the PVC.
+{{< /note >}}
+
+If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
+
+You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`:
+
+```shell
+kubectl describe pvc hostpath
+Name: hostpath
+Namespace: default
+StorageClass: example-hostpath
+Status: Terminating
+Volume:
+Labels:
+Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath
+ volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath
+Finalizers: [kubernetes.io/pvc-protection]
+...
+```
+
+You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too:
+
+```shell
+kubectl describe pv task-pv-volume
+Name: task-pv-volume
+Labels: type=local
+Annotations:
+Finalizers: [kubernetes.io/pv-protection]
+StorageClass: standard
+Status: Terminating
+Claim:
+Reclaim Policy: Delete
+Access Modes: RWO
+Capacity: 1Gi
+Message:
+Source:
+ Type: HostPath (bare host directory volume)
+ Path: /tmp/data
+ HostPathType:
+Events:
+```
+
+### Reclaiming
+
+When a user is done with their volume, they can delete the PVC objects from the API, which allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.
+
+#### Retain
+
+The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
+
+1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
+1. Manually clean up the data on the associated storage asset accordingly.
+1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition.
+
+#### Delete
+
+For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations; otherwise, the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
+
+#### Recycle
+
+{{< warning >}}
+The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
+{{< /warning >}}
+
+If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
+
+However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pv-recycler
+ namespace: default
+spec:
+ restartPolicy: Never
+ volumes:
+ - name: vol
+ hostPath:
+ path: /any/path/it/will/be/replaced
+ containers:
+ - name: pv-recycler
+ image: "k8s.gcr.io/busybox"
+ command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
+ volumeMounts:
+ - name: vol
+ mountPath: /scrub
+```
+
+However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled.
+
+### Expanding Persistent Volumes Claims
+
+{{< feature-state for_k8s_version="v1.11" state="beta" >}}
+
+Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand
+the following types of volumes:
+
+* gcePersistentDisk
+* awsElasticBlockStore
+* Cinder
+* glusterfs
+* rbd
+* Azure File
+* Azure Disk
+* Portworx
+* FlexVolumes
+* CSI
+
+You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
+
+``` yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: gluster-vol-default
+provisioner: kubernetes.io/glusterfs
+parameters:
+ resturl: "http://192.168.10.100:8080"
+ restuser: ""
+ secretNamespace: ""
+ secretName: ""
+allowVolumeExpansion: true
+```
+
+To request a larger volume for a PVC, edit the PVC object and specify a larger
+size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A
+new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized.
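+
+For example, a sketch of a claim that was originally created with a smaller request
+and has been edited to ask for more space (the claim name and sizes are illustrative):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-expanding-claim
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: gluster-vol-default   # the expandable class defined above
+  resources:
+    requests:
+      storage: 10Gi                       # increased from an original 5Gi request
+```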
+
+#### CSI Volume expansion
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information.
+
+
+#### Resizing a volume containing a file system
+
+You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4.
+
+When a volume contains a file system, the file system is only resized when a new Pod is using
+the `PersistentVolumeClaim` in ReadWrite mode. File system expansion is either done when a Pod is starting up
+or when a Pod is running and the underlying file system supports online expansion.
+
+FlexVolumes allow resizing if the driver's `RequiresFSResize` capability is set to `true`.
+A FlexVolume can be resized on Pod restart.
+
+#### Resizing an in-use PersistentVolumeClaim
+
+{{< feature-state for_k8s_version="v1.15" state="beta" >}}
+
+{{< note >}}
+Expanding in-use PVCs has been available as beta since Kubernetes 1.15, and as alpha since 1.11. The `ExpandInUsePersistentVolumes` feature gate must be enabled; as a beta feature, it is enabled automatically on many clusters. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
+{{< /note >}}
+
+In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC.
+Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.
+This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that
+uses the PVC before the expansion can complete.
+
+
+Similar to other volume types, FlexVolume volumes can also be expanded when in use by a Pod.
+
+{{< note >}}
+FlexVolume resize is possible only when the underlying driver supports resize.
+{{< /note >}}
+
+{{< note >}}
+Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume quota of one modification every 6 hours.
+{{< /note >}}
+
+
+## Types of Persistent Volumes
+
+`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins:
+
+* GCEPersistentDisk
+* AWSElasticBlockStore
+* AzureFile
+* AzureDisk
+* CSI
+* FC (Fibre Channel)
+* FlexVolume
+* Flocker
+* NFS
+* iSCSI
+* RBD (Ceph Block Device)
+* CephFS
+* Cinder (OpenStack block storage)
+* Glusterfs
+* VsphereVolume
+* Quobyte Volumes
+* HostPath (Single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
+* Portworx Volumes
+* ScaleIO Volumes
+* StorageOS
+
+## Persistent Volumes
+
+Each PV contains a spec and status, which is the specification and status of the volume.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv0003
+spec:
+ capacity:
+ storage: 5Gi
+ volumeMode: Filesystem
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Recycle
+ storageClassName: slow
+ mountOptions:
+ - hard
+ - nfsvers=4.1
+ nfs:
+ path: /tmp
+ server: 172.17.0.2
+```
+
+### Capacity
+
+Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`.
+
+Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
+
+### Volume Mode
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume.
+Now, you can set the value of `volumeMode` to `Block` to use a raw block device, or `Filesystem`
+to use a filesystem. `Filesystem` is the default if the value is omitted. This is an optional API
+parameter.
+
+### Access Modes
+
+A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
+
+The access modes are:
+
+* ReadWriteOnce -- the volume can be mounted as read-write by a single node
+* ReadOnlyMany -- the volume can be mounted read-only by many nodes
+* ReadWriteMany -- the volume can be mounted as read-write by many nodes
+
+In the CLI, the access modes are abbreviated to:
+
+* RWO - ReadWriteOnce
+* ROX - ReadOnlyMany
+* RWX - ReadWriteMany
+
+> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
+
+
+| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany|
+| :--- | :---: | :---: | :---: |
+| AWSElasticBlockStore | ✓ | - | - |
+| AzureFile | ✓ | ✓ | ✓ |
+| AzureDisk | ✓ | - | - |
+| CephFS | ✓ | ✓ | ✓ |
+| Cinder | ✓ | - | - |
+| CSI | depends on the driver | depends on the driver | depends on the driver |
+| FC | ✓ | ✓ | - |
+| FlexVolume | ✓ | ✓ | depends on the driver |
+| Flocker | ✓ | - | - |
+| GCEPersistentDisk | ✓ | ✓ | - |
+| Glusterfs | ✓ | ✓ | ✓ |
+| HostPath | ✓ | - | - |
+| iSCSI | ✓ | ✓ | - |
+| Quobyte | ✓ | ✓ | ✓ |
+| NFS | ✓ | ✓ | ✓ |
+| RBD | ✓ | ✓ | - |
+| VsphereVolume | ✓ | - | - (works when Pods are collocated) |
+| PortworxVolume | ✓ | - | ✓ |
+| ScaleIO | ✓ | ✓ | - |
+| StorageOS | ✓ | - | - |
+
+### Class
+
+A PV can have a class, which is specified by setting the
+`storageClassName` attribute to the name of a
+[StorageClass](/docs/concepts/storage/storage-classes/).
+A PV of a particular class can only be bound to PVCs requesting
+that class. A PV with no `storageClassName` has no class and can only be bound
+to PVCs that request no particular class.
+
+In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
+of the `storageClassName` attribute. This annotation is still working; however,
+it will become fully deprecated in a future Kubernetes release.
+
+### Reclaim Policy
+
+Current reclaim policies are:
+
+* Retain -- manual reclamation
+* Recycle -- basic scrub (`rm -rf /thevolume/*`)
+* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
+
+Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
+
+### Mount Options
+
+A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node.
+
+{{< note >}}
+Not all Persistent Volume types support mount options.
+{{< /note >}}
+
+The following volume types support mount options:
+
+* AWSElasticBlockStore
+* AzureDisk
+* AzureFile
+* CephFS
+* Cinder (OpenStack block storage)
+* GCEPersistentDisk
+* Glusterfs
+* NFS
+* Quobyte Volumes
+* RBD (Ceph Block Device)
+* StorageOS
+* VsphereVolume
+* iSCSI
+
+Mount options are not validated; if a mount option is invalid, the mount fails.
+
+In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
+of the `mountOptions` attribute. This annotation is still working; however,
+it will become fully deprecated in a future Kubernetes release.
+
+### Node Affinity
+
+{{< note >}}
+For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
+{{< /note >}}
+
+A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity.
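+
+For example, here is a minimal sketch of a PersistentVolume for a [local](/docs/concepts/storage/volumes/#local) volume that restricts access to a single node. The node name `example-node`, the path `/mnt/disks/ssd1`, and the `local-storage` class name are illustrative assumptions:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-local-pv
+spec:
+  capacity:
+    storage: 100Gi
+  accessModes:
+    - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: local-storage          # assumed pre-created StorageClass for local volumes
+  local:
+    path: /mnt/disks/ssd1                  # assumed path to a disk on the node
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+        - matchExpressions:
+            - key: kubernetes.io/hostname
+              operator: In
+              values:
+                - example-node             # assumed node name
+```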
+
+### Phase
+
+A volume will be in one of the following phases:
+
+* Available -- a free resource that is not yet bound to a claim
+* Bound -- the volume is bound to a claim
+* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
+* Failed -- the volume has failed its automatic reclamation
+
+The CLI will show the name of the PVC bound to the PV.
+
+## PersistentVolumeClaims
+
+Each PVC contains a spec and status, which is the specification and status of the claim.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: myclaim
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Filesystem
+ resources:
+ requests:
+ storage: 8Gi
+ storageClassName: slow
+ selector:
+ matchLabels:
+ release: "stable"
+ matchExpressions:
+ - {key: environment, operator: In, values: [dev]}
+```
+
+### Access Modes
+
+Claims use the same conventions as volumes when requesting storage with specific access modes.
+
+### Volume Modes
+
+Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device.
+
+### Resources
+
+Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims.
+
+### Selector
+
+Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:
+
+* `matchLabels` - the volume must have a label with this value
+* `matchExpressions` - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
+
+All of the requirements, from both `matchLabels` and `matchExpressions`, are ANDed together – they must all be satisfied in order to match.
+
+### Class
+
+A claim can request a particular class by specifying the name of a
+[StorageClass](/docs/concepts/storage/storage-classes/)
+using the attribute `storageClassName`.
+Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can
+be bound to the PVC.
+
+PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set
+equal to `""` is always interpreted to be requesting a PV with no class, so it
+can only be bound to PVs with no class (no annotation or one set equal to
+`""`). A PVC with no `storageClassName` is not quite the same and is treated differently
+by the cluster, depending on whether the
+[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
+is turned on.
+
+* If the admission plugin is turned on, the administrator may specify a
+ default `StorageClass`. All PVCs that have no `storageClassName` can be bound only to
+ PVs of that default. Specifying a default `StorageClass` is done by setting the
+ annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in
+  a `StorageClass` object, as shown in the sketch after this list. If the administrator does not specify a default, the
+ cluster responds to PVC creation as if the admission plugin were turned off. If
+ more than one default is specified, the admission plugin forbids the creation of
+ all PVCs.
+* If the admission plugin is turned off, there is no notion of a default
+ `StorageClass`. All PVCs that have no `storageClassName` can be bound only to PVs that
+ have no class. In this case, the PVCs that have no `storageClassName` are treated the
+ same way as PVCs that have their `storageClassName` set to `""`.
+
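+As a sketch of the annotation mentioned above (the `standard` class name and the `kubernetes.io/gce-pd` provisioner are illustrative; use whatever provisioner is available in your cluster):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: standard                           # illustrative name
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: kubernetes.io/gce-pd          # assumed provisioner; replace with one available in your cluster
+```
+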
+Depending on the installation method, a default StorageClass may be deployed
+to a Kubernetes cluster by an addon manager during installation.
+
+When a PVC specifies a `selector` in addition to requesting a `StorageClass`,
+the requirements are ANDed together: only a PV of the requested class and with
+the requested labels may be bound to the PVC.
+
+{{< note >}}
+Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it.
+{{< /note >}}
+
+In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
+of the `storageClassName` attribute. This annotation is still working; however,
+it won't be supported in a future Kubernetes release.
+
+## Claims As Volumes
+
+Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the Pod.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: myfrontend
+ image: nginx
+ volumeMounts:
+ - mountPath: "/var/www/html"
+ name: mypd
+ volumes:
+ - name: mypd
+ persistentVolumeClaim:
+ claimName: myclaim
+```
+
+### A Note on Namespaces
+
+`PersistentVolume` binds are exclusive, and since `PersistentVolumeClaims` are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.
+
+## Raw Block Volume Support
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+The following volume plugins support raw block volumes, including dynamic provisioning where
+applicable:
+
+* AWSElasticBlockStore
+* AzureDisk
+* FC (Fibre Channel)
+* GCEPersistentDisk
+* iSCSI
+* Local volume
+* RBD (Ceph Block Device)
+* VsphereVolume (alpha)
+
+{{< note >}}
+Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9.
+Support for the additional plugins was added in 1.10.
+{{< /note >}}
+
+### Persistent Volumes using a Raw Block Volume
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: block-pv
+spec:
+ capacity:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Block
+ persistentVolumeReclaimPolicy: Retain
+ fc:
+ targetWWNs: ["50060e801049cfd1"]
+ lun: 0
+ readOnly: false
+```
+
+### Persistent Volume Claim requesting a Raw Block Volume
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: block-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Block
+ resources:
+ requests:
+ storage: 10Gi
+```
+
+### Pod specification adding Raw Block Device path in container
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod-with-block-volume
+spec:
+ containers:
+ - name: fc-container
+ image: fedora:26
+ command: ["/bin/sh", "-c"]
+ args: [ "tail -f /dev/null" ]
+ volumeDevices:
+ - name: data
+ devicePath: /dev/xvda
+ volumes:
+ - name: data
+ persistentVolumeClaim:
+ claimName: block-pvc
+```
+
+{{< note >}}
+When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.
+{{< /note >}}
+
+### Binding Block Volumes
+
+If a user requests a raw block volume by indicating this using the `volumeMode` field in the `PersistentVolumeClaim` spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec.
+The following table lists the possible combinations that a user and an admin might specify for requesting a raw block device, and indicates whether the volume will be bound for each combination.
+
+Volume binding matrix for statically provisioned volumes:
+
+| PV volumeMode | PVC volumeMode | Result |
+| --------------|:---------------:| ----------------:|
+| unspecified | unspecified | BIND |
+| unspecified | Block | NO BIND |
+| unspecified | Filesystem | BIND |
+| Block | unspecified | NO BIND |
+| Block | Block | BIND |
+| Block | Filesystem | NO BIND |
+| Filesystem | Filesystem | BIND |
+| Filesystem | Block | NO BIND |
+| Filesystem | unspecified | BIND |
+
+{{< note >}}
+Only statically provisioned volumes are supported for the alpha release. Administrators should take care to consider these values when working with raw block devices.
+{{< /note >}}
+
+## Volume Snapshot and Restore Volume from Snapshot Support
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+The volume snapshot feature was added to support CSI volume plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/).
+
+To enable support for restoring a volume from a volume snapshot data source, enable the
+`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager.
+
+### Create Persistent Volume Claim from Volume Snapshot
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: restore-pvc
+spec:
+ storageClassName: csi-hostpath-sc
+ dataSource:
+ name: new-snapshot-test
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+```
+
+## Volume Cloning
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+The volume cloning feature was added to support CSI volume plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/).
+
+To enable support for cloning a volume from a PVC data source, enable the
+`VolumePVCDataSource` feature gate on the apiserver and controller-manager.
+
+### Create Persistent Volume Claim from an existing PVC
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: cloned-pvc
+spec:
+ storageClassName: my-csi-plugin
+ dataSource:
+ name: existing-src-pvc-name
+ kind: PersistentVolumeClaim
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+```
+
+## Writing Portable Configuration
+
+If you're writing configuration templates or examples that run on a wide range of clusters
+and need persistent storage, it is recommended that you use the following pattern:
+
+- Include PersistentVolumeClaim objects in your bundle of config (alongside
+ Deployments, ConfigMaps, etc).
+- Do not include PersistentVolume objects in the config, since the user instantiating
+ the config may not have permission to create PersistentVolumes.
+- Give the user the option of providing a storage class name when instantiating
+ the template.
+ - If the user provides a storage class name, put that value into the
+ `persistentVolumeClaim.storageClassName` field.
+ This will cause the PVC to match the right storage
+ class if the cluster has StorageClasses enabled by the admin.
+ - If the user does not provide a storage class name, leave the
+ `persistentVolumeClaim.storageClassName` field as nil. This will cause a
+ PV to be automatically provisioned for the user with the default StorageClass
+ in the cluster. Many cluster environments have a default StorageClass installed,
+    or administrators can create their own default StorageClass (see the sketch after this list).
+- In your tooling, watch for PVCs that are not getting bound after some time
+ and surface this to the user, as this may indicate that the cluster has no
+ dynamic storage support (in which case the user should create a matching PV)
+ or the cluster has no storage system (in which case the user cannot deploy
+ config requiring PVCs).
+
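+To illustrate the pattern above, here is a minimal sketch of a portable claim; the `app-data` name and the 1Gi size are placeholders, and `storageClassName` is deliberately omitted so that the cluster's default StorageClass, if any, is used:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: app-data              # placeholder name
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi            # placeholder size
+  # storageClassName is intentionally left unset; add it only if the user supplies a class name
+```
+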
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/_index.md b/content/uk/docs/concepts/workloads/_index.md
new file mode 100644
index 0000000000..c826cbbcbc
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Робочі навантаження"
+weight: 50
+---
diff --git a/content/uk/docs/concepts/workloads/controllers/_index.md b/content/uk/docs/concepts/workloads/controllers/_index.md
new file mode 100644
index 0000000000..3e5306f908
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Контролери"
+weight: 20
+---
diff --git a/content/uk/docs/concepts/workloads/controllers/deployment.md b/content/uk/docs/concepts/workloads/controllers/deployment.md
new file mode 100644
index 0000000000..4d676e76f0
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/deployment.md
@@ -0,0 +1,1152 @@
+---
+reviewers:
+- janetkuo
+title: Deployments
+feature:
+ title: Автоматичне розгортання і відкатування
+ description: >
+ Kubernetes вносить зміни до вашого застосунку чи його конфігурації по мірі їх надходження. Водночас система моніторить робочий стан застосунку для того, щоб ці зміни не призвели до одночасної зупинки усіх ваших Подів. У випадку будь-яких збоїв, Kubernetes відкотить зміни назад. Скористайтеся перевагами зростаючої екосистеми інструментів для розгортання застосунків.
+
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
+[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
+
+You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
+
+{{< note >}}
+Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
+{{< /note >}}
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Use Cases
+
+The following are typical use cases for Deployments:
+
+* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
+* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
+* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
+* [Scale up the Deployment to facilitate more load](#scaling-a-deployment).
+* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
+* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout is stuck.
+* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore.
+
+## Creating a Deployment
+
+The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
+
+{{< codenew file="controllers/nginx-deployment.yaml" >}}
+
+In this example:
+
+* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
+* The Deployment creates three replicated Pods, indicated by the `replicas` field.
+* The `selector` field defines how the Deployment finds which Pods to manage.
+ In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
+ However, more sophisticated selection rules are possible,
+ as long as the Pod template itself satisfies the rule.
+ {{< note >}}
+ The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map
+  is equivalent to an element of `matchExpressions`, whose key field is "key", the operator is "In",
+ and the values array contains only "value".
+ All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
+ {{< /note >}}
+
+* The `template` field contains the following sub-fields:
+  * The Pods are labeled `app: nginx` using the `labels` field.
+ * The Pod template's specification, or `.template.spec` field, indicates that
+ the Pods run one container, `nginx`, which runs the `nginx`
+ [Docker Hub](https://hub.docker.com/) image at version 1.7.9.
+ * Create one container and name it `nginx` using the `name` field.
+
+ Follow the steps given below to create the above Deployment:
+
+ Before you begin, make sure your Kubernetes cluster is up and running.
+
+ 1. Create the Deployment by running the following command:
+
+ {{< note >}}
+  You may specify the `--record` flag to write the command executed in the resource annotation `kubernetes.io/change-cause`. It is useful for future introspection,
+  for example, to see the commands executed in each Deployment revision.
+ {{< /note >}}
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
+ ```
+
+ 2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following:
+ ```shell
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 0/3 0 0 1s
+ ```
+ When you inspect the Deployments in your cluster, the following fields are displayed:
+
+   * `NAME` lists the names of the Deployments in the cluster.
+   * `READY` displays how many replicas of the application are ready out of the desired number (ready/desired).
+   * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state.
+   * `AVAILABLE` displays how many replicas of the application are available to your users.
+   * `AGE` displays the amount of time that the application has been running.
+
+   Notice how the number of desired replicas is 3 according to the `.spec.replicas` field.
+
+ 3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this:
+ ```shell
+ Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+ deployment.apps/nginx-deployment successfully rolled out
+ ```
+
+  4. Run `kubectl get deployments` again a few seconds later. The output is similar to this:
+ ```shell
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 18s
+ ```
+ Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
+
+ 5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:
+ ```shell
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-75675f5897 3 3 3 18s
+ ```
+   Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is
+   generated using the `pod-template-hash` as a seed.
+
+ 6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned:
+ ```shell
+ NAME READY STATUS RESTARTS AGE LABELS
+ nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ ```
+ The created ReplicaSet ensures that there are three `nginx` Pods.
+
+ {{< note >}}
+ You must specify an appropriate selector and Pod template labels in a Deployment (in this case,
+ `app: nginx`). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.
+ {{< /note >}}
+
+### Pod-template-hash label
+
+{{< note >}}
+Do not change this label.
+{{< /note >}}
+
+The `pod-template-hash` label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.
+
+This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the `PodTemplate` of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels,
+and in any existing Pods that the ReplicaSet might have.
+
+## Updating a Deployment
+
+{{< note >}}
+A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`)
+is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
+{{< /note >}}
+
+Follow the steps given below to update your Deployment:
+
+1. Let's update the nginx Pods to use the `nginx:1.9.1` image instead of the `nginx:1.7.9` image.
+
+ ```shell
+  kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
+ ```
+ or simply use the following command:
+
+ ```shell
+ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+ Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
+
+ ```shell
+ kubectl edit deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment edited
+ ```
+
+2. To see the rollout status, run:
+
+ ```shell
+ kubectl rollout status deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+ ```
+ or
+ ```
+ deployment.apps/nginx-deployment successfully rolled out
+ ```
+
+Get more details on your updated Deployment:
+
+* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
+ The output is similar to this:
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 36s
+ ```
+
+* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
+up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
+
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1564180365 3 3 3 6s
+ nginx-deployment-2035384211 0 0 0 36s
+ ```
+
+* Running `get pods` should now show only the new Pods:
+
+ ```shell
+ kubectl get pods
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY STATUS RESTARTS AGE
+ nginx-deployment-1564180365-khku8 1/1 Running 0 14s
+ nginx-deployment-1564180365-nacti 1/1 Running 0 14s
+ nginx-deployment-1564180365-z9gth 1/1 Running 0 14s
+ ```
+
+ Next time you want to update these Pods, you only need to update the Deployment's Pod template again.
+
+ Deployment ensures that only a certain number of Pods are down while they are being updated. By default,
+ it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
+
+ Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
+ By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
+
+ For example, if you look at the above Deployment closely, you will see that it first created a new Pod,
+ then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of
+ new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
+  It makes sure that at least 2 Pods are available and that at most 4 Pods in total are available.
+
+* Get details of your Deployment:
+ ```shell
+ kubectl describe deployments
+ ```
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000
+ Labels: app=nginx
+ Annotations: deployment.kubernetes.io/revision=2
+ Selector: app=nginx
+ Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+ OldReplicaSets:
+ NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
+ Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3
+ Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1
+ Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2
+ Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2
+ Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1
+ Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3
+ Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0
+ ```
+ Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
+ and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet
+ (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at
+ least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down
+ the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas
+ in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
+
+### Rollover (aka multiple updates in-flight)
+
+Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up
+the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels
+match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new
+ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets is scaled to 0.
+
+If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet
+as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously
+ -- it will add it to its list of old ReplicaSets and start scaling it down.
+
+For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`,
+but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3
+replicas of `nginx:1.7.9` had been created. In that case, the Deployment immediately starts
+killing the 3 `nginx:1.7.9` Pods that it had created, and starts creating
+`nginx:1.9.1` Pods. It does not wait for the 5 replicas of `nginx:1.7.9` to be created
+before changing course.
+
+### Label selector updates
+
+It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front.
+In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped
+all of the implications.
+
+{{< note >}}
+In API version `apps/v1`, a Deployment's label selector is immutable after it gets created.
+{{< /note >}}
+
+* Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too,
+otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does
+not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and
+creating a new ReplicaSet.
+* Selector updates change the existing value in a selector key -- they result in the same behavior as additions.
+* Selector removals remove an existing key from the Deployment selector -- they do not require any changes in the
+Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the
+removed label still exists in any existing Pods and ReplicaSets.
+
+## Rolling Back a Deployment
+
+Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping.
+By default, all of the Deployment's rollout history is kept in the system so that you can rollback anytime you want
+(you can change that by modifying revision history limit).
+
+{{< note >}}
+A Deployment's revision is created when a Deployment's rollout is triggered. This means that the
+new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed,
+for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment,
+do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling.
+This means that when you roll back to an earlier revision, only the Deployment's Pod template part is
+rolled back.
+{{< /note >}}
+
+* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
+
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* The rollout gets stuck. You can verify it by checking the rollout status:
+
+ ```shell
+ kubectl rollout status deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
+ ```
+
+* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
+[read more here](#deployment-status).
+
+* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 3, and the number of new replicas (`nginx-deployment-3066724191`) is 1.
+
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1564180365 3 3 3 25s
+ nginx-deployment-2035384211 0 0 0 36s
+ nginx-deployment-3066724191 1 1 0 6s
+ ```
+
+* Looking at the Pods created, you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.
+
+ ```shell
+ kubectl get pods
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY STATUS RESTARTS AGE
+ nginx-deployment-1564180365-70iae 1/1 Running 0 25s
+ nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
+ nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
+ nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
+ ```
+
+ {{< note >}}
+ The Deployment controller stops the bad rollout automatically, and stops scaling up the new
+ ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified.
+ Kubernetes by default sets the value to 25%.
+ {{< /note >}}
+
+* Get the description of the Deployment:
+ ```shell
+ kubectl describe deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
+ Labels: app=nginx
+ Selector: app=nginx
+ Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.91
+ Port: 80/TCP
+ Host Port: 0/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True ReplicaSetUpdated
+ OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
+ NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
+ Events:
+ FirstSeen LastSeen Count From SubObjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- -------- ------ -------
+ 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
+ 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1
+ 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
+ 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
+ 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
+ ```
+
+  To fix this, you need to roll back to a previous revision of the Deployment that is stable.
+
+### Checking Rollout History of a Deployment
+
+Follow the steps given below to check the rollout history:
+
+1. First, check the revisions of this Deployment:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment
+ ```
+ The output is similar to this:
+ ```
+ deployments "nginx-deployment"
+ REVISION CHANGE-CAUSE
+ 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
+ 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
+ ```
+
+  `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the `CHANGE-CAUSE` message by:
+
+ * Annotating the Deployment with `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`
+  * Appending the `--record` flag to save the `kubectl` command that is making changes to the resource.
+ * Manually editing the manifest of the resource.
+
+2. To see the details of each revision, run:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
+ ```
+
+ The output is similar to this:
+ ```
+ deployments "nginx-deployment" revision 2
+ Labels: app=nginx
+ pod-template-hash=1159050644
+ Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ QoS Tier:
+ cpu: BestEffort
+ memory: BestEffort
+ Environment Variables:
+ No volumes.
+ ```
+
+### Rolling Back to a Previous Revision
+
+Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.
+
+1. Now you've decided to undo the current rollout and rollback to the previous revision:
+ ```shell
+ kubectl rollout undo deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment
+ ```
+ Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`:
+
+ ```shell
+ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment
+ ```
+
+ For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout).
+
+ The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
+  for rolling back to revision 2 is generated from the Deployment controller.
+
+2. To check if the rollback was successful and the Deployment is running as expected, run:
+ ```shell
+ kubectl get deployment nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 30m
+ ```
+3. Get the description of the Deployment:
+ ```shell
+ kubectl describe deployment nginx-deployment
+ ```
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
+ Labels: app=nginx
+ Annotations: deployment.kubernetes.io/revision=4
+ kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ Selector: app=nginx
+ Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ Host Port: 0/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+ OldReplicaSets:
+ NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)
+ Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1
+ Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
+ Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0
+ ```
+
+## Scaling a Deployment
+
+You can scale a Deployment by using the following command:
+
+```shell
+kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
+```
+The output is similar to this:
+```
+deployment.apps/nginx-deployment scaled
+```
+
+Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled
+in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
+Pods you want to run based on the CPU utilization of your existing Pods.
+
+```shell
+kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
+```
+The output is similar to this:
+```
+horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
+```
+
+### Proportional scaling
+
+RollingUpdate Deployments support running multiple versions of an application at the same time. When you
+or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
+or paused), the Deployment controller balances the additional replicas in the existing active
+ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
+
+For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.
+
+* Ensure that the 10 replicas in your Deployment are running.
+ ```shell
+ kubectl get deploy
+ ```
+ The output is similar to this:
+
+ ```
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 10 10 10 10 50s
+ ```
+
+* You update to a new image which happens to be unresolvable from inside the cluster.
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the
+`maxUnavailable` requirement that you specified above. Check out the rollout status:
+ ```shell
+ kubectl get rs
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1989198191 5 5 0 9s
+ nginx-deployment-618515232 8 8 8 1m
+ ```
+
+* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
+to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using
+proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you
+spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
+most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
+ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.
+
+In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the
+new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
+the new replicas become healthy. To confirm this, run:
+
+```shell
+kubectl get deploy
+```
+
+The output is similar to this:
+```
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+nginx-deployment 15 18 7 8 7m
+```
+The rollout status confirms how the replicas were added to each ReplicaSet.
+```shell
+kubectl get rs
+```
+
+The output is similar to this:
+```
+NAME DESIRED CURRENT READY AGE
+nginx-deployment-1989198191 7 7 0 7m
+nginx-deployment-618515232 11 11 11 7m
+```
+
+## Pausing and Resuming a Deployment
+
+You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
+apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
+
+* For example, with a Deployment that was just created:
+ Get the Deployment details:
+ ```shell
+ kubectl get deploy
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ nginx 3 3 3 3 1m
+ ```
+ Get the rollout status:
+ ```shell
+ kubectl get rs
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 3 3 3 1m
+ ```
+
+* Pause by running the following command:
+ ```shell
+ kubectl rollout pause deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment paused
+ ```
+
+* Then update the image of the Deployment:
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* Notice that no new rollout started:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployments "nginx"
+ REVISION CHANGE-CAUSE
+ 1
+ ```
+* Get the rollout status to ensure that the Deployment is updated successfully:
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 3 3 3 2m
+ ```
+
+* You can make as many updates as you wish, for example, update the resources that will be used:
+ ```shell
+ kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment resource requirements updated
+ ```
+
+ The initial state of the Deployment prior to pausing it will continue its function, but new updates to
+ the Deployment will not have any effect as long as the Deployment is paused.
+
+* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
+ ```shell
+ kubectl rollout resume deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment resumed
+ ```
+* Watch the status of the rollout until it's done.
+ ```shell
+ kubectl get rs -w
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 2 2 2 2m
+ nginx-3926361531 2 2 0 6s
+ nginx-3926361531 2 2 1 18s
+ nginx-2142116321 1 2 2 2m
+ nginx-2142116321 1 2 2 2m
+ nginx-3926361531 3 2 1 18s
+ nginx-3926361531 3 2 1 18s
+ nginx-2142116321 1 1 1 2m
+ nginx-3926361531 3 3 1 18s
+ nginx-3926361531 3 3 2 19s
+ nginx-2142116321 0 1 1 2m
+ nginx-2142116321 0 1 1 2m
+ nginx-2142116321 0 0 0 2m
+ nginx-3926361531 3 3 3 20s
+ ```
+* Get the status of the latest rollout:
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 0 0 0 2m
+ nginx-3926361531 3 3 3 28s
+ ```
+{{< note >}}
+You cannot rollback a paused Deployment until you resume it.
+{{< /note >}}
+
+## Deployment status
+
+A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while
+rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment).
+
+### Progressing Deployment
+
+Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed:
+
+* The Deployment creates a new ReplicaSet.
+* The Deployment is scaling up its newest ReplicaSet.
+* The Deployment is scaling down its older ReplicaSet(s).
+* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).
+
+You can monitor the progress for a Deployment by using `kubectl rollout status`.
+
+### Complete Deployment
+
+Kubernetes marks a Deployment as _complete_ when it has the following characteristics:
+
+* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
+updates you've requested have been completed.
+* All of the replicas associated with the Deployment are available.
+* No old replicas for the Deployment are running.
+
+You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
+successfully, `kubectl rollout status` returns a zero exit code.
+
+```shell
+kubectl rollout status deployment.v1.apps/nginx-deployment
+```
+The output is similar to this:
+```
+Waiting for rollout to finish: 2 of 3 updated replicas are available...
+deployment.apps/nginx-deployment successfully rolled out
+$ echo $?
+0
+```
+
+### Failed Deployment
+
+Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur
+due to some of the following factors:
+
+* Insufficient quota
+* Readiness probe failures
+* Image pull errors
+* Insufficient permissions
+* Limit ranges
+* Application runtime misconfiguration
+
+One way you can detect this condition is to specify a deadline parameter in your Deployment spec
+([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` denotes the
+number of seconds the Deployment controller waits before indicating (in the Deployment status) that the
+Deployment progress has stalled.
+
+The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
+lack of progress for a Deployment after 10 minutes:
+
+```shell
+kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
+```
+The output is similar to this:
+```
+deployment.apps/nginx-deployment patched
+```
+Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
+attributes to the Deployment's `.status.conditions`:
+
+* Type=Progressing
+* Status=False
+* Reason=ProgressDeadlineExceeded
+
+See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
+
+{{< note >}}
+Kubernetes takes no action on a stalled Deployment other than to report a status condition with
+`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
+example, rollback the Deployment to its previous version.
+{{< /note >}}
+
+{{< note >}}
+If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can
+safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the
+deadline.
+{{< /note >}}
+
+You may experience transient errors with your Deployments, either due to a low timeout that you have set or
+due to any other kind of error that can be treated as transient. For example, let's suppose you have
+insufficient quota. If you describe the Deployment you will notice the following section:
+
+```shell
+kubectl describe deployment nginx-deployment
+```
+The output is similar to this:
+```
+<...>
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True ReplicaSetUpdated
+ ReplicaFailure True FailedCreate
+<...>
+```
+
+If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this:
+
+```
+status:
+ availableReplicas: 2
+ conditions:
+ - lastTransitionTime: 2016-10-04T12:25:39Z
+ lastUpdateTime: 2016-10-04T12:25:39Z
+ message: Replica set "nginx-deployment-4262182780" is progressing.
+ reason: ReplicaSetUpdated
+ status: "True"
+ type: Progressing
+ - lastTransitionTime: 2016-10-04T12:25:42Z
+ lastUpdateTime: 2016-10-04T12:25:42Z
+ message: Deployment has minimum availability.
+ reason: MinimumReplicasAvailable
+ status: "True"
+ type: Available
+ - lastTransitionTime: 2016-10-04T12:25:39Z
+ lastUpdateTime: 2016-10-04T12:25:39Z
+ message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
+ object-counts, requested: pods=1, used: pods=3, limited: pods=2'
+ reason: FailedCreate
+ status: "True"
+ type: ReplicaFailure
+ observedGeneration: 3
+ replicas: 2
+ unavailableReplicas: 2
+```
+
+Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the
+reason for the Progressing condition:
+
+```
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing False ProgressDeadlineExceeded
+ ReplicaFailure True FailedCreate
+```
+
+You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
+controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
+conditions and the Deployment controller then completes the Deployment rollout, you'll see the
+Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).
+
+```
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+```
+
+`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
+by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
+is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
+required new replicas are available (see the Reason of the condition for the particulars - in our case
+`Reason=NewReplicaSetAvailable` means that the Deployment is complete).
+
+You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
+returns a non-zero exit code if the Deployment has exceeded the progression deadline.
+
+```shell
+kubectl rollout status deployment.v1.apps/nginx-deployment
+```
+The output is similar to this:
+```
+Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+error: deployment "nginx" exceeded its progress deadline
+$ echo $?
+1
+```
+
+### Operating on a failed deployment
+
+All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
+to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.
+
+## Clean up Policy
+
+You can set the `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
+this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
+it is 10.
+
+{{< note >}}
+Explicitly setting this field to 0 results in cleaning up all the history of your Deployment,
+so that Deployment will not be able to roll back.
+{{< /note >}}
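+
+As a minimal sketch (the Deployment name, labels, and image tag are illustrative), the field sits directly
+under `.spec`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2   # illustrative image tag
+```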
+
+## Canary Deployment
+
+If you want to roll out releases to a subset of users or servers using the Deployment, you
+can create multiple Deployments, one for each release, following the canary pattern described in
+[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments).
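+
+For illustration only, here is a sketch of that pattern with two Deployments; the names, the `track` label
+values, and the image tags are assumptions rather than required conventions:
+
+```yaml
+# Stable track: carries most of the replicas
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend-stable          # hypothetical name
+spec:
+  replicas: 9
+  selector:
+    matchLabels:
+      app: frontend
+      track: stable
+  template:
+    metadata:
+      labels:
+        app: frontend
+        track: stable
+    spec:
+      containers:
+      - name: frontend
+        image: example/frontend:v1   # hypothetical image
+---
+# Canary track: a single replica running the new version
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend-canary          # hypothetical name
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: frontend
+      track: canary
+  template:
+    metadata:
+      labels:
+        app: frontend
+        track: canary
+    spec:
+      containers:
+      - name: frontend
+        image: example/frontend:v2   # hypothetical image
+```
+
+A Service whose selector is only `app: frontend` would then route roughly one request in ten to the canary;
+promoting the release amounts to updating the stable Deployment and removing the canary.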
+
+## Writing a Deployment Spec
+
+As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields.
+For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
+configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
+
+A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` and `.spec.selector` are the only required fields of the `.spec`.
+
+The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
+`apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
+labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).
+
+Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is
+allowed, which is the default if not specified.
+
+### Replicas
+
+`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1.
+
+### Selector
+
+`.spec.selector` is a required field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/)
+for the Pods targeted by this Deployment.
+
+`.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by the API.
+
+In API version `apps/v1`, `.spec.selector` and `.metadata.labels` do not default to `.spec.template.metadata.labels` if not set. So they must be set explicitly. Also note that `.spec.selector` is immutable after creation of the Deployment in `apps/v1`.
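+
+A minimal sketch of a valid pairing (the name, labels, and image are illustrative); the `matchLabels` map and
+the Pod template labels below are identical, which is what the API requires:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hello-deployment         # hypothetical name
+spec:
+  replicas: 2
+  selector:
+    matchLabels:                 # must be set explicitly in apps/v1, and is immutable after creation
+      app: hello
+  template:
+    metadata:
+      labels:
+        app: hello               # must equal the selector's matchLabels
+    spec:
+      containers:
+      - name: hello
+        image: nginx             # illustrative image
+```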
+
+A Deployment may terminate Pods whose labels match the selector if their template is different
+from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new
+Pods with `.spec.template` if the number of Pods is less than the desired number.
+
+{{< note >}}
+You should not create other Pods whose labels match this selector, either directly, by creating
+another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
+do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
+{{< /note >}}
+
+If you have multiple controllers that have overlapping selectors, the controllers will fight with each
+other and won't behave correctly.
+
+### Strategy
+
+`.spec.strategy` specifies the strategy used to replace old Pods by new ones.
+`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
+the default value.
+
+#### Recreate Deployment
+
+All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
+
+#### Rolling Update Deployment
+
+The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
+fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
+the rolling update process.
+
+##### Max Unavailable
+
+`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
+of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
+or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by
+rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
+
+For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
+Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled
+down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
+at all times during the update is at least 70% of the desired Pods.
+
+##### Max Surge
+
+`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
+that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
+percentage of desired Pods (for example, 10%). The value cannot be 0 if `MaxUnavailable` is 0. The absolute number
+is calculated from the percentage by rounding up. The default value is 25%.
+
+For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
+rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
+Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
+total number of Pods running at any time during the update is at most 130% of desired Pods.
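+
+Putting the two fields together, here is a sketch of a rolling-update strategy matching the 30% examples above
+(the Deployment name, labels, and image are illustrative):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-rolling            # hypothetical name
+spec:
+  replicas: 10
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 30%   # at least 70% of desired Pods stay available during the update
+      maxSurge: 30%         # at most 130% of desired Pods exist during the update
+  selector:
+    matchLabels:
+      app: nginx-rolling
+  template:
+    metadata:
+      labels:
+        app: nginx-rolling
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2      # illustrative image tag
+```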
+
+### Progress Deadline Seconds
+
+`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
+to wait for your Deployment to progress before the system reports back that the Deployment has
+[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
+and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
+retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
+controller will roll back a Deployment as soon as it observes such a condition.
+
+If specified, this field needs to be greater than `.spec.minReadySeconds`.
+
+### Min Ready Seconds
+
+`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
+created Pod should be ready without any of its containers crashing, for it to be considered available.
+This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
+a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
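+
+As a sketch of how the two timing fields relate (the name, values, and image are illustrative),
+`progressDeadlineSeconds` must be larger than `minReadySeconds`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-timed              # hypothetical name
+spec:
+  minReadySeconds: 10            # a new Pod counts as available only after 10s of readiness
+  progressDeadlineSeconds: 600   # must be greater than minReadySeconds
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx-timed
+  template:
+    metadata:
+      labels:
+        app: nginx-timed
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2      # illustrative image tag
+```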
+
+### Rollback To
+
+The `.spec.rollbackTo` field has been deprecated in API versions `extensions/v1beta1` and `apps/v1beta1`, and is no longer supported in API versions starting with `apps/v1beta2`. Instead, use `kubectl rollout undo`, as introduced in [Rolling Back to a Previous Revision](#rolling-back-to-a-previous-revision).
+
+### Revision History Limit
+
+A Deployment's revision history is stored in the ReplicaSets it controls.
+
+`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain
+to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, 10 old ReplicaSets are kept; however, the ideal value depends on the frequency and stability of new Deployments.
+
+More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
+In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
+
+### Paused
+
+`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between
+a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused
+Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
+it is created.
+
+## Alternative to Deployments
+
+### kubectl rolling-update
+
+[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers
+in a similar fashion. But Deployments are recommended, since they are declarative, server-side, and have
+additional features, such as rolling back to any previous revision even after the rolling update is done.
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md
new file mode 100644
index 0000000000..36bf7876bc
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -0,0 +1,480 @@
+---
+reviewers:
+- erictune
+- soltysh
+title: Jobs - Run to Completion
+content_template: templates/concept
+feature:
+ title: Пакетна обробка
+ description: >
+ На додачу до Сервісів, Kubernetes може керувати вашими робочими навантаженнями систем безперервної інтеграції та пакетної обробки, за потреби замінюючи контейнери, що відмовляють.
+weight: 70
+---
+
+{{% capture overview %}}
+
+A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
+As pods successfully complete, the Job tracks the successful completions. When a specified number
+of successful completions is reached, the task (that is, the Job) is complete. Deleting a Job will clean up
+the Pods it created.
+
+A simple case is to create one Job object in order to reliably run one Pod to completion.
+The Job object will start a new Pod if the first Pod fails or is deleted (for example
+due to a node hardware failure or a node reboot).
+
+You can also use a Job to run multiple Pods in parallel.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Running an example Job
+
+Here is an example Job config. It computes π to 2000 places and prints it out.
+It takes around 10s to complete.
+
+{{< codenew file="controllers/job.yaml" >}}
+
+You can run the example with this command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/job.yaml
+```
+```
+job.batch/pi created
+```
+
+Check on the status of the Job with `kubectl`:
+
+```shell
+kubectl describe jobs/pi
+```
+```
+Name: pi
+Namespace: default
+Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
+Parallelism: 1
+Completions: 1
+Start Time: Mon, 02 Dec 2019 15:20:11 +0200
+Completed At: Mon, 02 Dec 2019 15:21:16 +0200
+Duration: 65s
+Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
+Pod Template:
+ Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+ Containers:
+ pi:
+ Image: perl
+ Port:
+ Host Port:
+ Command:
+ perl
+ -Mbignum=bpi
+ -wle
+ print bpi(2000)
+ Environment:
+ Mounts:
+ Volumes:
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7
+```
+
+To view completed Pods of a Job, use `kubectl get pods`.
+
+To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
+
+```shell
+pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
+echo $pods
+```
+```
+pi-5rwd7
+```
+
+Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
+that just gets the name from each Pod in the returned list.
+
+View the standard output of one of the pods:
+
+```shell
+kubectl logs $pods
+```
+The output is similar to this:
+```shell
+3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
+```
+
+## Writing a Job Spec
+
+As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields.
+
+A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` is the only required field of the `.spec`.
+
+The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a pod template in a Job must specify appropriate
+labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
+
+Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed.
+
+### Pod Selector
+
+The `.spec.selector` field is optional. In almost all cases you should not specify it.
+See section [specifying your own pod selector](#specifying-your-own-pod-selector).
+
+
+### Parallel Jobs
+
+There are three main types of task suitable to run as a Job:
+
+1. Non-parallel Jobs
+ - normally, only one Pod is started, unless the Pod fails.
+ - the Job is complete as soon as its Pod terminates successfully.
+1. Parallel Jobs with a *fixed completion count*:
+ - specify a non-zero positive value for `.spec.completions`.
+ - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
+ - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
+1. Parallel Jobs with a *work queue*:
+ - do not specify `.spec.completions`, default to `.spec.parallelism`.
+ - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
+ - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
+ - when _any_ Pod from the Job terminates with success, no new Pods are created.
+ - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
+ - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.
+
+For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
+unset, both are defaulted to 1.
+
+For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed.
+You can set `.spec.parallelism`, or leave it unset and it will default to 1.
+
+For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
+a non-negative integer.
+
+For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
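+
+For example, here is a sketch of a _fixed completion count_ Job; the name and the counts are illustrative, and
+the container reuses the `pi` example from above:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pi-fixed-count           # hypothetical name
+spec:
+  completions: 8        # the Job is complete after 8 Pods terminate successfully
+  parallelism: 2        # at most 2 Pods run at any one time
+  template:
+    spec:
+      containers:
+      - name: pi
+        image: perl
+        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+      restartPolicy: Never
+```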
+
+
+#### Controlling Parallelism
+
+The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
+If it is unspecified, it defaults to 1.
+If it is specified as 0, then the Job is effectively paused until it is increased.
+
+Actual parallelism (number of pods running at any instant) may be more or less than requested
+parallelism, for a variety of reasons:
+
+- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of
+ remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
+- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however.
+- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react.
+- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.),
+ then there may be fewer pods than requested.
+- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
+- When a Pod is gracefully shut down, it takes time to stop.
+
+## Handling Pod and Container Failures
+
+A container in a Pod may fail for a number of reasons, such as because the process in it exited with
+a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this
+happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
+on the node, but the container is re-run. Therefore, your program needs to handle the case when it is
+restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
+See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.
+
+An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
+(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
+`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
+starts a new Pod. This means that your application needs to handle the case when it is restarted in a new
+pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
+caused by previous runs.
+
+Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
+`.spec.template.spec.restartPolicy = "Never"`, the same program may
+sometimes be started twice.
+
+If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
+multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
+
+### Pod backoff failure policy
+
+There are situations where you want to fail a Job after some amount of retries
+due to a logical error in configuration etc.
+To do so, set `.spec.backoffLimit` to specify the number of retries before
+considering a Job as failed. The back-off limit is set by default to 6. Failed
+Pods associated with the Job are recreated by the Job controller with an
+exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The
+back-off count is reset if no new failed Pods appear before the Job's next
+status check.
+
+{{< note >}}
+Issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870) still exists for versions of Kubernetes prior to version 1.12.
+{{< /note >}}
+{{< note >}}
+If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job
+will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting
+`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output
+from failed Jobs is not lost inadvertently.
+{{< /note >}}
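+
+As an illustrative sketch only (the name, image, and command are assumptions), a Job that always fails can be
+used to observe the back-off behavior; once the limit is reached, the Job is marked as failed:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: always-fails             # hypothetical name
+spec:
+  backoffLimit: 4                # give up after 4 retries
+  template:
+    spec:
+      containers:
+      - name: main
+        image: busybox                    # illustrative image
+        command: ["sh", "-c", "exit 1"]   # always exits with an error
+      restartPolicy: Never
+```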
+
+## Job Termination and Cleanup
+
+When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around
+allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output.
+The job object also remains after it is completed so that you can view its status. It is up to the user to delete
+old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.
+
+By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
+`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated.
+
+Another way to terminate a Job is by setting an active deadline.
+Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
+The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
+Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.
+
+Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.
+
+Example:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-timeout
+spec:
+ backoffLimit: 5
+ activeDeadlineSeconds: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.
+
+Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`.
+That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve.
+
+## Clean Up Finished Jobs Automatically
+
+Finished Jobs are usually no longer needed in the system. Keeping them around in
+the system will put pressure on the API server. If the Jobs are managed directly
+by a higher level controller, such as
+[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be
+cleaned up by CronJobs based on the specified capacity-based cleanup policy.
+
+### TTL Mechanism for Finished Jobs
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+Another way to clean up finished Jobs (either `Complete` or `Failed`)
+automatically is to use a TTL mechanism provided by a
+[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
+finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of
+the Job.
+
+When the TTL controller cleans up the Job, it deletes the Job in a cascading manner, that is, it deletes
+its dependent objects, such as Pods, together with the Job. Note
+that when the Job is deleted, its lifecycle guarantees, such as finalizers, will
+be honored.
+
+For example:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-ttl
+spec:
+ ttlSecondsAfterFinished: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+The Job `pi-with-ttl` will be eligible to be automatically deleted `100`
+seconds after it finishes.
+
+If the field is set to `0`, the Job will be eligible to be automatically deleted
+immediately after it finishes. If the field is unset, this Job won't be cleaned
+up by the TTL controller after it finishes.
+
+Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For
+more information, see the documentation for
+[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
+finished resources.
+
+## Job Patterns
+
+The Job object can be used to support reliable parallel execution of Pods. The Job object is not
+designed to support closely-communicating parallel processes, as commonly found in scientific
+computing. It does support parallel processing of a set of independent but related *work items*.
+These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
+NoSQL database to scan, and so on.
+
+In a complex system, there may be multiple different sets of work items. Here we are just
+considering one set of work items that the user wants to manage together — a *batch job*.
+
+There are several different patterns for parallel computation, each with strengths and weaknesses.
+The tradeoffs are:
+
+- One Job object for each work item, vs. a single Job object for all work items. The latter is
+ better for large numbers of work items. The former creates some overhead for the user and for the
+ system to manage large numbers of Job objects.
+- Number of pods created equals number of work items, vs. each Pod can process multiple work items.
+ The former typically requires less modification to existing code and containers. The latter
+ is better for large numbers of work items, for similar reasons to the previous bullet.
+- Several approaches use a work queue. This requires running a queue service,
+ and modifications to the existing program or container to make it use the work queue.
+  Other approaches are easier to adapt to an existing containerized application.
+
+
+The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
+The pattern names are also links to examples and more detailed description.
+
+| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
+| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
+| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
+| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
+| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
+| Single Job with Static Work Assignment | ✓ | | ✓ | |
+
+When you specify completions with `.spec.completions`, each Pod created by the Job controller
+has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that
+all pods for a task will have the same command line and the same
+image, the same volumes, and (almost) the same environment variables. These patterns
+are different ways to arrange for pods to work on different things.
+
+This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
+Here, `W` is the number of work items.
+
+| Pattern | `.spec.completions` | `.spec.parallelism` |
+| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
+| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
+| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
+| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
+| Single Job with Static Work Assignment | W | any |
+
+
+## Advanced Usage
+
+### Specifying your own pod selector
+
+Normally, when you create a Job object, you do not specify `.spec.selector`.
+The system defaulting logic adds this field when the Job is created.
+It picks a selector value that will not overlap with any other jobs.
+
+However, in some cases, you might need to override this automatically set selector.
+To do this, you can specify the `.spec.selector` of the Job.
+
+Be very careful when doing this. If you specify a label selector which is not
+unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated
+job may be deleted, or this Job may count other Pods as completing it, or one or both
+Jobs may refuse to create Pods or run to completion. If a non-unique selector is
+chosen, then other controllers (e.g. ReplicationController) and their Pods may behave
+in unpredictable ways too. Kubernetes will not stop you from making a mistake when
+specifying `.spec.selector`.
+
+Here is an example of a case when you might want to use this feature.
+
+Say Job `old` is already running. You want existing Pods
+to keep running, but you want the rest of the Pods it creates
+to use a different pod template and for the Job to have a new name.
+You cannot update the Job because these fields are not updatable.
+Therefore, you delete Job `old` but _leave its pods
+running_, using `kubectl delete jobs/old --cascade=false`.
+Before deleting it, you make a note of what selector it uses:
+
+```shell
+kubectl get job old -o yaml
+```
+```
+kind: Job
+metadata:
+ name: old
+ ...
+spec:
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+Then you create a new Job with name `new` and you explicitly specify the same selector.
+Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
+they are controlled by Job `new` as well.
+
+You need to specify `manualSelector: true` in the new Job since you are not using
+the selector that the system normally generates for you automatically.
+
+```
+kind: Job
+metadata:
+ name: new
+ ...
+spec:
+ manualSelector: true
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
+`manualSelector: true` tells the system that you know what you are doing and to allow this
+mismatch.
+
+## Alternatives
+
+### Bare Pods
+
+When the node that a Pod is running on reboots or fails, the pod is terminated
+and will not be restarted. However, a Job will create new Pods to replace terminated ones.
+For this reason, we recommend that you use a Job rather than a bare Pod, even if your application
+requires only a single Pod.
+
+### Replication Controller
+
+Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
+A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
+manages Pods that are expected to terminate (e.g. batch tasks).
+
+As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate
+for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
+(Note: If `RestartPolicy` is not set, the default value is `Always`.)
+
+### Single Job starts Controller Pod
+
+Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort
+of custom controller for those Pods. This allows the most flexibility, but may be somewhat
+complicated to get started with and offers less integration with Kubernetes.
+
+One example of this pattern would be a Job which starts a Pod which runs a script that in turn
+starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a Spark
+driver, and then cleans up.
+
+An advantage of this approach is that the overall process gets the completion guarantee of a Job
+object, while retaining complete control over what Pods are created and how work is assigned to them.
+
+## Cron Jobs {#cron-jobs}
+
+You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`.
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md
new file mode 100644
index 0000000000..c8a666ac11
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -0,0 +1,291 @@
+---
+reviewers:
+- bprashanth
+- janetkuo
+title: ReplicationController
+feature:
+ title: Самозцілення
+ anchor: How a ReplicationController Works
+ description: >
+ Перезапускає контейнери, що відмовили; заміняє і перерозподіляє контейнери у випадку непрацездатності вузла; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності.
+
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+{{< note >}}
+A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
+{{< /note >}}
+
+A _ReplicationController_ ensures that a specified number of pod replicas are running at any one
+time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
+always up and available.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## How a ReplicationController Works
+
+If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
+ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
+ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
+For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade.
+For this reason, you should use a ReplicationController even if your application requires
+only a single pod. A ReplicationController is similar to a process supervisor,
+but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods
+across multiple nodes.
+
+ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in
+kubectl commands.
+
+A simple case is to create one ReplicationController object to reliably run one instance of
+a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
+service, such as web servers.
+
+## Running an example ReplicationController
+
+This example ReplicationController config runs three copies of the nginx web server.
+
+{{< codenew file="controllers/replication.yaml" >}}
+
+Run the example by downloading the example file and then running this command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
+```
+```
+replicationcontroller/nginx created
+```
+
+Check on the status of the ReplicationController using this command:
+
+```shell
+kubectl describe replicationcontrollers/nginx
+```
+```
+Name: nginx
+Namespace: default
+Selector: app=nginx
+Labels: app=nginx
+Annotations:
+Replicas: 3 current / 3 desired
+Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
+Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx
+ Port: 80/TCP
+ Environment:
+ Mounts:
+ Volumes:
+Events:
+ FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- ---- ------ -------
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v
+```
+
+Here, three pods are created, but none is running yet, perhaps because the image is being pulled.
+A little later, the same command may show:
+
+```
+Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+```
+
+To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:
+
+```shell
+pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
+echo $pods
+```
+```
+nginx-3ntk0 nginx-4ok8v nginx-qrm3m
+```
+
+Here, the selector is the same as the selector for the ReplicationController (seen in the
+`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
+specifies an expression that just gets the name from each pod in the returned list.
+
+
+## Writing a ReplicationController Spec
+
+As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
+For general information about working with config files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
+
+A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` is the only required field of the `.spec`.
+
+The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
+labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector).
+
+Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
+
+For local container restarts, ReplicationControllers delegate to an agent on the node,
+for example the [Kubelet](/docs/admin/kubelet/) or Docker.
+
+### Labels on the ReplicationController
+
+The ReplicationController can itself have labels (`.metadata.labels`). Typically, you
+would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
+then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be
+different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.
+
+### Pod Selector
+
+The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController
+manages all the pods with labels that match the selector. It does not distinguish
+between pods that it created or deleted and pods that another person or process created or
+deleted. This allows the ReplicationController to be replaced without affecting the running pods.
+
+If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
+be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
+`.spec.template.metadata.labels`.
+
+Also, you should not normally create any pods whose labels match this selector, either directly, with
+another ReplicationController, or with another controller such as Job. If you do so, the
+ReplicationController thinks that it created the other pods. Kubernetes does not stop you
+from doing this.
+
+If you do end up with multiple controllers that have overlapping selectors, you
+will have to manage the deletion yourself (see [below](#working-with-replicationcontrollers)).
+
+### Multiple Replicas
+
+You can specify how many pods should run concurrently by setting `.spec.replicas` to the number
+of pods you would like to have running concurrently. The number running at any time may be higher
+or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully
+shutdown, and a replacement starts early.
+
+If you do not specify `.spec.replicas`, then it defaults to 1.
+
+## Working with ReplicationControllers
+
+### Deleting a ReplicationController and its Pods
+
+To delete a ReplicationController and all its pods, use [`kubectl
+delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will scale the ReplicationController to zero and wait
+for it to delete each pod before deleting the ReplicationController itself. If this kubectl
+command is interrupted, it can be restarted.
+
+When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
+0, wait for pod deletions, then delete the ReplicationController).
+
+### Deleting just a ReplicationController
+
+You can delete a ReplicationController without affecting any of its pods.
+
+Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
+
+When using the REST API or go client library, simply delete the ReplicationController object.
+
+Once the original is deleted, you can create a new ReplicationController to replace it. As long
+as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
+However, it will not make any effort to make existing pods match a new, different pod template.
+To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
+
+### Isolating pods from a ReplicationController
+
+Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
+
+## Common usage patterns
+
+### Rescheduling
+
+As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).
+
+### Scaling
+
+The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
+
+### Rolling updates
+
+The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
+
+As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
+
+Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
+
+The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
+
+Rolling update is implemented in the client tool
+[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
+
+### Multiple release tracks
+
+In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
+
+For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
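+
+Here is a sketch of the canary controller (the name and image tag are illustrative; the stable controller would
+be identical apart from its name, `replicas: 9`, and `track: stable`). Note that a ReplicationController
+selector is a plain label map rather than `matchLabels`:
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: frontend-canary          # hypothetical name
+spec:
+  replicas: 1
+  selector:
+    tier: frontend
+    environment: prod
+    track: canary
+  template:
+    metadata:
+      labels:
+        tier: frontend
+        environment: prod
+        track: canary
+    spec:
+      containers:
+      - name: frontend
+        image: example/frontend:canary   # hypothetical image tag
+```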
+
+### Using ReplicationControllers with Services
+
+Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic
+goes to the old version, and some goes to the new version.
+
+A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.
+
+## Writing programs for Replication
+
+Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.
+
+## Responsibilities of the ReplicationController
+
+The ReplicationController simply ensures that the desired number of pods match its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account; we may add more controls over the replacement policy; and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+
+The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
+
+The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+
+
+## API Object
+
+ReplicationController is a top-level resource in the Kubernetes REST API. More details about the
+API object can be found at:
+[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).
+
+## Alternatives to ReplicationController
+
+### ReplicaSet
+
+[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
+It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
+Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
+
+
+### Deployment (Recommended)
+
+[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
+in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
+because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
+
+### Bare Pods
+
+Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
+
+### Job
+
+Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own
+(that is, batch jobs).
+
+### DaemonSet
+
+Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a
+machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
+to a machine lifetime: the pod needs to be running on the machine before other pods start, and is
+safe to terminate when the machine is otherwise ready to be rebooted or shut down.
+
+## For more information
+
+Read [Run Stateless Application Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/contribute/localization_uk.md b/content/uk/docs/contribute/localization_uk.md
new file mode 100644
index 0000000000..67a9c1789b
--- /dev/null
+++ b/content/uk/docs/contribute/localization_uk.md
@@ -0,0 +1,123 @@
+---
+title: Рекомендації з перекладу на українську мову
+content_template: templates/concept
+anchors:
+ - anchor: "#правила-перекладу"
+ title: Правила перекладу
+ - anchor: "#словник"
+ title: Словник
+---
+
+{{% capture overview %}}
+
+Дорогі друзі! Раді вітати вас у спільноті українських контриб'юторів проекту Kubernetes. Ця сторінка створена з метою полегшити вашу роботу при перекладі документації. Вона містить правила, якими ми керувалися під час перекладу, і базовий словник, який ми почали укладати. Перелічені у ньому терміни ви знайдете в українській версії документації Kubernetes. Будемо дуже вдячні, якщо ви допоможете нам доповнити цей словник і розширити правила перекладу.
+
+Сподіваємось, наші рекомендації стануть вам у пригоді.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Правила перекладу {#правила-перекладу}
+
+* У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity.
+
+* Співзвучні слова передаємо транслітерацією зі збереженням написання (Service -> Сервіс).
+
+* Реалії Kubernetes пишемо з великої літери: Сервіс, Под, але вузол (node).
+
+* Для слів з великих літер, які не мають трансліт-аналогу, використовуємо англійські слова (Deployment, Volume, Namespace).
+
+* Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver).
+
+* Частовживані і усталені за межами K8s слова перекладаємо українською і пишемо з маленької літери (label -> мітка).
+
+* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Більшість необхідних нам термінів є словами іншомовного походження, які у родовому відмінку однини приймають закінчення -а: Пода, Deployment'а. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv).
+
+## Словник {#словник}
+
+English | Українська |
+--- | --- |
+addon | розширення |
+application | застосунок |
+backend | бекенд |
+build | збирати (процес) |
+build | збирання (результат) |
+cache | кеш |
+CLI | інтерфейс командного рядка |
+cloud | хмара; хмарний провайдер |
+containerized | контейнеризований |
+Continuous development | безперервна розробка |
+Continuous integration | безперервна інтеграція |
+Continuous deployment | безперервне розгортання |
+contribute | робити внесок (до проекту), допомагати (проекту) |
+contributor | контриб'ютор, учасник проекту |
+control plane | площина управління |
+controller | контролер |
+CPU | ЦПУ |
+dashboard | дашборд |
+data plane | площина даних |
+default settings | типові налаштування |
+default (by) | за умовчанням |
+Deployment | Deployment |
+deprecated | застарілий |
+desired state | бажаний стан |
+downtime | недоступність, простій |
+ecosystem | сімейство проектів (екосистема) |
+endpoint | кінцева точка |
+expose (a service) | відкрити доступ (до сервісу) |
+fail | відмовити |
+feature | компонент |
+framework | фреймворк |
+frontend | фронтенд |
+image | образ |
+Ingress | Ingress |
+instance | інстанс |
+issue | запит |
+kube-proxy | kube-proxy |
+kubelet | kubelet |
+Kubernetes features | функціональні можливості Kubernetes |
+label | мітка |
+lifecycle | життєвий цикл |
+logging | логування |
+maintenance | обслуговування |
+master | master |
+map | спроектувати, зіставити, встановити відповідність |
+monitor | моніторити |
+monitoring | моніторинг |
+Namespace | Namespace |
+network policy | мережева політика |
+node | вузол |
+orchestrate | оркеструвати |
+output | вивід |
+patch | патч |
+Pod | Под |
+production | прод |
+pull request | pull request |
+release | реліз |
+replica | репліка |
+rollback | відкатування |
+rolling update | послідовне оновлення |
+rollout (new updates) | викатка (оновлень) |
+run | запускати |
+scale | масштабувати |
+schedule | розподіляти (Поди по вузлах) |
+scheduler | scheduler |
+secret | секрет |
+selector | селектор |
+self-healing | самозцілення |
+self-restoring | самовідновлення |
+service | сервіс |
+service discovery | виявлення сервісу |
+source code | вихідний код |
+stateful app | застосунок зі станом |
+stateless app | застосунок без стану |
+task | завдання |
+terminated | зупинений |
+traffic | трафік |
+VM (virtual machine) | ВМ |
+Volume | Volume |
+workload | робоче навантаження |
+YAML | YAML |
+
+{{% /capture %}}
diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md
new file mode 100644
index 0000000000..d72f91478f
--- /dev/null
+++ b/content/uk/docs/home/_index.md
@@ -0,0 +1,58 @@
+---
+approvers:
+- chenopis
+title: Документація Kubernetes
+noedit: true
+cid: docsHome
+layout: docsportal_home
+class: gridPage
+linkTitle: "Home"
+main_menu: true
+weight: 10
+hide_feedback: true
+menu:
+ main:
+ title: "Документація"
+ weight: 20
+ post: >
+ Дізнайтеся про основи роботи з Kubernetes, використовуючи схеми, навчальну та довідкову документацію. Ви можете навіть зробити свій внесок у документацію!
+overview: >
+ Kubernetes - рушій оркестрації контейнерів з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. Цей проект розробляється під егідою Cloud Native Computing Foundation (CNCF).
+cards:
+- name: concepts
+ title: "Розуміння основ"
+ description: "Дізнайтеся про Kubernetes і його фундаментальні концепції."
+ button: "Дізнатися про концепції"
+ button_path: "/docs/concepts"
+- name: tutorials
+ title: "Спробуйте Kubernetes"
+ description: "Дізнайтеся із навчальних матеріалів, як розгортати застосунки в Kubernetes."
+ button: "Переглянути навчальні матеріали"
+ button_path: "/docs/tutorials"
+- name: setup
+ title: "Налаштування кластера"
+ description: "Розгорніть Kubernetes з урахуванням власних ресурсів і потреб."
+ button: "Налаштувати Kubernetes"
+ button_path: "/docs/setup"
+- name: tasks
+ title: "Дізнайтеся, як користуватись Kubernetes"
+ description: "Ознайомтеся з типовими задачами і способами їх виконання за допомогою короткого алгоритму дій."
+ button: "Переглянути задачі"
+ button_path: "/docs/tasks"
+- name: reference
+ title: Переглянути довідкову інформацію
+ description: Ознайомтеся з термінологією, синтаксисом командного рядка, типами ресурсів API і документацією з налаштування інструментів.
+ button: Переглянути довідкову інформацію
+ button_path: /docs/reference
+- name: contribute
+ title: Зробити внесок у документацію
+ description: Будь-хто може зробити свій внесок, незалежно від того, чи ви нещодавно долучилися до проекту, чи працюєте над ним вже довгий час.
+ button: Зробити внесок у документацію
+ button_path: /docs/contribute
+- name: download
+ title: Завантажити Kubernetes
+ description: Якщо ви встановлюєте Kubernetes чи оновлюєтесь до останньої версії, звіряйтеся з актуальною інформацією по релізу.
+- name: about
+ title: Про документацію
+ description: Цей вебсайт містить документацію по актуальній і чотирьох попередніх версіях Kubernetes.
+---
diff --git a/content/uk/docs/reference/glossary/applications.md b/content/uk/docs/reference/glossary/applications.md
new file mode 100644
index 0000000000..c42c6ec343
--- /dev/null
+++ b/content/uk/docs/reference/glossary/applications.md
@@ -0,0 +1,16 @@
+---
+# title: Applications
+title: Застосунки
+id: applications
+date: 2019-05-12
+full_link:
+# short_description: >
+# The layer where various containerized applications run.
+short_description: >
+ Шар, в якому запущено контейнеризовані застосунки.
+aka:
+tags:
+- fundamental
+---
+
+Шар, в якому запущено контейнеризовані застосунки.
diff --git a/content/uk/docs/reference/glossary/cluster-infrastructure.md b/content/uk/docs/reference/glossary/cluster-infrastructure.md
new file mode 100644
index 0000000000..557180912a
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster-infrastructure.md
@@ -0,0 +1,17 @@
+---
+# title: Cluster Infrastructure
+title: Інфраструктура кластера
+id: cluster-infrastructure
+date: 2019-05-12
+full_link:
+# short_description: >
+# The infrastructure layer provides and maintains VMs, networking, security groups and others.
+short_description: >
+ Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо.
+
+aka:
+tags:
+- operations
+---
+
+Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо.
diff --git a/content/uk/docs/reference/glossary/cluster-operations.md b/content/uk/docs/reference/glossary/cluster-operations.md
new file mode 100644
index 0000000000..e274bb4f7f
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster-operations.md
@@ -0,0 +1,17 @@
+---
+# title: Cluster Operations
+title: Операції з кластером
+id: cluster-operations
+date: 2019-05-12
+full_link:
+# short_description: >
+# Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster.
+short_description: >
+ Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інші операції, пов'язані з управлінням Kubernetes кластером.
+
+aka:
+tags:
+- operations
+---
+
+Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інші операції, пов'язані з управлінням Kubernetes кластером.
diff --git a/content/uk/docs/reference/glossary/cluster.md b/content/uk/docs/reference/glossary/cluster.md
new file mode 100644
index 0000000000..58fc3bd6fd
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster.md
@@ -0,0 +1,22 @@
+---
+# title: Cluster
+title: Кластер
+id: cluster
+date: 2019-06-15
+full_link:
+# short_description: >
+# A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
+short_description: >
+ Група робочих машин (їх називають вузлами), на яких запущені контейнеризовані застосунки. Кожен кластер має щонайменше один вузол.
+
+aka:
+tags:
+- fundamental
+- operation
+---
+
+Група робочих машин (їх називають вузлами), на яких запущені контейнеризовані застосунки. Кожен кластер має щонайменше один вузол.
+
+
+
+На робочих вузлах розміщуються Поди, які є складовими застосунку. Площина управління керує робочими вузлами і Подами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності.
diff --git a/content/uk/docs/reference/glossary/control-plane.md b/content/uk/docs/reference/glossary/control-plane.md
new file mode 100644
index 0000000000..da9fd4c08a
--- /dev/null
+++ b/content/uk/docs/reference/glossary/control-plane.md
@@ -0,0 +1,17 @@
+---
+# title: Control Plane
+title: Площина управління
+id: control-plane
+date: 2019-05-12
+full_link:
+# short_description: >
+# The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
+short_description: >
+ Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
+
+aka:
+tags:
+- fundamental
+---
+
+Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
diff --git a/content/uk/docs/reference/glossary/data-plane.md b/content/uk/docs/reference/glossary/data-plane.md
new file mode 100644
index 0000000000..263a544010
--- /dev/null
+++ b/content/uk/docs/reference/glossary/data-plane.md
@@ -0,0 +1,17 @@
+---
+# title: Data Plane
+title: Площина даних
+id: data-plane
+date: 2019-05-12
+full_link:
+# short_description: >
+# The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network.
+short_description: >
+ Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі.
+
+aka:
+tags:
+- fundamental
+---
+
+Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі.
diff --git a/content/uk/docs/reference/glossary/deployment.md b/content/uk/docs/reference/glossary/deployment.md
new file mode 100644
index 0000000000..5e62f7f784
--- /dev/null
+++ b/content/uk/docs/reference/glossary/deployment.md
@@ -0,0 +1,23 @@
+---
+title: Deployment
+id: deployment
+date: 2018-04-12
+full_link: /docs/concepts/workloads/controllers/deployment/
+# short_description: >
+# An API object that manages a replicated application.
+short_description: >
+ Об'єкт API, що керує реплікованим застосунком.
+
+aka:
+tags:
+- fundamental
+- core-object
+- workload
+---
+
+Об'єкт API, що керує реплікованим застосунком.
+
+
+
+
+Кожна репліка являє собою {{< glossary_tooltip text="Под" term_id="pod" >}}; Поди розподіляються між вузлами кластера.
diff --git a/content/uk/docs/reference/glossary/index.md b/content/uk/docs/reference/glossary/index.md
new file mode 100644
index 0000000000..3cbc4533ba
--- /dev/null
+++ b/content/uk/docs/reference/glossary/index.md
@@ -0,0 +1,17 @@
+---
+approvers:
+- chenopis
+- abiogenesis-now
+# title: Standardized Glossary
+title: Глосарій
+layout: glossary
+noedit: true
+default_active_tag: fundamental
+weight: 5
+card:
+ name: reference
+ weight: 10
+# title: Glossary
+ title: Глосарій
+---
+
diff --git a/content/uk/docs/reference/glossary/kube-apiserver.md b/content/uk/docs/reference/glossary/kube-apiserver.md
new file mode 100644
index 0000000000..82e3caa0ba
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-apiserver.md
@@ -0,0 +1,29 @@
+---
+# title: API server
+title: API-сервер
+id: kube-apiserver
+date: 2018-04-12
+full_link: /docs/reference/generated/kube-apiserver/
+# short_description: >
+# Control plane component that serves the Kubernetes API.
+short_description: >
+ Компонент площини управління, що надає доступ до API Kubernetes.
+
+aka:
+- kube-apiserver
+tags:
+- architecture
+- fundamental
+---
+
+API-сервер є компонентом {{< glossary_tooltip text="площини управління" term_id="control-plane" >}} Kubernetes, через який можна отримати доступ до API Kubernetes. API-сервер є фронтендом площини управління Kubernetes.
+
+
+
+
+
+
+Основною реалізацією Kubernetes API-сервера є [kube-apiserver](/docs/reference/generated/kube-apiserver/). kube-apiserver підтримує горизонтальне масштабування, тобто масштабується за рахунок збільшення кількості інстансів. kube-apiserver можна запустити на декількох інстансах, збалансувавши між ними трафік.
diff --git a/content/uk/docs/reference/glossary/kube-controller-manager.md b/content/uk/docs/reference/glossary/kube-controller-manager.md
new file mode 100644
index 0000000000..edd56dcc90
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-controller-manager.md
@@ -0,0 +1,22 @@
+---
+title: kube-controller-manager
+id: kube-controller-manager
+date: 2018-04-12
+full_link: /docs/reference/command-line-tools-reference/kube-controller-manager/
+# short_description: >
+# Control Plane component that runs controller processes.
+short_description: >
+ Компонент площини управління, який запускає процеси контролера.
+
+aka:
+tags:
+- architecture
+- fundamental
+---
+
+Компонент площини управління, який запускає процеси {{< glossary_tooltip text="контролера" term_id="controller" >}}.
+
+
+
+
+За логікою, кожен {{< glossary_tooltip text="контролер" term_id="controller" >}} є окремим процесом. Однак для спрощення їх збирають в один бінарний файл і запускають як єдиний процес.
diff --git a/content/uk/docs/reference/glossary/kube-proxy.md b/content/uk/docs/reference/glossary/kube-proxy.md
new file mode 100644
index 0000000000..5086226f8e
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-proxy.md
@@ -0,0 +1,33 @@
+---
+title: kube-proxy
+id: kube-proxy
+date: 2018-04-12
+full_link: /docs/reference/command-line-tools-reference/kube-proxy/
+# short_description: >
+# `kube-proxy` is a network proxy that runs on each node in the cluster.
+short_description: >
+ `kube-proxy` - це мережеве проксі, що запущене на кожному вузлі кластера.
+
+aka:
+tags:
+- fundamental
+- networking
+---
+
+[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip text="Сервісу" term_id="service" >}}.
+
+
+
+
+kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Подів всередині чи поза межами кластера.
+
+
+kube-proxy використовує шар фільтрації пакетів операційної системи, за наявності такого. В іншому випадку kube-proxy скеровує трафік самостійно.
diff --git a/content/uk/docs/reference/glossary/kube-scheduler.md b/content/uk/docs/reference/glossary/kube-scheduler.md
new file mode 100644
index 0000000000..87f460222c
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-scheduler.md
@@ -0,0 +1,22 @@
+---
+title: kube-scheduler
+id: kube-scheduler
+date: 2018-04-12
+full_link: /docs/reference/generated/kube-scheduler/
+# short_description: >
+# Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
+short_description: >
+ Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
+
+aka:
+tags:
+- architecture
+---
+
+Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
+
+
+
+
+При виборі вузла враховуються наступні фактори: індивідуальна і колективна потреба у ресурсах, обмеження за апаратним/програмним забезпеченням і політиками, характеристики affinity і anti-affinity, локальність даних, сумісність робочих навантажень і граничні терміни виконання.
diff --git a/content/uk/docs/reference/glossary/kubelet.md b/content/uk/docs/reference/glossary/kubelet.md
new file mode 100644
index 0000000000..c1178ddf45
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kubelet.md
@@ -0,0 +1,23 @@
+---
+title: Kubelet
+id: kubelet
+date: 2018-04-12
+full_link: /docs/reference/generated/kubelet
+# short_description: >
+# An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
+short_description: >
+ Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
+
+aka:
+tags:
+- fundamental
+- core-object
+---
+
+Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
+
+
+
+
+kubelet використовує специфікації PodSpecs, які надаються за допомогою різних механізмів, і забезпечує працездатність і справність усіх контейнерів, що описані у PodSpecs. kubelet керує лише тими контейнерами, що були створені Kubernetes.
diff --git a/content/uk/docs/reference/glossary/pod.md b/content/uk/docs/reference/glossary/pod.md
new file mode 100644
index 0000000000..b205c0bd1d
--- /dev/null
+++ b/content/uk/docs/reference/glossary/pod.md
@@ -0,0 +1,23 @@
+---
+# title: Pod
+title: Под
+id: pod
+date: 2018-04-12
+full_link: /docs/concepts/workloads/pods/pod-overview/
+# short_description: >
+# The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster.
+short_description: >
+ Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу контейнерів, що запущені у вашому кластері.
+
+aka:
+tags:
+- core-object
+- fundamental
+---
+
+Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері.
+
+
+
+
+Як правило, в одному Поді запускається один контейнер. У Поді також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Подами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}.
diff --git a/content/uk/docs/reference/glossary/selector.md b/content/uk/docs/reference/glossary/selector.md
new file mode 100644
index 0000000000..77eb861f4e
--- /dev/null
+++ b/content/uk/docs/reference/glossary/selector.md
@@ -0,0 +1,22 @@
+---
+# title: Selector
+title: Селектор
+id: selector
+date: 2018-04-12
+full_link: /docs/concepts/overview/working-with-objects/labels/
+# short_description: >
+# Allows users to filter a list of resources based on labels.
+short_description: >
+ Дозволяє користувачам фільтрувати ресурси за мітками.
+
+aka:
+tags:
+- fundamental
+---
+
+Дозволяє користувачам фільтрувати ресурси за мітками.
+
+
+
+
+Селектори застосовуються при створенні запитів для фільтрації ресурсів за {{< glossary_tooltip text="мітками" term_id="label" >}}.
diff --git a/content/uk/docs/reference/glossary/service.md b/content/uk/docs/reference/glossary/service.md
new file mode 100755
index 0000000000..91407b199a
--- /dev/null
+++ b/content/uk/docs/reference/glossary/service.md
@@ -0,0 +1,24 @@
+---
+title: Сервіс
+id: service
+date: 2018-04-12
+full_link: /docs/concepts/services-networking/service/
+# A way to expose an application running on a set of Pods as a network service.
+short_description: >
+ Спосіб відкрити доступ до застосунку, що запущений на декількох Подах, у вигляді мережевої служби.
+
+aka:
+tags:
+- fundamental
+- core-object
+---
+
+Це абстрактний спосіб відкрити доступ до застосунку, що працює як один Под або декілька {{< glossary_tooltip text="Подів" term_id="pod" >}}, у вигляді мережевої служби.
+
+
+
+
+Переважно група Подів визначається як Сервіс за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Подів змінить групу Подів, визначених селектором. Сервіс забезпечує надходження мережевого трафіка до актуальної групи Подів для підтримки робочого навантаження.
diff --git a/content/uk/docs/setup/_index.md b/content/uk/docs/setup/_index.md
new file mode 100644
index 0000000000..f7874f9fc4
--- /dev/null
+++ b/content/uk/docs/setup/_index.md
@@ -0,0 +1,136 @@
+---
+reviewers:
+- brendandburns
+- erictune
+- mikedanese
+no_issue: true
+title: Початок роботи
+main_menu: true
+weight: 20
+content_template: templates/concept
+card:
+ name: setup
+ weight: 20
+ anchors:
+ - anchor: "#навчальне-середовище"
+ title: Навчальне середовище
+ - anchor: "#прод-оточення"
+ title: Прод оточення
+---
+
+{{% capture overview %}}
+
+
+У цьому розділі розглянуто різні варіанти налаштування і запуску Kubernetes.
+
+
+Різні рішення Kubernetes відповідають різним вимогам: легкість в експлуатації, безпека, система контролю, наявні ресурси та досвід, необхідний для управління кластером.
+
+
+Ви можете розгорнути Kubernetes кластер на робочому комп'ютері, у хмарі чи в локальному дата-центрі, або обрати керований Kubernetes кластер. Також можна створити індивідуальні рішення на базі різних провайдерів хмарних сервісів або на звичайних серверах.
+
+
+Простіше кажучи, ви можете створити Kubernetes кластер у навчальному і в прод оточеннях.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## Навчальне оточення {#навчальне-оточення}
+
+
+Для вивчення Kubernetes використовуйте рішення на базі Docker: інструменти, підтримувані спільнотою Kubernetes, або інші інструменти з сімейства проектів для налаштування Kubernetes кластера на локальному комп'ютері.
+
+{{< table caption="Таблиця інструментів для локального розгортання Kubernetes, які підтримуються спільнотою або входять до сімейства проектів Kubernetes." >}}
+
+|Спільнота |Сімейство проектів |
+| ------------ | -------- |
+| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) |
+| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
+| | [Minishift](https://docs.okd.io/latest/minishift/)|
+| | [MicroK8s](https://microk8s.io/)|
+| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
+| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
+| | [k3s](https://k3s.io)|
+
+
+## Прод оточення {#прод-оточення}
+
+
+Обираючи рішення для проду, визначтеся, якими з функціональних складових (або абстракцій) Kubernetes кластера ви хочете керувати самі, а управління якими - доручити провайдеру.
+
+
+У Kubernetes кластері можливі наступні абстракції: {{< glossary_tooltip text="застосунки" term_id="applications" >}}, {{< glossary_tooltip text="площина даних" term_id="data-plane" >}}, {{< glossary_tooltip text="площина управління" term_id="control-plane" >}}, {{< glossary_tooltip text="інфраструктура кластера" term_id="cluster-infrastructure" >}} та {{< glossary_tooltip text="операції з кластером" term_id="cluster-operations" >}}.
+
+
+На діаграмі нижче показані можливі абстракції Kubernetes кластера із зазначенням, які з них потребують самостійного управління, а які можуть бути керовані провайдером.
+
+Рішення для прод оточення
+
+{{< table caption="Таблиця рішень для прод оточення містить перелік провайдерів і їх технологій." >}}
+
+Таблиця рішень для прод оточення містить перелік провайдерів і технологій, які вони пропонують.
+
+|Провайдери | Керований сервіс | Хмара "під ключ" | Локальний дата-центр | Під замовлення (хмара) | Під замовлення (локальні ВМ)| Під замовлення (сервери без ОС) |
+| --------- | ------ | ------ | ------ | ------ | ------ | ----- |
+| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | |
+| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | |
+| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | |
+| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | |
+| [APPUiO](https://appuio.ch/) | ✔ | ✔ | ✔ | | | |
+| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ |
+| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | |
+| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | |
+| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ |
+| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔|
+| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔
+| [Containership](https://containership.io) | ✔ |✔ | | | |
+| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) |
+| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔
+| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | |
+| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔
+| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) |
+| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | |
+| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/) | [GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | |
+| [Hidora](https://hidora.com/) | ✔ | ✔ | ✔ | | | |
+| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | |
+| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | |
+| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | |
+| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ |
+| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| [KubeSail](https://kubesail.com/) | ✔ | | | | |
+| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ |
+| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ |
+| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | |
+| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | |
+| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | |
+| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | |
+| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) |
+| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | |
+| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/)
+| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | |
+| [oVirt](https://www.ovirt.org/) | | | | | ✔ |
+| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | |
+| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔
+| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/)
+| [Supergiant](https://supergiant.io/) | |✔ | | | |
+| [SUSE](https://www.suse.com/) | | ✔ | | | |
+| [SysEleven](https://www.syseleven.io/) | ✔ | | | | |
+| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ |
+| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | |
+| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks)
+| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | |
+
+{{% /capture %}}
diff --git a/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md
new file mode 100644
index 0000000000..90dbfdb914
--- /dev/null
+++ b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -0,0 +1,293 @@
+---
+reviewers:
+- fgrzadkowski
+- jszczepkowski
+- directxman12
+title: Horizontal Pod Autoscaler
+feature:
+ title: Горизонтальне масштабування
+ description: >
+ Масштабуйте ваш застосунок за допомогою простої команди, інтерфейсу користувача чи автоматично, виходячи із навантаження на ЦПУ.
+
+content_template: templates/concept
+weight: 90
+---
+
+{{% capture overview %}}
+
+The Horizontal Pod Autoscaler automatically scales the number of pods
+in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with
+[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
+support, on some other application-provided metrics). Note that Horizontal
+Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
+
+The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
+The resource determines the behavior of the controller.
+The controller periodically adjusts the number of replicas in a replication controller or deployment
+to match the observed average CPU utilization to the target specified by user.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## How does the Horizontal Pod Autoscaler work?
+
+
+
+The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled
+by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default
+value of 15 seconds).
+
+During each period, the controller manager queries the resource utilization against the
+metrics specified in each HorizontalPodAutoscaler definition. The controller manager
+obtains the metrics from either the resource metrics API (for per-pod resource metrics),
+or the custom metrics API (for all other metrics).
+
+* For per-pod resource metrics (like CPU), the controller fetches the metrics
+ from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler.
+ Then, if a target utilization value is set, the controller calculates the utilization
+ value as a percentage of the equivalent resource request on the containers in
+ each pod. If a target raw value is set, the raw metric values are used directly.
+ The controller then takes the mean of the utilization or the raw value (depending on the type
+ of target specified) across all targeted pods, and produces a ratio used to scale
+ the number of desired replicas.
+
+ Please note that if some of the pod's containers do not have the relevant resource request set,
+ CPU utilization for the pod will not be defined and the autoscaler will
+ not take any action for that metric. See the [algorithm
+ details](#algorithm-details) section below for more information about
+ how the autoscaling algorithm works.
+
+* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,
+ except that it works with raw values, not utilization values.
+
+* For object metrics and external metrics, a single metric is fetched, which describes
+ the object in question. This metric is compared to the target
+ value, to produce a ratio as above. In the `autoscaling/v2beta2` API
+ version, this value can optionally be divided by the number of pods before the
+ comparison is made.
+
+The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
+`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by
+metrics-server, which needs to be launched separately. See
+[metrics-server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server)
+for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster.
+
+{{< note >}}
+{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
+Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
+{{< /note >}}
+
+See [Support for metrics APIs](#support-for-metrics-apis) for more details.
+
+The autoscaler accesses corresponding scalable controllers (such as replication controllers, deployments, and replica sets)
+by using the scale sub-resource. Scale is an interface that allows you to dynamically set the number of replicas and examine
+each of their current states. More details on scale sub-resource can be found
+[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
+
+### Algorithm Details
+
+From the most basic perspective, the Horizontal Pod Autoscaler controller
+operates on the ratio between desired metric value and current metric
+value:
+
+```
+desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
+```
+
+For example, if the current metric value is `200m`, and the desired value
+is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 ==
+2.0`. If the current value is instead `50m`, we'll halve the number of
+replicas, since `50.0 / 100.0 == 0.5`. We'll skip scaling if the ratio is
+sufficiently close to 1.0 (within a globally-configurable tolerance, from
+the `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).
+
+When a `targetAverageValue` or `targetAverageUtilization` is specified,
+the `currentMetricValue` is computed by taking the average of the given
+metric across all Pods in the HorizontalPodAutoscaler's scale target.
+Before checking the tolerance and deciding on the final values, we take
+pod readiness and missing metrics into consideration, however.
+
+All Pods with a deletion timestamp set (i.e. Pods in the process of being
+shut down) and all failed Pods are discarded.
+
+If a particular Pod is missing metrics, it is set aside for later; Pods
+with missing metrics will be used to adjust the final scaling amount.
+
+When scaling on CPU, if any pod has yet to become ready (i.e. it's still
+initializing) *or* the most recent metric point for the pod was before it
+became ready, that pod is set aside as well.
+
+Due to technical constraints, the HorizontalPodAutoscaler controller
+cannot exactly determine the first time a pod becomes ready when
+determining whether to set aside certain CPU metrics. Instead, it
+considers a Pod "not yet ready" if it's unready and transitioned to
+unready within a short, configurable window of time since it started.
+This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30
+seconds. Once a pod has become ready, it considers any transition to
+ready to be the first if it occurred within a longer, configurable time
+since it started. This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its
+default is 5 minutes.
+
+The `currentMetricValue / desiredMetricValue` base scale ratio is then
+calculated using the remaining pods not set aside or discarded from above.
+
+If there were any missing metrics, we recompute the average more
+conservatively, assuming those pods were consuming 100% of the desired
+value in case of a scale down, and 0% in case of a scale up. This dampens
+the magnitude of any potential scale.
+
+Furthermore, if any not-yet-ready pods were present, and we would have
+scaled up without factoring in missing metrics or not-yet-ready pods, we
+conservatively assume the not-yet-ready pods are consuming 0% of the
+desired metric, further dampening the magnitude of a scale up.
+
+After factoring in the not-yet-ready pods and missing metrics, we
+recalculate the usage ratio. If the new ratio reverses the scale
+direction, or is within the tolerance, we skip scaling. Otherwise, we use
+the new ratio to scale.
+
+Note that the *original* value for the average utilization is reported
+back via the HorizontalPodAutoscaler status, without factoring in the
+not-yet-ready pods or missing metrics, even when the new usage ratio is
+used.
+
+If multiple metrics are specified in a HorizontalPodAutoscaler, this
+calculation is done for each metric, and then the largest of the desired
+replica counts is chosen. If any of these metrics cannot be converted
+into a desired replica count (e.g. due to an error fetching the metrics
+from the metrics APIs) and a scale down is suggested by the metrics which
+can be fetched, scaling is skipped. This means that the HPA is still capable
+of scaling up if one or more metrics give a `desiredReplicas` greater than
+the current value.
+
+Finally, just before HPA scales the target, the scale recommendation is recorded. The
+controller considers all recommendations within a configurable window choosing the
+highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
+This means that scaledowns will occur gradually, smoothing out the impact of rapidly
+fluctuating metric values.
+
+## API Object
+
+The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group.
+The current stable version, which only includes support for CPU autoscaling,
+can be found in the `autoscaling/v1` API version.
+
+The beta version, which includes support for scaling on memory and custom metrics,
+can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2`
+are preserved as annotations when working with `autoscaling/v1`.
+
+More details about the API object can be found at
+[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
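+
+As an informal example of the stable `autoscaling/v1` form described above (the names are placeholders, not taken from this page):
+
+```yaml
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: php-apache                      # hypothetical name
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: php-apache                    # the workload to scale
+  minReplicas: 1
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 50    # average CPU target across all Pods
+```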
+
+## Support for Horizontal Pod Autoscaler in kubectl
+
+Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`.
+We can create a new autoscaler using `kubectl create` command.
+We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
+Finally, we can delete an autoscaler using `kubectl delete hpa`.
+
+In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
+For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
+will create an autoscaler for replica set *foo*, with target CPU utilization set to `80%`
+and the number of replicas between 2 and 5.
+The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
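+
+Put together, the commands mentioned above look roughly like this (the `foo` target is a placeholder):
+
+```shell
+kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80   # create an autoscaler for ReplicaSet foo
+kubectl get hpa                                             # list autoscalers
+kubectl describe hpa foo                                    # inspect one autoscaler and its events
+kubectl delete hpa foo                                      # remove the autoscaler
+```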
+
+
+## Autoscaling during rolling update
+
+Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
+or by using the deployment object, which manages the underlying replica sets for you.
+Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
+it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
+
+Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
+i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
+The reason this doesn't work is that when rolling update creates a new replication controller,
+the Horizontal Pod Autoscaler will not be bound to the new replication controller.
+
+## Support for cooldown/delay
+
+When managing the scale of a group of replicas using the Horizontal Pod Autoscaler,
+it is possible that the number of replicas keeps fluctuating frequently due to the
+dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*.
+
+Starting from v1.6, a cluster operator can mitigate this problem by tuning
+the global HPA settings exposed as flags for the `kube-controller-manager` component:
+
+Starting from v1.12, a new algorithmic update removes the need for the
+upscale delay.
+
+- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
+ duration that specifies how long the autoscaler has to wait before another
+ downscale operation can be performed after the current one has completed.
+ The default value is 5 minutes (`5m0s`).
+
+{{< note >}}
+When tuning these parameter values, a cluster operator should be aware of the possible
+consequences. If the delay (cooldown) value is set too long, there could be complaints
+that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if
+the delay value is set too short, the scale of the replica set may keep thrashing as
+usual.
+{{< /note >}}
+
+## Support for multiple metrics
+
+Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API
+version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod
+Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the
+proposed scales will be used as the new scale.
+
+## Support for custom metrics
+
+{{< note >}}
+Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations.
+Support for these annotations was removed in Kubernetes 1.6 in favor of the new autoscaling API. While the old method for collecting
+custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former
+annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller.
+{{< /note >}}
+
+Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler.
+You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API.
+Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.
+
+See [Support for metrics APIs](#support-for-metrics-apis) for the requirements.
+
+## Support for metrics APIs
+
+By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these
+APIs, cluster administrators must ensure that:
+
+* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled.
+
+* The corresponding APIs are registered:
+
+ * For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server).
+ It can be launched as a cluster addon.
+
+ * For custom metrics, this is the `custom.metrics.k8s.io` API. It's provided by "adapter" API servers provided by metrics solution vendors.
+ Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api).
+ If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started.
+
+ * For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above.
+
+* The `--horizontal-pod-autoscaler-use-rest-clients` flag is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated.
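+
+One way to check which of these aggregated APIs are actually registered in a given cluster (assuming `kubectl` access) is to list the API services:
+
+```shell
+kubectl get apiservices | grep metrics
+# expect entries such as v1beta1.metrics.k8s.io once metrics-server (or another adapter) is installed
+```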
+
+For more information on these different metrics paths and how they differ please see the relevant design proposals for
+[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md),
+[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md)
+and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md).
+
+For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
+and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
+* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
+* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/templates/feature-state-alpha.txt b/content/uk/docs/templates/feature-state-alpha.txt
new file mode 100644
index 0000000000..e061aa52be
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-alpha.txt
@@ -0,0 +1,7 @@
+Наразі цей компонент у статусі *alpha*, що означає:
+
+* Назва версії містить слово alpha (напр. v1alpha1).
+* Увімкнення цього компонента може призвести до помилок у системі. За умовчанням цей компонент вимкнутий.
+* Підтримка цього компонентa може бути припинена у будь-який час без попередження.
+* API може стати несумісним у наступних релізах без попередження.
+* Рекомендований до використання лише у тестових кластерах через підвищений ризик виникнення помилок і відсутність довгострокової підтримки.
diff --git a/content/uk/docs/templates/feature-state-beta.txt b/content/uk/docs/templates/feature-state-beta.txt
new file mode 100644
index 0000000000..3790be73f4
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-beta.txt
@@ -0,0 +1,22 @@
+
+Наразі цей компонент у статусі *beta*, що означає:
+
+
+* Назва версії містить слово beta (наприклад, v2beta3).
+
+* Код добре відтестований. Увімкнення цього компонента не загрожує роботі системи. Компонент увімкнутий за умовчанням.
+
+* Загальна підтримка цього компонента триватиме, однак деталі можуть змінитися.
+
+* У наступній beta- чи стабільній версії схема та/або семантика об'єктів може змінитися і стати несумісною. У такому випадку ми надамо інструкції для міграції на наступну версію. Це може призвести до видалення, редагування і перестворення об'єктів API. У процесі редагування вам, можливо, знадобиться продумати зміни в об'єкті. Це може призвести до недоступності застосунків, для роботи яких цей компонент є істотно важливим.
+
+* Використання компонента рекомендоване лише у некритичних для безперебійної діяльності випадках через ризик несумісних змін у подальших релізах. Це обмеження може бути пом'якшене у випадку декількох кластерів, які можна оновлювати окремо.
+
+* **Будь ласка, спробуйте beta-версії наших компонентів і поділіться з нами своєю думкою! Після того, як компонент вийде зі статусу beta, нам буде важче змінити його.**
diff --git a/content/uk/docs/templates/feature-state-deprecated.txt b/content/uk/docs/templates/feature-state-deprecated.txt
new file mode 100644
index 0000000000..7c35b3fc2f
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-deprecated.txt
@@ -0,0 +1,4 @@
+
+
+Цей компонент є *застарілим*. Дізнатися більше про цей статус ви можете зі статті [Політика Kubernetes щодо застарілих компонентів](/docs/reference/deprecation-policy/).
diff --git a/content/uk/docs/templates/feature-state-stable.txt b/content/uk/docs/templates/feature-state-stable.txt
new file mode 100644
index 0000000000..a794f5ceb6
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-stable.txt
@@ -0,0 +1,11 @@
+
+
+Цей компонент є *стабільним*, що означає:
+
+
+* Назва версії має формат vX, де X є цілим числом.
+
+* Стабільні версії компонентів з'являтимуться у багатьох наступних версіях програмного забезпечення.
\ No newline at end of file
diff --git a/content/uk/docs/templates/index.md b/content/uk/docs/templates/index.md
new file mode 100644
index 0000000000..0e0b890542
--- /dev/null
+++ b/content/uk/docs/templates/index.md
@@ -0,0 +1,15 @@
+---
+headless: true
+
+resources:
+- src: "*alpha*"
+ title: "alpha"
+- src: "*beta*"
+ title: "beta"
+- src: "*deprecated*"
+# title: "deprecated"
+ title: "застарілий"
+- src: "*stable*"
+# title: "stable"
+ title: "стабільний"
+---
\ No newline at end of file
diff --git a/content/uk/docs/tutorials/_index.md b/content/uk/docs/tutorials/_index.md
new file mode 100644
index 0000000000..ad03de23df
--- /dev/null
+++ b/content/uk/docs/tutorials/_index.md
@@ -0,0 +1,90 @@
+---
+#title: Tutorials
+title: Навчальні матеріали
+main_menu: true
+weight: 60
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+
+У цьому розділі документації Kubernetes зібрані навчальні матеріали. Кожний матеріал показує, як досягти окремої мети, що більша за одне [завдання](/docs/tasks/). Зазвичай навчальний матеріал має декілька розділів, кожен з яких містить певну послідовність дій. До ознайомлення з навчальними матеріалами вам, можливо, знадобиться додати у закладки сторінку з [Глосарієм](/docs/reference/glossary/) для подальшого консультування.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+## Основи
+
+
+* [Основи Kubernetes](/docs/tutorials/kubernetes-basics/) - детальний навчальний матеріал з інтерактивними уроками, що допоможе вам зрозуміти Kubernetes і спробувати його базову функціональність.
+
+* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
+
+* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
+
+* [Привіт Minikube](/docs/tutorials/hello-minikube/)
+
+
+## Конфігурація
+
+* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
+
+## Застосунки без стану (Stateless Applications)
+
+* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
+
+* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
+
+## Застосунки зі станом (Stateful Applications)
+
+* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
+
+* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
+
+* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
+
+* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
+
+## CI/CD Pipeline
+
+* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
+
+* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
+
+* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
+
+* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)
+
+## Кластери
+
+* [AppArmor](/docs/tutorials/clusters/apparmor/)
+
+## Сервіси
+
+* [Using Source IP](/docs/tutorials/services/source-ip/)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+Якщо ви хочете написати навчальний матеріал, у статті
+[Використання шаблонів сторінок](/docs/home/contribute/page-templates/)
+ви знайдете інформацію про тип навчальної сторінки і шаблон.
+
+{{% /capture %}}
diff --git a/content/uk/docs/tutorials/hello-minikube.md b/content/uk/docs/tutorials/hello-minikube.md
new file mode 100644
index 0000000000..356426112b
--- /dev/null
+++ b/content/uk/docs/tutorials/hello-minikube.md
@@ -0,0 +1,394 @@
+---
+#title: Hello Minikube
+title: Привіт Minikube
+content_template: templates/tutorial
+weight: 5
+menu:
+ main:
+ #title: "Get Started"
+ title: "Початок роботи"
+ weight: 10
+ #post: >
+ #Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.
+ post: >
+ Готові попрацювати? Створимо простий Kubernetes кластер для запуску Node.js застосунку "Hello World".
+card:
+ #name: tutorials
+ name: навчальні матеріали
+ weight: 10
+---
+
+{{% capture overview %}}
+
+
+З цього навчального матеріалу ви дізнаєтесь, як запустити у Kubernetes простий Hello World застосунок на Node.js за допомогою [Minikube](/docs/setup/learning-environment/minikube) і Katacoda. Katacoda надає безплатне Kubernetes середовище, що доступне у вашому браузері.
+
+
+{{< note >}}
+Також ви можете навчатись за цим матеріалом, якщо встановили [Minikube локально](/docs/tasks/tools/install-minikube/).
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+
+* Розгорнути Hello World застосунок у Minikube.
+
+* Запустити застосунок.
+
+* Переглянути логи застосунку.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+
+У цьому навчальному матеріалі ми використовуємо образ контейнера, зібраний із наступних файлів:
+
+{{< codenew language="js" file="minikube/server.js" >}}
+
+{{< codenew language="conf" file="minikube/Dockerfile" >}}
+
+
+Більше інформації про команду `docker build` ви знайдете у [документації Docker](https://docs.docker.com/engine/reference/commandline/build/).
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+
+## Створення Minikube кластера
+
+
+1. Натисніть кнопку **Запуск термінала**
+
+ {{< kat-button >}}
+
+
+ {{< note >}}Якщо Minikube встановлений локально, виконайте команду `minikube start`.{{< /note >}}
+
+
+2. Відкрийте Kubernetes дашборд у браузері:
+
+ ```shell
+ minikube dashboard
+ ```
+
+
+3. Тільки для Katacoda: у верхній частині вікна термінала натисніть знак плюс, а потім -- **Select port to view on Host 1**.
+
+
+4. Тільки для Katacoda: введіть `30000`, а потім натисніть **Display Port**.
+
+
+## Створення Deployment
+
+
+[*Под*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Под має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Пода і перезапускає контейнер Пода, якщо контейнер перестає працювати. Створювати і масштабувати Поди рекомендується за допомогою Deployment'ів.
+
+
+1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Подом. Под запускає контейнер на основі наданого Docker образу.
+
+ ```shell
+ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
+ ```
+
+
+2. Перегляньте інформацію про запущений Deployment:
+
+ ```shell
+ kubectl get deployments
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ hello-node 1/1 1 1 1m
+ ```
+
+
+3. Перегляньте інформацію про запущені Поди:
+
+ ```shell
+ kubectl get pods
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
+ ```
+
+
+4. Перегляньте події кластера:
+
+ ```shell
+ kubectl get events
+ ```
+
+
+5. Перегляньте конфігурацію `kubectl`:
+
+ ```shell
+ kubectl config view
+ ```
+
+
+ {{< note >}}Більше про команди `kubectl` ви можете дізнатися зі статті [Загальна інформація про kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}}
+
+
+## Створення Сервісу
+
+
+За умовчанням, Под доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Под необхідно відкрити як Kubernetes [*Сервіс*](/docs/concepts/services-networking/service/).
+
+
+1. Відкрийте Под для публічного доступу з інтернету за допомогою команди `kubectl expose`:
+
+ ```shell
+ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+ ```
+
+
+ Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Сервісу за межами кластера.
+
+
+2. Перегляньте інформацію за Сервісом, який ви щойно створили:
+
+ ```shell
+ kubectl get services
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+ hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
+ kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
+ ```
+
+
+ Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Сервісу надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Сервіс доступним ззовні за допомогою команди `minikube service`.
+
+
+3. Виконайте наступну команду:
+
+ ```shell
+ minikube service hello-node
+ ```
+
+
+4. Тільки для Katacoda: натисніть знак плюс, а потім -- **Select port to view on Host 1**.
+
+
+5. Тільки для Katacoda: запишіть п'ятизначний номер порту, що відображається напроти `8080` у виводі сервісу. Номер цього порту генерується довільно і тому може бути іншим у вашому випадку. Введіть номер порту у призначене для цього текстове поле і натисніть **Display Port**. У нашому прикладі номер порту `30369`.
+
+
+ Це відкриє вікно браузера, в якому запущений ваш застосунок, і покаже повідомлення "Hello World".
+
+
+## Увімкнення розширень
+
+
+Minikube має ряд вбудованих {{< glossary_tooltip text="розширень" term_id="addons" >}}, які можна увімкнути, вимкнути і відкрити у локальному Kubernetes оточенні.
+
+
+1. Перегляньте перелік підтримуваних розширень:
+
+ ```shell
+ minikube addons list
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ addon-manager: enabled
+ dashboard: enabled
+ default-storageclass: enabled
+ efk: disabled
+ freshpod: disabled
+ gvisor: disabled
+ helm-tiller: disabled
+ ingress: disabled
+ ingress-dns: disabled
+ logviewer: disabled
+ metrics-server: disabled
+ nvidia-driver-installer: disabled
+ nvidia-gpu-device-plugin: disabled
+ registry: disabled
+ registry-creds: disabled
+ storage-provisioner: enabled
+ storage-provisioner-gluster: disabled
+ ```
+
+
+2. Увімкніть розширення, наприклад `metrics-server`:
+
+ ```shell
+ minikube addons enable metrics-server
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ metrics-server was successfully enabled
+ ```
+
+
+3. Перегляньте інформацію про Под і Сервіс, які ви щойно створили:
+
+ ```shell
+ kubectl get pod,svc -n kube-system
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME                                   READY   STATUS    RESTARTS   AGE
+ pod/coredns-5644d7b6d9-mh9ll           1/1     Running   0          34m
+ pod/coredns-5644d7b6d9-pqd2t           1/1     Running   0          34m
+ pod/metrics-server-67fb648c5           1/1     Running   0          26s
+ pod/etcd-minikube                      1/1     Running   0          34m
+ pod/influxdb-grafana-b29w8             2/2     Running   0          26s
+ pod/kube-addon-manager-minikube        1/1     Running   0          34m
+ pod/kube-apiserver-minikube            1/1     Running   0          34m
+ pod/kube-controller-manager-minikube   1/1     Running   0          34m
+ pod/kube-proxy-rnlps                   1/1     Running   0          34m
+ pod/kube-scheduler-minikube            1/1     Running   0          34m
+ pod/storage-provisioner                1/1     Running   0          34m
+
+ NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+ service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
+ service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
+ service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
+ service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
+ ```
+
+
+4. Вимкніть `metrics-server`:
+
+ ```shell
+ minikube addons disable metrics-server
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ metrics-server was successfully disabled
+ ```
+
+
+## Вивільнення ресурсів
+
+
+Тепер ви можете видалити ресурси, які створили у вашому кластері:
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+
+
+За бажанням, зупиніть віртуальну машину (ВМ) з Minikube:
+
+```shell
+minikube stop
+```
+
+
+За бажанням, видаліть ВМ з Minikube:
+
+```shell
+minikube delete
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+* Дізнайтеся більше про [об'єкти Deployment](/docs/concepts/workloads/controllers/deployment/).
+
+* Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/).
+
+* Дізнайтеся більше про [об'єкти сервісу](/docs/concepts/services-networking/service/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/tutorials/kubernetes-basics/_index.html b/content/uk/docs/tutorials/kubernetes-basics/_index.html
new file mode 100644
index 0000000000..466b8b3437
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/_index.html
@@ -0,0 +1,138 @@
+---
+title: Дізнатися про основи Kubernetes
+linkTitle: Основи Kubernetes
+weight: 10
+card:
+ name: навчальні матеріали
+ weight: 20
+ title: Знайомство з основами
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Основи Kubernetes
+
+
Цей навчальний матеріал ознайомить вас з основами системи оркестрації Kubernetes кластера. Кожен модуль містить загальну інформацію щодо основної функціональності і концепцій Kubernetes, а також інтерактивний онлайн-урок. Завдяки цим інтерактивним урокам ви зможете самостійно керувати простим кластером і розгорнутими в ньому контейнеризованими застосунками.
+
+
З інтерактивних уроків ви дізнаєтесь:
+
+
+ як розгорнути контейнеризований застосунок у кластері.
+
+ як масштабувати Deployment.
+
+ як розгорнути нову версію контейнеризованого застосунку.
+
+ як відлагодити контейнеризований застосунок.
+
+
+
Навчальні матеріали використовують Katacoda для запуску у вашому браузері віртуального термінала, в якому запущено Minikube - невеликий локально розгорнутий Kubernetes, що може працювати будь-де. Вам не потрібно встановлювати або налаштовувати жодне програмне забезпечення: кожен інтерактивний урок запускається просто у вашому браузері.
+
+
+
+
+
+
+
+
+
Чим Kubernetes може бути корисний для вас?
+
+
Від сучасних вебсервісів користувачі очікують доступності 24/7, а розробники - можливості розгортати нові версії цих застосунків по кілька разів на день. Контейнеризація, що допомагає упакувати програмне забезпечення, якнайкраще сприяє цим цілям. Вона дозволяє випускати і оновлювати застосунки легко, швидко та без простою. Із Kubernetes ви можете бути певні, що ваші контейнеризовані застосунки запущені там і тоді, де ви цього хочете, а також забезпечені усіма необхідними для роботи ресурсами та інструментами. Kubernetes - це висококласна платформа з відкритим вихідним кодом, в основі якої - накопичений досвід оркестрації контейнерів від Google, поєднаний із найкращими ідеями і практиками від спільноти.
+
+
+
+
+
+
+
+
Навчальні модулі "Основи Kubernetes"
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md
new file mode 100644
index 0000000000..9173c90d34
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md
@@ -0,0 +1,4 @@
+---
+title: Створення кластера
+weight: 10
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
new file mode 100644
index 0000000000..20d89f23ca
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
@@ -0,0 +1,37 @@
+---
+title: Інтерактивний урок - Створення кластера
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
new file mode 100644
index 0000000000..1a4e179a69
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -0,0 +1,152 @@
+---
+title: Використання Minikube для створення кластера
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Зрозуміти, що таке Kubernetes кластер.
+
+ Зрозуміти, що таке Minikube.
+
+ Запустити Kubernetes кластер за допомогою онлайн-термінала.
+
+
+
+
+
+
Kubernetes кластери
+
+
+ Kubernetes координує високодоступний кластер комп'ютерів, з'єднаних таким чином, щоб працювати як одне ціле. Абстракції Kubernetes дозволяють вам розгортати контейнеризовані застосунки в кластері без конкретної прив'язки до окремих машин. Для того, щоб скористатися цією новою моделлю розгортання, застосунки потрібно упакувати таким чином, щоб звільнити їх від прив'язки до окремих хостів, тобто контейнеризувати. Контейнеризовані застосунки більш гнучкі і доступні, ніж попередні моделі розгортання, що передбачали встановлення застосунків безпосередньо на призначені для цього машини у вигляді програмного забезпечення, яке глибоко інтегрувалося із хостом. Kubernetes дозволяє автоматизувати розподіл і запуск контейнерів застосунку у кластері, а це набагато ефективніше. Kubernetes - це платформа з відкритим вихідним кодом, готова для використання у проді.
+
+
+
Kubernetes кластер складається з двох типів ресурсів:
+
+ master , що координує роботу кластера
+ вузли (nodes) - робочі машини, на яких запущені застосунки
+
+
+
+
+
+
+
+
Зміст:
+
+
+ Kubernetes кластер
+
+ Minikube
+
+
+
+
+
+ Kubernetes - це довершена платформа з відкритим вихідним кодом, що оркеструє розміщення і запуск контейнерів застосунку всередині та між комп'ютерними кластерами.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Master відповідає за керування кластером. Master координує всі процеси у вашому кластері, такі як запуск застосунків, підтримка їх бажаного стану, масштабування застосунків і викатка оновлень.
+
+
+
Вузол (node) - це ВМ або фізичний комп'ютер, що виступає у ролі робочої машини в Kubernetes кластері. Кожен вузол має kubelet - агент для управління вузлом і обміну даними з Kubernetes master. Також на вузлі мають бути встановлені інструменти для виконання операцій з контейнерами, такі як Docker або rkt. Kubernetes кластер у проді повинен складатися як мінімум із трьох вузлів.
+
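Переглянути вузли кластера та основну інформацію про них можна, наприклад, такою командою (за умови, що kubectl уже налаштований на роботу з вашим кластером):

```shell
# Вивести список вузлів кластера з додатковими деталями
kubectl get nodes -o wide
```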
+
+
+
+
+
Master'и керують кластером, а вузли використовуються для запуску застосунків.
+
+
+
+
+
+
+
+
Коли ви розгортаєте застосунки у Kubernetes, ви кажете master-вузлу запустити контейнери застосунку. Master розподіляє контейнери для запуску на вузлах кластера. Для обміну даними з master вузли використовують Kubernetes API , який надається master-вузлом. Кінцеві користувачі також можуть взаємодіяти із кластером безпосередньо через Kubernetes API.
+
+
+
Kubernetes кластер можна розгорнути як на фізичних, так і на віртуальних серверах. Щоб розпочати розробку під Kubernetes, ви можете скористатися Minikube - спрощеною реалізацією Kubernetes. Minikube створює на вашому локальному комп'ютері ВМ, на якій розгортає простий кластер з одного вузла. Існують версії Minikube для операційних систем Linux, macOS та Windows. Minikube CLI надає основні операції для роботи з вашим кластером, такі як start, stop, status і delete. Однак у цьому уроці ви використовуватимете онлайн термінал із вже встановленим Minikube.
+
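Для довідки: основні операції Minikube CLI, згадані вище, виглядають приблизно так (якщо Minikube встановлений локально):

```shell
minikube start    # створити і запустити локальний кластер
minikube status   # перевірити стан кластера
minikube stop     # зупинити кластер
minikube delete   # видалити кластер
```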
+
+
Тепер ви знаєте, що таке Kubernetes. Тож давайте перейдемо до інтерактивного уроку і створимо ваш перший кластер!
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md
new file mode 100644
index 0000000000..a9c1ff2376
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md
@@ -0,0 +1,4 @@
+---
+title: Розгортання застосунку
+weight: 20
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html
new file mode 100644
index 0000000000..d89a05a95b
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html
@@ -0,0 +1,41 @@
+---
+title: Інтерактивний урок - Розгортання застосунку
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
new file mode 100644
index 0000000000..ce9229ca85
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -0,0 +1,151 @@
+---
+title: Використання kubectl для створення Deployment'а
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Дізнатися, що таке Deployment застосунків.
+
+ Розгорнути свій перший застосунок у Kubernetes за допомогою kubectl.
+
+
+
+
+
+
Процеси Kubernetes Deployment
+
+
+ Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Поди для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Поди по окремих вузлах кластера.
+
+
+
+
Після створення Поди застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Под, зупинив роботу або був видалений, Deployment контролер переміщає цей Под на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.
+
+
+
До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Подів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.
+
+
+
+
+
+
+
Зміст:
+
+
+ Deployment'и
+ Kubectl
+
+
+
+
+
+ Deployment відповідає за створення і оновлення Подів для вашого застосунку
+
+
+
+
+
+
+
+
+
Як розгорнути ваш перший застосунок у Kubernetes
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Ви можете створити Deployment і керувати ним за допомогою командного рядка Kubernetes - kubectl . kubectl взаємодіє з кластером через API Kubernetes. У цьому модулі ви вивчите найпоширеніші команди kubectl для створення Deployment'ів, які запускатимуть ваші застосунки у Kubernetes кластері.
+
+
+
Коли ви створюєте Deployment, вам необхідно задати образ контейнера для вашого застосунку і скільки реплік ви хочете запустити. Згодом цю інформацію можна змінити, оновивши Deployment. У навчальних модулях 5 і 6 йдеться про те, як масштабувати і оновлювати Deployment'и.
+
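Умовний приклад створення Deployment'а за допомогою kubectl (назва Deployment'а та образ наведені тут лише для ілюстрації):

```shell
# Створити Deployment із заданим образом контейнера
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# Переконатися, що Deployment створено
kubectl get deployments
```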
+
+
+
+
+
+
+
+
Для того, щоб розгортати застосунки в Kubernetes, їх потрібно упакувати в один із підтримуваних форматів контейнерів
+
+
+
+
+
+
+
+
+ Для створення Deployment'а ви використовуватимете застосунок, написаний на Node.js і упакований в Docker контейнер. (Якщо ви ще не пробували створити Node.js застосунок і розгорнути його у контейнері, радимо почати саме з цього; інструкції ви знайдете у навчальному матеріалі Привіт Minikube ).
+
+
+
+
Тепер ви знаєте, що таке Deployment. Тож давайте перейдемо до інтерактивного уроку і розгорнемо ваш перший застосунок!
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md
new file mode 100644
index 0000000000..93ac6a7774
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md
@@ -0,0 +1,4 @@
+---
+title: Вивчення застосунку
+weight: 30
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html
new file mode 100644
index 0000000000..a4bec18079
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html
@@ -0,0 +1,41 @@
+---
+title: Інтерактивний урок - Вивчення застосунку
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html
new file mode 100644
index 0000000000..93899fd77f
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html
@@ -0,0 +1,200 @@
+---
+title: Ознайомлення з Подами і вузлами (nodes)
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Дізнатися, що таке Поди Kubernetes.
+
+ Дізнатися, що таке вузли Kubernetes.
+
+ Діагностика розгорнутих застосунків.
+
+
+
+
+
+
Поди Kubernetes
+
+
Коли ви створили Deployment у модулі 2 , Kubernetes створив Под , щоб розмістити ваш застосунок. Под - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:
+
+
+ Спільні сховища даних, або Volumes
+
+ Мережа, адже кожен Под у кластері має унікальну IP-адресу
+
+ Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів
+
+
+
Под моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Поді може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Пода мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.
+
+
+
Под є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Поди вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Под прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Поди розподіляються по інших доступних вузлах кластера.
+
+
+
+
+
Зміст:
+
+ Поди
+ Вузли
+ Основні команди kubectl
+
+
+
+
+
+ Под - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити.
+
+
+
+
+
+
+
+
+
Узагальнена схема Подів
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Вузли
+
+
Под завжди запускається на вузлі . Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Подів. Kubernetes master автоматично розподіляє Поди по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.
+
+
+
На кожному вузлі Kubernetes запущені як мінімум:
+
+
+ kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Поди і контейнери, запущені на машині.
+
+ оточення для контейнерів (таке як Docker, rkt), що забезпечує завантаження образу контейнера з реєстру, розпакування контейнера і запуск застосунку.
+
+
+
+
+
+
+
+
Контейнери повинні бути разом в одному Поді, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск.
+
+
+
+
+
+
+
+
+
Узагальнена схема вузлів
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Діагностика за допомогою kubectl
+
+
У модулі 2 ви вже використовували інтерфейс командного рядка kubectl. У модулі 3 ви продовжуватимете користуватися ним для отримання інформації про застосунки та оточення, в яких вони розгорнуті. Нижченаведені команди kubectl допоможуть вам виконати наступні поширені дії:
+
+
+ kubectl get - відобразити список ресурсів
+
+ kubectl describe - показати детальну інформацію про ресурс
+
+ kubectl logs - вивести логи контейнера, розміщеного в Поді
+
+ kubectl exec - виконати команду в контейнері, розміщеному в Поді
+
+
+
+
За допомогою цих команд ви можете подивитись, коли і в якому оточенні був розгорнутий застосунок, перевірити його поточний статус і конфігурацію.
+
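Наприклад (назва Пода тут умовна і у вашому кластері буде іншою):

```shell
kubectl get pods                                   # список Подів
kubectl describe pod hello-node-5f76cf6ccf-br9b5   # детальна інформація про Под
kubectl logs hello-node-5f76cf6ccf-br9b5           # логи контейнера, розміщеного в Поді
kubectl exec hello-node-5f76cf6ccf-br9b5 -- env    # виконати команду env у контейнері
```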
+
+
А зараз, коли ми дізналися більше про складові нашого кластера і командний рядок, давайте детальніше розглянемо наш застосунок.
+
+
+
+
+
+
Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Подів.
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md
new file mode 100644
index 0000000000..ef49a1b632
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md
@@ -0,0 +1,4 @@
+---
+title: Відкриття доступу до застосунку за межами кластера
+weight: 40
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html
new file mode 100644
index 0000000000..4f3a87928d
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html
@@ -0,0 +1,38 @@
+---
+title: Інтерактивний урок - Відкриття доступу до застосунку
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html
new file mode 100644
index 0000000000..0ddad3b8da
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -0,0 +1,169 @@
+---
+#title: Using a Service to Expose Your App
+title: Використання Cервісу для відкриття доступу до застосунку за межами кластера
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Дізнатись, що таке Cервіс у Kubernetes
+
+ Зрозуміти, яке відношення до Cервісу мають мітки та LabelSelector
+
+ Відкрити доступ до застосунку за межами Kubernetes кластера, використовуючи Cервіс
+
+
+
+
+
+
Загальна інформація про Kubernetes Cервіси
+
+
+
Поди Kubernetes "смертні" і мають власний життєвий цикл . Коли робочий вузол припиняє роботу, ми також втрачаємо всі Поди, запущені на ньому. ReplicaSet здатна динамічно повернути кластер до бажаного стану шляхом створення нових Подів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Поду. Водночас, кожний Под у Kubernetes кластері має унікальну IP-адресу, навіть Поди на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Подами для того, щоб ваші застосунки продовжували працювати.
+
+
+
Сервіс у Kubernetes - це абстракція, що визначає логічний набір Подів і політику доступу до них. Сервіси уможливлюють слабку зв'язаність між залежними Подами. Для визначення Сервісу використовують YAML-файл (рекомендовано) або JSON, як для решти об'єктів Kubernetes. Набір Подів, призначених для Сервісу, зазвичай визначається через LabelSelector (нижче пояснюється, чому параметр selector іноді не включають у специфікацію сервісу).
+
+
+
Попри те, що кожен Под має унікальний IP, ці IP-адреси не видні за межами кластера без Сервісу. Сервіси уможливлюють надходження трафіка до ваших застосунків. Відкрити Сервіс можна по-різному, вказавши потрібний type у ServiceSpec:
+
+
+ ClusterIP (типове налаштування) - відкриває доступ до Сервісу у кластері за внутрішнім IP. Цей тип робить Сервіс доступним лише у межах кластера.
+
+ NodePort - відкриває доступ до Сервісу на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Сервіс доступним поза межами кластера, використовуючи <NodeIP>:<NodePort>. Є надмножиною відносно ClusterIP.
+
+ LoadBalancer - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Сервісу статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.
+
+ ExternalName - відкриває доступ до Сервісу, використовуючи довільне ім'я (визначається параметром externalName у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії kube-dns 1.7 і вище.
+
+
+
Більше інформації про різні типи Сервісів ви знайдете у навчальному матеріалі Використання вихідної IP-адреси . Дивіться також Поєднання застосунків з Сервісами .
+
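+ Умовний приклад: тип Сервісу можна задати прапорцем --type команди kubectl expose (назва Deployment'а hello-node тут лише для ілюстрації):
+
+ ```shell
+ # Відкрити Deployment як Сервіс типу NodePort на порту 8080
+ kubectl expose deployment hello-node --type=NodePort --port=8080
+
+ # Переглянути створений Сервіс і призначений йому NodePort
+ kubectl get services hello-node
+ ```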
+
Також зауважте, що для деяких сценаріїв використання Сервісів параметр selector не задається у специфікації Сервісу. Сервіс, створений без визначення параметра selector, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Сервіс на конкретні кінцеві точки (endpoints). Інший випадок, коли селектор може бути не потрібний - використання строго заданого параметра type: ExternalName.
+
+
+
+
+
Зміст
+
+
+ Відкриття Подів для зовнішнього трафіка
+
+ Балансування навантаження трафіка між Подами
+
+ Використання міток
+
+
+
+
+
Сервіс Kubernetes - це шар абстракції, який визначає логічний набір Подів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Подів.
+
+
+
+
+
+
+
+
+
Сервіси і мітки
+
+
+
+
+
+
+
+
+
+
+
+
+
Сервіс маршрутизує трафік між Подами, що входять до його складу. Сервіс - це абстракція, завдяки якій Поди в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Сервіси в Kubernetes здійснюють виявлення і маршрутизацію між залежними Подами (як наприклад, фронтенд- і бекенд-компоненти застосунку).
+
+
Сервіси співвідносяться з набором Подів за допомогою міток і селекторів -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:
+
+
+ Позначення об'єктів для дев, тест і прод оточень
+
+ Прикріплення тегу версії
+
+ Класифікування об'єктів за допомогою тегів
+
+
+
+
+
+
+
Ви можете створити Сервіс одночасно із Deployment, виконавши команду --expose в kubectl.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Сервісу і прикріпимо мітки.
+
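Для довідки: працювати з мітками можна приблизно такими командами kubectl (назва Пода і мітка тут умовні):

```shell
kubectl label pod hello-node-5f76cf6ccf-br9b5 version=v1   # додати мітку до Пода
kubectl get pods -l version=v1                             # відфільтрувати Поди за міткою
```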
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md
new file mode 100644
index 0000000000..c6e1a94dc1
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md
@@ -0,0 +1,4 @@
+---
+title: Масштабування застосунку
+weight: 50
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html
new file mode 100644
index 0000000000..540ae92b23
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html
@@ -0,0 +1,40 @@
+---
+title: Інтерактивний урок - Масштабування застосунку
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html
new file mode 100644
index 0000000000..c2318da8ed
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html
@@ -0,0 +1,145 @@
+---
+title: Запуск вашого застосунку на декількох Подах
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Масштабувати застосунок за допомогою kubectl.
+
+
+
+
+
+
Масштабування застосунку
+
+
+
У попередніх модулях ми створили Deployment і відкрили його для зовнішнього трафіка за допомогою Сервісу . Deployment створив лише один Под для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.
+
+
+
Масштабування досягається шляхом зміни кількості реплік у Deployment'і.
+
+
+
+
+
+
Зміст:
+
+
+ Масштабування Deployment'а
+
+
+
+
+
Кількість Подів можна вказати одразу при створенні Deployment'а за допомогою параметра --replicas під час запуску команди kubectl run
+
+
+
+
+
+
+
+
Загальна інформація про масштабування
+
+
+
+
+
+
+
+
+
+
+
+
Масштабування Deployment'а забезпечує створення нових Подів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Подів відповідно до нового бажаного стану. Kubernetes також підтримує автоматичне масштабування , однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Подів у визначеному Deployment'і.
+
+
+
Запустивши застосунок на декількох Подах, необхідно розподілити між ними трафік. Сервіси мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Подами відкритого Deployment'а. Сервіси безперервно моніторять запущені Поди за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Поди.
+
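Умовний приклад масштабування за допомогою kubectl (назва Deployment'а наведена лише для ілюстрації):

```shell
# Масштабувати Deployment до 4 реплік
kubectl scale deployments/kubernetes-bootcamp --replicas=4

# Перевірити кількість готових реплік
kubectl get deployments
```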
+
+
+
+
+
Масштабування досягається шляхом зміни кількості реплік у Deployment'і.
+
+
+
+
+
+
+
+
+
+
Після запуску декількох примірників застосунку ви зможете виконувати послідовне оновлення без шкоди для доступності системи. Ми розповімо вам про це у наступному модулі. А зараз давайте повернемось до онлайн термінала і масштабуємо наш застосунок.
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/_index.md b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md
new file mode 100644
index 0000000000..c253433db1
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md
@@ -0,0 +1,4 @@
+---
+title: Оновлення застосунку
+weight: 60
+---
diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html
new file mode 100644
index 0000000000..5d8e398b40
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html
@@ -0,0 +1,37 @@
+---
+title: Інтерактивний урок - Оновлення застосунку
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html
new file mode 100644
index 0000000000..5630eeaf81
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html
@@ -0,0 +1,168 @@
+---
+title: Виконання послідовного оновлення (rolling update)
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+ Виконати послідовне оновлення, використовуючи kubectl.
+
+
+
+
+
+
Оновлення застосунку
+
+
+
Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. Послідовні оновлення дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Подів іншими. Нові Поди розподіляються по вузлах з доступними ресурсами.
+
+
+
У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Подах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Подів, недоступних під час оновлення, і максимальна кількість нових Подів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Подів) еквіваленті.
+ У Kubernetes оновлення версіонуються, тому кожне оновлення Deployment'а можна відкотити до попередньої (стабільної) версії.
+
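Умовний приклад послідовного оновлення і відкоту за допомогою kubectl (назви Deployment'а, контейнера та образу наведені лише для ілюстрації):

```shell
# Оновити образ контейнера, що запустить послідовне оновлення
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

# Стежити за ходом оновлення
kubectl rollout status deployments/kubernetes-bootcamp

# За потреби відкотитися до попередньої версії
kubectl rollout undo deployments/kubernetes-bootcamp
```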
+
+
+
+
+
Зміст:
+
+
+ Оновлення застосунку
+
+
+
+
+
Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Подів іншими.
+
+
+
+
+
+
+
+
Загальна інформація про послідовне оновлення
+
+
+
+
+
+
+
+
+
+
Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Сервіс розподілятиме трафік лише на доступні Поди. Під доступним мається на увазі Под, готовий до експлуатації користувачами застосунку.
+
+
+
Послідовне оновлення дозволяє вам:
+
+
+ Просувати застосунок з одного оточення в інше (шляхом оновлення образу контейнера)
+
+ Відкочуватися до попередніх версій
+
+ Здійснювати безперервну інтеграцію та розгортання застосунків без простою
+
+
+
+
+
+
+
+
Якщо Deployment "відкритий у світ", то під час оновлення сервіс розподілятиме трафік лише на доступні Поди.
+
+
+
+
+
+
+
+
+
+
В інтерактивному уроці ми оновимо наш застосунок до нової версії, а потім відкотимося до попередньої.
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/uk/examples/controllers/job.yaml b/content/uk/examples/controllers/job.yaml
new file mode 100644
index 0000000000..b448f2eb81
--- /dev/null
+++ b/content/uk/examples/controllers/job.yaml
@@ -0,0 +1,14 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi
+spec:
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+ backoffLimit: 4
+
diff --git a/content/uk/examples/controllers/nginx-deployment.yaml b/content/uk/examples/controllers/nginx-deployment.yaml
new file mode 100644
index 0000000000..f7f95deebb
--- /dev/null
+++ b/content/uk/examples/controllers/nginx-deployment.yaml
@@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.7.9
+ ports:
+ - containerPort: 80
diff --git a/content/uk/examples/controllers/replication.yaml b/content/uk/examples/controllers/replication.yaml
new file mode 100644
index 0000000000..6eff0b9b57
--- /dev/null
+++ b/content/uk/examples/controllers/replication.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: nginx
+spec:
+ replicas: 3
+ selector:
+ app: nginx
+ template:
+ metadata:
+ name: nginx
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
diff --git a/content/uk/examples/minikube/Dockerfile b/content/uk/examples/minikube/Dockerfile
new file mode 100644
index 0000000000..dd58cb7e75
--- /dev/null
+++ b/content/uk/examples/minikube/Dockerfile
@@ -0,0 +1,4 @@
+FROM node:6.14.2
+EXPOSE 8080
+COPY server.js .
+CMD [ "node", "server.js" ]
diff --git a/content/uk/examples/minikube/server.js b/content/uk/examples/minikube/server.js
new file mode 100644
index 0000000000..76345a17d8
--- /dev/null
+++ b/content/uk/examples/minikube/server.js
@@ -0,0 +1,9 @@
+var http = require('http');
+
+var handleRequest = function(request, response) {
+ console.log('Received request for URL: ' + request.url);
+ response.writeHead(200);
+ response.end('Hello World!');
+};
+var www = http.createServer(handleRequest);
+www.listen(8080);
diff --git a/content/uk/examples/service/networking/dual-stack-default-svc.yaml b/content/uk/examples/service/networking/dual-stack-default-svc.yaml
new file mode 100644
index 0000000000..00ed87ba19
--- /dev/null
+++ b/content/uk/examples/service/networking/dual-stack-default-svc.yaml
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
\ No newline at end of file
diff --git a/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml
new file mode 100644
index 0000000000..a875f44d6d
--- /dev/null
+++ b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ ipFamily: IPv4
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
\ No newline at end of file
diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml
new file mode 100644
index 0000000000..2586ec9b39
--- /dev/null
+++ b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ labels:
+ app: MyApp
+spec:
+ ipFamily: IPv6
+ type: LoadBalancer
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
\ No newline at end of file
diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml
new file mode 100644
index 0000000000..2aa0725059
--- /dev/null
+++ b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ ipFamily: IPv6
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
\ No newline at end of file
diff --git a/i18n/uk.toml b/i18n/uk.toml
new file mode 100644
index 0000000000..a86a980e7a
--- /dev/null
+++ b/i18n/uk.toml
@@ -0,0 +1,247 @@
+# i18n strings for the Ukrainian (main) site.
+
+[caution]
+# other = "Caution:"
+other = "Увага:"
+
+[cleanup_heading]
+# other = "Cleaning up"
+other = "Очистка"
+
+[community_events_calendar]
+# other = "Events Calendar"
+other = "Календар подій"
+
+[community_forum_name]
+# other = "Forum"
+other = "Форум"
+
+[community_github_name]
+other = "GitHub"
+
+[community_slack_name]
+other = "Slack"
+
+[community_stack_overflow_name]
+other = "Stack Overflow"
+
+[community_twitter_name]
+other = "Twitter"
+
+[community_youtube_name]
+other = "YouTube"
+
+[deprecation_warning]
+# other = " documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the "
+other = " документація більше не підтримується. Версія, яку ви зараз переглядаєте, є статичною. Для перегляду актуальної документації дивіться "
+
+[deprecation_file_warning]
+# other = "Deprecated"
+other = "Застаріла версія"
+
+[docs_label_browse]
+# other = "Browse Docs"
+other = "Переглянути документацію"
+
+[docs_label_contributors]
+# other = "Contributors"
+other = "Контриб'ютори"
+
+[docs_label_i_am]
+# other = "I AM..."
+other = "Я..."
+
+[docs_label_users]
+# other = "Users"
+other = "Користувачі"
+
+[feedback_heading]
+# other = "Feedback"
+other = "Ваша думка"
+
+[feedback_no]
+# other = "No"
+other = "Ні"
+
+[feedback_question]
+# other = "Was this page helpful?"
+other = "Чи була ця сторінка корисною?"
+
+[feedback_yes]
+# other = "Yes"
+other = "Так"
+
+[latest_version]
+# other = "latest version."
+other = "остання версія."
+
+[layouts_blog_pager_prev]
+# other = "<< Prev"
+other = "<< Назад"
+
+[layouts_blog_pager_next]
+# other = "Next >>"
+other = "Далі >>"
+
+[layouts_case_studies_list_tell]
+# other = "Tell your story"
+other = "Розкажіть свою історію"
+
+[layouts_docs_glossary_aka]
+# other = "Also known as"
+other = "Також відомий як"
+
+[layouts_docs_glossary_description]
+# other = "This glossary is intended to be a comprehensive, standardized list of Kubernetes terminology. It includes technical terms that are specific to Kubernetes, as well as more general terms that provide useful context."
+other = "Даний словник створений як повний стандартизований список термінології Kubernetes. Він включає в себе технічні терміни, специфічні для Kubernetes, а також більш загальні терміни, необхідні для кращого розуміння контексту."
+
+[layouts_docs_glossary_deselect_all]
+# other = "Deselect all"
+other = "Очистити вибір"
+
+[layouts_docs_glossary_click_details_after]
+# other = "indicators below to get a longer explanation for any particular term."
+other = "для отримання розширеного пояснення конкретного терміна."
+
+[layouts_docs_glossary_click_details_before]
+# other = "Click on the"
+other = "Натисність на"
+
+[layouts_docs_glossary_filter]
+# other = "Filter terms according to their tags"
+other = "Відфільтрувати терміни за тегами"
+
+[layouts_docs_glossary_select_all]
+# other = "Select all"
+other = "Вибрати все"
+
+[layouts_docs_partials_feedback_improvement]
+# other = "suggest an improvement"
+other = "запропонувати покращення"
+
+[layouts_docs_partials_feedback_issue]
+# other = "Open an issue in the GitHub repo if you want to "
+other = "Створіть issue в GitHub репозиторії, якщо ви хочете "
+
+[layouts_docs_partials_feedback_or]
+# other = "or"
+other = "або"
+
+[layouts_docs_partials_feedback_problem]
+# other = "report a problem"
+other = "повідомити про проблему"
+
+[layouts_docs_partials_feedback_thanks]
+# other = "Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on"
+other = "Дякуємо за ваш відгук. Якщо ви маєте конкретне запитання щодо використання Kubernetes, ви можете поставити його"
+
+[layouts_docs_search_fetching]
+# other = "Fetching results..."
+other = "Отримання результатів..."
+
+[main_by]
+other = "by"
+
+[main_cncf_project]
+# other = """We are a CNCF graduated project"""
+other = """Ми є проектом CNCF """
+
+[main_community_explore]
+# other = "Explore the community"
+other = "Познайомитись із спільнотою"
+
+[main_contribute]
+# other = "Contribute"
+other = "Допомогти проекту"
+
+[main_copyright_notice]
+# other = """The Linux Foundation ®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page """
+other = """The Linux Foundation ®. Всі права застережено. The Linux Foundation є зареєстрованою торговою маркою. Перелік торгових марок The Linux Foundation ви знайдете на нашій сторінці Використання торгових марок """
+
+[main_documentation_license]
+# other = """The Kubernetes Authors | Documentation Distributed under CC BY 4.0 """
+other = """Автори Kubernetes | Документація розповсюджується під ліцензією CC BY 4.0 """
+
+[main_edit_this_page]
+# other = "Edit This Page"
+other = "Редагувати цю сторінку"
+
+[main_github_create_an_issue]
+# other = "Create an Issue"
+other = "Створити issue"
+
+[main_github_invite]
+# other = "Interested in hacking on the core Kubernetes code base?"
+other = "Хочете зламати основну кодову базу Kubernetes?"
+
+[main_github_view_on]
+# other = "View On GitHub"
+other = "Переглянути у GitHub"
+
+[main_kubernetes_features]
+# other = "Kubernetes Features"
+other = "Функціональні можливості Kubernetes"
+
+[main_kubeweekly_baseline]
+# other = "Interested in receiving the latest Kubernetes news? Sign up for KubeWeekly."
+other = "Хочете отримувати останні новини Kubernetes? Підпишіться на KubeWeekly."
+
+[main_kubernetes_past_link]
+# other = "View past newsletters"
+other = "Переглянути попередні інформаційні розсилки"
+
+[main_kubeweekly_signup]
+# other = "Subscribe"
+other = "Підписатися"
+
+[main_page_history]
+# other ="Page History"
+other ="Історія сторінки"
+
+[main_page_last_modified_on]
+# other = "Page last modified on"
+other = "Сторінка востаннє редагувалася"
+
+[main_read_about]
+# other = "Read about"
+other = "Прочитати про"
+
+[main_read_more]
+# other = "Read more"
+other = "Прочитати більше"
+
+[note]
+# other = "Note:"
+other = "Примітка:"
+
+[objectives_heading]
+# other = "Objectives"
+other = "Цілі"
+
+[prerequisites_heading]
+# other = "Before you begin"
+other = "Перш ніж ви розпочнете"
+
+[ui_search_placeholder]
+# other = "Search"
+other = "Пошук"
+
+[version_check_mustbe]
+# other = "Your Kubernetes server must be version "
+other = "Версія вашого Kubernetes сервера має бути "
+
+[version_check_mustbeorlater]
+# other = "Your Kubernetes server must be at or later than version "
+other = "Версія вашого Kubernetes сервера має дорівнювати або бути молодшою ніж "
+
+[version_check_tocheck]
+# other = "To check the version, enter "
+other = "Для перевірки версії введіть "
+
+[warning]
+# other = "Warning:"
+other = "Попередження:"
+
+[whatsnext_heading]
+# other = "What's next"
+other = "Що далі"
From a864c1b7e386c0e807ad92a49449d0d76f904dea Mon Sep 17 00:00:00 2001
From: Daniel Barclay
Date: Tue, 31 Mar 2020 16:17:59 -0400
Subject: [PATCH 031/105] Fix grammar problems and otherwise try to improve
wording.
---
.../configure-access-multiple-clusters.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
index 67077aa331..6e87400937 100644
--- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
+++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
@@ -149,12 +149,12 @@ users:
username: exp
```
-The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above is the placeholders
-for the real path of the certification files. You need change these to the real path
-of certification files in your environment.
+The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above are the placeholders
+for the pathnames of the certificate files. You need to change these to the actual pathnames
+of the certificate files in your environment.
-Some times you may want to use base64 encoded data here instead of the path of the
-certification files, then you need add the suffix `-data` to the keys. For example,
+Sometimes you may want to use Base64-encoded data embedded here instead of separate
+certificate files; in that case you need to add the suffix `-data` to the keys, for example,
`certificate-authority-data`, `client-certificate-data`, `client-key-data`.
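
As a rough illustration (the user name and file paths below are placeholders), `kubectl config`
can embed the Base64-encoded data for you when you pass `--embed-certs`:

```shell
# Store the credentials with the certificate contents embedded as *-data fields
kubectl config set-credentials developer \
  --client-certificate=/path/to/client.crt \
  --client-key=/path/to/client.key \
  --embed-certs=true
```
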
Each context is a triple (cluster, user, namespace). For example, the
From 4d097c56dd0242b570c465f926634d3b83f4632d Mon Sep 17 00:00:00 2001
From: davidair
Date: Tue, 31 Mar 2020 18:15:14 -0400
Subject: [PATCH 032/105] Update debug-application.md
Fixing URL typo
---
.../docs/tasks/debug-application-cluster/debug-application.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md
index f63173e334..08f0fad008 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-application.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md
@@ -65,7 +65,7 @@ Again, the information from `kubectl describe ...` should be informative. The m
#### My pod is crashing or otherwise unhealthy
Once your pod has been scheduled, the methods described in [Debug Running Pods](
-/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging.
+/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging.
#### My pod is running but not doing what I told it to do
From cda5fa93e3ef7f0208906ec272aadb2ecf9a0e53 Mon Sep 17 00:00:00 2001
From: Joao Luna
Date: Fri, 3 Apr 2020 16:15:42 +0100
Subject: [PATCH 033/105] Adding
content/pt/docs/concepts/extend-kubernetes/operator.md
---
.../docs/concepts/extend-kubernetes/_index.md | 4 +
.../concepts/extend-kubernetes/operator.md | 137 ++++++++++++++++++
2 files changed, 141 insertions(+)
create mode 100644 content/pt/docs/concepts/extend-kubernetes/_index.md
create mode 100644 content/pt/docs/concepts/extend-kubernetes/operator.md
diff --git a/content/pt/docs/concepts/extend-kubernetes/_index.md b/content/pt/docs/concepts/extend-kubernetes/_index.md
new file mode 100644
index 0000000000..db8257b625
--- /dev/null
+++ b/content/pt/docs/concepts/extend-kubernetes/_index.md
@@ -0,0 +1,4 @@
+---
+title: Extendendo o Kubernetes
+weight: 110
+---
diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md
new file mode 100644
index 0000000000..3036db17d0
--- /dev/null
+++ b/content/pt/docs/concepts/extend-kubernetes/operator.md
@@ -0,0 +1,137 @@
+---
+title: Padrão Operador
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+Operadores são extensões de software para o Kubernetes que
+fazem uso de [*recursos personalizados*](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+para gerir aplicações e os seus componentes. Operadores seguem os
+princípios do Kubernetes, notavelmente o [ciclo de controle](/docs/concepts/#kubernetes-control-plane).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Motivação
+
+O padrão Operador tem como objetivo capturar o principal objetivo de um operador
+humano que gere um serviço ou um conjunto de serviços. Operadores humanos
+responsáveis por aplicações e serviços específicos têm um conhecimento
+profundo da forma como o sistema é suposto se comportar, como é instalado
+e como deve reagir na ocorrência de problemas.
+
+As pessoas que correm cargas de trabalho no Kubernetes habitualmente gostam
+de usar automação para cuidar de tarefas repetitivas. O padrão Operador captura
+a forma como pode escrever código para automatizar uma tarefa para além do que
+o Kubernetes fornece.
+
+## Operadores no Kubernetes
+
+O Kubernetes é desenhado para automação. *Out of the box*, você tem bastante
+automação embutida no núcleo do Kubernetes. Pode usar
+o Kubernetes para automatizar instalações e executar cargas de trabalho,
+e pode ainda automatizar a forma como o Kubernetes faz isso.
+
+O conceito de {{< glossary_tooltip text="controlador" term_id="controller" >}} no
+Kubernetes permite a extensão do comportamento sem modificar o código do próprio
+Kubernetes.
+Operadores são clientes da API do Kubernetes que atuam como controladores para
+um dado [*Custom Resource*](/docs/concepts/api-extension/custom-resources/)
+
+## Exemplo de um Operador {#example}
+
+Algumas das coisas que um operador pode ser usado para automatizar incluem:
+
+* instalar uma aplicação a pedido
+* obter e restaurar backups do estado dessa aplicação
+* manipular atualizações do código da aplicação juntamente com alterações
+ como esquemas de base de dados ou definições de configuração extra
+* publicar um *Service* para aplicações que não suportam as APIs do Kubernetes
+ para as descobrir
+* simular uma falha em todo ou parte do cluster de forma a testar a resiliência
+* escolher um líder para uma aplicação distribuída sem um processo
+ de eleição de membro interno
+
+Como seria um Operador em mais detalhe? Aqui está um exemplo mais
+detalhado:
+
+1. Um recurso personalizado (*custom resource*) chamado SampleDB, que você pode
+ configurar para dentro do *cluster*.
+2. Um *Deployment* que garante que um *Pod* está a executar que contém a
+ parte controlador do operador.
+3. Uma imagem do *container* do código do operador.
+4. Código do controlador que consulta o plano de controle para descobrir quais
+ recursos *SampleDB* estão configurados.
+5. O núcleo do Operador é o código para informar ao servidor da API (*API server*) como fazer
+ a realidade coincidir com os recursos configurados.
+ * Se você adicionar um novo *SampleDB*, o operador configurará *PersistentVolumeClaims*
+ para fornecer armazenamento de base de dados durável, um *StatefulSet* para executar *SampleDB* e
+ um *Job* para lidar com a configuração inicial.
+ * Se você apagá-lo, o Operador tira um *snapshot* e então garante que
+ o *StatefulSet* e *Volumes* também são removidos.
+6. O operador também gere backups regulares da base de dados. Para cada recurso *SampleDB*,
+ o operador determina quando deve criar um *Pod* que possa se conectar
+ à base de dados e faça backups. Esses *Pods* dependeriam de um *ConfigMap*
+ e / ou um *Secret* que possui detalhes e credenciais de conexão com à base de dados.
+7. Como o Operador tem como objetivo fornecer automação robusta para o recurso
+ que gere, haveria código de suporte adicional. Para este exemplo,
+ O código verifica se a base de dados está a executar uma versão antiga e, se estiver,
+ cria objetos *Job* que o atualizam para si.
+
+## Instalar Operadores
+
+A forma mais comum de instalar um Operador é a de adicionar a
+definição personalizada de recurso (*Custom Resource Definition*) e
+o seu Controlador associado ao seu cluster.
+O Controlador vai normalmente executar fora do
+{{< glossary_tooltip text="plano de controle" term_id="control-plane" >}},
+como você faria com qualquer aplicação containerizada.
+Por exemplo, você pode executar o controlador no seu cluster como um *Deployment*.
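+
+Num exemplo meramente ilustrativo (os nomes dos ficheiros abaixo são hipotéticos), a instalação pode resumir-se a aplicar os manifestos do Operador:
+
+```shell
+# Exemplo ilustrativo: instalar a CustomResourceDefinition e o controlador do Operador
+kubectl apply -f sampledb-crd.yaml        # define o recurso personalizado SampleDB
+kubectl apply -f sampledb-operator.yaml   # cria o Deployment do controlador
+```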
+
+## Usando um Operador {#using-operators}
+
+Uma vez que você tenha um Operador instalado, usaria-o adicionando, modificando
+ou apagando a espécie de recurso que o Operador usa. Seguindo o exemplo acima,
+você configuraria um *Deployment* para o próprio Operador, e depois:
+
+```shell
+kubectl get SampleDB # encontra a base de dados configurada
+
+kubectl edit SampleDB/example-database # mudar manualmente algumas definições
+```
+
+…e é isso! O Operador vai tomar conta de aplicar
+as mudanças assim como manter o serviço existente em boa forma.
+
+## Escrevendo o seu próprio Operador {#writing-operator}
+
+Se não existir no ecosistema um Operador que implementa
+o comportamento que pretende, pode codificar o seu próprio.
+[No que vem a seguir](#what-s-next) voce vai encontrar
+alguns *links* para bibliotecas e ferramentas que opde usar
+para escrever o seu próprio Operador *cloud native*.
+
+Pode também implementar um Operador (isto é, um Controlador) usando qualquer linguagem / *runtime*
+que pode atuar como um [cliente da API do Kubernetes](/docs/reference/using-api/client-libraries/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Aprenda mais sobre [Recursos Personalizados](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+* Encontre operadores prontos em [OperatorHub.io](https://operatorhub.io/) para o seu caso de uso
+* Use ferramentas existentes para escrever os seus Operadores:
+ * usando [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
+ * usando [kubebuilder](https://book.kubebuilder.io/)
+ * usando [Metacontroller](https://metacontroller.app/) juntamente com WebHooks que
+ implementa você mesmo
+ * usando o [Operator Framework](https://github.com/operator-framework/getting-started)
+* [Publique](https://operatorhub.io/) o seu operador para que outras pessoas o possam usar
+* Leia o [artigo original da CoreOS](https://coreos.com/blog/introducing-operators.html) que introduz o padrão Operador
+* Leia um [artigo](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) da Google Cloud sobre as melhores práticas para construir Operadores
+
+{{% /capture %}}
From dc2f875952adcf90b7f8541519acf19259b17e43 Mon Sep 17 00:00:00 2001
From: tanjunchen
Date: Fri, 3 Apr 2020 23:39:56 +0800
Subject: [PATCH 034/105] add tanjunchen as reviewer of /zh
---
OWNERS_ALIASES | 1 +
1 file changed, 1 insertion(+)
diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 8a8da8978e..33922084fc 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -181,6 +181,7 @@ aliases:
- idealhack
- markthink
- SataQiu
+ - tanjunchen
- tengqm
- xiangpengzhao
- xichengliudui
From ebb020e65a43ea3142905f5597220f11485b4f0f Mon Sep 17 00:00:00 2001
From: Joao Luna
Date: Fri, 3 Apr 2020 16:42:36 +0100
Subject: [PATCH 035/105] Fix typo
---
content/pt/docs/concepts/extend-kubernetes/operator.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md
index 3036db17d0..7ec22a29ee 100644
--- a/content/pt/docs/concepts/extend-kubernetes/operator.md
+++ b/content/pt/docs/concepts/extend-kubernetes/operator.md
@@ -111,7 +111,7 @@ as mudanças assim como manter o serviço existente em boa forma.
Se não existir no ecossistema um Operador que implemente
o comportamento que pretende, pode codificar o seu próprio.
-[No que vem a seguir](#what-s-next) voce vai encontrar
+[No que vem a seguir](#what-s-next) você vai encontrar
alguns *links* para bibliotecas e ferramentas que pode usar
para escrever o seu próprio Operador *cloud native*.
From d36428fa40e7d5cbf35480ed612cd9ec04c87e37 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sun, 5 Apr 2020 02:57:55 +0300
Subject: [PATCH 036/105] Fix left menu button on mobile (home page)
---
content/zh/docs/home/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh/docs/home/_index.md b/content/zh/docs/home/_index.md
index d50c07e091..cf6cdc0032 100644
--- a/content/zh/docs/home/_index.md
+++ b/content/zh/docs/home/_index.md
@@ -3,7 +3,7 @@ title: Kubernetes 文档
noedit: true
cid: docsHome
layout: docsportal_home
-class: gridPage
+class: gridPage gridPageHome
linkTitle: "主页"
main_menu: true
weight: 10
From b3bb46aa932a8475725ff808a029ba21d64bfe7d Mon Sep 17 00:00:00 2001
From: Joao Luna
Date: Sun, 5 Apr 2020 11:58:24 +0100
Subject: [PATCH 037/105] Apply suggestions from code review
Co-Authored-By: Tim Bannister
---
content/pt/docs/concepts/extend-kubernetes/operator.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md
index 7ec22a29ee..c2be44bd77 100644
--- a/content/pt/docs/concepts/extend-kubernetes/operator.md
+++ b/content/pt/docs/concepts/extend-kubernetes/operator.md
@@ -92,7 +92,7 @@ O Controlador vai normalmente executar fora do
como você faria com qualquer aplicação containerizada.
Por exemplo, você pode executar o controlador no seu cluster como um *Deployment*.
-## Usando um Operador {#using-operators}
+## Usando um Operador
Uma vez que você tenha um Operador instalado, pode usá-lo adicionando, modificando
ou apagando o tipo de recurso que o Operador usa. Seguindo o exemplo acima,
@@ -111,7 +111,7 @@ as mudanças assim como manter o serviço existente em boa forma.
Se não existir no ecossistema um Operador que implemente
o comportamento que pretende, pode codificar o seu próprio.
-[No que vem a seguir](#what-s-next) você vai encontrar
+[Qual é o próximo](#qual-é-o-próximo) você vai encontrar
alguns *links* para bibliotecas e ferramentas que pode usar
para escrever o seu próprio Operador *cloud native*.
From 7b27b3a662cbe8b3f0e2d6838b3ebc00935e3118 Mon Sep 17 00:00:00 2001
From: Pierre-Yves Aillet
Date: Thu, 19 Mar 2020 21:51:22 +0100
Subject: [PATCH 038/105] doc: add precision on init container start order
Update content/en/docs/concepts/workloads/pods/init-containers.md
Co-Authored-By: Tim Bannister
---
.../docs/concepts/workloads/pods/init-containers.md | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md
index 14e7054a86..2ecbdd702a 100644
--- a/content/en/docs/concepts/workloads/pods/init-containers.md
+++ b/content/en/docs/concepts/workloads/pods/init-containers.md
@@ -71,8 +71,8 @@ have some advantages for start-up related code:
a mechanism to block or delay app container startup until a set of preconditions are met. Once
preconditions are met, all of the app containers in a Pod can start in parallel.
* Init containers can securely run utilities or custom code that would otherwise make an app
- container image less secure. By keeping unnecessary tools separate you can limit the attack
- surface of your app container image.
+ container image less secure. By keeping unnecessary tools separate you can limit the attack
+ surface of your app container image.
### Examples
@@ -245,8 +245,11 @@ init containers. [What's next](#what-s-next) contains a link to a more detailed
## Detailed behavior
-During the startup of a Pod, each init container starts in order, after the
-network and volumes are initialized. Each container must exit successfully before
+During Pod startup, the kubelet delays running init containers until the networking
+and storage are ready. Then the kubelet runs the Pod's init containers in the order
+they appear in the Pod's spec.
+
+Each init container must exit successfully before
the next container starts. If a container fails to start due to the runtime or
exits with failure, it is retried according to the Pod `restartPolicy`. However,
if the Pod `restartPolicy` is set to Always, the init containers use
From 25ffd7679e9126a5adf830f94207f1a7934d9640 Mon Sep 17 00:00:00 2001
From: "Huang, Zhaoquan"
Date: Sun, 5 Apr 2020 23:16:16 +0800
Subject: [PATCH 039/105] Update run-stateless-application-deployment.md
Updated the version number to match the file in the example: https://k8s.io/examples/application/deployment-update.yaml
---
.../run-application/run-stateless-application-deployment.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
index c9e0aebd51..bebcd3cc70 100644
--- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
+++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
@@ -97,7 +97,7 @@ a Deployment that runs the nginx:1.14.2 Docker image:
## Updating the deployment
You can update the deployment by applying a new YAML file. This YAML file
-specifies that the deployment should be updated to use nginx 1.8.
+specifies that the deployment should be updated to use nginx 1.16.1.
{{< codenew file="application/deployment-update.yaml" >}}
From 41ed409d9d51e89622694c08c07abfd41a470774 Mon Sep 17 00:00:00 2001
From: Aris Cahyadi Risdianto
Date: Mon, 23 Mar 2020 10:20:32 +0800
Subject: [PATCH 040/105] Translating controller and container overview in
Concept Documentation.
Fixed formatting, Typos, and added glossary.
Fixed glossary.
Fixed term to solve broken Hugo build.
Fixed Hugo Build problem due to Glossaty formattin.
Fixed Hugo Build problem due to Glossary formatting.
Fixed Hugo Build problem due to Glossary mismatch.
---
.../docs/concepts/architecture/controller.md | 178 ++++++++++++++++++
...-variables.md => container-environment.md} | 2 +-
.../id/docs/concepts/containers/overview.md | 49 +++++
.../id/docs/reference/glossary/controller.md | 30 +++
4 files changed, 258 insertions(+), 1 deletion(-)
create mode 100644 content/id/docs/concepts/architecture/controller.md
rename content/id/docs/concepts/containers/{container-environment-variables.md => container-environment.md} (98%)
create mode 100644 content/id/docs/concepts/containers/overview.md
create mode 100755 content/id/docs/reference/glossary/controller.md
diff --git a/content/id/docs/concepts/architecture/controller.md b/content/id/docs/concepts/architecture/controller.md
new file mode 100644
index 0000000000..4ce6974b34
--- /dev/null
+++ b/content/id/docs/concepts/architecture/controller.md
@@ -0,0 +1,178 @@
+---
+title: Controller
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+Dalam bidang robotika dan otomatisasi, _control loop_ atau kontrol tertutup adalah
+lingkaran tertutup yang mengatur keadaan suatu sistem.
+
+Berikut adalah salah satu contoh kontrol tertutup: termostat di sebuah ruangan.
+
+Ketika kamu mengatur suhunya, itu mengisyaratkan ke termostat
+tentang *keadaan yang kamu inginkan*. Sedangkan suhu kamar yang sebenarnya
+adalah *keadaan saat ini*. Termostat berfungsi untuk membawa keadaan saat ini
+mendekati ke keadaan yang diinginkan, dengan menghidupkan atau mematikan
+perangkat.
+
+Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi keadaan
+{{< glossary_tooltip term_id="cluster" text="klaster" >}} kamu, lalu membuat atau meminta
+perubahan jika diperlukan. Setiap _controller_ mencoba untuk memindahkan status
+klaster saat ini mendekati keadaan yang diinginkan.
+
+{{< glossary_definition term_id="controller" length="short">}}
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Pola _controller_
+
+Sebuah _controller_ melacak sekurang-kurangnya satu jenis sumber daya dari
+Kubernetes.
+[Objek-objek](/docs/concepts/overview/working-with-objects/kubernetes-objects/) ini
+memiliki *spec field* yang merepresentasikan keadaan yang diinginkan. Satu atau
+lebih _controller_ untuk *resource* tersebut bertanggung jawab untuk membuat
+keadaan sekarang mendekati keadaan yang diinginkan.
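+
+Sebagai gambaran (hanya sketsa; nama `nginx-deployment` dipakai sebagai contoh),
+*spec* berikut menyatakan keadaan yang diinginkan berupa tiga replika, dan
+_controller_ Deployment berusaha menjaga agar jumlah Pod yang berjalan selalu
+sesuai dengan nilai tersebut:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  replicas: 3            # keadaan yang diinginkan (desired state)
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+```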
+
+_Controller_ mungkin saja melakukan tindakan itu sendiri; namun secara umum, di
+Kubernetes, _controller_ akan mengirim pesan ke
+{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} yang
+mempunyai efek samping yang bermanfaat. Kamu bisa melihat contoh-contoh
+di bawah ini.
+
+{{< comment >}}
+Beberapa _controller_ bawaan, seperti _controller namespace_, bekerja pada objek
+yang tidak memiliki *spec*. Agar lebih sederhana, halaman ini tidak
+menjelaskannya secara detail.
+{{< /comment >}}
+
+### Kontrol melalui server API
+
+_Controller_ {{< glossary_tooltip term_id="job" >}} adalah contoh dari _controller_
+bawaan dari Kubernetes. _Controller_ bawaan tersebut mengelola status melalui
+interaksi dengan server API dari suatu klaster.
+
+Job adalah sumber daya dalam Kubernetes yang menjalankan sebuah
+{{< glossary_tooltip term_id="pod" >}}, atau mungkin beberapa Pod sekaligus,
+untuk melakukan sebuah pekerjaan dan kemudian berhenti.
+
+(Setelah [dijadwalkan](../../../../en/docs/concepts/scheduling/), objek Pod
+akan menjadi bagian dari keadaan yang diinginkan oleh kubelet).
+
+Ketika _controller job_ melihat tugas baru, maka _controller_ itu memastikan bahwa,
+di suatu tempat pada klaster kamu, kubelet dalam sekumpulan Node menjalankan
+Pod-Pod dengan jumlah yang benar untuk menyelesaikan pekerjaan. _Controller job_
+tidak menjalankan sejumlah Pod atau kontainer apa pun untuk dirinya sendiri.
+Namun, _controller job_ mengisyaratkan kepada server API untuk membuat atau
+menghapus Pod. Komponen-komponen lain dalam
+{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+bekerja berdasarkan informasi baru (adakah Pod-Pod baru untuk menjadwalkan dan
+menjalankan pekerjaan), dan pada akhirnya pekerjaan itu selesai.
+
+Setelah kamu membuat Job baru, status yang diharapkan adalah bagaimana
+pekerjaan itu bisa selesai. _Controller job_ membuat status pekerjaan saat ini
+agar mendekati dengan keadaan yang kamu inginkan: membuat Pod yang melakukan
+pekerjaan yang kamu inginkan untuk Job tersebut, sehingga Job hampir
+terselesaikan.
+
+_Controller_ juga memperbarui objek yang mengkonfigurasinya. Misalnya: setelah
+pekerjaan dilakukan untuk Job tersebut, _controller job_ memperbarui objek Job
+dengan menandainya `Finished`.
+
+(Ini hampir sama dengan bagaimana beberapa termostat mematikan lampu untuk
+mengindikasikan bahwa kamar kamu sekarang sudah berada pada suhu yang kamu
+inginkan).
+
+### Kontrol Langsung
+
+Berbeda dengan sebuah Job, beberapa dari _controller_ perlu melakukan perubahan
+sesuatu di luar dari klaster kamu.
+
+Sebagai contoh, jika kamu menggunakan kontrol tertutup untuk memastikan apakah
+cukup {{< glossary_tooltip text="Node" term_id="node" >}}
+dalam klaster kamu, maka _controller_ memerlukan sesuatu di luar klaster saat ini
+untuk mengatur Node-Node baru apabila dibutuhkan.
+
+_Controller_ yang berinteraksi dengan keadaan eksternal dapat menemukan keadaan
+yang diinginkannya melalui server API, dan kemudian berkomunikasi langsung
+dengan sistem eksternal untuk membawa keadaan saat ini mendekati keadaan yang
+diinginkan.
+
+(Sebenarnya ada sebuah _controller_ yang melakukan penskalaan node secara
+horizontal dalam klaster kamu. Silakan lihat
+[_autoscaling_ klaster](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling)).
+
+## Status sekarang berbanding status yang diinginkan {#sekarang-banding-diinginkan}
+
+Kubernetes mengambil pandangan sistem secara _cloud-native_, dan mampu menangani
+perubahan yang konstan.
+
+Klaster kamu dapat mengalami perubahan kapan saja pada saat pekerjaan sedang
+berlangsung dan kontrol tertutup secara otomatis memperbaiki setiap kegagalan.
+Hal ini berarti bahwa, secara potensi, klaster kamu tidak akan pernah mencapai
+kondisi stabil.
+
+Selama _controller_ dari klaster kamu berjalan dan mampu membuat perubahan yang
+bermanfaat, tidak masalah apabila keadaan keseluruhan stabil atau tidak.
+
+## Perancangan
+
+Sebagai prinsip dasar perancangan, Kubernetes menggunakan banyak _controller_ yang
+masing-masing mengelola aspek tertentu dari keadaan klaster. Yang paling umum,
+kontrol tertutup tertentu menggunakan salah satu jenis sumber daya
+sebagai suatu keadaan yang diinginkan, dan memiliki jenis sumber daya yang
+berbeda untuk dikelola dalam rangka membuat keadaan yang diinginkan terjadi.
+
+Sangat berguna untuk memiliki beberapa _controller_ sederhana daripada satu
+kumpulan kontrol tertutup yang monolitik dan saling berkaitan satu sama lain.
+_Controller_ bisa saja gagal, sehingga Kubernetes dirancang untuk
+memungkinkan hal tersebut.
+
+Misalnya: _controller_ pekerjaan melacak objek pekerjaan (untuk menemukan
+adanya pekerjaan baru) dan objek Pod (untuk menjalankan pekerjaan tersebut dan
+kemudian melihat lagi ketika pekerjaan itu sudah selesai). Dalam hal ini yang
+lain membuat pekerjaan, sedangkan _controller_ pekerjaan membuat Pod-Pod.
+
+{{< note >}}
+Ada kemungkinan beberapa _controller_ membuat atau memperbarui jenis objek yang
+sama. Namun di belakang layar, _controller_ Kubernetes memastikan bahwa mereka
+hanya memperhatikan sumber daya yang terkait dengan sumber daya yang mereka
+kendalikan.
+
+Misalnya, kamu dapat memiliki Deployment dan Job; dimana keduanya akan membuat
+Pod. _Controller Job_ tidak akan menghapus Pod yang dibuat oleh Deployment kamu,
+karena ada informasi ({{< glossary_tooltip term_id="label" text="labels" >}})
+yang dapat digunakan oleh _controller_ untuk membedakan Pod-Pod tersebut.
+{{< /note >}}
+
+## Berbagai cara menjalankan beberapa _controller_ {#menjalankan-_controller_}
+
+Kubernetes hadir dengan seperangkat _controller_ bawaan yang berjalan di dalam
+{{< glossary_tooltip term_id="kube-controller-manager" >}}. Beberapa _controller_
+bawaan memberikan perilaku inti yang sangat penting.
+
+_Controller Deployment_ dan _controller Job_ adalah contoh dari _controller_ yang
+hadir sebagai bagian dari Kubernetes itu sendiri (_controller_ "bawaan").
+Kubernetes memungkinkan kamu menjalankan _control plane_ yang tangguh, sehingga
+jika ada _controller_ bawaan yang gagal, maka bagian lain dari _control plane_ akan
+mengambil alih pekerjaan.
+
+Kamu juga dapat menemukan pengontrol yang berjalan di luar _control plane_, untuk
+mengembangkan lebih jauh Kubernetes. Atau, jika mau, kamu bisa membuat
+_controller_ baru sendiri. Kamu dapat menjalankan _controller_ kamu sendiri sebagai
+satu kumpulan dari beberapa Pod, atau bisa juga sebagai bagian eksternal dari
+Kubernetes. Manakah yang paling sesuai akan tergantung pada apa yang _controller_
+khusus itu lakukan.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Silakan baca tentang [_control plane_ Kubernetes](/docs/concepts/#kubernetes-control-plane)
+* Temukan beberapa dasar tentang [objek-objek Kubernetes](/docs/concepts/#kubernetes-objects)
+* Pelajari lebih lanjut tentang [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
+* Apabila kamu ingin membuat _controller_ sendiri, silakan lihat [pola perluasan](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) dalam memperluas Kubernetes.
+{{% /capture %}}
diff --git a/content/id/docs/concepts/containers/container-environment-variables.md b/content/id/docs/concepts/containers/container-environment.md
similarity index 98%
rename from content/id/docs/concepts/containers/container-environment-variables.md
rename to content/id/docs/concepts/containers/container-environment.md
index 2a44dcbdcd..55c1bea6cb 100644
--- a/content/id/docs/concepts/containers/container-environment-variables.md
+++ b/content/id/docs/concepts/containers/container-environment.md
@@ -1,5 +1,5 @@
---
-title: Variabel Environment Kontainer
+title: Kontainer Environment
content_template: templates/concept
weight: 20
---
diff --git a/content/id/docs/concepts/containers/overview.md b/content/id/docs/concepts/containers/overview.md
new file mode 100644
index 0000000000..7ec5ef55d5
--- /dev/null
+++ b/content/id/docs/concepts/containers/overview.md
@@ -0,0 +1,49 @@
+---
+title: Ikhtisar Kontainer
+content_template: templates/concept
+weight: 1
+---
+
+{{% capture overview %}}
+
+Kontainer adalah teknologi untuk mengemas kode (yang telah dikompilasi) menjadi
+suatu aplikasi beserta dengan dependensi-dependensi yang dibutuhkannya pada saat
+dijalankan. Setiap kontainer yang Anda jalankan dapat diulang; standardisasi
+dengan menyertakan dependensinya berarti Anda akan mendapatkan perilaku yang
+sama di mana pun Anda menjalankannya.
+
+Kontainer memisahkan aplikasi dari infrastruktur host yang ada di bawahnya. Hal
+ini membuat penyebaran lebih mudah di lingkungan cloud atau OS yang berbeda.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Image-Image Kontainer
+
+[Image kontainer](/docs/concepts/containers/images/) merupakan paket perangkat lunak
+yang siap dijalankan, mengandung semua yang diperlukan untuk menjalankan
+sebuah aplikasi: kode dan setiap *runtime* yang dibutuhkan, *library* dari
+aplikasi dan sistem, dan nilai *default* untuk pengaturan yang penting.
+
+Secara desain, kontainer tidak bisa berubah: Anda tidak dapat mengubah kode
+dalam kontainer yang sedang berjalan. Jika Anda memiliki aplikasi yang
+terkontainerisasi dan ingin melakukan perubahan, maka Anda perlu membuat
+_image_ baru yang menyertakan perubahan tersebut, kemudian membuat ulang
+kontainer dari _image_ yang sudah diperbarui.
+
+## Kontainer _runtime_
+
+Kontainer *runtime* adalah perangkat lunak yang bertanggung jawab untuk
+menjalankan kontainer. Kubernetes mendukung beberapa kontainer *runtime*:
+{{< glossary_tooltip term_id="docker" >}},
+{{< glossary_tooltip term_id="containerd" >}},
+{{< glossary_tooltip term_id="cri-o" >}}, dan semua implementasi dari
+[Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).
+
+## Selanjutnya
+
+- Baca tentang [image-image kontainer](https://kubernetes.io/docs/concepts/containers/images/)
+- Baca tentang [Pod](https://kubernetes.io/docs/concepts/workloads/pods/)
+
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/id/docs/reference/glossary/controller.md b/content/id/docs/reference/glossary/controller.md
new file mode 100755
index 0000000000..c88dfccb14
--- /dev/null
+++ b/content/id/docs/reference/glossary/controller.md
@@ -0,0 +1,30 @@
+---
+title: Controller
+id: controller
+date: 2018-04-12
+full_link: /docs/concepts/architecture/controller/
+short_description: >
+ Kontrol tertutup yang mengawasi kondisi bersama dari klaster melalui apiserver dan membuat perubahan yang mencoba untuk membawa kondisi saat ini ke kondisi yang diinginkan.
+
+aka:
+tags:
+- architecture
+- fundamental
+---
+Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi kondisi
+{{< glossary_tooltip term_id="cluster" text="klaster">}} kamu, lalu membuat atau
+meminta perubahan jika diperlukan.
+Setiap _controller_ mencoba untuk memindahkan status klaster saat ini lebih
+dekat ke kondisi yang diinginkan.
+
+
+
+_Controller_ mengawasi keadaan bersama dari klaster kamu melalui
+{{< glossary_tooltip text="apiserver" term_id="kube-apiserver" >}} (bagian dari
+{{< glossary_tooltip term_id="control-plane" >}}).
+
+Beberapa _controller_ juga berjalan di dalam _control plane_, menyediakan
+kontrol tertutup yang merupakan inti dari operasi Kubernetes. Sebagai contoh:
+_controller Deployment_, _controller daemonset_, _controller namespace_, dan
+_controller volume persisten_ (dan lainnya) semua berjalan di dalam
+{{< glossary_tooltip term_id="kube-controller-manager" >}}.
From b6449353e68055db913266736e64dc0154364a4f Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sun, 5 Apr 2020 20:58:16 +0300
Subject: [PATCH 041/105] fix docs home page on mobile renders poorly
---
static/css/gridpage.css | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/static/css/gridpage.css b/static/css/gridpage.css
index d762aeb19c..297c7d3637 100644
--- a/static/css/gridpage.css
+++ b/static/css/gridpage.css
@@ -23,7 +23,6 @@
min-height: 152px;
}
-
.gridPage p {
color: rgb(26,26,26);
margin-left: 0 !important;
@@ -289,6 +288,15 @@ section.bullets .content {
}
}
+@media screen and (max-width: 768px){
+ .launch-card {
+ width: 100%;
+ margin-bottom: 30px;
+ padding: 0;
+ min-height: auto;
+ }
+}
+
@media screen and (max-width: 640px){
.case-study {
width: 100%;
From b8df3044842e02cd9496c87af796fba1d7bae10f Mon Sep 17 00:00:00 2001
From: Chao Xu
Date: Thu, 20 Feb 2020 11:35:35 -0800
Subject: [PATCH 042/105] Introducing concepts about Konnectivity Service.
---
.../architecture/master-node-communication.md | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/concepts/architecture/master-node-communication.md b/content/en/docs/concepts/architecture/master-node-communication.md
index 8a4493e49b..ff536d160b 100644
--- a/content/en/docs/concepts/architecture/master-node-communication.md
+++ b/content/en/docs/concepts/architecture/master-node-communication.md
@@ -97,13 +97,28 @@ public networks.
### SSH Tunnels
-Kubernetes supports SSH tunnels to protect the Master -> Cluster communication
+Kubernetes supports SSH tunnels to protect the Master → Cluster communication
paths. In this configuration, the apiserver initiates an SSH tunnel to each node
in the cluster (connecting to the ssh server listening on port 22) and passes
all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in
which the nodes are running.
-SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. A replacement for this communication channel is being designed.
+SSH tunnels are currently deprecated so you shouldn't opt to use them unless you
+know what you are doing. The Konnectivity service is a replacement for this
+communication channel.
+
+### Konnectivity service
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+As a replacement to the SSH tunnels, the Konnectivity service provides a TCP-level
+proxy for the Master → Cluster communication. The Konnectivity service consists of
+two parts, the Konnectivity server and the Konnectivity agents, running in the
+Master network and the Cluster network respectively. The Konnectivity agents
+initiate connections to the Konnectivity server and maintain the connections.
+All Master → Cluster traffic then goes through these connections.
+
+See [Konnectivity Service Setup](/docs/tasks/setup-konnectivity/) for instructions
+on how to set it up in your cluster.
{{% /capture %}}
From ac1d86457505d9b2be53e121c13559ad4d9612ca Mon Sep 17 00:00:00 2001
From: Chao Xu
Date: Fri, 20 Mar 2020 15:31:19 -0700
Subject: [PATCH 043/105] Instructions on how to set up the Konnectivity
service.
---
.../docs/tasks/setup-konnectivity/_index.md | 5 ++
.../setup-konnectivity/setup-konnectivity.md | 37 ++++++++++
.../egress-selector-configuration.yaml | 21 ++++++
.../konnectivity/konnectivity-agent.yaml | 53 ++++++++++++++
.../admin/konnectivity/konnectivity-rbac.yaml | 24 +++++++
.../konnectivity/konnectivity-server.yaml | 70 +++++++++++++++++++
6 files changed, 210 insertions(+)
create mode 100755 content/en/docs/tasks/setup-konnectivity/_index.md
create mode 100644 content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md
create mode 100644 content/en/examples/admin/konnectivity/egress-selector-configuration.yaml
create mode 100644 content/en/examples/admin/konnectivity/konnectivity-agent.yaml
create mode 100644 content/en/examples/admin/konnectivity/konnectivity-rbac.yaml
create mode 100644 content/en/examples/admin/konnectivity/konnectivity-server.yaml
diff --git a/content/en/docs/tasks/setup-konnectivity/_index.md b/content/en/docs/tasks/setup-konnectivity/_index.md
new file mode 100755
index 0000000000..09f254eba0
--- /dev/null
+++ b/content/en/docs/tasks/setup-konnectivity/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Setup Konnectivity Service"
+weight: 20
+---
+
diff --git a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md
new file mode 100644
index 0000000000..0fdbd0127d
--- /dev/null
+++ b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md
@@ -0,0 +1,37 @@
+---
+title: Setup Konnectivity Service
+content_template: templates/task
+weight: 110
+---
+
+The Konnectivity service provides a TCP-level proxy for the Master → Cluster
+communication.
+
+You can set it up with the following steps.
+
+First, you need to configure the API Server to use the Konnectivity service
+to direct its network traffic to cluster nodes:
+1. Set the `--egress-selector-config-file` flag of the API Server to the
+path of the API Server egress configuration file.
+2. At the path, create a configuration file. For example,
+
+{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}}
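+
+As a rough sketch only (the image tag and the file paths below are assumptions and
+depend on how your control plane is deployed), wiring the flag and the configuration
+file into a kube-apiserver static Pod manifest might look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: kube-apiserver
+  namespace: kube-system
+spec:
+  containers:
+  - name: kube-apiserver
+    image: k8s.gcr.io/kube-apiserver:v1.18.0   # illustrative tag
+    command:
+    - kube-apiserver
+    # ... other kube-apiserver flags ...
+    - --egress-selector-config-file=/etc/kubernetes/egress-selector-configuration.yaml
+    volumeMounts:
+    - name: egress-selector-config
+      mountPath: /etc/kubernetes/egress-selector-configuration.yaml
+      readOnly: true
+  volumes:
+  - name: egress-selector-config
+    hostPath:
+      path: /etc/kubernetes/egress-selector-configuration.yaml
+      type: File
+```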
+
+Next, you need to deploy the Konnectivity service server and agents.
+[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy)
+is a reference implementation.
+
+Deploy the Konnectivity server on your master node. The provided yaml assumes
+that Kubernetes components are deployed as a {{< glossary_tooltip text="static pod"
+term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity
+server as a DaemonSet for reliability.
+
+{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}}
+
+Then deploy the Konnectivity agents in your cluster:
+
+{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}}
+
+Last, if RBAC is enabled in your cluster, create the relevant RBAC rules:
+
+{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}}
diff --git a/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml
new file mode 100644
index 0000000000..6659ff3fbb
--- /dev/null
+++ b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml
@@ -0,0 +1,21 @@
+apiVersion: apiserver.k8s.io/v1beta1
+kind: EgressSelectorConfiguration
+egressSelections:
+# Since we want to control the egress traffic to the cluster, we use the
+# "cluster" as the name. Other supported values are "etcd", and "master".
+- name: cluster
+ connection:
+ # This controls the protocol between the API Server and the Konnectivity
+ # server. Supported values are "GRPC" and "HTTPConnect". There is no
+ # end user visible difference between the two modes. You need to set the
+ # Konnectivity server to work in the same mode.
+ proxyProtocol: GRPC
+ transport:
+ # This controls what transport the API Server uses to communicate with the
+ # Konnectivity server. UDS is recommended if the Konnectivity server
+ # locates on the same machine as the API Server. You need to configure the
+ # Konnectivity server to listen on the same UDS socket.
+ # The other supported transport is "tcp". You will need to set up TLS
+ # config to secure the TCP transport.
+ uds:
+ udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket
diff --git a/content/en/examples/admin/konnectivity/konnectivity-agent.yaml b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml
new file mode 100644
index 0000000000..c3dc71040b
--- /dev/null
+++ b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml
@@ -0,0 +1,53 @@
+apiVersion: apps/v1
+# Alternatively, you can deploy the agents as Deployments. It is not necessary
+# to have an agent on each node.
+kind: DaemonSet
+metadata:
+ labels:
+ addonmanager.kubernetes.io/mode: Reconcile
+ k8s-app: konnectivity-agent
+ namespace: kube-system
+ name: konnectivity-agent
+spec:
+ selector:
+ matchLabels:
+ k8s-app: konnectivity-agent
+ template:
+ metadata:
+ labels:
+ k8s-app: konnectivity-agent
+ spec:
+ priorityClassName: system-cluster-critical
+ tolerations:
+ - key: "CriticalAddonsOnly"
+ operator: "Exists"
+ containers:
+ - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8
+ name: konnectivity-agent
+ command: ["/proxy-agent"]
+ args: [
+ "--logtostderr=true",
+ "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
+ # Since the konnectivity server runs with hostNetwork=true,
+ # this is the IP address of the master machine.
+ "--proxy-server-host=35.225.206.7",
+ "--proxy-server-port=8132",
+ "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token"
+ ]
+ volumeMounts:
+ - mountPath: /var/run/secrets/tokens
+ name: konnectivity-agent-token
+ livenessProbe:
+ httpGet:
+ port: 8093
+ path: /healthz
+ initialDelaySeconds: 15
+ timeoutSeconds: 15
+ serviceAccountName: konnectivity-agent
+ volumes:
+ - name: konnectivity-agent-token
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: konnectivity-agent-token
+ audience: system:konnectivity-server
diff --git a/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml
new file mode 100644
index 0000000000..7687f49b77
--- /dev/null
+++ b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml
@@ -0,0 +1,24 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: system:konnectivity-server
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: system:auth-delegator
+subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: User
+ name: system:konnectivity-server
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: konnectivity-agent
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
diff --git a/content/en/examples/admin/konnectivity/konnectivity-server.yaml b/content/en/examples/admin/konnectivity/konnectivity-server.yaml
new file mode 100644
index 0000000000..730c26c66a
--- /dev/null
+++ b/content/en/examples/admin/konnectivity/konnectivity-server.yaml
@@ -0,0 +1,70 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: konnectivity-server
+ namespace: kube-system
+spec:
+ priorityClassName: system-cluster-critical
+ hostNetwork: true
+ containers:
+ - name: konnectivity-server-container
+ image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
+ command: ["/proxy-server"]
+ args: [
+ "--log-file=/var/log/konnectivity-server.log",
+ "--logtostderr=false",
+ "--log-file-max-size=0",
+ # This needs to be consistent with the value set in egressSelectorConfiguration.
+ "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket",
+ # The following two lines assume the Konnectivity server is
+ # deployed on the same machine as the apiserver, and the certs and
+ # key of the API Server are at the specified location.
+ "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt",
+ "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key",
+ # This needs to be consistent with the value set in egressSelectorConfiguration.
+ "--mode=grpc",
+ "--server-port=0",
+ "--agent-port=8132",
+ "--admin-port=8133",
+ "--agent-namespace=kube-system",
+ "--agent-service-account=konnectivity-agent",
+ "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig",
+ "--authentication-audience=system:konnectivity-server"
+ ]
+ livenessProbe:
+ httpGet:
+ scheme: HTTP
+ host: 127.0.0.1
+ port: 8133
+ path: /healthz
+ initialDelaySeconds: 30
+ timeoutSeconds: 60
+ ports:
+ - name: agentport
+ containerPort: 8132
+ hostPort: 8132
+ - name: adminport
+ containerPort: 8133
+ hostPort: 8133
+ volumeMounts:
+ - name: varlogkonnectivityserver
+ mountPath: /var/log/konnectivity-server.log
+ readOnly: false
+ - name: pki
+ mountPath: /etc/srv/kubernetes/pki
+ readOnly: true
+ - name: konnectivity-uds
+ mountPath: /etc/srv/kubernetes/konnectivity-server
+ readOnly: false
+ volumes:
+ - name: varlogkonnectivityserver
+ hostPath:
+ path: /var/log/konnectivity-server.log
+ type: FileOrCreate
+ - name: pki
+ hostPath:
+ path: /etc/srv/kubernetes/pki
+ - name: konnectivity-uds
+ hostPath:
+ path: /etc/srv/kubernetes/konnectivity-server
+ type: DirectoryOrCreate
From 8ba1410113f5593c5686c5b80baa9e9093755e89 Mon Sep 17 00:00:00 2001
From: "Mr.Hien"
Date: Mon, 6 Apr 2020 08:22:56 +0700
Subject: [PATCH 044/105] Update install-kubectl.md
For install of kubectl on debian based distros, also install gnupg2 for apt-key add to work
---
content/en/docs/tasks/tools/install-kubectl.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 4a0be4509a..cdbe96f148 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -59,7 +59,7 @@ You must use a kubectl version that is within one minor version difference of yo
{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
-sudo apt-get update && sudo apt-get install -y apt-transport-https
+sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
From 5ed756714eec05c3a545435a36859aae181ce5c8 Mon Sep 17 00:00:00 2001
From: Jhon Mike
Date: Sun, 5 Apr 2020 22:44:19 -0300
Subject: [PATCH 045/105] translating cron-jobs doc
---
.../concepts/workloads/controllers/_index.md | 4 ++
.../workloads/controllers/cron-jobs.md | 55 +++++++++++++++++++
2 files changed, 59 insertions(+)
create mode 100755 content/pt/docs/concepts/workloads/controllers/_index.md
create mode 100644 content/pt/docs/concepts/workloads/controllers/cron-jobs.md
diff --git a/content/pt/docs/concepts/workloads/controllers/_index.md b/content/pt/docs/concepts/workloads/controllers/_index.md
new file mode 100755
index 0000000000..376ef67943
--- /dev/null
+++ b/content/pt/docs/concepts/workloads/controllers/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Controladores"
+weight: 20
+---
diff --git a/content/pt/docs/concepts/workloads/controllers/cron-jobs.md b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md
new file mode 100644
index 0000000000..5979d82f5c
--- /dev/null
+++ b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md
@@ -0,0 +1,55 @@
+---
+reviewers:
+ - erictune
+ - soltysh
+ - janetkuo
+title: CronJob
+content_template: templates/concept
+weight: 80
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.8" state="beta" >}}
+
+Um _Cron Job_ cria [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) em um cronograma baseado em tempo.
+
+Um objeto CronJob é como um arquivo _crontab_ (tabela cron). Executa um job periodicamente em um determinado horário, escrito no formato [Cron](https://en.wikipedia.org/wiki/Cron).
+
+{{< note >}}
+Todos os horários `schedule:` de um **CronJob** são indicados em UTC.
+{{< /note >}}
+
+Ao criar o manifesto para um recurso CronJob, verifique se o nome que você fornece é um [nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
+O nome não deve ter mais que 52 caracteres. Isso ocorre porque o controlador do CronJob anexará automaticamente 11 caracteres ao nome da tarefa fornecido e há uma restrição de que o comprimento máximo de um nome da tarefa não pode ultrapassar 63 caracteres.
+
+Para obter instruções sobre como criar e trabalhar com tarefas cron, e para obter um exemplo de arquivo de especificação para uma tarefa cron, consulte [Executando tarefas automatizadas com tarefas cron](/docs/tasks/job/automated-tasks-with-cron-jobs).
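+
+A título ilustrativo (apenas um esboço; o nome `hello` e a imagem `busybox` são
+meros exemplos), um manifesto de CronJob que executa um trabalho a cada minuto
+poderia parecer-se com isto:
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"      # formato Cron: a cada minuto
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: hello
+            image: busybox
+            args: ["/bin/sh", "-c", "date; echo Olá do CronJob"]
+          restartPolicy: OnFailure
+```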
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Limitações do Cron Job
+
+Um CronJob cria um objeto de trabalho (Job) _aproximadamente_ uma vez a cada execução programada do seu agendamento. Dizemos "aproximadamente" porque há certas circunstâncias em que dois trabalhos podem ser criados, ou nenhum trabalho pode ser criado. Tentamos tornar esses casos raros, mas não os impedimos completamente. Portanto, os trabalhos devem ser _idempotentes_.
+
+Se `startingDeadlineSeconds` estiver definido como um valor grande ou não definido (o padrão) e se `concurrencyPolicy` estiver definido como `Allow` (Permitir), os trabalhos sempre serão executados pelo menos uma vez.
+
+Para cada CronJob, o {{< glossary_tooltip term_id="controller" >}} do CronJob verifica quantos agendamentos foram perdidos desde o último horário agendado até agora. Se houver mais de 100 agendamentos perdidos, ele não iniciará o trabalho e registrará o erro.
+
+```
+Não é possível determinar se o trabalho precisa ser iniciado. Muitos horários de início perdidos (> 100). Defina ou diminua .spec.startingDeadlineSeconds ou verifique o desvio do relógio.
+```
+
+É importante observar que, se o campo `startingDeadlineSeconds` estiver definido (não `nil`), o controlador contará quantas tarefas perdidas ocorreram a partir do valor de `startingDeadlineSeconds` até agora, e não do último horário programado até agora. Por exemplo, se `startingDeadlineSeconds` for `200`, o controlador contará quantas tarefas perdidas ocorreram nos últimos 200 segundos.
+
+Um CronJob é contado como perdido se não tiver sido criado no horário agendado. Por exemplo, se `concurrencyPolicy` estiver definido como `Forbid` e se tentou agendar um CronJob quando havia um agendamento anterior ainda em execução, ele será contabilizado como perdido.
+
+Por exemplo, suponha que um CronJob esteja definido para agendar um novo trabalho a cada minuto, começando em `08:30:00`, e seu campo `startingDeadlineSeconds` não esteja definido. Se o controlador do CronJob estiver indisponível de `08:29:00` a `10:21:00`, o trabalho não será iniciado, pois o número de agendamentos perdidos é maior que 100.
+
+Para ilustrar ainda mais esse conceito, suponha que um CronJob esteja definido para agendar um novo trabalho a cada minuto, começando em `08:30:00`, e seu `startingDeadlineSeconds` esteja definido em 200 segundos. Se o controlador do CronJob estiver inativo no mesmo período do exemplo anterior (`08:29:00` a `10:21:00`), o trabalho ainda será iniciado às 10:22:00. Isso acontece porque o controlador agora verifica quantos agendamentos perdidos ocorreram nos últimos 200 segundos (ou seja, 3 agendamentos perdidos), em vez de desde o último horário agendado até agora.
+
+O CronJob é responsável apenas pela criação de trabalhos que correspondem à sua programação, e o trabalho, por sua vez, é responsável pelo gerenciamento dos Pods que ele representa.
+
+{{% /capture %}}
From 202488677759e81fd73df7bfbda1a8253b0dcefe Mon Sep 17 00:00:00 2001
From: Taylor Dolezal
Date: Wed, 1 Apr 2020 16:52:56 -0700
Subject: [PATCH 046/105] Update PR template to specify desired git commit
messages
Co-Authored-By: Tim Bannister
---
.github/PULL_REQUEST_TEMPLATE.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index c7040f7fa8..25b0ff4753 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -6,6 +6,8 @@
your pull request. The description should explain what will change,
and why.
+ PLEASE title the FIRST commit appropriately, so that if you squash all
+ your commits into one, the combined commit message makes sense.
For overall help on editing and submitting pull requests, visit:
https://kubernetes.io/docs/contribute/start/#improve-existing-content
From 48d007a0fcd5f6e182629d6015bffa8a33957e4b Mon Sep 17 00:00:00 2001
From: Joao Luna
Date: Mon, 6 Apr 2020 09:25:29 +0100
Subject: [PATCH 047/105] Apply suggestions from code review
Co-Authored-By: Tim Bannister
---
content/pt/docs/concepts/extend-kubernetes/operator.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md
index c2be44bd77..e735893f38 100644
--- a/content/pt/docs/concepts/extend-kubernetes/operator.md
+++ b/content/pt/docs/concepts/extend-kubernetes/operator.md
@@ -42,7 +42,7 @@ Kubernetes.
Operadores são clientes da API do Kubernetes que atuam como controladores para
um dado [*Custom Resource*](/docs/concepts/api-extension/custom-resources/)
-## Exemplo de um Operador {#example}
+## Exemplo de um Operador {#exemplo}
Algumas das coisas que um operador pode ser usado para automatizar incluem:
@@ -107,7 +107,7 @@ kubectl edit SampleDB/example-database # mudar manualmente algumas definições
…e é isso! O Operador vai tomar conta de aplicar
as mudanças assim como manter o serviço existente em boa forma.
-## Escrevendo o seu próprio Operador {#writing-operator}
+## Escrevendo o seu próprio Operador {#escrevendo-operador}
Se não existir no ecosistema um Operador que implementa
o comportamento que pretende, pode codificar o seu próprio.
From b4c76fd68f9fcade50d2320fd4e278ffa714a415 Mon Sep 17 00:00:00 2001
From: Alexey Pyltsyn
Date: Mon, 6 Apr 2020 12:27:25 +0300
Subject: [PATCH 048/105] Translate The Kubernetes API page into Russian
---
.../docs/concepts/overview/kubernetes-api.md | 117 ++++++++++++++++++
1 file changed, 117 insertions(+)
create mode 100644 content/ru/docs/concepts/overview/kubernetes-api.md
diff --git a/content/ru/docs/concepts/overview/kubernetes-api.md b/content/ru/docs/concepts/overview/kubernetes-api.md
new file mode 100644
index 0000000000..5aea5818af
--- /dev/null
+++ b/content/ru/docs/concepts/overview/kubernetes-api.md
@@ -0,0 +1,117 @@
+---
+title: API Kubernetes
+content_template: templates/concept
+weight: 30
+card:
+ name: concepts
+ weight: 30
+---
+
+{{% capture overview %}}
+
+Общие соглашения API описаны на [странице соглашений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
+
+Конечные точки API, типы ресурсов и примеры описаны в [справочнике API](/docs/reference).
+
+Удаленный доступ к API обсуждается в [Controlling API Access doc](/docs/reference/access-authn-authz/controlling-access/).
+
+API Kubernetes также служит основой декларативной схемы конфигурации системы. С помощью инструмента командной строки [kubectl](/ru/docs/reference/kubectl/overview/) можно создавать, обновлять, удалять и получать API-объекты.
+
+Kubernetes также сохраняет сериализованное состояние (в настоящее время в хранилище [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) каждого API-ресурса.
+
+Kubernetes как таковой состоит из множества компонентов, которые взаимодействуют друг с другом через собственные API.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Изменения в API
+
+Исходя из нашего опыта, любая успешная система должна улучшаться и изменяться по мере появления новых сценариев использования или изменения существующих. Поэтому мы надеемся, что и API Kubernetes будет постоянно меняться и расширяться. Однако в течение продолжительного периода времени мы будем поддерживать хорошую обратную совместимость с существующими клиентами. В целом, новые ресурсы API и поля ресурсов будут добавляться часто. Удаление ресурсов или полей регулируются [соответствующим процессом](/docs/reference/using-api/deprecation-policy/).
+
+Определение совместимого изменения и методы изменения API подробно описаны в [документе об изменениях API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md).
+
+## Определения OpenAPI и Swagger
+
+Все детали API документируется с использованием [OpenAPI](https://www.openapis.org/).
+
+Начиная с Kubernetes 1.10, API-сервер Kubernetes предоставляет спецификацию OpenAPI через конечную точку `/openapi/v2`.
+Нужный формат устанавливается через HTTP-заголовки:
+
+Заголовок | Возможные значения
+------ | ---------------
+Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (the default content-type is `application/json` for `*/*` or not passing this header)
+Accept-Encoding | `gzip` (not passing this header is acceptable)
+
+До версии 1.14 конечные точки с форматом (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`) предоставляли спецификацию OpenAPI в разных форматах. Эти конечные точки были объявлены устаревшими и удалены в Kubernetes 1.14.
+
+**Примеры получения спецификации OpenAPI**:
+
+До 1.10 | С версии Kubernetes 1.10
+----------- | -----------------------------
+GET /swagger.json | GET /openapi/v2 **Accept**: application/json
+GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
+GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip
+
+В Kubernetes реализован альтернативный формат сериализации API, основанный на Protobuf, который в первую очередь предназначен для взаимодействия внутри кластера. Описание этого формата можно найти в [проектном решении](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md), а IDL-файлы по каждой схеме — в пакетах Go, определяющих API-объекты.
+
+До версии 1.14 apiserver Kubernetes также предоставлял API, который можно было использовать для получения спецификации [Swagger v1.2](http://swagger.io/) для API Kubernetes по пути `/swaggerapi`. Эта конечная точка устарела и была удалена в Kubernetes 1.14.
+
+## Версионирование API
+
+Чтобы упростить удаление полей или изменение ресурсов, Kubernetes поддерживает несколько версий API, каждая из которых доступна по собственному пути, например, `/api/v1` или `/apis/extensions/v1beta1`.
+
+Мы выбрали версионирование API, а не конкретных ресурсов или полей, чтобы API отражал четкое и согласованное представление о системных ресурсах и их поведении, а также чтобы разграничивать API, которые уже не поддерживаются и/или находятся в экспериментальной стадии. Схемы сериализации JSON и Protobuf следуют одним и тем же правилам по внесению изменений в схему, поэтому описание ниже охватывает оба эти формата.
+
+Обратите внимание, что версионирование API и версионирование программного обеспечения связаны друг с другом лишь косвенно. [Предложение по версионированию API и новых выпусков](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) описывает, как связаны между собой версии API и версии программного обеспечения.
+
+Разные версии API характеризуются разным уровнем стабильности и поддержки. Критерии каждого уровня более подробно описаны в [документации изменений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Ниже приводится краткое изложение:
+
+- Альфа-версии:
+ - Названия версий включают надпись `alpha` (например, `v1alpha1`).
+ - Могут содержать баги. Включение такой функциональности может привести к ошибкам. По умолчанию она отключена.
+ - Поддержка функциональности может быть прекращена в любое время без какого-либо оповещения об этом.
+ - API может быть несовместим с более поздними версиями без упоминания об этом.
+ - Рекомендуется для использования только в тестировочных кластерах с коротким жизненным циклом из-за высокого риска наличия багов и отсутствия долгосрочной поддержки.
+- Бета-версии:
+ - Названия версий включают надпись `beta` (например, `v2beta3`).
+ - Код хорошо протестирован. Активация этой функциональности — безопасно. Поэтому она включена по умолчанию.
+ - Поддержка функциональности в целом не будет прекращена, хотя кое-что может измениться.
+  - Схема и/или семантика объектов может стать несовместимой с более поздними бета-версиями или стабильными выпусками. Когда это случится, мы дадим инструкции по миграции на следующую версию. Это обновление может включать удаление, редактирование и повторное создание API-объектов. Этот процесс может потребовать тщательного анализа. Кроме этого, это может привести к простою приложений, которые используют данную функциональность.
+ - Рекомендуется только для неосновного производственного использования из-за риска возникновения возможных несовместимых изменений с будущими версиями. Если у вас есть несколько кластеров, которые возможно обновить независимо, вы можете снять это ограничение.
+ - **Пожалуйста, попробуйте в действии бета-версии функциональности и поделитесь своими впечатлениями! После того, как функциональность выйдет из бета-версии, нам может быть нецелесообразно что-то дальше изменять.**
+- Стабильные версии:
+ - Имя версии `vX`, где `vX` — целое число.
+  - Стабильные версии функциональности будут присутствовать в выпускаемых версиях программного обеспечения на протяжении многих последующих версий.
+
+## API-группы
+
+Чтобы упростить расширение API Kubernetes, реализованы [*группы API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
+Группа API указывается в пути REST и в поле `apiVersion` сериализованного объекта.
+
+В настоящее время используется несколько API-групп:
+
+1. Группа *core*, которая часто упоминается как *устаревшая* (*legacy group*), доступна по пути `/api/v1` и использует `apiVersion: v1`.
+
+1. Именованные группы находятся в пути REST `/apis/$GROUP_NAME/$VERSION` и используют `apiVersion: $GROUP_NAME/$VERSION` (например, `apiVersion: batch/v1`). Полный список поддерживаемых групп API можно увидеть в [справочнике API Kubernetes](/docs/reference/).
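+
+Для наглядности (это лишь набросок; имена объектов условные) ниже показано, как группа API отражается в поле `apiVersion` манифеста:
+
+```yaml
+# Ресурс из группы core: путь REST /api/v1
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-pod
+spec:
+  containers:
+  - name: app
+    image: nginx:1.14.2
+---
+# Ресурс из именованной группы batch: путь REST /apis/batch/v1
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: example-job
+spec:
+  template:
+    spec:
+      containers:
+      - name: job
+        image: busybox
+        command: ["echo", "hello"]
+      restartPolicy: Never
+```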
+
+Есть два поддерживаемых пути к расширению API с помощью [пользовательских ресурсов](/docs/concepts/api-extension/custom-resources/):
+
+1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) для пользователей, которым нужен очень простой CRUD.
+2. Пользователи, которым нужна полная семантика API Kubernetes, могут реализовать собственный apiserver и использовать [агрегатор](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/), чтобы интеграция была бесшовной для клиентов.
+
+## Включение или отключение групп API
+
+Некоторые ресурсы и группы API включены по умолчанию. Их можно включить или отключить, установив `--runtime-config` для apiserver. Флаг `--runtime-config` принимает значения через запятую. Например, чтобы отключить batch/v1, используйте `--runtime-config=batch/v1=false`, а чтобы включить batch/v2alpha1, используйте флаг `--runtime-config=batch/v2alpha1`.
+Флаг принимает набор пар ключ-значение, указанных через запятую, который описывает конфигурацию во время выполнения сервера.
+
+{{< note >}}Включение или отключение групп или ресурсов требует перезапуска apiserver и controller-manager для применения изменений `--runtime-config`.{{< /note >}}
+
+## Включение определённых ресурсов в группу extensions/v1beta1
+
+DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies и ReplicaSets в API-группе `extensions/v1beta1` по умолчанию отключены.
+Например: чтобы включить deployments и daemonsets, используйте флаг `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
+
+{{< note >}}Включение/отключение отдельных ресурсов поддерживается только в API-группе `extensions/v1beta1` по историческим причинам.{{< /note >}}
+
+{{% /capture %}}
From 9c864c965bfcf4df9fce8b655e20414a47492665 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?R=C3=A9my=20L=C3=A9one?=
Date: Sun, 16 Feb 2020 20:47:35 +0100
Subject: [PATCH 049/105] Add a Progressive Web App manifest
Add a valid apple-touch-icon link in header
Co-Authored-By: Tim Bannister
---
layouts/partials/head.html | 2 ++
static/images/kubernetes-192x192.png | Bin 0 -> 87040 bytes
static/images/kubernetes-512x512.png | Bin 0 -> 221877 bytes
static/manifest.webmanifest | 22 ++++++++++++++++++++++
4 files changed, 24 insertions(+)
create mode 100644 static/images/kubernetes-192x192.png
create mode 100644 static/images/kubernetes-512x512.png
create mode 100644 static/manifest.webmanifest
diff --git a/layouts/partials/head.html b/layouts/partials/head.html
index 1e34c3c903..16b09cd103 100644
--- a/layouts/partials/head.html
+++ b/layouts/partials/head.html
@@ -52,3 +52,5 @@
{{ with .Params.js }}{{ range (split . ",") }}
{{ end }}{{ else }}{{ end }}
+
+
diff --git a/static/images/kubernetes-192x192.png b/static/images/kubernetes-192x192.png
new file mode 100644
index 0000000000000000000000000000000000000000..49236b71bbafa6e29cd50d7ac6f9e173f1b3f960
GIT binary patch
literal 87040
zHZo;P7xnEJv*E#0R(%!=8l+_yI_%vtj#89ey|Rl7_FRFB)jecJ>v9o~{ERnGqq@4g
zW-M7)ll8IKwQPcvY-Hh3yml=3?!iD$5>PX0->A91T9`z9zv|F=czdD#t(g+RQXX4mYfyZCoSLKQVC@}eo0g;HR(CV-
zHnuI#eljz6J%P49=h$ra_}^R;Y>`}`(%wR`g9FQ`59CLUz{Ax7v)6)^=T7-JeAKs<#@1d-xfDTp+K%1i`M7s
zC>D_YW~)q_kk)-HVhRYuO=AL?lD~V*FHt=8eCOwX$pAjXpt6i-+6kG_=5z#v(U@;w
zKq-ZrUosSddt<{1!d`7k)nqi(Er^5Oc21jw7-6U{#v#H`q*GDT(^sppY}mP
zGDQElNbtArr%Y;~ADeOq#d0&qe0%W1evt97c?u3gI1;{j(*Fcu!R>OZk#Gx8>f-jums%hwTb?+#+)`5zzciQxV6B
zLMrcaivNihmdX`h<|23;eVFdL#x&gD^VvAkE$0j;;pyw4b3$dq^q
zEp3?`eoPbdM~%Y`uq>y-vCfJKhC;VWOH6@8C
z8;)ssop-~qTF`7#IA3q|H7vozpz)~0or{-{hkjBrh`EI8MbY<9h8*T7+FiQ(t;sw@
zpig_xcUv>FqoQ}{C2zoc>d?Yh60m}~X6%>h`=*`rm{jd3M|t0ZDDg;inApt7Z(Kc%
zD8@tlCY25un17-H@z+<{d$X_syTV@QntW1g=J|B3(JYDbkr(@J|BXpFNE?@6KhTz?#@JptZUKxE86(k$8Iz3x4sM%2%7SF#wTNV?nm(G5
z_=Ehz6c#*e{Obzu-p5IxPngB1hC*2txUxKX@_
zsBnf9W^epasP-0hYAb!@8Rn=3dA7Y%piO^sCVMKiDQghL-_<$DVdUL%tn0AdlAk~|
zlP60IwglQTI4GRp!tqv8U397b$Kzn|9pdqyd>9m-`Y8MQwmXjk;gLnLMJZsh&xh9m
z#c=rCt8MB^{B3)Co!k!=s?9omy;y38hl)EUc^vv()VJ`@LLvTs{7
zkXYP4^9-7&O;XBy9ek&9^LHp7dR-L?9}`@IaCS%<4t)_j#6i9VY4p-}^ag?lKhnDQ
z4c0zpPf8{15M^~_pveQ#VPXI
znanL$`g~T19yg=M9)wMz3d><9S+dJsp^3P|<~|@`71VD8W}Wxp=?zPoQ>5+dcwZEf
zo(wK!3B6m41o$@Fm<-5?c!@EH1tN%gIh!##pKcK5-k6YnffrZo^wL)86JOjQu*2iKhU<1_`MyLzJ0Mhp
z@&Mmck7x31Ki_u}{E$0!#9TCwueFFuwzQB3y^o)72TK#wO^5Aw6P5bA6~_oKN%%4_
zy%75RhMI)1j-iwe3<6w6YwtT+A+yiCz+FT93ry-_RPUh{W|u{Rags62y@B@@Au`m!
zvi=4?D*}RuDaY+%Jph7(`1|};uC0gy9yVu0
zgq?kpG*aZ%(Pk{Klz7BCQObx)_Ft8AA#v0}2GNJ^wP&2|Ci!o2IH35q(_X8JNlKzz
z7Zu#Z4-}upmknn_1&n7Mtard^_5561qg%2GB_SR-q0+>X_vj+OhdxbqSh_3cf$ew^
zV~EpQ_MxZ?+9R1EOKAA@at-%=^<8^OLZ;DTRugJY-KXS5O$fD+mx7B4?}mHo@Mh2q
z_7|n!gRq}|Xf*V*J$`})mS
zD8Zqx!41l^|p-zM?_G(y+!E47S&|e
zy_R};qC(T5=y2R^sEyjheD}@tnXYaIfO4QSYGKTv81=Eg&hE8V9ueaMGCmnBGR(so
zdA@rIOmHIsv;8;(wPBp`KWC{KuHj%;ksQ!5lV6A0O(k&%AZ5jR*q#doA8Z%JEVJDS
z7lCbtU4{>BWduZx%ML_T?0=}Hj%HiArpL<2&y1@W?aer)@dQu0$ltj~y~nqHGdbeU
zi*Ef~>gn|pGAbHFbY=*b0E*?XDU7?@%=4s%Q5KcqA&MB;Ap@)E2}sdLO#N&m3DLu=
zidUuUEVP7R>D#X7)klH1xV?~8*@Mn|t^F&+_#KA5DJ)Yltz{I}eMIKpl%A3gwa@$8
z264h-x?>l29Q1C5hV?~|uI8&Zo(La4w$2%dQ^=tBLlZ8)<%jMo{L(shCmqRlW9n>*=779-1qDtZXSp<&ux)jgEMdD}u)_7Ap9M3X_ij{?Gc;$OIes{U&
ziO3Y#RCh2#Gnl!hvP(?cx8vq+&fW=G@};Ujk{BS~U>(x0eLGyT}vo=VMHK|Py}VHqR<
z-Yisa94&~D3;5Qvr2-O>RX6D|e2DgmhJ>D04O6*_Oug+oyBK1zj_}+Jz08)dU~Mj$
zKP$`7+1xyJ-DZXCr5Lh(rUp#hP+#M7$g(=KuW1~J=G%Xb1PS0pcX@IC-UkZhbXRQe
zNV9&{c!@_bVD5C%Z+5MzgRHcLj?p9iuJy~C9|F1k#S$&*aX#t|_rGe%&f*8#48qFdMgogo%5KF}KI
zUIkH-oL<$vc+jRk#@EG5GmE_CDo&Nh3xGPgT6sze=1H
z^!X*y`3MUh7)!{`QI7%cNa|ct5eQ$>@;F@m)AC_-qWx!L*Rb%z9Kl{n65Jms0h-9)
z#vsH29W#tC?S}Sy*nUv%q@}BVYkC7(v^VTkYfV8;(Fz)8w)qPwo=MZv+vTbciYd2~
z52b3->HJwRmH9AI&y<0iBZRRck4(MUPmBalHXnT2Ld=>eAf<{{0K5m?7Eu5=Yc|Jw
zBIu6`yyI{ojRC8l
zx4ma$4~5Q+hMNizZBRJ1Q(kdNg`NFy9w2m2x=?_z*(Ly?*U0I&E>U4~`+}HR$B87b
zFr47yA$j`+JD8_GhEnB#s&Rk4hEDS;U&*)dZl-bWaCm)+-3+Ce>$Nf;pEg0rdfapW
zy@4$c8%x$}?*Wi<|7NuI+^2iZBm(0`uo7%~SACQo1Ngws)Ho)%ehH-N^7!A4x)UGA
z>=DC~OU`4=5sN~h(przWHlmIizNusX4&rS
zrpQT|WnwGTwA^xkB^|jrgDF1l_{Ce@blL*>)&*{l$SUGdfHX_@EIe(EcMhj+hx3Wc
zY8wdGPcHHC+U9nb^*TB~pkM8w{Z(M`eU7VI_SMC1NVJZKSJbX&Nq-SQESx}!Dynzp
z<32=TOON%sv%LYK=PpgT<~G_$f5<4N@f{3(dzP8fE!PrukQ)K*rjWBhH-!cv#yOf%
zZMqu&y_ad&v70p+S8AB;E}!)cg*6H&A5WdFW|Z%e;gAr|+=Uw|Baf7lvrvm#yexw`
zvoks6N`dVkOs-!mCuuZ}0kH#%32$6;X1aVgSMH_#d58%_@t?`1h648SZeB#?Q6lEt
zx5OIpikI4a>~Xfv^8Xf4^qz7HI2$ucsZdx|K?;79UWjucT8c%CsB$>==zCB%vPqPr
zbdyd{u%WN$V|gmM3Y$<5cUK<6cE|3C!N;d%;~w|MQWnqu)lSIDYKrPi%6BWHfFcVo
z+E{(BXlQp2(1Row@^$=rnXWsLI(L4G5
zH|uJxsM!zRD^dpiFJ*3tVKg4>Cjse#_=Jm4eU}bj;%M~Dd|CM>_dyPBF@y*1@hF54
z(df^5y@L3|>o003o3r=j0GP|B63F`cc#V4Z{am6A^AX`kYl99#F*gIzX$!hG>X{9S
z=MjkD7%e4n#Xl4|TOK^%qojHV`JA@|Z?jlpKTw+v`qgp&=tCg$3yXwC3-#QndFUK+
z8Ukj+!R^kbHEEVy-hHM`z67hl{nq6efW^1ewH1(<_+xuLp#Qe_?N91o
zc$fX?GWFRmBjFRd@jpl_GL^L|#Hu0>G7uwerr0R5?1v5<7jegH~JL+Gl
z-}Xj>kxW-fb^^25T2Rr;+B^Ib1QNqma!c8E_`CN*_bK!N|1fnz4;!I0mET6}AX-2Y
znkbAtLAhTN|CG4T={Gv@_n=VwGz2GNE(pK
zn41+xjr$oDE!xH@jCz!TA=OM}m@zEIA@C+{W(D(mt)1n-u)i)XETcZ
zQa67pwpwYCG10ZnDO;JM&+*8`DI*`_m}0AhxFWQ*L2wPD;ydaZG(X~$&JRTLwvw|GwJb`R
z%C~3d1n_+BMS7O%TG`8MYvy5PYb1*REaAj%LFc3&q_Z(FOQ!NXg`YEg6l>J=$m4Jf
z;`QDuB~)pZsj`!?ug6TPqU8cq^w!B9?!yrV?F);=f`1Q+&vVkEX)mr}*q;Gn8B+n7|uRAgz!+bHl7l&Jce?6&X<;6bgtFHm+Q`8K8aeZ<
z>^eZ#dzunZ#>l;1fOX$D>oa}$(GQ?*%-ivXw#>vO<9D4L;$Uh
zm$^KrST|r;3qwM595Ny*`tbD+z8ulPORQR`
zW?nph^UXqWnW<1fzmjL~+c}&J=I_|ymz~2O;9M~HEH|5h3RrN))uu_wIZ33UB3B8)
z;hdF%*98b-BPp?J-!Y@%*YIo9QTAR#HGdtuj?o;K?66G;Q5Wu%Za*CFoehIoA60uL
zt%!w|mwwlIu*fq?CqsR3TueOoN*Cb%HaO_VltR|Pyrh%J0X|4rUQs2Hrj8>$3-nR_
zOb2$5vyglC$ViU)HQptM3hV8|YuiaTw`mIWduCg2$`Ef;ev;L7%Q_N~3a2l%$w}SI
z6KwwgX!S@R5u`*;?Y9Ai)!;8~Zwbn|;DX1~b&yGRJjV;Woh
zqy!*OHWD^jTF;|M@o#=f?hlzixF`X_KIZ{G{l{%SLA3aP*aE_p>Bc@zl9ZIYm}Rg>
z8)2?uTbAtNj@C~y$%t`)_5lL0vL+lhj~Ud0;q~#~Js%BGJ?19AJY#W#2po%kh=k4rL7lTVEltTk7^f3aA`$MFBE9u55Hj9AA<6d>ga?@WP79%ZPmJeH%jj;_91ZrV*Q|G$^!^YbiE5NledWL&?UWMnFYz<-0(nol
zqxo3$TKZ&0VUKFj|x9tR=ARj!dV!~LA+{BOQ
zim;jt7JWh~bcq5^5`p_4RS}`T0ZJvKTUQgzLd?Rnim|F>6R2*;28wkiVwU;_8lKefAg$Nw_Tamv_#0l#*2gQ39K>l5wrq;mY&f$zTV%}3MweZ
zo#!$8*(?d2mSi=}NfD95RN^!yy$s$`H1&8zVI|!lAD(S$-TLc}JOs`wRGJ;x;ubxh
zfT|wEbSENx=YRA=Fbu+b9pXomG_|xxg`14XWEtI~)|kRr9kqQ3E!6kEfoQkf>|5Sp
zMS$<)d~@Qgh$ROjF->`Bgwxmx=PhHeqX15lFdi6aYhiD7LusGvV{k8Z!Y2qPULrOG
zPB+2<}Y-1`x+;9mOI*OWb-d`
zwOT44A`ZsCHe%y#jn!p6UI=j<(DcIJpOE#H1CO7=1M?9^pbv^BJ)vkK(_LtX?
zsqv`T(T5X5Wx@0AS3kN;@E;4^a5-XZJ!3e8N7YIP14mQCcNK#R+Qjp7-lTa@>G*G-H%joa=WE5(d
zK(~OtcU$*6oBLL)hwdEWf!!wd`6((BK@n{NEql1Z*22|KV!jq*wv;p1sud(bo&AtM
z^FU?X?*@8-_oDTZjpBOnbZJvnZ%scKg0TwO)MW-0h)Cv?#x5<XAN{@FqNpN
zk&Tg6TwOaps)1=Ni|Xu0jw5
z+2Ul8iWEz9!kpSkeDDBihj3`Lz#KjK#{7eYFoUci)K3%Rxn|Ee()qMlUr~^}l*>p)
za$#BD2p5I=;?bPH*WO7+bfU}GSN_S~i!r}`E=tnSkLW06=SMWtkh35KL|*^k7$K|e
zPcjAJAK=A>s9NdEHZ|97ZS9QsASMKpT-P6R9xThh6Uy
z)zkC@TAv_wL4J|{MrN%Wq2VVUz`tZ;F1HW$K)Uz4d#8^{%a}F;-28b2{!xx1M8r%CbKLqFLtTw}QJdTOH
zVENyI7jl1Hq96q7Vl>GKAv$MP6UpAu1BIS}t5@rHRe$RWQFq1Wo_Ue5dt2IjGT=`n
zJ^O}RsQA13k7UK!60}FK4iZzA1u`1TBKmBDQ0_A`WQh`|;aDyyDXQfzw?B9we4;_w
z$-x_7w2Ob;n08R938UwAOD>A0F!I*~S6e2qwJb2z!8k!M3D@*XYB|@dlD*Q=Nxx_b
z6zAz>kx-@WFK=(F0$26NOZcN1EsTG&24RoX%deJRCK@AjV3HhFJqrK;KmbWZK~y;B
z*OiWML!xf=@#=?0h4LM(x;D5gtO*)(waC+cv$!rAEQ(TUc$Eut-Dub!xBRw1B7)K@
zkFR<+<9-;yw&+;LBaE9LJadE*U`fBvVt6o{>&)o9jVB8f$-8A93k=D<-FT9j0jm7I
z;ND=q+H_A`ZFd|d@2`?idT9Ts?%~Q=mh1bf;g+wdBLZ(F_HAF3y&x8ct_29nU0d)6
z-Gp%46Tb$q`YI9&cFeCa>!_shsX<0dRNe>UI=YW2RVFC-4f6abcT;1TxrOUu;fo*&
z$m+79H>6vyrv&4j=P7Ru5Y^hbO-Et;Q5i)&AFzm^igAS`OWgMdz5B-Jg3uM%ocpxj
zbJSNr7~dgli(3PaJ3nIa=|7ry1(klqItJ`vU%&m;IlkvymCSV}j4!%Y;2omkX2f_d
zND?!hmt2A{95EAJ9I#j8`|b4*iXdM>49thHs>5g{BQb!qYTY-GuOd2*&=-j+X5^p88#Y;AfGLc_PnQO4NQg%KP*d7WHA}
z4A_Z$hr~E|D8=O}gouXW_v(R-9k%3twVrH0*M8pr_4%@+e%*)<1A?H#
z@f=lVe;OjtaZa}~U&9=x90m1etmLX@!AdHmwo=o-EZaCZ1c|xxPZg6(z?gGz|IY1=<33W>jdgR0Mja^4;`6~=SfVY{=%!dU5Y&h$G{(W}=rG9w!#A(lCaQ;n!eLzm-$-~myLOdfu
zCsFMLJ+BUeS`YijKjNxFLNOj?e$6G;8-
z@cBQTxOzK|tM{n?tmqSA*it7&3GYbbAFilG41e-dfbFCJNId^
zCd+(?LlyybGQecO{?lP9WJP*e5J?7RbqEZIWfWsd7VR6+DWE;FnK}@g8Y)XCZXA&M
zrZ{1juIs-w>L;a!H+yniu1-W{RfvAk?>_FEYorjDgHf8{`Ul?=tK%$!e;@N6qd6{X
zIg7C`kY5qm#1tZ>O(59(M__9x-l*tHx7dVoF1BO{n0W$5ww|(^IYoU&S%ekjCUN-=
z!a~KSYQvQaaM)26_IV-TqM68X0g=GGTRfD1gNW~v=U2f9IE8Wpu}wd6-6-}>QQGyg
z`$g(44DY{X2zI-PEjW&Tll_S-y{Hprtc-)tsF4U*avwI@J=bB*^8wS(9W|Ub@
zd_TICZYqM2eosGaO~iYUST4b%x@ZD`<9la2H9@w#kheyzM8DB!rkiplI)22|HQGI0
zG#tk?bfz3!Bdss>xKTiNSp6mVL_Lf*>AAB~8NUc-#Hif~hf8P4c+3WUSpO&{L4>Nf
zi}jXvy%JCD8?I6*uedfK14+;#HMnM{kvI|Ib@L#Wqi
zVGz0=wm%^pGC|tQIxgLGJ-4I-{FMzc
z4g6q8M}By9cs(x!Q*uZ$`PLXx_-oZ
z{x)2#PqJSd6U|+mAHrTf>b1*^7k`!+NV135D7WJoqTVoxO`>641e+mFycHdM*{rY&_24swMc_QqwsZjPi*YgnYTDdjlyAMlMf^wPa+
zYzmTQx;7vwwiv)Pf}kwk8#+b?V!Vozc-O2tuGQv#!anah__*r~IkP?tb!bX=9V4H_
zIfG6i){r$ph-+6%Ui3F0av%f4{23NT>FH`_S*ocDT%<=psHgr~K0^>L8_W;QNn-M`
zhLM+`Y3uuKiB>vP2#cUO49*eg*n?L@nTyOdGH@F`cjuANSCv}$dB3tzFAyL8&Cv>;
z*D-0vJAc!S7#E$0o+!-q!|;HpMccM3i6}OhX(O8?@c#rsxj<;&p?21=_nXML&`Jy
zYmsnuC+328q#vdmZ(w9oXVVgMdKxo>e8Hi`ne}L|Ken_>g^^{J$*Ne?Z)Ex>pz-W;E!-2q
zme>V~=cw?M5D@lFLGxXSMj;Hc;d7psbiL{G=
z-A>ldfB3Q6*=~24&k(?M+5Mt7(Yrm_pX|jCV}@~D_G4=mSQZeAOQG{irSjxw3B!X;
zRtG(XQ$Oa9NgnDQKEXdm;?;FaNKhWC@iyVfg4emhCLze2vhd3LEBkJ-?F*3;FbC7R
z8o}aRKs)_0_*ME)P7e(D^IcCl!uj3@_TBS-ggkGwdP{tOEaf`EYM=JZ+!jVHrea};
zHo+4{06yAa+{>EGZ`gZC(4QPQn9*JKQ_eF(MEM_db`{f>`6dDj?R^o)F5Pmyx4f)A
zC!;?Id~HkysUtM!@qmFM_-);J%Fk09G@F>b3ZwBwF#9r3_$a$VH?chYEk0bl^S|4G
z1YIMr8GyvGLO^g~y(?3m!I2jwZaQ&(%4Z%LX6ufV?LC~&w=3`zZeitq-I&Qx5gXtL
zbOw=JO$$K;Ic@}Gr<~9|G3d}*>66?~Ba5h7-{d^!D63Sz2bVvTzL2obt#x0_?1U4$
zWzjbSXVc;FMicDd$~nqWOH4jU&7(w8gLo!egvCbm+LrPy66Op`0vJ!s#}7PjN!|Tn
zJCs^b*t12P!L90ypO@HMsd_II+2yRtIEgk&%5$>b^Nhk=rgVaCB_xE&STSS+-iQCdxh9DnJ)ODikt01a8!NT5*nzH@#URN04xVzI!bCjgXgALqj*m*nZ4|O@
zX(PUg#Fifo3YsqCD0`;f7<%Ck&i90UpgC9rjPJh)VxsgCVqp&o*ZXK%kVRg(FDLC4
zuxLl}!$jSTT+}nO2tgiqm;TVn_th+szqfNa(Kot8NB{E7ds|3MS*+F_EtRXN~lItEJ9zLT;v+eK<<-J$6Nwodn?}t2Z)qf!u=Kw
zSEU2-1PCI?V7Rg1<>Y7RigYSc8Wg6(jA6v>@+l<~g5TE@NojC;V(D`Z<}FR@^J4UD
zF6$nKQcC7UNSPd|4-qwGvL|-eEp1
zDrdkzM>ky($@rlY`#tLs=xaRRBytN3CboWo@VdM&8_ZGWDh8c$4AFZDNrUn-fFMj3
z1?9USL|%y``})5uB&<`uM?3%!NQ>~(PUci|gCsx0pOMFkLK%RIyOZ^lhVq*nklI(G
z!FJBu?-U7pMZwK2(&GuMUAVJW7KCzVz^P1cvNgX_8U`5IHMFe|lv3wHcI%N4;-_-w
z*1R`*bV#P7Civsld&R_MK?{BhKp284?ExebzYta%vpi>9scykJ&W%g
z8@z%r&NgL>2E%Z3J`nkX^GmLO!0Q@>`8`d%j57C{`6Ng}&(VFz6LYcsU{68@#MBn}
zr+Qll`4LS@`bi{5VQi_^pI6|ZT^ywdHXhCMrx29*InRGmS5c2B)p72&CFZzOo)bicE$+?|!`HGjh11Q^PJ#VMza+T>8n8#2>T5PE;qsLLD!rT5Tt
z!Z^i*CH*0QAauhZJ1mq0fqN}l_6~Y<0JT|2xGL@Yd&JRY|5_LP-+yTGYXtT{i>Iq0fP8iki~O`ejxv*g#N4qL@%b$iPXST+^?4-l_VS!;kc1sFfb4Kh`?y`&
z5{bPpX_bXUWk<1?Zah6dvHt)8rki>GxB7L9v1DI=qJ8`%*D4rWy$u~w=OPF@)<&hj
zM1Ai&GRLz+E(-ZULvX7GhwssiNXvYGpnO?uy(aP;%b+L>Aphm~XW5vY*7-mGgjnQ5?piC1$_D=y{%3UwXvyzc
zPhu2O#_LSo0zV6W5qv~!x!Q@eKP-P}Jj=KX*{w38s&oqSguCIbi`=AsKyZ)TK6S@UMZ
zS0U{j1oj9a5tT{C$dw>rkf9tDQGN$NG
z1AA#0P=i81-!S{S+I@
zZ$F;ahRh&mrnOF7#Z1~B>AxHA!1&(?o$3JJQ=fImkwRQKHsD0|aDzYXBlWUPu58{=
zuq_DpqK|lgVLQfs3;fi7RRV*a5j~=Y=WC|J1K=hp?${L8AG7d$c
z{!jhPss9Ur!dDq=$=@arzhU_j#4D!_y80sQE)h4`Wl0FZX+p3(U$S)a5w80Nn<)qn
z$4j&rM<7F05dB7jNEHHS_?j&0*#gBB0+KPFY(?K7$m|T9AOxr8vcR7tN~Br$ca~h3
z4nuFXp=TQ3|5Rd|0}+RC_GXqy3<9C;QEO}=oR5L$$kJ;uj>!z%99DW(3gLzs*V5*487{$K=y}TpuVk@9Q7CCKrRhtzHqE@dOO)t7NwU
z0jl_Yt}_k@e#3!<+Bz3=yoyoQ(r6YMC1e6x$~M;nW^rEoDMN+#>PHz+3=sc%jtZRPdQG7uXM;j#Yidd
z(Q>vmd+Uu-A_%zV8`~vZ877cZ1QyI8gM<*(t{qv!F#v_AYK|90)=w_riZ(gktHQhq
zX`AGE-_=fIz`yoY*VUIGJMGugEvBB;~i6-K^xIWn$S4;5{#8S_}M5n_b%x}!~;)|-kr|N-F
z+WV)rFFmUp(RB~J^4_^PrMecZz0)7Y^!n!y3<^bH%X-}XDp`)Ly0%kqH#I<$oAEV1MAY@ew
zjdn7irhdJrs_PXjK>?c{J=;jBceb>e6Z{Ca_t#rJ7YQMZt`>1YRBz<5j%pPH(L)eu
zA_JnH4#o(iP}A?RQYN(1b;U+();If$ZcuP-o^k$tQLGTt1ph6dJySC?ni*WzDZN>_
z%xy@}QDuh)m*DIuEWiv*qF1V7b`S(>3rc!z%`hm7@P|>#*U9(UU^8;Dk@H;)USShp
zDt~A&=0VD}-1a7lJp{lp-AnkBMy6IV7^w57tXQ6OAJL1)zs;Qn14B0cP6%MF3c2I-
zQ3&JSh$o%xsYX%vgEC_^sjKTdEBRj!42Gg1^``~Sa@#J*eIWbKArT27OQHR#r=yWx
zEd%T1ldgxcJXOJ&*9VfB=KH#E5!~Wn`;(LA06s1%g6T^fXLrL{PHjX4@veQujbJ@~uPC
zf7G)2HJE(C4kYe=UBCWzM=e4C7Fl<Emen)w{k~h<^H!I{TGE70Jga(kus`_m
zM5MPgsvT3K56SB%kcQaExI-k1#AxYzgSTqo-w#SN!@%EYDck^7FrIfMeXNid<0h|u
zFxn3MZMW&ENC!=;Qf?6Z!t3s8&lDKO7*4;?I^TKZAh%4Ya&W;c{>E%o?#b@isEa#c
zeOAu+aA(}Ya#wyy09>m=z2vi5xUnK_@#9`VwoS-0+r8{&KzF|Tx42-kdtlOR>I3N8y1DX0cSqx&1Yv+NTxC6p
zAefvI4wQ=#{*!sH0!z;j3RKB6MR+{IkI4r`JP+B?w<(J-8bZOP2Zr!VVl3i`nP4K2
z5IARrTTfmYb{$#^p@&R