From 2dfbdc2cd85094ceb8524089980d3a6f0e7e1c54 Mon Sep 17 00:00:00 2001 From: Maksym Vlasov Date: Mon, 13 Jan 2020 18:51:38 +0200 Subject: [PATCH 001/105] Initial commit for Ukrainian localization (#18569) * Initial commit for Ukrainian localization * Fix misspell and crosslink * Add Nikita Potapenko to PR reviewers https://github.com/kubernetes/website/pull/18569#issuecomment-573014402 Co-authored-by: Anastasiya Kulyk <56824659+anastyakulyk@users.noreply.github.com> --- OWNERS_ALIASES | 8 ++++++ README-uk.md | 71 +++++++++++++++++++++++++++++++++++++++++++++++ README.md | 2 +- config.toml | 11 ++++++++ content/uk/OWNERS | 13 +++++++++ 5 files changed, 104 insertions(+), 1 deletion(-) create mode 100644 README-uk.md create mode 100644 content/uk/OWNERS diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index b6d0cfcd45..0e6ba8684a 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -216,3 +216,11 @@ aliases: - aisonaku - potapy4 - dianaabv + sig-docs-uk-owners: # Admins for Ukrainian content + - anastyakulyk + - MaxymVlasov + sig-docs-uk-reviews: # PR reviews for Ukrainian content + - anastyakulyk + - idvoretskyi + - MaxymVlasov + - Potapy4 diff --git a/README-uk.md b/README-uk.md new file mode 100644 index 0000000000..68d3b0db0a --- /dev/null +++ b/README-uk.md @@ -0,0 +1,71 @@ +# Документація Kubernetes + +[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) +[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + +Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [вебсайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок! + +## Внесок у документацію + +Ви можете створити копію цього репозиторія у своєму акаунті на GitHub, натиснувши на кнопку **Fork**, що розташована справа зверху. Ця копія називатиметься *fork* (відгалуження). Зробіть будь-які необхідні зміни у своєму відгалуженні. Коли ви будете готові надіслати їх нам, перейдіть до свого відгалуження і створіть новий pull request, щоб сповістити нас. + +Після того, як ви створили pull request, рецензент Kubernetes зобов’язується надати вам по ньому чіткий і конструктивний коментар. **Ваш обов’язок як творця pull request - відкоригувати його відповідно до зауважень рецензента Kubernetes.** Також, зауважте: може статися так, що ви отримаєте коментарі від декількох рецензентів Kubernetes або від іншого рецензента, ніж той, якого вам було призначено від початку. Крім того, за потреби один із ваших рецензентів може запросити технічну перевірку від одного з [технічних рецензентів Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers). Рецензенти намагатимуться відреагувати вчасно, проте час відповіді може відрізнятися в залежності від обставин. 
+ +Більше інформації про внесок у документацію Kubernetes ви знайдете у наступних джерелах: + +* [Внесок: з чого почати](https://kubernetes.io/docs/contribute/start/) +* [Візуалізація запропонованих змін до документації](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) +* [Використання шаблонів сторінок](http://kubernetes.io/docs/contribute/style/page-templates/) +* [Керівництво зі стилю оформлення документації](http://kubernetes.io/docs/contribute/style/style-guide/) +* [Переклад документації Kubernetes іншими мовами](https://kubernetes.io/docs/contribute/localization/) + +## Запуск сайту локально за допомогою Docker + +Для локального запуску вебсайту Kubernetes рекомендовано запустити спеціальний [Docker](https://docker.com)-образ, що містить генератор статичних сайтів [Hugo](https://gohugo.io). + +> Якщо ви працюєте під Windows, вам знадобиться ще декілька інструментів, які можна встановити за допомогою [Chocolatey](https://chocolatey.org). `choco install make` + +> Якщо ви вважаєте кращим запустити вебсайт локально без використання Docker, дивіться пункт нижче [Запуск сайту локально за допомогою Hugo](#запуск-сайту-локально-зa-допомогою-hugo). + +Якщо у вас вже [запущений](https://www.docker.com/get-started) Docker, зберіть локальний Docker-образ `kubernetes-hugo`: + +```bash +make docker-image +``` + +Після того, як образ зібрано, ви можете запустити вебсайт локально: + +```bash +make docker-serve +``` + +Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. + +## Запуск сайту локально зa допомогою Hugo + +Для інструкцій по установці Hugo дивіться [офіційну документацію](https://gohugo.io/getting-started/installing/). Обов’язково встановіть розширену версію Hugo, яка позначена змінною оточення `HUGO_VERSION` у файлі [`netlify.toml`](netlify.toml#L9). + +Після установки Hugo запустіть вебсайт локально командою: + +```bash +make serve +``` + +Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. + +## Спільнота, обговорення, внесок і підтримка + +Дізнайтеся, як долучитися до спільноти Kubernetes на [сторінці спільноти](http://kubernetes.io/community/). + +Для зв’язку із супроводжуючими проекту скористайтеся: + +- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Поштова розсилка](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) + +### Кодекс поведінки + +Участь у спільноті Kubernetes визначається правилами [Кодексу поведінки спільноти Kubernetes](code-of-conduct.md). + +## Дякуємо! + +Долучення до спільноти - запорука успішного розвитку Kubernetes. Ми цінуємо ваш внесок у наш вебсайт і документацію! 
diff --git a/README.md b/README.md index f403f9e9de..02919f3136 100644 --- a/README.md +++ b/README.md @@ -27,7 +27,7 @@ For more information about contributing to the Kubernetes documentation, see: |[Hindi README](README-hi.md)|[Spanish README](README-es.md)| |[Indonesian README](README-id.md)|[Chinese README](README-zh.md)| |[Japanese README](README-ja.md)|[Vietnamese README](README-vi.md)| -|[Russian README](README-ru.md)| +|[Russian README](README-ru.md)|[Ukrainian README](README-uk.md) ||| ## Running the website locally using Docker diff --git a/config.toml b/config.toml index 990392d659..399cb38e68 100644 --- a/config.toml +++ b/config.toml @@ -286,3 +286,14 @@ time_format_blog = "02.01.2006" # A list of language codes to look for untranslated content, ordered from left to right. language_alternatives = ["en"] +[languages.uk] +title = "Kubernetes" +description = "Довершена система оркестрації контейнерів" +languageName = "Українська" +weight = 13 +contentDir = "content/uk" + +[languages.uk.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] diff --git a/content/uk/OWNERS b/content/uk/OWNERS new file mode 100644 index 0000000000..09fbf1170c --- /dev/null +++ b/content/uk/OWNERS @@ -0,0 +1,13 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +# This is the directory for Ukrainian source content. +# Teams and members are visible at https://github.com/orgs/kubernetes/teams. + +reviewers: +- sig-docs-uk-reviews + +approvers: +- sig-docs-uk-owners + +labels: +- language/uk From dfd00c180e6222c49b5d75963647fbb6aed2018e Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Sun, 8 Dec 2019 20:08:01 +0000 Subject: [PATCH 002/105] Fix whitespace The Kubernetes object is called ReplicationController. --- content/ko/docs/reference/glossary/replication-controller.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md index f5c19a33aa..2bcf7aefab 100755 --- a/content/ko/docs/reference/glossary/replication-controller.md +++ b/content/ko/docs/reference/glossary/replication-controller.md @@ -1,5 +1,5 @@ --- -title: 레플리케이션 컨트롤러(Replication Controller) +title: 레플리케이션 컨트롤러(ReplicationController) id: replication-controller date: 2018-04-12 full_link: From 7bd5562d2b1aaf1adea48585c154741ababe4072 Mon Sep 17 00:00:00 2001 From: MaxymVlasov Date: Wed, 5 Feb 2020 14:53:39 +0200 Subject: [PATCH 003/105] Fix merge conflict resolution mistake --- config.toml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/config.toml b/config.toml index be61230816..5c0328ec9e 100644 --- a/config.toml +++ b/config.toml @@ -295,6 +295,9 @@ contentDir = "content/pl" [languages.pl.params] time_format_blog = "01.02.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] + [languages.uk] title = "Kubernetes" description = "Довершена система оркестрації контейнерів" From 93a23a1bf4b7749bd7049f6081de1dff399b7178 Mon Sep 17 00:00:00 2001 From: Brent Klein Date: Thu, 27 Feb 2020 13:50:15 -0500 Subject: [PATCH 004/105] Updated Deployment description for clarity. 
--- .../tutorials/kubernetes-basics/deploy-app/deploy-intro.html | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 8f38960de7..af2d5eeed2 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,8 @@ weight: 10 Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes - master schedules mentioned application instances onto individual Nodes in the cluster. + master schedules the application instances included in that Deployment to run on individual Nodes in the + cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.
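As a rough sketch of the kind of Deployment configuration this page refers to (the names and the `nginx:1.14.2` image below are illustrative placeholders, not something this tutorial defines), a manifest might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment        # illustrative name only
spec:
  replicas: 3                     # ask the Deployment controller to keep 3 instances running
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx:1.14.2       # placeholder image
        ports:
        - containerPort: 80
```

If a Node running one of these instances fails, the Deployment controller notices the missing Pod and starts a replacement on another Node, which is the self-healing behaviour described above.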

From 2e8eb9bd4f906fdd9435476b359395a4ccb5d8d0 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Fri, 28 Feb 2020 09:26:54 -0500 Subject: [PATCH 005/105] Added endpoint-slices for language/fr --- .../services-networking/endpoint-slices.md | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 content/fr/docs/concepts/services-networking/endpoint-slices.md diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md new file mode 100644 index 0000000000..47f076d11b --- /dev/null +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -0,0 +1,111 @@ +--- +reviewers: +title: EndpointSlices +feature: + title: EndpointSlices + description: > + Scalable tracking of network endpoints in a Kubernetes cluster. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +_EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints. + +{{% /capture %}} + +{{% capture body %}} + +## Resource pour EndpointSlice {#endpointslice-resource} + +Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau +endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. + +Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`. + +```yaml +apiVersion: discovery.k8s.io/v1beta1 +kind: EndpointSlice +metadata: + name: example-abc + labels: + kubernetes.io/service-name: example +addressType: IPv4 +ports: + - name: http + protocol: TCP + port: 80 +endpoints: + - addresses: + - "10.1.2.3" + conditions: + ready: true + hostname: pod-1 + topology: + kubernetes.io/hostname: node-1 + topology.kubernetes.io/zone: us-west2-a +``` + +EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire. + +EnpointpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devront une amélioration de performance pour les services qui ont une grand quantité d'endpoints. + +### Types d'addresses + +EndpointSlices supporte trois type d'addresses: + +* IPv4 +* IPv6 +* FQDN (Fully Qualified Domain Name) - [serveur entièrement nommé] + +### Topologie + +Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. +Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: + +* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe. +* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. +* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe. 
+ +Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. + +### Capacité d'EndpointSlices + +Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip +text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000. + +### Distribution d'EndpointSlices + +Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices. + +Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibrent pas activement. La logic du contrôlleur est assez simple: + +1. Itérer à travers les EnpointSlices existantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. +2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. +3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. + +par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. + +Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. + +En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. + +## Motivation + +Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau. + +Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. 
Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) +* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} \ No newline at end of file From 86cf63495c48a67c9f9915d7c3e15602bfcc5707 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 2 Mar 2020 09:31:05 -0500 Subject: [PATCH 006/105] First round review changes --- .../services-networking/endpoint-slices.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index 47f076d11b..460bc41f25 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -4,7 +4,7 @@ title: EndpointSlices feature: title: EndpointSlices description: > - Scalable tracking of network endpoints in a Kubernetes cluster. + Suivi évolutif des points de terminaison réseau dans un cluster Kubernetes. content_template: templates/concept weight: 10 @@ -24,17 +24,17 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése ## Resource pour EndpointSlice {#endpointslice-resource} Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. +endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selecteur" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. -Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`. +Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. ```yaml apiVersion: discovery.k8s.io/v1beta1 kind: EndpointSlice metadata: - name: example-abc + name: exemple-abc labels: - kubernetes.io/service-name: example + kubernetes.io/service-name: exemple addressType: IPv4 ports: - name: http @@ -66,18 +66,18 @@ EndpointSlices supporte trois type d'addresses: ### Topologie Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. -Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. 
Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: +Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: -* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe. +* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe. * `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. * `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe. -Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. +Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selecteur" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. ### Capacité d'EndpointSlices -Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip -text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000. +Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip +text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices @@ -89,9 +89,9 @@ Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possib 2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. 3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. -par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. +Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. 
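À titre d'illustration pour la limite de capacité évoquée plus haut : sur un cluster créé avec kubeadm (hypothèse ; l'emplacement et le contenu exacts du manifest varient selon l'installation), l'indicateur `--max-endpoints-per-slice` s'ajoute à la commande du Pod statique kube-controller-manager, par exemple :

```yaml
# Esquisse d'un extrait de /etc/kubernetes/manifests/kube-controller-manager.yaml
# (chemin propre à kubeadm, donné ici comme hypothèse)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.17.0   # version donnée en exemple
    command:
    - kube-controller-manager
    - --max-endpoints-per-slice=500   # valeur d'exemple ; 1000 au maximum
    # ... les autres indicateurs déjà présents dans le cluster restent inchangés
```

Après modification de ce manifest, kubelet recrée automatiquement le Pod statique avec la nouvelle valeur.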
-Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. +Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. @@ -99,13 +99,13 @@ En pratique, cette distribution moins qu'idéale devrait être rare. La plupart Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau. -Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. +Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. 
{{% /capture %}} {{% capture whatsnext %}} * [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) -* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) +* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) {{% /capture %}} \ No newline at end of file From 2e750fa39e706c26c994c15db6b0e9a3685830d3 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 2 Mar 2020 09:31:05 -0500 Subject: [PATCH 007/105] First round review changes --- .../services-networking/endpoint-slices.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index 47f076d11b..6798eb2c20 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -4,7 +4,7 @@ title: EndpointSlices feature: title: EndpointSlices description: > - Scalable tracking of network endpoints in a Kubernetes cluster. + Suivi évolutif des points de terminaison réseau dans un cluster Kubernetes. content_template: templates/concept weight: 10 @@ -24,17 +24,17 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése ## Resource pour EndpointSlice {#endpointslice-resource} Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux Service selector. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. +endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. -Par example, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `example`. +Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. ```yaml apiVersion: discovery.k8s.io/v1beta1 kind: EndpointSlice metadata: - name: example-abc + name: exemple-abc labels: - kubernetes.io/service-name: example + kubernetes.io/service-name: exemple addressType: IPv4 ports: - name: http @@ -66,9 +66,9 @@ EndpointSlices supporte trois type d'addresses: ### Topologie Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. -Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Noeud, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: +Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. 
Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: -* `kubernetes.io/hostname` - Nom du Noeud sur lequel l'endpoint se situe. +* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe. * `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. * `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe. @@ -76,8 +76,8 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que ### Capacité d'EndpointSlices -Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec le `--max-endpoints-per-slice` {{< glossary_tooltip -text="kube-controller-manager" term_id="kube-controller-manager" >}} flag [indicateur] jusqu'à un maximum de 1000. +Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip +text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices @@ -89,9 +89,9 @@ Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possib 2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. 3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. -par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par example, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. +Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. -Avec kube-proxy exécuté sur chaque Noeud et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Noeud du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Noeud, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. +Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. 
Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. @@ -99,13 +99,13 @@ En pratique, cette distribution moins qu'idéale devrait être rare. La plupart Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau. -Puisque tout les endpoint d'un réseau pour un Service ont été stockés dans un seul ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu une grande quantité de trafic réseau et de traitement lorsque les Enpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. +Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. {{% /capture %}} {{% capture whatsnext %}} * [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) -* Read [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) +* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) {{% /capture %}} \ No newline at end of file From 48828fdc4a17ab872d7f3c9ae65c4c09e8934668 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 2 Mar 2020 11:39:54 -0500 Subject: [PATCH 008/105] keeping term_id but changing text --- .../fr/docs/concepts/services-networking/endpoint-slices.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index 04a63aa6c8..6992428f57 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -24,7 +24,7 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése ## Resource pour EndpointSlice {#endpointslice-resource} Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selector" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. +endpoints. 
Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. @@ -72,7 +72,7 @@ Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les info * `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. * `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe. -Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selector" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. +Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. ### Capacité d'EndpointSlices From 6eceb245a2335d1e1a15ebaa4086dfbb5d4610f4 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 2 Mar 2020 12:20:17 -0500 Subject: [PATCH 009/105] remove typo --- content/fr/docs/concepts/services-networking/endpoint-slices.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index 6992428f57..a96dad2b98 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -106,6 +106,6 @@ Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans {{% capture whatsnext %}} * [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) -* Lire [Connecté des Agit stpplication aux Services](/docs/concepts/services-networking/connect-applications-service/) +* Lire [Connecté des Application aux Services](/docs/concepts/services-networking/connect-applications-service/) {{% /capture %}} \ No newline at end of file From 73f5b576d4bb9309037fbb41785cb937efc253a8 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 2 Mar 2020 12:39:39 -0500 Subject: [PATCH 010/105] Fix accent and verb accordance --- .../services-networking/endpoint-slices.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index a96dad2b98..b25516b1a2 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -24,7 +24,7 @@ _EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un rése ## Resource pour EndpointSlice {#endpointslice-resource} Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -endpoints. 
Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references a n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. +endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references à n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. @@ -53,7 +53,7 @@ endpoints: EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire. -EnpointpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devront une amélioration de performance pour les services qui ont une grand quantité d'endpoints. +EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'endpoints. ### Types d'addresses @@ -66,7 +66,7 @@ EndpointSlices supporte trois type d'addresses: ### Topologie Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. -Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondant. Lorsque les valeurs sont disponibles, les etiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: +Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les étiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: * `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe. * `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. @@ -76,16 +76,15 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que ### Capacité d'EndpointSlices -Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip -text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. +Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices. 
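Pour illustrer le cas des ports nommés décrit ci-dessus, voici une esquisse (les noms `exemple`, `app: exemple` et `http-web` sont fictifs) d'un Service dont le `targetPort` référence un port nommé ; si les Pods sélectionnés déclarent ce nom sur des numéros de port différents, leurs endpoints se retrouvent dans des EndpointSlices distincts :

```yaml
# Esquisse : Service utilisant un port nommé comme targetPort
apiVersion: v1
kind: Service
metadata:
  name: exemple
spec:
  selector:
    app: exemple
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http-web   # port nommé, résolu individuellement sur chaque Pod sélectionné
```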
-Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibrent pas activement. La logic du contrôlleur est assez simple: +Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logic du contrôlleur est assez simple: -1. Itérer à travers les EnpointSlices existantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. +1. Itérer à travers les EnpointSlices éxistantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. 2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. 3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. @@ -93,7 +92,7 @@ Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'E Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. -En pratique, cette distribution moins qu'idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. ## Motivation From 44e3358c7f3bec220966750b7b170e8bc379450b Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Fri, 6 Mar 2020 10:00:26 -0500 Subject: [PATCH 011/105] second round review --- .../services-networking/endpoint-slices.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index b25516b1a2..103a2bf377 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -4,7 +4,7 @@ title: EndpointSlices feature: title: EndpointSlices description: > - Suivi évolutif des réseau endpoints dans un cluster Kubernetes. + Suivi évolutif des réseaux endpoints dans un cluster Kubernetes. content_template: templates/concept weight: 10 @@ -15,7 +15,7 @@ weight: 10 {{< feature-state for_k8s_version="v1.17" state="beta" >}} -_EndpointSlices_ offrent une simple methode pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints. 
+_EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints. {{% /capture %}} @@ -76,29 +76,29 @@ Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que ### Capacité d'EndpointSlices -Les EndpointSlices sont limité a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. +Les EndpointSlices sont limités a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices -Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisé pour un Service, les Pods peuvent se retrouver avec differents port cible pour le même port nommé, nécessitant différents EndpointSlices. +Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. -Le contrôlleur essait de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logic du contrôlleur est assez simple: +Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple: -1. Itérer à travers les EnpointSlices éxistantes, retirer les endpoint qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. +1. Itérer à travers les EnpointSlices existantes, retirer les endpoints qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. 2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. 3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. -Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement à une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. +Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. 
Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. -En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traité par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. ## Motivation -Les Endpoints API ont fournit une methode simple et facile de suivre les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus large, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés à la mise à l'échelle vers un plus grand nombre d'endpoint d'un réseau. +Les Endpoints API fournissent une méthode simple et facile à suivre pour les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'endpoint d'un réseau. -Puisque tout les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez considérable. Ça a affecté les performances des composants Kubernetes (notamment le plan de contrôle maître) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. EndpointSlices vous aide à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. +Puisque tous les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. 
{{% /capture %}} From ef2f47772844dc5cc8fb6c4d410d2d19dc3d5910 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Fri, 6 Mar 2020 10:05:13 -0500 Subject: [PATCH 012/105] Rewording sentences for a more accurate translation --- content/fr/docs/concepts/services-networking/endpoint-slices.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index 103a2bf377..cf980fd340 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -92,7 +92,7 @@ Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'E Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. -En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également un remballage naturel des EndpointSlices avec tout leur pods et les endpoints correspondants qui se feront remplacer. +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les endpoints correspondants qui se feront remplacer. ## Motivation From 476598657b092e4ae2cd145d990115c0974ddf0f Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 9 Mar 2020 15:39:33 -0400 Subject: [PATCH 013/105] Add reviewed changes --- .../services-networking/endpoint-slices.md | 52 +++++++++++-------- 1 file changed, 30 insertions(+), 22 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index cf980fd340..bf22340339 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -4,7 +4,7 @@ title: EndpointSlices feature: title: EndpointSlices description: > - Suivi évolutif des réseaux endpoints dans un cluster Kubernetes. + Suivi évolutif des réseaux Endpoints dans un cluster Kubernetes. content_template: templates/concept weight: 10 @@ -15,7 +15,7 @@ weight: 10 {{< feature-state for_k8s_version="v1.17" state="beta" >}} -_EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus evolutive et extensible aux Endpoints. +_EndpointSlices_ offrent une méthode simple pour suivre les Endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus évolutive et extensible aux Endpoints. 
{{% /capture %}} @@ -24,7 +24,9 @@ _EndpointSlices_ offrent une méthode simple pour suivre les endpoints d'un rés ## Resource pour EndpointSlice {#endpointslice-resource} Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Kubernetes Service quand un {{< glossary_tooltip text="selecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des references à n'importe quelle Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les endpoints d'un reseau par combinaisons uniques de Services et de Ports. +Endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié. +Ces EnpointSlices vont inclure des références à n'importe quels Pods qui correspond aux selecteur de Service. +EndpointSlices groupent ensemble les Endpoints d'un réseau par combinaisons uniques de Services et de Ports. Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. @@ -51,13 +53,15 @@ endpoints: topology.kubernetes.io/zone: us-west2-a ``` -EndpointSlices geré par le controleur d'EndpointSlice n'auront, par défaut, pas plus de 100 endpoints chacun. En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire. +Les EndpointSlices géré par le contrôleur d'EndpointSlice n'auront, par défaut, pas plus de 100 Endpoints chacun. +En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire. -EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand it s'agit du routage d'un trafic interne. Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'endpoints. +EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand il s'agit du routage d'un trafic interne. +Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'Endpoints. ### Types d'addresses -EndpointSlices supporte trois type d'addresses: +Les EndpointSlices supportent 3 types d'addresses: * IPv4 * IPv6 @@ -65,46 +69,50 @@ EndpointSlices supporte trois type d'addresses: ### Topologie -Chaque endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. -Ceci est utilisé pour indiqué où se trouve un endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les étiquette de Topologies suivantes seront définies par le contrôleur EndpointSlice: +Chaque Endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. +Ceci est utilisé pour indiqué où se trouve un Endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les labels de Topologies suivantes seront définies par le contrôleur EndpointSlice: -* `kubernetes.io/hostname` - Nom du Node sur lequel l'endpoint se situe. -* `topology.kubernetes.io/zone` - Zone dans laquelle l'endpoint se situe. -* `topology.kubernetes.io/region` - Region dans laquelle l'endpoint se situe. +* `kubernetes.io/hostname` - Nom du Node sur lequel l'Endpoint se situe. 
+* `topology.kubernetes.io/zone` - Zone dans laquelle l'Endpoint se situe. +* `topology.kubernetes.io/region` - Region dans laquelle l'Endpoint se situe. -Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que les correspondantes EndpointSlices sont mis-à-jour. Le contrôleur gèrera les EndpointSlices pour tout les Services qui ont un selecteur - [reference: {{< glossary_tooltip text="selecteur" term_id="selector" >}}] - specifié. Celles-ci representeront les IPs des Pods qui correspond au selecteur. +Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que leurs correspondances avec les EndpointSlices sont à jour. +Le contrôleur gère les EndpointSlices pour tous les Services qui ont un sélecteur - [référence: {{< glossary_tooltip text="sélecteur" term_id="selector" >}}] - specifié. Celles-ci représenteront les IPs des Pods qui correspond au sélecteur. ### Capacité d'EndpointSlices -Les EndpointSlices sont limités a une capacité de 100 endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. +Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices -Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. +Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les Endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple: -1. Itérer à travers les EnpointSlices existantes, retirer les endpoints qui ne sont plus voulues et mettre à jour les endpoints qui ont changées. -2. Itérer à travers les EndpointSlices qui ont été modifiées dans la première étape et les remplir avec n'importe quelle endpoint nécéssaire. -3. Si il reste encore des endpoints neuves à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouvelles. +1. Itérer à travers les EnpointSlices existantes, retirer les Endpoints qui ne sont plus voulues et mettre à jour les Endpoints qui ont changées. +2. Itérer à travers les EndpointSlices qui ont été modifiés dans la première étape et les remplir avec n'importe quel Endpoint nécéssaire. +3. S'il reste encore des Endpoints neufs à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouveaux. -Par-dessus tout, la troisème étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouvelles endpoints à ajouter et 2 EndpointSlices qui peuvent accomoder 5 endpoint en plus chacun; cette approche créera une nouvelle EndpointSlice au lieu de remplir les EndpointSlice existantes. 
C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. +Par-dessus tout, la troisième étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouveaux Endpoints à ajouter et 2 EndpointSlices qui peuvent contenir 5 Endpoints en plus chacun; cette approche créera un nouveau EndpointSlice au lieu de remplir les EndpointSlice existants. +C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. -En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les endpoints correspondants qui se feront remplacer. +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les Endpoints correspondants qui se feront remplacer. ## Motivation -Les Endpoints API fournissent une méthode simple et facile à suivre pour les endpoint d'un réseau dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'endpoint d'un réseau. +L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints d'un réseau dans Kubernetes. +Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. +Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau. -Puisque tous les endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. +Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. 
Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. {{% /capture %}} {{% capture whatsnext %}} * [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) -* Lire [Connecté des Application aux Services](/docs/concepts/services-networking/connect-applications-service/) +* Lire [Connecter des applications aux Services](/docs/concepts/services-networking/connect-applications-service/) {{% /capture %}} \ No newline at end of file From f0b4ddf012217514a524e43b857910ab91f9f198 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 9 Mar 2020 15:44:36 -0400 Subject: [PATCH 014/105] Add last minute changes --- .../fr/docs/concepts/services-networking/endpoint-slices.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index bf22340339..a4283fee91 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -81,11 +81,12 @@ Le contrôleur gère les EndpointSlices pour tous les Services qui ont un sélec ### Capacité d'EndpointSlices -Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par defaut. Vous pouvez configurer cela avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. +Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par défaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. ### Distribution d'EndpointSlices -Chaque EndpointSlice a un ensemble de ports qui s'applique à toutes les Endpoints dans la resource. Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. +Chaque EndpointSlice a un ensemble de ports qui s'applique à tous les Endpoints dans la resource. +Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. 
La logique du contrôleur est assez simple: From c61213299d319d4916fe5d01ad7812ed31c2b445 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Mon, 9 Mar 2020 16:03:30 -0400 Subject: [PATCH 015/105] Add changes for missed reviews --- .../services-networking/endpoint-slices.md | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index a4283fee91..c3fdef9a29 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -90,24 +90,27 @@ Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se re Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple: -1. Itérer à travers les EnpointSlices existantes, retirer les Endpoints qui ne sont plus voulues et mettre à jour les Endpoints qui ont changées. +1. Itérer à travers les EnpointSlices existants, retirer les Endpoints qui ne sont plus voulus et mettre à jour les Endpoints qui ont changés. 2. Itérer à travers les EndpointSlices qui ont été modifiés dans la première étape et les remplir avec n'importe quel Endpoint nécéssaire. 3. S'il reste encore des Endpoints neufs à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouveaux. Par-dessus tout, la troisième étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouveaux Endpoints à ajouter et 2 EndpointSlices qui peuvent contenir 5 Endpoints en plus chacun; cette approche créera un nouveau EndpointSlice au lieu de remplir les EndpointSlice existants. C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. -Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement a une EndpointSlice devient relativement coûteux puisqu'ils seront transmit à chaque Node du cluster. Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut entraîner plusieurs EndpointSlices qui ne sont pas plein. +Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement d'un EndpointSlice devient relativement coûteux puisqu'ils seront transmis à chaque Node du cluster. +Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut causer plusieurs EndpointSlices non remplis. -En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petit pour tenir dans un EndpointSlice existante, et sinon, une nouvelle EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour régulières des déploiements permettent également un reconditionnement naturel des EndpointSlices avec tout les pods et les Endpoints correspondants qui se feront remplacer. +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petite pour tenir dans un EndpointSlice existant, et sinon, un nouveau EndpointSlice aura probablement été bientôt nécessaire de toute façon. 
Les mises à jour continues des déploiements fournissent également une compaction naturelle des EndpointSlices avec tous leurs pods et les Endpoints correspondants qui se feront remplacer. ## Motivation -L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints d'un réseau dans Kubernetes. +L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints dans Kubernetes. Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. -Plus particulièrement, ceux-ci comprenaient des défis liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau. +Plus particulièrement, ceux-ci comprenaient des limitations liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau. -Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. Cela affecte les performances des composants Kubernetes (notamment le plan de contrôle) et a donné lieu à une grande quantité de trafic réseau et de traitement lorsque les Endpoints changent. Les EndpointSlices vous aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. +Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. +Cela a affecté les performances des composants Kubernetes (notamment le plan de contrôle) et a causé une grande quantité de trafic réseau et de traitements lorsque les Endpoints changent. +Les EndpointSlices aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. {{% /capture %}} From dae5f0585b49dd46c5e8f66b25d1760a051fa035 Mon Sep 17 00:00:00 2001 From: Harrison Razanajatovo Date: Tue, 10 Mar 2020 09:15:51 -0400 Subject: [PATCH 016/105] Simplify for clarity --- .../fr/docs/concepts/services-networking/endpoint-slices.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md index c3fdef9a29..b06117cc00 100644 --- a/content/fr/docs/concepts/services-networking/endpoint-slices.md +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -23,8 +23,8 @@ _EndpointSlices_ offrent une méthode simple pour suivre les Endpoints d'un rés ## Resource pour EndpointSlice {#endpointslice-resource} -Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de reseau -Endpoints. Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié. +Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de Endpoints. +Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié. Ces EnpointSlices vont inclure des références à n'importe quels Pods qui correspond aux selecteur de Service. EndpointSlices groupent ensemble les Endpoints d'un réseau par combinaisons uniques de Services et de Ports. 
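
The behaviour described in the French EndpointSlice pages above can be observed directly on a cluster that has the EndpointSlice API enabled. The snippet below is only an illustrative aside, not a change carried by any patch here: it assumes a Service named `exemple` (the name used in the sample manifest earlier in these docs) in the current namespace, and `exemple-abc` stands in for whatever generated slice name the controller actually produces.

```shell
# List the EndpointSlices the controller manages for the Service "exemple";
# the controller labels every slice it owns with kubernetes.io/service-name.
kubectl get endpointslices -l kubernetes.io/service-name=exemple

# Inspect one slice (the name here is hypothetical) to see its addressType,
# ports, and the per-endpoint topology fields discussed above.
kubectl describe endpointslice exemple-abc
```

Each slice listed this way should hold no more endpoints than the configured cap: 100 by default, or the value passed to kube-controller-manager via `--max-endpoints-per-slice`.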

From 2ca1c075ad2d6959ddced1661b9f6cc1cb1bb286 Mon Sep 17 00:00:00 2001
From: Niklas Hansson
Date: Fri, 20 Mar 2020 09:43:49 +0100
Subject: [PATCH 017/105] Update the Mac autocomplete to use bash_profile

On OS X, Terminal by default runs a login shell every time, thus I believe it
is simpler for new users to not have to change the default behaviour or source
the bashrc file every time.

https://apple.stackexchange.com/questions/51036/what-is-the-difference-between-bash-profile-and-bashrc
---
 content/en/docs/tasks/tools/install-kubectl.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 83c9d1761b..d2922be1a6 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -419,7 +419,7 @@ You can test if you have bash-completion v2 already installed with `type _init_c
 brew install bash-completion@2
 ```
 
-As stated in the output of this command, add the following to your `~/.bashrc` file:
+As stated in the output of this command, add the following to your `~/.bash_profile` file:
 
 ```shell
 export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
@@ -432,10 +432,10 @@ Reload your shell and verify that bash-completion v2 is correctly installed with
 
 You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
 
-- Source the completion script in your `~/.bashrc` file:
+- Source the completion script in your `~/.bash_profile` file:
 
     ```shell
-    echo 'source <(kubectl completion bash)' >>~/.bashrc
+    echo 'source <(kubectl completion bash)' >>~/.bash_profile
     ```
 
 
@@ -448,8 +448,8 @@ You now have to ensure that the kubectl completion script gets sourced in all yo
 - If you have an alias for kubectl, you can extend shell completion to work with that alias:
 
     ```shell
-    echo 'alias k=kubectl' >>~/.bashrc
-    echo 'complete -F __start_kubectl k' >>~/.bashrc
+    echo 'alias k=kubectl' >>~/.bash_profile
+    echo 'complete -F __start_kubectl k' >>~/.bash_profile
    ```
 
 - If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.

From 5f263770fdd907881eb73f21b24cdf2c775a385f Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 17 Mar 2020 23:39:20 +0000
Subject: [PATCH 018/105] Tweak link to partners

---
 content/en/docs/setup/_index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index 880dd46024..7a99422dee 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -53,6 +53,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b
 
 When evaluating a solution for a production environment, consider which aspects of operating a Kubernetes cluster (or _abstractions_) you want to manage yourself or offload to a provider.
 
-For a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers, see "[Partners](https://kubernetes.io/partners/#conformance)".
+[Kubernetes Partners](https://kubernetes.io/partners/#conformance) includes a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers.
 
{{% /capture %}} From a8032bd74e24a44fab7b8b738d5fa1e28c7a315a Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 17 Mar 2020 23:39:44 +0000 Subject: [PATCH 019/105] Drop k3s as a learning environment "K3s is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances" The product's own web page doesn't mention using it for learning. --- content/en/docs/setup/_index.md | 1 - 1 file changed, 1 deletion(-) diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index 7a99422dee..53c8737439 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -46,7 +46,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b | | [MicroK8s](https://microk8s.io/)| | | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | | | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| -| | [k3s](https://k3s.io)| ## Production environment From 0bf066fa28d6e379d7a8850b96fe9d33542a3813 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 17 Mar 2020 23:43:13 +0000 Subject: [PATCH 020/105] Drop IBM Cloud Private-CE as learning environment These products don't seem like a good fit for learners. --- content/en/docs/setup/_index.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index 53c8737439..35882edc4f 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -44,8 +44,6 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b | [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| | | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| -| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | -| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| ## Production environment From 89c952b91ec0925ba7d387f301f7af481433bade Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 17 Mar 2020 23:47:12 +0000 Subject: [PATCH 021/105] Drop link to CDK on LXD The page this linked to recommends considering microk8s, so let's omit this one and leave microk8s. Why leave microk8s? It's a certified Kubernetes distribution focused on learning environments, and it's multiplatform (ish). 
--- content/en/docs/setup/_index.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index 35882edc4f..16702b40f5 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -40,9 +40,8 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b |Community |Ecosystem | | ------------ | -------- | -| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| -| | [Minishift](https://docs.okd.io/latest/minishift/)| +| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| From 512023837f9527dad6ff139ba105ca3b20f3fef9 Mon Sep 17 00:00:00 2001 From: Karoline Pauls <43616133+karolinepauls@users.noreply.github.com> Date: Thu, 26 Mar 2020 00:45:01 +0000 Subject: [PATCH 022/105] api-concepts.md: Watch bookmarks Replaces "an information" and fixes a lost plural. --- content/en/docs/reference/using-api/api-concepts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index fbdbf14f87..a0b41aa2d4 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -89,7 +89,7 @@ A given Kubernetes server will only preserve a historical list of changes for a ### Watch bookmarks -To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to pass an information that all changes up to a given `resourceVersion` client is requesting has already been sent. Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.: +To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to mark that all changes up to a given `resourceVersion` the client is requesting have already been sent. 
Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.: GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true --- From dfb8d40026ca64e467a1ddf3ee25fb5858d24f0c Mon Sep 17 00:00:00 2001 From: Rajesh Deshpande Date: Fri, 20 Mar 2020 17:49:13 +0530 Subject: [PATCH 023/105] Adding example for DaemonSet Rolling Update task Adding example for DaemonSet Rolling Update task Adding fluentd daemonset example Adding fluentd daemonset example Creating fluend daemonset for update Creating fluend daemonset for update Adding proper description for YAML file Adding proper description for YAML file --- .../tasks/manage-daemon/update-daemon-set.md | 80 +++++++++++-------- .../controllers/fluentd-daemonset-update.yaml | 48 +++++++++++ .../controllers/fluentd-daemonset.yaml | 42 ++++++++++ 3 files changed, 135 insertions(+), 35 deletions(-) create mode 100644 content/en/examples/controllers/fluentd-daemonset-update.yaml create mode 100644 content/en/examples/controllers/fluentd-daemonset.yaml diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index 8eec3c1781..ccbce55313 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -43,21 +43,43 @@ To enable the rolling update feature of a DaemonSet, you must set its You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well. +### Creating a DaemonSet with `RollingUpdate` update strategy -### Step 1: Checking DaemonSet `RollingUpdate` update strategy +This YAML file specifies a DaemonSet with an update strategy as 'RollingUpdate' -First, check the update strategy of your DaemonSet, and make sure it's set to +{{< codenew file="controllers/fluentd-daemonset.yaml" >}} + +After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: + +```shell +kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml +``` + +Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to +update the DaemonSet with `kubectl apply`. 
+ +```shell +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml +``` + +### Checking DaemonSet `RollingUpdate` update strategy + +Check the update strategy of your DaemonSet, and make sure it's set to `RollingUpdate`: ```shell -kubectl get ds/ -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system ``` If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead: ```shell -kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +<<<<<<< HEAD +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +======= +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +>>>>>>> Adding example for DaemonSet Rolling Update task ``` The output from both commands should be: @@ -69,28 +91,13 @@ RollingUpdate If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or manifest accordingly. -### Step 2: Creating a DaemonSet with `RollingUpdate` update strategy -If you have already created the DaemonSet, you may skip this step and jump to -step 3. - -After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: - -```shell -kubectl create -f ds.yaml -``` - -Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to -update the DaemonSet with `kubectl apply`. - -```shell -kubectl apply -f ds.yaml -``` - -### Step 3: Updating a DaemonSet template +### Updating a DaemonSet template Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling -update. This can be done with several different `kubectl` commands. +update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands. + +{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}} #### Declarative commands @@ -99,21 +106,17 @@ If you update DaemonSets using use `kubectl apply`: ```shell -kubectl apply -f ds-v2.yaml +kubectl apply -f https://k8s.io/examples/application/fluentd-daemonset-update.yaml ``` #### Imperative commands If you update DaemonSets using [imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/), -use `kubectl edit` or `kubectl patch`: +use `kubectl edit` : ```shell -kubectl edit ds/ -``` - -```shell -kubectl patch ds/ -p= +kubectl edit ds/fluentd-elasticsearch -n kube-system ``` ##### Updating only the container image @@ -122,21 +125,21 @@ If you just need to update the container image in the DaemonSet template, i.e. 
`.spec.template.spec.containers[*].image`, use `kubectl set image`: ```shell -kubectl set image ds/ = +kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system ``` -### Step 4: Watching the rolling update status +### Watching the rolling update status Finally, watch the rollout status of the latest DaemonSet rolling update: ```shell -kubectl rollout status ds/ +kubectl rollout status ds/fluentd-elasticsearch -n kube-system ``` When the rollout is complete, the output is similar to this: ```shell -daemonset "" successfully rolled out +daemonset "fluentd-elasticsearch" successfully rolled out ``` ## Troubleshooting @@ -156,7 +159,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled o by comparing the output of `kubectl get nodes` and the output of: ```shell -kubectl get pods -l = -o wide +kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system ``` Once you've found those nodes, delete some non-DaemonSet pods from the node to @@ -183,6 +186,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between master and nodes will make DaemonSet unable to detect the right rollout progress. +## Clean up + +Delete DaemonSet from a namespace : + +```shell +kubectl delete ds fluentd-elasticsearch -n kube-system +``` {{% /capture %}} diff --git a/content/en/examples/controllers/fluentd-daemonset-update.yaml b/content/en/examples/controllers/fluentd-daemonset-update.yaml new file mode 100644 index 0000000000..dcf08d4fc9 --- /dev/null +++ b/content/en/examples/controllers/fluentd-daemonset-update.yaml @@ -0,0 +1,48 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: fluentd-elasticsearch + namespace: kube-system + labels: + k8s-app: fluentd-logging +spec: + selector: + matchLabels: + name: fluentd-elasticsearch + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + name: fluentd-elasticsearch + spec: + tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods + - key: node-role.kubernetes.io/master + effect: NoSchedule + containers: + - name: fluentd-elasticsearch + image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 + resources: + limits: + memory: 200Mi + requests: + cpu: 100m + memory: 200Mi + volumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + terminationGracePeriodSeconds: 30 + volumes: + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers diff --git a/content/en/examples/controllers/fluentd-daemonset.yaml b/content/en/examples/controllers/fluentd-daemonset.yaml new file mode 100644 index 0000000000..0e1e7d3345 --- /dev/null +++ b/content/en/examples/controllers/fluentd-daemonset.yaml @@ -0,0 +1,42 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: fluentd-elasticsearch + namespace: kube-system + labels: + k8s-app: fluentd-logging +spec: + selector: + matchLabels: + name: fluentd-elasticsearch + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + name: fluentd-elasticsearch + spec: + tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods + - key: node-role.kubernetes.io/master + effect: NoSchedule + containers: + - name: 
fluentd-elasticsearch + image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 + volumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + terminationGracePeriodSeconds: 30 + volumes: + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers From f3d82cf167a7c9003302b8e124da089ce1176b46 Mon Sep 17 00:00:00 2001 From: Rajesh Deshpande Date: Fri, 27 Mar 2020 12:25:05 +0530 Subject: [PATCH 024/105] Removing junk chars Removing junk chars --- content/en/docs/tasks/manage-daemon/update-daemon-set.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index ccbce55313..e4640e3ccd 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -75,11 +75,7 @@ If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead: ```shell -<<<<<<< HEAD kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -======= -kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' ->>>>>>> Adding example for DaemonSet Rolling Update task ``` The output from both commands should be: From 07fd1c617f534c174ed4f3e9482f45c71cdb7c9a Mon Sep 17 00:00:00 2001 From: Alpha Date: Mon, 30 Mar 2020 11:51:12 +0800 Subject: [PATCH 025/105] add a yaml exmaple for type nodeport --- .../concepts/services-networking/service.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index a62faf1e0f..da9d922b92 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -534,6 +534,25 @@ to just expose one or more nodes' IPs directly. Note that this Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) +For example: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + type: NodePort + selector: + app: MyApp + ports: + - port: 80 + # By default and for convenience, the targetPort is set to the same value as the port field. 
+ targetPort: 80 + # `targetPort` is the port of pod, you would like to expose + NodePort: 30007 + # `NodePort` is the port of node, you would like to expose +``` + ### Type LoadBalancer {#loadbalancer} On cloud providers which support external load balancers, setting the `type` From ff4ebc4fea0e2a22fa9c0224058eda62eeb5400b Mon Sep 17 00:00:00 2001 From: Alpha Date: Mon, 30 Mar 2020 13:28:22 +0800 Subject: [PATCH 026/105] update the yaml example based on review --- content/en/docs/concepts/services-networking/service.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index da9d922b92..7908f8836d 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -546,11 +546,11 @@ spec: app: MyApp ports: - port: 80 - # By default and for convenience, the targetPort is set to the same value as the port field. targetPort: 80 - # `targetPort` is the port of pod, you would like to expose - NodePort: 30007 - # `NodePort` is the port of node, you would like to expose + # By default and for convenience, the `targetPort` is set to the same value as the `port` field. + nodePort: 30007 + # Optional field + # By default and for convenience, the Kubernetes control plane will allocates a port from a range (default: 30000-32767) ``` ### Type LoadBalancer {#loadbalancer} From 1fda1272a572f9643d7fe568f633d4fe48a751ed Mon Sep 17 00:00:00 2001 From: Alpha Date: Mon, 30 Mar 2020 13:31:08 +0800 Subject: [PATCH 027/105] Update service.md --- content/en/docs/concepts/services-networking/service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 7908f8836d..63f7fff957 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -550,7 +550,7 @@ spec: # By default and for convenience, the `targetPort` is set to the same value as the `port` field. 
nodePort: 30007 # Optional field - # By default and for convenience, the Kubernetes control plane will allocates a port from a range (default: 30000-32767) + # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) ``` ### Type LoadBalancer {#loadbalancer} From ee6851e203f4e436cc057c2324d5376fc4373a56 Mon Sep 17 00:00:00 2001 From: Tsahi Duek Date: Mon, 30 Mar 2020 09:35:24 +0300 Subject: [PATCH 028/105] Language en - fix link for pod-topology-spread - markdown lint by vscode plugin --- .../pods/pod-topology-spread-constraints.md | 65 ++++++++++--------- 1 file changed, 33 insertions(+), 32 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 35a373473b..bb1db1907f 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te ### Enable Feature Gate -The `EvenPodsSpread` [feature gate] (/docs/reference/command-line-tools-reference/feature-gates/) +The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled for the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and** {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}. @@ -62,10 +62,10 @@ metadata: name: mypod spec: topologySpreadConstraints: - - maxSkew: - topologyKey: - whenUnsatisfiable: - labelSelector: + - maxSkew: + topologyKey: + whenUnsatisfiable: + labelSelector: ``` You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: @@ -73,8 +73,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s - **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero. - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - - `DoNotSchedule` (default) tells the scheduler not to schedule it. - - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. + - `DoNotSchedule` (default) tells the scheduler not to schedule it. + - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. @@ -160,29 +160,30 @@ There are some implicit conventions worth noting here: - Only the Pods holding the same namespace as the incoming Pod can be matching candidates. 
- Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that: - 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". + + 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". + 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". - Be aware of what will happen if the incomingPod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels. - If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed. - Suppose you have a 5-node cluster ranging from zoneA to zoneC: + Suppose you have a 5-node cluster ranging from zoneA to zoneC: - ``` - +---------------+---------------+-------+ - | zoneA | zoneB | zoneC | - +-------+-------+-------+-------+-------+ - | node1 | node2 | node3 | node4 | node5 | - +-------+-------+-------+-------+-------+ - | P | P | P | | | - +-------+-------+-------+-------+-------+ - ``` + ``` + +---------------+---------------+-------+ + | zoneA | zoneB | zoneC | + +-------+-------+-------+-------+-------+ + | node1 | node2 | node3 | node4 | node5 | + +-------+-------+-------+-------+-------+ + | P | P | P | | | + +-------+-------+-------+-------+-------+ + ``` - and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. + and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. 
+ + {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - ### Cluster-level default constraints {{< feature-state for_k8s_version="v1.18" state="alpha" >}} @@ -207,16 +208,16 @@ kind: KubeSchedulerConfiguration profiles: pluginConfig: - - name: PodTopologySpread - args: - defaultConstraints: - - maxSkew: 1 - topologyKey: failure-domain.beta.kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: failure-domain.beta.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway ``` {{< note >}} -The score produced by default scheduling constraints might conflict with the +The score produced by default scheduling constraints might conflict with the score produced by the [`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins). It is recommended that you disable this plugin in the scheduling profile when @@ -229,14 +230,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are scheduled - more packed or more scattered. - For `PodAffinity`, you can try to pack any number of Pods into qualifying -topology domain(s) + topology domain(s) - For `PodAntiAffinity`, only one Pod can be scheduled into a -single topology domain. + single topology domain. The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different topology domains - to achieve high availability or cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly. -See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details. +See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details. ## Known Limitations From 94ec519f6342c32acfb7306dfe5c8ea19e588435 Mon Sep 17 00:00:00 2001 From: Alpha Date: Mon, 30 Mar 2020 23:21:25 +0800 Subject: [PATCH 029/105] Update service.md --- content/en/docs/concepts/services-networking/service.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 63f7fff957..65891d3ccb 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -545,12 +545,12 @@ spec: selector: app: MyApp ports: + # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - port: 80 targetPort: 80 - # By default and for convenience, the `targetPort` is set to the same value as the `port` field. 
- nodePort: 30007 # Optional field # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) + nodePort: 30007 ``` ### Type LoadBalancer {#loadbalancer} From 56c18bcc080f4f78a5e0a8845fa42f868e8aa440 Mon Sep 17 00:00:00 2001 From: Maksym Vlasov Date: Tue, 31 Mar 2020 21:14:44 +0300 Subject: [PATCH 030/105] Add minimum requirement content (#3) * Localize front page * Localize heading and subheading URLs Close kubernetes-i18n-ukrainian/website#6 Close kubernetes-i18n-ukrainian/website#11 * Uk translation for k8s basics Close kubernetes-i18n-ukrainian/website#9 * Uk localization homepage PR: kubernetes-i18n-ukrainian/website#102 * Buildable version of the Ukrainian website * Adding localized content of the what-is-kubernetes.md Close kubernetes-i18n-ukrainian/website#12 * Localizing site strings in toml file * Localizing tutorials Close kubernetes-i18n-ukrainian/website#32 * Localizing templates * Localize glossary terms Close kubernetes-i18n-ukrainian/website#2 * Create glossary for UK contributors Close kubernetes-i18n-ukrainian/website#101 * Add butuzov to dream team Co-authored-by: Anastasiya Kulyk Co-Authored-By: Maksym Vlasov Co-authored-by: Oleg Butuzov --- OWNERS_ALIASES | 2 + README-uk.md | 4 +- content/uk/_common-resources/index.md | 3 + content/uk/_index.html | 85 ++ content/uk/case-studies/_index.html | 13 + content/uk/docs/_index.md | 3 + content/uk/docs/concepts/_index.md | 123 ++ .../uk/docs/concepts/configuration/_index.md | 5 + .../manage-compute-resources-container.md | 623 +++++++++ .../uk/docs/concepts/configuration/secret.md | 1054 +++++++++++++++ content/uk/docs/concepts/overview/_index.md | 4 + .../concepts/overview/what-is-kubernetes.md | 185 +++ .../concepts/services-networking/_index.md | 4 + .../services-networking/dual-stack.md | 109 ++ .../services-networking/endpoint-slices.md | 188 +++ .../services-networking/service-topology.md | 127 ++ .../concepts/services-networking/service.md | 1197 +++++++++++++++++ content/uk/docs/concepts/storage/_index.md | 4 + .../concepts/storage/persistent-volumes.md | 736 ++++++++++ content/uk/docs/concepts/workloads/_index.md | 4 + .../concepts/workloads/controllers/_index.md | 4 + .../workloads/controllers/deployment.md | 1152 ++++++++++++++++ .../controllers/jobs-run-to-completion.md | 480 +++++++ .../controllers/replicationcontroller.md | 291 ++++ content/uk/docs/contribute/localization_uk.md | 123 ++ content/uk/docs/home/_index.md | 58 + .../docs/reference/glossary/applications.md | 16 + .../glossary/cluster-infrastructure.md | 17 + .../reference/glossary/cluster-operations.md | 17 + content/uk/docs/reference/glossary/cluster.md | 22 + .../docs/reference/glossary/control-plane.md | 17 + .../uk/docs/reference/glossary/data-plane.md | 17 + .../uk/docs/reference/glossary/deployment.md | 23 + content/uk/docs/reference/glossary/index.md | 17 + .../docs/reference/glossary/kube-apiserver.md | 29 + .../glossary/kube-controller-manager.md | 22 + .../uk/docs/reference/glossary/kube-proxy.md | 33 + .../docs/reference/glossary/kube-scheduler.md | 22 + content/uk/docs/reference/glossary/kubelet.md | 23 + content/uk/docs/reference/glossary/pod.md | 23 + .../uk/docs/reference/glossary/selector.md | 22 + content/uk/docs/reference/glossary/service.md | 24 + content/uk/docs/setup/_index.md | 136 ++ .../horizontal-pod-autoscale.md | 293 ++++ .../uk/docs/templates/feature-state-alpha.txt | 7 + .../uk/docs/templates/feature-state-beta.txt | 22 + .../templates/feature-state-deprecated.txt 
| 4 + .../docs/templates/feature-state-stable.txt | 11 + content/uk/docs/templates/index.md | 15 + content/uk/docs/tutorials/_index.md | 90 ++ content/uk/docs/tutorials/hello-minikube.md | 394 ++++++ .../tutorials/kubernetes-basics/_index.html | 138 ++ .../create-cluster/_index.md | 4 + .../create-cluster/cluster-interactive.html | 37 + .../create-cluster/cluster-intro.html | 152 +++ .../kubernetes-basics/deploy-app/_index.md | 4 + .../deploy-app/deploy-interactive.html | 41 + .../deploy-app/deploy-intro.html | 151 +++ .../kubernetes-basics/explore/_index.md | 4 + .../explore/explore-interactive.html | 41 + .../explore/explore-intro.html | 200 +++ .../kubernetes-basics/expose/_index.md | 4 + .../expose/expose-interactive.html | 38 + .../expose/expose-intro.html | 169 +++ .../kubernetes-basics/scale/_index.md | 4 + .../scale/scale-interactive.html | 40 + .../kubernetes-basics/scale/scale-intro.html | 145 ++ .../kubernetes-basics/update/_index.md | 4 + .../update/update-interactive.html | 37 + .../update/update-intro.html | 168 +++ content/uk/examples/controllers/job.yaml | 14 + .../controllers/nginx-deployment.yaml | 21 + .../uk/examples/controllers/replication.yaml | 19 + content/uk/examples/minikube/Dockerfile | 4 + content/uk/examples/minikube/server.js | 9 + .../networking/dual-stack-default-svc.yaml | 11 + .../networking/dual-stack-ipv4-svc.yaml | 12 + .../networking/dual-stack-ipv6-lb-svc.yaml | 15 + .../networking/dual-stack-ipv6-svc.yaml | 12 + i18n/uk.toml | 247 ++++ 80 files changed, 9640 insertions(+), 2 deletions(-) create mode 100644 content/uk/_common-resources/index.md create mode 100644 content/uk/_index.html create mode 100644 content/uk/case-studies/_index.html create mode 100644 content/uk/docs/_index.md create mode 100644 content/uk/docs/concepts/_index.md create mode 100644 content/uk/docs/concepts/configuration/_index.md create mode 100644 content/uk/docs/concepts/configuration/manage-compute-resources-container.md create mode 100644 content/uk/docs/concepts/configuration/secret.md create mode 100644 content/uk/docs/concepts/overview/_index.md create mode 100644 content/uk/docs/concepts/overview/what-is-kubernetes.md create mode 100644 content/uk/docs/concepts/services-networking/_index.md create mode 100644 content/uk/docs/concepts/services-networking/dual-stack.md create mode 100644 content/uk/docs/concepts/services-networking/endpoint-slices.md create mode 100644 content/uk/docs/concepts/services-networking/service-topology.md create mode 100644 content/uk/docs/concepts/services-networking/service.md create mode 100644 content/uk/docs/concepts/storage/_index.md create mode 100644 content/uk/docs/concepts/storage/persistent-volumes.md create mode 100644 content/uk/docs/concepts/workloads/_index.md create mode 100644 content/uk/docs/concepts/workloads/controllers/_index.md create mode 100644 content/uk/docs/concepts/workloads/controllers/deployment.md create mode 100644 content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md create mode 100644 content/uk/docs/concepts/workloads/controllers/replicationcontroller.md create mode 100644 content/uk/docs/contribute/localization_uk.md create mode 100644 content/uk/docs/home/_index.md create mode 100644 content/uk/docs/reference/glossary/applications.md create mode 100644 content/uk/docs/reference/glossary/cluster-infrastructure.md create mode 100644 content/uk/docs/reference/glossary/cluster-operations.md create mode 100644 content/uk/docs/reference/glossary/cluster.md create mode 100644 
content/uk/docs/reference/glossary/control-plane.md create mode 100644 content/uk/docs/reference/glossary/data-plane.md create mode 100644 content/uk/docs/reference/glossary/deployment.md create mode 100644 content/uk/docs/reference/glossary/index.md create mode 100644 content/uk/docs/reference/glossary/kube-apiserver.md create mode 100644 content/uk/docs/reference/glossary/kube-controller-manager.md create mode 100644 content/uk/docs/reference/glossary/kube-proxy.md create mode 100644 content/uk/docs/reference/glossary/kube-scheduler.md create mode 100644 content/uk/docs/reference/glossary/kubelet.md create mode 100644 content/uk/docs/reference/glossary/pod.md create mode 100644 content/uk/docs/reference/glossary/selector.md create mode 100755 content/uk/docs/reference/glossary/service.md create mode 100644 content/uk/docs/setup/_index.md create mode 100644 content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md create mode 100644 content/uk/docs/templates/feature-state-alpha.txt create mode 100644 content/uk/docs/templates/feature-state-beta.txt create mode 100644 content/uk/docs/templates/feature-state-deprecated.txt create mode 100644 content/uk/docs/templates/feature-state-stable.txt create mode 100644 content/uk/docs/templates/index.md create mode 100644 content/uk/docs/tutorials/_index.md create mode 100644 content/uk/docs/tutorials/hello-minikube.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/_index.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/_index.md create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html create mode 100644 content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html create mode 100644 content/uk/examples/controllers/job.yaml create mode 100644 content/uk/examples/controllers/nginx-deployment.yaml create mode 100644 content/uk/examples/controllers/replication.yaml create mode 100644 content/uk/examples/minikube/Dockerfile create mode 100644 content/uk/examples/minikube/server.js create mode 100644 content/uk/examples/service/networking/dual-stack-default-svc.yaml create mode 100644 
content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml create mode 100644 content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml create mode 100644 content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml create mode 100644 i18n/uk.toml diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index ed3ccdbb96..9046573658 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -226,9 +226,11 @@ aliases: - kpucynski sig-docs-uk-owners: # Admins for Ukrainian content - anastyakulyk + - butuzov - MaxymVlasov sig-docs-uk-reviews: # PR reviews for Ukrainian content - anastyakulyk + - butuzov - idvoretskyi - MaxymVlasov - Potapy4 diff --git a/README-uk.md b/README-uk.md index 68d3b0db0a..43d782e09a 100644 --- a/README-uk.md +++ b/README-uk.md @@ -39,7 +39,7 @@ make docker-image make docker-serve ``` -Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. +Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. ## Запуск сайту локально зa допомогою Hugo @@ -51,7 +51,7 @@ make docker-serve make serve ``` -Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. +Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері. ## Спільнота, обговорення, внесок і підтримка diff --git a/content/uk/_common-resources/index.md b/content/uk/_common-resources/index.md new file mode 100644 index 0000000000..ca03031f1e --- /dev/null +++ b/content/uk/_common-resources/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- diff --git a/content/uk/_index.html b/content/uk/_index.html new file mode 100644 index 0000000000..02df4d395d --- /dev/null +++ b/content/uk/_index.html @@ -0,0 +1,85 @@ +--- +title: "Довершена система оркестрації контейнерів" +abstract: "Автоматичне розгортання, масштабування і управління контейнерами" +cid: home +--- + +{{< announcement >}} + +{{< deprecationwarning >}} + +{{< blocks/section id="oceanNodes" >}} +{{% blocks/feature image="flower" %}} + +### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - це система з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. + + +Вона об'єднує контейнери, що утворюють застосунок, у логічні елементи для легкого управління і виявлення. В основі Kubernetes - [15 років досвіду запуску і виконання застосунків у продуктивних середовищах Google](http://queue.acm.org/detail.cfm?id=2898444), поєднані з найкращими ідеями і практиками від спільноти. +{{% /blocks/feature %}} + +{{% blocks/feature image="scalable" %}} + +#### Глобальне масштабування + + +Заснований на тих самих принципах, завдяки яким Google запускає мільярди контейнерів щотижня, Kubernetes масштабується без потреби збільшення вашого штату з експлуатації. 
+ +{{% /blocks/feature %}} + +{{% blocks/feature image="blocks" %}} + +#### Невичерпна функціональність + + +Запущений для локального тестування чи у глобальній корпорації, Kubernetes динамічно зростатиме з вами, забезпечуючи регулярну і легку доставку ваших застосунків незалежно від рівня складності ваших потреб. + +{{% /blocks/feature %}} + +{{% blocks/feature image="suitcase" %}} + +#### Працює всюди + + +Kubernetes - проект з відкритим вихідним кодом. Він дозволяє скористатися перевагами локальної, гібридної чи хмарної інфраструктури, щоб легко переміщати застосунки туди, куди вам потрібно. + +{{% /blocks/feature %}} + +{{< /blocks/section >}} + +{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}} +
+        Проблеми міграції 150+ мікросервісів у Kubernetes
+        Сара Уеллз, технічний директор з експлуатації і безпеки роботи, Financial Times
+        Відвідати KubeCon в Амстердамі, 30.03-02.04 2020
+        Відвідати KubeCon у Шанхаї, 28-30 липня 2020
+{{< /blocks/section >}} + +{{< blocks/kubernetes-features >}} + +{{< blocks/case-studies >}} diff --git a/content/uk/case-studies/_index.html b/content/uk/case-studies/_index.html new file mode 100644 index 0000000000..6c9c75fc44 --- /dev/null +++ b/content/uk/case-studies/_index.html @@ -0,0 +1,13 @@ +--- +title: Case Studies +title: Приклади використання +linkTitle: Case Studies +linkTitle: Приклади використання +bigheader: Kubernetes User Case Studies +bigheader: Приклади використання Kubernetes від користувачів. +abstract: A collection of users running Kubernetes in production. +abstract: Підбірка користувачів, що використовують Kubernetes для робочих навантажень. +layout: basic +class: gridPage +cid: caseStudies +--- diff --git a/content/uk/docs/_index.md b/content/uk/docs/_index.md new file mode 100644 index 0000000000..a601666b67 --- /dev/null +++ b/content/uk/docs/_index.md @@ -0,0 +1,3 @@ +--- +title: Документація +--- diff --git a/content/uk/docs/concepts/_index.md b/content/uk/docs/concepts/_index.md new file mode 100644 index 0000000000..695068aa4a --- /dev/null +++ b/content/uk/docs/concepts/_index.md @@ -0,0 +1,123 @@ +--- +title: Концепції +main_menu: true +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + + +В розділі "Концепції" описані складові системи Kubernetes і абстракції, за допомогою яких Kubernetes реалізовує ваш {{< glossary_tooltip text="кластер" term_id="cluster" length="all" >}}. Цей розділ допоможе вам краще зрозуміти, як працює Kubernetes. + +{{% /capture %}} + +{{% capture body %}} + + + +## Загальна інформація + + +Для роботи з Kubernetes ви використовуєте *об'єкти API Kubernetes* для того, щоб описати *бажаний стан* вашого кластера: які застосунки або інші робочі навантаження ви плануєте запускати, які образи контейнерів вони використовують, кількість реплік, скільки ресурсів мережі та диску ви хочете виділити тощо. Ви задаєте бажаний стан, створюючи об'єкти в Kubernetes API, зазвичай через інтерфейс командного рядка `kubectl`. Ви також можете взаємодіяти із кластером, задавати або змінювати його бажаний стан безпосередньо через Kubernetes API. + + +Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Генератора подій життєвого циклу Пода ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: + + + +* **Kubernetes master** становить собою набір із трьох процесів, запущених на одному вузлі вашого кластера, що визначений як керівний (master). До цих процесів належать: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) і [kube-scheduler](/docs/admin/kube-scheduler/). +* На кожному не-мастер вузлі вашого кластера виконуються два процеси: + * **[kubelet](/docs/admin/kubelet/)**, що обмінюється даними з Kubernetes master. + * **[kube-proxy](/docs/admin/kube-proxy/)**, мережевий проксі, що відображає мережеві сервіси Kubernetes на кожному вузлі. 
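+
+As a quick sketch (assuming a typical cluster where the control plane components run as Pods in the `kube-system` namespace), you can see these processes with `kubectl`:
+
+```shell
+# Control-plane and node components that run as Pods
+# (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, ...):
+kubectl get pods -n kube-system
+
+# kubelet runs directly on each node, typically as a systemd service:
+systemctl status kubelet
+```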
+ + + +## Об'єкти Kubernetes + + +Kubernetes оперує певною кількістю абстракцій, що відображають стан вашої системи: розгорнуті у контейнерах застосунки та робочі навантаження, пов'язані з ними ресурси мережі та диску, інша інформація щодо функціонування вашого кластера. Ці абстракції представлені як об'єкти Kubernetes API. Для більш детальної інформації ознайомтесь з [Об'єктами Kubernetes](/docs/concepts/overview/working-with-objects/kubernetes-objects/). + + +До базових об'єктів Kubernetes належать: + +* [Под *(Pod)*](/docs/concepts/workloads/pods/pod-overview/) +* [Сервіс *(Service)*](/docs/concepts/services-networking/service/) +* [Volume](/docs/concepts/storage/volumes/) +* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) + + +В Kubernetes є також абстракції вищого рівня, які надбудовуються над базовими об'єктами за допомогою [контролерів](/docs/concepts/architecture/controller/) і забезпечують додаткову функціональність і зручність. До них належать: + +* [Deployment](/docs/concepts/workloads/controllers/deployment/) +* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) +* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) +* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) + + + +## Площина управління Kubernetes (*Kubernetes Control Plane*) + + +Різні частини площини управління Kubernetes, такі як Kubernetes Master і kubelet, регулюють, як Kubernetes спілкується з вашим кластером. Площина управління веде облік усіх об'єктів Kubernetes в системі та безперервно, в циклі перевіряє стан цих об'єктів. У будь-який момент часу контрольні цикли, запущені площиною управління, реагуватимуть на зміни у кластері і намагатимуться привести поточний стан об'єктів до бажаного, що заданий у конфігурації. + + +Наприклад, коли за допомогою API Kubernetes ви створюєте Deployment, ви задаєте новий бажаний стан для системи. Площина управління Kubernetes фіксує створення цього об'єкта і виконує ваші інструкції шляхом запуску потрібних застосунків та їх розподілу між вузлами кластера. В такий спосіб досягається відповідність поточного стану бажаному. + + + +### Kubernetes Master + + +Kubernetes Master відповідає за підтримку бажаного стану вашого кластера. Щоразу, як ви взаємодієте з Kubernetes, наприклад при використанні інтерфейсу командного рядка `kubectl`, ви обмінюєтесь даними із Kubernetes master вашого кластера. + + +Слово "master" стосується набору процесів, які управляють станом кластера. Переважно всі ці процеси виконуються на одному вузлі кластера, який також називається master. Master-вузол можна реплікувати для забезпечення високої доступності кластера. + + + +### Вузли Kubernetes + + +Вузлами кластера називають машини (ВМ, фізичні сервери тощо), на яких запущені ваші застосунки та хмарні робочі навантаження. Кожен вузол керується Kubernetes master; ви лише зрідка взаємодіятимете безпосередньо із вузлами. + + +{{% /capture %}} + +{{% capture whatsnext %}} + + +Якщо ви хочете створити нову сторінку у розділі Концепції, у статті +[Використання шаблонів сторінок](/docs/home/contribute/page-templates/) +ви знайдете інформацію щодо типу і шаблона сторінки. 
+ +{{% /capture %}} diff --git a/content/uk/docs/concepts/configuration/_index.md b/content/uk/docs/concepts/configuration/_index.md new file mode 100644 index 0000000000..588d144f6e --- /dev/null +++ b/content/uk/docs/concepts/configuration/_index.md @@ -0,0 +1,5 @@ +--- +title: "Конфігурація" +weight: 80 +--- + diff --git a/content/uk/docs/concepts/configuration/manage-compute-resources-container.md b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md new file mode 100644 index 0000000000..a90b224f8c --- /dev/null +++ b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md @@ -0,0 +1,623 @@ +--- +title: Managing Compute Resources for Containers +content_template: templates/concept +weight: 20 +feature: + # title: Automatic bin packing + title: Автоматичне пакування у контейнери + # description: > + # Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources. + description: > + Автоматичне розміщення контейнерів з огляду на їхні потреби у ресурсах та інші обмеження, при цьому не поступаючись доступністю. Поєднання критичних і "найкращих з можливих" робочих навантажень для ефективнішого використання і більшого заощадження ресурсів. +--- + +{{% capture overview %}} + +When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how +much CPU and memory (RAM) each Container needs. When Containers have resource +requests specified, the scheduler can make better decisions about which nodes to +place Pods on. And when Containers have their limits specified, contention for +resources on a node can be handled in a specified manner. For more details about +the difference between requests and limits, see +[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). + +{{% /capture %}} + + +{{% capture body %}} + +## Resource types + +*CPU* and *memory* are each a *resource type*. A resource type has a base unit. +CPU is specified in units of cores, and memory is specified in units of bytes. +If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources. +Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory +that are much larger than the default page size. + +For example, on a system where the default page size is 4KiB, you could specify a limit, +`hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a +total of 80 MiB), that allocation fails. + +{{< note >}} +You cannot overcommit `hugepages-*` resources. +This is different from the `memory` and `cpu` resources. +{{< /note >}} + +CPU and memory are collectively referred to as *compute resources*, or just +*resources*. Compute +resources are measurable quantities that can be requested, allocated, and +consumed. They are distinct from +[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and +[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified +through the Kubernetes API server. 
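+
+As a minimal sketch of the huge-page limit mentioned above (the Pod name and image are placeholders; huge-page requests must equal their limits):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hugepages-example      # placeholder name
+spec:
+  containers:
+  - name: app
+    image: nginx               # placeholder image
+    resources:
+      requests:
+        hugepages-2Mi: 80Mi    # must equal the limit below
+      limits:
+        hugepages-2Mi: 80Mi    # allocating more than 40 x 2MiB pages fails
+```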
+ +## Resource requests and limits of Pod and Container + +Each Container of a Pod can specify one or more of the following: + +* `spec.containers[].resources.limits.cpu` +* `spec.containers[].resources.limits.memory` +* `spec.containers[].resources.limits.hugepages-` +* `spec.containers[].resources.requests.cpu` +* `spec.containers[].resources.requests.memory` +* `spec.containers[].resources.requests.hugepages-` + +Although requests and limits can only be specified on individual Containers, it +is convenient to talk about Pod resource requests and limits. A +*Pod resource request/limit* for a particular resource type is the sum of the +resource requests/limits of that type for each Container in the Pod. + + +## Meaning of CPU + +Limits and requests for CPU resources are measured in *cpu* units. +One cpu, in Kubernetes, is equivalent to: + +- 1 AWS vCPU +- 1 GCP Core +- 1 Azure vCore +- 1 IBM vCPU +- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading + +Fractional requests are allowed. A Container with +`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much +CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the +expression `100m`, which can be read as "one hundred millicpu". Some people say +"one hundred millicores", and this is understood to mean the same thing. A +request with a decimal point, like `0.1`, is converted to `100m` by the API, and +precision finer than `1m` is not allowed. For this reason, the form `100m` might +be preferred. + +CPU is always requested as an absolute quantity, never as a relative quantity; +0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine. + +## Meaning of memory + +Limits and requests for `memory` are measured in bytes. You can express memory as +a plain integer or as a fixed-point integer using one of these suffixes: +E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, +Mi, Ki. For example, the following represent roughly the same value: + +```shell +128974848, 129e6, 129M, 123Mi +``` + +Here's an example. +The following Pod has two Containers. Each Container has a request of 0.25 cpu +and 64MiB (226 bytes) of memory. Each Container has a limit of 0.5 +cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 +MiB of memory, and a limit of 1 cpu and 256MiB of memory. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: frontend +spec: + containers: + - name: db + image: mysql + env: + - name: MYSQL_ROOT_PASSWORD + value: "password" + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" + - name: wp + image: wordpress + resources: + requests: + memory: "64Mi" + cpu: "250m" + limits: + memory: "128Mi" + cpu: "500m" +``` + +## How Pods with resource requests are scheduled + +When you create a Pod, the Kubernetes scheduler selects a node for the Pod to +run on. Each node has a maximum capacity for each of the resource types: the +amount of CPU and memory it can provide for Pods. The scheduler ensures that, +for each resource type, the sum of the resource requests of the scheduled +Containers is less than the capacity of the node. Note that although actual memory +or CPU resource usage on nodes is very low, the scheduler still refuses to place +a Pod on a node if the capacity check fails. This protects against a resource +shortage on a node when resource usage later increases, for example, during a +daily peak in request rate. 
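+
+To see the numbers the scheduler compares, you can inspect a node directly (a sketch; `<node-name>` is a placeholder for one of your nodes):
+
+```shell
+# Capacity, allocatable resources, and the requests/limits already
+# allocated to Pods on this node:
+kubectl describe node <node-name>
+
+# Only the allocatable resources, in machine-readable form:
+kubectl get node <node-name> -o jsonpath='{.status.allocatable}'
+```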
+ +## How Pods with resource limits are run + +When the kubelet starts a Container of a Pod, it passes the CPU and memory limits +to the container runtime. + +When using Docker: + +- The `spec.containers[].resources.requests.cpu` is converted to its core value, + which is potentially fractional, and multiplied by 1024. The greater of this number + or 2 is used as the value of the + [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint) + flag in the `docker run` command. + +- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and + multiplied by 100. The resulting value is the total amount of CPU time that a container can use + every 100ms. A container cannot use more than its share of CPU time during this interval. + + {{< note >}} + The default quota period is 100ms. The minimum resolution of CPU quota is 1ms. + {{}} + +- The `spec.containers[].resources.limits.memory` is converted to an integer, and + used as the value of the + [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) + flag in the `docker run` command. + +If a Container exceeds its memory limit, it might be terminated. If it is +restartable, the kubelet will restart it, as with any other type of runtime +failure. + +If a Container exceeds its memory request, it is likely that its Pod will +be evicted whenever the node runs out of memory. + +A Container might or might not be allowed to exceed its CPU limit for extended +periods of time. However, it will not be killed for excessive CPU usage. + +To determine whether a Container cannot be scheduled or is being killed due to +resource limits, see the +[Troubleshooting](#troubleshooting) section. + +## Monitoring compute resource usage + +The resource usage of a Pod is reported as part of the Pod status. + +If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md) +is configured for your cluster, then Pod resource usage can be retrieved from +the monitoring system. + +## Troubleshooting + +### My Pods are pending with event message failedScheduling + +If the scheduler cannot find any node where a Pod can fit, the Pod remains +unscheduled until a place can be found. An event is produced each time the +scheduler fails to find a place for the Pod, like this: + +```shell +kubectl describe pod frontend | grep -A 3 Events +``` +``` +Events: + FirstSeen LastSeen Count From Subobject PathReason Message + 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others +``` + +In the preceding example, the Pod named "frontend" fails to be scheduled due to +insufficient CPU resource on the node. Similar error messages can also suggest +failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod +is pending with a message of this type, there are several things to try: + +- Add more nodes to the cluster. +- Terminate unneeded Pods to make room for pending Pods. +- Check that the Pod is not larger than all the nodes. For example, if all the + nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will + never be scheduled. + +You can check node capacities and amounts allocated with the +`kubectl describe nodes` command. For example: + +```shell +kubectl describe nodes e2e-test-node-pool-4lw4 +``` +``` +Name: e2e-test-node-pool-4lw4 +[ ... lines removed for clarity ...] 
+Capacity: + cpu: 2 + memory: 7679792Ki + pods: 110 +Allocatable: + cpu: 1800m + memory: 7474992Ki + pods: 110 +[ ... lines removed for clarity ...] +Non-terminated Pods: (5 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits + --------- ---- ------------ ---------- --------------- ------------- + kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) + kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) + kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%) + kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) + kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) + CPU Requests CPU Limits Memory Requests Memory Limits + ------------ ---------- --------------- ------------- + 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) +``` + +In the preceding output, you can see that if a Pod requests more than 1120m +CPUs or 6.23Gi of memory, it will not fit on the node. + +By looking at the `Pods` section, you can see which Pods are taking up space on +the node. + +The amount of resources available to Pods is less than the node capacity, because +system daemons use a portion of the available resources. The `allocatable` field +[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) +gives the amount of resources that are available to Pods. For more information, see +[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). + +The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured +to limit the total amount of resources that can be consumed. If used in conjunction +with namespaces, it can prevent one team from hogging all the resources. + +### My Container is terminated + +Your Container might get terminated because it is resource-starved. 
To check +whether a Container is being killed because it is hitting a resource limit, call +`kubectl describe pod` on the Pod of interest: + +```shell +kubectl describe pod simmemleak-hra99 +``` +``` +Name: simmemleak-hra99 +Namespace: default +Image(s): saadali/simmemleak +Node: kubernetes-node-tf0f/10.240.216.66 +Labels: name=simmemleak +Status: Running +Reason: +Message: +IP: 10.244.2.75 +Replication Controllers: simmemleak (1/1 replicas created) +Containers: + simmemleak: + Image: saadali/simmemleak + Limits: + cpu: 100m + memory: 50Mi + State: Running + Started: Tue, 07 Jul 2015 12:54:41 -0700 + Last Termination State: Terminated + Exit Code: 1 + Started: Fri, 07 Jul 2015 12:54:30 -0700 + Finished: Fri, 07 Jul 2015 12:54:33 -0700 + Ready: False + Restart Count: 5 +Conditions: + Type Status + Ready False +Events: + FirstSeen LastSeen Count From SubobjectPath Reason Message + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a +``` + +In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` +Container in the Pod was terminated and restarted five times. + +You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status +of previously terminated Containers: + +```shell +kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +``` +``` +Container Name: simmemleak +LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] +``` + +You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory. + +## Local ephemeral storage +{{< feature-state state="beta" >}} + +Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers. + +This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope. + +{{< note >}} +If an optional runtime partition is used, root partition will not hold any image layer or writable layers. 
+{{< /note >}} + +### Requests and limits setting for local ephemeral storage +Each Container of a Pod can specify one or more of the following: + +* `spec.containers[].resources.limits.ephemeral-storage` +* `spec.containers[].resources.requests.ephemeral-storage` + +Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as +a plain integer or as a fixed-point integer using one of these suffixes: +E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, +Mi, Ki. For example, the following represent roughly the same value: + +```shell +128974848, 129e6, 129M, 123Mi +``` + +For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: frontend +spec: + containers: + - name: db + image: mysql + env: + - name: MYSQL_ROOT_PASSWORD + value: "password" + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" + - name: wp + image: wordpress + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" +``` + +### How Pods with ephemeral-storage requests are scheduled + +When you create a Pod, the Kubernetes scheduler selects a node for the Pod to +run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). + +The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node. + +### How Pods with ephemeral-storage limits run + +For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted. + +### Monitoring ephemeral-storage consumption + +When local ephemeral storage is used, it is monitored on an ongoing +basis by the kubelet. The monitoring is performed by scanning each +emptyDir volume, log directories, and writable layers on a periodic +basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log +directories or writable layers) may, at the cluster operator's option, +be managed by use of [project +quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html). +Project quotas were originally implemented in XFS, and have more +recently been ported to ext4fs. Project quotas can be used for both +monitoring and enforcement; as of Kubernetes 1.16, they are available +as alpha functionality for monitoring only. + +Quotas are faster and more accurate than directory scanning. When a +directory is assigned to a project, all files created under a +directory are created in that project, and the kernel merely has to +keep track of how many blocks are in use by files in that project. If +a file is created and deleted, but with an open file descriptor, it +continues to consume space. This space will be tracked by the quota, +but will not be seen by a directory scan. + +Kubernetes uses project IDs starting from 1048576. The IDs in use are +registered in `/etc/projects` and `/etc/projid`. 
If project IDs in +this range are used for other purposes on the system, those project +IDs must be registered in `/etc/projects` and `/etc/projid` to prevent +Kubernetes from using them. + +To enable use of project quotas, the cluster operator must do the +following: + +* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true` + feature gate in the kubelet configuration. This defaults to `false` + in Kubernetes 1.16, so must be explicitly set to `true`. + +* Ensure that the root partition (or optional runtime partition) is + built with project quotas enabled. All XFS filesystems support + project quotas, but ext4 filesystems must be built specially. + +* Ensure that the root partition (or optional runtime partition) is + mounted with project quotas enabled. + +#### Building and mounting filesystems with project quotas enabled + +XFS filesystems require no special action when building; they are +automatically built with project quotas enabled. + +Ext4fs filesystems must be built with quotas enabled, then they must +be enabled in the filesystem: + +``` +% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device +% sudo tune2fs -O project -Q prjquota /dev/block_device + +``` + +To mount the filesystem, both ext4fs and XFS require the `prjquota` +option set in `/etc/fstab`: + +``` +/dev/block_device /var/kubernetes_data defaults,prjquota 0 0 +``` + + +## Extended resources + +Extended resources are fully-qualified resource names outside the +`kubernetes.io` domain. They allow cluster operators to advertise and users to +consume the non-Kubernetes-built-in resources. + +There are two steps required to use Extended Resources. First, the cluster +operator must advertise an Extended Resource. Second, users must request the +Extended Resource in Pods. + +### Managing extended resources + +#### Node-level extended resources + +Node-level extended resources are tied to nodes. + +##### Device plugin managed resources +See [Device +Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +for how to advertise device plugin managed resources on each node. + +##### Other resources +To advertise a new node-level extended resource, the cluster operator can +submit a `PATCH` HTTP request to the API server to specify the available +quantity in the `status.capacity` for a node in the cluster. After this +operation, the node's `status.capacity` will include a new resource. The +`status.allocatable` field is updated automatically with the new resource +asynchronously by the kubelet. Note that because the scheduler uses the node +`status.allocatable` value when evaluating Pod fitness, there may be a short +delay between patching the node capacity with a new resource and the first Pod +that requests the resource to be scheduled on that node. + +**Example:** + +Here is an example showing how to use `curl` to form an HTTP request that +advertises five "example.com/foo" resources on node `k8s-node-1` whose master +is `k8s-master`. + +```shell +curl --header "Content-Type: application/json-patch+json" \ +--request PATCH \ +--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \ +http://k8s-master:8080/api/v1/nodes/k8s-node-1/status +``` + +{{< note >}} +In the preceding request, `~1` is the encoding for the character `/` +in the patch path. The operation path value in JSON-Patch is interpreted as a +JSON-Pointer. For more details, see +[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3). 
+{{< /note >}} + +#### Cluster-level extended resources + +Cluster-level extended resources are not tied to nodes. They are usually managed +by scheduler extenders, which handle the resource consumption and resource quota. + +You can specify the extended resources that are handled by scheduler extenders +in [scheduler policy +configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31). + +**Example:** + +The following configuration for a scheduler policy indicates that the +cluster-level extended resource "example.com/foo" is handled by the scheduler +extender. + +- The scheduler sends a Pod to the scheduler extender only if the Pod requests + "example.com/foo". +- The `ignoredByScheduler` field specifies that the scheduler does not check + the "example.com/foo" resource in its `PodFitsResources` predicate. + +```json +{ + "kind": "Policy", + "apiVersion": "v1", + "extenders": [ + { + "urlPrefix":"", + "bindVerb": "bind", + "managedResources": [ + { + "name": "example.com/foo", + "ignoredByScheduler": true + } + ] + } + ] +} +``` + +### Consuming extended resources + +Users can consume extended resources in Pod specs just like CPU and memory. +The scheduler takes care of the resource accounting so that no more than the +available amount is simultaneously allocated to Pods. + +The API server restricts quantities of extended resources to whole numbers. +Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of +_invalid_ quantities are `0.5` and `1500m`. + +{{< note >}} +Extended resources replace Opaque Integer Resources. +Users can use any domain name prefix other than `kubernetes.io` which is reserved. +{{< /note >}} + +To consume an extended resource in a Pod, include the resource name as a key +in the `spec.containers[].resources.limits` map in the container spec. + +{{< note >}} +Extended resources cannot be overcommitted, so request and limit +must be equal if both are present in a container spec. +{{< /note >}} + +A Pod is scheduled only if all of the resource requests are satisfied, including +CPU, memory and any extended resources. The Pod remains in the `PENDING` state +as long as the resource request cannot be satisfied. + +**Example:** + +The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource). + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: my-pod +spec: + containers: + - name: my-container + image: myimage + resources: + requests: + cpu: 2 + example.com/foo: 1 + limits: + example.com/foo: 1 +``` + + + +{{% /capture %}} + + +{{% capture whatsnext %}} + +* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/). + +* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). 
+ +* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) + +* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) + +{{% /capture %}} diff --git a/content/uk/docs/concepts/configuration/secret.md b/content/uk/docs/concepts/configuration/secret.md new file mode 100644 index 0000000000..6261650692 --- /dev/null +++ b/content/uk/docs/concepts/configuration/secret.md @@ -0,0 +1,1054 @@ +--- +reviewers: +- mikedanese +title: Secrets +content_template: templates/concept +feature: + title: Управління секретами та конфігурацією + description: > + Розгортайте та оновлюйте секрети та конфігурацію застосунку без перезбирання образів, не розкриваючи секрети в конфігурацію стека. +weight: 50 +--- + + +{{% capture overview %}} + +Kubernetes `secret` objects let you store and manage sensitive information, such +as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` +is safer and more flexible than putting it verbatim in a +{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. + +{{% /capture %}} + +{{% capture body %}} + +## Overview of Secrets + +A Secret is an object that contains a small amount of sensitive data such as +a password, a token, or a key. Such information might otherwise be put in a +Pod specification or in an image; putting it in a Secret object allows for +more control over how it is used, and reduces the risk of accidental exposure. + +Users can create secrets, and the system also creates some secrets. + +To use a secret, a pod needs to reference the secret. +A secret can be used with a pod in two ways: as files in a +{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of +its containers, or used by kubelet when pulling images for the pod. + +### Built-in Secrets + +#### Service Accounts Automatically Create and Attach Secrets with API Credentials + +Kubernetes automatically creates secrets which contain credentials for +accessing the API and it automatically modifies your pods to use this type of +secret. + +The automatic creation and use of API credentials can be disabled or overridden +if desired. However, if all you need to do is securely access the apiserver, +this is the recommended workflow. + +See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more +information on how Service Accounts work. + +### Creating your own Secrets + +#### Creating a Secret Using kubectl create secret + +Say that some pods need to access a database. The +username and password that the pods should use is in the files +`./username.txt` and `./password.txt` on your local machine. + +```shell +# Create files needed for rest of example. +echo -n 'admin' > ./username.txt +echo -n '1f2d1e2e67df' > ./password.txt +``` + +The `kubectl create secret` command +packages these files into a Secret and creates +the object on the Apiserver. + +```shell +kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +``` +``` +secret "db-user-pass" created +``` +{{< note >}} +Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. 
In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: + +``` +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' +``` + + You do not need to escape special characters in passwords from files (`--from-file`). +{{< /note >}} + +You can check that the secret was created like this: + +```shell +kubectl get secrets +``` +``` +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s +``` +```shell +kubectl describe secrets/db-user-pass +``` +``` +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +{{< note >}} +`kubectl get` and `kubectl describe` avoid showing the contents of a secret by +default. +This is to protect the secret from being exposed accidentally to an onlooker, +or from being stored in a terminal log. +{{< /note >}} + +See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. + +#### Creating a Secret Manually + +You can also create a Secret in a file first, in json or yaml format, +and then create that object. The +[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contains two maps: +data and stringData. The data field is used to store arbitrary data, encoded using +base64. The stringData field is provided for convenience, and allows you to provide +secret data as unencoded strings. + +For example, to store two strings in a Secret using the data field, convert +them to base64 as follows: + +```shell +echo -n 'admin' | base64 +YWRtaW4= +echo -n '1f2d1e2e67df' | base64 +MWYyZDFlMmU2N2Rm +``` + +Write a Secret that looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): + +```shell +kubectl apply -f ./secret.yaml +``` +``` +secret "mysecret" created +``` + +For certain scenarios, you may wish to use the stringData field instead. This +field allows you to put a non-base64 encoded string directly into the Secret, +and the string will be encoded for you when the Secret is created or updated. + +A practical example of this might be where you are deploying an application +that uses a Secret to store a configuration file, and you want to populate +parts of that configuration file during your deployment process. + +If your application uses the following configuration file: + +```yaml +apiUrl: "https://my.api.com/api/v1" +username: "user" +password: "password" +``` + +You could store this in a Secret using the following: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +stringData: + config.yaml: |- + apiUrl: "https://my.api.com/api/v1" + username: {{username}} + password: {{password}} +``` + +Your deployment tool could then replace the `{{username}}` and `{{password}}` +template variables before running `kubectl apply`. + +stringData is a write-only convenience field. It is never output when +retrieving Secrets. 
For example, if you run the following command: + +```shell +kubectl get secret mysecret -o yaml +``` + +The output will be similar to: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:40:59Z + name: mysecret + namespace: default + resourceVersion: "7225" + uid: c280ad2e-e916-11e8-98f2-025000000001 +type: Opaque +data: + config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 +``` + +If a field is specified in both data and stringData, the value from stringData +is used. For example, the following Secret definition: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= +stringData: + username: administrator +``` + +Results in the following secret: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:46:46Z + name: mysecret + namespace: default + resourceVersion: "7579" + uid: 91460ecb-e917-11e8-98f2-025000000001 +type: Opaque +data: + username: YWRtaW5pc3RyYXRvcg== +``` + +Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. + +The keys of data and stringData must consist of alphanumeric characters, +'-', '_' or '.'. + +**Encoding Note:** The serialized JSON and YAML values of secret data are +encoded as base64 strings. Newlines are not valid within these strings and must +be omitted. When using the `base64` utility on Darwin/macOS users should avoid +using the `-b` option to split long lines. Conversely Linux users *should* add +the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if +`-w` option is not available. + +#### Creating a Secret from Generator +Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) +since 1.14. With this new feature, +you can also create a Secret from generators and then apply it to create the object on +the Apiserver. The generators +should be specified in a `kustomization.yaml` inside a directory. + +For example, to generate a Secret from files `./username.txt` and `./password.txt` +```shell +# Create a kustomization.yaml file with SecretGenerator +cat <./kustomization.yaml +secretGenerator: +- name: db-user-pass + files: + - username.txt + - password.txt +EOF +``` +Apply the kustomization directory to create the Secret object. +```shell +$ kubectl apply -k . +secret/db-user-pass-96mffmfh4k created +``` + +You can check that the secret was created like this: + +```shell +$ kubectl get secrets +NAME TYPE DATA AGE +db-user-pass-96mffmfh4k Opaque 2 51s + +$ kubectl describe secrets/db-user-pass-96mffmfh4k +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +For example, to generate a Secret from literals `username=admin` and `password=secret`, +you can specify the secret generator in `kustomization.yaml` as +```shell +# Create a kustomization.yaml file with SecretGenerator +$ cat <./kustomization.yaml +secretGenerator: +- name: db-user-pass + literals: + - username=admin + - password=secret +EOF +``` +Apply the kustomization directory to create the Secret object. +```shell +$ kubectl apply -k . +secret/db-user-pass-dddghtt9b5 created +``` +{{< note >}} +The generated Secrets name has a suffix appended by hashing the contents. This ensures that a new +Secret is generated each time the contents is modified. 
+{{< /note >}} + +#### Decoding a Secret + +Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section: + +```shell +kubectl get secret mysecret -o yaml +``` +``` +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +Decode the password field: + +```shell +echo 'MWYyZDFlMmU2N2Rm' | base64 --decode +``` +``` +1f2d1e2e67df +``` + +#### Editing a Secret + +An existing secret may be edited with the following command: + +```shell +kubectl edit secrets mysecret +``` + +This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field: + +``` +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file will be +# reopened with the relevant failures. +# +apiVersion: v1 +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +kind: Secret +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: { ... } + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +``` + +## Using Secrets + +Secrets can be mounted as data volumes or be exposed as +{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} +to be used by a container in a pod. They can also be used by other parts of the +system, without being directly exposed to the pod. For example, they can hold +credentials that other parts of the system should use to interact with external +systems on your behalf. + +### Using Secrets as Files from a Pod + +To consume a Secret in a volume in a Pod: + +1. Create a secret or use an existing one. Multiple pods can reference the same secret. +1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the secret object. +1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear. +1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`. + +This is an example of a pod that mounts a secret in a volume: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret +``` + +Each secret you want to use needs to be referred to in `.spec.volumes`. + +If there are multiple containers in the pod, then each container needs its +own `volumeMounts` block, but only one `.spec.volumes` is needed per secret. + +You can package many files into one secret, or use many secrets, whichever is convenient. + +**Projection of secret keys to specific paths** + +We can also control the paths within the volume where Secret keys are projected. 
+You can use `.spec.volumes[].secret.items` field to change target path of each key: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username +``` + +What will happen: + +* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`. +* `password` secret is not projected + +If `.spec.volumes[].secret.items` is used, only keys specified in `items` are projected. +To consume all keys from the secret, all of them must be listed in the `items` field. +All listed keys must exist in the corresponding secret. Otherwise, the volume is not created. + +**Secret files permissions** + +You can also specify the permission mode bits files part of a secret will have. +If you don't specify any, `0644` is used by default. You can specify a default +mode for the whole secret volume and override per key if needed. + +For example, you can specify a default mode like this: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + defaultMode: 256 +``` + +Then, the secret will be mounted on `/etc/foo` and all the files created by the +secret volume mount will have permission `0400`. + +Note that the JSON spec doesn't support octal notation, so use the value 256 for +0400 permissions. If you use yaml instead of json for the pod, you can use octal +notation to specify permissions in a more natural way. + +You can also use mapping, as in the previous example, and specify different +permission for different files like this: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username + mode: 511 +``` + +In this case, the file resulting in `/etc/foo/my-group/my-username` will have +permission value of `0777`. Owing to JSON limitations, you must specify the mode +in decimal notation. + +Note that this permission value might be displayed in decimal notation if you +read it later. + +**Consuming Secret Values from Volumes** + +Inside the container that mounts a secret volume, the secret keys appear as +files and the secret values are base-64 decoded and stored inside these files. +This is the result of commands +executed inside the container from the example above: + +```shell +ls /etc/foo/ +``` +``` +username +password +``` + +```shell +cat /etc/foo/username +``` +``` +admin +``` + + +```shell +cat /etc/foo/password +``` +``` +1f2d1e2e67df +``` + +The program in a container is responsible for reading the secrets from the +files. + +**Mounted Secrets are updated automatically** + +When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. +Kubelet is checking whether the mounted secret is fresh on every periodic sync. +However, it is using its local cache for getting the current value of the Secret. 
+The type of the cache is configurable using the (`ConfigMapAndSecretChangeDetectionStrategy` field in +[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). +It can be either propagated via watch (default), ttl-based, or simply redirecting +all requests to directly kube-apiserver. +As a result, the total delay from the moment when the Secret is updated to the moment +when new keys are projected to the Pod can be as long as kubelet sync period + cache +propagation delay, where cache propagation delay depends on the chosen cache type +(it equals to watch propagation delay, ttl of cache, or zero corespondingly). + +{{< note >}} +A container using a Secret as a +[subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive +Secret updates. +{{< /note >}} + +### Using Secrets as Environment Variables + +To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}} +in a pod: + +1. Create a secret or use an existing one. Multiple pods can reference the same secret. +1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`. +1. Modify your image and/or command line so that the program looks for values in the specified environment variables + +This is an example of a pod that uses secrets from environment variables: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-env-pod +spec: + containers: + - name: mycontainer + image: redis + env: + - name: SECRET_USERNAME + valueFrom: + secretKeyRef: + name: mysecret + key: username + - name: SECRET_PASSWORD + valueFrom: + secretKeyRef: + name: mysecret + key: password + restartPolicy: Never +``` + +**Consuming Secret Values from Environment Variables** + +Inside a container that consumes a secret in an environment variables, the secret keys appear as +normal environment variables containing the base-64 decoded values of the secret data. +This is the result of commands executed inside the container from the example above: + +```shell +echo $SECRET_USERNAME +``` +``` +admin +``` +```shell +echo $SECRET_PASSWORD +``` +``` +1f2d1e2e67df +``` + +### Using imagePullSecrets + +An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry +password to the Kubelet so it can pull a private image on behalf of your Pod. + +**Manually specifying an imagePullSecret** + +Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) + +### Arranging for imagePullSecrets to be Automatically Attached + +You can manually create an imagePullSecret, and reference it from +a serviceAccount. Any pods created with that serviceAccount +or that default to use that serviceAccount, will get their imagePullSecret +field set to that of the service account. +See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) + for a detailed explanation of that process. + +### Automatic Mounting of Manually Created Secrets + +Manually created secrets (e.g. 
one containing a token for accessing a github account) +can be automatically attached to pods based on their service account. +See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process. + +## Details + +### Restrictions + +Secret volume sources are validated to ensure that the specified object +reference actually points to an object of type `Secret`. Therefore, a secret +needs to be created before any pods that depend on it. + +Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. +They can only be referenced by pods in that same namespace. + +Individual secrets are limited to 1MiB in size. This is to discourage creation +of very large secrets which would exhaust apiserver and kubelet memory. +However, creation of many smaller secrets could also exhaust memory. More +comprehensive limits on memory usage due to secrets is a planned feature. + +Kubelet only supports use of secrets for Pods it gets from the API server. +This includes any pods created using kubectl, or indirectly via a replication +controller. It does not include pods created via the kubelets +`--manifest-url` flag, its `--config` flag, or its REST API (these are +not common ways to create pods.) + +Secrets must be created before they are consumed in pods as environment +variables unless they are marked as optional. References to Secrets that do +not exist will prevent the pod from starting. + +References via `secretKeyRef` to keys that do not exist in a named Secret +will prevent the pod from starting. + +Secrets used to populate environment variables via `envFrom` that have keys +that are considered invalid environment variable names will have those keys +skipped. The pod will be allowed to start. There will be an event whose +reason is `InvalidVariableNames` and the message will contain the list of +invalid keys that were skipped. The example shows a pod which refers to the +default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad. + +```shell +kubectl get events +``` +``` +LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON +0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. +``` + +### Secret and Pod Lifetime interaction + +When a pod is created via the API, there is no check whether a referenced +secret exists. Once a pod is scheduled, the kubelet will try to fetch the +secret value. If the secret cannot be fetched because it does not exist or +because of a temporary lack of connection to the API server, kubelet will +periodically retry. It will report an event about the pod explaining the +reason it is not started yet. Once the secret is fetched, the kubelet will +create and mount a volume containing it. None of the pod's containers will +start until all the pod's volumes are mounted. + +## Use cases + +### Use-Case: Pod with ssh keys + +Create a kustomization.yaml with SecretGenerator containing some ssh keys: + +```shell +kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +``` + +``` +secret "ssh-key-secret" created +``` + +{{< caution >}} +Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. 
Use a service account that you intend to be accessible to all the users with whom you share the Kubernetes cluster, and that you can revoke if the keys are compromised.
+{{< /caution >}}
+
+
+Now we can create a pod which references the secret with the ssh key and
+consumes it in a volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-test-pod
+  labels:
+    name: secret-test
+spec:
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: ssh-key-secret
+  containers:
+  - name: ssh-test-container
+    image: mySshImage
+    volumeMounts:
+    - name: secret-volume
+      readOnly: true
+      mountPath: "/etc/secret-volume"
+```
+
+When the container's command runs, the pieces of the key will be available in:
+
+```shell
+/etc/secret-volume/ssh-publickey
+/etc/secret-volume/ssh-privatekey
+```
+
+The container is then free to use the secret data to establish an ssh connection.
+
+### Use-Case: Pods with prod / test credentials
+
+This example illustrates a pod which consumes a secret containing prod
+credentials and another pod which consumes a secret with test environment
+credentials.
+
+Create the secrets:
+
+```shell
+kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
+```
+```
+secret "prod-db-secret" created
+```
+
+```shell
+kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
+```
+```
+secret "test-db-secret" created
+```
+{{< note >}}
+Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
+
+```
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb'
+```
+
+You do not need to escape special characters in passwords from files (`--from-file`).
+{{< /note >}}
+
+Now make the pods:
+
+```shell
+cat <<EOF > pod.yaml
+apiVersion: v1
+kind: List
+items:
+- kind: Pod
+  apiVersion: v1
+  metadata:
+    name: prod-db-client-pod
+    labels:
+      name: prod-db-client
+  spec:
+    volumes:
+    - name: secret-volume
+      secret:
+        secretName: prod-db-secret
+    containers:
+    - name: db-client-container
+      image: myClientImage
+      volumeMounts:
+      - name: secret-volume
+        readOnly: true
+        mountPath: "/etc/secret-volume"
+- kind: Pod
+  apiVersion: v1
+  metadata:
+    name: test-db-client-pod
+    labels:
+      name: test-db-client
+  spec:
+    volumes:
+    - name: secret-volume
+      secret:
+        secretName: test-db-secret
+    containers:
+    - name: db-client-container
+      image: myClientImage
+      volumeMounts:
+      - name: secret-volume
+        readOnly: true
+        mountPath: "/etc/secret-volume"
+EOF
+```
+
+Add the pods to a kustomization.yaml:
+
+```shell
+cat <<EOF >> kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Apply all of these objects to the API server:
+
+```shell
+kubectl apply -k .
+```
+
+Both containers will have the following files present on their filesystems, with values specific to each container's environment:
+
+```shell
+/etc/secret-volume/username
+/etc/secret-volume/password
+```
+
+Note how the specs for the two pods differ only in one field; this facilitates
+creating pods with different capabilities from a common pod config template.
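+
+As an aside, the imperative `kubectl create secret` commands above can also be expressed declaratively. A rough, untested sketch of a single `kustomization.yaml` that generates both secrets and includes `pod.yaml` could look like the following (by default, kustomize appends a content-based hash to generated Secret names and rewrites the matching `secretName` references in the listed resources):
+
+```yaml
+secretGenerator:
+- name: prod-db-secret
+  literals:
+  - username=produser
+  - password=Y4nys7f11
+- name: test-db-secret
+  literals:
+  - username=testuser
+  - password=iluvtests
+resources:
+- pod.yaml
+```
+
+With such a file in place, a single `kubectl apply -k .` would create the secrets and the pods together.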
+ +You could further simplify the base pod specification by using two Service Accounts: +one called, say, `prod-user` with the `prod-db-secret`, and one called, say, +`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: prod-db-client-pod + labels: + name: prod-db-client +spec: + serviceAccount: prod-db-client + containers: + - name: db-client-container + image: myClientImage +``` + +### Use-case: Dotfiles in secret volume + +In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply +make that key begin with a dot. For example, when the following secret is mounted into a volume: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: k8s.gcr.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + + +The `secret-volume` will contain a single file, called `.secret-file`, and +the `dotfile-test-container` will have this file present at the path +`/etc/secret-volume/.secret-file`. + +{{< note >}} +Files beginning with dot characters are hidden from the output of `ls -l`; +you must use `ls -la` to see them when listing directory contents. +{{< /note >}} + +### Use-case: Secret visible to one container in a pod + +Consider a program that needs to handle HTTP requests, do some complex business +logic, and then sign some messages with an HMAC. Because it has complex +application logic, there might be an unnoticed remote file reading exploit in +the server, which could expose the private key to an attacker. + +This could be divided into two processes in two containers: a frontend container +which handles user interaction and business logic, but which cannot see the +private key; and a signer container that can see the private key, and responds +to simple signing requests from the frontend (e.g. over localhost networking). + +With this partitioned approach, an attacker now has to trick the application +server into doing something rather arbitrary, which may be harder than getting +it to read a file. + + + +## Best practices + +### Clients that use the secrets API + +When deploying applications that interact with the secrets API, access should be +limited using [authorization policies]( +/docs/reference/access-authn-authz/authorization/) such as [RBAC]( +/docs/reference/access-authn-authz/rbac/). + +Secrets often hold values that span a spectrum of importance, many of which can +cause escalations within Kubernetes (e.g. service account tokens) and to +external systems. Even if an individual app can reason about the power of the +secrets it expects to interact with, other apps within the same namespace can +render those assumptions invalid. + +For these reasons `watch` and `list` requests for secrets within a namespace are +extremely powerful capabilities and should be avoided, since listing secrets allows +the clients to inspect the values of all secrets that are in that namespace. The ability to +`watch` and `list` all secrets in a cluster should be reserved for only the most +privileged, system-level components. 
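+
+To make that distinction concrete, here is a hypothetical RBAC Role that grants `get` on one named Secret and deliberately omits `list` and `watch` (the namespace, Role name, and Secret name are placeholders, not objects defined elsewhere in this document):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: my-app                # placeholder namespace
+  name: my-app-secret-reader       # placeholder Role name
+rules:
+- apiGroups: [""]                  # Secrets live in the core API group
+  resources: ["secrets"]
+  resourceNames: ["my-app-db-credentials"]  # only this one Secret is readable
+  verbs: ["get"]                   # intentionally no list or watch
+```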
+ +Applications that need to access the secrets API should perform `get` requests on +the secrets they need. This lets administrators restrict access to all secrets +while [white-listing access to individual instances]( +/docs/reference/access-authn-authz/rbac/#referring-to-resources) that +the app needs. + +For improved performance over a looping `get`, clients can design resources that +reference a secret then `watch` the resource, re-requesting the secret when the +reference changes. Additionally, a ["bulk watch" API]( +https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md) +to let clients `watch` individual resources has also been proposed, and will likely +be available in future releases of Kubernetes. + +## Security Properties + + +### Protections + +Because `secret` objects can be created independently of the `pods` that use +them, there is less risk of the secret being exposed during the workflow of +creating, viewing, and editing pods. The system can also take additional +precautions with `secret` objects, such as avoiding writing them to disk where +possible. + +A secret is only sent to a node if a pod on that node requires it. +Kubelet stores the secret into a `tmpfs` so that the secret is not written +to disk storage. Once the Pod that depends on the secret is deleted, kubelet +will delete its local copy of the secret data as well. + +There may be secrets for several pods on the same node. However, only the +secrets that a pod requests are potentially visible within its containers. +Therefore, one Pod does not have access to the secrets of another Pod. + +There may be several containers in a pod. However, each container in a pod has +to request the secret volume in its `volumeMounts` for it to be visible within +the container. This can be used to construct useful [security partitions at the +Pod level](#use-case-secret-visible-to-one-container-in-a-pod). + +On most Kubernetes-project-maintained distributions, communication between user +to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. +Secrets are protected when transmitted over these channels. + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) +for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}. + +### Risks + + - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; + therefore: + - Administrators should enable encryption at rest for cluster data (requires v1.13 or later) + - Administrators should limit access to etcd to admin users + - Administrators may want to wipe/shred disks used by etcd when no longer in use + - If running etcd in a cluster, administrators should make sure to use SSL/TLS + for etcd peer-to-peer communication. + - If you configure the secret through a manifest (JSON or YAML) file which has + the secret data encoded as base64, sharing this file or checking it in to a + source repository means the secret is compromised. Base64 encoding is _not_ an + encryption method and is considered the same as plain text. + - Applications still need to protect the value of secret after reading it from the volume, + such as not accidentally logging it or transmitting it to an untrusted party. + - A user who can create a pod that uses a secret can also see the value of that secret. 
Even + if apiserver policy does not allow that user to read the secret object, the user could + run a pod which exposes the secret. + - Currently, anyone with root on any node can read _any_ secret from the apiserver, + by impersonating the kubelet. It is a planned feature to only send secrets to + nodes that actually require them, to restrict the impact of a root exploit on a + single node. + + +{{% capture whatsnext %}} + +{{% /capture %}} diff --git a/content/uk/docs/concepts/overview/_index.md b/content/uk/docs/concepts/overview/_index.md new file mode 100644 index 0000000000..efffaf0892 --- /dev/null +++ b/content/uk/docs/concepts/overview/_index.md @@ -0,0 +1,4 @@ +--- +title: "Огляд" +weight: 20 +--- diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000..7d484d3bd6 --- /dev/null +++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,185 @@ +--- +reviewers: +- bgrant0607 +- mikedanese +title: Що таке Kubernetes? +content_template: templates/concept +weight: 10 +card: + name: concepts + weight: 10 +--- + +{{% capture overview %}} + +Ця сторінка являє собою узагальнений огляд Kubernetes. +{{% /capture %}} + +{{% capture body %}} + +Kubernetes - це платформа з відкритим вихідним кодом для управління контейнеризованими робочими навантаженнями та супутніми службами. Її основні характеристики - кросплатформенність, розширюваність, успішне використання декларативної конфігурації та автоматизації. Вона має гігантську, швидкопрогресуючу екосистему. + + +Назва Kubernetes походить з грецької та означає керманич або пілот. Google відкрив доступ до вихідного коду проекту Kubernetes у 2014 році. Kubernetes побудовано [на базі п'ятнадцятирічного досвіду, що Google отримав, оперуючи масштабними робочими навантаженнями](https://ai.google/research/pubs/pub43438) у купі з найкращими у своєму класі ідеями та практиками, які може запропонувати спільнота. + + +## Озираючись на першопричини + + +Давайте повернемось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним. + +![Еволюція розгортання](/images/docs/Container_Evolution.svg) + + +**Ера традиційного розгортання:** На початку організації запускали застосунки на фізичних серверах. Оскільки в такий спосіб не було можливості задати обмеження використання ресурсів, це спричиняло проблеми виділення та розподілення ресурсів на фізичних серверах. Наприклад: якщо багато застосунків було запущено на фізичному сервері, могли траплятись випадки, коли один застосунок забирав собі найбільше ресурсів, внаслідок чого інші програми просто не справлялись з обов'язками. Рішенням може бути запуск кожного застосунку на окремому фізичному сервері. Але такий підхід погано масштабується, оскільки ресурси не повністю використовуються; на додачу, це дорого, оскільки організаціям потрібно опікуватись багатьма фізичними серверами. + + +**Ера віртуалізованого розгортання:** Як рішення - була представлена віртуалізація. Вона дозволяє запускати численні віртуальні машини (Virtual Machines або VMs) на одному фізичному ЦПУ сервера. Віртуалізація дозволила застосункам бути ізольованими у межах віртуальних машин та забезпечувала безпеку, оскільки інформація застосунку на одній VM не була доступна застосунку на іншій VM. + + +Віртуалізація забезпечує краще використання ресурсів на фізичному сервері та кращу масштабованість, оскільки дозволяє легко додавати та оновлювати застосунки, зменшує витрати на фізичне обладнання тощо. 
З віртуалізацією ви можете представити ресурси у вигляді одноразових віртуальних машин. + + +Кожна VM є повноцінною машиною з усіма компонентами, включно з власною операційною системою, що запущені поверх віртуалізованого апаратного забезпечення. + + +**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саму тому контейнери вважаються легковісними. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем. + +Контейнери стали популярними, бо надавали додаткові переваги, такі як: + + + +* Створення та розгортання застосунків за методологією Agile: спрощене та більш ефективне створення образів контейнерів у порівнянні до використання образів віртуальних машин. +* Безперервна розробка, інтеграція та розгортання: забезпечення надійних та безперервних збирань образів контейнерів, їх швидке розгортання та легкі відкатування (за рахунок незмінності образів). +* Розподіл відповідальності команд розробки та експлуатації: створення образів контейнерів застосунків під час збирання/релізу на противагу часу розгортання, і як наслідок, вивільнення застосунків із інфраструктури. +* Спостереження не лише за інформацією та метриками на рівні операційної системи, але й за станом застосунку та іншими сигналами. +* Однорідність середовища для розробки, тестування та робочого навантаження: запускається так само як на робочому комп'ютері, так і у хмарного провайдера. +* ОС та хмарна кросплатформність: запускається на Ubuntu, RHEL, CoreOS, у власному дата-центрі, у Google Kubernetes Engine і взагалі будь-де. +* Керування орієнтоване на застосунки: підвищення рівня абстракції від запуску операційної системи у віртуальному апаратному забезпеченні до запуску застосунку в операційній системі, використовуючи логічні ресурси. +* Нещільно зв'язані, розподілені, еластичні, вивільнені мікросервіси: застосунки розбиваються на менші, незалежні частини для динамічного розгортання та управління, на відміну від монолітної архітектури, що працює на одній великій виділеній машині. +* Ізоляція ресурсів: передбачувана продуктивність застосунку. +* Використання ресурсів: висока ефективність та щільність. + + +## Чому вам потрібен Kebernetes і що він може робити + + +Контейнери - це прекрасний спосіб упакувати та запустити ваші застосунки. У прод оточенні вам потрібно керувати контейнерами, в яких працюють застосунки, і стежити, щоб не було простою. Наприклад, якщо один контейнер припиняє роботу, інший має бути запущений йому на заміну. Чи не легше було б, якби цим керувала сама система? + + +Ось де Kubernetes приходить на допомогу! Kubernetes надає вам каркас для еластичного запуску розподілених систем. Він опікується масштабуванням та аварійним відновленням вашого застосунку, пропонує шаблони розгортань тощо. Наприклад, Kubernetes дозволяє легко створювати розгортання за стратегією canary у вашій системі. + + +Kubernetes надає вам: + + + +* **Виявлення сервісів та балансування навантаження** +Kubernetes може надавати доступ до контейнера, використовуючи DNS-ім'я або його власну IP-адресу. Якщо контейнер зазнає завеликого мережевого навантаження, Kubernetes здатний збалансувати та розподілити його таким чином, щоб якість обслуговування залишалась стабільною. 
+* **Оркестрація сховища інформації** +Kubernetes дозволяє вам автоматично монтувати системи збереження інформації на ваш вибір: локальні сховища, рішення від хмарних провайдерів тощо. +* **Автоматичне розгортання та відкатування** +За допомогою Kubernetes ви можете описати бажаний стан контейнерів, що розгортаються, і він регульовано простежить за виконанням цього стану. Наприклад, ви можете автоматизувати в Kubernetes процеси створення нових контейнерів для розгортання, видалення існуючих контейнерів і передачу їхніх ресурсів на новостворені контейнери. +* **Автоматичне розміщення задач** +Ви надаєте Kubernetes кластер для запуску контейнерізованих задач і вказуєте, скільки ресурсів ЦПУ та пам'яті (RAM) необхідно для роботи кожного контейнера. Kubernetes розподіляє контейнери по вузлах кластера для максимально ефективного використання ресурсів. +* **Самозцілення** +Kubernetes перезапускає контейнери, що відмовили; заміняє контейнери; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності. +* **Управління секретами та конфігурацією** +Kubernetes дозволяє вам зберігати та керувати чутливою інформацією, такою як паролі, OAuth токени та SSH ключі. Ви можете розгортати та оновлювати секрети та конфігурацію без перезбирання образів ваших контейнерів, не розкриваючи секрети в конфігурацію стека. + + + +## Чим не є Kubernetes + + +Kubernetes не є комплексною системою PaaS (Платформа як послуга) у традиційному розумінні. Оскільки Kubernetes оперує швидше на рівні контейнерів, аніж на рівні апаратного забезпечення, деяка загальнозастосована функціональність і справді є спільною з PaaS, як-от розгортання, масштабування, розподіл навантаження, логування і моніторинг. Водночас Kubernetes не є монолітним, а вищезазначені особливості підключаються і є опціональними. Kubernetes надає будівельні блоки для створення платформ для розробників, але залишає за користувачем право вибору у важливих питаннях. + + +Kubernetes: + + + +* Не обмежує типи застосунків, що підтримуються. Kubernetes намагається підтримувати найрізноманітніші типи навантажень, включно із застосунками зі станом (stateful) та без стану (stateless), навантаження по обробці даних тощо. Якщо ваш застосунок можна контейнеризувати, він чудово запуститься під Kubernetes. +* Не розгортає застосунки з вихідного коду та не збирає ваші застосунки. Процеси безперервної інтеграції, доставки та розгортання (CI/CD) визначаються на рівні організації, та в залежності від технічних вимог. +* Не надає сервіси на рівні застосунків як вбудовані: програмне забезпечення проміжного рівня (наприклад, шина передачі повідомлень), фреймворки обробки даних (наприклад, Spark), бази даних (наприклад, MySQL), кеш, некластерні системи збереження інформації (наприклад, Ceph). Ці компоненти можуть бути запущені у Kubernetes та/або бути доступними для застосунків за допомогою спеціальних механізмів, наприклад [Open Service Broker](https://openservicebrokerapi.org/). +* Не нав'язує використання інструментів для логування, моніторингу та сповіщень, натомість надає певні інтеграційні рішення як прототипи, та механізми зі збирання та експорту метрик. +* Не надає та не змушує використовувати якусь конфігураційну мову/систему (як наприклад `Jsonnet`), натомість надає можливість використовувати API, що може бути використаний довільними формами декларативних специфікацій. 
+* Не надає і не запроваджує жодних систем машинної конфігурації, підтримки, управління або самозцілення. +* На додачу, Kubernetes - не просто система оркестрації. Власне кажучи, вона усуває потребу оркестрації як такої. Технічне визначення оркестрації - це запуск визначених процесів: спочатку A, за ним B, потім C. На противагу, Kubernetes складається з певної множини незалежних, складних процесів контролерів, що безперервно опрацьовують стан у напрямку, що заданий бажаною конфігурацією. Неважливо, як ви дістанетесь з пункту A до пункту C. Централізоване управління також не є вимогою. Все це виливається в систему, яку легко використовувати, яка є потужною, надійною, стійкою та здатною до легкого розширення. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Перегляньте [компоненти Kubernetes](/docs/concepts/overview/components/) +* Готові [розпочати роботу](/docs/setup/)? +{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/_index.md b/content/uk/docs/concepts/services-networking/_index.md new file mode 100644 index 0000000000..634694311a --- /dev/null +++ b/content/uk/docs/concepts/services-networking/_index.md @@ -0,0 +1,4 @@ +--- +title: "Сервіси, балансування навантаження та мережа" +weight: 60 +--- diff --git a/content/uk/docs/concepts/services-networking/dual-stack.md b/content/uk/docs/concepts/services-networking/dual-stack.md new file mode 100644 index 0000000000..a4e7bf57af --- /dev/null +++ b/content/uk/docs/concepts/services-networking/dual-stack.md @@ -0,0 +1,109 @@ +--- +reviewers: +- lachie83 +- khenidak +- aramase +title: IPv4/IPv6 dual-stack +feature: + title: Подвійний стек IPv4/IPv6 + description: > + Призначення IPv4- та IPv6-адрес подам і сервісам. + +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} + + IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}. + +If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses. + +{{% /capture %}} + +{{% capture body %}} + +## Supported Features + +Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: + + * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) + * IPv4 and IPv6 enabled Services (each Service must be for a single address family) + * Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces + +## Prerequisites + +The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters: + + * Kubernetes 1.16 or later + * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) + * A network plugin that supports dual-stack (such as Kubenet or Calico) + * Kube-proxy running in mode IPVS + +## Enable IPv4/IPv6 dual-stack + +To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments: + + * kube-controller-manager: + * `--feature-gates="IPv6DualStack=true"` + * `--cluster-cidr=,` eg. 
`--cluster-cidr=10.244.0.0/16,fc00::/24`
+   * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
+   * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6
+ * kubelet:
+   * `--feature-gates="IPv6DualStack=true"`
+ * kube-proxy:
+   * `--proxy-mode=ipvs`
+   * `--cluster-cidrs=<IPv4 CIDR>,<IPv6 CIDR>`
+   * `--feature-gates="IPv6DualStack=true"`
+
+{{< caution >}}
+If you specify an IPv6 address block larger than a /24 via `--cluster-cidr` on the command line, that assignment will fail.
+{{< /caution >}}
+
+## Services
+
+If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service.
+You can only set this field when creating a new Service. Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field is not a requirement for [egress](#egress-traffic) traffic.
+
+{{< note >}}
+The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager.
+{{< /note >}}
+
+You can set `.spec.ipFamily` to either:
+
+ * `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4`
+ * `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6`
+
+The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+
+The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}
+
+For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range`.
+
+{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}
+
+### Type LoadBalancer
+
+On cloud providers which support IPv6-enabled external load balancers, setting the `type` field to `LoadBalancer` in addition to setting the `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service.
+
+## Egress Traffic
+
+The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (e.g. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.
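+
+For reference, a minimal Service that pins the IPv6 family could look like the sketch below. This is only an illustration of the `.spec.ipFamily` field discussed above; the `dual-stack-ipv6-svc.yaml` example file referenced earlier lives elsewhere in the repository and may differ in its details.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  ipFamily: IPv6          # request a cluster IP from the IPv6 service CIDR
+  selector:
+    app: MyApp
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+```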
+ +## Known Issues + + * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr) + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking + +{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/endpoint-slices.md b/content/uk/docs/concepts/services-networking/endpoint-slices.md new file mode 100644 index 0000000000..f6e918b13c --- /dev/null +++ b/content/uk/docs/concepts/services-networking/endpoint-slices.md @@ -0,0 +1,188 @@ +--- +reviewers: +- freehan +title: EndpointSlices +feature: + title: EndpointSlices + description: > + Динамічне відстеження мережевих вузлів у кластері Kubernetes. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +_EndpointSlices_ provide a simple way to track network endpoints within a +Kubernetes cluster. They offer a more scalable and extensible alternative to +Endpoints. + +{{% /capture %}} + +{{% capture body %}} + +## EndpointSlice resources {#endpointslice-resource} + +In Kubernetes, an EndpointSlice contains references to a set of network +endpoints. The EndpointSlice controller automatically creates EndpointSlices +for a Kubernetes Service when a {{< glossary_tooltip text="selector" +term_id="selector" >}} is specified. These EndpointSlices will include +references to any Pods that match the Service selector. EndpointSlices group +network endpoints together by unique Service and Port combinations. + +As an example, here's a sample EndpointSlice resource for the `example` +Kubernetes Service. + +```yaml +apiVersion: discovery.k8s.io/v1beta1 +kind: EndpointSlice +metadata: + name: example-abc + labels: + kubernetes.io/service-name: example +addressType: IPv4 +ports: + - name: http + protocol: TCP + port: 80 +endpoints: + - addresses: + - "10.1.2.3" + conditions: + ready: true + hostname: pod-1 + topology: + kubernetes.io/hostname: node-1 + topology.kubernetes.io/zone: us-west2-a +``` + +By default, EndpointSlices managed by the EndpointSlice controller will have no +more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1 +with Endpoints and Services and have similar performance. + +EndpointSlices can act as the source of truth for kube-proxy when it comes to +how to route internal traffic. When enabled, they should provide a performance +improvement for services with large numbers of endpoints. + +### Address Types + +EndpointSlices support three address types: + +* IPv4 +* IPv6 +* FQDN (Fully Qualified Domain Name) + +### Topology + +Each endpoint within an EndpointSlice can contain relevant topology information. +This is used to indicate where an endpoint is, containing information about the +corresponding Node, zone, and region. When the values are available, the +following Topology labels will be set by the EndpointSlice controller: + +* `kubernetes.io/hostname` - The name of the Node this endpoint is on. +* `topology.kubernetes.io/zone` - The zone this endpoint is in. +* `topology.kubernetes.io/region` - The region this endpoint is in. + +The values of these labels are derived from resources associated with each +endpoint in a slice. The hostname label represents the value of the NodeName +field on the corresponding Pod. The zone and region labels represent the value +of the labels with the same names on the corresponding Node. 
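+
+Assuming the `example` Service shown earlier, you can look up the EndpointSlices that track it via the `kubernetes.io/service-name` label. This is a usage sketch; it assumes the EndpointSlice API is enabled in your cluster, and the output will differ from cluster to cluster.
+
+```shell
+# List the slices that belong to the "example" Service
+kubectl get endpointslices -l kubernetes.io/service-name=example
+
+# Inspect the endpoints, conditions, and topology fields in detail
+kubectl get endpointslices -l kubernetes.io/service-name=example -o yaml
+```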
+ +### Management + +By default, EndpointSlices are created and managed by the EndpointSlice +controller. There are a variety of other use cases for EndpointSlices, such as +service mesh implementations, that could result in other entities or controllers +managing additional sets of EndpointSlices. To ensure that multiple entities can +manage EndpointSlices without interfering with each other, a +`endpointslice.kubernetes.io/managed-by` label is used to indicate the entity +managing an EndpointSlice. The EndpointSlice controller sets +`endpointslice-controller.k8s.io` as the value for this label on all +EndpointSlices it manages. Other entities managing EndpointSlices should also +set a unique value for this label. + +### Ownership + +In most use cases, EndpointSlices will be owned by the Service that it tracks +endpoints for. This is indicated by an owner reference on each EndpointSlice as +well as a `kubernetes.io/service-name` label that enables simple lookups of all +EndpointSlices belonging to a Service. + +## EndpointSlice Controller + +The EndpointSlice controller watches Services and Pods to ensure corresponding +EndpointSlices are up to date. The controller will manage EndpointSlices for +every Service with a selector specified. These will represent the IPs of Pods +matching the Service selector. + +### Size of EndpointSlices + +By default, EndpointSlices are limited to a size of 100 endpoints each. You can +configure this with the `--max-endpoints-per-slice` {{< glossary_tooltip +text="kube-controller-manager" term_id="kube-controller-manager" >}} flag up to +a maximum of 1000. + +### Distribution of EndpointSlices + +Each EndpointSlice has a set of ports that applies to all endpoints within the +resource. When named ports are used for a Service, Pods may end up with +different target port numbers for the same named port, requiring different +EndpointSlices. This is similar to the logic behind how subsets are grouped +with Endpoints. + +The controller tries to fill EndpointSlices as full as possible, but does not +actively rebalance them. The logic of the controller is fairly straightforward: + +1. Iterate through existing EndpointSlices, remove endpoints that are no longer + desired and update matching endpoints that have changed. +2. Iterate through EndpointSlices that have been modified in the first step and + fill them up with any new endpoints needed. +3. If there's still new endpoints left to add, try to fit them into a previously + unchanged slice and/or create new ones. + +Importantly, the third step prioritizes limiting EndpointSlice updates over a +perfectly full distribution of EndpointSlices. As an example, if there are 10 +new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each, +this approach will create a new EndpointSlice instead of filling up the 2 +existing EndpointSlices. In other words, a single EndpointSlice creation is +preferrable to multiple EndpointSlice updates. + +With kube-proxy running on each Node and watching EndpointSlices, every change +to an EndpointSlice becomes relatively expensive since it will be transmitted to +every Node in the cluster. This approach is intended to limit the number of +changes that need to be sent to every Node, even if it may result with multiple +EndpointSlices that are not full. + +In practice, this less than ideal distribution should be rare. 
Most changes +processed by the EndpointSlice controller will be small enough to fit in an +existing EndpointSlice, and if not, a new EndpointSlice is likely going to be +necessary soon anyway. Rolling updates of Deployments also provide a natural +repacking of EndpointSlices with all pods and their corresponding endpoints +getting replaced. + +## Motivation + +The Endpoints API has provided a simple and straightforward way of +tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters +and Services have gotten larger, limitations of that API became more visible. +Most notably, those included challenges with scaling to larger numbers of +network endpoints. + +Since all network endpoints for a Service were stored in a single Endpoints +resource, those resources could get quite large. That affected the performance +of Kubernetes components (notably the master control plane) and resulted in +significant amounts of network traffic and processing when Endpoints changed. +EndpointSlices help you mitigate those issues as well as provide an extensible +platform for additional features such as topological routing. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) +* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/service-topology.md b/content/uk/docs/concepts/services-networking/service-topology.md new file mode 100644 index 0000000000..c1be99267b --- /dev/null +++ b/content/uk/docs/concepts/services-networking/service-topology.md @@ -0,0 +1,127 @@ +--- +reviewers: +- johnbelamaric +- imroc +title: Service Topology +feature: + title: Топологія Сервісів + description: > + Маршрутизація трафіка Сервісом відповідно до топології кластера. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +_Service Topology_ enables a service to route traffic based upon the Node +topology of the cluster. For example, a service can specify that traffic be +preferentially routed to endpoints that are on the same Node as the client, or +in the same availability zone. + +{{% /capture %}} + +{{% capture body %}} + +## Introduction + +By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to +any backend address for the Service. Since Kubernetes 1.7 it has been possible +to route "external" traffic to the Pods running on the Node that received the +traffic, but this is not supported for `ClusterIP` Services, and more complex +topologies — such as routing zonally — have not been possible. The +_Service Topology_ feature resolves this by allowing the Service creator to +define a policy for routing traffic based upon the Node labels for the +originating and destination Nodes. + +By using Node label matching between the source and destination, the operator +may designate groups of Nodes that are "closer" and "farther" from one another, +using whatever metric makes sense for that operator's requirements. For many +operators in public clouds, for example, there is a preference to keep service +traffic within the same zone, because interzonal traffic has a cost associated +with it, while intrazonal traffic does not. 
Other common needs include being able +to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to +Nodes connected to the same top-of-rack switch for the lowest latency. + +## Prerequisites + +The following prerequisites are needed in order to enable topology aware service +routing: + + * Kubernetes 1.17 or later + * Kube-proxy running in iptables mode or IPVS mode + * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) + +## Enable Service Topology + +To enable service topology, enable the `ServiceTopology` feature gate for +kube-apiserver and kube-proxy: + +``` +--feature-gates="ServiceTopology=true" +``` + +## Using Service Topology + +If your cluster has Service Topology enabled, you can control Service traffic +routing by specifying the `topologyKeys` field on the Service spec. This field +is a preference-order list of Node labels which will be used to sort endpoints +when accessing this Service. Traffic will be directed to a Node whose value for +the first label matches the originating Node's value for that label. If there is +no backend for the Service on a matching Node, then the second label will be +considered, and so forth, until no labels remain. + +If no match is found, the traffic will be rejected, just as if there were no +backends for the Service at all. That is, endpoints are chosen based on the first +topology key with available backends. If this field is specified and all entries +have no backends that match the topology of the client, the service has no +backends for that client and connections should fail. The special value `"*"` may +be used to mean "any topology". This catch-all value, if used, only makes sense +as the last value in the list. + +If `topologyKeys` is not specified or empty, no topology constraints will be applied. + +Consider a cluster with Nodes that are labeled with their hostname, zone name, +and region name. Then you can set the `topologyKeys` values of a service to direct +traffic as follows. + +* Only to endpoints on the same node, failing if no endpoint exists on the node: + `["kubernetes.io/hostname"]`. +* Preferentially to endpoints on the same node, falling back to endpoints in the + same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname", + "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`. + This may be useful, for example, in cases where data locality is critical. +* Preferentially to the same zone, but fallback on any available endpoint if + none are available within this zone: + `["topology.kubernetes.io/zone", "*"]`. + + + +## Constraints + +* Service topology is not compatible with `externalTrafficPolicy=Local`, and + therefore a Service cannot use both of these features. It is possible to use + both features in the same cluster on different Services, just not on the same + Service. + +* Valid topology keys are currently limited to `kubernetes.io/hostname`, + `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will + be generalized to other node labels in the future. + +* Topology keys must be valid label keys and at most 16 keys may be specified. + +* The catch-all value, `"*"`, must be the last value in the topology keys, if + it is used. 
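+
+As a concrete illustration of the preference-order lists above, a Service that prefers node-local endpoints, then endpoints in the same zone, and finally any available endpoint could be declared as follows (a sketch that assumes the `ServiceTopology` feature gate is enabled as described earlier):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 9376
+  topologyKeys:
+  - "kubernetes.io/hostname"        # same node first
+  - "topology.kubernetes.io/zone"   # then same zone
+  - "*"                             # finally, any endpoint (catch-all must be last)
+```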
+ + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology) +* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/service.md b/content/uk/docs/concepts/services-networking/service.md new file mode 100644 index 0000000000..d6a72fcc63 --- /dev/null +++ b/content/uk/docs/concepts/services-networking/service.md @@ -0,0 +1,1197 @@ +--- +reviewers: +- bprashanth +title: Service +feature: + title: Виявлення Сервісів і балансування навантаження + description: > + Не потрібно змінювати ваш застосунок для використання незнайомого механізму виявлення Сервісів. Kubernetes призначає Подам власні IP-адреси, а набору Подів - єдине DNS-ім'я, і балансує навантаження між ними. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< glossary_definition term_id="service" length="short" >}} + +With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. +Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, +and can load-balance across them. + +{{% /capture %}} + +{{% capture body %}} + +## Motivation + +Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal. +They are born and when they die, they are not resurrected. +If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app, +it can create and destroy Pods dynamically. + +Each Pod gets its own IP address, however in a Deployment, the set of Pods +running in one moment in time could be different from +the set of Pods running that application a moment later. + +This leads to a problem: if some set of Pods (call them “backends”) provides +functionality to other Pods (call them “frontends”) inside your cluster, +how do the frontends find out and keep track of which IP address to connect +to, so that the frontend can use the backend part of the workload? + +Enter _Services_. + +## Service resources {#service-resource} + +In Kubernetes, a Service is an abstraction which defines a logical set of Pods +and a policy by which to access them (sometimes this pattern is called +a micro-service). The set of Pods targeted by a Service is usually determined +by a {{< glossary_tooltip text="selector" term_id="selector" >}} +(see [below](#services-without-selectors) for why you might want a Service +_without_ a selector). + +For example, consider a stateless image-processing backend which is running with +3 replicas. Those replicas are fungible—frontends do not care which backend +they use. While the actual Pods that compose the backend set may change, the +frontend clients should not need to be aware of that, nor should they need to keep +track of the set of backends themselves. + +The Service abstraction enables this decoupling. + +### Cloud-native service discovery + +If you're able to use Kubernetes APIs for service discovery in your application, +you can query the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} +for Endpoints, that get updated whenever the set of Pods in a Service changes. + +For non-native applications, Kubernetes offers ways to place a network port or load +balancer in between your application and the backend Pods. + +## Defining a Service + +A Service in Kubernetes is a REST object, similar to a Pod. 
Like all of the +REST objects, you can `POST` a Service definition to the API server to create +a new instance. + +For example, suppose you have a set of Pods that each listen on TCP port 9376 +and carry a label `app=MyApp`: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +``` + +This specification creates a new Service object named “my-service”, which +targets TCP port 9376 on any Pod with the `app=MyApp` label. + +Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), +which is used by the Service proxies +(see [Virtual IPs and service proxies](#virtual-ips-and-service-proxies) below). + +The controller for the Service selector continuously scans for Pods that +match its selector, and then POSTs any updates to an Endpoint object +also named “my-service”. + +{{< note >}} +A Service can map _any_ incoming `port` to a `targetPort`. By default and +for convenience, the `targetPort` is set to the same value as the `port` +field. +{{< /note >}} + +Port definitions in Pods have names, and you can reference these names in the +`targetPort` attribute of a Service. This works even if there is a mixture +of Pods in the Service using a single configured name, with the same network +protocol available via different port numbers. +This offers a lot of flexibility for deploying and evolving your Services. +For example, you can change the port numbers that Pods expose in the next +version of your backend software, without breaking clients. + +The default protocol for Services is TCP; you can also use any other +[supported protocol](#protocol-support). + +As many Services need to expose more than one port, Kubernetes supports multiple +port definitions on a Service object. +Each port definition can have the same `protocol`, or a different one. + +### Services without selectors + +Services most commonly abstract access to Kubernetes Pods, but they can also +abstract other kinds of backends. +For example: + + * You want to have an external database cluster in production, but in your + test environment you use your own databases. + * You want to point your Service to a Service in a different + {{< glossary_tooltip term_id="namespace" >}} or on another cluster. + * You are migrating a workload to Kubernetes. Whilst evaluating the approach, + you run only a proportion of your backends in Kubernetes. + +In any of these scenarios you can define a Service _without_ a Pod selector. +For example: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +``` + +Because this Service has no selector, the corresponding Endpoint object is *not* +created automatically. You can manually map the Service to the network address and port +where it's running, by adding an Endpoint object manually: + +```yaml +apiVersion: v1 +kind: Endpoints +metadata: + name: my-service +subsets: + - addresses: + - ip: 192.0.2.42 + ports: + - port: 9376 +``` + +{{< note >}} +The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or +link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). + +Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, +because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs +as a destination. +{{< /note >}} + +Accessing a Service without a selector works the same as if it had a selector. 
+In the example above, traffic is routed to the single endpoint defined in +the YAML: `192.0.2.42:9376` (TCP). + +An ExternalName Service is a special case of Service that does not have +selectors and uses DNS names instead. For more information, see the +[ExternalName](#externalname) section later in this document. + +### EndpointSlices +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +EndpointSlices are an API resource that can provide a more scalable alternative +to Endpoints. Although conceptually quite similar to Endpoints, EndpointSlices +allow for distributing network endpoints across multiple resources. By default, +an EndpointSlice is considered "full" once it reaches 100 endpoints, at which +point additional EndpointSlices will be created to store any additional +endpoints. + +EndpointSlices provide additional attributes and functionality which is +described in detail in [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/). + +## Virtual IPs and service proxies + +Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is +responsible for implementing a form of virtual IP for `Services` of type other +than [`ExternalName`](#externalname). + +### Why not use round-robin DNS? + +A question that pops up every now and then is why Kubernetes relies on +proxying to forward inbound traffic to backends. What about other +approaches? For example, would it be possible to configure DNS records that +have multiple A values (or AAAA for IPv6), and rely on round-robin name +resolution? + +There are a few reasons for using proxying for Services: + + * There is a long history of DNS implementations not respecting record TTLs, + and caching the results of name lookups after they should have expired. + * Some apps do DNS lookups only once and cache the results indefinitely. + * Even if apps and libraries did proper re-resolution, the low or zero TTLs + on the DNS records could impose a high load on DNS that then becomes + difficult to manage. + +### User space proxy mode {#proxy-mode-userspace} + +In this mode, kube-proxy watches the Kubernetes master for the addition and +removal of Service and Endpoint objects. For each Service it opens a +port (randomly chosen) on the local node. Any connections to this "proxy port" +are +proxied to one of the Service's backend Pods (as reported via +Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into +account when deciding which backend Pod to use. + +Lastly, the user-space proxy installs iptables rules which capture traffic to +the Service's `clusterIP` (which is virtual) and `port`. The rules +redirect that traffic to the proxy port which proxies the backend Pod. + +By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm. + +![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg) + +### `iptables` proxy mode {#proxy-mode-iptables} + +In this mode, kube-proxy watches the Kubernetes control plane for the addition and +removal of Service and Endpoint objects. For each Service, it installs +iptables rules, which capture traffic to the Service's `clusterIP` and `port`, +and redirect that traffic to one of the Service's +backend sets. For each Endpoint object, it installs iptables rules which +select a backend Pod. + +By default, kube-proxy in iptables mode chooses a backend at random. 
+ +Using iptables to handle traffic has a lower system overhead, because traffic +is handled by Linux netfilter without the need to switch between userspace and the +kernel space. This approach is also likely to be more reliable. + +If kube-proxy is running in iptables mode and the first Pod that's selected +does not respond, the connection fails. This is different from userspace +mode: in that scenario, kube-proxy would detect that the connection to the first +Pod had failed and would automatically retry with a different backend Pod. + +You can use Pod [readiness probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) +to verify that backend Pods are working OK, so that kube-proxy in iptables mode +only sees backends that test out as healthy. Doing this means you avoid +having traffic sent via kube-proxy to a Pod that's known to have failed. + +![Services overview diagram for iptables proxy](/images/docs/services-iptables-overview.svg) + +### IPVS proxy mode {#proxy-mode-ipvs} + +{{< feature-state for_k8s_version="v1.11" state="stable" >}} + +In `ipvs` mode, kube-proxy watches Kubernetes Services and Endpoints, +calls `netlink` interface to create IPVS rules accordingly and synchronizes +IPVS rules with Kubernetes Services and Endpoints periodically. +This control loop ensures that IPVS status matches the desired +state. +When accessing a Service, IPVS directs traffic to one of the backend Pods. + +The IPVS proxy mode is based on netfilter hook function that is similar to +iptables mode, but uses a hash table as the underlying data structure and works +in the kernel space. +That means kube-proxy in IPVS mode redirects traffic with lower latency than +kube-proxy in iptables mode, with much better performance when synchronising +proxy rules. Compared to the other proxy modes, IPVS mode also supports a +higher throughput of network traffic. + +IPVS provides more options for balancing traffic to backend Pods; +these are: + +- `rr`: round-robin +- `lc`: least connection (smallest number of open connections) +- `dh`: destination hashing +- `sh`: source hashing +- `sed`: shortest expected delay +- `nq`: never queue + +{{< note >}} +To run kube-proxy in IPVS mode, you must make the IPVS Linux available on +the node before you starting kube-proxy. + +When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS +kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy +falls back to running in iptables proxy mode. +{{< /note >}} + +![Services overview diagram for IPVS proxy](/images/docs/services-ipvs-overview.svg) + +In these proxy models, the traffic bound for the Service’s IP:Port is +proxied to an appropriate backend without the clients knowing anything +about Kubernetes or Services or Pods. + +If you want to make sure that connections from a particular client +are passed to the same Pod each time, you can select the session affinity based +on the client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP" +(the default is "None"). +You can also set the maximum session sticky time by setting +`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately. +(the default value is 10800, which works out to be 3 hours). + +## Multi-Port Services + +For some Services, you need to expose more than one port. +Kubernetes lets you configure multiple port definitions on a Service object. +When using multiple ports for a Service, you must give all of your ports names +so that these are unambiguous. 
+For example: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - name: http + protocol: TCP + port: 80 + targetPort: 9376 + - name: https + protocol: TCP + port: 443 + targetPort: 9377 +``` + +{{< note >}} +As with Kubernetes {{< glossary_tooltip term_id="name" text="names">}} in general, names for ports +must only contain lowercase alphanumeric characters and `-`. Port names must +also start and end with an alphanumeric character. + +For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not. +{{< /note >}} + +## Choosing your own IP address + +You can specify your own cluster IP address as part of a `Service` creation +request. To do this, set the `.spec.clusterIP` field. For example, if you +already have an existing DNS entry that you wish to reuse, or legacy systems +that are configured for a specific IP address and difficult to re-configure. + +The IP address that you choose must be a valid IPv4 or IPv6 address from within the +`service-cluster-ip-range` CIDR range that is configured for the API server. +If you try to create a Service with an invalid clusterIP address value, the API +server will return a 422 HTTP status code to indicate that there's a problem. + +## Discovering services + +Kubernetes supports 2 primary modes of finding a Service - environment +variables and DNS. + +### Environment variables + +When a Pod is run on a Node, the kubelet adds a set of environment variables +for each active Service. It supports both [Docker links +compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see +[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49)) +and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, +where the Service name is upper-cased and dashes are converted to underscores. + +For example, the Service `"redis-master"` which exposes TCP port 6379 and has been +allocated cluster IP address 10.0.0.11, produces the following environment +variables: + +```shell +REDIS_MASTER_SERVICE_HOST=10.0.0.11 +REDIS_MASTER_SERVICE_PORT=6379 +REDIS_MASTER_PORT=tcp://10.0.0.11:6379 +REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379 +REDIS_MASTER_PORT_6379_TCP_PROTO=tcp +REDIS_MASTER_PORT_6379_TCP_PORT=6379 +REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11 +``` + +{{< note >}} +When you have a Pod that needs to access a Service, and you are using +the environment variable method to publish the port and cluster IP to the client +Pods, you must create the Service *before* the client Pods come into existence. +Otherwise, those client Pods won't have their environment variables populated. + +If you only use DNS to discover the cluster IP for a Service, you don't need to +worry about this ordering issue. +{{< /note >}} + +### DNS + +You can (and almost always should) set up a DNS service for your Kubernetes +cluster using an [add-on](/docs/concepts/cluster-administration/addons/). + +A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new +Services and creates a set of DNS records for each one. If DNS has been enabled +throughout your cluster then all Pods should automatically be able to resolve +Services by their DNS name. + +For example, if you have a Service called `"my-service"` in a Kubernetes +Namespace `"my-ns"`, the control plane and the DNS Service acting together +create a DNS record for `"my-service.my-ns"`. 
Pods in the `"my-ns"` Namespace +should be able to find it by simply doing a name lookup for `my-service` +(`"my-service.my-ns"` would also work). + +Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names +will resolve to the cluster IP assigned for the Service. + +Kubernetes also supports DNS SRV (Service) records for named ports. If the +`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to +`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover +the port number for `"http"`, as well as the IP address. + +The Kubernetes DNS server is the only way to access `ExternalName` Services. +You can find more information about `ExternalName` resolution in +[DNS Pods and Services](/docs/concepts/services-networking/dns-pod-service/). + +## Headless Services + +Sometimes you don't need load-balancing and a single Service IP. In +this case, you can create what are termed “headless” Services, by explicitly +specifying `"None"` for the cluster IP (`.spec.clusterIP`). + +You can use a headless Service to interface with other service discovery mechanisms, +without being tied to Kubernetes' implementation. + +For headless `Services`, a cluster IP is not allocated, kube-proxy does not handle +these Services, and there is no load balancing or proxying done by the platform +for them. How DNS is automatically configured depends on whether the Service has +selectors defined: + +### With selectors + +For headless Services that define selectors, the endpoints controller creates +`Endpoints` records in the API, and modifies the DNS configuration to return +records (addresses) that point directly to the `Pods` backing the `Service`. + +### Without selectors + +For headless Services that do not define selectors, the endpoints controller does +not create `Endpoints` records. However, the DNS system looks for and configures +either: + + * CNAME records for [`ExternalName`](#externalname)-type Services. + * A records for any `Endpoints` that share a name with the Service, for all + other types. + +## Publishing Services (ServiceTypes) {#publishing-services-service-types} + +For some parts of your application (for example, frontends) you may want to expose a +Service onto an external IP address, that's outside of your cluster. + +Kubernetes `ServiceTypes` allow you to specify what kind of Service you want. +The default is `ClusterIP`. + +`Type` values and their behaviors are: + + * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value + makes the Service only reachable from within the cluster. This is the + default `ServiceType`. + * [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port + (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service + routes, is automatically created. You'll be able to contact the `NodePort` Service, + from outside the cluster, + by requesting `:`. + * [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud + provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external + load balancer routes, are automatically created. + * [`ExternalName`](#externalname): Maps the Service to the contents of the + `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record + + with its value. No proxying of any kind is set up. + {{< note >}} + You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type. 
+ {{< /note >}} + +You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address. + +### Type NodePort {#nodeport} + +If you set the `type` field to `NodePort`, the Kubernetes control plane +allocates a port from a range specified by `--service-node-port-range` flag (default: 30000-32767). +Each node proxies that port (the same port number on every Node) into your Service. +Your Service reports the allocated port in its `.spec.ports[*].nodePort` field. + + +If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10. +This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node. + +For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases). + +If you want a specific port number, you can specify a value in the `nodePort` +field. The control plane will either allocate you that port or report that +the API transaction failed. +This means that you need to take care of possible port collisions yourself. +You also have to use a valid port number, one that's inside the range configured +for NodePort use. + +Using a NodePort gives you the freedom to set up your own load balancing solution, +to configure environments that are not fully supported by Kubernetes, or even +to just expose one or more nodes' IPs directly. + +Note that this Service is visible as `:spec.ports[*].nodePort` +and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) + +### Type LoadBalancer {#loadbalancer} + +On cloud providers which support external load balancers, setting the `type` +field to `LoadBalancer` provisions a load balancer for your Service. +The actual creation of the load balancer happens asynchronously, and +information about the provisioned balancer is published in the Service's +`.status.loadBalancer` field. +For example: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + clusterIP: 10.0.171.239 + type: LoadBalancer +status: + loadBalancer: + ingress: + - ip: 192.0.2.127 +``` + +Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. + +For LoadBalancer type of Services, when there is more than one port defined, all +ports must have the same protocol and the protocol must be one of `TCP`, `UDP`, +and `SCTP`. + +Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created +with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified, +the loadBalancer is set up with an ephemeral IP address. 
If you specify a `loadBalancerIP` +but your cloud provider does not support the feature, the `loadbalancerIP` field that you +set is ignored. + +{{< note >}} +If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the +`LoadBalancer` Service type. +{{< /note >}} + +{{< note >}} + +On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need +to create a static type public IP address resource. This public IP address resource should +be in the same resource group of the other automatically created resources of the cluster. +For example, `MC_myResourceGroup_myAKSCluster_eastus`. + +Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357). + +{{< /note >}} + +#### Internal load balancer +In a mixed environment it is sometimes necessary to route traffic from Services inside the same +(virtual) network address block. + +In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints. + +You can achieve this by adding one the following annotations to a Service. +The annotation to add depends on the cloud Service provider you're using. + +{{< tabs name="service_tabs" >}} +{{% tab name="Default" %}} +Select one of the tabs. +{{% /tab %}} +{{% tab name="GCP" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + cloud.google.com/load-balancer-type: "Internal" +[...] +``` +{{% /tab %}} +{{% tab name="AWS" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-internal: "true" +[...] +``` +{{% /tab %}} +{{% tab name="Azure" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" +[...] +``` +{{% /tab %}} +{{% tab name="OpenStack" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/openstack-internal-load-balancer: "true" +[...] +``` +{{% /tab %}} +{{% tab name="Baidu Cloud" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" +[...] +``` +{{% /tab %}} +{{% tab name="Tencent Cloud" %}} +```yaml +[...] +metadata: + annotations: + service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx +[...] +``` +{{% /tab %}} +{{< /tabs >}} + + +#### TLS support on AWS {#ssl-support-on-aws} + +For partial TLS / SSL support on clusters running on AWS, you can add three +annotations to a `LoadBalancer` service: + +```yaml +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 +``` + +The first specifies the ARN of the certificate to use. It can be either a +certificate from a third party issuer that was uploaded to IAM or one created +within AWS Certificate Manager. 
+ +```yaml +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp) +``` + +The second annotation specifies which protocol a Pod speaks. For HTTPS and +SSL, the ELB expects the Pod to authenticate itself over the encrypted +connection, using a certificate. + +HTTP and HTTPS selects layer 7 proxying: the ELB terminates +the connection with the user, parses headers, and injects the `X-Forwarded-For` +header with the user's IP address (Pods only see the IP address of the +ELB at the other end of its connection) when forwarding requests. + +TCP and SSL selects layer 4 proxying: the ELB forwards traffic without +modifying the headers. + +In a mixed-use environment where some ports are secured and others are left unencrypted, +you can use the following annotations: + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443" +``` + +In the above example, if the Service contained three ports, `80`, `443`, and +`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just +be proxied HTTP. + +From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. +To see which policies are available for use, you can use the `aws` command line tool: + +```bash +aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName' +``` + +You can then specify any one of those policies using the +"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`" +annotation; for example: + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01" +``` + +#### PROXY protocol support on AWS + +To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) +support for clusters running on AWS, you can use the following service +annotation: + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" +``` + +Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB +and cannot be configured otherwise. + +#### ELB Access Logs on AWS + +There are several annotations to manage access logs for ELB Services on AWS. + +The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled` +controls whether access logs are enabled. + +The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` +controls the interval in minutes for publishing the access logs. You can specify +an interval of either 5 or 60 minutes. + +The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` +controls the name of the Amazon S3 bucket where load balancer access logs are +stored. + +The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` +specifies the logical hierarchy you created for your Amazon S3 bucket. + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" + # Specifies whether access logs are enabled for the load balancer + service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60" + # The interval for publishing the access logs. 
You can specify an interval of either 5 or 60 (minutes). + service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket" + # The name of the Amazon S3 bucket where the access logs are stored + service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod" + # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod` +``` + +#### Connection Draining on AWS + +Connection draining for Classic ELBs can be managed with the annotation +`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set +to the value of `"true"`. The annotation +`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can +also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. + + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true" + service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60" +``` + +#### Other ELB annotations + +There are other annotations to manage Classic Elastic Load Balancers that are described below. + +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60" + # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer + + service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" + # Specifies whether cross-zone load balancing is enabled for the load balancer + + service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops" + # A comma-separated list of key-value pairs which will be recorded as + # additional tags in the ELB. + + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "" + # The number of successive successful health checks required for a backend to + # be considered healthy for traffic. Defaults to 2, must be between 2 and 10 + + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" + # The number of unsuccessful health checks required for a backend to be + # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10 + + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20" + # The approximate interval, in seconds, between health checks of an + # individual instance. Defaults to 10, must be between 5 and 300 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5" + # The amount of time, in seconds, during which no response means a failed + # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval + # value. Defaults to 5, must be between 2 and 60 + + service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" + # A list of additional security groups to be added to the ELB +``` + +#### Network Load Balancer support on AWS {#aws-nlb-support} + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`. 
+ +```yaml + metadata: + name: my-service + annotations: + service.beta.kubernetes.io/aws-load-balancer-type: "nlb" +``` + +{{< note >}} +NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) +on Elastic Load Balancing for a list of supported instance types. +{{< /note >}} + +Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the +client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy` +is set to `Cluster`, the client's IP address is not propagated to the end +Pods. + +By setting `.spec.externalTrafficPolicy` to `Local`, the client IP addresses is +propagated to the end Pods, but this could result in uneven distribution of +traffic. Nodes without any Pods for a particular LoadBalancer Service will fail +the NLB Target Group's health check on the auto-assigned +`.spec.healthCheckNodePort` and not receive any traffic. + +In order to achieve even traffic, either use a DaemonSet or specify a +[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) +to not locate on the same node. + +You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer) +annotation. + +In order for client traffic to reach instances behind an NLB, the Node security +groups are modified with the following IP rules: + +| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | +|------|----------|---------|------------|---------------------| +| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\ | +| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\ | +| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\ | + +In order to limit which client IP's can access the Network Load Balancer, +specify `loadBalancerSourceRanges`. + +```yaml +spec: + loadBalancerSourceRanges: + - "143.231.0.0/16" +``` + +{{< note >}} +If `.spec.loadBalancerSourceRanges` is not set, Kubernetes +allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have +public IP addresses, be aware that non-NLB traffic can also reach all instances +in those modified security groups. + +{{< /note >}} + +#### Other CLB annotations on Tencent Kubernetes Engine (TKE) + +There are other annotations for managing Cloud Load Balancers on TKE as shown below. 
+
+```yaml
+  metadata:
+    name: my-service
+    annotations:
+      # Bind the load balancer to specified nodes
+      service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2)
+
+      # ID of an existing load balancer
+      service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
+
+      # Custom parameters for the load balancer (LB); modifying the LB type is not yet supported
+      service.kubernetes.io/service.extensiveParameters: ""
+
+      # Custom parameters for the LB listener
+      service.kubernetes.io/service.listenerParameters: ""
+
+      # Specifies the type of load balancer;
+      # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
+      service.kubernetes.io/loadbalance-type: xxxxx
+
+      # Specifies the public network bandwidth billing method;
+      # valid values: TRAFFIC_POSTPAID_BY_HOUR (bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth)
+      service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
+
+      # Specifies the bandwidth value (value range: [1,2000] Mbps)
+      service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
+
+      # When this annotation is set, the load balancer registers only nodes
+      # that run a Pod for this Service; otherwise, all nodes are registered
+      service.kubernetes.io/local-svc-only-bind-node-with-pod: "true"
+```
+
+### Type ExternalName {#externalname}
+
+Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
+`my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter.
+
+This Service definition, for example, maps
+the `my-service` Service in the `prod` namespace to `my.database.example.com`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+  namespace: prod
+spec:
+  type: ExternalName
+  externalName: my.database.example.com
+```
+
+{{< note >}}
+ExternalName accepts an IPv4 address string, but treats it as a DNS name made up of digits,
+not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS
+or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode
+an IP address, consider using [headless Services](#headless-services).
+{{< /note >}}
+
+When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service
+returns a `CNAME` record with the value `my.database.example.com`. Accessing
+`my-service` works in the same way as other Services but with the crucial
+difference that redirection happens at the DNS level rather than via proxying or
+forwarding. Should you later decide to move your database into your cluster, you
+can start its Pods, add appropriate selectors or endpoints, and change the
+Service's `type`.
+
+{{< warning >}}
+You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
+
+For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
+{{< /warning >}}
+
+{{< note >}}
+This section is indebted to the [Kubernetes Tips - Part 1](https://akomljen.com/kubernetes-tips-part-1/) blog post from [Alen Komljen](https://akomljen.com/).
+{{< /note >}} + +### External IPs + +If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those +`externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, +will be routed to one of the Service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility +of the cluster administrator. + +In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`. +In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`) + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - name: http + protocol: TCP + port: 80 + targetPort: 9376 + externalIPs: + - 80.11.12.10 +``` + +## Shortcomings + +Using the userspace proxy for VIPs, work at small to medium scale, but will +not scale to very large clusters with thousands of Services. The [original +design proposal for portals](http://issue.k8s.io/1107) has more details on +this. + +Using the userspace proxy obscures the source IP address of a packet accessing +a Service. +This makes some kinds of network filtering (firewalling) impossible. The iptables +proxy mode does not +obscure in-cluster source IPs, but it does still impact clients coming through +a load balancer or node-port. + +The `Type` field is designed as nested functionality - each level adds to the +previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does +not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does) +but the current API requires it. + +## Virtual IP implementation {#the-gory-details-of-virtual-ips} + +The previous information should be sufficient for many people who just want to +use Services. However, there is a lot going on behind the scenes that may be +worth understanding. + +### Avoiding collisions + +One of the primary philosophies of Kubernetes is that you should not be +exposed to situations that could cause your actions to fail through no fault +of your own. For the design of the Service resource, this means not making +you choose your own port number if that choice might collide with +someone else's choice. That is an isolation failure. + +In order to allow you to choose a port number for your Services, we must +ensure that no two Services can collide. Kubernetes does that by allocating each +Service its own IP address. + +To ensure each Service receives a unique IP, an internal allocator atomically +updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}} +prior to creating each Service. The map object must exist in the registry for +Services to get IP address assignments, otherwise creations will +fail with a message indicating an IP address could not be allocated. + +In the control plane, a background controller is responsible for creating that +map (needed to support migrating from older versions of Kubernetes that used +in-memory locking). Kubernetes also uses controllers to checking for invalid +assignments (eg due to administrator intervention) and for cleaning up allocated +IP addresses that are no longer used by any Services. + +### Service IP addresses {#ips-and-vips} + +Unlike Pod IP addresses, which actually route to a fixed destination, +Service IPs are not actually answered by a single host. 
Instead, kube-proxy +uses iptables (packet processing logic in Linux) to define _virtual_ IP addresses +which are transparently redirected as needed. When clients connect to the +VIP, their traffic is automatically transported to an appropriate endpoint. +The environment variables and DNS for Services are actually populated in +terms of the Service's virtual IP address (and port). + +kube-proxy supports three proxy modes—userspace, iptables and IPVS—which +each operate slightly differently. + +#### Userspace + +As an example, consider the image processing application described above. +When the backend Service is created, the Kubernetes master assigns a virtual +IP address, for example 10.0.0.1. Assuming the Service port is 1234, the +Service is observed by all of the kube-proxy instances in the cluster. +When a proxy sees a new Service, it opens a new random port, establishes an +iptables redirect from the virtual IP address to this new port, and starts accepting +connections on it. + +When a client connects to the Service's virtual IP address, the iptables +rule kicks in, and redirects the packets to the proxy's own port. +The “Service proxy” chooses a backend, and starts proxying traffic from the client to the backend. + +This means that Service owners can choose any port they want without risk of +collision. Clients can simply connect to an IP and port, without being aware +of which Pods they are actually accessing. + +#### iptables + +Again, consider the image processing application described above. +When the backend Service is created, the Kubernetes control plane assigns a virtual +IP address, for example 10.0.0.1. Assuming the Service port is 1234, the +Service is observed by all of the kube-proxy instances in the cluster. +When a proxy sees a new Service, it installs a series of iptables rules which +redirect from the virtual IP address to per-Service rules. The per-Service +rules link to per-Endpoint rules which redirect traffic (using destination NAT) +to the backends. + +When a client connects to the Service's virtual IP address the iptables rule kicks in. +A backend is chosen (either based on session affinity or randomly) and packets are +redirected to the backend. Unlike the userspace proxy, packets are never +copied to userspace, the kube-proxy does not have to be running for the virtual +IP address to work, and Nodes see traffic arriving from the unaltered client IP +address. + +This same basic flow executes when traffic comes in through a node-port or +through a load-balancer, though in those cases the client IP does get altered. + +#### IPVS + +iptables operations slow down dramatically in large scale cluster e.g 10,000 Services. +IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence). + +## API Object + +Service is a top-level resource in the Kubernetes REST API. You can find more details +about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). + +## Supported protocols {#protocol-support} + +### TCP + +You can use TCP for any kind of Service, and it's the default network protocol. + +### UDP + +You can use UDP for most Services. For type=LoadBalancer Services, UDP support +depends on the cloud provider offering this facility. 
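+
+For illustration, here is a minimal sketch of a ClusterIP Service that carries UDP
+traffic; the port number is a placeholder, and the `app: MyApp` selector simply mirrors
+the earlier examples on this page:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-udp-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+    # Only the protocol differs from the TCP examples earlier on this page.
+    - protocol: UDP
+      port: 53
+      targetPort: 53
+```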
+ +### HTTP + +If your cloud provider supports it, you can use a Service in LoadBalancer mode +to set up external HTTP / HTTPS reverse proxying, forwarded to the Endpoints +of the Service. + +{{< note >}} +You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service +to expose HTTP / HTTPS Services. +{{< /note >}} + +### PROXY protocol + +If your cloud provider supports it (eg, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)), +you can use a Service in LoadBalancer mode to configure a load balancer outside +of Kubernetes itself, that will forward connections prefixed with +[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). + +The load balancer will send an initial series of octets describing the +incoming connection, similar to this example + +``` +PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n +``` +followed by the data from the client. + +### SCTP + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `--feature-gates=SCTPSupport=true,…`. + +When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoint, NetworkPolicy or Pod to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. + +#### Warnings {#caveat-sctp-overview} + +##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} + +{{< warning >}} +The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. + +NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. +{{< /warning >}} + +##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type} + +{{< warning >}} +You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP. +{{< /warning >}} + +##### Windows {#caveat-sctp-windows-os} + +{{< warning >}} +SCTP is not supported on Windows based nodes. +{{< /warning >}} + +##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} + +{{< warning >}} +The kube-proxy does not support the management of SCTP associations when it is in userspace mode. +{{< /warning >}} + +## Future work + +In the future, the proxy policy for Services can become more nuanced than +simple round-robin balancing, for example master-elected or sharded. We also +envision that some Services will have "real" load balancers, in which case the +virtual IP address will simply transport the packets there. + +The Kubernetes project intends to improve support for L7 (HTTP) Services. + +The Kubernetes project intends to have more flexible ingress modes for Services +that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more. 
+ + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) +* Read about [Ingress](/docs/concepts/services-networking/ingress/) +* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) + +{{% /capture %}} diff --git a/content/uk/docs/concepts/storage/_index.md b/content/uk/docs/concepts/storage/_index.md new file mode 100644 index 0000000000..23108a421c --- /dev/null +++ b/content/uk/docs/concepts/storage/_index.md @@ -0,0 +1,4 @@ +--- +title: "Сховища інформації" +weight: 70 +--- diff --git a/content/uk/docs/concepts/storage/persistent-volumes.md b/content/uk/docs/concepts/storage/persistent-volumes.md new file mode 100644 index 0000000000..e348abb931 --- /dev/null +++ b/content/uk/docs/concepts/storage/persistent-volumes.md @@ -0,0 +1,736 @@ +--- +reviewers: +- jsafrane +- saad-ali +- thockin +- msau42 +title: Persistent Volumes +feature: + title: Оркестрація сховищем + description: > + Автоматично монтує систему збереження даних на ваш вибір: з локального носія даних, із хмарного сховища від провайдера публічних хмарних сервісів, як-от GCP чи AWS, або з мережевого сховища, такого як: NFS, iSCSI, Gluster, Ceph, Cinder чи Flocker. + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested. + +{{% /capture %}} + + +{{% capture body %}} + +## Introduction + +Managing storage is a distinct problem from managing compute instances. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`. + +A `PersistentVolume` (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. + +A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only). + +While `PersistentVolumeClaims` allow a user to consume abstract storage +resources, it is common that users need `PersistentVolumes` with varying +properties, such as performance, for different problems. Cluster administrators +need to be able to offer a variety of `PersistentVolumes` that differ in more +ways than just size and access modes, without exposing users to the details of +how those volumes are implemented. For these needs, there is the `StorageClass` +resource. + +See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). + + +## Lifecycle of a volume and claim + +PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. 
The interaction between PVs and PVCs follows this lifecycle: + +### Provisioning + +There are two ways PVs may be provisioned: statically or dynamically. + +#### Static +A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption. + +#### Dynamic +When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`, +the cluster may try to dynamically provision a volume specially for the PVC. +This provisioning is based on `StorageClasses`: the PVC must request a +[storage class](/docs/concepts/storage/storage-classes/) and +the administrator must have created and configured that class for dynamic +provisioning to occur. Claims that request the class `""` effectively disable +dynamic provisioning for themselves. + +To enable dynamic storage provisioning based on storage class, the cluster administrator +needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) +on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is +among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of +the API server component. For more information on API server command-line flags, +check [kube-apiserver](/docs/admin/kube-apiserver/) documentation. + +### Binding + +A user creates, or in the case of dynamic provisioning, has already created, a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, `PersistentVolumeClaim` binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping. + +Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. + +### Using + +Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod. + +Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes). + +### Storage Object in Use Protection +The purpose of the Storage Object in Use Protection feature is to ensure that Persistent Volume Claims (PVCs) in active use by a Pod and Persistent Volume (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss. + +{{< note >}} +PVC is in active use by a Pod when a Pod object exists that is using the PVC. +{{< /note >}} + +If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. 
Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. + +You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`: + +```shell +kubectl describe pvc hostpath +Name: hostpath +Namespace: default +StorageClass: example-hostpath +Status: Terminating +Volume: +Labels: +Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath + volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath +Finalizers: [kubernetes.io/pvc-protection] +... +``` + +You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too: + +```shell +kubectl describe pv task-pv-volume +Name: task-pv-volume +Labels: type=local +Annotations: +Finalizers: [kubernetes.io/pv-protection] +StorageClass: standard +Status: Terminating +Claim: +Reclaim Policy: Delete +Access Modes: RWO +Capacity: 1Gi +Message: +Source: + Type: HostPath (bare host directory volume) + Path: /tmp/data + HostPathType: +Events: +``` + +### Reclaiming + +When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted. + +#### Retain + +The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps. + +1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted. +1. Manually clean up the data on the associated storage asset accordingly. +1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition. + +#### Delete + +For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations; otherwise, the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/). + +#### Recycle + +{{< warning >}} +The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. +{{< /warning >}} + +If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim. 
+ +However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pv-recycler + namespace: default +spec: + restartPolicy: Never + volumes: + - name: vol + hostPath: + path: /any/path/it/will/be/replaced + containers: + - name: pv-recycler + image: "k8s.gcr.io/busybox" + command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] + volumeMounts: + - name: vol + mountPath: /scrub +``` + +However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled. + +### Expanding Persistent Volumes Claims + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand +the following types of volumes: + +* gcePersistentDisk +* awsElasticBlockStore +* Cinder +* glusterfs +* rbd +* Azure File +* Azure Disk +* Portworx +* FlexVolumes +* CSI + +You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true. + +``` yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gluster-vol-default +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://192.168.10.100:8080" + restuser: "" + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +To request a larger volume for a PVC, edit the PVC object and specify a larger +size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A +new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized. + +#### CSI Volume expansion + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information. + + +#### Resizing a volume containing a file system + +You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4. + +When a volume contains a file system, the file system is only resized when a new Pod is using +the `PersistentVolumeClaim` in ReadWrite mode. File system expansion is either done when a Pod is starting up +or when a Pod is running and the underlying file system supports online expansion. + +FlexVolumes allow resize if the driver is set with the `RequiresFSResize` capability to `true`. +The FlexVolume can be resized on Pod restart. + +#### Resizing an in-use PersistentVolumeClaim + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +{{< note >}} +Expanding in-use PVCs is available as beta since Kubernetes 1.15, and as alpha since 1.11. The `ExpandInUsePersistentVolumes` feature must be enabled, which is the case automatically for many clusters for beta features. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information. +{{< /note >}} + +In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC. +Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded. 
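+
+As an example, requesting the expansion itself is a one-line change. The sketch below
+assumes a claim named `myclaim` (as in the claim example later on this page) whose
+StorageClass has `allowVolumeExpansion: true`; the new size is an arbitrary placeholder:
+
+```shell
+# Raise the storage request on the PVC; the volume, and for an in-use PVC its
+# file system, is then expanded without recreating the Pod that uses it.
+kubectl patch pvc myclaim -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'
+```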
+This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that +uses the PVC before the expansion can complete. + + +Similar to other volume types - FlexVolume volumes can also be expanded when in-use by a Pod. + +{{< note >}} +FlexVolume resize is possible only when the underlying driver supports resize. +{{< /note >}} + +{{< note >}} +Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume quota of one modification every 6 hours. +{{< /note >}} + + +## Types of Persistent Volumes + +`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins: + +* GCEPersistentDisk +* AWSElasticBlockStore +* AzureFile +* AzureDisk +* CSI +* FC (Fibre Channel) +* FlexVolume +* Flocker +* NFS +* iSCSI +* RBD (Ceph Block Device) +* CephFS +* Cinder (OpenStack block storage) +* Glusterfs +* VsphereVolume +* Quobyte Volumes +* HostPath (Single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster) +* Portworx Volumes +* ScaleIO Volumes +* StorageOS + +## Persistent Volumes + +Each PV contains a spec and status, which is the specification and status of the volume. + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 +``` + +### Capacity + +Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`. + +Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc. + +### Volume Mode + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume. +Now, you can set the value of `volumeMode` to `block` to use a raw block device, or `filesystem` +to use a filesystem. `filesystem` is the default if the value is omitted. This is an optional API +parameter. + +### Access Modes + +A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. + +The access modes are: + +* ReadWriteOnce -- the volume can be mounted as read-write by a single node +* ReadOnlyMany -- the volume can be mounted read-only by many nodes +* ReadWriteMany -- the volume can be mounted as read-write by many nodes + +In the CLI, the access modes are abbreviated to: + +* RWO - ReadWriteOnce +* ROX - ReadOnlyMany +* RWX - ReadWriteMany + +> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time. 
+ + +| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany| +| :--- | :---: | :---: | :---: | +| AWSElasticBlockStore | ✓ | - | - | +| AzureFile | ✓ | ✓ | ✓ | +| AzureDisk | ✓ | - | - | +| CephFS | ✓ | ✓ | ✓ | +| Cinder | ✓ | - | - | +| CSI | depends on the driver | depends on the driver | depends on the driver | +| FC | ✓ | ✓ | - | +| FlexVolume | ✓ | ✓ | depends on the driver | +| Flocker | ✓ | - | - | +| GCEPersistentDisk | ✓ | ✓ | - | +| Glusterfs | ✓ | ✓ | ✓ | +| HostPath | ✓ | - | - | +| iSCSI | ✓ | ✓ | - | +| Quobyte | ✓ | ✓ | ✓ | +| NFS | ✓ | ✓ | ✓ | +| RBD | ✓ | ✓ | - | +| VsphereVolume | ✓ | - | - (works when Pods are collocated) | +| PortworxVolume | ✓ | - | ✓ | +| ScaleIO | ✓ | ✓ | - | +| StorageOS | ✓ | - | - | + +### Class + +A PV can have a class, which is specified by setting the +`storageClassName` attribute to the name of a +[StorageClass](/docs/concepts/storage/storage-classes/). +A PV of a particular class can only be bound to PVCs requesting +that class. A PV with no `storageClassName` has no class and can only be bound +to PVCs that request no particular class. + +In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead +of the `storageClassName` attribute. This annotation is still working; however, +it will become fully deprecated in a future Kubernetes release. + +### Reclaim Policy + +Current reclaim policies are: + +* Retain -- manual reclamation +* Recycle -- basic scrub (`rm -rf /thevolume/*`) +* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted + +Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion. + +### Mount Options + +A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node. + +{{< note >}} +Not all Persistent Volume types support mount options. +{{< /note >}} + +The following volume types support mount options: + +* AWSElasticBlockStore +* AzureDisk +* AzureFile +* CephFS +* Cinder (OpenStack block storage) +* GCEPersistentDisk +* Glusterfs +* NFS +* Quobyte Volumes +* RBD (Ceph Block Device) +* StorageOS +* VsphereVolume +* iSCSI + +Mount options are not validated, so mount will simply fail if one is invalid. + +In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead +of the `mountOptions` attribute. This annotation is still working; however, +it will become fully deprecated in a future Kubernetes release. + +### Node Affinity + +{{< note >}} +For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes. +{{< /note >}} + +A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity. 
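
For example, here is a sketch of a `local` PersistentVolume that uses `nodeAffinity` to pin itself to the node that holds the disk; the storage class name, device path, and node name are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1           # placeholder path to the disk on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node            # placeholder node name
```

Pods that use a claim bound to this PV can only be scheduled to `example-node`.
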
+ +### Phase + +A volume will be in one of the following phases: + +* Available -- a free resource that is not yet bound to a claim +* Bound -- the volume is bound to a claim +* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster +* Failed -- the volume has failed its automatic reclamation + +The CLI will show the name of the PVC bound to the PV. + +## PersistentVolumeClaims + +Each PVC contains a spec and status, which is the specification and status of the claim. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + volumeMode: Filesystem + resources: + requests: + storage: 8Gi + storageClassName: slow + selector: + matchLabels: + release: "stable" + matchExpressions: + - {key: environment, operator: In, values: [dev]} +``` + +### Access Modes + +Claims use the same conventions as volumes when requesting storage with specific access modes. + +### Volume Modes + +Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device. + +### Resources + +Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims. + +### Selector + +Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: + +* `matchLabels` - the volume must have a label with this value +* `matchExpressions` - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist. + +All of the requirements, from both `matchLabels` and `matchExpressions`, are ANDed together – they must all be satisfied in order to match. + +### Class + +A claim can request a particular class by specifying the name of a +[StorageClass](/docs/concepts/storage/storage-classes/) +using the attribute `storageClassName`. +Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can +be bound to the PVC. + +PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set +equal to `""` is always interpreted to be requesting a PV with no class, so it +can only be bound to PVs with no class (no annotation or one set equal to +`""`). A PVC with no `storageClassName` is not quite the same and is treated differently +by the cluster, depending on whether the +[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) +is turned on. + +* If the admission plugin is turned on, the administrator may specify a + default `StorageClass`. All PVCs that have no `storageClassName` can be bound only to + PVs of that default. Specifying a default `StorageClass` is done by setting the + annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in + a `StorageClass` object. If the administrator does not specify a default, the + cluster responds to PVC creation as if the admission plugin were turned off. If + more than one default is specified, the admission plugin forbids the creation of + all PVCs. +* If the admission plugin is turned off, there is no notion of a default + `StorageClass`. 
All PVCs that have no `storageClassName` can be bound only to PVs that + have no class. In this case, the PVCs that have no `storageClassName` are treated the + same way as PVCs that have their `storageClassName` set to `""`. + +Depending on installation method, a default StorageClass may be deployed +to a Kubernetes cluster by addon manager during installation. + +When a PVC specifies a `selector` in addition to requesting a `StorageClass`, +the requirements are ANDed together: only a PV of the requested class and with +the requested labels may be bound to the PVC. + +{{< note >}} +Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it. +{{< /note >}} + +In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead +of `storageClassName` attribute. This annotation is still working; however, +it won't be supported in a future Kubernetes release. + +## Claims As Volumes + +Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the Pod. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: myfrontend + image: nginx + volumeMounts: + - mountPath: "/var/www/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myclaim +``` + +### A Note on Namespaces + +`PersistentVolumes` binds are exclusive, and since `PersistentVolumeClaims` are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace. + +## Raw Block Volume Support + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +The following volume plugins support raw block volumes, including dynamic provisioning where +applicable: + +* AWSElasticBlockStore +* AzureDisk +* FC (Fibre Channel) +* GCEPersistentDisk +* iSCSI +* Local volume +* RBD (Ceph Block Device) +* VsphereVolume (alpha) + +{{< note >}} +Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9. +Support for the additional plugins was added in 1.10. +{{< /note >}} + +### Persistent Volumes using a Raw Block Volume +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: block-pv +spec: + capacity: + storage: 10Gi + accessModes: + - ReadWriteOnce + volumeMode: Block + persistentVolumeReclaimPolicy: Retain + fc: + targetWWNs: ["50060e801049cfd1"] + lun: 0 + readOnly: false +``` +### Persistent Volume Claim requesting a Raw Block Volume +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: block-pvc +spec: + accessModes: + - ReadWriteOnce + volumeMode: Block + resources: + requests: + storage: 10Gi +``` +### Pod specification adding Raw Block Device path in container +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-block-volume +spec: + containers: + - name: fc-container + image: fedora:26 + command: ["/bin/sh", "-c"] + args: [ "tail -f /dev/null" ] + volumeDevices: + - name: data + devicePath: /dev/xvda + volumes: + - name: data + persistentVolumeClaim: + claimName: block-pvc +``` + +{{< note >}} +When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path. 
+{{< /note >}} + +### Binding Block Volumes + +If a user requests a raw block volume by indicating this using the `volumeMode` field in the `PersistentVolumeClaim` spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec. +Listed is a table of possible combinations the user and admin might specify for requesting a raw block device. The table indicates if the volume will be bound or not given the combinations: +Volume binding matrix for statically provisioned volumes: + +| PV volumeMode | PVC volumeMode | Result | +| --------------|:---------------:| ----------------:| +| unspecified | unspecified | BIND | +| unspecified | Block | NO BIND | +| unspecified | Filesystem | BIND | +| Block | unspecified | NO BIND | +| Block | Block | BIND | +| Block | Filesystem | NO BIND | +| Filesystem | Filesystem | BIND | +| Filesystem | Block | NO BIND | +| Filesystem | unspecified | BIND | + +{{< note >}} +Only statically provisioned volumes are supported for alpha release. Administrators should take care to consider these values when working with raw block devices. +{{< /note >}} + +## Volume Snapshot and Restore Volume from Snapshot Support + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +Volume snapshot feature was added to support CSI Volume Plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/). + +To enable support for restoring a volume from a volume snapshot data source, enable the +`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager. + +### Create Persistent Volume Claim from Volume Snapshot +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: restore-pvc +spec: + storageClassName: csi-hostpath-sc + dataSource: + name: new-snapshot-test + kind: VolumeSnapshot + apiGroup: snapshot.storage.k8s.io + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## Volume Cloning + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +Volume clone feature was added to support CSI Volume Plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/). + +To enable support for cloning a volume from a PVC data source, enable the +`VolumePVCDataSource` feature gate on the apiserver and controller-manager. + +### Create Persistent Volume Claim from an existing pvc +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cloned-pvc +spec: + storageClassName: my-csi-plugin + dataSource: + name: existing-src-pvc-name + kind: PersistentVolumeClaim + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## Writing Portable Configuration + +If you're writing configuration templates or examples that run on a wide range of clusters +and need persistent storage, it is recommended that you use the following pattern: + +- Include PersistentVolumeClaim objects in your bundle of config (alongside + Deployments, ConfigMaps, etc). +- Do not include PersistentVolume objects in the config, since the user instantiating + the config may not have permission to create PersistentVolumes. +- Give the user the option of providing a storage class name when instantiating + the template. + - If the user provides a storage class name, put that value into the + `persistentVolumeClaim.storageClassName` field. + This will cause the PVC to match the right storage + class if the cluster has StorageClasses enabled by the admin. 
+ - If the user does not provide a storage class name, leave the + `persistentVolumeClaim.storageClassName` field as nil. This will cause a + PV to be automatically provisioned for the user with the default StorageClass + in the cluster. Many cluster environments have a default StorageClass installed, + or administrators can create their own default StorageClass. +- In your tooling, watch for PVCs that are not getting bound after some time + and surface this to the user, as this may indicate that the cluster has no + dynamic storage support (in which case the user should create a matching PV) + or the cluster has no storage system (in which case the user cannot deploy + config requiring PVCs). + +{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/_index.md b/content/uk/docs/concepts/workloads/_index.md new file mode 100644 index 0000000000..c826cbbcbc --- /dev/null +++ b/content/uk/docs/concepts/workloads/_index.md @@ -0,0 +1,4 @@ +--- +title: "Робочі навантаження" +weight: 50 +--- diff --git a/content/uk/docs/concepts/workloads/controllers/_index.md b/content/uk/docs/concepts/workloads/controllers/_index.md new file mode 100644 index 0000000000..3e5306f908 --- /dev/null +++ b/content/uk/docs/concepts/workloads/controllers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Контролери" +weight: 20 +--- diff --git a/content/uk/docs/concepts/workloads/controllers/deployment.md b/content/uk/docs/concepts/workloads/controllers/deployment.md new file mode 100644 index 0000000000..4d676e76f0 --- /dev/null +++ b/content/uk/docs/concepts/workloads/controllers/deployment.md @@ -0,0 +1,1152 @@ +--- +reviewers: +- janetkuo +title: Deployments +feature: + title: Автоматичне розгортання і відкатування + description: > + Kubernetes вносить зміни до вашого застосунку чи його конфігурації по мірі їх надходження. Водночас система моніторить робочий стан застосунку для того, щоб ці зміни не призвели до одночасної зупинки усіх ваших Подів. У випадку будь-яких збоїв, Kubernetes відкотить зміни назад. Скористайтеся перевагами зростаючої екосистеми інструментів для розгортання застосунків. + +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and +[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/). + +You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. + +{{< note >}} +Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below. +{{< /note >}} + +{{% /capture %}} + + +{{% capture body %}} + +## Use Case + +The following are typical use cases for Deployments: + +* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not. +* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment. 
+* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment. +* [Scale up the Deployment to facilitate more load](#scaling-a-deployment). +* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout. +* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck. +* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore. + +## Creating a Deployment + +The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods: + +{{< codenew file="controllers/nginx-deployment.yaml" >}} + +In this example: + +* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. +* The Deployment creates three replicated Pods, indicated by the `replicas` field. +* The `selector` field defines how the Deployment finds which Pods to manage. + In this case, you simply select a label that is defined in the Pod template (`app: nginx`). + However, more sophisticated selection rules are possible, + as long as the Pod template itself satisfies the rule. + {{< note >}} + The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map + is equivalent to an element of `matchExpressions`, whose key field is "key" the operator is "In", + and the values array contains only "value". + All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match. + {{< /note >}} + +* The `template` field contains the following sub-fields: + * The Pods are labeled `app: nginx`using the `labels` field. + * The Pod template's specification, or `.template.spec` field, indicates that + the Pods run one container, `nginx`, which runs the `nginx` + [Docker Hub](https://hub.docker.com/) image at version 1.7.9. + * Create one container and name it `nginx` using the `name` field. + + Follow the steps given below to create the above Deployment: + + Before you begin, make sure your Kubernetes cluster is up and running. + + 1. Create the Deployment by running the following command: + + {{< note >}} + You may specify the `--record` flag to write the command executed in the resource annotation `kubernetes.io/change-cause`. It is useful for future introspection. + For example, to see the commands executed in each Deployment revision. + {{< /note >}} + + ```shell + kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml + ``` + + 2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following: + ```shell + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 0/3 0 0 1s + ``` + When you inspect the Deployments in your cluster, the following fields are displayed: + + * `NAME` lists the names of the Deployments in the cluster. + * `DESIRED` displays the desired number of _replicas_ of the application, which you define when you create the Deployment. This is the _desired state_. + * `CURRENT` displays how many replicas are currently running. + * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state. + * `AVAILABLE` displays how many replicas of the application are available to your users. + * `AGE` displays the amount of time that the application has been running. 
+ + Notice how the number of desired replicas is 3 according to `.spec.replicas` field. + + 3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this: + ```shell + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + deployment.apps/nginx-deployment successfully rolled out + ``` + + 4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this: + ```shell + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 18s + ``` + Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available. + + 5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this: + ```shell + NAME DESIRED CURRENT READY AGE + nginx-deployment-75675f5897 3 3 3 18s + ``` + Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is + randomly generated and uses the pod-template-hash as a seed. + + 6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned: + ```shell + NAME READY STATUS RESTARTS AGE LABELS + nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + ``` + The created ReplicaSet ensures that there are three `nginx` Pods. + + {{< note >}} + You must specify an appropriate selector and Pod template labels in a Deployment (in this case, + `app: nginx`). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly. + {{< /note >}} + +### Pod-template-hash label + +{{< note >}} +Do not change this label. +{{< /note >}} + +The `pod-template-hash` label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts. + +This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the `PodTemplate` of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, +and in any existing Pods that the ReplicaSet might have. + +## Updating a Deployment + +{{< note >}} +A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`) +is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout. +{{< /note >}} + +Follow the steps given below to update your Deployment: + +1. Let's update the nginx Pods to use the `nginx:1.9.1` image instead of the `nginx:1.7.9` image. 
+ + ```shell + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + or simply use the following command: + + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` + + Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`: + + ```shell + kubectl edit deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment edited + ``` + +2. To see the rollout status, run: + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + ``` + or + ``` + deployment.apps/nginx-deployment successfully rolled out + ``` + +Get more details on your updated Deployment: + +* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`. + The output is similar to this: + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 36s + ``` + +* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it +up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. + + ```shell + kubectl get rs + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 6s + nginx-deployment-2035384211 0 0 0 36s + ``` + +* Running `get pods` should now show only the new Pods: + + ```shell + kubectl get pods + ``` + + The output is similar to this: + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-khku8 1/1 Running 0 14s + nginx-deployment-1564180365-nacti 1/1 Running 0 14s + nginx-deployment-1564180365-z9gth 1/1 Running 0 14s + ``` + + Next time you want to update these Pods, you only need to update the Deployment's Pod template again. + + Deployment ensures that only a certain number of Pods are down while they are being updated. By default, + it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). + + Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. + By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge). + + For example, if you look at the above Deployment closely, you will see that it first created a new Pod, + then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of + new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. + It makes sure that at least 2 Pods are available and that at max 4 Pods in total are available. 
+ +* Get details of your Deployment: + ```shell + kubectl describe deployments + ``` + The output is similar to this: + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=2 + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3 + Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1 + Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2 + Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2 + Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 + Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 + Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 + ``` + Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) + and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet + (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at + least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down + the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas + in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. + +### Rollover (aka multiple updates in-flight) + +Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up +the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels +match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new +ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets is scaled to 0. + +If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet +as per the update and start scaling that up, and rolls over the ReplicaSet that it was scaling up previously + -- it will add it to its list of old ReplicaSets and start scaling it down. + +For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`, +but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3 +replicas of `nginx:1.7.9` had been created. In that case, the Deployment immediately starts +killing the 3 `nginx:1.7.9` Pods that it had created, and starts creating +`nginx:1.9.1` Pods. It does not wait for the 5 replicas of `nginx:1.7.9` to be created +before changing course. 
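
A hedged sketch of how a rollover can be observed from the command line; the image tags are illustrative:

```shell
# Start one rollout, then change course before it finishes.
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.0

# The ReplicaSet created for nginx:1.9.1 is rolled over: it is scaled
# back down without ever reaching the desired replica count.
kubectl get rs
```
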
+ +### Label selector updates + +It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. +In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped +all of the implications. + +{{< note >}} +In API version `apps/v1`, a Deployment's label selector is immutable after it gets created. +{{< /note >}} + +* Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, +otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does +not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and +creating a new ReplicaSet. +* Selector updates changes the existing value in a selector key -- result in the same behavior as additions. +* Selector removals removes an existing key from the Deployment selector -- do not require any changes in the +Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the +removed label still exists in any existing Pods and ReplicaSets. + +## Rolling Back a Deployment + +Sometimes, you may want to rollback a Deployment; for example, when the Deployment is not stable, such as crash looping. +By default, all of the Deployment's rollout history is kept in the system so that you can rollback anytime you want +(you can change that by modifying revision history limit). + +{{< note >}} +A Deployment's revision is created when a Deployment's rollout is triggered. This means that the +new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed, +for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, +do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling. +This means that when you roll back to an earlier revision, only the Deployment's Pod template part is +rolled back. +{{< /note >}} + +* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`: + + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` + +* The rollout gets stuck. You can verify it by checking the rollout status: + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + Waiting for rollout to finish: 1 out of 3 new replicas have been updated... + ``` + +* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, +[read more here](#deployment-status). + +* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 2, and new replicas (nginx-deployment-3066724191) is 1. + + ```shell + kubectl get rs + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 25s + nginx-deployment-2035384211 0 0 0 36s + nginx-deployment-3066724191 1 1 0 6s + ``` + +* Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop. 
+ + ```shell + kubectl get pods + ``` + + The output is similar to this: + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-70iae 1/1 Running 0 25s + nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s + nginx-deployment-1564180365-hysrc 1/1 Running 0 25s + nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s + ``` + + {{< note >}} + The Deployment controller stops the bad rollout automatically, and stops scaling up the new + ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. + Kubernetes by default sets the value to 25%. + {{< /note >}} + +* Get the description of the Deployment: + ```shell + kubectl describe deployment + ``` + + The output is similar to this: + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 + Labels: app=nginx + Selector: app=nginx + Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.91 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) + NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) + Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 + ``` + + To fix this, you need to rollback to a previous revision of Deployment that is stable. + +### Checking Rollout History of a Deployment + +Follow the steps given below to check the rollout history: + +1. First, check the revisions of this Deployment: + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + The output is similar to this: + ``` + deployments "nginx-deployment" + REVISION CHANGE-CAUSE + 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. 
You can specify the`CHANGE-CAUSE` message by: + + * Annotating the Deployment with `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"` + * Append the `--record` flag to save the `kubectl` command that is making changes to the resource. + * Manually editing the manifest of the resource. + +2. To see the details of each revision, run: + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 + ``` + + The output is similar to this: + ``` + deployments "nginx-deployment" revision 2 + Labels: app=nginx + pod-template-hash=1159050644 + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + QoS Tier: + cpu: BestEffort + memory: BestEffort + Environment Variables: + No volumes. + ``` + +### Rolling Back to a Previous Revision +Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2. + +1. Now you've decided to undo the current rollout and rollback to the previous revision: + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment + ``` + Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: + + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment + ``` + + For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout). + + The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event + for rolling back to revision 2 is generated from Deployment controller. + +2. Check if the rollback was successful and the Deployment is running as expected, run: + ```shell + kubectl get deployment nginx-deployment + ``` + + The output is similar to this: + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 30m + ``` +3. 
Get the description of the Deployment: + ```shell + kubectl describe deployment nginx-deployment + ``` + The output is similar to this: + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=4 + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 + Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 + Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 + ``` + +## Scaling a Deployment + +You can scale a Deployment by using the following command: + +```shell +kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +``` +The output is similar to this: +``` +deployment.apps/nginx-deployment scaled +``` + +Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled +in your cluster, you can setup an autoscaler for your Deployment and choose the minimum and maximum number of +Pods you want to run based on the CPU utilization of your existing Pods. + +```shell +kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +``` +The output is similar to this: +``` +deployment.apps/nginx-deployment scaled +``` + +### Proportional scaling + +RollingUpdate Deployments support running multiple versions of an application at the same time. When you +or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress +or paused), the Deployment controller balances the additional replicas in the existing active +ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*. + +For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2. 
+ +* Ensure that the 10 replicas in your Deployment are running. + ```shell + kubectl get deploy + ``` + The output is similar to this: + + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 10 10 10 10 50s + ``` + +* You update to a new image which happens to be unresolvable from inside the cluster. + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` + +* The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the +`maxUnavailable` requirement that you mentioned above. Check out the rollout status: + ```shell + kubectl get rs + ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1989198191 5 5 0 9s + nginx-deployment-618515232 8 8 8 1m + ``` + +* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas +to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using +proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you +spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the +most replicas and lower proportions go to ReplicaSets with less replicas. Any leftovers are added to the +ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up. + +In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the +new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming +the new replicas become healthy. To confirm this, run: + +```shell +kubectl get deploy +``` + +The output is similar to this: +``` +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +nginx-deployment 15 18 7 8 7m +``` +The rollout status confirms how the replicas were added to each ReplicaSet. +```shell +kubectl get rs +``` + +The output is similar to this: +``` +NAME DESIRED CURRENT READY AGE +nginx-deployment-1989198191 7 7 0 7m +nginx-deployment-618515232 11 11 11 7m +``` + +## Pausing and Resuming a Deployment + +You can pause a Deployment before triggering one or more updates and then resume it. This allows you to +apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. 
+ +* For example, with a Deployment that was just created: + Get the Deployment details: + ```shell + kubectl get deploy + ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx 3 3 3 3 1m + ``` + Get the rollout status: + ```shell + kubectl get rs + ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 1m + ``` + +* Pause by running the following command: + ```shell + kubectl rollout pause deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment paused + ``` + +* Then update the image of the Deployment: + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` + +* Notice that no new rollout started: + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + deployments "nginx" + REVISION CHANGE-CAUSE + 1 + ``` +* Get the rollout status to ensure that the Deployment is updates successfully: + ```shell + kubectl get rs + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 2m + ``` + +* You can make as many updates as you wish, for example, update the resources that will be used: + ```shell + kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment resource requirements updated + ``` + + The initial state of the Deployment prior to pausing it will continue its function, but new updates to + the Deployment will not have any effect as long as the Deployment is paused. + +* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates: + ```shell + kubectl rollout resume deployment.v1.apps/nginx-deployment + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment resumed + ``` +* Watch the status of the rollout until it's done. + ```shell + kubectl get rs -w + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 2 2 2 2m + nginx-3926361531 2 2 0 6s + nginx-3926361531 2 2 1 18s + nginx-2142116321 1 2 2 2m + nginx-2142116321 1 2 2 2m + nginx-3926361531 3 2 1 18s + nginx-3926361531 3 2 1 18s + nginx-2142116321 1 1 1 2m + nginx-3926361531 3 3 1 18s + nginx-3926361531 3 3 2 19s + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 20s + ``` +* Get the status of the latest rollout: + ```shell + kubectl get rs + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 28s + ``` +{{< note >}} +You cannot rollback a paused Deployment until you resume it. +{{< /note >}} + +## Deployment status + +A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while +rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment). + +### Progressing Deployment + +Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed: + +* The Deployment creates a new ReplicaSet. +* The Deployment is scaling up its newest ReplicaSet. +* The Deployment is scaling down its older ReplicaSet(s). 
+* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)). + +You can monitor the progress for a Deployment by using `kubectl rollout status`. + +### Complete Deployment + +Kubernetes marks a Deployment as _complete_ when it has the following characteristics: + +* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any +updates you've requested have been completed. +* All of the replicas associated with the Deployment are available. +* No old replicas for the Deployment are running. + +You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed +successfully, `kubectl rollout status` returns a zero exit code. + +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +The output is similar to this: +``` +Waiting for rollout to finish: 2 of 3 updated replicas are available... +deployment.apps/nginx-deployment successfully rolled out +$ echo $? +0 +``` + +### Failed Deployment + +Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur +due to some of the following factors: + +* Insufficient quota +* Readiness probe failures +* Image pull errors +* Insufficient permissions +* Limit ranges +* Application runtime misconfiguration + +One way you can detect this condition is to specify a deadline parameter in your Deployment spec: +([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` denotes the +number of seconds the Deployment controller waits before indicating (in the Deployment status) that the +Deployment progress has stalled. + +The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report +lack of progress for a Deployment after 10 minutes: + +```shell +kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +``` +The output is similar to this: +``` +deployment.apps/nginx-deployment patched +``` +Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following +attributes to the Deployment's `.status.conditions`: + +* Type=Progressing +* Status=False +* Reason=ProgressDeadlineExceeded + +See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions. + +{{< note >}} +Kubernetes takes no action on a stalled Deployment other than to report a status condition with +`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for +example, rollback the Deployment to its previous version. +{{< /note >}} + +{{< note >}} +If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can +safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the +deadline. +{{< /note >}} + +You may experience transient errors with your Deployments, either due to a low timeout that you have set or +due to any other kind of error that can be treated as transient. For example, let's suppose you have +insufficient quota. 
If you describe the Deployment you will notice the following section: + +```shell +kubectl describe deployment nginx-deployment +``` +The output is similar to this: +``` +<...> +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + ReplicaFailure True FailedCreate +<...> +``` + +If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this: + +``` +status: + availableReplicas: 2 + conditions: + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: Replica set "nginx-deployment-4262182780" is progressing. + reason: ReplicaSetUpdated + status: "True" + type: Progressing + - lastTransitionTime: 2016-10-04T12:25:42Z + lastUpdateTime: 2016-10-04T12:25:42Z + message: Deployment has minimum availability. + reason: MinimumReplicasAvailable + status: "True" + type: Available + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota: + object-counts, requested: pods=1, used: pods=3, limited: pods=2' + reason: FailedCreate + status: "True" + type: ReplicaFailure + observedGeneration: 3 + replicas: 2 + unavailableReplicas: 2 +``` + +Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the +reason for the Progressing condition: + +``` +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing False ProgressDeadlineExceeded + ReplicaFailure True FailedCreate +``` + +You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other +controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota +conditions and the Deployment controller then completes the Deployment rollout, you'll see the +Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`). + +``` +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable +``` + +`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated +by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment +is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum +required new replicas are available (see the Reason of the condition for the particulars - in our case +`Reason=NewReplicaSetAvailable` means that the Deployment is complete). + +You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status` +returns a non-zero exit code if the Deployment has exceeded the progression deadline. + +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +The output is similar to this: +``` +Waiting for rollout to finish: 2 out of 3 new replicas have been updated... +error: deployment "nginx" exceeded its progress deadline +$ echo $? +1 +``` + +### Operating on a failed deployment + +All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back +to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template. 
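
For instance, all of the following remain valid against a Deployment whose rollout is stuck; the commands mirror the ones used earlier on this page:

```shell
kubectl scale deployment.v1.apps/nginx-deployment --replicas=5
kubectl rollout undo deployment.v1.apps/nginx-deployment
kubectl rollout pause deployment.v1.apps/nginx-deployment
```
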
+ +## Clean up Policy + +You can set `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for +this Deployment you want to retain. The rest will be garbage-collected in the background. By default, +it is 10. + +{{< note >}} +Explicitly setting this field to 0, will result in cleaning up all the history of your Deployment +thus that Deployment will not be able to roll back. +{{< /note >}} + +## Canary Deployment + +If you want to roll out releases to a subset of users or servers using the Deployment, you +can create multiple Deployments, one for each release, following the canary pattern described in +[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments). + +## Writing a Deployment Spec + +As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields. +For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/), +configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. + +A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). + +### Pod Template + +The `.spec.template` and `.spec.selector` are the only required field of the `.spec`. + +The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an +`apiVersion` or `kind`. + +In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate +labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector)). + +Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is +allowed, which is the default if not specified. + +### Replicas + +`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1. + +### Selector + +`.spec.selector` is an required field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/) +for the Pods targeted by this Deployment. + +`.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by the API. + +In API version `apps/v1`, `.spec.selector` and `.metadata.labels` do not default to `.spec.template.metadata.labels` if not set. So they must be set explicitly. Also note that `.spec.selector` is immutable after creation of the Deployment in `apps/v1`. + +A Deployment may terminate Pods whose labels match the selector if their template is different +from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new +Pods with `.spec.template` if the number of Pods is less than the desired number. + +{{< note >}} +You should not create other Pods whose labels match this selector, either directly, by creating +another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you +do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this. +{{< /note >}} + +If you have multiple controllers that have overlapping selectors, the controllers will fight with each +other and won't behave correctly. 
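
As a sketch, the relationship between `.spec.selector` and the Pod template labels looks like this; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend        # must match .spec.template.metadata.labels below
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```
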
+
+### Strategy
+
+`.spec.strategy` specifies the strategy used to replace old Pods with new ones.
+`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
+the default value.
+
+#### Recreate Deployment
+
+All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
+
+#### Rolling Update Deployment
+
+The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
+fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
+the rolling update process.
+
+##### Max Unavailable
+
+`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
+of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
+or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by
+rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
+
+For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
+Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled
+down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
+at all times during the update is at least 70% of the desired Pods.
+
+##### Max Surge
+
+`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
+that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
+percentage of desired Pods (for example, 10%). The value cannot be 0 if `MaxUnavailable` is 0. The absolute number
+is calculated from the percentage by rounding up. The default value is 25%.
+
+For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
+rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
+Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
+total number of Pods running at any time during the update is at most 130% of desired Pods.
+
+### Progress Deadline Seconds
+
+`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
+to wait for your Deployment to progress before the system reports back that the Deployment has
+[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
+and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
+retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
+controller will roll back a Deployment as soon as it observes such a condition.
+
+If specified, this field needs to be greater than `.spec.minReadySeconds`.
+
+### Min Ready Seconds
+
+`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
+created Pod should be ready without any of its containers crashing, for it to be considered available.
+This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
+a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
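+
+Taken together, these fields sit under the `.spec` of a Deployment. The snippet below is only a partial,
+illustrative sketch, not a recommendation of particular values:
+
+```yaml
+spec:
+  minReadySeconds: 5              # a new Pod must stay ready for 5s before it counts as available
+  progressDeadlineSeconds: 600    # report Progressing=False after 600s without progress
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 25%         # at most 25% of desired Pods may be unavailable during the update
+      maxSurge: 25%               # at most 25% extra Pods may be created during the update
+```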
+ +### Rollback To + +Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1` and `apps/v1beta1`, and is no longer supported in API versions starting `apps/v1beta2`. Instead, `kubectl rollout undo` as introduced in [Rolling Back to a Previous Revision](#rolling-back-to-a-previous-revision) should be used. + +### Revision History Limit + +A Deployment's revision history is stored in the ReplicaSets it controls. + +`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain +to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the frequency and stability of new Deployments. + +More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. +In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up. + +### Paused + +`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between +a paused Deployment and one that is not paused, is that any changes into the PodTemplateSpec of the paused +Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when +it is created. + +## Alternative to Deployments + +### kubectl rolling-update + +[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers +in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have +additional features, such as rolling back to any previous revision even after the rolling update is done. + +{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md new file mode 100644 index 0000000000..36bf7876bc --- /dev/null +++ b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -0,0 +1,480 @@ +--- +reviewers: +- erictune +- soltysh +title: Jobs - Run to Completion +content_template: templates/concept +feature: + title: Пакетна обробка + description: > + На додачу до Сервісів, Kubernetes може керувати вашими робочими навантаженнями систем безперервної інтеграції та пакетної обробки, за потреби замінюючи контейнери, що відмовляють. +weight: 70 +--- + +{{% capture overview %}} + +A Job creates one or more Pods and ensures that a specified number of them successfully terminate. +As pods successfully complete, the Job tracks the successful completions. When a specified number +of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up +the Pods it created. + +A simple case is to create one Job object in order to reliably run one Pod to completion. +The Job object will start a new Pod if the first Pod fails or is deleted (for example +due to a node hardware failure or a node reboot). + +You can also use a Job to run multiple Pods in parallel. + +{{% /capture %}} + + +{{% capture body %}} + +## Running an example Job + +Here is an example Job config. It computes π to 2000 places and prints it out. +It takes around 10s to complete. 
+ +{{< codenew file="controllers/job.yaml" >}} + +You can run the example with this command: + +```shell +kubectl apply -f https://k8s.io/examples/controllers/job.yaml +``` +``` +job.batch/pi created +``` + +Check on the status of the Job with `kubectl`: + +```shell +kubectl describe jobs/pi +``` +``` +Name: pi +Namespace: default +Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c +Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + job-name=pi +Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":... +Parallelism: 1 +Completions: 1 +Start Time: Mon, 02 Dec 2019 15:20:11 +0200 +Completed At: Mon, 02 Dec 2019 15:21:16 +0200 +Duration: 65s +Pods Statuses: 0 Running / 1 Succeeded / 0 Failed +Pod Template: + Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + job-name=pi + Containers: + pi: + Image: perl + Port: + Host Port: + Command: + perl + -Mbignum=bpi + -wle + print bpi(2000) + Environment: + Mounts: + Volumes: +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 +``` + +To view completed Pods of a Job, use `kubectl get pods`. + +To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: + +```shell +pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}') +echo $pods +``` +``` +pi-5rwd7 +``` + +Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression +that just gets the name from each Pod in the returned list. + +View the standard output of one of the pods: + +```shell +kubectl logs $pods +``` +The output is similar to this: +```shell 
+3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 +``` + +## Writing a Job Spec + +As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. + +A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). + +### Pod Template + +The `.spec.template` is the only required field of the `.spec`. + +The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`. + +In addition to required fields for a Pod, a pod template in a Job must specify appropriate +labels (see [pod selector](#pod-selector)) and an appropriate restart policy. + +Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed. + +### Pod Selector + +The `.spec.selector` field is optional. In almost all cases you should not specify it. +See section [specifying your own pod selector](#specifying-your-own-pod-selector). + + +### Parallel Jobs + +There are three main types of task suitable to run as a Job: + +1. Non-parallel Jobs + - normally, only one Pod is started, unless the Pod fails. + - the Job is complete as soon as its Pod terminates successfully. +1. Parallel Jobs with a *fixed completion count*: + - specify a non-zero positive value for `.spec.completions`. 
+ - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`. + - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`. +1. Parallel Jobs with a *work queue*: + - do not specify `.spec.completions`, default to `.spec.parallelism`. + - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue. + - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done. + - when _any_ Pod from the Job terminates with success, no new Pods are created. + - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success. + - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting. + +For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are +unset, both are defaulted to 1. + +For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed. +You can set `.spec.parallelism`, or leave it unset and it will default to 1. + +For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to +a non-negative integer. + +For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section. + + +#### Controlling Parallelism + +The requested parallelism (`.spec.parallelism`) can be set to any non-negative value. +If it is unspecified, it defaults to 1. +If it is specified as 0, then the Job is effectively paused until it is increased. + +Actual parallelism (number of pods running at any instant) may be more or less than requested +parallelism, for a variety of reasons: + +- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of + remaining completions. Higher values of `.spec.parallelism` are effectively ignored. +- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however. +- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react. +- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.), + then there may be fewer pods than requested. +- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job. +- When a Pod is gracefully shut down, it takes time to stop. + +## Handling Pod and Container Failures + +A container in a Pod may fail for a number of reasons, such as because the process in it exited with +a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this +happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays +on the node, but the container is re-run. Therefore, your program needs to handle the case when it is +restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. +See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. 
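+
+For instance, here is a minimal sketch of a fixed completion count Job (as described above) that uses
+`restartPolicy: Never`, so a failed container fails its Pod and the Job controller creates a replacement
+Pod rather than restarting the container in place. The name, image and command are illustrative only:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: process-items          # illustrative name
+spec:
+  completions: 8               # the Job is complete after eight successful Pods
+  parallelism: 2               # run at most two Pods at a time
+  template:
+    spec:
+      containers:
+      - name: worker
+        image: busybox         # illustrative image
+        command: ["sh", "-c", "echo processing one work item"]
+      restartPolicy: Never
+```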
+ +An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node +(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the +`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller +starts a new Pod. This means that your application needs to handle the case when it is restarted in a new +pod. In particular, it needs to handle temporary files, locks, incomplete output and the like +caused by previous runs. + +Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and +`.spec.template.spec.restartPolicy = "Never"`, the same program may +sometimes be started twice. + +If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be +multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. + +### Pod backoff failure policy + +There are situations where you want to fail a Job after some amount of retries +due to a logical error in configuration etc. +To do so, set `.spec.backoffLimit` to specify the number of retries before +considering a Job as failed. The back-off limit is set by default to 6. Failed +Pods associated with the Job are recreated by the Job controller with an +exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The +back-off count is reset if no new failed Pods appear before the Job's next +status check. + +{{< note >}} +Issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870) still exists for versions of Kubernetes prior to version 1.12 +{{< /note >}} +{{< note >}} +If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job +will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting +`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output +from failed Jobs is not lost inadvertently. +{{< /note >}} + +## Job Termination and Cleanup + +When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around +allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. +The job object also remains after it is completed so that you can view its status. It is up to the user to delete +old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too. + +By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the +`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated. + +Another way to terminate a Job is by setting an active deadline. +Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds. +The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created. +Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`. + +Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. 
Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached. + +Example: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: pi-with-timeout +spec: + backoffLimit: 5 + activeDeadlineSeconds: 100 + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never +``` + +Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. + +Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`. +That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve. + +## Clean Up Finished Jobs Automatically + +Finished Jobs are usually no longer needed in the system. Keeping them around in +the system will put pressure on the API server. If the Jobs are managed directly +by a higher level controller, such as +[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be +cleaned up by CronJobs based on the specified capacity-based cleanup policy. + +### TTL Mechanism for Finished Jobs + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +Another way to clean up finished Jobs (either `Complete` or `Failed`) +automatically is to use a TTL mechanism provided by a +[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for +finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of +the Job. + +When the TTL controller cleans up the Job, it will delete the Job cascadingly, +i.e. delete its dependent objects, such as Pods, together with the Job. Note +that when the Job is deleted, its lifecycle guarantees, such as finalizers, will +be honored. + +For example: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: pi-with-ttl +spec: + ttlSecondsAfterFinished: 100 + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never +``` + +The Job `pi-with-ttl` will be eligible to be automatically deleted, `100` +seconds after it finishes. + +If the field is set to `0`, the Job will be eligible to be automatically deleted +immediately after it finishes. If the field is unset, this Job won't be cleaned +up by the TTL controller after it finishes. + +Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For +more information, see the documentation for +[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for +finished resources. + +## Job Patterns + +The Job object can be used to support reliable parallel execution of Pods. The Job object is not +designed to support closely-communicating parallel processes, as commonly found in scientific +computing. It does support parallel processing of a set of independent but related *work items*. +These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a +NoSQL database to scan, and so on. + +In a complex system, there may be multiple different sets of work items. 
Here we are just +considering one set of work items that the user wants to manage together — a *batch job*. + +There are several different patterns for parallel computation, each with strengths and weaknesses. +The tradeoffs are: + +- One Job object for each work item, vs. a single Job object for all work items. The latter is + better for large numbers of work items. The former creates some overhead for the user and for the + system to manage large numbers of Job objects. +- Number of pods created equals number of work items, vs. each Pod can process multiple work items. + The former typically requires less modification to existing code and containers. The latter + is better for large numbers of work items, for similar reasons to the previous bullet. +- Several approaches use a work queue. This requires running a queue service, + and modifications to the existing program or container to make it use the work queue. + Other approaches are easier to adapt to an existing containerised application. + + +The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. +The pattern names are also links to examples and more detailed description. + +| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? | +| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:| +| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ | +| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ | +| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ | +| Single Job with Static Work Assignment | ✓ | | ✓ | | + +When you specify completions with `.spec.completions`, each Pod created by the Job controller +has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that +all pods for a task will have the same command line and the same +image, the same volumes, and (almost) the same environment variables. These patterns +are different ways to arrange for pods to work on different things. + +This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns. +Here, `W` is the number of work items. + +| Pattern | `.spec.completions` | `.spec.parallelism` | +| -------------------------------------------------------------------- |:-------------------:|:--------------------:| +| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 | +| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any | +| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any | +| Single Job with Static Work Assignment | W | any | + + +## Advanced Usage + +### Specifying your own pod selector + +Normally, when you create a Job object, you do not specify `.spec.selector`. +The system defaulting logic adds this field when the Job is created. +It picks a selector value that will not overlap with any other jobs. + +However, in some cases, you might need to override this automatically set selector. +To do this, you can specify the `.spec.selector` of the Job. + +Be very careful when doing this. 
If you specify a label selector which is not +unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated +job may be deleted, or this Job may count other Pods as completing it, or one or both +Jobs may refuse to create Pods or run to completion. If a non-unique selector is +chosen, then other controllers (e.g. ReplicationController) and their Pods may behave +in unpredictable ways too. Kubernetes will not stop you from making a mistake when +specifying `.spec.selector`. + +Here is an example of a case when you might want to use this feature. + +Say Job `old` is already running. You want existing Pods +to keep running, but you want the rest of the Pods it creates +to use a different pod template and for the Job to have a new name. +You cannot update the Job because these fields are not updatable. +Therefore, you delete Job `old` but _leave its pods +running_, using `kubectl delete jobs/old --cascade=false`. +Before deleting it, you make a note of what selector it uses: + +``` +kubectl get job old -o yaml +``` +``` +kind: Job +metadata: + name: old + ... +spec: + selector: + matchLabels: + controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 + ... +``` + +Then you create a new Job with name `new` and you explicitly specify the same selector. +Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, +they are controlled by Job `new` as well. + +You need to specify `manualSelector: true` in the new Job since you are not using +the selector that the system normally generates for you automatically. + +``` +kind: Job +metadata: + name: new + ... +spec: + manualSelector: true + selector: + matchLabels: + controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 + ... +``` + +The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting +`manualSelector: true` tells the system to that you know what you are doing and to allow this +mismatch. + +## Alternatives + +### Bare Pods + +When the node that a Pod is running on reboots or fails, the pod is terminated +and will not be restarted. However, a Job will create new Pods to replace terminated ones. +For this reason, we recommend that you use a Job rather than a bare Pod, even if your application +requires only a single Pod. + +### Replication Controller + +Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). +A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job +manages Pods that are expected to terminate (e.g. batch tasks). + +As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate +for pods with `RestartPolicy` equal to `OnFailure` or `Never`. +(Note: If `RestartPolicy` is not set, the default value is `Always`.) + +### Single Job starts Controller Pod + +Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort +of custom controller for those Pods. This allows the most flexibility, but may be somewhat +complicated to get started with and offers less integration with Kubernetes. + +One example of this pattern would be a Job which starts a Pod which runs a script that in turn +starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a spark +driver, and then cleans up. 
+ +An advantage of this approach is that the overall process gets the completion guarantee of a Job +object, but complete control over what Pods are created and how work is assigned to them. + +## Cron Jobs {#cron-jobs} + +You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. + +{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md new file mode 100644 index 0000000000..c8a666ac11 --- /dev/null +++ b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md @@ -0,0 +1,291 @@ +--- +reviewers: +- bprashanth +- janetkuo +title: ReplicationController +feature: + title: Самозцілення + anchor: How a ReplicationController Works + description: > + Перезапускає контейнери, що відмовили; заміняє і перерозподіляє контейнери у випадку непрацездатності вузла; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності. + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< note >}} +A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication. +{{< /note >}} + +A _ReplicationController_ ensures that a specified number of pod replicas are running at any one +time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is +always up and available. + +{{% /capture %}} + + +{{% capture body %}} + +## How a ReplicationController Works + +If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the +ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a +ReplicationController are automatically replaced if they fail, are deleted, or are terminated. +For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. +For this reason, you should use a ReplicationController even if your application requires +only a single pod. A ReplicationController is similar to a process supervisor, +but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods +across multiple nodes. + +ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in +kubectl commands. + +A simple case is to create one ReplicationController object to reliably run one instance of +a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated +service, such as web servers. + +## Running an example ReplicationController + +This example ReplicationController config runs three copies of the nginx web server. 
+ +{{< codenew file="controllers/replication.yaml" >}} + +Run the example job by downloading the example file and then running this command: + +```shell +kubectl apply -f https://k8s.io/examples/controllers/replication.yaml +``` +``` +replicationcontroller/nginx created +``` + +Check on the status of the ReplicationController using this command: + +```shell +kubectl describe replicationcontrollers/nginx +``` +``` +Name: nginx +Namespace: default +Selector: app=nginx +Labels: app=nginx +Annotations: +Replicas: 3 current / 3 desired +Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed +Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx + Port: 80/TCP + Environment: + Mounts: + Volumes: +Events: + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + --------- -------- ----- ---- ------------- ---- ------ ------- + 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m + 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0 + 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v +``` + +Here, three pods are created, but none is running yet, perhaps because the image is being pulled. +A little later, the same command may show: + +```shell +Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed +``` + +To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this: + +```shell +pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name}) +echo $pods +``` +``` +nginx-3ntk0 nginx-4ok8v nginx-qrm3m +``` + +Here, the selector is the same as the selector for the ReplicationController (seen in the +`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option +specifies an expression that just gets the name from each pod in the returned list. + + +## Writing a ReplicationController Spec + +As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. +For general information about working with config files, see [object management ](/docs/concepts/overview/working-with-objects/object-management/). + +A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). + +### Pod Template + +The `.spec.template` is the only required field of the `.spec`. + +The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`. + +In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate +labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector). + +Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified. + +For local container restarts, ReplicationControllers delegate to an agent on the node, +for example the [Kubelet](/docs/admin/kubelet/) or Docker. + +### Labels on the ReplicationController + +The ReplicationController can itself have labels (`.metadata.labels`). 
Typically, you +would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified +then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be +different, and the `.metadata.labels` do not affect the behavior of the ReplicationController. + +### Pod Selector + +The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController +manages all the pods with labels that match the selector. It does not distinguish +between pods that it created or deleted and pods that another person or process created or +deleted. This allows the ReplicationController to be replaced without affecting the running pods. + +If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will +be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to +`.spec.template.metadata.labels`. + +Also you should not normally create any pods whose labels match this selector, either directly, with +another ReplicationController, or with another controller such as Job. If you do so, the +ReplicationController thinks that it created the other pods. Kubernetes does not stop you +from doing this. + +If you do end up with multiple controllers that have overlapping selectors, you +will have to manage the deletion yourself (see [below](#working-with-replicationcontrollers)). + +### Multiple Replicas + +You can specify how many pods should run concurrently by setting `.spec.replicas` to the number +of pods you would like to have running concurrently. The number running at any time may be higher +or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully +shutdown, and a replacement starts early. + +If you do not specify `.spec.replicas`, then it defaults to 1. + +## Working with ReplicationControllers + +### Deleting a ReplicationController and its Pods + +To delete a ReplicationController and all its pods, use [`kubectl +delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will scale the ReplicationController to zero and wait +for it to delete each pod before deleting the ReplicationController itself. If this kubectl +command is interrupted, it can be restarted. + +When using the REST API or go client library, you need to do the steps explicitly (scale replicas to +0, wait for pod deletions, then delete the ReplicationController). + +### Deleting just a ReplicationController + +You can delete a ReplicationController without affecting any of its pods. + +Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). + +When using the REST API or go client library, simply delete the ReplicationController object. + +Once the original is deleted, you can create a new ReplicationController to replace it. As long +as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. +However, it will not make any effort to make existing pods match a new, different pod template. +To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates). + +### Isolating pods from a ReplicationController + +Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. 
Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). + +## Common usage patterns + +### Rescheduling + +As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent). + +### Scaling + +The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field. + +### Rolling updates + +The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one. + +As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. + +Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time. + +The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. + +Rolling update is implemented in the client tool +[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples. + +### Multiple release tracks + +In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels. + +For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc. + +### Using ReplicationControllers with Services + +Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic +goes to the old version, and some goes to the new version. + +A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services. 
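+
+As a sketch of the release track idea above (all names, labels and the image are illustrative), the
+stable and canary ReplicationControllers differ only in the `track` label and the replica count, while
+a Service selects across both tracks:
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: frontend-stable        # the canary controller would use replicas: 1 and track: canary
+spec:
+  replicas: 9
+  selector:
+    tier: frontend
+    environment: prod
+    track: stable
+  template:
+    metadata:
+      labels:
+        tier: frontend
+        environment: prod
+        track: stable
+    spec:
+      containers:
+      - name: frontend
+        image: nginx           # illustrative image
+        ports:
+        - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+spec:
+  selector:                    # deliberately omits track, so it covers both tracks
+    tier: frontend
+    environment: prod
+  ports:
+  - port: 80
+```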
+ +## Writing programs for Replication + +Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself. + +## Responsibilities of the ReplicationController + +The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. + +The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)). + +The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc. + + +## API Object + +Replication controller is a top-level resource in the Kubernetes REST API. More details about the +API object can be found at: +[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core). + +## Alternatives to ReplicationController + +### ReplicaSet + +[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement). 
+It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates. +Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. + + +### Deployment (Recommended) + +[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods +in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, +because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. + +### Bare Pods + +Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker). + +### Job + +Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own +(that is, batch jobs). + +### DaemonSet + +Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a +machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied +to a machine lifetime: the pod needs to be running on the machine before other pods start, and are +safe to terminate when the machine is otherwise ready to be rebooted/shutdown. + +## For more information + +Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/). + +{{% /capture %}} diff --git a/content/uk/docs/contribute/localization_uk.md b/content/uk/docs/contribute/localization_uk.md new file mode 100644 index 0000000000..67a9c1789b --- /dev/null +++ b/content/uk/docs/contribute/localization_uk.md @@ -0,0 +1,123 @@ +--- +title: Рекомендації з перекладу на українську мову +content_template: templates/concept +anchors: + - anchor: "#правила-перекладу" + title: Правила перекладу + - anchor: "#словник" + title: Словник +--- + +{{% capture overview %}} + +Дорогі друзі! Раді вітати вас у спільноті українських контриб'юторів проекту Kubernetes. Ця сторінка створена з метою полегшити вашу роботу при перекладі документації. Вона містить правила, якими ми керувалися під час перекладу, і базовий словник, який ми почали укладати. Перелічені у ньому терміни ви знайдете в українській версії документації Kubernetes. Будемо дуже вдячні, якщо ви допоможете нам доповнити цей словник і розширити правила перекладу. + +Сподіваємось, наші рекомендації стануть вам у пригоді. + +{{% /capture %}} + +{{% capture body %}} + +## Правила перекладу {#правила-перекладу} + +* У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). 
Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity. + +* Співзвучні слова передаємо транслітерацією зі збереженням написання (Service -> Сервіс). + +* Реалії Kubernetes пишемо з великої літери: Сервіс, Под, але вузол (node). + +* Для слів з великих літер, які не мають трансліт-аналогу, використовуємо англійські слова (Deployment, Volume, Namespace). + +* Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver). + +* Частовживані і усталені за межами K8s слова перекладаємо українською і пишемо з маленької літери (label -> мітка). + +* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Більшість необхідних нам термінів є словами іншомовного походження, які у родовому відмінку однини приймають закінчення -а: Пода, Deployment'а. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv). + +## Словник {#словник} + +English | Українська | +--- | --- | +addon | розширення | +application | застосунок | +backend | бекенд | +build | збирати (процес) | +build | збирання (результат) | +cache | кеш | +CLI | інтерфейс командного рядка | +cloud | хмара; хмарний провайдер | +containerized | контейнеризований | +Continuous development | безперервна розробка | +Continuous integration | безперервна інтеграція | +Continuous deployment | безперервне розгортання | +contribute | робити внесок (до проекту), допомагати (проекту) | +contributor | контриб'ютор, учасник проекту | +control plane | площина управління | +controller | контролер | +CPU | ЦПУ | +dashboard | дашборд | +data plane | площина даних | +default settings | типові налаштування | +default (by) | за умовчанням | +Deployment | Deployment | +deprecated | застарілий | +desired state | бажаний стан | +downtime | недоступність, простій | +ecosystem | сімейство проектів (екосистема) | +endpoint | кінцева точка | +expose (a service) | відкрити доступ (до сервісу) | +fail | відмовити | +feature | компонент | +framework | фреймворк | +frontend | фронтенд | +image | образ | +Ingress | Ingress | +instance | інстанс | +issue | запит | +kube-proxy | kube-proxy | +kubelet | kubelet | +Kubernetes features | функціональні можливості Kubernetes | +label | мітка | +lifecycle | життєвий цикл | +logging | логування | +maintenance | обслуговування | +master | master | +map | спроектувати, зіставити, встановити відповідність | +monitor | моніторити | +monitoring | моніторинг | +Namespace | Namespace | +network policy | мережева політика | +node | вузол | +orchestrate | оркеструвати | +output | вивід | +patch | патч | +Pod | Под | +production | прод | +pull request | pull request | +release | реліз | +replica | репліка | +rollback | відкатування | +rolling update | послідовне оновлення | +rollout (new updates) | викатка (оновлень) | +run | запускати | +scale | масштабувати | +schedule | розподіляти (Поди по вузлах) | +scheduler | scheduler | +secret | секрет | +selector | селектор | +self-healing | самозцілення | +self-restoring | самовідновлення | +service | сервіс | +service discovery | виявлення сервісу | +source code | вихідний код | +stateful app | застосунок зі станом | +stateless app | застосунок без стану | +task | завдання | +terminated | зупинений | +traffic | трафік | +VM (virtual machine) | ВМ | +Volume | Volume | +workload | робоче навантаження | +YAML | YAML | + 
+{{% /capture %}} diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md new file mode 100644 index 0000000000..d72f91478f --- /dev/null +++ b/content/uk/docs/home/_index.md @@ -0,0 +1,58 @@ +--- +approvers: +- chenopis +title: Документація Kubernetes +noedit: true +cid: docsHome +layout: docsportal_home +class: gridPage +linkTitle: "Home" +main_menu: true +weight: 10 +hide_feedback: true +menu: + main: + title: "Документація" + weight: 20 + post: > +

Дізнайтеся про основи роботи з Kubernetes, використовуючи схеми, навчальну та довідкову документацію. Ви можете навіть зробити свій внесок у документацію!

+overview: > + Kubernetes - рушій оркестрації контейнерів з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. Цей проект розробляється під егідою Cloud Native Computing Foundation (CNCF). +cards: +- name: concepts + title: "Розуміння основ" + description: "Дізнайтеся про Kubernetes і його фундаментальні концепції." + button: "Дізнатися про концепції" + button_path: "/docs/concepts" +- name: tutorials + title: "Спробуйте Kubernetes" + description: "Дізнайтеся із навчальних матеріалів, як розгортати застосунки в Kubernetes." + button: "Переглянути навчальні матеріали" + button_path: "/docs/tutorials" +- name: setup + title: "Налаштування кластера" + description: "Розгорніть Kubernetes з урахуванням власних ресурсів і потреб." + button: "Налаштувати Kubernetes" + button_path: "/docs/setup" +- name: tasks + title: "Дізнайтеся, як користуватись Kubernetes" + description: "Ознайомтеся з типовими задачами і способами їх виконання за допомогою короткого алгоритму дій." + button: "Переглянути задачі" + button_path: "/docs/tasks" +- name: reference + title: Переглянути довідкову інформацію + description: Ознайомтеся з термінологією, синтаксисом командного рядка, типами ресурсів API і документацією з налаштування інструментів. + button: Переглянути довідкову інформацію + button_path: /docs/reference +- name: contribute + title: Зробити внесок у документацію + description: Будь-хто може зробити свій внесок, незалежно від того, чи ви нещодавно долучилися до проекту, чи працюєте над ним вже довгий час. + button: Зробити внесок у документацію + button_path: /docs/contribute +- name: download + title: Завантажити Kubernetes + description: Якщо ви встановлюєте Kubernetes чи оновлюєтесь до останньої версії, звіряйтеся з актуальною інформацією по релізу. +- name: about + title: Про документацію + description: Цей вебсайт містить документацію по актуальній і чотирьох попередніх версіях Kubernetes. +--- diff --git a/content/uk/docs/reference/glossary/applications.md b/content/uk/docs/reference/glossary/applications.md new file mode 100644 index 0000000000..c42c6ec343 --- /dev/null +++ b/content/uk/docs/reference/glossary/applications.md @@ -0,0 +1,16 @@ +--- +# title: Applications +title: Застосунки +id: applications +date: 2019-05-12 +full_link: +# short_description: > +# The layer where various containerized applications run. +short_description: > + Шар, в якому запущено контейнерізовані застосунки. +aka: +tags: +- fundamental +--- + +Шар, в якому запущено контейнерізовані застосунки. diff --git a/content/uk/docs/reference/glossary/cluster-infrastructure.md b/content/uk/docs/reference/glossary/cluster-infrastructure.md new file mode 100644 index 0000000000..557180912a --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster-infrastructure.md @@ -0,0 +1,17 @@ +--- +# title: Cluster Infrastructure +title: Інфраструктура кластера +id: cluster-infrastructure +date: 2019-05-12 +full_link: +# short_description: > +# The infrastructure layer provides and maintains VMs, networking, security groups and others. +short_description: > + Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо. + +aka: +tags: +- operations +--- + +Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо. 
diff --git a/content/uk/docs/reference/glossary/cluster-operations.md b/content/uk/docs/reference/glossary/cluster-operations.md new file mode 100644 index 0000000000..e274bb4f7f --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster-operations.md @@ -0,0 +1,17 @@ +--- +# title: Cluster Operations +title: Операції з кластером +id: cluster-operations +date: 2019-05-12 +full_link: +# short_description: > +# Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster. +short_description: > +Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. + +aka: +tags: +- operations +--- + +Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. diff --git a/content/uk/docs/reference/glossary/cluster.md b/content/uk/docs/reference/glossary/cluster.md new file mode 100644 index 0000000000..58fc3bd6fd --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster.md @@ -0,0 +1,22 @@ +--- +# title: Cluster +title: Кластер +id: cluster +date: 2019-06-15 +full_link: +# short_description: > +# A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. +short_description: > + Група робочих машин (їх називають вузлами), на яких запущені контейнерізовані застосунки. Кожен кластер має щонайменше один вузол. + +aka: +tags: +- fundamental +- operation +--- + +Група робочих машин (їх називають вузлами), на яких запущені контейнерізовані застосунки. Кожен кластер має щонайменше один вузол. + + + +На робочих вузлах розміщуються Поди, які є складовими застосунку. Площина управління керує робочими вузлами і Подами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності. diff --git a/content/uk/docs/reference/glossary/control-plane.md b/content/uk/docs/reference/glossary/control-plane.md new file mode 100644 index 0000000000..da9fd4c08a --- /dev/null +++ b/content/uk/docs/reference/glossary/control-plane.md @@ -0,0 +1,17 @@ +--- +# title: Control Plane +title: Площина управління +id: control-plane +date: 2019-05-12 +full_link: +# short_description: > +# The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. +short_description: > +Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. + +aka: +tags: +- fundamental +--- + +Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. diff --git a/content/uk/docs/reference/glossary/data-plane.md b/content/uk/docs/reference/glossary/data-plane.md new file mode 100644 index 0000000000..263a544010 --- /dev/null +++ b/content/uk/docs/reference/glossary/data-plane.md @@ -0,0 +1,17 @@ +--- +# title: Data Plane +title: Площина даних +id: data-plane +date: 2019-05-12 +full_link: +# short_description: > +# The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network. 
+short_description: > + Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі. + +aka: +tags: +- fundamental +--- + +Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі. diff --git a/content/uk/docs/reference/glossary/deployment.md b/content/uk/docs/reference/glossary/deployment.md new file mode 100644 index 0000000000..5e62f7f784 --- /dev/null +++ b/content/uk/docs/reference/glossary/deployment.md @@ -0,0 +1,23 @@ +--- +title: Deployment +id: deployment +date: 2018-04-12 +full_link: /docs/concepts/workloads/controllers/deployment/ +# short_description: > +# An API object that manages a replicated application. +short_description: > + Об'єкт API, що керує реплікованим застосунком. + +aka: +tags: +- fundamental +- core-object +- workload +--- + +Об'єкт API, що керує реплікованим застосунком. + + + + +Кожна репліка являє собою {{< glossary_tooltip term_id="Под" >}}; Поди розподіляються між вузлами кластера. diff --git a/content/uk/docs/reference/glossary/index.md b/content/uk/docs/reference/glossary/index.md new file mode 100644 index 0000000000..3cbc4533ba --- /dev/null +++ b/content/uk/docs/reference/glossary/index.md @@ -0,0 +1,17 @@ +--- +approvers: +- chenopis +- abiogenesis-now +# title: Standardized Glossary +title: Глосарій +layout: glossary +noedit: true +default_active_tag: fundamental +weight: 5 +card: + name: reference + weight: 10 +# title: Glossary + title: Глосарій +--- + diff --git a/content/uk/docs/reference/glossary/kube-apiserver.md b/content/uk/docs/reference/glossary/kube-apiserver.md new file mode 100644 index 0000000000..82e3caa0ba --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-apiserver.md @@ -0,0 +1,29 @@ +--- +# title: API server +title: API-сервер +id: kube-apiserver +date: 2018-04-12 +full_link: /docs/reference/generated/kube-apiserver/ +# short_description: > +# Control plane component that serves the Kubernetes API. +short_description: > + Компонент площини управління, що надає доступ до API Kubernetes. + +aka: +- kube-apiserver +tags: +- architecture +- fundamental +--- + +API-сервер є компонентом {{< glossary_tooltip text="площини управління" term_id="control-plane" >}} Kubernetes, через який можна отримати доступ до API Kubernetes. API-сервер є фронтендом площини управління Kubernetes. + + + + + + +Основною реалізацією Kubernetes API-сервера є [kube-apiserver](/docs/reference/generated/kube-apiserver/). kube-apiserver підтримує горизонтальне масштабування, тобто масштабується за рахунок збільшення кількості інстансів. kube-apiserver можна запустити на декількох інстансах, збалансувавши між ними трафік. diff --git a/content/uk/docs/reference/glossary/kube-controller-manager.md b/content/uk/docs/reference/glossary/kube-controller-manager.md new file mode 100644 index 0000000000..edd56dcc90 --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-controller-manager.md @@ -0,0 +1,22 @@ +--- +title: kube-controller-manager +id: kube-controller-manager +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-controller-manager/ +# short_description: > +# Control Plane component that runs controller processes. +short_description: > + Компонент площини управління, який запускає процеси контролера. 
+ +aka: +tags: +- architecture +- fundamental +--- + +Компонент площини управління, який запускає процеси {{< glossary_tooltip text="контролера" term_id="controller" >}}. + + + + +За логікою, кожен {{< glossary_tooltip text="контролер" term_id="controller" >}} є окремим процесом. Однак для спрощення їх збирають в один бінарний файл і запускають як єдиний процес. diff --git a/content/uk/docs/reference/glossary/kube-proxy.md b/content/uk/docs/reference/glossary/kube-proxy.md new file mode 100644 index 0000000000..5086226f8e --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-proxy.md @@ -0,0 +1,33 @@ +--- +title: kube-proxy +id: kube-proxy +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-proxy/ +# short_description: > +# `kube-proxy` is a network proxy that runs on each node in the cluster. +short_description: > + `kube-proxy` - це мережеве проксі, що запущене на кожному вузлі кластера. + +aka: +tags: +- fundamental +- networking +--- + +[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="сервісу">}}. + + + + +kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Подів всередині чи поза межами кластера. + + +kube-proxy використовує шар фільтрації пакетів операційної системи, за наявності такого. В іншому випадку kube-proxy скеровує трафік самостійно. diff --git a/content/uk/docs/reference/glossary/kube-scheduler.md b/content/uk/docs/reference/glossary/kube-scheduler.md new file mode 100644 index 0000000000..87f460222c --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-scheduler.md @@ -0,0 +1,22 @@ +--- +title: kube-scheduler +id: kube-scheduler +date: 2018-04-12 +full_link: /docs/reference/generated/kube-scheduler/ +# short_description: > +# Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on. +short_description: > + Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. + +aka: +tags: +- architecture +--- + +Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. + + + + +При виборі вузла враховуються наступні фактори: індивідуальна і колективна потреба у ресурсах, обмеження за апаратним/програмним забезпеченням і політиками, характеристики affinity і anti-affinity, локальність даних, сумісність робочих навантажень і граничні терміни виконання. diff --git a/content/uk/docs/reference/glossary/kubelet.md b/content/uk/docs/reference/glossary/kubelet.md new file mode 100644 index 0000000000..c1178ddf45 --- /dev/null +++ b/content/uk/docs/reference/glossary/kubelet.md @@ -0,0 +1,23 @@ +--- +title: Kubelet +id: kubelet +date: 2018-04-12 +full_link: /docs/reference/generated/kubelet +# short_description: > +# An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. +short_description: > + Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах. + +aka: +tags: +- fundamental +- core-object +--- + +Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах. 
+ + + + +kubelet використовує специфікації PodSpecs, які надаються за допомогою різних механізмів, і забезпечує працездатність і справність усіх контейнерів, що описані у PodSpecs. kubelet керує лише тими контейнерами, що були створені Kubernetes. diff --git a/content/uk/docs/reference/glossary/pod.md b/content/uk/docs/reference/glossary/pod.md new file mode 100644 index 0000000000..b205c0bd1d --- /dev/null +++ b/content/uk/docs/reference/glossary/pod.md @@ -0,0 +1,23 @@ +--- +# title: Pod +title: Под +id: pod +date: 2018-04-12 +full_link: /docs/concepts/workloads/pods/pod-overview/ +# short_description: > +# The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster. +short_description: > + Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу контейнерів, що запущені у вашому кластері. + +aka: +tags: +- core-object +- fundamental +--- + + Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері. + + + + +Як правило, в одному Поді запускається один контейнер. У Поді також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Подами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}. diff --git a/content/uk/docs/reference/glossary/selector.md b/content/uk/docs/reference/glossary/selector.md new file mode 100644 index 0000000000..77eb861f4e --- /dev/null +++ b/content/uk/docs/reference/glossary/selector.md @@ -0,0 +1,22 @@ +--- +# title: Selector +title: Селектор +id: selector +date: 2018-04-12 +full_link: /docs/concepts/overview/working-with-objects/labels/ +# short_description: > +# Allows users to filter a list of resources based on labels. +short_description: > + Дозволяє користувачам фільтрувати ресурси за мітками. + +aka: +tags: +- fundamental +--- + +Дозволяє користувачам фільтрувати ресурси за мітками. + + + + +Селектори застосовуються при створенні запитів для фільтрації ресурсів за {{< glossary_tooltip text="мітками" term_id="label" >}}. diff --git a/content/uk/docs/reference/glossary/service.md b/content/uk/docs/reference/glossary/service.md new file mode 100755 index 0000000000..91407b199a --- /dev/null +++ b/content/uk/docs/reference/glossary/service.md @@ -0,0 +1,24 @@ +--- +title: Сервіс +id: service +date: 2018-04-12 +full_link: /docs/concepts/services-networking/service/ +# A way to expose an application running on a set of Pods as a network service. +short_description: > + Спосіб відкрити доступ до застосунку, що запущений на декількох Подах у вигляді мережевої служби. + +aka: +tags: +- fundamental +- core-object +--- + +Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Подів" term_id="pod" >}} у вигляді мережевої служби. + + + + +Переважно група Подів визначається як Сервіс за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Подів змінить групу Подів, визначених селектором. Сервіс забезпечує надходження мережевого трафіка до актуальної групи Подів для підтримки робочого навантаження. 
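
Для ілюстрації нижче наведено мінімальний приклад маніфесту Сервісу, який за допомогою селектора обирає Поди з міткою `app: my-app`. Назви `my-service` і `my-app`, а також номери портів тут умовні й наведені лише як приклад:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # умовна назва Сервісу
spec:
  selector:
    app: my-app             # Сервіс охоплює всі Поди з міткою app: my-app
  ports:
    - protocol: TCP
      port: 80              # порт, на якому Сервіс приймає трафік
      targetPort: 8080      # порт контейнера у Поді, на який спрямовується трафік
```

Трафік, що надходить на порт 80 цього Сервісу, перенаправляється на порт 8080 контейнерів у вибраних Подах.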
diff --git a/content/uk/docs/setup/_index.md b/content/uk/docs/setup/_index.md new file mode 100644 index 0000000000..f7874f9fc4 --- /dev/null +++ b/content/uk/docs/setup/_index.md @@ -0,0 +1,136 @@ +--- +reviewers: +- brendandburns +- erictune +- mikedanese +no_issue: true +title: Початок роботи +main_menu: true +weight: 20 +content_template: templates/concept +card: + name: setup + weight: 20 + anchors: + - anchor: "#навчальне-середовище" + title: Навчальне середовище + - anchor: "#прод-оточення" + title: Прод оточення +--- + +{{% capture overview %}} + + +У цьому розділі розглянуто різні варіанти налаштування і запуску Kubernetes. + + +Різні рішення Kubernetes відповідають різним вимогам: легкість в експлуатації, безпека, система контролю, наявні ресурси та досвід, необхідний для управління кластером. + + +Ви можете розгорнути Kubernetes кластер на робочому комп'ютері, у хмарі чи в локальному дата-центрі, або обрати керований Kubernetes кластер. Також можна створити індивідуальні рішення на базі різних провайдерів хмарних сервісів або на звичайних серверах. + + +Простіше кажучи, ви можете створити Kubernetes кластер у навчальному і в прод оточеннях. + +{{% /capture %}} + +{{% capture body %}} + + + +## Навчальне оточення {#навчальне-оточення} + + +Для вивчення Kubernetes використовуйте рішення на базі Docker: інструменти, підтримувані спільнотою Kubernetes, або інші інструменти з сімейства проектів для налаштування Kubernetes кластера на локальному комп'ютері. + +{{< table caption="Таблиця інструментів для локального розгортання Kubernetes, які підтримуються спільнотою або входять до сімейства проектів Kubernetes." >}} + +|Спільнота |Сімейство проектів | +| ------------ | -------- | +| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | +| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| | [Minishift](https://docs.okd.io/latest/minishift/)| +| | [MicroK8s](https://microk8s.io/)| +| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | +| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| +| | [k3s](https://k3s.io)| + + +## Прод оточення {#прод-оточення} + + +Обираючи рішення для проду, визначіться, якими з функціональних складових (або абстракцій) Kubernetes кластера ви хочете керувати самі, а управління якими - доручити провайдеру. + + +У Kubernetes кластері можливі наступні абстракції: {{< glossary_tooltip text="застосунки" term_id="applications" >}}, {{< glossary_tooltip text="площина даних" term_id="data-plane" >}}, {{< glossary_tooltip text="площина управління" term_id="control-plane" >}}, {{< glossary_tooltip text="інфраструктура кластера" term_id="cluster-infrastructure" >}} та {{< glossary_tooltip text="операції з кластером" term_id="cluster-operations" >}}. + + +На діаграмі нижче показані можливі абстракції Kubernetes кластера із зазначенням, які з них потребують самостійного управління, а які можуть бути керовані провайдером. + +Рішення для прод оточення![Рішення для прод оточення](/images/docs/KubernetesSolutions.svg) + +{{< table caption="Таблиця рішень для прод оточення містить перелік провайдерів і їх технологій." >}} + +Таблиця рішень для прод оточення містить перелік провайдерів і технологій, які вони пропонують. 
+ +|Провайдери | Керований сервіс | Хмара "під ключ" | Локальний дата-центр | Під замовлення (хмара) | Під замовлення (локальні ВМ)| Під замовлення (сервери без ОС) | +| --------- | ------ | ------ | ------ | ------ | ------ | ----- | +| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | | +| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | | +| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | | +| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | | +| [APPUiO](https://appuio.ch/)  | ✔ | ✔ | ✔ | | | | +| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ | +| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | | +| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | | +| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ | +| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| +| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ +| [Containership](https://containership.io) | ✔ |✔ | | | | +| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | +| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ +| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | +| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ +| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) | +| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | | +| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | +| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | | +| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | +| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | +| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | +| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ | +| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | | +| [KubeSail](https://kubesail.com/) | ✔ | | | | | +| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | +| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | +| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | | +| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | | +| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | | +| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | | +| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix 
Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) | +| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | | +| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/) +| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | | +| [oVirt](https://www.ovirt.org/) | | | | | ✔ | +| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | | +| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔ +| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/) +| [Supergiant](https://supergiant.io/) | |✔ | | | | +| [SUSE](https://www.suse.com/) | | ✔ | | | | +| [SysEleven](https://www.syseleven.io/) | ✔ | | | | | +| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ | +| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | | +| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) +| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | | + +{{% /capture %}} diff --git a/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md new file mode 100644 index 0000000000..90dbfdb914 --- /dev/null +++ b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -0,0 +1,293 @@ +--- +reviewers: +- fgrzadkowski +- jszczepkowski +- directxman12 +title: Horizontal Pod Autoscaler +feature: + title: Горизонтальне масштабування + description: > + Масштабуйте ваш застосунок за допомогою простої команди, інтерфейсу користувача чи автоматично, виходячи із навантаження на ЦПУ. + +content_template: templates/concept +weight: 90 +--- + +{{% capture overview %}} + +The Horizontal Pod Autoscaler automatically scales the number of pods +in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with +[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md) +support, on some other application-provided metrics). 
Note that Horizontal +Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets. + +The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. +The resource determines the behavior of the controller. +The controller periodically adjusts the number of replicas in a replication controller or deployment +to match the observed average CPU utilization to the target specified by user. + +{{% /capture %}} + + +{{% capture body %}} + +## How does the Horizontal Pod Autoscaler work? + +![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg) + +The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled +by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default +value of 15 seconds). + +During each period, the controller manager queries the resource utilization against the +metrics specified in each HorizontalPodAutoscaler definition. The controller manager +obtains the metrics from either the resource metrics API (for per-pod resource metrics), +or the custom metrics API (for all other metrics). + +* For per-pod resource metrics (like CPU), the controller fetches the metrics + from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler. + Then, if a target utilization value is set, the controller calculates the utilization + value as a percentage of the equivalent resource request on the containers in + each pod. If a target raw value is set, the raw metric values are used directly. + The controller then takes the mean of the utilization or the raw value (depending on the type + of target specified) across all targeted pods, and produces a ratio used to scale + the number of desired replicas. + + Please note that if some of the pod's containers do not have the relevant resource request set, + CPU utilization for the pod will not be defined and the autoscaler will + not take any action for that metric. See the [algorithm + details](#algorithm-details) section below for more information about + how the autoscaling algorithm works. + +* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics, + except that it works with raw values, not utilization values. + +* For object metrics and external metrics, a single metric is fetched, which describes + the object in question. This metric is compared to the target + value, to produce a ratio as above. In the `autoscaling/v2beta2` API + version, this value can optionally be divided by the number of pods before the + comparison is made. + +The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`, +`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by +metrics-server, which needs to be launched separately. See +[metrics-server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server) +for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. + +{{< note >}} +{{< feature-state state="deprecated" for_k8s_version="1.11" >}} +Fetching metrics from Heapster is deprecated as of Kubernetes 1.11. +{{< /note >}} + +See [Support for metrics APIs](#support-for-metrics-apis) for more details. + +The autoscaler accesses corresponding scalable controllers (such as replication controllers, deployments, and replica sets) +by using the scale sub-resource. 
Scale is an interface that allows you to dynamically set the number of replicas and examine +each of their current states. More details on scale sub-resource can be found +[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource). + +### Algorithm Details + +From the most basic perspective, the Horizontal Pod Autoscaler controller +operates on the ratio between desired metric value and current metric +value: + +``` +desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )] +``` + +For example, if the current metric value is `200m`, and the desired value +is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 == +2.0` If the current value is instead `50m`, we'll halve the number of +replicas, since `50.0 / 100.0 == 0.5`. We'll skip scaling if the ratio is +sufficiently close to 1.0 (within a globally-configurable tolerance, from +the `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1). + +When a `targetAverageValue` or `targetAverageUtilization` is specified, +the `currentMetricValue` is computed by taking the average of the given +metric across all Pods in the HorizontalPodAutoscaler's scale target. +Before checking the tolerance and deciding on the final values, we take +pod readiness and missing metrics into consideration, however. + +All Pods with a deletion timestamp set (i.e. Pods in the process of being +shut down) and all failed Pods are discarded. + +If a particular Pod is missing metrics, it is set aside for later; Pods +with missing metrics will be used to adjust the final scaling amount. + +When scaling on CPU, if any pod has yet to become ready (i.e. it's still +initializing) *or* the most recent metric point for the pod was before it +became ready, that pod is set aside as well. + +Due to technical constraints, the HorizontalPodAutoscaler controller +cannot exactly determine the first time a pod becomes ready when +determining whether to set aside certain CPU metrics. Instead, it +considers a Pod "not yet ready" if it's unready and transitioned to +unready within a short, configurable window of time since it started. +This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30 +seconds. Once a pod has become ready, it considers any transition to +ready to be the first if it occurred within a longer, configurable time +since it started. This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its +default is 5 minutes. + +The `currentMetricValue / desiredMetricValue` base scale ratio is then +calculated using the remaining pods not set aside or discarded from above. + +If there were any missing metrics, we recompute the average more +conservatively, assuming those pods were consuming 100% of the desired +value in case of a scale down, and 0% in case of a scale up. This dampens +the magnitude of any potential scale. + +Furthermore, if any not-yet-ready pods were present, and we would have +scaled up without factoring in missing metrics or not-yet-ready pods, we +conservatively assume the not-yet-ready pods are consuming 0% of the +desired metric, further dampening the magnitude of a scale up. + +After factoring in the not-yet-ready pods and missing metrics, we +recalculate the usage ratio. If the new ratio reverses the scale +direction, or is within the tolerance, we skip scaling. Otherwise, we use +the new ratio to scale. 
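
As a purely illustrative, hypothetical example (assuming the default tolerance of 0.1 and a target average value of 100m CPU per pod): if three targeted pods are ready, none are missing metrics or being deleted, and their reported usage is 300m, 500m and 700m, the base calculation works out as follows:

```
currentMetricValue = (300m + 500m + 700m) / 3 = 500m
usageRatio         = 500.0 / 100.0 = 5.0     (well outside the 0.1 tolerance)
desiredReplicas    = ceil[3 * 5.0] = 15
```

Had the average instead been 95m, the ratio of 0.95 would fall within the tolerance and the replica count would be left unchanged.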
+ +Note that the *original* value for the average utilization is reported +back via the HorizontalPodAutoscaler status, without factoring in the +not-yet-ready pods or missing metrics, even when the new usage ratio is +used. + +If multiple metrics are specified in a HorizontalPodAutoscaler, this +calculation is done for each metric, and then the largest of the desired +replica counts is chosen. If any of these metrics cannot be converted +into a desired replica count (e.g. due to an error fetching the metrics +from the metrics APIs) and a scale down is suggested by the metrics which +can be fetched, scaling is skipped. This means that the HPA is still capable +of scaling up if one or more metrics give a `desiredReplicas` greater than +the current value. + +Finally, just before HPA scales the target, the scale recommendation is recorded. The +controller considers all recommendations within a configurable window choosing the +highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. +This means that scaledowns will occur gradually, smoothing out the impact of rapidly +fluctuating metric values. + +## API Object + +The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group. +The current stable version, which only includes support for CPU autoscaling, +can be found in the `autoscaling/v1` API version. + +The beta version, which includes support for scaling on memory and custom metrics, +can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2` +are preserved as annotations when working with `autoscaling/v1`. + +More details about the API object can be found at +[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). + +## Support for Horizontal Pod Autoscaler in kubectl + +Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`. +We can create a new autoscaler using `kubectl create` command. +We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`. +Finally, we can delete an autoscaler using `kubectl delete hpa`. + +In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler. +For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80` +will create an autoscaler for replication set *foo*, with target CPU utilization set to `80%` +and the number of replicas between 2 and 5. +The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). + + +## Autoscaling during rolling update + +Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly, +or by using the deployment object, which manages the underlying replica sets for you. +Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object, +it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets. + +Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers, +i.e. 
you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`). +The reason this doesn't work is that when rolling update creates a new replication controller, +the Horizontal Pod Autoscaler will not be bound to the new replication controller. + +## Support for cooldown/delay + +When managing the scale of a group of replicas using the Horizontal Pod Autoscaler, +it is possible that the number of replicas keeps fluctuating frequently due to the +dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*. + +Starting from v1.6, a cluster operator can mitigate this problem by tuning +the global HPA settings exposed as flags for the `kube-controller-manager` component: + +Starting from v1.12, a new algorithmic update removes the need for the +upscale delay. + +- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a + duration that specifies how long the autoscaler has to wait before another + downscale operation can be performed after the current one has completed. + The default value is 5 minutes (`5m0s`). + +{{< note >}} +When tuning these parameter values, a cluster operator should be aware of the possible +consequences. If the delay (cooldown) value is set too long, there could be complaints +that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if +the delay value is set too short, the scale of the replicas set may keep thrashing as +usual. +{{< /note >}} + +## Support for multiple metrics + +Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API +version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod +Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the +proposed scales will be used as the new scale. + +## Support for custom metrics + +{{< note >}} +Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations. +Support for these annotations was removed in Kubernetes 1.6 in favor of the new autoscaling API. While the old method for collecting +custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former +annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller. +{{< /note >}} + +Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler. +You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API. +Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics. + +See [Support for metrics APIs](#support-for-metrics-apis) for the requirements. + +## Support for metrics APIs + +By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these +APIs, cluster administrators must ensure that: + +* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled. + +* The corresponding APIs are registered: + + * For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server). + It can be launched as a cluster addon. + + * For custom metrics, this is the `custom.metrics.k8s.io` API. 
It's provided by "adapter" API servers provided by metrics solution vendors. + Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api). + If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started. + + * For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above. + +* The `--horizontal-pod-autoscaler-use-rest-clients` is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated. + +For more information on these different metrics paths and how they differ please see the relevant design proposals for +[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md), +[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) +and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md). + +For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics) +and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md). +* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). +* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). + +{{% /capture %}} diff --git a/content/uk/docs/templates/feature-state-alpha.txt b/content/uk/docs/templates/feature-state-alpha.txt new file mode 100644 index 0000000000..e061aa52be --- /dev/null +++ b/content/uk/docs/templates/feature-state-alpha.txt @@ -0,0 +1,7 @@ +Наразі цей компонент у статусі *alpha*, що означає: + +* Назва версії містить слово alpha (напр. v1alpha1). +* Увімкнення цього компонента може призвести до помилок у системі. За умовчанням цей компонент вимкнутий. +* Підтримка цього компонентa може бути припинена у будь-який час без попередження. +* API може стати несумісним у наступних релізах без попередження. +* Рекомендований до використання лише у тестових кластерах через підвищений ризик виникнення помилок і відсутність довгострокової підтримки. diff --git a/content/uk/docs/templates/feature-state-beta.txt b/content/uk/docs/templates/feature-state-beta.txt new file mode 100644 index 0000000000..3790be73f4 --- /dev/null +++ b/content/uk/docs/templates/feature-state-beta.txt @@ -0,0 +1,22 @@ + +Наразі цей компонент у статусі *beta*, що означає: + + +* Назва версії містить слово beta (наприклад, v2beta3). + +* Код добре відтестований. Увімкнення цього компонента не загрожує роботі системи. Компонент увімкнутий за умовчанням. + +* Загальна підтримка цього компонента триватиме, однак деталі можуть змінитися. + +* У наступній beta- чи стабільній версії схема та/або семантика об'єктів може змінитися і стати несумісною. 
У такому випадку ми надамо інструкції для міграції на наступну версію. Це може призвести до видалення, редагування і перестворення об'єктів API. У процесі редагування вам, можливо, знадобиться продумати зміни в об'єкті. Це може призвести до недоступності застосунків, для роботи яких цей компонент є істотно важливим. + +* Використання компонента рекомендоване лише у некритичних для безперебійної діяльності випадках через ризик несумісних змін у подальших релізах. Це обмеження може бути пом'якшене у випадку декількох кластерів, які можна оновлювати окремо. + +* **Будь ласка, спробуйте beta-версії наших компонентів і поділіться з нами своєю думкою! Після того, як компонент вийде зі статусу beta, нам буде важче змінити його.** diff --git a/content/uk/docs/templates/feature-state-deprecated.txt b/content/uk/docs/templates/feature-state-deprecated.txt new file mode 100644 index 0000000000..7c35b3fc2f --- /dev/null +++ b/content/uk/docs/templates/feature-state-deprecated.txt @@ -0,0 +1,4 @@ + + +Цей компонент є *застарілим*. Дізнатися більше про цей статус ви можете зі статті [Політика Kubernetes щодо застарілих компонентів](/docs/reference/deprecation-policy/). diff --git a/content/uk/docs/templates/feature-state-stable.txt b/content/uk/docs/templates/feature-state-stable.txt new file mode 100644 index 0000000000..a794f5ceb6 --- /dev/null +++ b/content/uk/docs/templates/feature-state-stable.txt @@ -0,0 +1,11 @@ + + +Цей компонент є *стабільним*, що означає: + + +* Назва версії становить vX, де X є цілим числом. + +* Стабільні версії компонентів з'являтимуться у багатьох наступних версіях програмного забезпечення. \ No newline at end of file diff --git a/content/uk/docs/templates/index.md b/content/uk/docs/templates/index.md new file mode 100644 index 0000000000..0e0b890542 --- /dev/null +++ b/content/uk/docs/templates/index.md @@ -0,0 +1,15 @@ +--- +headless: true + +resources: +- src: "*alpha*" + title: "alpha" +- src: "*beta*" + title: "beta" +- src: "*deprecated*" +# title: "deprecated" + title: "застарілий" +- src: "*stable*" +# title: "stable" + title: "стабільний" +--- \ No newline at end of file diff --git a/content/uk/docs/tutorials/_index.md b/content/uk/docs/tutorials/_index.md new file mode 100644 index 0000000000..ad03de23df --- /dev/null +++ b/content/uk/docs/tutorials/_index.md @@ -0,0 +1,90 @@ +--- +#title: Tutorials +title: Навчальні матеріали +main_menu: true +weight: 60 +content_template: templates/concept +--- + +{{% capture overview %}} + + +У цьому розділі документації Kubernetes зібрані навчальні матеріали. Кожний матеріал показує, як досягти окремої мети, що більша за одне [завдання](/docs/tasks/). Зазвичай навчальний матеріал має декілька розділів, кожен з яких містить певну послідовність дій. До ознайомлення з навчальними матеріалами вам, можливо, знадобиться додати у закладки сторінку з [Глосарієм](/docs/reference/glossary/) для подальшого консультування. + +{{% /capture %}} + +{{% capture body %}} + + +## Основи + + +* [Основи Kubernetes](/docs/tutorials/kubernetes-basics/) - детальний навчальний матеріал з інтерактивними уроками, що допоможе вам зрозуміти Kubernetes і спробувати його базову функціональність. 
+ +* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) + +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) + +* [Привіт Minikube](/docs/tutorials/hello-minikube/) + + +## Конфігурація + +* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) + +## Застосунки без стану (Stateless Applications) + +* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) + +* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) + +## Застосунки зі станом (Stateful Applications) + +* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) + +* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) + +* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/) + +* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/) + +## CI/CD Pipeline + +* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview) + +* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2) + +* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3) + +* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4) + +## Кластери + +* [AppArmor](/docs/tutorials/clusters/apparmor/) + +## Сервіси + +* [Using Source IP](/docs/tutorials/services/source-ip/) + +{{% /capture %}} + +{{% capture whatsnext %}} + + +Якщо ви хочете написати навчальний матеріал, у статті +[Використання шаблонів сторінок](/docs/home/contribute/page-templates/) +ви знайдете інформацію про тип навчальної сторінки і шаблон. + +{{% /capture %}} diff --git a/content/uk/docs/tutorials/hello-minikube.md b/content/uk/docs/tutorials/hello-minikube.md new file mode 100644 index 0000000000..356426112b --- /dev/null +++ b/content/uk/docs/tutorials/hello-minikube.md @@ -0,0 +1,394 @@ +--- +#title: Hello Minikube +title: Привіт Minikube +content_template: templates/tutorial +weight: 5 +menu: + main: + #title: "Get Started" + title: "Початок роботи" + weight: 10 + #post: > + #

Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

+ post: > +

Готові попрацювати? Створимо простий Kubernetes кластер для запуску Node.js застосунку "Hello World".

+card: + #name: tutorials + name: навчальні матеріали + weight: 10 +--- + +{{% capture overview %}} + + +З цього навчального матеріалу ви дізнаєтесь, як запустити у Kubernetes простий Hello World застосунок на Node.js за допомогою [Minikube](/docs/setup/learning-environment/minikube) і Katacoda. Katacoda надає безплатне Kubernetes середовище, що доступне у вашому браузері. + + +{{< note >}} +Також ви можете навчатись за цим матеріалом, якщо встановили [Minikube локально](/docs/tasks/tools/install-minikube/). +{{< /note >}} + +{{% /capture %}} + +{{% capture objectives %}} + + +* Розгорнути Hello World застосунок у Minikube. + +* Запустити застосунок. + +* Переглянути логи застосунку. + +{{% /capture %}} + +{{% capture prerequisites %}} + + +У цьому навчальному матеріалі ми використовуємо образ контейнера, зібраний із наступних файлів: + +{{< codenew language="js" file="minikube/server.js" >}} + +{{< codenew language="conf" file="minikube/Dockerfile" >}} + + +Більше інформації про команду `docker build` ви знайдете у [документації Docker](https://docs.docker.com/engine/reference/commandline/build/). + +{{% /capture %}} + +{{% capture lessoncontent %}} + + +## Створення Minikube кластера + + +1. Натисніть кнопку **Запуск термінала** + + {{< kat-button >}} + + + {{< note >}}Якщо Minikube встановлений локально, виконайте команду `minikube start`.{{< /note >}} + + +2. Відкрийте Kubernetes дашборд у браузері: + + ```shell + minikube dashboard + ``` + + +3. Тільки для Katacoda: у верхній частині вікна термінала натисніть знак плюс, а потім -- **Select port to view on Host 1**. + + +4. Тільки для Katacoda: введіть `30000`, а потім натисніть **Display Port**. + + +## Створення Deployment + + +[*Под*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Под має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Пода і перезапускає контейнер Пода, якщо контейнер перестає працювати. Створювати і масштабувати Поди рекомендується за допомогою Deployment'ів. + + +1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Подом. Под запускає контейнер на основі наданого Docker образу. + + ```shell + kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + ``` + + +2. Перегляньте інформацію про запущений Deployment: + + ```shell + kubectl get deployments + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + hello-node 1/1 1 1 1m + ``` + + +3. Перегляньте інформацію про запущені Поди: + + ```shell + kubectl get pods + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY STATUS RESTARTS AGE + hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m + ``` + + +4. Перегляньте події кластера: + + ```shell + kubectl get events + ``` + + +5. Перегляньте конфігурацію `kubectl`: + + ```shell + kubectl config view + ``` + + + {{< note >}}Більше про команди `kubectl` ви можете дізнатися зі статті [Загальна інформація про kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}} + + +## Створення Сервісу + + +За умовчанням, Под доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Под необхідно відкрити як Kubernetes [*Сервіс*](/docs/concepts/services-networking/service/). + + +1. 
Відкрийте Под для публічного доступу з інтернету за допомогою команди `kubectl expose`: + + ```shell + kubectl expose deployment hello-node --type=LoadBalancer --port=8080 + ``` + + + Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Сервісу за межами кластера. + + +2. Перегляньте інформацію за Сервісом, який ви щойно створили: + + ```shell + kubectl get services + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-node LoadBalancer 10.108.144.78 8080:30369/TCP 21s + kubernetes ClusterIP 10.96.0.1 443/TCP 23m + ``` + + + Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Сервісу надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Сервіс доступним ззовні за допомогою команди `minikube service`. + + +3. Виконайте наступну команду: + + ```shell + minikube service hello-node + ``` + + +4. Тільки для Katacoda: натисніть знак плюс, а потім -- **Select port to view on Host 1**. + + +5. Тільки для Katacoda: запишіть п'ятизначний номер порту, що відображається напроти `8080` у виводі сервісу. Номер цього порту генерується довільно і тому може бути іншим у вашому випадку. Введіть номер порту у призначене для цього текстове поле і натисніть Display Port. У нашому прикладі номер порту `30369`. + + + Це відкриє вікно браузера, в якому запущений ваш застосунок, і покаже повідомлення "Hello World". + + +## Увімкнення розширень + + +Minikube має ряд вбудованих {{< glossary_tooltip text="розширень" term_id="addons" >}}, які можна увімкнути, вимкнути і відкрити у локальному Kubernetes оточенні. + + +1. Перегляньте перелік підтримуваних розширень: + + ```shell + minikube addons list + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + addon-manager: enabled + dashboard: enabled + default-storageclass: enabled + efk: disabled + freshpod: disabled + gvisor: disabled + helm-tiller: disabled + ingress: disabled + ingress-dns: disabled + logviewer: disabled + metrics-server: disabled + nvidia-driver-installer: disabled + nvidia-gpu-device-plugin: disabled + registry: disabled + registry-creds: disabled + storage-provisioner: enabled + storage-provisioner-gluster: disabled + ``` + + +2. Увімкніть розширення, наприклад `metrics-server`: + + ```shell + minikube addons enable metrics-server + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + metrics-server was successfully enabled + ``` + + +3. Перегляньте інформацію про Под і Сервіс, які ви щойно створили: + + ```shell + kubectl get pod,svc -n kube-system + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY STATUS RESTARTS AGE + pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m + pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m + pod/metrics-server-67fb648c5 1/1 Running 0 26s + pod/etcd-minikube 1/1 Running 0 34m + pod/influxdb-grafana-b29w8 2/2 Running 0 26s + pod/kube-addon-manager-minikube 1/1 Running 0 34m + pod/kube-apiserver-minikube 1/1 Running 0 34m + pod/kube-controller-manager-minikube 1/1 Running 0 34m + pod/kube-proxy-rnlps 1/1 Running 0 34m + pod/kube-scheduler-minikube 1/1 Running 0 34m + pod/storage-provisioner 1/1 Running 0 34m + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/metrics-server ClusterIP 10.96.241.45 80/TCP 26s + service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 34m + service/monitoring-grafana NodePort 10.99.24.54 80:30002/TCP 26s + service/monitoring-influxdb ClusterIP 10.111.169.94 8083/TCP,8086/TCP 26s + ``` + + +4. 
Вимкніть `metrics-server`: + + ```shell + minikube addons disable metrics-server + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + metrics-server was successfully disabled + ``` + + +## Вивільнення ресурсів + + +Тепер ви можете видалити ресурси, які створили у вашому кластері: + +```shell +kubectl delete service hello-node +kubectl delete deployment hello-node +``` + + +За бажанням, зупиніть віртуальну машину (ВМ) з Minikube: + +```shell +minikube stop +``` + + +За бажанням, видаліть ВМ з Minikube: + +```shell +minikube delete +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + + +* Дізнайтеся більше про [об'єкти Deployment](/docs/concepts/workloads/controllers/deployment/). + +* Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/). + +* Дізнайтеся більше про [об'єкти сервісу](/docs/concepts/services-networking/service/). + +{{% /capture %}} diff --git a/content/uk/docs/tutorials/kubernetes-basics/_index.html b/content/uk/docs/tutorials/kubernetes-basics/_index.html new file mode 100644 index 0000000000..466b8b3437 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/_index.html @@ -0,0 +1,138 @@ +--- +title: Дізнатися про основи Kubernetes +linkTitle: Основи Kubernetes +weight: 10 +card: + name: навчальні матеріали + weight: 20 + title: Знайомство з основами +--- + + + + + + + + + +
Основи Kubernetes

Цей навчальний матеріал ознайомить вас з основами системи оркестрації Kubernetes кластера. Кожен модуль містить загальну інформацію щодо основної функціональності і концепцій Kubernetes, а також інтерактивний онлайн-урок. Завдяки цим інтерактивним урокам ви зможете самостійно керувати простим кластером і розгорнутими в ньому контейнеризованими застосунками.

З інтерактивних уроків ви дізнаєтесь:

• як розгорнути контейнеризований застосунок у кластері.
• як масштабувати Deployment.
• як розгорнути нову версію контейнеризованого застосунку.
• як відлагодити контейнеризований застосунок.

Навчальні матеріали використовують Katacoda для запуску у вашому браузері віртуального термінала, в якому запущено Minikube - невеликий локально розгорнутий Kubernetes, що може працювати будь-де. Вам не потрібно встановлювати або налаштовувати жодне програмне забезпечення: кожен інтерактивний урок запускається просто у вашому браузері.

Чим Kubernetes може бути корисний для вас?

Від сучасних вебсервісів користувачі очікують доступності 24/7, а розробники - можливості розгортати нові версії цих застосунків по кілька разів на день. Контейнеризація, що допомагає упакувати програмне забезпечення, якнайкраще сприяє цим цілям. Вона дозволяє випускати і оновлювати застосунки легко, швидко та без простою. Із Kubernetes ви можете бути певні, що ваші контейнеризовані застосунки запущені там і тоді, де ви цього хочете, а також забезпечені усіма необхідними для роботи ресурсами та інструментами. Kubernetes - це висококласна платформа з відкритим вихідним кодом, в основі якої - накопичений досвід оркестрації контейнерів від Google, поєднаний із найкращими ідеями і практиками від спільноти.
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md new file mode 100644 index 0000000000..9173c90d34 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md @@ -0,0 +1,4 @@ +--- +title: Створення кластера +weight: 10 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html new file mode 100644 index 0000000000..20d89f23ca --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -0,0 +1,37 @@ +--- +title: Інтерактивний урок - Створення кластера +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+
+
+ + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html new file mode 100644 index 0000000000..1a4e179a69 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -0,0 +1,152 @@ +--- +title: Використання Minikube для створення кластера +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+ +

Цілі

+
    + +
  • Зрозуміти, що таке Kubernetes кластер.
  • + +
  • Зрозуміти, що таке Minikube.
  • + +
  • Запустити Kubernetes кластер за допомогою онлайн-термінала.
  • +
+
+ +
+ +

Kubernetes кластери

+ +

+ Kubernetes координує високодоступний кластер комп'ютерів, з'єднаних таким чином, щоб працювати як одне ціле. Абстракції Kubernetes дозволяють вам розгортати контейнеризовані застосунки в кластері без конкретної прив'язки до окремих машин. Для того, щоб скористатися цією новою моделлю розгортання, застосунки потрібно упакувати таким чином, щоб звільнити їх від прив'язки до окремих хостів, тобто контейнеризувати. Контейнеризовані застосунки більш гнучкі і доступні, ніж попередні моделі розгортання, що передбачали встановлення застосунків безпосередньо на призначені для цього машини у вигляді програмного забезпечення, яке глибоко інтегрувалося із хостом. Kubernetes дозволяє автоматизувати розподіл і запуск контейнерів застосунку у кластері, а це набагато ефективніше. Kubernetes - це платформа з відкритим вихідним кодом, готова для використання у проді. +

+ +

Kubernetes кластер складається з двох типів ресурсів: +

    +
  • master, що координує роботу кластера
  • +
  • вузли (nodes) - робочі машини, на яких запущені застосунки
  • +
+

+
+ +
+
+ +

Зміст:

+
    + +
  • Kubernetes кластер
  • + +
  • Minikube
  • +
+
+
+ +

+ Kubernetes - це довершена платформа з відкритим вихідним кодом, що оркеструє розміщення і запуск контейнерів застосунку всередині та між комп'ютерними кластерами. +

+
+
+
+
+ +
+
+

Схема кластера

+
+
+ +
+
+

+
+
+
+ +
+
+ +

Master відповідає за керування кластером. Master координує всі процеси у вашому кластері, такі як запуск застосунків, підтримка їх бажаного стану, масштабування застосунків і викатка оновлень.

+ + +

Вузол (node) - це ВМ або фізичний комп'ютер, що виступає у ролі робочої машини в Kubernetes кластері. Кожен вузол має kubelet - агент для управління вузлом і обміну даними з Kubernetes master. Також на вузлі мають бути встановлені інструменти для виконання операцій з контейнерами, такі як Docker або rkt. Kubernetes кластер у проді повинен складатися як мінімум із трьох вузлів.

+ +
+
+
+ +

Master'и керують кластером, а вузли використовуються для запуску застосунків.

+
+
+
+ +
+
+ +

Коли ви розгортаєте застосунки у Kubernetes, ви кажете master-вузлу запустити контейнери застосунку. Master розподіляє контейнери для запуску на вузлах кластера. Для обміну даними з master вузли використовують Kubernetes API, який надається master-вузлом. Кінцеві користувачі також можуть взаємодіяти із кластером безпосередньо через Kubernetes API.
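Наприклад (це лише ілюстрація, яка передбачає, що кластер уже запущений і kubectl налаштований), звернутися до Kubernetes API можна через локальний проксі:

```shell
# Запустити проксі до Kubernetes API (типово на localhost:8001)
kubectl proxy
# В іншому терміналі: простий запит до API через проксі
curl http://localhost:8001/version
```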

+ + +

Kubernetes кластер можна розгорнути як на фізичних, так і на віртуальних серверах. Щоб розпочати розробку під Kubernetes, ви можете скористатися Minikube - спрощеною реалізацією Kubernetes. Minikube створює на вашому локальному комп'ютері ВМ, на якій розгортає простий кластер з одного вузла. Існують версії Minikube для операційних систем Linux, macOS та Windows. Minikube CLI надає основні операції для роботи з вашим кластером, такі як start, stop, status і delete. Однак у цьому уроці ви використовуватимете онлайн термінал із вже встановленим Minikube.
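Для прикладу, якщо Minikube встановлено локально (а не в онлайн-терміналі), створення і перевірка кластера можуть виглядати приблизно так:

```shell
# Створити локальний кластер з одного вузла
minikube start
# Перевірити стан Minikube
minikube status
# Переконатися, що вузол кластера видно через kubectl
kubectl get nodes
```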

+ + +

Тепер ви знаєте, що таке Kubernetes. Тож давайте перейдемо до інтерактивного уроку і створимо ваш перший кластер!

+ +
+
+
+ + + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md new file mode 100644 index 0000000000..a9c1ff2376 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md @@ -0,0 +1,4 @@ +--- +title: Розгортання застосунку +weight: 20 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html new file mode 100644 index 0000000000..d89a05a95b --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -0,0 +1,41 @@ +--- +title: Інтерактивний урок - Розгортання застосунку +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+ +
+
+ +
+ + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html new file mode 100644 index 0000000000..ce9229ca85 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -0,0 +1,151 @@ +--- +title: Використання kubectl для створення Deployment'а +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+ +

Цілі

+
    + +
  • Дізнатися, що таке Deployment застосунків.
  • + +
  • Розгорнути свій перший застосунок у Kubernetes за допомогою kubectl.
  • +
+
+ +
+ +

Процеси Kubernetes Deployment

+ +

+ Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Поди для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Поди по окремих вузлах кластера. +

+ + +

Після створення Поди застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Под, зупинив роботу або був видалений, Deployment контролер переміщає цей Под на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.

+ + +

До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Подів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.

+ +
+ +
+
+ +

Зміст:

+
    + +
  • Deployment'и
  • +
  • Kubectl
  • +
+
+
+ +

+ Deployment відповідає за створення і оновлення Подів для вашого застосунку +

+
+
+
+
+ +
+
+

Як розгорнути ваш перший застосунок у Kubernetes

+
+
+ +
+
+

+
+
+
+ +
+
+ + +

Ви можете створити Deployment і керувати ним за допомогою командного рядка Kubernetes - kubectl. kubectl взаємодіє з кластером через API Kubernetes. У цьому модулі ви вивчите найпоширеніші команди kubectl для створення Deployment'ів, які запускатимуть ваші застосунки у Kubernetes кластері.

+ + +

Коли ви створюєте Deployment, вам необхідно задати образ контейнера для вашого застосунку і скільки реплік ви хочете запустити. Згодом цю інформацію можна змінити, оновивши Deployment. У навчальних модулях 5 і 6 йдеться про те, як масштабувати і оновлювати Deployment'и.
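Нижче наведено приблизний приклад таких команд (ім'я Deployment'а та образ контейнера тут умовні):

```shell
# Створити Deployment із заданим образом контейнера
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
# Переглянути створені Deployment'и
kubectl get deployments
```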

+ + + + +
+
+
+ +

Для того, щоб розгортати застосунки в Kubernetes, їх потрібно упакувати в один із підтримуваних форматів контейнерів

+
+
+
+ +
+
+ +

+ Для створення Deployment'а ви використовуватимете застосунок, написаний на Node.js і упакований в Docker контейнер. (Якщо ви ще не пробували створити Node.js застосунок і розгорнути його у контейнері, радимо почати саме з цього; інструкції ви знайдете у навчальному матеріалі Привіт Minikube). +

+ + +

Тепер ви знаєте, що таке Deployment. Тож давайте перейдемо до інтерактивного уроку і розгорнемо ваш перший застосунок!

+
+
+
+ + + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md new file mode 100644 index 0000000000..93ac6a7774 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md @@ -0,0 +1,4 @@ +--- +title: Вивчення застосунку +weight: 30 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html new file mode 100644 index 0000000000..a4bec18079 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -0,0 +1,41 @@ +--- +title: Інтерактивний урок - Вивчення застосунку +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ +
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+ +
+
+
+ + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html new file mode 100644 index 0000000000..93899fd77f --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -0,0 +1,200 @@ +--- +title: Ознайомлення з Подами і вузлами (nodes) +weight: 10 +--- + + + + + + + + + + +
+ +
+ +
+ +
+ +

Цілі

+
    + +
  • Дізнатися, що таке Поди Kubernetes.
  • + +
  • Дізнатися, що таке вузли Kubernetes.
  • + +
  • Діагностика розгорнутих застосунків.
  • +
+
+ +
+ +

Поди Kubernetes

+ +

Коли ви створили Deployment у модулі 2, Kubernetes створив Под, щоб розмістити ваш застосунок. Под - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:

+
    + +
  • Спільні сховища даних, або Volumes
  • + +
  • Мережа, адже кожен Под у кластері має унікальну IP-адресу
  • + +
  • Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів
  • +
+ +

Под моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Поді може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Пода мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.

+ + +

Под є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Поди вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Под прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Поди розподіляються по інших доступних вузлах кластера.
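Для прикладу, побачити, на якому саме вузлі запущено кожен Под, можна такою командою:

```shell
# Вивести Поди разом із вузлами, на яких вони запущені
kubectl get pods -o wide
```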

+ +
+
+
+

Зміст:

+
    +
  • Поди
  • +
  • Вузли
  • +
  • Основні команди kubectl
  • +
+
+
+ +

+ Под - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити. +

+
+
+
+
+ +
+
+

Узагальнена схема Подів

+
+
+ +
+
+

+
+
+
+ +
+
+ +

Вузли

+ +

Под завжди запускається на вузлі. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Подів. Kubernetes master автоматично розподіляє Поди по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.
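Наприклад, переглянути вузли кластера і детальну інформацію про окремий вузол можна так (ім'я вузла тут умовне - підставте власне):

```shell
# Перелік вузлів кластера
kubectl get nodes
# Детальна інформація про конкретний вузол
kubectl describe node minikube
```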

+ + +

На кожному вузлі Kubernetes запущені як мінімум:

+
    + +
  • kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Поди і контейнери, запущені на машині.
  • + +
  • оточення для контейнерів (таке як Docker, rkt), що забезпечує завантаження образу контейнера з реєстру, розпакування контейнера і запуск застосунку.
  • + +
+ +
+
+
+ +

Контейнери повинні бути разом в одному Поді, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск.

+
+
+
+ +
+ +
+
+

Узагальнена схема вузлів

+
+
+ +
+
+

+
+
+
+ +
+
+ +

Діагностика за допомогою kubectl

+ +

У модулі 2 ви вже використовували інтерфейс командного рядка kubectl. У модулі 3 ви продовжуватимете користуватися ним для отримання інформації про застосунки та оточення, в яких вони розгорнуті. Нижченаведені команди kubectl допоможуть вам виконати наступні поширені дії:

+
    + +
  • kubectl get - відобразити список ресурсів
  • + +
  • kubectl describe - показати детальну інформацію про ресурс
  • + +
  • kubectl logs - вивести логи контейнера, розміщеного в Поді
  • + +
  • kubectl exec - виконати команду в контейнері, розміщеному в Поді
  • +
+ + +

За допомогою цих команд ви можете подивитись, коли і в якому оточенні був розгорнутий застосунок, перевірити його поточний статус і конфігурацію.
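Умовний приклад використання цих команд (імена Подів підставте власні, взявши їх із виводу kubectl get pods):

```shell
kubectl get pods
kubectl describe pod <ім'я-пода>
kubectl logs <ім'я-пода>
kubectl exec <ім'я-пода> -- env
```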

+ + +

А зараз, коли ми дізналися більше про складові нашого кластера і командний рядок, давайте детальніше розглянемо наш застосунок.

+ +
+
+
+ +

Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Подів.

+
+
+
+
+ + + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md new file mode 100644 index 0000000000..ef49a1b632 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md @@ -0,0 +1,4 @@ +--- +title: Відкриття доступу до застосунку за межами кластера +weight: 40 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html new file mode 100644 index 0000000000..4f3a87928d --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -0,0 +1,38 @@ +--- +title: Інтерактивний урок - Відкриття доступу до застосунку +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+
+
+
+ + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html new file mode 100644 index 0000000000..0ddad3b8da --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -0,0 +1,169 @@ +--- +#title: Using a Service to Expose Your App +title: Використання Cервісу для відкриття доступу до застосунку за межами кластера +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+
+ +

Цілі

+
    + +
  • Дізнатись, що таке Cервіс у Kubernetes
  • + +
  • Зрозуміти, яке відношення до Cервісу мають мітки та LabelSelector
  • + +
  • Відкрити доступ до застосунку за межами Kubernetes кластера, використовуючи Cервіс
  • +
+
+ +
+ +

Загальна інформація про Kubernetes Cервіси

+ + +

Поди Kubernetes "смертні" і мають власний життєвий цикл. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Поди, запущені на ньому. ReplicaSet здатна динамічно повернути кластер до бажаного стану шляхом створення нових Подів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Поду. Водночас, кожний Под у Kubernetes кластері має унікальну IP-адресу, навіть Поди на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Подами для того, щоб ваші застосунки продовжували працювати.

+ + +

Сервіс у Kubernetes - це абстракція, що визначає логічний набір Подів і політику доступу до них. Сервіси уможливлюють слабку зв'язаність між залежними Подами. Для визначення Сервісу використовують YAML-файл (рекомендовано) або JSON, як для решти об'єктів Kubernetes. Набір Подів, призначених для Сервісу, зазвичай визначається через LabelSelector (нижче пояснюється, чому параметр selector іноді не включають у специфікацію сервісу).

+ + +

Попри те, що кожен Под має унікальний IP, ці IP-адреси не видні за межами кластера без Сервісу. Сервіси уможливлюють надходження трафіка до ваших застосунків. Відкрити Сервіс можна по-різному, вказавши потрібний type у ServiceSpec:

+
    + +
  • ClusterIP (типове налаштування) - відкриває доступ до Сервісу у кластері за внутрішнім IP. Цей тип робить Сервіс доступним лише у межах кластера.
  • + +
  • NodePort - відкриває доступ до Сервісу на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Сервіс доступним поза межами кластера, використовуючи <NodeIP>:<NodePort>. Є надмножиною відносно ClusterIP.
  • + +
  • LoadBalancer - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Сервісу статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.
  • + +
  • ExternalName - відкриває доступ до Сервісу, використовуючи довільне ім'я (визначається параметром externalName у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії kube-dns 1.7 і вище.
  • +
+ +

Більше інформації про різні типи Сервісів ви знайдете у навчальному матеріалі Використання вихідної IP-адреси. Дивіться також Поєднання застосунків з Сервісами.

+ +

Також зауважте, що для деяких сценаріїв використання Сервісів параметр selector не задається у специфікації Сервісу. Сервіс, створений без визначення параметра selector, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Сервіс на конкретні кінцеві точки (endpoints). Інший випадок, коли селектор може бути не потрібний - використання строго заданого параметра type: ExternalName.
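Наприклад, відкрити доступ до умовного Deployment'а за допомогою Сервісу типу NodePort і переглянути результат можна так (ім'я Deployment'а і порт тут лише для ілюстрації):

```shell
# Створити Сервіс типу NodePort для вже наявного Deployment'а
kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080
# Переглянути створені Сервіси
kubectl get services
```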

+
+
+
+ +

Зміст

+
    + +
  • Відкриття Подів для зовнішнього трафіка
  • + +
  • Балансування навантаження трафіка між Подами
  • + +
  • Використання міток
  • +
+
+
+ +

Сервіс Kubernetes - це шар абстракції, який визначає логічний набір Подів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Подів.

+
+
+
+
+ +
+
+ +

Сервіси і мітки

+
+
+ +
+
+

+
+
+ +
+
+ +

Сервіс маршрутизує трафік між Подами, що входять до його складу. Сервіс - це абстракція, завдяки якій Поди в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Сервіси в Kubernetes здійснюють виявлення і маршрутизацію між залежними Подами (як наприклад, фронтенд- і бекенд-компоненти застосунку).

+ +

Сервіси співвідносяться з набором Подів за допомогою міток і селекторів -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:

+
    + +
  • Позначення об'єктів для дев, тест і прод оточень
  • + +
  • Прикріплення тегу версії
  • + +
  • Класифікування об'єктів за допомогою тегів
  • +
+ +
+
+
+ +

Ви можете створити Сервіс одночасно із Deployment'ом, використавши прапорець
--expose в kubectl.

+
+
+
+ +
+ +
+
+

+
+
+
+
+
+ +

Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Сервісу і прикріпимо мітки.
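Умовний приклад роботи з мітками (ключ і значення мітки тут довільні):

```shell
# Додати мітку до Пода (підставте власне ім'я Пода)
kubectl label pod <ім'я-пода> version=v1
# Знайти Поди за цією міткою
kubectl get pods -l version=v1
```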

+
+
+
+ +
+
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md new file mode 100644 index 0000000000..c6e1a94dc1 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md @@ -0,0 +1,4 @@ +--- +title: Масштабування застосунку +weight: 50 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html new file mode 100644 index 0000000000..540ae92b23 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -0,0 +1,40 @@ +--- +title: Інтерактивний урок - Масштабування застосунку +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+
+
+
+ + +
+ + + +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html new file mode 100644 index 0000000000..c2318da8ed --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -0,0 +1,145 @@ +--- +title: Запуск вашого застосунку на декількох Подах +weight: 10 +--- + + + + + + + + + +
+ +
+ +
+ +
+ +

Цілі

+
    + +
  • Масштабувати застосунок за допомогою kubectl.
  • +
+
+ +
+ +

Масштабування застосунку

+ + +

У попередніх модулях ми створили Deployment і відкрили його для зовнішнього трафіка за допомогою Сервісу. Deployment створив лише один Под для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.

+ + +

Масштабування досягається шляхом зміни кількості реплік у Deployment'і.

+ +
+
+
+ +

Зміст:

+
    + +
  • Масштабування Deployment'а
  • +
+
+
+ +

Кількість Подів можна вказати одразу під час створення Deployment'а за допомогою параметра --replicas команди kubectl run

+
+
+
+
+ +
+
+

Загальна інформація про масштабування

+
+
+ +
+
+
+ +
+
+ +
+ +
+
+ + +

Масштабування Deployment'а забезпечує створення нових Подів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Подів відповідно до нового бажаного стану. Kubernetes також підтримує автоматичне масштабування, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Подів у визначеному Deployment'і.
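Наприклад, масштабування умовного Deployment'а до чотирьох реплік і перевірка результату можуть виглядати так:

```shell
# Змінити кількість реплік у Deployment'і (ім'я умовне)
kubectl scale deployment/kubernetes-bootcamp --replicas=4
# Перевірити кількість готових реплік
kubectl get deployments
```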

+ + +

Коли застосунок запущено на декількох Подах, необхідно розподілити між ними трафік. Сервіси мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Подами відкритого Deployment'а. Сервіси безперервно моніторять запущені Поди за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Поди.
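Перевірити, на які саме Поди Сервіс наразі розподіляє трафік, можна, наприклад, так (ім'я Сервісу умовне):

```shell
# Кінцеві точки (endpoints), пов'язані з Сервісом
kubectl describe service kubernetes-bootcamp
kubectl get endpoints kubernetes-bootcamp
```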

+ +
+
+
+ +

Масштабування досягається шляхом зміни кількості реплік у Deployment'і.

+
+
+
+ +
+ +
+
+ +

Після запуску декількох примірників застосунку ви зможете виконувати послідовне оновлення без шкоди для доступності системи. Ми розповімо вам про це у наступному модулі. А зараз давайте повернемось до онлайн термінала і масштабуємо наш застосунок.

+
+
+
+ + + +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/_index.md b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md new file mode 100644 index 0000000000..c253433db1 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md @@ -0,0 +1,4 @@ +--- +title: Оновлення застосунку +weight: 60 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html new file mode 100644 index 0000000000..5d8e398b40 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -0,0 +1,37 @@ +--- +title: Інтерактивний урок - Оновлення застосунку +weight: 20 +--- + + + + + + + + + + + +
+ +
+ +
+
+ Для роботи з терміналом використовуйте комп'ютер або планшет +
+
+
+
+ +
+ +
+ + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html new file mode 100644 index 0000000000..5630eeaf81 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html @@ -0,0 +1,168 @@ +--- +title: Виконання послідовного оновлення (rolling update) +weight: 10 +--- + + + + + + + + + + +
+ +
+ +
+ +
+ +

Цілі

+
    + +
  • Виконати послідовне оновлення, використовуючи kubectl.
  • +
+
+ +
+ +

Оновлення застосунку

+ + +

Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. Послідовні оновлення дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Подів іншими. Нові Поди розподіляються по вузлах з доступними ресурсами.

+ + +

У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Подах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Подів, недоступних під час оновлення, і максимальна кількість нових Подів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Подів) еквіваленті. + У Kubernetes оновлення версіонуються, тому кожне оновлення Deployment'а можна відкотити до попередньої (стабільної) версії.
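Умовний приклад послідовного оновлення і відкату (імена Deployment'а, контейнера та образу тут лише для ілюстрації):

```shell
# Оновити образ контейнера в Deployment'і - розпочнеться послідовне оновлення
kubectl set image deployment/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
# Стежити за перебігом оновлення
kubectl rollout status deployment/kubernetes-bootcamp
# За потреби відкотитися до попередньої версії
kubectl rollout undo deployment/kubernetes-bootcamp
```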

+ +
+
+
+ +

Зміст:

+
    + +
  • Оновлення застосунку
  • +
+
+
+ +

Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Подів іншими.

+
+
+
+
+ +
+
+

Загальна інформація про послідовне оновлення

+
+
+
+
+
+ +
+
+
+ +
+
+ + +

Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Сервіс розподілятиме трафік лише на доступні Поди. Під доступним мається на увазі Под, готовий до експлуатації користувачами застосунку.

+ + +

Послідовне оновлення дозволяє вам:

+
    + +
  • Просувати застосунок з одного оточення в інше (шляхом оновлення образу контейнера)
  • + +
  • Відкочуватися до попередніх версій
  • + +
  • Здійснювати безперервну інтеграцію та розгортання застосунків без простою
  • + +
+ +
+
+
+ +

Якщо Deployment "відкритий у світ", то під час оновлення сервіс розподілятиме трафік лише на доступні Поди.

+
+
+
+ +
+ +
+
+ +

В інтерактивному уроці ми оновимо наш застосунок до нової версії, а потім відкотимося до попередньої.

+
+
+
+ + + +
+ +
+ + + diff --git a/content/uk/examples/controllers/job.yaml b/content/uk/examples/controllers/job.yaml new file mode 100644 index 0000000000..b448f2eb81 --- /dev/null +++ b/content/uk/examples/controllers/job.yaml @@ -0,0 +1,14 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: pi +spec: + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never + backoffLimit: 4 + diff --git a/content/uk/examples/controllers/nginx-deployment.yaml b/content/uk/examples/controllers/nginx-deployment.yaml new file mode 100644 index 0000000000..f7f95deebb --- /dev/null +++ b/content/uk/examples/controllers/nginx-deployment.yaml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 diff --git a/content/uk/examples/controllers/replication.yaml b/content/uk/examples/controllers/replication.yaml new file mode 100644 index 0000000000..6eff0b9b57 --- /dev/null +++ b/content/uk/examples/controllers/replication.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: ReplicationController +metadata: + name: nginx +spec: + replicas: 3 + selector: + app: nginx + template: + metadata: + name: nginx + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx + ports: + - containerPort: 80 diff --git a/content/uk/examples/minikube/Dockerfile b/content/uk/examples/minikube/Dockerfile new file mode 100644 index 0000000000..dd58cb7e75 --- /dev/null +++ b/content/uk/examples/minikube/Dockerfile @@ -0,0 +1,4 @@ +FROM node:6.14.2 +EXPOSE 8080 +COPY server.js . 
+CMD [ "node", "server.js" ] diff --git a/content/uk/examples/minikube/server.js b/content/uk/examples/minikube/server.js new file mode 100644 index 0000000000..76345a17d8 --- /dev/null +++ b/content/uk/examples/minikube/server.js @@ -0,0 +1,9 @@ +var http = require('http'); + +var handleRequest = function(request, response) { + console.log('Received request for URL: ' + request.url); + response.writeHead(200); + response.end('Hello World!'); +}; +var www = http.createServer(handleRequest); +www.listen(8080); diff --git a/content/uk/examples/service/networking/dual-stack-default-svc.yaml b/content/uk/examples/service/networking/dual-stack-default-svc.yaml new file mode 100644 index 0000000000..00ed87ba19 --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-default-svc.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml new file mode 100644 index 0000000000..a875f44d6d --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv4 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml new file mode 100644 index 0000000000..2586ec9b39 --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + labels: + app: MyApp +spec: + ipFamily: IPv6 + type: LoadBalancer + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml new file mode 100644 index 0000000000..2aa0725059 --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv6 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/i18n/uk.toml b/i18n/uk.toml new file mode 100644 index 0000000000..a86a980e7a --- /dev/null +++ b/i18n/uk.toml @@ -0,0 +1,247 @@ +# i18n strings for the Ukrainian (main) site. + +[caution] +# other = "Caution:" +other = "Увага:" + +[cleanup_heading] +# other = "Cleaning up" +other = "Очистка" + +[community_events_calendar] +# other = "Events Calendar" +other = "Календар подій" + +[community_forum_name] +# other = "Forum" +other = "Форум" + +[community_github_name] +other = "GitHub" + +[community_slack_name] +other = "Slack" + +[community_stack_overflow_name] +other = "Stack Overflow" + +[community_twitter_name] +other = "Twitter" + +[community_youtube_name] +other = "YouTube" + +[deprecation_warning] +# other = " documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the " +other = " документація більше не підтримується. Версія, яку ви зараз переглядаєте, є статичною. 
Для перегляду актуальної документації дивіться " + +[deprecation_file_warning] +# other = "Deprecated" +other = "Застаріла версія" + +[docs_label_browse] +# other = "Browse Docs" +other = "Переглянути документацію" + +[docs_label_contributors] +# other = "Contributors" +other = "Контриб'ютори" + +[docs_label_i_am] +# other = "I AM..." +other = "Я..." + +[docs_label_users] +# other = "Users" +other = "Користувачі" + +[feedback_heading] +# other = "Feedback" +other = "Ваша думка" + +[feedback_no] +# other = "No" +other = "Ні" + +[feedback_question] +# other = "Was this page helpful?" +other = "Чи була ця сторінка корисною?" + +[feedback_yes] +# other = "Yes" +other = "Так" + +[latest_version] +# other = "latest version." +other = "остання версія." + +[layouts_blog_pager_prev] +# other = "<< Prev" +other = "<< Назад" + +[layouts_blog_pager_next] +# other = "Next >>" +other = "Далі >>" + +[layouts_case_studies_list_tell] +# other = "Tell your story" +other = "Розкажіть свою історію" + +[layouts_docs_glossary_aka] +# other = "Also known as" +other = "Також відомий як" + +[layouts_docs_glossary_description] +# other = "This glossary is intended to be a comprehensive, standardized list of Kubernetes terminology. It includes technical terms that are specific to Kubernetes, as well as more general terms that provide useful context." +other = "Даний словник створений як повний стандартизований список термінології Kubernetes. Він включає в себе технічні терміни, специфічні для Kubernetes, а також більш загальні терміни, необхідні для кращого розуміння контексту." + +[layouts_docs_glossary_deselect_all] +# other = "Deselect all" +other = "Очистити вибір" + +[layouts_docs_glossary_click_details_after] +# other = "indicators below to get a longer explanation for any particular term." +other = "для отримання розширеного пояснення конкретного терміна." + +[layouts_docs_glossary_click_details_before] +# other = "Click on the" +other = "Натисність на" + +[layouts_docs_glossary_filter] +# other = "Filter terms according to their tags" +other = "Відфільтрувати терміни за тегами" + +[layouts_docs_glossary_select_all] +# other = "Select all" +other = "Вибрати все" + +[layouts_docs_partials_feedback_improvement] +# other = "suggest an improvement" +other = "запропонувати покращення" + +[layouts_docs_partials_feedback_issue] +# other = "Open an issue in the GitHub repo if you want to " +other = "Створіть issue в GitHub репозиторії, якщо ви хочете " + +[layouts_docs_partials_feedback_or] +# other = "or" +other = "або" + +[layouts_docs_partials_feedback_problem] +# other = "report a problem" +other = "повідомити про проблему" + +[layouts_docs_partials_feedback_thanks] +# other = "Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on" +other = "Дякуємо за ваш відгук. Якщо ви маєте конкретне запитання щодо використання Kubernetes, ви можете поставити його" + +[layouts_docs_search_fetching] +# other = "Fetching results..." +other = "Отримання результатів..." + +[main_by] +other = "by" + +[main_cncf_project] +# other = """We are a CNCF graduated project

""" +other = """Ми є проектом CNCF

""" + +[main_community_explore] +# other = "Explore the community" +other = "Познайомитись із спільнотою" + +[main_contribute] +# other = "Contribute" +other = "Допомогти проекту" + +[main_copyright_notice] +# other = """The Linux Foundation ®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page""" +other = """The Linux Foundation ®. Всі права застережено. The Linux Foundation є зареєстрованою торговою маркою. Перелік торгових марок The Linux Foundation ви знайдете на нашій сторінці Використання торгових марок""" + +[main_documentation_license] +# other = """The Kubernetes Authors | Documentation Distributed under CC BY 4.0""" +other = """Автори Kubernetes | Документація розповсюджується під ліцензією CC BY 4.0""" + +[main_edit_this_page] +# other = "Edit This Page" +other = "Редагувати цю сторінку" + +[main_github_create_an_issue] +# other = "Create an Issue" +other = "Створити issue" + +[main_github_invite] +# other = "Interested in hacking on the core Kubernetes code base?" +other = "Хочете зламати основну кодову базу Kubernetes?" + +[main_github_view_on] +# other = "View On GitHub" +other = "Переглянути у GitHub" + +[main_kubernetes_features] +# other = "Kubernetes Features" +other = "Функціональні можливості Kubernetes" + +[main_kubeweekly_baseline] +# other = "Interested in receiving the latest Kubernetes news? Sign up for KubeWeekly." +other = "Хочете отримувати останні новини Kubernetes? Підпишіться на KubeWeekly." + +[main_kubernetes_past_link] +# other = "View past newsletters" +other = "Переглянути попередні інформаційні розсилки" + +[main_kubeweekly_signup] +# other = "Subscribe" +other = "Підписатися" + +[main_page_history] +# other ="Page History" +other ="Історія сторінки" + +[main_page_last_modified_on] +# other = "Page last modified on" +other = "Сторінка востаннє редагувалася" + +[main_read_about] +# other = "Read about" +other = "Прочитати про" + +[main_read_more] +# other = "Read more" +other = "Прочитати більше" + +[note] +# other = "Note:" +other = "Примітка:" + +[objectives_heading] +# other = "Objectives" +other = "Цілі" + +[prerequisites_heading] +# other = "Before you begin" +other = "Перш ніж ви розпочнете" + +[ui_search_placeholder] +# other = "Search" +other = "Пошук" + +[version_check_mustbe] +# other = "Your Kubernetes server must be version " +other = "Версія вашого Kubernetes сервера має бути " + +[version_check_mustbeorlater] +# other = "Your Kubernetes server must be at or later than version " +other = "Версія вашого Kubernetes сервера має дорівнювати або бути молодшою ніж " + +[version_check_tocheck] +# other = "To check the version, enter " +other = "Для перевірки версії введіть " + +[warning] +# other = "Warning:" +other = "Попередження:" + +[whatsnext_heading] +# other = "What's next" +other = "Що далі" From a864c1b7e386c0e807ad92a49449d0d76f904dea Mon Sep 17 00:00:00 2001 From: Daniel Barclay Date: Tue, 31 Mar 2020 16:17:59 -0400 Subject: [PATCH 031/105] Fix grammar problems and otherwise try to improve wording. 
--- .../configure-access-multiple-clusters.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 67077aa331..6e87400937 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -149,12 +149,12 @@ users: username: exp ``` -The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above is the placeholders -for the real path of the certification files. You need change these to the real path -of certification files in your environment. +The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above are the placeholders +for the pathnames of the certificate files. You need change these to the actual pathnames +of certificate files in your environment. -Some times you may want to use base64 encoded data here instead of the path of the -certification files, then you need add the suffix `-data` to the keys. For example, +Sometimes you may want to use Base64-encoded data embeddedhere instead of separate +certificate files; in that case you need add the suffix `-data` to the keys, for example, `certificate-authority-data`, `client-certificate-data`, `client-key-data`. Each context is a triple (cluster, user, namespace). For example, the From 4d097c56dd0242b570c465f926634d3b83f4632d Mon Sep 17 00:00:00 2001 From: davidair Date: Tue, 31 Mar 2020 18:15:14 -0400 Subject: [PATCH 032/105] Update debug-application.md Fixing URL typo --- .../docs/tasks/debug-application-cluster/debug-application.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md index f63173e334..08f0fad008 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md @@ -65,7 +65,7 @@ Again, the information from `kubectl describe ...` should be informative. The m #### My pod is crashing or otherwise unhealthy Once your pod has been scheduled, the methods described in [Debug Running Pods]( -/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging. +/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging. 
#### My pod is running but not doing what I told it to do From cda5fa93e3ef7f0208906ec272aadb2ecf9a0e53 Mon Sep 17 00:00:00 2001 From: Joao Luna Date: Fri, 3 Apr 2020 16:15:42 +0100 Subject: [PATCH 033/105] Adding content/pt/docs/concepts/extend-kubernetes/operator.md --- .../docs/concepts/extend-kubernetes/_index.md | 4 + .../concepts/extend-kubernetes/operator.md | 137 ++++++++++++++++++ 2 files changed, 141 insertions(+) create mode 100644 content/pt/docs/concepts/extend-kubernetes/_index.md create mode 100644 content/pt/docs/concepts/extend-kubernetes/operator.md diff --git a/content/pt/docs/concepts/extend-kubernetes/_index.md b/content/pt/docs/concepts/extend-kubernetes/_index.md new file mode 100644 index 0000000000..db8257b625 --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/_index.md @@ -0,0 +1,4 @@ +--- +title: Extendendo o Kubernetes +weight: 110 +--- diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md new file mode 100644 index 0000000000..3036db17d0 --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/operator.md @@ -0,0 +1,137 @@ +--- +title: Padrão Operador +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Operadores são extensões de software para o Kubernetes que +fazem uso de [*recursos personalizados*](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +para gerir aplicações e os seus componentes. Operadores seguem os +princípios do Kubernetes, notavelmente o [ciclo de controle](/docs/concepts/#kubernetes-control-plane). + +{{% /capture %}} + + +{{% capture body %}} + +## Motivação + +O padrão Operador tem como objetivo capturar o principal objetivo de um operador +humano que gere um serviço ou um conjunto de serviços. Operadores humanos +responsáveis por aplicações e serviços específicos têm um conhecimento +profundo da forma como o sistema é suposto se comportar, como é instalado +e como deve reagir na ocorrência de problemas. + +As pessoas que correm cargas de trabalho no Kubernetes habitualmente gostam +de usar automação para cuidar de tarefas repetitivas. O padrão Operador captura +a forma como pode escrever código para automatizar uma tarefa para além do que +o Kubernetes fornece. + +## Operadores no Kubernetes + +O Kubernetes é desenhado para automação. *Out of the box*, você tem bastante +automação embutida no núcleo do Kubernetes. Pode usar +o Kubernetes para automatizar instalações e executar cargas de trabalho, +e pode ainda automatizar a forma como o Kubernetes faz isso. + +O conceito de {{< glossary_tooltip text="controlador" term_id="controller" >}} no +Kubernetes permite a extensão do comportamento sem modificar o código do próprio +Kubernetes. 
+Operadores são clientes da API do Kubernetes que atuam como controladores para +um dado [*Custom Resource*](/docs/concepts/api-extension/custom-resources/) + +## Exemplo de um Operador {#example} + +Algumas das coisas que um operador pode ser usado para automatizar incluem: + +* instalar uma aplicação a pedido +* obter e restaurar backups do estado dessa aplicação +* manipular atualizações do código da aplicação juntamente com alterações + como esquemas de base de dados ou definições de configuração extra +* publicar um *Service* para aplicações que não suportam a APIs do Kubernetes + para as descobrir +* simular una falha em todo ou parte do cluster de forma a testar a resiliência +* escolher um lider para uma aplicação distribuída sem um processo + de eleição de membro interno + +Como deve um Operador parecer em mais detalhe? Aqui está um exemplo em mais +detalhe: + +1. Um recurso personalizado (*custom resource*) chamado SampleDB, que você pode + configurar para dentro do *cluster*. +2. Um *Deployment* que garante que um *Pod* está a executar que contém a + parte controlador do operador. +3. Uma imagem do *container* do código do operador. +4. Código do controlador que consulta o plano de controle para descobrir quais + recursos *SampleDB* estão configurados. +5. O núcleo do Operador é o código para informar ao servidor da API (*API server*) como fazer + a realidade coincidir com os recursos configurados. + * Se você adicionar um novo *SampleDB*, o operador configurará *PersistentVolumeClaims* + para fornecer armazenamento de base de dados durável, um *StatefulSet* para executar *SampleDB* e + um *Job* para lidar com a configuração inicial. + * Se você apagá-lo, o Operador tira um *snapshot* e então garante que + o *StatefulSet* e *Volumes* também são removidos. +6. O operador também gere backups regulares da base de dados. Para cada recurso *SampleDB*, + o operador determina quando deve criar um *Pod* que possa se conectar + à base de dados e faça backups. Esses *Pods* dependeriam de um *ConfigMap* + e / ou um *Secret* que possui detalhes e credenciais de conexão com à base de dados. +7. Como o Operador tem como objetivo fornecer automação robusta para o recurso + que gere, haveria código de suporte adicional. Para este exemplo, + O código verifica se a base de dados está a executar uma versão antiga e, se estiver, + cria objetos *Job* que o atualizam para si. + +## Instalar Operadores + +A forma mais comum de instalar um Operador é a de adicionar a +definição personalizada de recurso (*Custom Resource Definition*) e +o seu Controlador associado ao seu cluster. +O Controlador vai normalmente executar fora do +{{< glossary_tooltip text="plano de controle" term_id="control-plane" >}}, +como você faria com qualquer aplicação containerizada. +Por exemplo, você pode executar o controlador no seu cluster como um *Deployment*. + +## Usando um Operador {#using-operators} + +Uma vez que você tenha um Operador instalado, usaria-o adicionando, modificando +ou apagando a espécie de recurso que o Operador usa. Seguindo o exemplo acima, +você configuraria um *Deployment* para o próprio Operador, e depois: + +```shell +kubectl get SampleDB # encontra a base de dados configurada + +kubectl edit SampleDB/example-database # mudar manualmente algumas definições +``` + +…e é isso! O Operador vai tomar conta de aplicar +as mudanças assim como manter o serviço existente em boa forma. 
+ +## Escrevendo o seu prórpio Operador {#writing-operator} + +Se não existir no ecosistema um Operador que implementa +o comportamento que pretende, pode codificar o seu próprio. +[No que vem a seguir](#what-s-next) voce vai encontrar +alguns *links* para bibliotecas e ferramentas que opde usar +para escrever o seu próprio Operador *cloud native*. + +Pode também implementar um Operador (isto é, um Controlador) usando qualquer linguagem / *runtime* +que pode atua como um [cliente da API do Kubernetes](/docs/reference/using-api/client-libraries/). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Aprenda mais sobre [Recursos Personalizados](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +* Encontre operadores prontos em [OperatorHub.io](https://operatorhub.io/) para o seu caso de uso +* Use ferramentes existentes para escrever os seus Operadores: + * usando [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) + * usando [kubebuilder](https://book.kubebuilder.io/) + * usando [Metacontroller](https://metacontroller.app/) juntamente com WebHooks que + implementa você mesmo + * usando o [Operator Framework](https://github.com/operator-framework/getting-started) +* [Publique](https://operatorhub.io/) o seu operador para que outras pessoas o possam usar +* Leia o [artigo original da CoreOS](https://coreos.com/blog/introducing-operators.html) que introduz o padrão Operador +* Leia um [artigo](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) da Google Cloud sobre as melhores práticas para contruir Operadores + +{{% /capture %}} From dc2f875952adcf90b7f8541519acf19259b17e43 Mon Sep 17 00:00:00 2001 From: tanjunchen Date: Fri, 3 Apr 2020 23:39:56 +0800 Subject: [PATCH 034/105] add tanjunchen as reviewer of /zh --- OWNERS_ALIASES | 1 + 1 file changed, 1 insertion(+) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 8a8da8978e..33922084fc 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -181,6 +181,7 @@ aliases: - idealhack - markthink - SataQiu + - tanjunchen - tengqm - xiangpengzhao - xichengliudui From ebb020e65a43ea3142905f5597220f11485b4f0f Mon Sep 17 00:00:00 2001 From: Joao Luna Date: Fri, 3 Apr 2020 16:42:36 +0100 Subject: [PATCH 035/105] Fix typo --- content/pt/docs/concepts/extend-kubernetes/operator.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md index 3036db17d0..7ec22a29ee 100644 --- a/content/pt/docs/concepts/extend-kubernetes/operator.md +++ b/content/pt/docs/concepts/extend-kubernetes/operator.md @@ -111,7 +111,7 @@ as mudanças assim como manter o serviço existente em boa forma. Se não existir no ecosistema um Operador que implementa o comportamento que pretende, pode codificar o seu próprio. -[No que vem a seguir](#what-s-next) voce vai encontrar +[No que vem a seguir](#what-s-next) você vai encontrar alguns *links* para bibliotecas e ferramentas que opde usar para escrever o seu próprio Operador *cloud native*. 
From d36428fa40e7d5cbf35480ed612cd9ec04c87e37 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sun, 5 Apr 2020 02:57:55 +0300 Subject: [PATCH 036/105] Fix left menu button on mobile (home page) --- content/zh/docs/home/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh/docs/home/_index.md b/content/zh/docs/home/_index.md index d50c07e091..cf6cdc0032 100644 --- a/content/zh/docs/home/_index.md +++ b/content/zh/docs/home/_index.md @@ -3,7 +3,7 @@ title: Kubernetes 文档 noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "主页" main_menu: true weight: 10 From b3bb46aa932a8475725ff808a029ba21d64bfe7d Mon Sep 17 00:00:00 2001 From: Joao Luna Date: Sun, 5 Apr 2020 11:58:24 +0100 Subject: [PATCH 037/105] Apply suggestions from code review Co-Authored-By: Tim Bannister --- content/pt/docs/concepts/extend-kubernetes/operator.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md index 7ec22a29ee..c2be44bd77 100644 --- a/content/pt/docs/concepts/extend-kubernetes/operator.md +++ b/content/pt/docs/concepts/extend-kubernetes/operator.md @@ -92,7 +92,7 @@ O Controlador vai normalmente executar fora do como você faria com qualquer aplicação containerizada. Por exemplo, você pode executar o controlador no seu cluster como um *Deployment*. -## Usando um Operador {#using-operators} +## Usando um Operador Uma vez que você tenha um Operador instalado, usaria-o adicionando, modificando ou apagando a espécie de recurso que o Operador usa. Seguindo o exemplo acima, @@ -111,7 +111,7 @@ as mudanças assim como manter o serviço existente em boa forma. Se não existir no ecosistema um Operador que implementa o comportamento que pretende, pode codificar o seu próprio. -[No que vem a seguir](#what-s-next) você vai encontrar +[Qual é o próximo](#qual-é-o-próximo) você vai encontrar alguns *links* para bibliotecas e ferramentas que opde usar para escrever o seu próprio Operador *cloud native*. From 7b27b3a662cbe8b3f0e2d6838b3ebc00935e3118 Mon Sep 17 00:00:00 2001 From: Pierre-Yves Aillet Date: Thu, 19 Mar 2020 21:51:22 +0100 Subject: [PATCH 038/105] doc: add precision on init container start order Update content/en/docs/concepts/workloads/pods/init-containers.md Co-Authored-By: Tim Bannister --- .../docs/concepts/workloads/pods/init-containers.md | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 14e7054a86..2ecbdd702a 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -71,8 +71,8 @@ have some advantages for start-up related code: a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel. * Init containers can securely run utilities or custom code that would otherwise make an app - container image less secure. By keeping unnecessary tools separate you can limit the attack - surface of your app container image. + container image less secure. By keeping unnecessary tools separate you can limit the attack + surface of your app container image. ### Examples @@ -245,8 +245,11 @@ init containers. 
[What's next](#what-s-next) contains a link to a more detailed ## Detailed behavior -During the startup of a Pod, each init container starts in order, after the -network and volumes are initialized. Each container must exit successfully before +During Pod startup, the kubelet delays running init containers until the networking +and storage are ready. Then the kubelet runs the Pod's init containers in the order +they appear in the Pod's spec. + +Each init container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with failure, it is retried according to the Pod `restartPolicy`. However, if the Pod `restartPolicy` is set to Always, the init containers use From 25ffd7679e9126a5adf830f94207f1a7934d9640 Mon Sep 17 00:00:00 2001 From: "Huang, Zhaoquan" Date: Sun, 5 Apr 2020 23:16:16 +0800 Subject: [PATCH 039/105] Update run-stateless-application-deployment.md Updated the version number to match the file in the example: https://k8s.io/examples/application/deployment-update.yaml --- .../run-application/run-stateless-application-deployment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md index c9e0aebd51..bebcd3cc70 100644 --- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md @@ -97,7 +97,7 @@ a Deployment that runs the nginx:1.14.2 Docker image: ## Updating the deployment You can update the deployment by applying a new YAML file. This YAML file -specifies that the deployment should be updated to use nginx 1.8. +specifies that the deployment should be updated to use nginx 1.16.1. {{< codenew file="application/deployment-update.yaml" >}} From 41ed409d9d51e89622694c08c07abfd41a470774 Mon Sep 17 00:00:00 2001 From: Aris Cahyadi Risdianto Date: Mon, 23 Mar 2020 10:20:32 +0800 Subject: [PATCH 040/105] Translating controller and container overview in Concept Documentation. Fixed formatting, Typos, and added glossary. Fixed glossary. Fixed term to solve broken Hugo build. Fixed Hugo Build problem due to Glossaty formattin. Fixed Hugo Build problem due to Glossary formatting. Fixed Hugo Build problem due to Glossary mismatch. --- .../docs/concepts/architecture/controller.md | 178 ++++++++++++++++++ ...-variables.md => container-environment.md} | 2 +- .../id/docs/concepts/containers/overview.md | 49 +++++ .../id/docs/reference/glossary/controller.md | 30 +++ 4 files changed, 258 insertions(+), 1 deletion(-) create mode 100644 content/id/docs/concepts/architecture/controller.md rename content/id/docs/concepts/containers/{container-environment-variables.md => container-environment.md} (98%) create mode 100644 content/id/docs/concepts/containers/overview.md create mode 100755 content/id/docs/reference/glossary/controller.md diff --git a/content/id/docs/concepts/architecture/controller.md b/content/id/docs/concepts/architecture/controller.md new file mode 100644 index 0000000000..4ce6974b34 --- /dev/null +++ b/content/id/docs/concepts/architecture/controller.md @@ -0,0 +1,178 @@ +--- +title: Controller +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Dalam bidang robotika dan otomatisasi, _control loop_ atau kontrol tertutup adalah +lingkaran tertutup yang mengatur keadaan suatu sistem. 
+ +Berikut adalah salah satu contoh kontrol tertutup: termostat di sebuah ruangan. + +Ketika kamu mengatur suhunya, itu mengisyaratkan ke termostat +tentang *keadaan yang kamu inginkan*. Sedangkan suhu kamar yang sebenarnya +adalah *keadaan saat ini*. Termostat berfungsi untuk membawa keadaan saat ini +mendekati ke keadaan yang diinginkan, dengan menghidupkan atau mematikan +perangkat. + +Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi keadaan klaster +{{< glossary_tooltip term_id="cluster" text="klaster" >}} kamu, lalu membuat atau meminta +perubahan jika diperlukan. Setiap _controller_ mencoba untuk memindahkan status +klaster saat ini mendekati keadaan yang diinginkan. + +{{< glossary_definition term_id="controller" length="short">}} + +{{% /capture %}} + + +{{% capture body %}} + +## Pola _controller_ + +Sebuah _controller_ melacak sekurang-kurangnya satu jenis sumber daya dari +Kubernetes. +[objek-objek](/docs/concepts/overview/working-with-objects/kubernetes-objects/) ini +memiliki *spec field* yang merepresentasikan keadaan yang diinginkan. Satu atau +lebih _controller_ untuk *resource* tersebut bertanggung jawab untuk membuat +keadaan sekarang mendekati keadaan yang diinginkan. + +_Controller_ mungkin saja melakukan tindakan itu sendiri; namun secara umum, di +Kubernetes, _controller_ akan mengirim pesan ke +{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} yang +mempunyai efek samping yang bermanfaat. Kamu bisa melihat contoh-contoh +di bawah ini. + +{{< comment >}} +Beberapa _controller_ bawaan, seperti _controller namespace_, bekerja pada objek +yang tidak memiliki *spec*. Agar lebih sederhana, halaman ini tidak +menjelaskannya secara detail. +{{< /comment >}} + +### Kontrol melalui server API + +_Controller_ {{< glossary_tooltip term_id="job" >}} adalah contoh dari _controller_ +bawaan dari Kubernetes. _Controller_ bawaan tersebut mengelola status melalui +interaksi dengan server API dari suatu klaster. + +Job adalah sumber daya dalam Kubernetes yang menjalankan a +{{< glossary_tooltip term_id="pod" >}}, atau mungkin beberapa Pod sekaligus, +untuk melakukan sebuah pekerjaan dan kemudian berhenti. + +(Setelah [dijadwalkan](../../../../en/docs/concepts/scheduling/), objek Pod +akan menjadi bagian dari keadaan yang diinginkan oleh kubelet). + +Ketika _controller job_ melihat tugas baru, maka _controller_ itu memastikan bahwa, +di suatu tempat pada klaster kamu, kubelet dalam sekumpulan Node menjalankan +Pod-Pod dengan jumlah yang benar untuk menyelesaikan pekerjaan. _Controller job_ +tidak menjalankan sejumlah Pod atau kontainer apa pun untuk dirinya sendiri. +Namun, _controller job_ mengisyaratkan kepada server API untuk membuat atau +menghapus Pod. Komponen-komponen lain dalam +{{< glossary_tooltip text="control plane" term_id="control-plane" >}} +bekerja berdasarkan informasi baru (adakah Pod-Pod baru untuk menjadwalkan dan +menjalankan pekerjan), dan pada akhirnya pekerjaan itu selesai. + +Setelah kamu membuat Job baru, status yang diharapkan adalah bagaimana +pekerjaan itu bisa selesai. _Controller job_ membuat status pekerjaan saat ini +agar mendekati dengan keadaan yang kamu inginkan: membuat Pod yang melakukan +pekerjaan yang kamu inginkan untuk Job tersebut, sehingga Job hampir +terselesaikan. + +_Controller_ juga memperbarui objek yang mengkonfigurasinya. Misalnya: setelah +pekerjaan dilakukan untuk Job tersebut, _controller job_ memperbarui objek Job +dengan menandainya `Finished`. 
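
To ground the Job-controller walkthrough above in a concrete object, here is a minimal Job sketch; the pi-computation workload is only an assumed example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  # The Job controller asks the API server to create Pods until this
  # many of them have terminated successfully.
  completions: 1
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # Finished Pods are not restarted in place; retries happen by
      # creating new Pods, which is the controller's responsibility.
      restartPolicy: Never
```

Given an object like this, the Job controller keeps requesting Pods from the API server until the requested completions are reached, then marks the Job as finished — the behaviour described above.
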
+ +(Ini hampir sama dengan bagaimana beberapa termostat mematikan lampu untuk +mengindikasikan bahwa kamar kamu sekarang sudah berada pada suhu yang kamu +inginkan). + +### Kontrol Langsung + +Berbeda dengan sebuah Job, beberapa dari _controller_ perlu melakukan perubahan +sesuatu di luar dari klaster kamu. + +Sebagai contoh, jika kamu menggunakan kontrol tertutup untuk memastikan apakah +cukup {{< glossary_tooltip text="Node" term_id="node" >}} +dalam kluster kamu, maka _controller_ memerlukan sesuatu di luar klaster saat ini +untuk mengatur Node-Node baru apabila dibutuhkan. + +_controller_ yang berinteraksi dengan keadaan eksternal dapat menemukan keadaan +yang diinginkannya melalui server API, dan kemudian berkomunikasi langsung +dengan sistem eksternal untuk membawa keadaan saat ini mendekat keadaan yang +diinginkan. + +(Sebenarnya ada sebuah _controller_ yang melakukan penskalaan node secara +horizontal dalam klaster kamu. Silahkan lihat +[_autoscaling_ klaster](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling)). + +## Status sekarang berbanding status yang diinginkan {#sekarang-banding-diinginkan} + +Kubernetes mengambil pandangan sistem secara _cloud-native_, dan mampu menangani +perubahan yang konstan. + +Klaster kamu dapat mengalami perubahan kapan saja pada saat pekerjaan sedang +berlangsung dan kontrol tertutup secara otomatis memperbaiki setiap kegagalan. +Hal ini berarti bahwa, secara potensi, klaster kamu tidak akan pernah mencapai +kondisi stabil. + +Selama _controller_ dari klaster kamu berjalan dan mampu membuat perubahan yang +bermanfaat, tidak masalah apabila keadaan keseluruhan stabil atau tidak. + +## Perancangan + +Sebagai prinsip dasar perancangan, Kubernetes menggunakan banyak _controller_ yang +masing-masing mengelola aspek tertentu dari keadaan klaster. Yang paling umum, +kontrol tertutup tertentu menggunakan salah satu jenis sumber daya +sebagai suatu keadaan yang diinginkan, dan memiliki jenis sumber daya yang +berbeda untuk dikelola dalam rangka membuat keadaan yang diinginkan terjadi. + +Sangat penting untuk memiliki beberapa _controller_ sederhana daripada hanya satu +_controller_ saja, dimana satu kumpulan monolitik kontrol tertutup saling +berkaitan satu sama lain. Karena _controller_ bisa saja gagal, sehingga Kubernetes +dirancang untuk memungkinkan hal tersebut. + +Misalnya: _controller_ pekerjaan melacak objek pekerjaan (untuk menemukan +adanya pekerjaan baru) dan objek Pod (untuk menjalankan pekerjaan tersebut dan +kemudian melihat lagi ketika pekerjaan itu sudah selesai). Dalam hal ini yang +lain membuat pekerjaan, sedangkan _controller_ pekerjaan membuat Pod-Pod. + +{{< note >}} +Ada kemungkinan beberapa _controller_ membuat atau memperbarui jenis objek yang +sama. Namun di belakang layar, _controller_ Kubernetes memastikan bahwa mereka +hanya memperhatikan sumbr daya yang terkait dengan sumber daya yang mereka +kendalikan. + +Misalnya, kamu dapat memiliki Deployment dan Job; dimana keduanya akan membuat +Pod. _Controller Job_ tidak akan menghapus Pod yang dibuat oleh Deployment kamu, +karena ada informasi ({{< glossary_tooltip term_id="label" text="labels" >}}) +yang dapat oleh _controller_ untuk membedakan Pod-Pod tersebut. +{{< /note >}} + +## Berbagai cara menjalankan beberapa _controller_ {#menjalankan-_controller_} + +Kubernetes hadir dengan seperangkat _controller_ bawaan yang berjalan di dalam +{{< glossary_tooltip term_id="kube-controller-manager" >}}. 
Beberapa _controller_ +bawaan memberikan perilaku inti yang sangat penting. + +_Controller Deployment_ dan _controller Job_ adalah contoh dari _controller_ yang +hadir sebagai bagian dari Kubernetes itu sendiri (_controller_ "bawaan"). +Kubernetes memungkinkan kamu menjalankan _control plane_ yang tangguh, sehingga +jika ada _controller_ bawaan yang gagal, maka bagian lain dari _control plane_ akan +mengambil alih pekerjaan. + +Kamu juga dapat menemukan pengontrol yang berjalan di luar _control plane_, untuk +mengembangkan lebih jauh Kubernetes. Atau, jika mau, kamu bisa membuat +_controller_ baru sendiri. Kamu dapat menjalankan _controller_ kamu sendiri sebagai +satu kumpulan dari beberapa Pod, atau bisa juga sebagai bagian eksternal dari +Kubernetes. Manakah yang paling sesuai akan tergantung pada apa yang _controller_ +khusus itu lakukan. + +{{% /capture %}} + +{{% capture whatsnext %}} +* Silahkan baca tentang [_control plane_ Kubernetes](/docs/concepts/#kubernetes-control-plane) +* Temukan beberapa dasar tentang [objek-objek Kubernetes](/docs/concepts/#kubernetes-objects) +* Pelajari lebih lanjut tentang [Kubernetes API](/docs/concepts/overview/kubernetes-api/) +* Apabila kamu ingin membuat _controller_ sendiri, silakan lihat [pola perluasan](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) dalam memperluas Kubernetes. +{{% /capture %}} diff --git a/content/id/docs/concepts/containers/container-environment-variables.md b/content/id/docs/concepts/containers/container-environment.md similarity index 98% rename from content/id/docs/concepts/containers/container-environment-variables.md rename to content/id/docs/concepts/containers/container-environment.md index 2a44dcbdcd..55c1bea6cb 100644 --- a/content/id/docs/concepts/containers/container-environment-variables.md +++ b/content/id/docs/concepts/containers/container-environment.md @@ -1,5 +1,5 @@ --- -title: Variabel Environment Kontainer +title: Kontainer Environment content_template: templates/concept weight: 20 --- diff --git a/content/id/docs/concepts/containers/overview.md b/content/id/docs/concepts/containers/overview.md new file mode 100644 index 0000000000..7ec5ef55d5 --- /dev/null +++ b/content/id/docs/concepts/containers/overview.md @@ -0,0 +1,49 @@ +--- +title: Ikhtisar Kontainer +content_template: templates/concept +weight: 1 +--- + +{{% capture overview %}} + +Kontainer adalah teknologi untuk mengemas kode (yang telah dikompilasi) menjadi +suatu aplikasi beserta dengan dependensi-dependensi yang dibutuhkannya pada saat +dijalankan. Setiap kontainer yang Anda jalankan dapat diulang; standardisasi +dengan menyertakan dependensinya berarti Anda akan mendapatkan perilaku yang +sama di mana pun Anda menjalankannya. + +Kontainer memisahkan aplikasi dari infrastruktur host yang ada dibawahnya. Hal +ini membuat penyebaran lebih mudah di lingkungan cloud atau OS yang berbeda. + +{{% /capture %}} + +{{% capture body %}} + +## Image-Image Kontainer + +[Kontainer image](/docs/concepts/containers/images/) meruapakan paket perangkat lunak +yang siap dijalankan, mengandung semua yang diperlukan untuk menjalankan +sebuah aplikasi: kode dan setiap *runtime* yang dibutuhkan, *library* dari +aplikasi dan sistem, dan nilai *default* untuk penganturan yang penting. + +Secara desain, kontainer tidak bisa berubah: Anda tidak dapat mengubah kode +dalam kontainer yang sedang berjalan. 
Jika Anda memiliki aplikasi yang +terkontainerisasi dan ingin melakukan perubahan, maka Anda perlu membuat +kontainer baru dengan menyertakan perubahannya, kemudian membuat ulang kontainer +dengan memulai dari _image_ yang sudah diubah. + +## Kontainer _runtime_ + +Kontainer *runtime* adalah perangkat lunak yang bertanggung jawab untuk +menjalankan kontainer. Kubernetes mendukung beberapa kontainer *runtime*: +{{< glossary_tooltip term_id="docker" >}}, +{{< glossary_tooltip term_id="containerd" >}}, +{{< glossary_tooltip term_id="cri-o" >}}, dan semua implementasi dari +[Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). + +## Selanjutnya + +- Baca tentang [image-image kontainer](https://kubernetes.io/docs/concepts/containers/images/) +- Baca tentang [Pod](https://kubernetes.io/docs/concepts/workloads/pods/) + +{{% /capture %}} \ No newline at end of file diff --git a/content/id/docs/reference/glossary/controller.md b/content/id/docs/reference/glossary/controller.md new file mode 100755 index 0000000000..c88dfccb14 --- /dev/null +++ b/content/id/docs/reference/glossary/controller.md @@ -0,0 +1,30 @@ +--- +title: Controller +id: controller +date: 2018-04-12 +full_link: /docs/concepts/architecture/controller/ +short_description: > + Kontrol tertutup yang mengawasi kondisi bersama dari klaster melalui apiserver dan membuat perubahan yang mencoba untuk membawa kondisi saat ini ke kondisi yang diinginkan. + +aka: +tags: +- architecture +- fundamental +--- +Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi kondisi +{{< glossary_tooltip term_id="cluster" text="klaster">}} anda, lalu membuat atau +meminta perubahan jika diperlukan. +Setiap _controller_ mencoba untuk memindahkan status klaster saat ini lebih +dekat ke kondisi yang diinginkan. + + + +_Controller_ mengawasi keadaan bersama dari klaster kamu melalui +{{< glossary_tooltip text="apiserver" term_id="kube-apiserver" >}} (bagian dari +{{< glossary_tooltip term_id="control-plane" >}}). + +Beberapa _controller_ juga berjalan di dalam _control plane_, menyediakan +kontrol tertutup yang merupakan inti dari operasi Kubernetes. Sebagai contoh: +_controller Deployment_, _controller daemonset_, _controller namespace_, dan +_controller volume persisten_ (dan lainnya) semua berjalan di dalam +{{< glossary_tooltip term_id="kube-controller-manager" >}}. From b6449353e68055db913266736e64dc0154364a4f Mon Sep 17 00:00:00 2001 From: Arhell Date: Sun, 5 Apr 2020 20:58:16 +0300 Subject: [PATCH 041/105] fix docs home page on mobile renders poorly --- static/css/gridpage.css | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/static/css/gridpage.css b/static/css/gridpage.css index d762aeb19c..297c7d3637 100644 --- a/static/css/gridpage.css +++ b/static/css/gridpage.css @@ -23,7 +23,6 @@ min-height: 152px; } - .gridPage p { color: rgb(26,26,26); margin-left: 0 !important; @@ -289,6 +288,15 @@ section.bullets .content { } } +@media screen and (max-width: 768px){ + .launch-card { + width: 100%; + margin-bottom: 30px; + padding: 0; + min-height: auto; + } +} + @media screen and (max-width: 640px){ .case-study { width: 100%; From b8df3044842e02cd9496c87af796fba1d7bae10f Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Thu, 20 Feb 2020 11:35:35 -0800 Subject: [PATCH 042/105] Introducing concepts about Konnectivity Service. 
--- .../architecture/master-node-communication.md | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/architecture/master-node-communication.md b/content/en/docs/concepts/architecture/master-node-communication.md index 8a4493e49b..ff536d160b 100644 --- a/content/en/docs/concepts/architecture/master-node-communication.md +++ b/content/en/docs/concepts/architecture/master-node-communication.md @@ -97,13 +97,28 @@ public networks. ### SSH Tunnels -Kubernetes supports SSH tunnels to protect the Master -> Cluster communication +Kubernetes supports SSH tunnels to protect the Master → Cluster communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. -SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. A replacement for this communication channel is being designed. +SSH tunnels are currently deprecated so you shouldn't opt to use them unless you +know what you are doing. The Konnectivity service is a replacement for this +communication channel. + +### Konnectivity service +{{< feature-state for_k8s_version="v1.18" state="beta" >}} + +As a replacement to the SSH tunnels, the Konnectivity service provides TCP +level proxy for the Master → Cluster communication. The Konnectivity consists of +two parts, the Konnectivity server and the Konnectivity agents, running in the +Master network and the Cluster network respectively. The Konnectivity agents +initiate connections to the Konnectivity server and maintain the connections. +All Master → Cluster traffic then goes through these connections. + +See [Konnectivity Service Setup](/docs/tasks/setup-konnectivity/) on how to set +it up in your cluster. {{% /capture %}} From ac1d86457505d9b2be53e121c13559ad4d9612ca Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Fri, 20 Mar 2020 15:31:19 -0700 Subject: [PATCH 043/105] Instructions on how to set up the Konnectivity service. 
--- .../docs/tasks/setup-konnectivity/_index.md | 5 ++ .../setup-konnectivity/setup-konnectivity.md | 37 ++++++++++ .../egress-selector-configuration.yaml | 21 ++++++ .../konnectivity/konnectivity-agent.yaml | 53 ++++++++++++++ .../admin/konnectivity/konnectivity-rbac.yaml | 24 +++++++ .../konnectivity/konnectivity-server.yaml | 70 +++++++++++++++++++ 6 files changed, 210 insertions(+) create mode 100755 content/en/docs/tasks/setup-konnectivity/_index.md create mode 100644 content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md create mode 100644 content/en/examples/admin/konnectivity/egress-selector-configuration.yaml create mode 100644 content/en/examples/admin/konnectivity/konnectivity-agent.yaml create mode 100644 content/en/examples/admin/konnectivity/konnectivity-rbac.yaml create mode 100644 content/en/examples/admin/konnectivity/konnectivity-server.yaml diff --git a/content/en/docs/tasks/setup-konnectivity/_index.md b/content/en/docs/tasks/setup-konnectivity/_index.md new file mode 100755 index 0000000000..09f254eba0 --- /dev/null +++ b/content/en/docs/tasks/setup-konnectivity/_index.md @@ -0,0 +1,5 @@ +--- +title: "Setup Konnectivity Service" +weight: 20 +--- + diff --git a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md new file mode 100644 index 0000000000..0fdbd0127d --- /dev/null +++ b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md @@ -0,0 +1,37 @@ +--- +title: Setup Konnectivity Service +content_template: templates/task +weight: 110 +--- + +The Konnectivity service provides TCP level proxy for the Master → Cluster +communication. + +You can set it up with the following steps. + +First, you need to configure the API Server to use the Konnectivity service +to direct its network traffic to cluster nodes: +1. Set the `--egress-selector-config-file` flag of the API Server, it is the +path to the API Server egress configuration file. +2. At the path, create a configuration file. For example, + +{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}} + +Next, you need to deploy the Konnectivity service server and agents. +[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy) +is a reference implementation. + +Deploy the Konnectivity server on your master node. The provided yaml assuming +Kubernetes components are deployed as {{< glossary_tooltip text="static pod" +term_id="static-pod" >}} in your cluster. If not , you can deploy it as a +Daemonset to be reliable. + +{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}} + +Then deploy the Konnectivity agents in your cluster: + +{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}} + +Last, if RBAC is enabled in your cluster, create the relevant RBAC rules: + +{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}} diff --git a/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml new file mode 100644 index 0000000000..6659ff3fbb --- /dev/null +++ b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml @@ -0,0 +1,21 @@ +apiVersion: apiserver.k8s.io/v1beta1 +kind: EgressSelectorConfiguration +egressSelections: +# Since we want to control the egress traffic to the cluster, we use the +# "cluster" as the name. Other supported values are "etcd", and "master". 
+- name: cluster + connection: + # This controls the protocol between the API Server and the Konnectivity + # server. Supported values are "GRPC" and "HTTPConnect". There is no + # end user visible difference between the two modes. You need to set the + # Konnectivity server to work in the same mode. + proxyProtocol: GRPC + transport: + # This controls what transport the API Server uses to communicate with the + # Konnectivity server. UDS is recommended if the Konnectivity server + # locates on the same machine as the API Server. You need to configure the + # Konnectivity server to listen on the same UDS socket. + # The other supported transport is "tcp". You will need to set up TLS + # config to secure the TCP transport. + uds: + udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket diff --git a/content/en/examples/admin/konnectivity/konnectivity-agent.yaml b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml new file mode 100644 index 0000000000..c3dc71040b --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml @@ -0,0 +1,53 @@ +apiVersion: apps/v1 +# Alternatively, you can deploy the agents as Deployments. It is not necessary +# to have an agent on each node. +kind: DaemonSet +metadata: + labels: + addonmanager.kubernetes.io/mode: Reconcile + k8s-app: konnectivity-agent + namespace: kube-system + name: konnectivity-agent +spec: + selector: + matchLabels: + k8s-app: konnectivity-agent + template: + metadata: + labels: + k8s-app: konnectivity-agent + spec: + priorityClassName: system-cluster-critical + tolerations: + - key: "CriticalAddonsOnly" + operator: "Exists" + containers: + - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8 + name: konnectivity-agent + command: ["/proxy-agent"] + args: [ + "--logtostderr=true", + "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", + # Since the konnectivity server runs with hostNetwork=true, + # this is the IP address of the master machine. 
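+        # Note: 35.225.206.7 below is only an example address; use the
+        # externally reachable address of your own control-plane (master) machine.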
+ "--proxy-server-host=35.225.206.7", + "--proxy-server-port=8132", + "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token" + ] + volumeMounts: + - mountPath: /var/run/secrets/tokens + name: konnectivity-agent-token + livenessProbe: + httpGet: + port: 8093 + path: /healthz + initialDelaySeconds: 15 + timeoutSeconds: 15 + serviceAccountName: konnectivity-agent + volumes: + - name: konnectivity-agent-token + projected: + sources: + - serviceAccountToken: + path: konnectivity-agent-token + audience: system:konnectivity-server diff --git a/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml new file mode 100644 index 0000000000..7687f49b77 --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml @@ -0,0 +1,24 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: system:konnectivity-server + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:auth-delegator +subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:konnectivity-server +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: konnectivity-agent + namespace: kube-system + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile diff --git a/content/en/examples/admin/konnectivity/konnectivity-server.yaml b/content/en/examples/admin/konnectivity/konnectivity-server.yaml new file mode 100644 index 0000000000..730c26c66a --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-server.yaml @@ -0,0 +1,70 @@ +apiVersion: v1 +kind: Pod +metadata: + name: konnectivity-server + namespace: kube-system +spec: + priorityClassName: system-cluster-critical + hostNetwork: true + containers: + - name: konnectivity-server-container + image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8 + command: ["/proxy-server"] + args: [ + "--log-file=/var/log/konnectivity-server.log", + "--logtostderr=false", + "--log-file-max-size=0", + # This needs to be consistent with the value set in egressSelectorConfiguration. + "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket", + # The following two lines assume the Konnectivity server is + # deployed on the same machine as the apiserver, and the certs and + # key of the API Server are at the specified location. + "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt", + "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key", + # This needs to be consistent with the value set in egressSelectorConfiguration. 
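+    # "grpc" pairs with proxyProtocol: GRPC in the egress selector
+    # configuration shown earlier; the two settings must match.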
+ "--mode=grpc", + "--server-port=0", + "--agent-port=8132", + "--admin-port=8133", + "--agent-namespace=kube-system", + "--agent-service-account=konnectivity-agent", + "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig", + "--authentication-audience=system:konnectivity-server" + ] + livenessProbe: + httpGet: + scheme: HTTP + host: 127.0.0.1 + port: 8133 + path: /healthz + initialDelaySeconds: 30 + timeoutSeconds: 60 + ports: + - name: agentport + containerPort: 8132 + hostPort: 8132 + - name: adminport + containerPort: 8133 + hostPort: 8133 + volumeMounts: + - name: varlogkonnectivityserver + mountPath: /var/log/konnectivity-server.log + readOnly: false + - name: pki + mountPath: /etc/srv/kubernetes/pki + readOnly: true + - name: konnectivity-uds + mountPath: /etc/srv/kubernetes/konnectivity-server + readOnly: false + volumes: + - name: varlogkonnectivityserver + hostPath: + path: /var/log/konnectivity-server.log + type: FileOrCreate + - name: pki + hostPath: + path: /etc/srv/kubernetes/pki + - name: konnectivity-uds + hostPath: + path: /etc/srv/kubernetes/konnectivity-server + type: DirectoryOrCreate From 8ba1410113f5593c5686c5b80baa9e9093755e89 Mon Sep 17 00:00:00 2001 From: "Mr.Hien" Date: Mon, 6 Apr 2020 08:22:56 +0700 Subject: [PATCH 044/105] Update install-kubectl.md For install of kubectl on debian based distros, also install gnupg2 for apt-key add to work --- content/en/docs/tasks/tools/install-kubectl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md index 4a0be4509a..cdbe96f148 100644 --- a/content/en/docs/tasks/tools/install-kubectl.md +++ b/content/en/docs/tasks/tools/install-kubectl.md @@ -59,7 +59,7 @@ You must use a kubectl version that is within one minor version difference of yo {{< tabs name="kubectl_install" >}} {{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}} -sudo apt-get update && sudo apt-get install -y apt-transport-https +sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list sudo apt-get update From 5ed756714eec05c3a545435a36859aae181ce5c8 Mon Sep 17 00:00:00 2001 From: Jhon Mike Date: Sun, 5 Apr 2020 22:44:19 -0300 Subject: [PATCH 045/105] translating cron-jobs doc --- .../concepts/workloads/controllers/_index.md | 4 ++ .../workloads/controllers/cron-jobs.md | 55 +++++++++++++++++++ 2 files changed, 59 insertions(+) create mode 100755 content/pt/docs/concepts/workloads/controllers/_index.md create mode 100644 content/pt/docs/concepts/workloads/controllers/cron-jobs.md diff --git a/content/pt/docs/concepts/workloads/controllers/_index.md b/content/pt/docs/concepts/workloads/controllers/_index.md new file mode 100755 index 0000000000..376ef67943 --- /dev/null +++ b/content/pt/docs/concepts/workloads/controllers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Controladores" +weight: 20 +--- diff --git a/content/pt/docs/concepts/workloads/controllers/cron-jobs.md b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md new file mode 100644 index 0000000000..5979d82f5c --- /dev/null +++ b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md @@ -0,0 +1,55 @@ +--- +reviewers: + - erictune + - soltysh + - janetkuo +title: CronJob +content_template: templates/concept +weight: 80 +--- + +{{% capture overview %}} + 
+{{< feature-state for_k8s_version="v1.8" state="beta" >}} + +Um _Cron Job_ cria [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) em um cronograma baseado em tempo. + +Um objeto CronJob é como um arquivo _crontab_ (tabela cron). Executa um job periodicamente em um determinado horário, escrito no formato [Cron](https://en.wikipedia.org/wiki/Cron). + +{{< note >}} +Todos os **CronJob** `schedule (horários):` são indicados em UTC. +{{< /note >}} + +Ao criar o manifesto para um recurso CronJob, verifique se o nome que você fornece é um [nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido. +O nome não deve ter mais que 52 caracteres. Isso ocorre porque o controlador do CronJob anexará automaticamente 11 caracteres ao nome da tarefa fornecido e há uma restrição de que o comprimento máximo de um nome da tarefa não pode ultrapassar 63 caracteres. + +Para obter instruções sobre como criar e trabalhar com tarefas cron, e para obter um exemplo de arquivo de especificação para uma tarefa cron, consulte [Executando tarefas automatizadas com tarefas cron](/docs/tasks/job/automated-tasks-with-cron-jobs). + +{{% /capture %}} + +{{% capture body %}} + +## Limitações do Cron Job + +Um trabalho cron cria um objeto de trabalho _about_ uma vez por tempod e execução de seu planejamento, Dizemos "about" porque há certas circunstâncias em que duas tarefas podem ser criadas ou nenhum trabalho pode ser criado. Tentamos torná-los únicos, mas não os impedimos completamente. Portanto, os trabalhos devem ser _idempotent_. +A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, but do not completely prevent them. Therefore, jobs should be _idempotente_. + +Se `startingDeadlineSeconds` estiver definido como um valor grande ou não definido (o padrão) e se `concurrencyPolicy` estiver definido como `Allow(Permitir)` os trabalhos sempre serão executados pelo menos uma vez. + +Para cada CronJOb, o CronJob {{< glossary_tooltip term_id="controller" >}} verifica quantas agendas faltou na duração, desde o último horário agendado até agora. Se houver mais de 100 agendamentos perdidos, ele não iniciará o trabalho e registrará o erro. + +``` +Não é possivel determinar se o trabalho precisa ser iniciado. Muitas horas de início perdidas (> 100). Defina ou diminua .spec.startingDeadlineSeconds ou verifique a inclinação do relógio. +``` + +É importante observar que, se o campo `startingDeadlineSeconds` estiver definido (não `nil`), o controlador contará quantas tarefas perdidas ocorreram a partir do valor de `startingDeadlineSeconds` até agora, e não do último horário programado até agora. Por exemplo, se `startingDeadlineSeconds` for `200`, o controlador contará quantas tarefas perdidas ocorreram nos últimos 200 segundos. + +Um CronJob é contado como perdido se não tiver sido criado no horário agendado, Por exemplo, se `concurrencyPolicy` estiver definido como `Forbid` e um CronJob tiver sido tentado ser agendado quando havia um agendamento anterior ainda em execução, será contabilizado como perdido. + +Por exemplo, suponha que um CronJob esteja definido para agendar um novo trabalho a cada minuto, começando em `08:30:00`, e seu campo `startingDeadlineSeconds` não esteja defindo. 
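
For reference, a CronJob of the kind used in these scenarios — scheduled to run every minute — might look roughly like the sketch below; the name `hello` and the busybox command are assumed placeholders:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  # Cron schedule, interpreted in UTC: run once every minute.
  schedule: "*/1 * * * *"
  # Optional: the surrounding text discusses the effect of leaving this
  # unset versus setting it to, say, 200 seconds.
  # startingDeadlineSeconds: 200
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure
```
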
Se o controlador CronJob estiver baixo de `08:29:00` para `10:21:00`, o trabalho não será iniciado, pois o número de trabalhos perdidos que perderam o cronograma é maior que 100. + +Para ilustrar ainda mais esse conceito, suponha que um CronJob esteja definido para agendar um novo trablaho a cada minuto, começando em `08:30:00`, e seu `startingDeadlineSeconds` definido em 200 segundos. Se o controlador CronJob estiver inativo no mesmo período do exemplo anterior (`08:29:00` a `10:21:00`), o trabalho ainda será iniciado às 10:22:00. Isso acontece quando o controlador agora verifica quantos agendamentos perdidos ocorreram nos últimos 200 segundos (ou seja, 3 agendamentos perdidos), em vez do último horário agendado até agora. + +O CronJob é responsável apenas pela criação de trabalhos que correspondem à sua programação, e o trabalho, por sua vez, é responsável pelo gerenciamento dos Pods que ele representa. + +{{% /capture %}} From 202488677759e81fd73df7bfbda1a8253b0dcefe Mon Sep 17 00:00:00 2001 From: Taylor Dolezal Date: Wed, 1 Apr 2020 16:52:56 -0700 Subject: [PATCH 046/105] Update PR template to specify desired git commit messages Co-Authored-By: Tim Bannister --- .github/PULL_REQUEST_TEMPLATE.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index c7040f7fa8..25b0ff4753 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -6,6 +6,8 @@ your pull request. The description should explain what will change, and why. + PLEASE title the FIRST commit appropriately, so that if you squash all + your commits into one, the combined commit message makes sense. For overall help on editing and submitting pull requests, visit: https://kubernetes.io/docs/contribute/start/#improve-existing-content From 48d007a0fcd5f6e182629d6015bffa8a33957e4b Mon Sep 17 00:00:00 2001 From: Joao Luna Date: Mon, 6 Apr 2020 09:25:29 +0100 Subject: [PATCH 047/105] Apply suggestions from code review Co-Authored-By: Tim Bannister --- content/pt/docs/concepts/extend-kubernetes/operator.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md index c2be44bd77..e735893f38 100644 --- a/content/pt/docs/concepts/extend-kubernetes/operator.md +++ b/content/pt/docs/concepts/extend-kubernetes/operator.md @@ -42,7 +42,7 @@ Kubernetes. Operadores são clientes da API do Kubernetes que atuam como controladores para um dado [*Custom Resource*](/docs/concepts/api-extension/custom-resources/) -## Exemplo de um Operador {#example} +## Exemplo de um Operador {#exemplo} Algumas das coisas que um operador pode ser usado para automatizar incluem: @@ -107,7 +107,7 @@ kubectl edit SampleDB/example-database # mudar manualmente algumas definições …e é isso! O Operador vai tomar conta de aplicar as mudanças assim como manter o serviço existente em boa forma. -## Escrevendo o seu prórpio Operador {#writing-operator} +## Escrevendo o seu prórpio Operador {#escrevendo-operador} Se não existir no ecosistema um Operador que implementa o comportamento que pretende, pode codificar o seu próprio. 
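
As a companion to the operator page touched above, the custom resource that its hypothetical SampleDB operator watches could look roughly like this sketch; the API group `example.com`, the version, and the spec fields are all assumed for illustration:

```yaml
# A made-up custom resource of the kind the page's SampleDB operator
# would watch; none of these fields come from a real API.
apiVersion: example.com/v1alpha1
kind: SampleDB
metadata:
  name: example-database
spec:
  # Typical knobs an operator reconciles: desired size and a backup schedule.
  replicas: 3
  backupSchedule: "0 3 * * *"
```
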
From b4c76fd68f9fcade50d2320fd4e278ffa714a415 Mon Sep 17 00:00:00 2001 From: Alexey Pyltsyn Date: Mon, 6 Apr 2020 12:27:25 +0300 Subject: [PATCH 048/105] Translate The Kubernetes API page into Russian --- .../docs/concepts/overview/kubernetes-api.md | 117 ++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 content/ru/docs/concepts/overview/kubernetes-api.md diff --git a/content/ru/docs/concepts/overview/kubernetes-api.md b/content/ru/docs/concepts/overview/kubernetes-api.md new file mode 100644 index 0000000000..5aea5818af --- /dev/null +++ b/content/ru/docs/concepts/overview/kubernetes-api.md @@ -0,0 +1,117 @@ +--- +title: API Kubernetes +content_template: templates/concept +weight: 30 +card: + name: concepts + weight: 30 +--- + +{{% capture overview %}} + +Общие соглашения API описаны на [странице соглашений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). + +Конечные точки API, типы ресурсов и примеры описаны в [справочнике API](/docs/reference). + +Удаленный доступ к API обсуждается в [Controlling API Access doc](/docs/reference/access-authn-authz/controlling-access/). + +API Kubernetes также служит основой декларативной схемы конфигурации системы. С помощью инструмента командной строки [kubectl](/ru/docs/reference/kubectl/overview/) можно создавать, обновлять, удалять и получать API-объекты. + +Kubernetes также сохраняет сериализованное состояние (в настоящее время в хранилище [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) каждого API-ресурса. + +Kubernetes как таковой состоит из множества компонентов, которые взаимодействуют друг с другом через собственные API. + +{{% /capture %}} + +{{% capture body %}} + +## Изменения в API + +Исходя из нашего опыта, любая успешная система должна улучшаться и изменяться по мере появления новых сценариев использования или изменения существующих. Поэтому мы надеемся, что и API Kubernetes будет постоянно меняться и расширяться. Однако в течение продолжительного периода времени мы будем поддерживать хорошую обратную совместимость с существующими клиентами. В целом, новые ресурсы API и поля ресурсов будут добавляться часто. Удаление ресурсов или полей регулируются [соответствующим процессом](/docs/reference/using-api/deprecation-policy/). + +Определение совместимого изменения и методы изменения API подробно описаны в [документе об изменениях API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md). + +## Определения OpenAPI и Swagger + +Все детали API документируется с использованием [OpenAPI](https://www.openapis.org/). + +Начиная с Kubernetes 1.10, API-сервер Kubernetes основывается на спецификации OpenAPI через конечную точку `/openapi/v2`. +Нужный формат устанавливается через HTTP-заголовоки: + +Заголовок | Возможные значения +------ | --------------- +Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (the default content-type is `application/json` for `*/*` or not passing this header) +Accept-Encoding | `gzip` (not passing this header is acceptable) + +До версии 1.14 конечные точки с форматом (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`) предоставляли спецификацию OpenAPI в разных форматах. Эти конечные точки были объявлены устаревшими и удалены в Kubernetes 1.14. 
+ +**Примеры получения спецификации OpenAPI**: + +До 1.10 | С версии Kubernetes 1.10 +----------- | ----------------------------- +GET /swagger.json | GET /openapi/v2 **Accept**: application/json +GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf +GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip + +В Kubernetes реализован альтернативный формат сериализации API, основанный на Protobuf, который в первую очередь предназначен для взаимодействия внутри кластера. Описание этого формата можно найти в [проектом решении](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md), а IDL-файлы по каждой схемы — в пакетах Go, определяющих API-объекты. + +До версии 1.14 apiserver Kubernetes также представлял API, который можно использовать для получения спецификации [Swagger v1.2](http://swagger.io/) для API Kubernetes по пути `/swaggerapi`. Эта конечная точка устарела и была удалена в Kubernetes 1.14 + +## Версионирование API + +Чтобы упростить удаления полей или изменение ресурсов, Kubernetes поддерживает несколько версий API, каждая из которых доступна по собственному пути, например, `/api/v1` или `/apis/extensions/v1beta1`. + +Мы выбрали версионирование API, а не конкретных ресурсов или полей, чтобы API отражал четкое и согласованное представление о системных ресурсах и их поведении, а также, чтобы разграничивать API, которые уже не поддерживаются и/или находятся в экспериментальной стадии. Схемы сериализации JSON и Protobuf следуют одним и тем же правилам по внесению изменений в схему, поэтому описание ниже охватывают оба эти формата. + +Обратите внимание, что версиоирование API и программное обеспечение косвенно связаны друг с другом. [Предложение по версионированию API и новых выпусков](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) описывает, как связаны между собой версии API с версиями программного обеспечения. + +Разные версии API имеют характеризуются разной уровнем стабильностью и поддержкой. Критерии каждого уровня более подробно описаны в [документации изменений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Ниже приводится краткое изложение: + +- Альфа-версии: + - Названия версий включают надпись `alpha` (например, `v1alpha1`). + - Могут содержать баги. Включение такой функциональности может привести к ошибкам. По умолчанию она отключена. + - Поддержка функциональности может быть прекращена в любое время без какого-либо оповещения об этом. + - API может быть несовместим с более поздними версиями без упоминания об этом. + - Рекомендуется для использования только в тестировочных кластерах с коротким жизненным циклом из-за высокого риска наличия багов и отсутствия долгосрочной поддержки. +- Бета-версии: + - Названия версий включают надпись `beta` (например, `v2beta3`). + - Код хорошо протестирован. Активация этой функциональности — безопасно. Поэтому она включена по умолчанию. + - Поддержка функциональности в целом не будет прекращена, хотя кое-что может измениться. + - Схема и/или семантика объектов может стать несовместимой с более поздними бета-версиями или стабильными выпусками. Когда это случится, мы даим инструкции по миграции на следующую версию. Это обновление может включать удаление, редактирование и повторного создание API-объектов. 
Этот процесс может потребовать тщательного анализа. Кроме этого, это может привести к простою приложений, которые используют данную функциональность. + - Рекомендуется только для неосновного производственного использования из-за риска возникновения возможных несовместимых изменений с будущими версиями. Если у вас есть несколько кластеров, которые возможно обновить независимо, вы можете снять это ограничение. + - **Пожалуйста, попробуйте в действии бета-версии функциональности и поделитесь своими впечатлениями! После того, как функциональность выйдет из бета-версии, нам может быть нецелесообразно что-то дальше изменять.** +- Стабильные версии: + - Имя версии `vX`, где `vX` — целое число. + - Стабильные версии функциональностей появятся в новых версиях. + +## API-группы + +Чтобы упростить расширение API Kubernetes, реализованы [*группы API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md). +Группа API указывается в пути REST и в поле `apiVersion` сериализованного объекта. + +В настоящее время используется несколько API-групп: + +1. Группа *core*, которая часто упоминается как *устаревшая* (*legacy group*), доступна по пути `/api/v1` и использует `apiVersion: v1`. + +1. Именованные группы находятся в пути REST `/apis/$GROUP_NAME/$VERSION` и используют `apiVersion: $GROUP_NAME/$VERSION` (например, `apiVersion: batch/v1`). Полный список поддерживаемых групп API можно увидеть в [справочнике API Kubernetes](/docs/reference/). + +Есть два поддерживаемых пути к расширению API с помощью [пользовательских ресурсов](/docs/concepts/api-extension/custom-resources/): + +1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) для пользователей, которым нужен очень простой CRUD. +2. Пользователи, которым нужна полная семантика API Kubernetes, могут реализовать собственный apiserver и использовать [агрегатор](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) для эффективной интеграции для клиентов. + +## Включение или отключение групп API + +Некоторые ресурсы и группы API включены по умолчанию. Их можно включить или отключить, установив `--runtime-config` для apiserver. Флаг `--runtime-config` принимает значения через запятую. Например, чтобы отключить batch/v1, используйте `--runtime-config=batch/v1=false`, а чтобы включить batch/v2alpha1, используйте флаг `--runtime-config=batch/v2alpha1`. +Флаг набор пар ключ-значение, указанных через запятую, который описывает конфигурацию во время выполнения сервера. + +{{< note >}}Включение или отключение групп или ресурсов требует перезапуска apiserver и controller-manager для применения изменений `--runtime-config`.{{< /note >}} + +## Включение определённых ресурсов в группу extensions/v1beta1 + +DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies и ReplicaSets в API-группе `extensions/v1beta1` по умолчанию отключены. +Например: чтобы включить deployments и daemonsets, используйте флаг `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`. 
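
In practice, the `--runtime-config` flag discussed above is passed on the kube-apiserver command line. For a cluster that runs the API server as a static Pod, that might look roughly like the fragment below; the image tag is an assumed placeholder and the other required apiserver flags are omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.18.0   # assumed tag
    command:
    - kube-apiserver
    # ...the rest of the apiserver flags for your cluster, plus:
    - --runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true
```
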
+ +{{< note >}}Включение/отключение отдельных ресурсов поддерживается только в API-группе `extensions/v1beta1` по историческим причинам.{{< /note >}} + +{{% /capture %}} From 9c864c965bfcf4df9fce8b655e20414a47492665 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?R=C3=A9my=20L=C3=A9one?= Date: Sun, 16 Feb 2020 20:47:35 +0100 Subject: [PATCH 049/105] Add a Progressive Web App manifest Add a valid apple-touch-icon link in header Co-Authored-By: Tim Bannister --- layouts/partials/head.html | 2 ++ static/images/kubernetes-192x192.png | Bin 0 -> 87040 bytes static/images/kubernetes-512x512.png | Bin 0 -> 221877 bytes static/manifest.webmanifest | 22 ++++++++++++++++++++++ 4 files changed, 24 insertions(+) create mode 100644 static/images/kubernetes-192x192.png create mode 100644 static/images/kubernetes-512x512.png create mode 100644 static/manifest.webmanifest diff --git a/layouts/partials/head.html b/layouts/partials/head.html index 1e34c3c903..16b09cd103 100644 --- a/layouts/partials/head.html +++ b/layouts/partials/head.html @@ -52,3 +52,5 @@ {{ with .Params.js }}{{ range (split . ",") }} {{ end }}{{ else }}{{ end }} + + diff --git a/static/images/kubernetes-192x192.png b/static/images/kubernetes-192x192.png new file mode 100644 index 0000000000000000000000000000000000000000..49236b71bbafa6e29cd50d7ac6f9e173f1b3f960 GIT binary patch literal 87040 zcmY(p1y~(H&nS#@@B@(Vw`2TjenQBOz$;*S$ z{>1@c(BNobkbg1ozZV!dF4#Y5e=#r_aJ>J;mBFe1%L4%h7HR^#_z`a7qu~UHY9elv9@*Mb^A*C9}eEX_&+ieDe-?;oUOi+YRG>h z7PWIUCFWpcXJjT7fF~v<=65tP<5d=u`0wz)t*@j%oSp4?nV4K%T^U{380{R*nOJyu zc$k=3nOIpF{&Fxlx!XD$x-r-~k^PU5|6h)nsgtpzg}t+doh|V{xrRn|F3w*`N&hMO z-||26bha@2KbmZv{%h7>1DXEyFtIQ)GySjZzeD-|QF#>|ElmGv{wH66h5tXC{}1iI zdia_CDgVDF^FNmUhx*r40eF6<|8<)Hyj&lZCK#9yn6#L%svG!uCR~oZ+WPY{@6lAf z)#0BsbCJVVRtnV;M7TmJX;PZ{ggjDdX=>5HU&>~wBB`T597ddKtClF{#(dQpd({1zY zm_{y8QRKJ)k>{Yx;|(2hIbC)8?R(=|%v(C_Flf;It?To#<3r<(=Mkj;by~nq|I>AO zb@_AtcFsw6tWwVQPTfaF#iNesrk%Hfb>*@Ermj%c^p)O^RVJ8*>VUOTyih4x=|c_i z;A>g*uN=gNsWB)1C8x~uai>q50aJa9 zy0Mn?j}^M;&rRD646FgY+E4X%hgB-0>Xse>pTS*i*Y`Kw7m6-#eS{5Vh8?94HT_~o zjRT*sW9Y&O6|;o3fw)A=EZ&j?y|SYc$8)<~DbcgB&vz%8k1Eg($l;LwfhL)}^haXo zTl!f?8R{F-VrC8E;5t5S5Nm`WmMNGoC_&)5v4>#+LIB1d!DY!tSB)#aGad7~9`r?w zYdn5skL0Nrcz$yD^MNPGBf&Oep~|^P;X8`s$7WgRqHVRuctY(7Pn5nA5SLeQ*-mI! 
z<#u~^rRHxZPQ=u5^s$epcISmxW3B%fLM=RLit#EF#wxBl) z$kqqHd^xvZ5W!*xj;dKS0uAMbth)kBU;vj#&tVUT7MU}>e%2xCe!rQP9$>uo#mave#y@@Vdsrlwmfu$YMxrt9*@F4&Ta2C@MFAFOp^)^V$1$X zMQIeuGcAP_O^9~ML=CnIH1-K`xSu323&ESk+!D29bug#6i_Rm0-vUNMxi}e*^fVzWFjR}P$!{h< z8erlWu&a{qKnQdrZ0AVY1`-{jX5Itg%}9CJo`FU_)ajye!HefqkDzLQNP)@y7g95h zW@aAC>ILKK48R$ZeoaXUHnA_n;&dKsGPG;*mDyvXoh<9&?>j!0ax*5N;{Kmk#I(?4 z#u1TJjpMh3&0a}*ID}sU*Ad`Le1B-8n8tly!D##n@gjU&##)R_LNTcGR<%KiC+a5I z*8TxPBVOyd_g-ez^!lkrjCCto=Kn%-G8Or|raD+k{a=guhF}tD>ekf8u%jPOz0Y2S z8+B5)6+wW?ZC&tD^x0S=#AweWfXs64wYw?~FaK&Y4VoQzIfem9ei9;G%ewW7SaH>G zH_t7+J^C9B>#+Rt$k%LzAD*_=hMf@J>U~;DRz=rm*Wir zBTQ+P(9}L3&HB;YX-+qG%>LH;`s6`)eJpG>H{+L*&cVIzYj2}d{8_jqKkxgM(Et>D zOoY|Is|4#s<8Z?6sd6t&dX88`_3`4UasKMES*+$c&hQZ9Z}UsW27^*1i?DflSK3-o4yoI96bcO?}H; znNcQiPF%UI@Lfsg-YkB`{q(1o*RxaFTU+pRrL|2tJus2&6rZOLuqkhQe?l0F0rc`s zGP1?|CHz&t2o&2@3^WK?1url)y{VonJqbx4g1Y3wt(z6b`MXelSAvOH6W;8ztjzN` zi?rx=VipXbRBrkOffs4+WKN>GlXBXyk+3RhV@MAD#Ta|R$++^tjK8_xTWSP7?Hi|_ zb|W%PScwZd* zm}z_L_7G2eBTfe7A41N#e=S7VOTCo1U9rBV*UqWvKnFLk&b>QQGp;AD;}{tjg$I9~ zYVR!guNP(HPPq*-G2F}9sVc%AgLPl!d>XltSTr}RkIKKj^mB=W?cpG3XGT4TOv^xo zd{PA%_$!3LeDI%=>V#t?eb3OKetQ@>jrIp8A+vcBS?CsR#JRTqxfL=`EoqdSi4kBZ z_k%G+R3Y`z$xr{vM2bai^}eKX@?l6U4hItPd+$&2VPr=Lp8JM7S$kuT?|*c;wAL`H zDuZ(>BKh_y-KwZ1TC2Xd#I0nEu<0M@nip6ZV4elq<9|gyw2}gk z;L4PwevkxdG;VJ98{{kizEWp@^{Fa`w; zSGX{w;1ELgAsDirlI7*sEb#E%y!kIlMe_hO)HKbDKsN5xb@U|s2{Xl7GG^69Ay@Y? z(7~;_T+8Q5^XMSKfir-3?|~F*V;I3kWWRMKwOgIC;r_6va)yTB@X3i-M9eu7gt-`7 zrX&HC^KVoI6)RH%`6c=i1T@V~SZ-gA#1z_etW$=KRXa`}o;s7DSAoMPC7(%u<#7(d z+jR3{g8BZZq<_7B7or%`LEKNjyVL<9%C|3)un3Z(e`-R@xfe;zPDK7_N!r4$s~9-W z>rRehxKvnQu53bmHnIhJM=M;56VWx-o>Hn4?f76rt^)zcD_=Yj9m2{ON%YJCyjRJ+ z)H4}0LB7_+Hxeb6`vhiEnAkQy<9t}?y*UA9Y9Y7x#{5(QHZAe+ujJ8Quk;Ow+V7yT z)e=`l%TBiam{#{#xcWHY|bqt4UJ z5Nk@0l`kpE=8H?Ze3v7^8Ls%;;no--KwcI5c>xEI4{(P z*!K>ei{NVK-&{uY6yB@(F{IJvDMfu8QUI@-3 zc2I*0Z_N0Z$SGCU*L3Kk^gzNDHh988KmV1f7%HgOexh)_c3@>+iQSIOmh*Ocm#+iK8B#~O6aGtO-lW}3N+Jr8Q9aLGo}F( z$6t(Ed8<+LbcXG!Km`T!a@!Nd@ioSMi+z;5WV(zv^^wwLacA#3cvu!T%xaI9+Nzi* zQ!m|kjDw_J=KSQV?2+W3#u`IK-oF!#M_5=dCU1-U#J9)P)1-coX*M2Ubf7*&#A)Yw zky!~j+BqFq6MVsr6F^v7DZk$muNaC9lNsit)Yq}QvOsPn-H>>u%8$Iv4p}q+_FZ@M z%HLBznj-YO+I1MMRlR5X(;YAWnVLss?oS?SkHU%S)4W@q>v5M1NB^twn)cv5=a+kw z4c>J+uP^%ymi4VkXV}*S|X0li;JTYMK%;1%IUW)O`A{+q zHX7~dt*EWRx9o0mFD8;wd5=MfF&;QIGN1eBqzXH(@ku8|5Dr&%%5g%5RrA}lZo;o0 zE?+t$+#ZWv-Qsl^ISCSl?_V3zrAY)M=Dw?n_%dh zNE3GxAwcd}(>JmhK`A3OOB)^{`A!$vvTy;d6O)|$qv!rgEB;CEy0A$kJ=FEhk|0wD zCXBbQ#HM@arul9I%X((UsXwQF0pT1}Igkk9B<9aAUi|hUEd6!mJ%#lZ0pAk>=(z}u zOdev;pF&{eJ1gE3jikyeVqkWP?mE#;d4`b40HAXGeWjK?nV5#% zx$jBINPKQR!rk^I>ghXmuTAMd^v`AkZ>n?&2SMH3;dGrmHu;$4FGXP#No@j~5ZGe@WMonOAmIn8N@rn5O=yS&KUWyGeaW?jnKNpBeDFZqKamzHz`r+oBh#10x9I0G;U5p$m-=-ZXPJCk3xO&fC~ zk#tV98oktwl*5bV#I(CT$!U_)1CMR3g6q&nO>|jL{~e%7SB+NFFG$OWc@pLm1l2`C zVuqb2)&0y5+q|3cl%Ti0hm>8o)Hcs;M4++CMHvGqB34QoTylU$ut<$lFEWIjIli_+>V zf!}tu!z0V~jrio0g%7og9wvg=Xk^9;%v4tT{FISs7dwHSsplhO%3;1}pZd*bJC@a+ z)e327v)Ne-Ax@YUY%NLqup>&h2rOe_9vOD5kr3jEb_A9shFAg`XGLyOovXpHAkfau zEaK7s8T}H&_QROwyNBLaeEq36t~ z%jAa!vV*a->@?VSyOK{awi({IY5aC)>N4Y~R04tO=J0pnpGd9!4$l9mi;jaryg;OI|$GuTsH}bcW;>40#C5X^j)>DwyH2W4D~+kY?wu zxiEBQTqkrVveM6S_kZ#7z!`w~u(l|RkUxvph)2l$A65m{jSjzu#o^k#dz=?w?_+~! 
z+1D_SAH?V(#LI^8Yl0lAJAeIdDETX&2T7pQk#muFk`D_`g5+D%?#sW@d7J~mzG>An ziL}lm)h8vbB;OLI=zJ#T9W1577`(+iO|a_i4aK?s)4cyLsOSMtw0K%ox;KiDIDmJ}oSHJJ_6(|v*Q6a1GI z69k|#g~=V+LS{`Jw5fvwBgiUDJ6?;*b6**JR~7{DlH@baE=_+0@|lwV;r&K1jA7h& z4^-9(S*&lA^9a0x+6gq?>UZPD>phJ7W4!Z>W+z6WdHN6nz;a%H(fv&2gN@5E9$7Qr z`g!KIm)SPZf9wr;O=IPhChbVra5OB~hfns6q;F0fGaJK9zs5qs;3>6KHfy&TYd*BA zOr27f1dkx<`9~ePq@GK=hv&TNoB{e>8yst2K)!Bhs-g=4*g%T+ePP$358G%WE3`u!2|%IeD}wlZydrGLgjMa^dtT|gb2whv0tDH zj&nfa1h;}^otw~ReEQQRec;r@n`#XAEg{pIh?;03u*M6cB&&x&p5(4)$t-~)A=q>5 zkr-vH{PeD%62=GpwmOMg7PGoXUM{ zi@*0>jiUkp#Emacd@ve=@gkBlu<#+a2x;Gm4^UU`KSs}h068-pM?u1Bx2$$Y(m+1{ zUf-n^k7tpy{)LyPcekv`zpltu$Pf+zU#PhIX)}YX?3=hxLS95dqWuASkkjkwy&wwY zCs@%Z(!EtOj<mWTviRNlp4Xa?ZatTtK~zYjV){7ukA9DAHXvA z(ZEp^&qfjKQGayGx4hrh?c)*THbm8#P3ga2XzE4z@r$|O4f3Mzw{QBLN52Ef=jtEL z)9s|P)NA~%WN~~k-?p5))k6JXLPMCvVd)`;fWx4PA$lo2v=iE$bxcNDC*7vD z^eh4koRH^EdjZBezg*u`C)1Mp_fQAzY}Yu+GLQ*bEX2DEdYL{YZIxmG&3S3*{h_bG z^L0*L9PBN-f}KQw?QKDH*yc59DCvF!n(oXc=i zv3cv4cM1uw63X<6q~eQH#GZ;oQLKVFq5f-hjg+Q)#9Z5hXVT<@$` z&@#=ly35-`-`zE6ocChfpO{GydDCBn%!hnRgSd*Al1MUv69NGpXv2w6)#XK$(klcWKx{=1u;@Ch9A9}{4lZqt+Z zqc)c0X|bf?i75+opA&V4OJz?uOhNK8en=R@YKT>)c5+vlvX&0rTI_s>!nb`(4IN?l z=|#6pD7~I=&JC}bUcNP&&bCrV1WVBmZ#yf+5DDk~xJU56B^nb1nNiR41SQjtd#ktE zptYShB~>c;8|9-pwAEqieB)>l8oR2UicD`!8vT9EA{cC&i9-(CGMbQ2Ju5yQ@Y;nQ zdX;63>NIf%Xm|Pm06+jqL_t(g;Y++11A76^ueG?DhoxGAfbc-n!dU*N&1|amCKHBlH zi)wz-^x9+Prq|6H87hJ?53lpy+qo?$aIjyf{Q!Y$FQ-23SacF0kVcS9s#5|~AAX9Z z-pwEW7|BEjm1M+n>%Bb01NtM-)L!Lz5iO^EAiBHQKl`}#_-5~=BV2s97MZm?u5$l-e!S|G+e`UI7r!d^!&Yi@ zDT&mtum5~+e~n5l!UnL0&>WfE2MBA+3!0QlyPw7Shr8vaGY`H()7w zsOLp1o`_zB3H{L;BN9I6IE=S_}FiF~9*WD3;Za=)Hc8GhT`hKQ!JKQI5 ze}!Rq3|{kJ^<%s|Y*0TN0YG@G9&FFKURlud%7hC1koy>%`tETRQYGM6MK`*YcmJKFDtt;w zAGoLZXKj}}Ik2n!X8=YuOYq3Q4t}^sgvY>!>o;u(cV+G9g>}uGXK2Wjq@~&!8hwUg zMl%2@{DU3L3x|4I4m8FD!P4j{Vvxw^cX~E=EAW#xNVUAb@%B^*@J`4qHn!a-SJzf}>C)4a?<9JR6}^hstBaw-uY@Kk zs$~28TGs{_A%QP@cj49nn2H{GdEw?Z3{$trujp!){f+&gPH=J-;-l1`vW?$+h=1do7N`$l;oOKqnHro)Zh{V+IV_|6w1nSaVCUaFrUH00^MOIP zmTO3Gx{t0pxm0=R(VV`Il{uQw6f^Gpx#Doz_ejQ?ApMM&%~}ywI|0*82fib(6$lw$ z)xW)iv1LopQNLe@=2D5YBfd@xDYpU3=(RcN^sb9PEmwnh{69bbW_kMQ{WGpUtW-2`ho_yYRr#OrFD z9()>p{(Q6?Wsq6`kueJ6?s?Z3rzfC`ukfyj{AN~jBg^$+GXI7s@4gl z#w*Zd6~3w+F|oxi#^9szh(Y5k>!qep-=b zpzLigDPzrD8^7{76@nrYInc{)$TFM-@tETwY94?89Yd-~EWht$5t4*A?W;vl*Z0|e zJjY>Mozv?c8T=GgZ+q3bNl1tW?9z6D2%)D!YXT^CNIaN{X2|alYor%@=_igU@2z@O7a)b4IS}Abi_bf~t*E5GLMnJ;oskTLktCR!f?Yo&igk zODZTJ!b0PEZ=^hASr?A)n3DMw^_<@L3)M5}v&5#h2!9865PnUTy~_TIqk0s@^%Bn{ zC9iq>-I;{ z9Ea!Cje@s?FYmy5g2hS3RXkIpj-Xfnm_=u=Qv5L{s~BDopzjB7&Fds0gnh%B&G)6B z3BJKG0)w0OPC622t9b=ISY@!1t6&Ne{+pvMbu#$BfnKPM4*ffnmAGWc{#JbCR;n|T zpC#CcwdATby?%FMn09(2aB#rnd=F2BuLsai`_BOM87k;)C<;^I<%9}2LKXGl=jWGf z485c4qy+mq+T`Gd2*Psu&3a83kR=??F#NX;!M_ZqRA8i<@6`UJalw!5Qy9oi`Q3DQ ze8;no?wkAyn(p-E3fcg~*{WZ7HtYX#0vC+J15K<1N=b;_E0Gt5p2xubjEO$R++kXJ zZCP<~d)9~o^M4|+v~~E6Y46)Ls7f2Rp|{bjfbmNu{|X2YDc&}BE6rV4e6rge0`YXU z_Q8jULOO6}sSvjMo7z17S`HI6_q)OqqU*`u+|I?}({lPdzj_*h`LqauG!#wvt>)oA zq;iUd@Oxxe?TmSuAR(V4LWAq&NNG&&F@$`8kkX1CGLg9zIFlC501NkJo@CzMF5`Ji zg<`#<4Q#59@lxY*zd-M)UNU6}R4>gIndZ$6?wdC~^3)8YKy zoo!=GVEbzVIo!a#Oo1v%1{0+^nKzkKcm5Z+`4cc+H(KP?_{G3k%u$d78JD2_G0eq^ zUgjJwchLu~dO2qk<@cWZ!KiJF+exP%zE*@kzh(Fg4$sv0*L#elJlLi0Pv#y*OpxgY z@_NeXl(^QWJ&S5@{MLbS{4nX4!P{oVKC0wM42hFTW#Cgk83v$K_-Wc@@KqZaOVK~S z5IuAnzsKPCNk@H?{IG7eb$7dh?m2;{wAO$gvjj|1JyWGQAMzl>x&-ON=}k0&$atp{ zX8>BYZ%AuoZ)6Rx(z03M3+SbhJI1;Q7HFis*gVMw8`h|{knXDhLDqQKe+D4VuD9;z z!yq^w7@uO?uy&L#&c$!Vnw_-l@$gXybZ4cVVXp!2^Kg3mk@}s2EUXExMUOZHj<3ui 
zaxHmS1jF?R;c9>46I16sZ=zU{Je`2l^vEObN^Td|;$B4f$%)Rw%KNGKaWR4uud7%o&>R!X<^^|$yC6Zj57)^-BHUCbw{K(t`5z1%dlz>8E&`ws}GII}d|KS?V#-?oSOZ7D?Zb`zGlKbOd=<-S4UJ zMlSPvW#8*%`pxvC2&wv2rXJ>z^R?@p1sten&t-(IZS8lw-56QIajdQyy>ed zI~ey4C)6kvM5D`nx%e6FD7B}v#H^kQQ5xgUwI7aL8wa_kmz(m`Dy|W2ueF)vrkS_c z)j6^1v{|?Bs7?0M*|ov=+uP7+kWA6K{0(DzNuYX)a2qftHEDjT|I_Gqjm?k53d*mQ zm@G}``!d@Dz4-3D=zMo1<;yp%49`<3Z~v)O$p`)^SDKBQL2GRHicI3H9@>5pfxrK8W>rQ}l7&wm3a*VK{(RFmlhfe19j`=JsO^s(^E?qEzl>&mYW z&@VMBRLx5g=Qc3kp6X3KNqReYs0m|;pz;{PpX$l97<2LKSk6RgiOJ(^y#5+(@92Dr zx5;Ub{?&S!aWQVUYA^GRx#PvrcOG;WYLP$C z!7h>;XWp%ay`u|R!)gCmkBy%Y7W&ffz6cGqto945_7C>Qi?v2^fqNnO2VzJ_r<_$_ zfe61%sW5!Nud%Pc;f_Xt4)8Ygk&~+fXPP~g(adfL-R?2(@(}pCX3K`PVer@k2)dh( zNpLcEviXaE2~SO5D3j3yVvE%4d+BIz(t7jGIv7kBX5dzZP6)pQlC_ZpYQw%D!_=tMHw1o$xR& z1i!`In@h9*!`$y7HPe3&Kkw$TVhRfB-{1haRVBZ#imqQ5`Vju5#YaiRlF$tpiBKE;dG4-Zr!9=cz_Q;~ofyI-^7-`axapY8z5yiLWiJ+7CJmv< zrvo!{kjNKoU;ab!1LW@=s3+5h^9uywZp!%r#2aW(!teiGm7AU-W&sQF`bADku z4q9?dWHf~TsER`ZN5dG-F7BcuMmcG2oJ=nmB8xnH(g8%OvtcwP_4HZUQm_{MaaS># z5TK+CZ1SCzsqhIMVm|N6#r1Uv{Z6=zrS0Y1$=gc!x@);cgEE>-4|J#MWP8v3R%<*7 zCk9Vvx3P8X`g;A7Zb8HBR5VJ0V&d(-R}kwH+r9cqv(UUc^ugA=L%gdiM&Pu@+ow>C zP8f!Zww3+t+AG2p<2vCR45PKU-@F^cEAQ_Oc1H?+|Be|@Urvx@m*@@15iDheO0q7R zv|?k(`=IlYBnSHhjw~$CK##)OEY!*ejU{;=`rr*92VO6gO)v45z)X2nY+1hT4h>Y( zUusWHtOS2}B=9`ee#`xUWw`g~lwLR+(qG>6)C3j5N5a|(W&#z`i|Y5LR?VHiaWOA< zNmlVDJ#n|V(@DO>=E_9EW$#!$_1LdJ+WtiM9`5Fk&KJ=4zkqEcjReZupEKrG(aquO z-KQ<9O5N19HUz!eQG`a=h^CQt|Jj?aWoKuz1$o=i3f+v(S?flRG2WS3s!h8X{|UcA3y4w~H; zSFC$2dIU=$@0a^U2jS!;g~n@9KX4|-9x;$ltZa|rYh3p}+a!tpc*eHjnYkc z7RC=N@QppvBZQSOddYNYG>lmOLpBkcV|?w0nBmN`$oEbM32?~{=khVHXFs*{xX|@X z=he<{av!$K+P0$eLPNfU#+HEShfHgGfxV-I8Av19jKu1Kya}WrbUfc@a2PLr^G8-DkVpa_u@R^YrmU`! zHaY>U)CY;jXpyk_lJu(fGWgM)LAhJb<$QoAe|&o@nvMMkU%r$2^+%SOOF8CnYZH<{ zJ}{E#cHKy{&lTLqj7D(T$Lu?^9q4;I3)strX}3|;{ocm$oBhu)a{T7oDTG4c;Xo4+ z{>8Z~o-clj{(fn2l})!H@HyLU1;J~4PpRVr*C3zI4NdpLtsiH9>kP+yF8@- zh|Y>C%hZm^EJVya%)AfC>S8*`o%lX^43H3v!NL# z({9qeGXS%SZ(hE~ddyp9nV_meanB1@LrR3X?NJ!AkJOxNyHJAWVS6k`;_j7(K4wSMx5w-G&ie)r$PrU7+@wv4B%ONwKDeB0u2VqYa@%~Y< zRz3zf%jRDD0n{F&*~oW9gyUd?A~1=e4n|B_L)s79>Pk}K@#?sA0O59Ze>>aLR*^2D zmhlqQ!D0wzK1t3~`LQ)MzNXq)ZfUn1+LskbwHLA1)B17Q`~};Qi0hXdl)R}O8d!dC zxbUj-b184IbHduXJUqfh;|eUbM|=G_6TKl>B>^G;s<+y|@8IXxS`p*!ggCJZO{DZX zVl=%10Xll!e8u)!UomG{_~?>C@zT273sNR5)%P{7@dU`M1;pBKKgGW3+9mIo8(F$h z5Qf)kY=t)i;6>5V;nQ6N%)lZ}z60M7Uj7|x0NOW^NPAb(;%)SQCV~KjKkc)GsC_g0 zP8M$-@1DtGaI4KTA|IFZpEcMWimjytjKE&I!Qtz!`Yv;;wkN_nLumGGC&Mt7D3 zdGrFl)8?b<*}<#vQtSWNz`V0KB)r>~!wT-@{cdb0JDjvEh%E*-3wd!9sj+qMz<{uo z`vV0L95G%IR@HYst!Dzkc+OI69W2#1dmD^~pqJF({sBn6Bq);(W=U1|k1=l5N$0}? 
zRKMYbvJ*qlgo=|)U&M#3%FbV{ zQg-?;!6%9NH@L!8f!i%>?H!@p!~lXz?TYe3Vhpx>ZsaAe9I97712!7rgXrKz4C}l0 zGT%KsYP(~YhQZZae-|aRI+(d7u1i#k=7FIfaX-ah?wtV$@#B#y2RR9ZHL!fZ>Sm!+ zb+=;z%N17eBKs~PH_G<7%G=Ui7&9f97Y@}_{@DT@@kYl71CVSHAltUjOS#GHiTd%ZA8dx-XvN zcH_HM2nBHhw|8QL*8~mK_#v5H4FPAtGx(+f^$gcRW=5Hp|#V z&BN2WD(Q8o)?y4#;Mhs~JZz4T8k5HUE^3?kl;M$W%N^j3_6UDFyPd4Ba?VsqLX@ODpnbdQR)6o5hYFrANmx=HA+6BsxXCI_>@L0JxEP#9Z*q&mR<(0pxt(Em^d^jc@=2_&4n7pBR@CcyPmhtG_$gvgJKl<}!}G+d zd1IBrKsK>+9vhn_xgmU3SLZ0MpD`(h^qS-R$*`=@-fTA`Iy;UQ*{* zf({PgejLl+?Xkp_Ul7<1qv_c2t|Qm~N=&-p54^wO7R$Q)%ial51%+QAIaAP%0=@!Lf$Iq73;hf8-{Nr+E~_oPV}DUO<(&m z1<5n;NtbWVFKVW{q+yz82^qmY^sN;NMn}T@ExQ<9N|?6P^Q+^5*Lw5%!Ali;76N3~ zWfs%^@Gy27XRG*>uk0~c$Xcvz9D$Ltx$sue3HwpIhZpsA0!x+Boc6}ft6&lYFWoV2 zP&^|eD8;^$Erh+#w;$srF>+DpGPiz{mlE;7MA!~NN^ zYF?1m#Ku%P_m&lFBG8p>nLAE%%hMZA5=*u@>CLPhNnb!VsLjfK(VNdx31`fujQn@uJlSrgjb6@sgg5WU2UXu7 zsmP?5zVtV7xP67+u<~M}vKO;2wn52D1qoXpbu`Xa(yP`h6u*Vga^uL?k&|N8zls$k z!AkvZ#(>L%4qA#-A6KA#q|PAr12az5u1u_AGw;{Dzh#nhCi8uKWTj#Wc_0(iLjI}# z5@huLp1=fzsR^EXBj{($SdcL;@QJS7SEb3hGcb**pAV;o z{n6{3x3or8d0xslT5Ef&ZT1~n1K2oaOHp`MNe{T^xxGBPqVbdFZ|Fx!mfv0ZQ|Nb( z;%+3{$$1QfBl_IM@jK2o&(D2ctI+Q)%PxpAA*#Kea<05RBELGvT6;BAhH~0|A zN(7^tosR#6|Eqh7;U6dw5e>R{T>-nf6FKcRi)x=mrRXX zXSNPp!Wvax;vj{vDF1a*b9vX4e=(`42!7VMq~+SjkTE<#aYU_SYgA1Nlp*HO56{%d zy_{%_O>>4tm@R=`fr|2ab5mhBPg2BVV)nwIS-7FyTo<0g3AIA- z$tZ|A-=e>t8?2ARBPS?v;?M^2i9Be)G)H|@@@@|4KjOHG`9i-DM7VUU|Ar|=GL2{z ze>@Tv0@3~I^pkjQ<(ph|OPBQ`Li?FZ{~dH71=oDrch96{Kx^z6W!5UMXjU%GeDZx0fc7CeqV z=1vjmi9ScVn)ExY-OFItQ=LXyv`@Mvc!n0!-4C3@+^bDpmCE!JF>hcKTe2%c3V~*l`2H&LiAjzt+>~KPKGP!++P6?nh~^H& zIK_nHNbiZNU+_cwZo3{DMl>N;48ByCgefBQ(d&sNIh%7IzRG>aIUE1~_gB@84&*++ zU@^f8tc%JwXq5OB&OXbY=&b`#THoDlM4C`~BzW?Ju*Nop9R!UY-78UU#xAFB8SNa-K;QkQs7yt-C^EwI-2jLGe7uQ490-HlpXPh zaW@fDjx-o~!NmP;^6gAbJG9?c^QD!YVPQ%k7~qfEIa2YxtcnQ(Eo=O|fsZN-_Ol5W z^hjGwb!#SvZR$OB?Yb�kWpPxvpjOElg_@w^t!}gO%DPxY9nBe036MO3+9?F}Tp~ zO!=$uz5~G&fv^X4&Z4{yjd3AFs;I{ zJ=not6G~k#iA3+|OLEUj*pToA#G(YE^r{Gk7C-%b zCBNo1a< zBB?gNbzT`H*OhM$xuaq)2v=60D=I=Lj;(dn^Qm+BemCUCpGl8GZ>dSzgjj}}_|gxH zNo6vS31VK@y~F$bQfK zUNbfLz8E$QUI)h8!zI{kKP7snqODdf%4{E~$)fVA<&z^_C_l?-pfv&dwWZgdHI$D^ zPWc|&=W6$0&Lz-<+)5V3PG$Vc15P~u*CEwb#d_tQOMrmi7`}K%x$|+V>oa(2 zrC0g>@<9bzjOT01FHOw2lf~`hYD@}_gzt2!as_4n|J}ah0JK$|NEWdmN0muL`b8m^Q1^7K?=(@=eOEo2a z4`#s38Lxvlz;7X0dg7E-0|B>bQC^gZ_FY!?Q2C!X3a3uCkpv^JxE;NXM-B?97=a`T zB(r}VnC&kmJ9|B^5hT58cKcXwh-8cCIo^HzbctoyZBjzo^|MSN8a?@H=@;J#k26Cd z_zXE7BQr$E@M^E3e8FVa4yy86hrng_iI`%>1zKn$&h|$m0T)8?(`Q95IG@4)Uo5;s z!MJ3sVcF5@yr#}I!H3x&)$inclOcG=ZacC|ay1aJL5-)Adn1_j%1jQPL-rr>6H~ei z5WcyWY~`F${38A$s<-we^I>YP_RE$p1}L(zFFhes29mnl6P*+Zho(x8u$o+0e`PZG zX+06Hwe@SBKZG27ANLROB{3Gs<%OgoX@3TajhRA*Nt6zQPX0OIIX$eImw$ru3c>NB zjru3iPr=O`pm+_1UFtrd@YMF(_C1PH3j5F@m@)~els$tsOl9!1VLb8)qCbe|f0ZOS z_j`96yS`=f)>&UxX6y7f1GK9Np-)vX>3@`7*5}L)BESO&Z+Iv?j^M5xm+UBJJZB94 zwa1RhmNn+5U)zPhV|bda%8kO<8Gv8K*y}IM9Ax*jtV)YGPvHi>tMNgTC>|Out8mSc zANo;4rJ)hY2syF*o6ivxxLfcI>nxm1Q7KU@~3zM+RVQMT%Dr zhPhX`H+vx+^z~y|>+8O`UgQzBgkQb+L?Vw{EmxMF8Mp<)ag{TeWii>nHOU!2{_V?z zHZ?g^(u^2^(dh@0RM{LYM(9j(wrI!ZPLS8O`bj$1zJaD)32fs%y{YkH0@*W=^&pJF zpHUVYA;~|Hkc7*qWxs~u0|JtFDjRqpl|Q_j_pH-K#cVs7)5)x|;c}w`y)5hc(MR2o z_%<NlR9(t;2JU)3L&Jb*3wr&R8f)E9a-Kly8WBCm<+2sX0Ta3$$SgkxR-p4`X* zE@QaA9p}!!Kcl3A*=vKLC+3Hr?e8g1n-ph6#*=KC`*0gsfyVE7W?@RNk|6L5^xi7WByY2 zZYz&qb>i@I>Vcsx>gC$k+2UnBg#HwGGu?8B_^vpZVV^>@1Dt%~ty#-^t(hCDfk2L8 z?M=~R(3#X@^5yjKm+RfkxA>t_X(fmmX1b_=Y4|8Gv4GC{vvlUFypTt@wsycyY$bIq zcZSXbC_c&AhU{b66F}4oWYmO!F#YSGTufD+#nK{#7r0kmG2tH~Lx|um@cI#^;GN_| zFGW37S_I+GU{|t|5PO^w9zcHb?_oBYHnImd_IRFT8n*F%Nf5B_VFU<0Bix6&;P3#&Itte 
zc!%rmcq6PN-v)bubt?C3zz-M>Cth9VN%$tDgN_*yAvF+091vktJL8|5-}jbk=@$m) zX%4i<{s(3n zm+GMtXHAdm-me=k@Qa8{AH*HTEwnjWi!E;Wp=W-eAb8c6R&J8mmU{#JFa%ZS#*Ci` z-fkwE$-dWA?Gz(03`Kg8+P#bb5skgixUofVpOFs5V8nR2t8l-VMZV8qj7!Nk6^*1T zPD+)__D`rOXC>0fEQ;4};#uCV>A&M@-mtQ#tvR`v@tk-5;%kc^wXCnNn|o(5jAlfW zh3ADMlxG!0w6PDF^I@5KTdpY6)&}3Q&P|HgK<_~s3;I)z5~p1=M$c^Ch(KN@ZIQy~IEA^miry8?4A z5`J=3L@aA=_L9`l9Tt({+goDHSR5QViLg$$)T2WUBq4CW3z(tWwdrq6zNV z(CN3gW|W{)n2j&xW}XCWtTO-Cr}Cc8n^M0=sJ-yC5hKQU5I|$J1ev7s15erXYwW@7a2Tj&MXn#xo`x3nR#q4nA@_@iFls~@g6MivTeeVuTG>20UbqXo%>Eg9( zI*a+aw5JGq{|#w%LbFWr^8F+RlJ;QAb}cG5wCZ>gW5W$~E=@sT=v?L0GS+2CEomE@ zgs%m!&R8~}7*o`DvwmH^13a40X#NY~+gTg>XnFIZT2{A;2L zN=o7=*yroK$U1QZMGqMR@I)vbYEQ z3Dk$Teq-X^%~8{%K&bDK8L5}OmeB-p?2qk1lDz%^a>F$Bi{~Jc@OFOa4{sg32U_Ar zV*$6qo?ty&UqWwC_F8$}f79B6@7d#RE%7SLZ;5Hc$r8r{&rbZJ3u8?4j%`md@xy zF^l{fN6D3%U?MN^ZQPrCW2~6}S>hogrJMc;`(y*B#4|k2B=+HxoJq}D6Gk10vLRGOt+3kU^**$hjsl)7k^vkc-Z8G(29<<{!w-J_yt?O=& z-hklLw)&aD|610&i|+ZY^mNNQ?Zj3UGv@ITn5WlgofydqT#Mjq78M}|8Nw9bZ~XLs z3g+8#5ir-L0=lp68M#;;CYn6++D@<&lzO1?AHx1)OulX=y)lLKS{8kmgD+z_;Tz*+ z*^4r}2fM&udNnyt@=MH6sG~`Td;5fbAzZvnbK1Tn!O)y9J5J&y9WH&?fvPok#ZrW}3CdrXOTr%G2Q> zHxs6YB>XBF3G;0v@THfpdo1!M19f@ySxq`Fuo!85&Dfl|c? zBEGT7y$wPSW9FNlNx)9B*U4sTKVjdgqXVshzGTJ{lILZ@WGgvp(r*Eo0D0s56DE}D z4}-7QS>5EW$qBq30ZP0#;X1n(jIrmjR{H(etiNe`bXt*RU7P>&DcjJ#Mh=;#}MzpBuBq{)I>NeFTzaHJ%PcsE#R&CF9s`}b!)=ab+dhp))td~4kikN z3u}ZsqitD%jtDZ9_<1s-kOZazk9tXPQ7sux@9&}eN3qoq7~OHSAAy}cE?;|Xo*%4d zH$_ppXGO2j`GjS+T6}{XYT0A#XS{tXkFi3hzA5k7X+%Ht4?1Kde`EUD3__y{GWcyg zLKExCC0VPMgl^CcOJNW;79OXo0OZ+Ps)*H_0q`Z*zxNRMXd+3{N5S%#K=n*|uE4Zn za||v0&|kRGfN1LQNvwFNh4OdiJzVMPYLOK?>t?k-wMFnTZJ65eQuHNlB-$Kt21{~k z_EPN+6OqlGLgz6B^|JcM_QGpXC%(su`Av}GCwxW!eh;dzOaK6GgO_Te?gZI}B<8a)v0RKwdEy!OxylV}@n*4i`M3U~`2ybCb)nAG zJp3{zw#`5heKPGgE1O|iyX>>@ji6B!udS|-iNZhxk<`K(@S*2Gdp&|o1@2>XKY?!1 zKS?nD5Wb2j2i48K1Lv~KAXqcmi8m9kk^!rZl?n~U*JIhsiqyitdjc!McDLWbiD>@VcYx4lz$k% z>u+j4nM{A8&VJ_kHBUv(+Qj_>sMv>-#*TuEp(hXa5zNFh1VW5K?KEJXoBO|-n(xU_ zq<%BsGK9*MZ=RRr@PCoKtoQU>c``im={?x5KW1%2wNQB?oWj5BAqkZkpLG1`2XcQq zPKrwy?mpo0o6zY7fw?%)S#7r-bBQFRT>%^PNivPuA@*6GN*cB2S^dZ*cU#7p|Gx1_ z5-tq?fWh?F6!ZxR5YyQvr?i)Kfw#C_7SgpZ^QeT8`2A`6{a}x~VbHSZ>l+3S%=qr> z_p~ov{T2%}V^_jd)SmlOkn_K@toFYh_jX20G`GwqOVe+*tXl^?otIOQZ3nl_m=juS zSuH-U{!9RRm2_XAr+qTxd2V2i$B52m9>Tz#$?kB~FOujSseKCUn!XY(+v3+vnva}b z(b-)|T9NIIBK_Sli6RIvC&f;*QIXjX7y(g1vDIGFuRiX2S^mHLkq@1O)#|>87=6Wk z(_Lm+HJ?qMV2|g(n4j16h@vU|Kla`OzKWvz|KHu4o{$iFC-fp+kX}SkP;4L|VnGxU zuq$>cV(-`mLyA-&vt_y7LPW+Cv1k3Ra8|M$n_l{miC|eswUzczFxCYCGiQae zmEAHYm>Xkk_g1KsHp#Y9!)46Ex9$n~HrJj1VdXCav^To&(o>QkAa(nuw22gS&=e-!af)b3G4K6-@@qsOXZwI;?j2CmfIW&TY%XY1y=ru zORaMZ0hDAfLZKdL9)0JgIaTaKy-+`7+7LARwNJ30AVpkl$tA2?9?kEHTHnrdSF(ay zn%Eu)Lk`bRv)AB-QI4N)mAuXrwd-$P8B;{KihlaTm9~7$<7&;X-e(H|jfZ*u6%eYo zc-}X7DlX@IXBJRuT)Lqz0#|N(_X3$%SVZ@Z!LMr|kkvT8*S<})J(Inb>v7y0qm)RN zAAQExV~VdKiQ#zV=#1*FSf@cAAK)U>fh<1B%O$5;=edInUe2cD9gg<)C{kGkRx{Py~CT*#G=$!EF>QeT8mYGvx+uD^>N56{I-m1|tdXbHZtLFHGF$BO` zrW>M=+|5rS{17$( zQ$Vc0qg_xH+!mavXrib&5$_d+YqD5_AI8EGk1ZFQ!vRGnJHR>?F;BbQs>U`i50Tz` zIOA87=iI|OU#a|Wz>*tM_XPa~ewQ^5r+6X>_6ejyAo z^QWAi{Gx*@YvpNr5J(Ia;v$U*}?VNUse_XZ_NnpGw9ME0Ym*ht{i0mzU_D! z2ui~g$QuG)jeWffVFX}eGkCKVaXl8&SD2u7p_wM?C7Xro5CQ}A{$%R%|5OpWiEwDw z0#KF(^sh)_jJ&}qXLkZ1mJ~js6^`p~&)UWsLu>5m9j}V!eA}xVwPT5pRcmUMjZtv? 
zxU`rl_G$)la^N#N3B12EYOmsG<2%Iebv^+A4mjY^T$7Bn9w*V`lC2uH~Ttq{;eWa_hFXS zOH#jv?b^3;{uPoTZ|8U&N-_ZB46 z_f63+ncSl2phW8LaPse^4nR22G+4HFx8ku^T(R!FocWeDyjvXzpS9c{^S`#+~4SyCey-XN9-LPj~bpUqCt13SWF& z_eQPwFMsU2I$Zo-X)2mug@uBr_ISK&FgftKww&}VK5Hl6Wa83aJQpCPac{U=hl{YZ2a+rD9KkU%%AT0HV z_qH_6f1d?#SIxaH=LRx?ytaRPn0aVlvE-@FwcK}4a3J9pmXk@CtLF75#G@?pM9uAg zzGwld#!s-uT!g-LOB*N}Xun|-0_Bp5IJ!qx2TTyDJv(8G%Z4!DAC02R5s~Oc* zZZQy=f@1O~iZZ)|K=mBRPo|ko|Nd;l(fr>-5VionA1xjL6#_CstV8D0B9&B6sr)&g z2jf`7kQ#qS)SXIUFmdScamfi*N|U6zm&AO0w2>2hOD#H|a@_B!?g2)77H4}R8UIS1 zN3V0}*JXH2sxclv+w-+Dmi>7~s~<;-2(-pW1X=*&6QE?tkkW~>^>YsQ`m6J&Qr+|m9jW(&(zQeB(fcchs${f zMze^Dyf@)%yD6zZGJ0xD}qRg7@f+of;2F_EF|pVp&F(s}g%C-TdFNWaV2 zwlTSZ@n+mKUbdPue|#||wWv-hs{ebHasf$zN-}?`?oI9lU?GJ~(63R*>NvFR!WFSL0K#j4VTx(2pjGo)vQm38<}>D@#W> zNVrYEsW2+A(n@VOrb?^8UMr<~WQ7`$ZAsnHu0qR*X@t9$Dm^;_Mw>i1>8Xg;$Qzf( zz3E6G*$RUfe0n9ds7@)Wi&tr~p}Y6FpPF?7%ZOPETdAxQ8H33x1+mgIxyjU4z-^k3 zsN%Q(!S5a(q!!I;3eWJ3j$XXK7`Fr5Y=^uR@Z6@D2*0CW7rDbt*y#!kW-tVLV+241 znj7T-yzdnyeEm9@BjFbf843)v$U7UAn1L%}RNF+}(%D!LUjUj9cN{#{J!_YZ%K^ z=JXZlRZ4^*L^QxgVv`I(`w+4T;m>_Af3QqEs|vTSFI<9Yq;u>x8$bnd?{Y>d+ys*DvYJ- z_PPNPek9+~yen~lp9RS4OUax4Qla{9mi#Sc?`8WSW0!6*C$YlBJd z&#bFF^zqWnk>nK0su>EvSSyZ+d&91R9PtbW3lOzNulNUSjZZ_nBtnH{S7W_KBy3<_ zxSD)(+2ez6Yi05$)wfm)UIQ4X|2Vmjc3>E1=e|Mpak#62YpXfBTLu1G*5WnRVCA_ihxuUS4Og83`kHTnAp<~P|*I!6p6_9|@fJ9+*%`$}t& z+CNelM)bRaOJ%*0KTMDCZw^RW*Ema%oFeF+HOTL zy2Ls06P$OrZ1I)xrGfJwM(!-wdj`(c7xym?e=jR{^j^CjbnusOmYlE9h6vNXaQZ)6 zM73o}h+qZ~6Iiy$dz5)I>L>g|)`z=x^bI}E@{9!sl7r(71F)iC0B&!Ly3hebeyWRa z#xlSVHn$Z{rwv6vAUW*5d;h~y+26kRmv=(j@yg#=;!S4`V0LTxBlF^CqbU@cE@(5r z|9sIKXhH)pAO=6WW;#0V&?IFE8HboQf`;EE8sYj zDP{>IU3HjI@NAx{y*9C3z#xt--hlh(4qyNa(QX8McR44kxh<^4)C+LV)u5S&>>DA< zGG*LoRO%tv$X9UvbT?H54W?656K7w9WZr}0u>{g`A8+G@g-Y_Fefs7}T6{G{(Jn#0 z`2?^EUIih_4lKb)b~~=xduAaB{r#2=PvuQO67RJ)C&)YfynFYsAdGBv&W=OM;yQ0y z+W2s~TFj2p9=%pe_a!;sFn-lzPY0`Xh3E(o>S~Y2ykPTw?RJ)a-np3WWTO9dJCe7)uW*yw1h}{jcQ5LEurzfv_ibWy1aW)O!(b%9 z#ru3(fsxKvB7`-{#GNXJTen9-p?#S1L+uG>VP9nkB?Ppf6B_y_(CJ;dsS2BXcJq(5 z3u)JVXjOJ$1+`Q#+`XP^X`qjuQ~LF!HO>vp^VenOIsIT5HL4aww17cODK{c;EgDCc z+K)s7zo%t49UONLa9av*hVbV{OIoOw4ksIU4x@TQn5`su#lE|SlK zb-!0BxrVa@W6y>Bo=wXwk7-GfLl!&Z-lZMccvhSQGrNOb#=zsf;8#V?%h?@E-5oJj z^B^kW*WdvL#(uI`;Ox&e45Bl}F0Qu|@%#;;*$t#hJd^p^5S?v*XIVpM^_Z6sV_7{O ze zcRgBw08k5uT9kS@0n_dd>LH9S<1fX@ODp^}HsW9T;cgS{Z0*^f_MXy1t;hFb*Kiz1gKkn?OY#T7I#T?oiQ>Vm0@yQAo#6O5+hSy8v|E67O{rf; z_h`rOhk*jsMWtYLVU{**$^AONHGX<4X>r1j4hU{fUK$)obBBb<BwxBBlK_bHIa!1cEl6A6sCe<^HGvq8@w4>PxaZFc z_r+R^S3O~q4M9m9TB1#$gt^}PxX~J9<|_=;fp9!SfB(Bhy;MKXf&?WV?Pm1usGEr| zva0SW*~FQQhBe{WDxsm+_P>s8Wbb0UONFnI^A6^2i_kH4@j$o?s-E}a6vIL-cNkGq zzfY03(BV#Cr@<~JDnT29c7GY;Jw9);#VE6pu{9_+5SHYs=|uA)J=#p^k(a~@MPXba5J61J={MMTmL!KO7}*szNs9V zmJt`~$N4IDeI+|gs#)T!01UyNNifBY-ZjNP-413mO2Y7nqN0EH0K2IXz^WKM9>jo6 zNpOdRhv5+7m+;Suc#v+T_{vqfK0 zVSKvE3``#1CJ-Q3R;RqOAw6|v0f5GfKVWN89>p_oHuWjos=-3rX9!jPJ9~60j0{Av zs{>d8IuRslK<{A%rYqVZXL|m_EDo`(E|8$B`Tc%uv3t2b&X&$2L@U&NrU)=p1Qf9#a=lOJlr=#`x_T-i_V{q&u{p~5AwbSfxg?a zzYq!g`&;+j67J2yqcW!zM1zPunzpME52Ed_eke6af18b~9}!DG6Yh)~>hz_}n+Q|f zN<9WATsdk<`hGt-q>smF1nkh%Q7g6Sb1I;0)t#4eUi3iV_q|K+iJiiGm{G0StP(`gj~8S%KYzy_hWRL!0q-cI;? 
z|GHO1o<;Kq%C?O3Ti|k>S4#}T%pyos3{i;yG*AC(k^aQ{nNd`K{o>o6ea+I-Fa2Nt zcvm>BSZZvJ&ka1p^wkOv&_fuL zB*XCYio_{T3=jf)>sS1qXs=!|(%u*>1s|TRT;o%y`_VGK)O@)#x885&97dh27pVf-?yY=|1k z_%4n}W$pXXvJH5Ju)AgO6@0+^oo%O>1$KZy&AN=%T{79q!!7ncOBsi|bYyby^3E|RW{N=hPOxJuY zy=cQrg$Rz9#uvVe5m-qR4-vPgB!W*%(2GD(U7-u@9(Fq%1eO-bsQf%_XN|&_^eyIE zy=btDyAh(KjUhl=iqhuBQ2S0vbMimh7eRCuzP{;J?T2>ltmtA^g!`;#V@ajf)Jd_OxK z+v?kq_bUGUrUPH5`YJ zW*|+)+hD{GfAl3_f86WA&zWHI#P?@~fpY%Lki^&H+cxXswvi#>kvXOFJ|lc}T#3{G zgeW+}{ZY;{ma%P?M8vF{Fft^eGXQ19m3z)=Ir)UJKk$5)`grHQ=e#pN{F4vwZZF}8 z<<>%0g#pAo9=JuOuLIfXZ(;KsZm~ydM>J{KC55f2s)M zv0K5Mi%eSX=uI6LwwM)HLdWl=XEg(18-`aZ-84|w(wL0mnH`P0lCi7bf3*0nQ=eg^ z7GoGN1VgTA5BatcW1;czw!|z2`G=9XUD5M*bNW^ceKoOU);W+1rdVU@wx1hKU3px& zZzH>N--A$r)wFZ5J;0c1l*==N^MX&ok6-*0GrmutDGYoiKd!}ed>R7xu$*b(N5RjE z6M~0<-@Om*8K(u5h~brOp5Oe6#CI`j$wXF2B#`ywyzhq>2p{Kt(=uP<_iW_etPPfR z+3oeeh?q!vraM}+(SRp?6(ySMA7lkfn&(StpR|eZ+L;pB#gy697+VC8<&9yh;ni!FaB=L z8MzSVR%bTy6smN4kr0q~G2VSW$sm>=Ug&Qr=u4{R0_Jb1-P-szijGfhamt2RCj8H;%(9k8>~0xKfFX^DeiCzy6)Z!Zz(YPe527X+11@3iY;cI^=)P@LZR?+*yc({z)ho zZ;bROI{0rat{Ejsj*&(Hst-W}ea_w8-fwgx?p1GsD%1O{kVkJf5F{i)+yX$ z*E2x|V$5OyzF<;xDWm4igBUMx<^Dnt_S~XzKMg@D^z(|mHX{6oA8>~tunenPDliGM zGOl2@oZkhbLtp8<6D z(ye-4VHgc_Yd4Ik2>9F)b*VH32KQ3a)#+Y`4LoqYU*1WW zm30`%9h<#yXg(TI`~GFRI$N2bB!!ZnTuaK7hDbTbn;V2!TVx30)PJdX3`nje2pbU#=gVY~-JmhNBSeaMQ0@LQE^hd)WLW7;92RXFU>AWg6kzK9e*!bK9#-xa!E2+@3Y{H-<<@Y?w0XXULRkAL-ZGeTzpzx9%` z1G50|5H7ZFXfK-p-KMmsxAloo@d5YQ@B^?VE3F*4Rie5a9vw&QUAK;|(P-v$z6m3|fOi29n_UXH)id5QbxyQ$V& z+}FiUF6sb4*iR|~?nv~-~ahP)3$QJN2X7y?QagCuO)aH-a1blMYX&Ja12H>0|1d&-nZx1%G^Y zU{uw8RqZV@`S@k-t*h?NozBt3haM-Xt*B^yb)}VfVX3L9n#QQk7q9*SpXervmC%s{@Lw##HvKnYLm==vP}^B6IAg??h>=J24xDCk zIO&wia`d=-b-$`2o4j-S3`9AXkp;(!+Z<6#S@<%h6@IM=B!sB70r(4eeNDKE9{Arb ziZ0LutOZiJP-64}vOnjAl0+-X?!Qg%+am!3CY-|9bHC_>+rm}OU zGZeFtS2d2Aob|@XLtKBXv`PZN2`|F10brNNkE)DU!CLtFFM~q#``ea3F3a75uX&7@ zcb$bgWsJm>XeSz(Sf-$C+V>I6CpW=9nm506c&|lnd={F(+9n@nPbIZu z>s<>%yl=VrG1aAF2QMU?ZKb*2GwF$Vim?6F;w|fu#2%%2A$B{TZwfN-HUR01{9m46 zjXABi#NS!;u+OOjqSIH@E?5h@c2+~ zTp4vFNnrEjH-$*@k`^!>W+&A{w~~Ie`my41e8Ym zKOlXVRq~0by$Tl&9dh4+^{e5FKJJNIUm<3@FZ4Q%T0pd>i#BPzf6lh(<*BIU#wh%kf-3G zVIHD#oE|ps^XN|{mnx$32&I~;w2-OoP7gvB0$4Ip^HUcInNX;cvQ z6BaE;;yL(F=1Y=Ba!){4JyIT>H0|Ysw?Oz$Th+XvFWEn9S9!@9ZCPDLRua9U3?HhVSk@PU4m##hVYMMoW3L$>QkEYb7hM0LO_{Qd81BT^s5NQyQE@h zL_1{(w}zp_z8$^LUV{~CNXLV*6|uh_I8Y@FgNdCVF$%}7g+D9RKvb9~{@#0^_{qor z;!FNoxnmImx={U!dpe@BJpw`G=Cs^!c_ij1x?df05OCOAuTsoLnivL2{)Taw$;uxQ zes7r@^wRvvGh}jznQ_p2`kTND9$^q<4>(Kr`(?@7YnJ3dKX$h~xJ_>F*liKJv;hyo z+TF&Kq!b6DGl~_9^6x5E3y%b!hj0Z-Q(2N|b-262!w&P7@SZtagAL_VWVK2I zT9tVv-psQ{{^_te`@i!c92Q+xc4N_*_v}s{T_=osD?Whfd9Qg3GmE+`F$_3x>|5Eg z8xHT6xCODdJN;07PG=9<190p<`3JF&sh7=@52vx!J;REhkoYwUdhB{}kammu}mKE$okr}e- zt6U?Ic#RWo_a>1!3QTbL+hizG|G=+O*?+{D2(0&wTZ^rj*7N;8><75v&sx+44a&;75UoS|R?VaWWTO z->z4)ytvIeNVJ{q!0A(=2<4t357o``B>{aUMbQ4joY-oex$suZcR}9)4(%^7hyjPh zy`M|c5=z84IY+NN^W0uZuj|+=ZW1QNO?+=`rI<4^aB19qjZE_(xbwqb2`pWOyhs8S9%%7)B>lVUnNE}f_8gjEv*9MO)jXbtc@+2RJtl~6YI{(HM%ku z&Brfxteq*JlaU}JsYUpij}sP|U?lO0U8#Tf6JWTF!z6o^d|06w;p-L6HDT0epJwJE z7-g2MYBvQ(HWWJ6v%FGsoPaQPd!Ri^8q?ti3MPsXv|C+%yEMY#))%=sFIraW%{vbk zEabY|@LV7nx>w^Qex2u);!lK5Mg2HY(YvUBUNUJAl!HzjKaYO57+=)Ta=DQMaW?L4 zvm%OlT3$N6s!~A2^b2g3#^fppS@SN~@#llDf4B%*bYES_0vd0)g?lj~oR!!(Y8uk5 zwQS6eyTb3l@SFi1N}P)T-MIYt$Wc75j@z4vAI16ivj%a$*NXSp;_vGRC-~V~V|WS# zGyE>OazFGg*XPnoYaYv)N^r*Ao8puss(eK329{!3!8{1_8ZKC7WO~h~zs1tgoBx$> zV!RsngVIANN+dzCU&6E~JbxAa5cuTxObHtHTi^@F_Gz#T;lFa#YRr8)qVNBU#mBc& z;0x~?xZA$au7?J@DL*WeWBN%zpVY?IT}4`U3X*7=%vIaQitu#vd^?n5$0R z+2FL8&H9Qe^$iz(V1H@|grp!1TuFve2&(;j{#OTw?j7lyfI-DXNmKiBn{s)m982cS 
zsej9davRCcUJe-wi6`%|P-SuIj$KN2beNz=ODrB!(2?lWg3V*|#&TT@e7ZW_>jqhP zjbka?OsMJaZ;!k8CSFQ10N%br2uwD_6fOOz)S~Qnk<8y0)vuhBmNuyjdnqXPu=5c_8x@@P8AtFf-aJQ_{E_0rdpz?2!6x(Vr;-ry?QNCsYtiUKi&*k! zU0b8|v?%&<{^YKQ5@FPSfl-w=(M$3es4ZVFJU-mhzd>@lFzI|ukXyLCzP(2bB(Jbw zhSr%(ICo~3>|0p?U%T}E&I3sPr-mMNXE5FkFfqY!DPB&Yf1h39SZwjPP0A_4fB)DY zH`As6AAHzAh2U>K&h{Yq76|-gbwLqII~-)?f_3KG%|D3-UgNsQ)V6;F4%Q+NBJd%^ zJSe*2Fui6;wmpyS1d#CkQa_{S!5B;-=WMF+VZ;@1l*M^f$mU{2bR-)d90qyXQw1g2 zAYoDdeN$(e|3C8KYXa#Wz$1Uw7LzLcgp~-p^p3GlYK7+OuYWNrGlRo%-h10QEY!J@ z?=x_ZLa5AW|2D9C^B{y3fs#!Y0_V)J$66yyTl;kXS#P$#ECIqW?t^o_k9~^e35)hz z9ex)^ur=yeq$Dd`9$U5MwJsV#wz*Hw{O3g)NECoJ5Ej65?FW!I51HF+0bI8#r>%|a zQ>>Z$0Y?BAhL~(5YV}@(QvDa<^Na8N7NHU0mxiL>yD}=9xV$oPVE02<7Ue(Zo=x)7 zksmF}?1X7&U75WOj4fwFaF9%Xc6WQE2rzx~7Ou%)0Ro{JHjw6%Go19lufdSx%KYm5 zLK@;xbH?ZGCD~o$t!wieao+-hh`aKBomt=oV=~!zftLF zdY*h0z!j^@Gj$w@B22g@-P{>YmWd;@&VEJvdfa1Q@knw-@cN2C8%gYL3MdTg#Lw6z!lJD3fbe}7{f?Ax*01i7T?Wl|%#URXU%*Fsd;B81 z8t}e>g@{YJY%{F`hepE?136j#*a&ED_1_S2kntzo#}3HI%^XV)Shio79dy zUB^Y`OQL@qzL(`b;NCH|I}Q9MT#~J5;09bK&l9C`D888xXT~1qzJ~dF5=*HNyEz0x2p|zs^|P#)FhHpxE?C}cIrrRS|%=d$rQhFE>7X z@towrWO#^3$qZafG;s5*2Fi9Adria#HUx=9IFk%PN#xTb1RjC$8}MsI%ZuSu;JqNw zQC*xX?T#do80*-XrC}al{4h@m z93rGeQ6}LJ&>-lqU*?~&VqF%5|Ks@^UJXLnDld$S$DWQ2IGkeM`v(K@(qg^w=M~NP z$>az8+HFA$>R@kjN9|Xu(eF!hdiDCxvNDEnc!$RhS)n}9nC52T;uND+fdKNW{{FEX zjoNYzA%6uIeLWst8czh0a?Gny`=o*OTiRiFEdBlYu7g{ZuRi9yh~5g%#HIg*7>9Xx zjkA#W*y7ELMu+idu8mg&f0kK+n=px;G3fvDX}?SkPelW0H6oa_oXdVLe<1RH;A*}b zr6?ym74|=z`w#F3180p*VX@E7b0 zH0I;RS-fqOF4>0cs%mm=9v(R{Id$y$&2qaBQWn# z?(2eudn2`?B?J&c$KnX&JL!C+i3pMASd7On378tkEx(QJSa-ryo_~w<~g^=lUig?fbyk zku2JT$vP$N?`88pToF?)fIse`51z{6`lhFsujA~59`6diq#QSFUOw6o=rFx2K2H8~ zd`Y%3ng|*~ULdqvrn$~`@(c&f;fbM(*5-F1T(|R2b#l)xx(S9FZcc8P7dTcp=V#t9Z=l7)_QyI50-j<}eCQE9Z&8!z6e9!S2NHWJ(pI*021|N?Kd%9Q^Xxngs=!D=;;FxaHFzjI8`s zv9H^}P4EGn${V;!zB%<*o*vaF%kPhXkKLMO2>?5xwlPdoEGsik1^z!TIFnoeR_lXh zu5fl)*5JJJ-b?Cuv`P0z!reD>+y=c4fs@&j&I%Yw{-GhJ!OPw9Dn9#KOz5R3!msgw z8G$PZt+Q?%I$%ojG9;==Q3LFC%zx+bWvs@t0)y;9c>LuwJ!B9}J^>Mk;|UU{PL|&o zfdJ^N3E2)u2H^^q!oL~ks-MTtwn5N$-(H_v2x5J5{`?O!h912?{I&a&vE>~-<$v?Y zg637(jjmKFlIvIBFtTs*5(wQ_`v)sqB(DDp ziR7^e)_a;zQH#tf%MHRR+)Pa1Xfc54RI#^MDFEc_1v6!eY2B+zpTKK4eP=cLDVFy9 z;*`hf+nyr{9wH=)-}id%U*gqAqihs;Ex3v@HuairBV$4+y86qkZ?4E1g|l(w&XJ)W z(BpTbpR?!F{3>_8Bp(cC`;7D~l5?UW#;})JKp?`P)Ju4gcYUa);>gSHEjiClA&6(? zCuKy~g8f6!iIKAH&?-eSSz|GoNK+F*i=|fit@qKah`>s622boJcEF6oFbs5PW|i(O zS0S=C85jDrTO_bp-u}=u#@vd2F_2G!a+zkf7)B|$k>;(`4{EO`J{z7F{=w}oCvT~~ z(P=m~t$9zdQg#5orQPR0tW+Zdy7vqw(SN_-1RrU_W$R%~NaU>aFELy{^@xAr(b{Z9 zh~8g92J&mIJxvO`^PFz)G|qz2w~ef{oa7!>=4XXX+$XuNJV_O$ zDdjPndO;P9`^j>m5%5MwdH*%f^%-}@jbNOe!o!46w@j>L8j?3AjHt+=95J@?;28CpWf<;+d=7U;Nhvj06BfXf8}7Idou$5%=W!vsv(J8_{ukDtAdzy9Sttbz9wi8VpZyv@hpd|D5cd;zmjn_!)1I&Q-*?f$E*FdZ!(P|3~6>ogZ*WdV%X-EWILJX{;;G6JsS^(H^ ze2FuhfrRUhS$SK*6D+Y`yLwP|v6jLYx}(fI$OSHjE6qi}`^n+>YKou6TL;3EMywT1 z=IaD@3s`8bFdsd-w3!`up}ywbr10d3I?m%(Rj%t*q=`ua*Kv2W1!4`_Flitte%Ltf+PWUkR{CV%LO}|kWka=bjzu#h)v>Vg!O!QV56Bg(L<~hD^ zM3l_Q2L_XN`7%SJ*}mQHnD7HR?1$@2Rnpxj0xPz#9r zfH1pP;gf;zTJPB(+6}?SZh?Iwz>igYV?_E~1jwscMsqv!8EMbbPE(i>`WY3^e~NF? 
z@CnfI8kz5`)Ve`bczYTP=dR(oeOYAxDu9&KbaUJ+Y>KV!?wJgMTC!-@jqH73g?EN0 zDu+-^vp^Twro#_tp*f{4CK*X~tBJzjrxUQU_C%w;*lM4Wa~&zKLHKhy(t2INn+lqb zAvhj^Hm4wFp7T+PstB?q|7hwJ)_mgx_pY)&oHaL`30!XbDE*P})i9trf!&JdC(XpE z3JZnE?Jly>NB%)L^x!p{UP0>YQ;@I9fIk-nTgyjNtFY__2Y7#W^Cx|@u}kjq{7Fie z?liIYOHjfR-cK8D7>1@kim>W^{o*H!^9ig{D(E@a*K2rZV&VZ#--HAn0^ZDojd;5FGAV=Vt2H?4OL`{;`xk?w|4t@$={y`nV3|PAVe_6aP{y77_Yoi ziWCaxebY4)xP9!I`9I~u7&^}>wZmD?>(2;X>H<%sP-tWX20sd}=+=|f?~H&5b|b_W z>hKdR=*q@qdtc-((pXya9!=Y)mE-q+*z25_Zsc%`>wZ_#?4xbjFS2r70#?PM;0nQ; zPyCNwR_v5)$-2M?0GV=_Xe_jCBw?W9ep}oYc;Wrs&r00R!lH7g7sdeii#|!8%w9xR z-18Bg?dyT3AMr~|qNqkIs!uWj2H|&$M|iZ5hH6Mi)7P{ zSy8G0$$Z8oUp{eo@6q<$`eZYl$+5;-#ZQqT%|fUvf)9coDYDqda+Hf|nMK_OP%FoC zM-Urw5z}=W>!(5R_A8oyqhwkuo*O-()Y&Yox;U#A>F%ukm*-u?-f&il#Hbdm5w#+& zc0LzjT3ssR>+l=oV5-&Pj7R{rc6{}xqPgBL_8_~d2`^L_ZjiIbiw8bqk^~LIGX#g4 ztrcg;BxyfDJpN!z4fXa!7PcjfG3)crIfEFVapk^tU_jZm*t3p)+kTECJ;8hLKL>$I zVW#m}0~=;c*%7v-1fVMC#8t9kh#!R3x6@XbUq8RBe3trIRKlOR-&^naN7VlN`Sala z}!Jgr#I6pR!V@ywrVEknn zp2(Y(3fHWu{|Q%C}qX28M&Xj;LZ7Q zRh|hT@!#~xz-LRLSw)TxY*6YG)~UQ{whmfOXheM)7t}K^MxfF>A6GQB#}-9l_y8@y z^bI-NkVxy$-t{byMxthLqFvZvS(nUzd@2hcpYoInE+5$6E8*}$y%nCOlrUQ^X!`>Bnd)5tS+P_24U@< zkn>BJ?<&}o?`C1(b*E&-eop@WnfKpb76wszdu+182p5$~`zFlXmK_@Fg5*}tg2Uxp z>hm9Jf2mfuIs;kA2H=vH{e0PqiTF9)IDJD!&pYZI^?K$<(I}clz`FE1ZjVr`?r8*PAD9s;?fn5Mg zVtlfkYW8ZG-U>boCXyiD8Wo!S!9nIV`AmDbyNTc1JJ;A}G3-~7n^q>DUzleYD6lBw z+R1X_5%8c?djMx}(LP-Gj9Z|b-9QU3&RJa;k4uJ06ph4W-ftkt9H3sD=0 z?#LN?RyhFl_ByS$MxAF_pXY1}g~p;tw5k#rcnw?p$Bk;n_VOE}ooKs2@$Yk8vK6ZxnR-O9??BFHzU`&f7!$Txbn7eA-*XOu&#zbd;lskpWPTW#nC;vR zf4Mn0Rnb%Sc-WZ6PANeW%}vL`T~RwHtD+H*#wGYa4QD>^=DtW^KJye^EwENA!dEWd z`(+r-b3w#)V@oohHEyjO)8vbj+SY^uQoGuz_6yCbGhBLX~Y%Z{Sm^G<0+l66xv&2Of4I3^98GEtHH5*WZ2v zSZx3-F#xR_)C<#;`8-+|9sWOG9Ys@(&>3XN0U-;UvttpGWqhfKG-VUrvoEh9Yx}I* zKY3~90VXV&*X$KEcpX`!;ExK`0y$g2&=5 ztyu=Z(D@#P5o}>26&OLOUWuUq^`o1v?^yOs;^bFe{$RqlIDOZ393C?m`te4>{s8@9 zrSA{c6x!K|fvO(0VoU^>z6QcqQ)D86p&0p7@4FD=3;jd#zrA<>LO{L0Sn`zFUwoEv zr|yW-=6_ajG2w?0NV^{7CY#fn5H!@z-&xcj2WSO_>?Y*s0YS4z7mk#m)9RxNqnv0| z@x~urkbM9~9DmRbPT{&01Ye@Du_M<{=*~wqGMQL~buz6;!uA=kKQE2Z9AIGXxAN?- z&j0{G07*naRBo(^Ug8p!Gfruh9`^;d-a5^$j#x$1N5gUVl)XkP$Xo2t3S#$tGF?s4 zgJ`TY^_@vR}V z4*Z5+z*LtBe%Fh5Mn2VVdK{=3u7HIw`hv3Z$#p7hnHG-#LfIuBm)(ka9`GBF@Zsrr z!S1+A|2YFV?gogzN0<~XErgh5uQdod334+B_}kg$`~+0hXFHsCUhE2-h}Z6UCiE5r zp*OqLPv8a1lMyD3M!O;7i4l0qR`e8~;OYCx;*Wqo-UlF!AQnl5hd?~7oQc7`GDS4Y zOuEJyizN2xPj?0%Mp9h!+A{~5GMGDvvPa^xU$8GTIcz?0w<*`tk&IAscmP-H{jlkP5uM_CgP4;UT{fC=6g`Wc>_v9DEwlllMF!jf0&3iFZ1lKQ1x#L$3 zT*$t#EvJa^AJ2gO>m?R3M)oII_6PbWF*!Y7FDqs)JJ78g6;+;jt#q*5Ws!~O$40lR zOfvSVb{(^C!Pl?>OamAYwClB0e~QZOn6=x$%aual$>$R3mHF$MBaH$UF?$LU;jJ<@SQ`2fT;lHM&yjj)DPhzD|@u*;r;uaTWTU2(k-AJv($D%H>p8~)xaj!EHX5}Ju^(@~<8#wvx*bxwM6!H84aDhNN>+Cl)@fjNn zMkq|M%iIcYIj36I10&}?o!6VFyudqi4q`g;i+!G5!okzfEx!l?Knl(Gqn5&IJO zksFMzAjwsKo~>BKRjC2;APi62{ml+!eUS`PIe!t!)fPjq_eTzZ6lw3*w;=_1F=6$n zu1Cu85~%bLNuD%mKBus{VHfWgWiK$Mr&%N=dkqca`OmJ&JWMw>CYH2;+JeVI&ns@8 z1Saha9PXlD(r7eB`v3cjhguPgRwU2e&zn)R&}<6Zs#|TBm0u@~=b8^cyD_IZQL(Fc zS1c^UbuZdC$S30tA&J~{I+YBWEJ1A>ncN$92=0+0JxC0s6DGKl)Nhc(hw+GP9&xeF zyhlYxEFrX2&c4^R4#cTP9?b{2t`g9I@5Znf-7nt*{V{B(IF|bi*C#-JFo1$^(a6I$ zg}+iP{h~MR$1zD|kO_S1tnExyi z@-Y7HGvsasj)>vfY*KajPw@%A+W!-!IB5K_+kB^;ma$0H_LHh@#ak`OQ`@jW_ygBD zw<;g~t&IovOdeoaXQx;0>j1Djd*;04CIBb10*{y#rjLWhAQIj7Y`}l47ys{GC0iaE z0E?gwX{D!=X3r4gO)rzb&;@b3JXdvZ;BB&5R+>>E=M&QJF8!#{p$$iei;YPV$nHra}IfiMx7Nl5O6Whl^8i-I2h!nt7b)V zOS2%-D%`K|+!=SwE}t>Y++jZKp#7yuGWV;MYI3zGHc152PowN#>Pv1KU%l5r+~$?2 zXMkU597y}mim~GCyFw-~*pXV%IC%R8_zc^RGBQ8%;g7c%Kyij6Gb{`^JSn`zIMZKo 
zdct(9EmePcwV*a(yw5!L$blAUAonFaV}E0wdzO7V%?%^U?ll<{v34|S@ zkzwgC%ZxH{MOXOIAb#!N=+}M<3jeq!dSC0K@#bbK!Ie7 z#M@tCmaGuoplF8dUb(%(ACd~=*Z1$re3=DmnH4HHX@JB^J4-9oW7Y*?&_OI`8-$g9 zyMrM|Xfha%mI)5ofHBRq|1%ns-YL$EmI%1mmvR=x8u+Sk+;K8ct$rRf`l;I4FT^6a zZj0I>*!CkM@h-InN%M_*z^NsINZ4*0G14>FRNUoUj^;b(t3G+JF+Hm`@6NlA_NKFN zsuTA=#r|tEITjlgf%hdsz8+IFzAFAW_9t~*GRF@>rYjUX?}-Dt^0$jWe_}LwTq4sR z{RD1I7S!-GR!qRKoNJxhTDY}wO6u(VAAp}}4^>M8UP`Aq4@t9#-xc{i)_tpBXlRg9 zmDoR`{|R2QnRO9<_48jKK-~HNMv=HAxN486!pyRD`zSH=gz1ru!CY(k!hIVeNecBmFPcVp(#@82YMF+6V|kH}f>$M$ID_0U5zoAr(T8M?2kLwkcnf@M zllP?CmHT?w-xk>tO?uNJx$9r+6utS6YKG2x9y0vzMiQ@(s-Q1LYa z<`V*{NO2~}{PE?uay%i^@$VJEjZ77icp-#tira-jI?yP5 zE*`I|vNnqFD0)CUXMs8;*!2_{cZX^*f$zdKAc)75KV6p=9?C&9Q{2u6Pr;5 zi-ViAz-e2%oK`OF?t3A7BH<8VaQGp?J3V>Z>HuOd`SFLm@91aCIy1z1>?QVdRtjHo z(8EM9{eQmxW;xL~p6I;ah#%D>%?{@x5nbLs4}Dl^j^s`fx-wb`>`ZX(kjAwxXY!bF_C%T}|o}a@XS=Q*})7Qszw5&@Op4~VJfybH(^%OpAB2~!w z9ACl=3SPy;ReXl4)_)(%Q&#DfajrHrw&(-M!~iaOTC$P&{vzNRHX;I|0z432d?ooI zSz(2Qy&Xc-t?mOcsJ@$8KYe$F1u%q@aHFhtls43wdB6QY!5nn5Z5wdy{`yQ$nof%3V#}hjl_>ytNE1vD$VpRi# z(W+8-sR=)|3Jxgl+Un@EXQ><|;E3_m9+iBdod!HVP;*0o{BB5HaQAGIN0q!Ib zdNu-i9B++BH5wA6VXwX`d!xEH{^xY?t?BzE_Sn2{>=_Z=AoW&mZg88@M}x;t8V%$p z`U@Tlw@v$0C(Cb+fQOmNa0Pr5t2SMXIBr!$dBQTR74Kz?4lg5&`^`El0>I17*IZ(k zs!`Mqf|}day;f0Jqz#WSY5EnMD|nFsNw_Xks{NbK{Zl9ZIs@<@=Mw}ubJa10m8rv% zunqHKY;E|tqDGuYSUz+KrisHBq?ByMMj@lNbuS!lS@TE#x;C_sWOH3(BPDU{oIF{MjDUc`E83EU+yRhl0MB{Iz8jFS4(-bCgPFwY(!6r3 zz^#~80xJ^>4x;kkxM#q=&O~js8!@uf4x&DDD?J`14WI%AUeaNAIO|TTktfu~gnx8Lj*fT5y z$(WOMAl#nhlJ_5aAxJxWY93f0+`@7C!DoZjVSqieKM#jkIs8W2$PgN5PJ^87q`$M= zN`$k%LqqlOT$Q9Q33M@DQOlx7+GDT~#gqyMuAqJ*>1(GFulsC5yTE$*!nX3ABN@+9 zPb4i5Fn*nT=ARb&UMWAEI;05Efd9(B1UFS zyj9>18x3pBhC!Q?&qgE2OgQWyV6g>fcL`qhVf!mZcQJ?F7FNuIw>t-g0qMW6ZHpgOJ2n~f{pjSvi}aU zVBQq>nItx6D;ZTFNN~O#o?hcaP>L>UA9KPMMQu6LQHSJEK`1%f?o&*Evn0e~n_<_tDe=6ZO z!#ReKbjA|@7$*`SXezSg>v^(8dzYHi|MWu=jNrsC+A&!SMYeAU66`C})IEqvLGpLQ zSl=N?Cej6jvVTI79U^>4upE9@t>$W4Rm&D{mQ&~OPtKB}+8h*p3@Cn3^u4{=4#hjD zNcC2+YufR>EN{G*`ki9uFQI>5va#WG&fUWOFeVb-y@K{6ME(}urLgDL7-2MCp6^41 zRR78VbzFxz`^SpLUHAp#f|KTfL`DggIxj!ufN2`!|qYa4G$+s)QSq&5#{20L_-3ce0#l1U%?h zSi9z<4?8k2Ug55Ir8qA6YE&A5vew*<+e?R9TGs47x2`TZh~%DQYxIa7%5`^!=gU&! z9Cp4kK@e#`Q6BBcK&qDg;0Yn5;O1{Ce?J58JLa!&)C$COPAb&^0P2izDk%G8_**uh zfDzQ#P`M!LDHc3Tyz%nn+W1I1jb3yR6}!v$S_cFB5oD4J=elD6$ht(6tGBCYN1RYm z%aPP4%ZWz7*OWX)@D-M82&>cHUMv90JXo+%;iFZ5i0dVlwBmQ+hGs*W<_}Fhw9~Sl zd92S{`7g1B? 
zeo?Sw90yB?RjSYY3k#@Ckz}=mmBB)^A-J7?gz(jKyO`7qTP?YLwM=HW>F3KVTdTnR z+mWamalSp7h^kQgE3JJes)(E5X%p$@MBoR&c#!zr>V%oE{T#W96OV9+D|C_H`t8_U0s0r;Kt7Ziy=WOGs$ zN-=_U_6J%~idr;BIb_-mX*M+uM4LM9>iY^HG?(8v&CTk7gmPDv7LrghA1!!PlB?a@ zzRO5~n^Ac+K;nic0hB@boucva<@1F^wE&>bKKpsCBgnfjl&ig21LN%+5ysA&t1k$Q zV`Xcd&&K9{PL@Y&?PUdxiAOGX#T135HIHwQ9@~)iwvlYA1@%u#j6kVn3^(TMzx6|i zra5nes~1Fq_@Q`%?}12pB8bSeJu+3T?cxEN3c@=q0_UY z7m%{!oL+&FKGhZE46%nA4KW^jM>5BnT?q)-e?PU4qBcS+pjg*qjA<5cv$k2lgJiL;q5G25i3komS%Fa&gOX4v8zub~BHe3}a z=l(lBIq)5(;6ZzG3z-K>lq5xWxm#e(=E%=EVt)Fa51FX`O7X)=RsYtE`+ARG%VMhQ z>JqOm!qy16J+`P818Iz2;nyO7_3>CCpkPSR8qK4{Z|~?C|C`qfKWx(*UH`W~UiWsTGL8J7{qSMVr<}^TgC@+Xn%#o0$Z`l$ z4Ka9R6drIP{G9@&9(R@ybyxH9n@jzIBH1K5S=n3N@`Ngk2meI)!86*(GV|(3Z~dDI zzdt|zYxu|izS>`);;T5~bReFcV(pV^?v`)|Dc&{e#)!8R1#)lJ;L)XevB2@M*iLo> zB$nL#-dZ6Sy^%%dJ-GfXGzV!EQOI86l$}UYCoVpe z_a<2!1e^T+Pb+NKxz2t>3;a5M-1ymOnW(I!EW196NoThHJd8j&KAcKEE(>4Ky@v2F z!!Rp`D5tddM89Vze)h>zv-uR=D4kQB~v+lqkXoKi5=Jp};X8t$n1Ia2mA z10jwCxB_6A`UKF$37mfainYry2zQRVMbThx4fisw6XC8#Sl#7p+%D6UDL#?YfdmB| z`v_t#`2WF&AHCs^uevPw^a=f21cdHk93FP(n(=7C^1bKy{bF40Kj`A3k^z=zJWo{Q zLWXhsB|JGCsjR5XTse|O=$To@^i|(j_Q^FsYr=Zrh&Rr~N1Kx;%kPeW$iC*W3cjEi zhOhy%hiNgcX+)%bHDR@%?aU4>Vhh(^Bc>;%V3oYGc8eH!f&Rbt&II13a{c4)+IydQ zK4ztak|82PMP-OmX;i5cl0s>|q7jv7&`r@?s5EHMrHK?tN*OCMQ|95|aGYuHwf^7V zT6;Ni({=B^>mKa)e9m5bt-aQ}-uHRm_nF`4NkW-31_InV5K)Ro6mEI$y3t5TH7UF( zf_#9>fhPc30GfQhk=XBW9t$17T2s(T0!U{ZCP@%Y%|6G}O?Vky^yY5grlT_YVf4oN zb_E_^5lTdRW}Hm?G$aYMCY^>vd*v_i7fg&0_|V<|V=kkU3m7Nad2D?$wJ%7nPxfFl z;rYckhEW}Ce``nch!D14ZY5;cQcmQhkj_ytg*<}b@}XR7svO7gtEY%#wa$Sry_n+) zR1i0Dy$E!Nw+!}##$-b?c8=4)^k-bZN3}GW>!VS2tmaq}#a_f<-^Dk^l@Jb3>Et8g z{XP1sDxdJ^sG|D+3}q^hfmEDZAe)+hS7PasF}b7jl)QwT^2BN##P!Z_jLc`NAHjYF zkJ;8sZXSoX@!63?d>_%@-{L;GHvu>56q?|N=-Yoo<)HO)o7KX!aDQ<>W?Bm$D(b4Q z$=_R37>;B+j(pvuW~5fN$N$$Yv9kXV5ZpBG#~~jJ@h@;5frT~6Lqjcqt@-+xty^tW zZ4Z5X*36`K#+=u^MUylU3coZaXd4;~>{Ol#QZbc7od1`+d;Hh_tK)$u0Dg;ntF|Vz zx1b8$iJr}bhofgF<_5xT5d{nc1Hw70Ota2u<#A)PFSt7SW!x2?T|Y1KHiB+AKE-;4 zb?G>5UC-+i(cOxZR1|h^$@p)K9sdjhLTpTEH<~^OCvH^WJF77pjO!Pisn{fE_B^I; z=qvEib*b}%xqDRfFcIS`b1q-6T^yymgb+LD5$5W7&U4vT)r|3Z&*>;z!OIOlxO6My z{q@DOTjYJf&oAfIwHt9?J}9#-98$J5N8S*Gxs8XYqs%7aO%>{U!g)_QPMvJB;-Zpr zy0Q=VO@2QSBkTf>La&=SY9pKwuxw6Rw7v?+=e+m#=vx>xPrZ7)xc=XP&c$H-4QJR@ zv}1--pnf^c*e=GrM6&6q_R*@#T(6`zG1AteKW|yLXrmj^HUFUmJ4AMAqWQY!$&Iy& zKV$>(rjIY$-jVauH~dmOg8Lq{+9C}YyO;f`++E@0!>+0`8Gax50+UMOAA$#G2lo59Pp@eG6S47VQ<`s0x@|B=}7&miD4tmT0oERA!l8{FF@ zRu46oQGMOGUEPShYuXPyVsfw&w#$O6I%RKS{LVKmRIM|F-4yTge zs%sw)^7VgFM1%gf#RsqukPwI;1nU8&eMJD6k5O(703{Nz8hXQo4#inzaBq8W<$1** ztd}oYR5x4?@4gqZdIiv}?Yben%ynzaK*J^ipadj;#<8i=!?_5#>0fU2Lfq5QD5LU8 zd?fe|T)Wx)Vopnk34GN(tsA7&1h1A0RyQDS+jh=WSm?lwn33;@0uaRLe1lp4A5=Mk zIQ};wf0{VfM&^(tRcn&nV zh4ZcaX7kAGFN=$=1Q%4(nTzbOM zOvf$s4@&;*!l!T~HEXMFEML5g=Hx89w_utvQ{I}}sIV2=1ukCDy!a@lY5Ap-BF&9? 
zu;!JkihUBCj6#|NtKZk&31q_Ar334d&AoQs9xcuGn zTZb1aAH?#mkxmNE_QlUPrIh&_<5fFxxyAf)E^>y50V1RKU2Y~eZH$$QNBfy}a8$aj zv_F6Kf%x@*PXPR03LFB6pNK@;9YEYxDh`L@V6JiQ(}tFW(Q#i{Sl92J*tu^OoXk&a zhBVm%N0l?K@Ny-jNc}!^zTB!JnT{JZ(Ntxbuy9=@Cqx%3zlVUlNkmjC%6<1L6dMSouXTpB!SGFdlc1x5U=Mf+`M#|MuCRuFtPb zMYBKbzN;VJ!PU#;1!f>h-(O>Bh*{eBT# zCM70L4*W>_=aDW21m`pQc>SN?Ht-$=K==zvAFqzWSBq|zm&f**JI{>332t7z^{X(d zfmJ!{o-1C7cWj##&lSV*&uG55blLhvjV&k3 zs|Ee?GW?CW5Un$I@QhhiP?-T=XD#~=w}ME+nBLFF-LvJEs&+?`yEpEcp7{i-gnQ$T zCoitCBTftDybU~RO@SCFcFz<^ZZ6_)&3S!Sb8ntJ>&=|4-9h}1{k*uimN6~ft@2uM zGzfE4QCNv=2s5!<3o2nK0T+%$f;+qH^MR2_#P7hXK7%ATFPsd?HzUtk_**dxX83pC z?RsJ$WyOX*k&X?P8*^{#F4S`TA z9i8=R0K^!$B{16Mps!<@O5!{Yehx;%8ShL{djj`Z@m3w?&mDzj?UkrxuG};xGM4uI zlGj!VIyPLF+sEd;3(r`aAHJXCV>#t)IKzd{ZaK?oM|~#pne&E@2^EMoWU613Iv5_% zG+R-oB|S62(*s{hDk{o~^w0*jqAdF!-fVK(5kjjyp-?6#m6b{}=la6y2&`_JWj=M` z*0%{ZSMG_L@eA3AyfM372*i)Sc^V<9ja31~S7Tmx5m9)L)_cW|B2g`DJvP0sG0zP7 z^3HtL!Hxz+6`W4IzhB@oZD6z8;`Up}jT2dCn>dOHB=LW^7vI*>46I=Trw35&u`bew z#y@NgN0W2f#zl5K1bXB}Tay|8(>ph6oWeMx5bTCo-&on!C2q)+W~>BBn7)czP7Dt zWwiG(Yc{687=(7wfK`zZkHfD=LL+g5Za`tg0EOzpp%_* z5*1YKu8&%*P5@q~oO9yE$rD{*&wV>WD{lsU1XyWE()z@Tn_v_Z#D+%ljerixGOFEpct1*PbHsZjiX;?>qq3~ zG7~SHk&xF+=>g8YG$V(4?s4uC0WqVk1Z~K(YJslW0+)4YsX*t}Zs({En;I^~<=FsFCNasb5U5`aeBo zwL%%3-oN6?z-;iw^+jV89y!pJn2;bSCj8+z%>e-YR?q7)O@Wg&3{8Jo&U>O22Em)h zop*$odc1nk`xziGAI9i3pOyIwo#vgcPII-ZVK-JDF@_L{{qp%6GoQ;-JtnaYDN9%f z;VCkf2Idl5vL~1GSQ0xqisJ;)vtV-F_Q(x2j`}OXNoiJI4L=3qE2@E8BZ#;tVDAt| zhK-`B``oLL@>~elTcL4e(KS_Gjyn!9-BcTxQq9^zJ%v~xDsY^PqU1V`*IFOCx^jgS z*-HJpK=)Z3y~snJh_3dOXCDOZfRFx4@3)_VY_;J02h6_t+?MHV6M+anZg+AZB=$6? zx@=TC8;8%9?TO7;ZQS|Ul6q9SfdB;OV&o3NojP5_U|}@Zxr&b4=~aa0TnV`;2gkm; zIKvrWOs%5qyg)zF2?VE&%TFVWdZSh&(0$e8&oa|$6tc_daHjau-l&L2v ze*3($L-SKyx{6!|=r1uizZ2)4I3~DB`;AszxN8hae@yGq<*EnH!UQ#=Yya%Qz`l=64Ve zNzX?Sc*}%_2MXjz-D~2dW3#wEH?->8O|9Wb!&_#Nn#NQ(D&aHh42)t^?0Ir;L_JX3 zU5u`6pZX5<$?n5Z`&zU*>yk?HGSsV7OG%H;h`9S;w zL=c^zbGE#ZLXqI7GTPtK`0_I{F2lx9;qGRen?hI?r&SU>3PjaEa4q zd^+%=^w)=#0q=StUlgKBa7yB7o_8#C$W!@USh4{+@WkaG6+D1*&-v@7Mb1O=T3#h5 z_%NL8<;4?7TP_axT2Qba5SQr{MC|=8p4=h{Lg=aFC~ix2Y8#D^R=M4XJjFeCGx%~{ zBDn7J|M}{E*KzNndqi|pv<*$?j%LkXMc4B|-9jKDj9yWT{&VE1Ok@xas~!ks7q5TJ z?&5}GHpxw_SBWjtbk@*(h=)r6cnx8?Q(eTr5yY+WNnmtnfmJ}JG}eQ2F0K*75S%|c zW_ZFo1UJpSW=m>sjQhvE(tJk;#;LH%Il+O9Q@NrCUE^>qtEbx^p)-w_iu!lx6pfk) zbG|#q%~0ILG2Fc{>BF&Sbx9avjRMYdf140#f${$P8#bnM{lrGcw@nn{FT=)2=C!rP z`o;~{tew34&Rp}tduzuz^>OAW8sen%7A={Bt7sv6OU#QY6b*+P7#OIJ$F;HFw{sM> zxK4%Yw!f8@N9w`AogW{R*3-U`+_P|w_?nk-m$t>S^B4WW1(AKV z)|==S2t}x=0lX@R*ZLJi45w%Hb*4J2@WT0=?OmwI?s@;}VOb!C`0AlqRvHsM9u9I4 z&wXr0x_&e=$*Z1u_`##XA!>XXvWV<1)1IAMrT~Nkg_t2@S@;wCN^RgGP{hUJtl+nu zC~tgV5sE*mUK$N8d~cNh)zOQ>uC^Xn^rz4i5Ll#JH&tfg{78LeLThNH&&6-byt!uf z0;x80Ds1l*xt{w*vm4?}ni*h(r*p0x&myq$_7{V+(b|YCv%6d}pzXOCv(RC8IO+L# zR7*eg%e@G*x{03`e93CfZ6aq$9K(=35tKu%1Q>sf=uupkrJQF7E}=b zufIgvV(J-s-t9?8VN7q-x=T86$Ntj*rJ%8fT6YV-Fh<_`5}HrpTnJyR$pH@Kz#IQx z<=`U#0Q#m9p_~b6*vFQHC(li{K#d=J}0y&kGSm6TmnXLHj>)J_={VlQ8bK z-S$YN3aX#gvo;pq1)_Q^BVt_(o;}OHMhQh+-LeppoazomghJQ;;^M`av6`8zJX%?#r>;v5<(=eGh>^HDpi6*!}<} zZ9_zwZ;ZL+nhqzXkA#p6&1x&R(Yo)G$28@i@GzP+`5Gp4W)8fV3WA9}RzaAOvP(l6M1cDjVQGl?>PuR4_a zN0L`GDv;05zbvY~ zazG*P`DxtxR`v>1pZ|!v*m@p>bE{*_O72<`l^lf74&K@3clW>@bAdVp6IasmNG&GP zhR_(fdVTyD%<$%inhQgPgZF68Y@*87p(Z{-q64wj zX zge`aj`MBWli;L|Kh{$hh-=4VHVqV|#T$fo{uP~46CHD?=0xwj6AsWOuSU|*I7lMV) zUNk52^@{b8oN#OzFvM#lbkd@xvh7;)1aK1GYhsg%;NLr6`%2WW4 zJ&JJ+!japp=tu`0ciie=OY0QI^=WcwpbMw2UE-JzT0#&4T}x~IEvDW)7?ilcG`47& zS+{Td)ENpP+|*=40vunt6S7{{PFrW46L#G>o1;?}LCE2c=iG_T0a$oDs>fhXEvpM& znb%=SVPRc;kN-ekKaqInS4mzNARD@ODB7gp`R5UI+*@NGdq6-5P{q)MTz=dXy%8Dc 
zOo5&snO4?161T)_7FI608q>o`g_Q_MCjav!C2_~oY5=Ipvw48nLEr(YIc8>RQNJsnbBmRS(L5z0i z;3oi@;F_S~Peq9}FyersC6Gym;oc1B5}dj2auHsi5QuT~qML5ct_tV+{`MD?qmyv1 zwQ=xczW}j^(s;NM(_aWw!OcWi4!+_}eDV@a7C{Vc4~%h*g{I&urwjO?O^t`+tL@RL z6L61Q#JRY96~)p~@wiDGpz#G-2Ar-Sco_?)phMwK13h@YMK%Et6 z^U$T%yRl{&S@qwacJ25y5Y=}VZklQ{;gjoxswtC?v(>zW44{M|S#D(tHjP%}peurj z*KSWWU{H-Q>dxJu@89<5%}&rXgkpCo0AXlFWT8TWleD`KcmDa^znY&eSbBZIe7qz! zEZQ1If~Z_G={O6{s`xj&PPonvCN~!}wK%_9KqXbgzWq4_JkC}5%s^cD=2R0$j!W%E zm#`Y3orBRb{)abrZT?+F8-AyEDBb8~i6#s;gsG&|GxiTVKZxf&* zo73O8U(abH)4?(C)mjle&t`E1Py%jF20;Ru*GZp;)>>VVoCa^JW+OTEz5a}W3C!ks!V2M-HL{~ zP4qGg=c4AVKT3Ze-QSU?os-y(jly5#oT(TesrSYuSpAv*gV+%$5xN?jm^dO$vx8wl zsEz9HLKFi`R`&?D_$YZaX1bUQ_U@m5vHun4f689m7<&U+&E`5!^R z0TD}geWb`RDxX>5TNUrW>G5UjTBv$%yzsO4^5OK?jo36P%+Xq4y<_@<&bvp}x)Prx zHzBy|F#)e&N{p}X3kR$?x$sDx=Jj@%4{`Uip9V>e&<68d6`=A5Fpo-o@l5Ah zpPsw(U@RTXzxb7b4ps|#*Q$7oRJ|uZmQgA287Ug4UNb41brJVvu*i&Oo*(Dj#?B(G zneSW)N&Aj4!c&KrqxTTw{QdDfbP0e4OvR_%g$$?=_=h-)R^{?sC*np|)IF5DDqYhj z8zaw!$5lBwcrsBNC+5z#-#{YhA9`21Z8(Xs;v~?cdd5tzcK6v*`JvdVLcmo~ATY`W zqa!t(350g2n)z~Im@y9ordKPRjS6hW(jEn0v2K=Gvy6<2HVgkmjzfs?S()oO#O?1L z?O`V18~^2k9Uz)v+25O@@+_#8>30Wi!s}<6-BiXj(*iTPU?cA=Eg+3Rto-K?0M@Pq zm4oIQPSiQgOF$AhK5&`^=WbPWY-gM?PsG33W$QiQtAXK)1gJK_7lW1E9m4Nn9Ij0r z?;`is;QCL&t|A0mC+2<=ff0B*al$|Hdf^p$N4bQ+dk~m2!o!@g+^>yMD$4HQ5pwsm z)P4AfS*u$!P0kOj^S}w#a_#&Me9DZ3ft$B^+a~QOA2s}zD%tfAY=23=J_w<6GH`ch z-dl^UC*7Uueh2-8Rel01|N0?z1T`Gu#h)NbMlViA{_G$q>qIN4Fb71jd|s-Jw3#Ut z16ccqT-klPHjrg(4vj?YHhJ0M<;sy8+C`uN77+T;ZLX?G3$rvAMXwhEUQLgMIGRAf z3Xuc`I|`6(YCW8^)MAHd=f|Gjy!BpGU^zb?ziSc*U|8XU&XpjNR^d$iPRy3Cw-rQK zD5LHiAHQu31hU5~D{^lD@jp}U@xV&p6A1t0ybe6apflENTH|U>d7=APnjv&>wN3k` z89yQ+Ro6Vu?;HMvpdIGau^DG7to9S#e~6bY;BRw=rEXe9=%a-z>lWU~_|(gOHdCJ4iRH$37|=0{AB-;_l!r# zODDXyNVuN+(;#rn zXKSW4()H_?H=$I%en~-YMnV8r=jH?7I|dc6cJAl7MI^OPrXFUP z7N#p*a41zp98sLhiyP}qyu6WM!ii2hXFl_PVragp3Swwqd;|`#=HxPi-rRZ>;g@fy z);kv?S-G{Ty#iqw1hlP7h-ekI8W3_|v^5l1o4ifMA1ek?voYDt1I*LmFa3N%G2=L__N+de zmlF%9{gDe46w&HMz>i8G{3fekim)%B{F0<+#o#pkHSKQ!5NCeB@HdJH6qwHL7~5_3 z78mi$@>QD0vKDBX`RA%?nRjVj;`(W$=d?*%QtcygMoT@nXL30bO3iNeVcM4HaG?4n zfAY<)pIS|vqs7dL004*G#g9d{`@*65<rE6HBbgtyPaT?yZg6g`P8gh^pGX;(LM0#w;C^bGtnSL@|DO zm*UUx4G9M~DL7g4u@%k@AORI_upcK%jdlpnk|bfZKq;2Txpv&@J*696(7=PXc(Y6H zp@Q>NqE9mK$e=b+=ev4@10n=XB58R9d^XMQE!QCM3j*LVJJ#Bl6VoP?Th=~<>uF`{ z1ir(hb#ZclaAu}pLU@5xgMpuz)!Ed$81{&H;!1ORPw6Yl6e12@%$#wyW8~#LIXART z&KnDN?)cHh$>YkSYIF@9K~UJq9V=)3P|^u!3ch}*f>4PK(mnu)N7!ZL{Df2`w;mSX z4j%&G52qiiLq;BPI$q=s1@UUAi9jNUF$!+xmu`bEO355XLZ<$wq$RY(>*Ab^HEj@o zTHUy#tsxMCUe3+#et`<`OP)OCg{bf=_}?Erl$CaJc-B2s1Yo zLjJQo+WDI28h|#1_`jnbtp|TB)%emlo?*ydE=~Io0tn|G7q*tlXuGX*(fi7$kkv7M znKpDi)~3hM>|QtpHi>)KI)?juIWNk{YqhX4S?iuESlF)s+Y0c5py=IW8jrZ( zS&CEdj6^us;>l-%B8u|8bm|WM1+E;kDn}tgYoTv}Fn|CeBS>em1jgt%!Z3defjiZ? z0MuZtt4&oG@o%Glx?lC^zceBVnsvw*AMPRqTE$W4PwN6yeNt%*C13|597%N+%IF&@ zzy!*Xl#+8?m0cSYLV9j9BFe%zl9pO3?r0BDRg zxqM@fb`qmYj4lUbePlh1meANeyrkSmoL|0e2*f_*oUT`7{(>>>>Fw*C$*53X54Km_ za3+!bgV2@0Pe~Gxr6HfA-r`SU_q3FK_?+bSXwMg;5)*RS=t2W)BK($*Z1!yWA6?t? z_t^LU4FvpT_FI^L8lyRcg3xhx^tc7~MCT<4zjTkJ*bO26Q(@RZ_;yRDvy8Rrr^nA! 
z$dPY8TzgOC5o1QqzqD=2E!e1<*B_V+<3^b5`2@-{R(sO3cjI{>e)Uw0p0J+3?)h4( z*QeU|Uo3|=0T5t)PF>DpLexUyD!Z64ApF2nfi@DI?U}?|e88AS@7B33=}9(_4Z8NL zw4SJ9A6#;Ngbig`UE=#Fm7iS*$DG6SO(6uDxXQcb@mQ)gR$>r{L12F&p!HadjfhFC ztE&jt4C2oi7e5>Gx0!P8+GFhz#tcqs_es(lc(YY+KPvMi?%&LAmsKq5=Sk4glJi8tQzd~-*XNK$TdZ%v29CVecj68b(q!@Fcs(l1F3TVi1TyU|$gM<>er(m~{OY z<9H1rNE#4oW3^-7%IX~-+G8;C4mqoK~2zsw)OJ$CKz zgn+NQ(mr0PYEnI?iok`|YGeR17U80>g=Mo6zdI25fRt;)Sv64w^-euH@FUlEI<*z+ zE6~hZpm64U6mJ2f5nJw^?WywFuRf(5I@Pc>nQ@KpufCyUHnxB*uiKaje1!l25m8A*K~(*vMk*d0x!D*pjnXIvdOjy<|&MtJ4T5!_qfYuLS8 zW;($KGppF~Kd}zqvVT+Nvp_0{{~@0DZFwa?^<|2ZE3t38ArK<~yFsy!FZd2ewTar93PNv6u3b3&^oQf7$jJM}xL?0jiqRMIJC8`Y)0mbUnmn5# z25LANs*$t<+XJ&rf6fb07>QByquBcOaw%5!4S^T|*f(tU>lbUZ)jygPdb(08Hod6c zfaE7gzPJt!gHDP0_p#6e#?50B$0;#M3 zun^m`o|Dr>@>8KNtQ(2l(=qo8djK1`<}^JeekD5pJI|?>IlKf0PHj@UN~eE=xd4wI z?dM5zZFZ6Xw60=C-V$K_$+s8*_!Hp#;ZJ*W7MGf~?nas{oJN_ze(7@B1+QizlHNF{ z!PRkaGIK`fzG2_S1b-D{mL!L-oyT39z){Epk_@~l>pvbl#vpL0LcmkW*r-DBo+G>5 zMv%7&%_d6!(Wp*8W~cefc-1`_SCKu==I&3r+@C#_b@NPOmNgQ7X@+y88@#Qhm4Iug zIR8RdqTa=Rbgd&lK@?wOz zk3w+%_B_0M?nUKu3Pxd6t%Nqocm4SV7;*ec2!%5S*SrK#x*jY45&|&-@Gn8M&l|i6 zEiZq?!ckPh5N!NCgk2+OoEv47iJLt;zy(6=;}C&>5Wf(0HB7uPIL<^Q;B7ejTpQ{miFn$hYd(v- z$o`rpJ(s4#Fx6jOb9oZ$g>;P9;0S60D~Vlu3+LKisr42Q+a>JfCxDXfpY<0j{~iQl z1mNF;*FWE_q|lmp5&ks6qv)H|iRYRtnCO#`2=HP!q1&_DYcFD~1< z9>o9J)-?NE@>6s_^XtSJ?5`QnYGOJnpxxyiwpC_xb^&|r!1+Ma!+c2wvLUSs)A4E;H1^Cf ziTzms{WkWn=c2m2Bzm z$_9S4nE#uX=WlY#8`I|X3NHnj_jiu$G&Z{i%jayXl2kd!$Kzp%6#jaGo1|XGT@ipF zD@^q!u)0`ZvGV_eKul=*|G}^1X-NXw;I$pWRtQ{VLEZ_EVF>`r&JIjmD@*YCEx{`? zvf*?Vh1N9OIFKM#U6-F4u0Xin`!jY1u0xPmMy}SExIfEubU7Z0y&^CwF{0qXA@*5% zA)wtel3>5X=M7a(@M)Z(kH*M*mc2%q;}%sasOcavf3o4eq1iA%sYS`#0_~aN7m}Ug z8K(F+s~;XdB}FRMot$R!hkJFb{524WNdSKh97}3R!w*56ing^#moLqyq8FyY%QR+! zU&{m^4r6pt;%U|*MBfozf6F=nhX07<9|%-yc2qAePfjKidVyFbC-MBxOnVU$Kl72# zS#c>=Vh|`*5D;Th#JpEI5hE}29%dydLXh}&@Cb4Kvnp*_>9k?T{EVJX1`@zAw;!=B zUMkQ{#PSrw^9B}yEHj!Ha( z?%#R^&ZZaiVVa$isecWzT36GuH2tuAAc1DVg%r?g5PIowcyVz-`jgDCv z30tiX*%_a+dAz;Gm_9q3HB5Nlm|hbu`!#Dsw4HkH5`s3PB7!Np*p2yPQ=i;96}nC9 z#(xiAW96@hK+Fd4*TeMpHOf>aQCOUeCmf>)<-8iP3Xu?)-$hI> z%k-k&#{B){sTUWHhd}f_x?J!+1d2OI(f$p`ZISlM>}0jG{^qKfdYPJTEpxF!h$zHK z-f)lJ7)u&a2}hmfLebx~dEP7DQHxO}5WSw@wq95K`}AG)c*U<*UQk}q+f}^#)pwO( z^t|M~{x83;*SmZFKTF!{y+6V2y`J~q?;R8NYXAPfVSn5C`}_RVZ_Lj3C_S2bTgza?bjy;J+n%ohVNG`$;K`eIgBrqyERJ0spx60bOC__TK<2F?|?S-Ab*+fudKurie<6`MUITd zE6il2Ty!QlIp!0b8^*;AQ`paaAu6XYg3EjKsezjNi?&AGkd-r~Sp8KRs@?QO3PF>aa$M4(wEk1nw z{+ImjwMaaw%FgI9#f3-6^t|R>EHI*Des%6QWgWm*A%Q6X==x<)v}q z3q^2NkaCTq2!Kxn^_&kaAyPf(_tSGpa4)zYdS3TOaI^I8^ZV!5;Poe!3|wl*Em)&a zJq8+Cvz1FX^t$z_RhQ(FaaPW2QX2qJ1PC+tzbuHBMa4PWX+NHOQi! 
diff --git a/static/images/kubernetes-512x512.png b/static/images/kubernetes-512x512.png
new file mode 100644
index 0000000000000000000000000000000000000000..1bf2ce524e900d79282769faf15bd4f0f546c6f4
Binary files /dev/null and b/static/images/kubernetes-512x512.png differ
zuTuZukPK%wxkz5X=k_2avp>Es(j7wZjJxwUKj)hH^SW}fB;0j`acErK6(Rc+zhvQ&`)VSf#-qJ;*odyT~Qw|_ZsU?q(2BF%?^jnSWfcOp>wlFBV2C~lD z3z?N#@OvtNcEF>CNWnw{7U(?>sL|5@kONMrekRColayqJ@*RISx-bf#QLEOa_nKcr zQsdhB%US9|5ZEG*&;l&@Ux;!rjbX<-M~8eKb;MaPtB=U6hJex!i;YpC_<#A@wm%cs z6V>XP$6tW}m(akm6lp?2)K{X1hh}oqTU=M8`sQUMnWJh-!02pWfBno zA<=`?{>WB8d(zBJ|3=S9^ z_*)L>^MpGG1G4##LxN>eg2kWoF+OpH{pB1f_ESLk_mHzzZ7!eq7LZC~bH^*8kh9h} z>oBBV9+WTBJJQ3G{Ke%!o!jLptb`!vD(=U@(6iH%UgcvK ziC-aQ7Qp^P&i=XgRFx6Xh|?$Jsk@b&Lszmkf;Ti}~$dQeFwf3=Kgh4an7S zx_nNd-pdb96oLOeZpOf41_+u?@;$Y>gu=g~*Y$%=Ow@t1%saWRw(~PUxo+rJ8C!z` z8Jq)IC@W=yAodojFGKN%2=^&eNy>gJ#|}&BG<~0koaYS53_|)WMh$oy#Uqw88G>d< zs+m@yzjo-(VcSij8-pH}b;43>@>PC8p(BzvC96_aie(&{k(l3E zS>Pn83Y8wBq_v4uP(XQEEYD$FCliGlPc45yuv`$^0rV;cSRQFsv1 zS1q^vlGugl#pU_lQ2;5A$ziXJ{Mn@Rfd80hRzIviyq@R=g{RtyC-CJyE+l}nom?l+ z5#{*IYUYs94cqpON%$en_n(I$gAE$NSTPY==XInf1s;Ta=4m|p&y~!#-12}ZWTIa! zfpx=v0-TI;XAZg^dXjRqSW_%}GK}~PC zz74=M4bKI_7us^xZ=DvZ+k3_1INK_}4g~1D{d<&~`U~$?Mh<81n)PrOLFZgqYK0x; zvam2E`#UdjGE?PMQ-1L?QSCkc1z`$B+JkSth`dVlXWJv!LM%t$zEb$)$ewk)_Rg{ zkpve8<$@cmA&hu9HT2F-$FD$^o(oH`PsYXbX#{7P!lYc2E@=JqOk{)JuYcNRfjJrL zh)!aeX`n#Ao0Y6|&G!}LW_&LxOcq7e2CK4HRK-=m(-a(%TS8FYjo(It9i6pLjVm0zM(MY z;Z`=Q+=+63S?`^+Uxxu;KI%=wLj}!vp>|ztI%WM&!}9-XH_-*iL=ElQ92($!Cc1HHZvQa^-(UQb*$QhHj8j{`0EzHF2I20hqdY__VAJN_t*X}zJ zN_uMXh1Q8c)afLB)z*ZZ0q8rc#@ifvoa2*VS3}IddK{}f?=o7 zgRF8Y${>Dho=dNxdkI)~ybeRsxau?KL#_O9~>mLuy#2b~kpBZuxgkjDlC*_Nn- z&nI9Uh}EJWBco7~wlv{7I9SeU4r%Oq6M3%zARj=!>+82&_&MN|G{!6=!6i zn!pAF@W8wJuRjMWa-JGtJB5~cV@?p4(1-SW4ROpb zJpEw!he%IC>HzgD4IuK}ZaKwSO$&Vzw43W^+b7J+wVD9!uU5MOa|DF%x+sM_JxMwLBksc5Qo`Hk#zS+**tN&#a??N<)MO2qV zoYZI;!iJ8d+Rof^2M__@7wlo`Ow?(~+#4};Z*L@5Lk{T&ceAm9M58R|{#Inu$GVC&7ptxp(hjjfOaAL_ zdT^1P=PZ|q-ubrL80#*?K5vK%Ro6josG{T%HE@x)_R|Y*jnU+E44gk0 zep!6Miq73d5m}rCk@TzCgRj)rh_W2@J1e00IEB2VCI91v(EECujw8VMX|Lo!UV08` zNyv0g{NxhO5W$I%n$OR0MYYG#-G|D?EEoZj_1axfW8KlT0}q3I`~&|8I*6Co#!QGg(kQTCKv*MbAX>b1Cy?)llls^JC&s$4+?w zhJdztr<)oPeO!G_n|L4s#++N!I-(uSBr(a^P}!SAtoRN-9#Wn1HJS}R`2pO@^Cf4E z_M!zX1K(kzYX%4Y{~XX2e;wKL3+?0_A@VS0gLzQ=S17L_c#@9axKJ($>qC|7V+urI zUZ}Rqyjhlm|96m2exN5QIl>tw0sR*Ts4E5F=Odu_^L*fK9`FAz=k$L4j_BSb%!iZ3 zL2Aq&wrx>i0C{jD@}jeMnlIalP}FQVj)YqZh81rk6puYe<M7275R^aguu_8P zr9a94<0ko?f{&jb z42dl3gh4QIr)@vW*|3KNXh?wTsE-nf%VM=Z{(x7I_o>9QQQ?{q08Bst*jG9yvVuRc z!HUDtWr?OI9v@*pja*-ivXl``Fl~2LBN!TMkimfr&jCzY%6L&(6vaIBPVyVF2*yE^ z;wR1iKm@$yA7iSRh68=+BZ8pkwKNd8pA)IAk8c%M#_={%eOy1;k7NIXO8WjG;HhyA zmHp&#T<6Fn{%p6pF5aXP8X{miQhD7&2pquC!TeD2#J%5Ps>?M8BR2(%C^<(__+#`CzpjanJiuw6huQz*8&;T5~y zK;B~jV8*flI-Q0yq+eaB-e!xXk76#t3)d@^D@9wIGt1+nBqlCXf2A8i5dsRJu9l?6 z8}5eDMuP+Y4F^mZ4;xJr*iwIVW>gRgz|os6EB3KYCdz*{B*~14OCwk`Q3v=qBLlpS z&ckDY*i3Or^g!nS6goQUQWIMuA7)u8K8NA&9e6bE;E;q9Fow2xw;Hsm@ztYjD8RIN*f;na89r6junYF!uS=LClk7;U0m3PlKt$)4QSe zo=I_Q5czdr^wvlSwAK4(@4;EEO}&)uq7CePpQr`pA$)Es)zFf@#bi>CZ-T4eYJCJp zT?T+GpSdl~B|5YDdWao-wU={jk~h*k1P@XF03JiHBlmnRexpsuCX+(XB^Ej83x`38 zpx`D7*4u;vj)igvo{QY{iIEI2g-ymz2IqkCGmw-%a3n1Q;uMY(AORXsdn5<^qReHV zlc}AT@5u37Z4L>wvujAH)VDl(&wC62I<#UdZDWs?Qc1A`d_@T_CVwF}6J7l1W{jN; z1Qa*`acU_4r&b`oL056x3}MxyA3~#XPk;kld?_mgxmQIth9)7(dyJfiaz-_RUok;& zf%P(s&#GTRF`kT_v55T4!)LsVF2c^rioP%M`d5IW{{ZRl;CWe6X|FYL5>c6DS?Y2Y z{Nyjy*X2rFXV?7KUe?Jc*j#T$nALd8TfB?HIcc5|$OBSkW)!z}DN>kW;X z1{oa4*c@mI!40H#Q(FR#2=W^uV7c`BDB!ROig5}J6GPz@XV_Uy|kZt|DM!V=R+*`2Z2`PbftcZ*=V05zeJTp z|D02m9Z(eJT%2z2P4rg5@g45tISL(rx1$4spy(!E9k@zv*KSy4xpbfPB)0I1O%j`o--18tZDMfU#(Gth2j^(;1?yjRLwTcp?&usP(i^ zl5t3hmc>i@WHi&@z<KcwQM|cByj{zVXQmr7CS-vf&5Ip=k)eI>)%K8N#`I=af1SPC_CZdI&kkImr?9m_q)vx10e8@xl+T1=SldB5X2J`0HYJ=5VmuX 
z3pWPa)X)GgS^CgsgA5LMD+k<3I^_#;41#+|oD<)0OuD1Ha}jk%j*of^mPG)%{@JDt zmUx(Y>-`?wOF0HySO@hG68VvHqfBB!RWwD25KDnkj0-a8UpSx@TZPNk0=q6`B*Qx-rKL7wg07*naR0)2z zU##$GMxAuFkU#>6Mq<9mg?F)L7-VoDvv5Euf*MJfCvRrADV!v^>+u3-705kBd*tb8 z20{{ZLMY2Y!Up$SSsXLo(lwqunnm7Y05}2rV&d1<`C&d}ClufljKF$8!k!X#a8g7KpU2mXNrk0WbhTw0KhGPQ`b!WC!0`lg3%c)8Z$eZT}+~{J*ya^QAPOe5Jzn1%HJC40={tJM+ z%n$9ZjHl>^yVKY-`?}f#zhPKk&nG7l9Bt;4;segDrQsZ8>UGAR!GVm-0oj2*;2Z$? z3Rw?w`Q!^Z9#&O>FugJ;>{lBMf)qqWI0P3`_;3xOeiIp}w44E+%@xl#0O+2-vhX|I z0W$-UoTeD+lWx1~aLgxqyXunn;@ZIJ_j_P1f%ol4WDwVD#4>42PoDkk8LfMg9PpuF zPClXp!B2q+$>eM8j=a7EH7!|7PE?Oa4JaJ%dSkL8_E7MMZV9Mjvepyr#ooy_QLU@( zeh8mGg4?N#HWACTSEXKW>^;T-UBNhl=K}l{bwytm1QTjIP9eIDvaj*!C!p{L33Z{o z2ZTW5$zvnt5^ZmMxK2W4*7sreZVbVBh%Vq`LMvJw^06#rP{4lwBd=73v@R4+*@3Q$v)UOqhqp#GO$zqY|K97ur8(H8r( z)UT0^WK&MQ=ymg-4*~!X0wCAOKU^kZSD*6S%2D6M2rp8T`F&-i_*e@Su;A4Q1#mIR zc+Kz}P$Q8H8S`nj{FY;42GNc)_w%W6tj!#-aeZtcrsr4e3%I%pX4x?aPe%+TvTU(o zz{AP&4FJ05FBHocp#3S-0V&P^buM+yXBO?k#5D+Z7eJuJCzPM1QiBW*WC{+rpLLN- zj75SsirKGzDz_39*`M#Mg(sr=Sg6g|#sk{Nin9erUN}DBS-FIQD}-LY&1LFBgYN$7 z0Q73(r{~)^j=%ueIa>vw8_3pNP2|``kQV^3w03&dOPotF5Vrv+#=>U5piYxZ;rS_S z-YYgEhPY4Qhg5{<<=-c_id=!5fK~Tri@S;YQV?dPrkID#vH*NxMBgAd0G`Ga(hUtM zCyX6~0|p2Fg#+#~o+;-;VY0=1E$7LN$bZBJx}pS!$DL6Tb)8=meX#huGxuQ#7{ByD zh^;MAqm}vIR4@*s18l#@Nl@74Qcn#DX?B&h^E%IT3M0~aCmi=73Tki*dBm0FTRDt` zs1&t5u9bQswQWKbYbiI5ufbiF#!#Jlg@^AI62%}Rq>ytp%o(%bcRL#N4eDxK2$?u{BLHm0^9;cR&3|#c{r={7n9kh81@)TO_7A z_Rw9=Hvq&Try?d&Q3AlEI$(6T{0i_>D!MgqPje)40X7O+A~r%Hh)&)?SP;b$h@M`k z;CSfsVKmFrIp8*snHCBY`}h;D+l$Z~qM8#+Z}T~prtx1PZQFGmvm)m9YNUDB<9FrD zsAGLNwVC&SeYnZ~7tzk;mvcHQ5#{*a_b6BYaORW8xNMU_|0M^m!LyI&1WNzwoJ+-? zTiqysgMnqP?7?Lw>#fTbB_IGcJNDgM(N$mw=(P9ixPrL;1JRI^Fw$r6HSz+eP<5>u zi*Cr(vzGst%`-l3a3Di+0A)9{e(iC^lFM0gujcW8U%W~kk$rkvSRebhM15vVY4i|R z&OUfH$$lPo-en=Jm3&bAN7*7J3<63p5CVSqro9-){Ed3%Jszb`GK~ z2(1gwWfud9R)if|mV}%r@rRT1$R(Kn28xg9BeZ8AP1M%gXVA1K`4gfEy+WABb@HF# z*UU7TsMEO;2P_B$yLUdZlf-jnjuGocdHpJ5Yj7Zgb0F4*o3;LlVA~3qPCtp*#$`aO z4Jy`>a{&1FwLV~^r1NvtN)5q_bF@$kRrheVc)kIk6Cf40Q@NOY$FYRQ6V0l*W><6v zKvB1Z7Icn4x$o8fP63aT^MSnSO=TETduHqz9QY47AmP)2qNW@WispGB`zBgORMuW- zhlL@ZZbC7rDOzrfECB9#6LgOSS2kE=1O6usq{{K5ZK~RsJC|5QhNA!7MHep5LMtKi z8sJuD%#$G<5ik^rJUm0Fi=9MA!1Xtr{kSYny#TF1Qoq3k=P5a02v|DV5%oLKvasJq z$G}7Bv{P3qgeIuCx+3gOL**GpaKI0#Wi5wM*Hymk$+Xh=7J~!c&4DD02o+tp3+csy z@$!{iL93upjuoxg<9~eO)m{Gyr8y&Nrr+~`!?Wg;LGcuZ*Ff118JhIsG2KH^UlER z2?v$e(Mak?dC}^rcawJ}KtQbIalx_Kozl-5qZR2pEvlElvAga=Tq7tzN z^W4DrZ&e^3^JNe4Q9eShj%52TIUMujUa^Lgcp?vqhTO@79DO!jo=7XSDpT9UCjtsU zWK!an>JXyjL|ZSpGfflXk+p%>Tf6p`<_!hG1!_c%<+Kvo-<6ACYfjLfRra zQXX)t1pJ$Z0`Qa{8{_Leg#)6GIKZC&H_7GES!u? 
zRELMN0u&t;^`uX`o9ec%Ki!5K-)?ZggE*jv#N38?6P|xWkgj0Etq2o8QE!wroqx)i z*b#Q!o8wwWV7{wSUO6HQ68Vj`4&-FVi9b6#$nSA|0c2$lM8A$hEM)|NK~I|lW=1~A zR#1;MN0aEsvG6CKyB22yA~l7IxRUZBaTx!359x-fI+&lcrZ!tNGZjK`7s#iV_zUv* zy9l&9iNpFxd1lPtsf;rAGBgJeSXGOWkV-#s&gGc4uS243Dz~ffUzYr-T~~)_se3mL0A3BNV z$kiT%)@^;Nd8RpLZv)^DHC7-h~LyCp)OJRqlH4v#efi?mtAt@ zX}LkG>QYe%!t)D%nN3Sp=?gpy3=9VS9~?+@L>7^^3!`jO)?)_J0ZF|8(6yoqxP3 ze4QSe6Nn5ECLC9TStsD}3ytG{;edW`lJUgbp-3ho;u?aY_*TTTW8-w+t6h^Zf7PBk zH6aq?sCtX6`RQBS-b7Oi@goob6V(XGhJsHJ@Z^JUPsI0llY;p#4DqrP4wzXKwD~g> zKXn-ZDzJ!uWa;_F$-9sobx3vchk+AS92kv3M#ls#KYHup1BGlHJ-&c7H z)&HD72Swi@noy}%>!9sKg-(R*H-83BbfMu|GUP?r=JGtwc&-5ePJNAC(_jF&j=f$0 z07Fnzci5+>fW%$6bGNgz22q303Mr;ZfMD#jSC^$RA$cB;jTRXkc=8_<>^#6>X~rE@UByZxBn6{zy;xE({Wn&MH5 z-q6PNIhem#;Ye@F0uum2LD0IRR0Uf-rrix~Pw(b=%c{3acO$vov zFjYbb?wyEtoh6X`%x2jq_Pc<0@_@JW&Sf&Y#Js!0}Oh`(QD^EnPp-IwK7f5&ym zGqu%G5{{`9iibSp!3J8-H2~!1A`4(>tgOpWQOz*f1cudP!Andb{Eq!~`T>^lBxD2t zG+4C2ZD3CX(5Tw0@nUcwqjLblg}H_@nu>5o<8yHK;lkEjgO#H=JWs##S#`HshZbR+ zysk*3)ASyShhPk-C4}I7X@#iExo&l__u_6Sc{K&-JrpTI(k%GVoZ85j*f$IS%%EpW zcDLeQ9R4uO!=febMF5RjND>QS?2+jNiqwKzQ0>}hm++T|UXzu04Z4$$%1SOt4H@@R&y0H_?SaHM8? z=aKLz@frSy%}^Utd_5B@DSV!{`aX8GbG+qqOnglU?ynWQjpf^jHz#uk(@ic9sl;N| zS%ULZd%}8WBTezYi%c5uxI*o>X$F9lZ!&h$g#(z|v~fhDEN>l?9g$E8QYi>{ z7?wx9G}Wo7CGJyIw0|N(aG}^29u~6PjQOT|`Fu^4*+jpcx%HdlA&@H5tnUaIKoANh zQs#?PE*N_T2QnrHCc!)g?K!!*dW6@(V$pu@iscw5v-0$@)O2_GIc$f(^_BSkLZTF$yF1EIH#&Lc0d=eCn*O7CT$2`0cE`codT4OIA zIH2cXU_UkHYx^vCk>p`AJ4Xu;T1i{M9qEI*r96UQR^NAfiiWuMbQHIJfD%alYY~Q4V*0 zV11DvibxFn$mj66Wc;Ni2Mh^VPccR;TA42ucjqRK&4BP@9>Ey0Xyc6UTCDqnd_5b~ z`F(B+j!h)`uwoF^F8PA2gXDDi^Y1p#Mbhw)PiEz64%^#(pWbW`kb~t;&7d&uqZyBkTl&*zJ800Z!b$=t&g-%zuTN#JS+j~^K&0M|sH1izy1D1UJSWCS!S8Z?Z9=R( zN`;lB6S$8-FjrUdz>fS=ld1wQ2t&I36^1IzFWRt$0l=rJ`86>BA3CF+1g4P0JfX!z z!@T!urk=0=iI2wljLCtGoB|okX8}~7Duqa_EsmvJ9pFNT>)De1A-1;4KgXvha@K7% zw+DuuP=jlU77EZ8qsGs3SmJpG0JoqZ`_pt}oMz|4M=XGxA9-WA9Z=_Wf8MO)0As7Q z^4(42An?qSH7N`Xl~^Kkcn)KYmKYp(svKzUsxqpUgZb+aerdLELlFQ0eT5}J%*XKu zT4XRRWul}u;YhbapnLP4YU7^3m$SsdZS}=pnl|4aeJ_{2yY~G0VyI_hp+N=*GBO8-(HWL+ ztcOg8I44H<*gCDe;*oo3G;dWL9KI@Wd}#j_B;>BAQ%j{-NOKPC{dlt#Gfn!B1;y=loVJ`u=<8|Bwc3D|=hOt%*Y1stA5pdjAia*1LR>cGDSh4AHsQ&2f*V1;>Mkq|6om*lGI zXGD1i`XvaA6LW23gD;8kAFeD%8eH;ysQmiKHub#g{nTUEbq^l9KK1ji>myv>>-V~8 zCl}(hXO|zc52V5V+(9 zNa5gvHoA>xRcDj2&FCD+0j(43oq4Kg1R0gCjEZk2UnS~#vcmJ0FCn(h&AH3;3C725 z8An?&JnD-ipwyW>N)q=x1HgPl5VEJ@W9q=32S+QlI4vc*Ui?m9hW^^-tJCBk@N|w9 zF&e1(C~ku>@n}z$@sfEsAZm)i>;X?Y?i@<oCppz-2!@Zx-VQ8pIsXq2D2{yC@gJ50;1D zP+M?L3Q=AjOeqJ*r!AhfKys6wD(U>uebcJ^zPKeZ8is&hK0kHH z_7|$%9?x=C>4VH4e>n@t`M78jssTW;mPKea$B1LQ=CQ4Q{_*YO*FAphy1uacd$S?y zcOvfFe9S)hz1U)oS#mki^AW*41t5^!lqGj!ZxNkvT@@X-J|rkj~j)Y4M8Q7>zVI@MJi^?kOkMwev{5jr9&#fH#pXgYNoAMmQ%UDd_y@ z%1$W$)E)D9d2EXylod!hR6c_H!*j@UL}wi$R}zI2uB9?4+!|9@9YXZBI%{v#1LOh- zNtz`~g5B>bUkF|}M`{s|sUI-*3=U)h4k$m+43Qh2K_C0i9Ai{!W43Z)C?JU+XRWFH znT7#lM1Mq6dmjy;$@CD-zK;Nh8c=lFC0%EvLwMhWCa645`MR2Ci>8pPF3P!v0niI> zFB@rDq4CoKH@Lycnj*-;aazi3#$4Sq{J<5mBU{Qrv=FSSjnJL9kiui{@!kn9A{qCM z75(GihoU_^c&dVRFy7+WA~Eijw)8GU3=9m!?mr9O>56-p>!yh!FyFuspw^=>Q5?Al zO8R~vI>UZnWL8WB5CZpqia6tlL_1e0b&*hvWFdqQ%)@8(e%swhAJ^Aq+*iFwqn;Ge z-BU3a_zss87Rv?#n8{q6VTFpvi$R{rfn?X+5gkQ!4T(nT7)@?%O7kBnwR;%>su&+B z&%0;#=ATf8C|1}mJ~03*6ED!!0BPv6SBI80+CNWI$FGdPfeIe_^&Igi?3 z?gkRBdG_|oAfoU7*mv3nndXPK>qBDfSQmY%huh#QkS7B%?6lpJOaPu^0B{QgNGWoM zlMKnOas&#IQ0^7k|4GKsHF|N3Q?@&{0&=7^~$Z&cF|u(KUkuk8(g0 z3&l+&#{>f4$_4vHcvY$3rEH58c-V4w&$7<6$o&D>8pSe%(iD933%ub7<>5MMNy^-f z4SW7!2S-li0*Q)@lb;afnrNwy%EP2Kq@HOJ0^ZQKOw~qR(TjlU4~sc6!=Iwt%mFc$ zJ;M|wDpLtQc9Wh&P(RM7TV0V(23MdU$PMx=Bi+P7F%?nsbn2%o{gXE$8+^TqZKGn4 
z=NH_As>s$A*jA!1eG<5UKg!@CDCA{ZG9cthh5(LWmC9k#tn8ELUQ^F5|ae zz$3WpR>V2vN2bqy|M6zz2;_k1kd#|8;^T&x0+EQQ9x#4b*Wo!tUP1n%3{H6vNeux| zHj?kB4RP^Rty6_I9_-I?5=a3`OTtjIL0D1H0oNIoUC`c%!Ij4a-7sXzA zCdlqKV|iEijMf?)_^&u1B9Z*kAM^H)qKqrec49bngqYK!f@+1~-n33=(zrTHy@HAW zgtfi3Hl8@iFB=BanLNh;5TzCIB07TPPuH;S%@kkp#CbOqE+ zRqREwqjhpCmtVl-^DO)x>3o}vpEo#=!U4$>hcW^o09mjAhdH*>9*hf%e+7YKtJ7Qw zpnsC12dfYtbpNZPspBrqdxi#kgq`uKjDw9B;7CZxI|vZw7Wy*kCmbDYU-DXLcg% zLupUwz*B02+3oS}h2S-K1jQkm#btza`uXDxGot$tZGSDay#oS6*7iQ-1ypQXL3OYM z0zjO^`Un97Lbs-P>z_;a?MYWI9%Ypn&k93AKbR_Tf9sVB`d!(FQa&S>&yTN$YMnh# z{FE>st(}CN0vNyQLA3x^^GP9w$Y(^-5sqt+tg#C^c!PfBB#%|cma6P?gJ{5mif664 zi2e$TyT{1@El0(EQ4!ZQ2Xa6+1Y|_MczK_kl_2=TU>9=F`kKIaJ!`rDit_~s{u7eN zsW5ozB)4Kki_nj}Xq#8Vqog$0KfBNKq8`@YpF|>}no-7uc5CaeIpq>#$KXKb;egw) zn^4Sg8NsMj?|kq@_U7}9XC5*2!M<;MXOrJ>wyPo-|623z25hm z)Ce_?N$sevfVGsw34jDa^%|58vodso5*neFIhnpeuJaOj{}PV;`Mf~}2Qn20%v!=7 z1?rV+^lH#*r78jdSg_P74$CsD2wYCg8|fl)djR^ZlkG%uIhR0{+?zfBN*>WbORlmW zIhMV%hkDmC)>h9MeI3sqghI<-%0p2eNcBcZ4*`p%k_!toHesHx+NNwvKb%jYvnUtH zp0elQ|A#>#8Iu>;ak+1b9AiuaeJp2D9oRa-f%SmNrdVrgtSti zKqS5C94HrHz1xVV-j2^jYpbLA{t_ZQYVDAYg;#Cn*|AC3BFLL$KD*Z|}xja(jk;4$2+ zyd;49DEWgak-j7PVA_w7k)4qwa8J}NH80q+T=YcUACliAYx-t44Z%&uNR)6ZTzYar@8Uv+X)SWqOyryn=KV@7f%y#@@eMfu)7TJaq;#a|W+()I zZM{x&&Kq`l?$M2V*cIc&bgF_-!!f2r@g?5sf@HlgfZ%xyPJ5jWU zn9@mnsv)*WZ9-=u|AbpFb5h(C8Uh3#)_dS0L$&gmDQ~3DPWl@FbkE4I5FGENGU4Eu zb4!wL5%mq&a4GRLk|E6v+AHP*S~cDIq5K8c>7Pseh_PpIAoFnGri^wg$n9A5GQDeh zD3a53-xr=V53n$%Y0t-M4B7}0SelKS>C_FC}s<|y- zIIjO3DFFAwaODTH)w?)O#B#`rvbQ2u(DX-$4H+yroAHNLR+f`PxqPJ<3UQzt(Y2X( zeI3se^{Khwm8k1T^4BzIi1-U^h*x{ahxpttv{>^Hmx4Iq;y6D1$S-*IK@`+f^dI?p zTJrp4Kcb$Hdyu^Iir*@c4IvH{u0M3ZWBTLxwXa9@Kv!TSViF&T;%pc}BH%7s63~pM z;9BrJ@O=!#ef7b%2#_!Q+`Fmsz*qjrHiKwh`FHXLRwVk}d8MuMBN&9H+0IEA26BAr z6UImighw^iAlUbT3E}YRYl`nr7Q^ zRT@P)%{!jHt1vP*g*qdhA^PdN$a0s@92^`7@*`T5Z40@4BB!9fyCL+tU+ zXrvp6meEuVN?jWDG0voRviqslRkHUc+H~-X@;PAmEWRn^EYhygANkHZoEM>RR+80d z6&{nR;(ZPkuq0v}Vw_nOr9LAiO}9if?R!g75lvp=)Um99nljWVOJ!Jk?VU#fPM#^J z!t)ozjd0$Ub1?3w%8n@bl}Z9&`PkG$ui2CCdC1*hF-$BHFT&o`%9>MPysFiui46hb z9jeWtEe07J$jls&9gx%(A%5m9;#D?k|MubeWi}vXV9YMVNJpJy1n@wK`ce9A(_iF@ zg=Lfl9gAiHNZ~}qPl`qP)PcYhW8X@@W5H`K4zi+yNj2O{EvYGeGjGUbA(G}bA7cx&Za&7uYd5Fe?j~szZeRCF+&YqDkLOb{CTHG<&kcs`+@E` zbh0^MhDeGeEM)3`vuX#`$juIW568jt{Up?3+cD;HIRT;X)npG&S zL7lUeHXVQ>cYVOp>;n+mJYrbaT%pJj6;$m_e>OyILZml5+$eRoM}xrifX@h?VBEuu zc^7vfsci&CXB8k+Dx;sTqPh)e#)aioBhl|^pB);T48_03?d;c;-SGT><}8FLcoGd^ z*s|clL@3S~i8CD0kN$`fQ0ZfkCH=lVznltW-=+MOOfM0=(s9NsF$WRyH~i!uiJzeV z@raqOiN_-rDH8GAWjJyhh+yDxnRw14H$*w~@?F$z3B&e#M3kR`W3^_TQvpyU=WP5m zB=~Lc=i=yQL0K_AKM#mi+7Mj|P9TZr>EO}0WEFfpa5Q?O3I#O2lRX=~iSfI~a!%lL z0pl%yAx@Q)uf&U@0>1yY+uh>lHV9-iq{Xv(U>Lc?Q~?6MHeS_xngQ<$V&uc0RU@9_ z@AyA93(Xe0s0oq(8lS!bP-c_?yhc~?`(MB;kQOQM6q{sxwZQ>T<$$Rz^i<5p=k6!Z zlV2d0&!s{$OfZz?`y=NqmXD+#%G*N{5IJDruBi1QT~~Q?x-+s0k|$h8D{jyVz#{10 z{HRU<`O^4iiwZ~l3d{F7|MI99O^(mib%+k+F3xIVBDdjP zw!B+@GS3z8yrPJji+mthdN)04oK(>tHWXs+^zLq@Q|o=8Rx;=M!|x6SQ*^=d;|GPH zyC@=xK%p3Eex0+j)BEZUi5T_WCnJmUJWMsBdP;Z}vyIAs!ZZpYS)I$zG=GGw*9(Gk zm?8Q$?Ne(`02aEk&lhQH#rOL~`9uhKBxyYU*+UdA>mctfDUHN_>Wgy_8a0tpkL}K= zipL2Q2V;+41E{e<(C1qve<`aY`F)+eHIks<)@l?|MuFfQ^OAk6><|82!JWZ4MqD3tkxD&dm!`-2jGyi%%^@_~5rz;P-C-1&XeMFM6mB^=0 z)+D8zs09J-3&afD5us{w7L=iEA{ueOEf?GfnO5~6BYVApsLag z`rd6d3gH4^*xx+&`gdl8t!@Z#)iNRH7Ila`4Fgn7OQPrjd(-f;`+Z?B!SnQr*4@uU zA59@9R7=PVl*%7%22{*q9~;i(Px1!+2?m_O=(-L=+YH~{0tS-;mj1JTaJi~Z~qapC9L#&PIanfQQ4 zVhm)`ib`7;t;kQ9Y6;u{#fHFw5{S?op#~t|2RY}-`CQcjMuLahld2;b_(wA{L7sP$ zOPVeASTz%U_S3KOmaTl$uP^3`RqVO+oo%Tij^cSR$#ojyp*GbOtk8T-+a#VdgA5L2 
zN)9wca$g7Ju25560fbgqaLjwoRYVcXqUt#ILr#C+G=ds|Ge6>==g~kI#Qm{NC(_l4 z^!Ry~=z5H1P$B$$73#s+RD+LY-LC8;D%CsZb_I~N&9m<{cS6+68R2tZ1mL=tWFx#9 zWN;vJazG!kVF1!<3YE~xN|{cWLdEAAOWLpiM}fl6%^;URk~z7R@p|p!_Q;rU_xb); z1m*(d(;v{zpd#58nF?UOZIz>yIA2`kV~>zK@|YD{-B*T_nSh!iQ$jHKv;_eZD|z8Zb>JRE z?e5gE`oSPGuKT7D3E{|!e<*7$WJ931UZ^LYnW!-g|0J&Sho+v((0geOV=N)L6-V)oD8yBfVD{~znfSg$?f{HK7z4sIPxC%H z&goF)*9fe@SS@lZPigZP*ixR08Zz5$9cW&l-ayB@@c8dZKCXU`oC5t$L?qQETIRR6 zMdCQ5=cpCa()lsb{>9OyRBg=L!rAmvetEWVw17hP`5zO*@cBHz3E}q>HaXhJdHCI? zn2a8xJ;j0N;C8!;&|)W(5GSqWe#2I(1y#&MuoEMIKF~6qxNCk)fFH%jAxacZdO_n(ckPv5KEgsZR6R_e;&*-$>!=>K%S1=S zvy*5f3L;vaLr%Wao-{qXdCG^~J82b~$KT7y$AMVLP3T~I8l%Mm$D&RGC|hJeQN@?{ z0G4GB>jao^M&U3w)Hi+x?qjQualR~rlITaR`FVEcm$g}C$Wbd$RjR|Y_>Hxy!1Jnw zi*VnI;4d@N<7TsL@EGwNF_J#k#-Gj`2kk@!O`;7wm)Z`7pn13|*}r5d4)g1BSwKmcato+y7LP*r4M)E-kDWN*hdq z7zc8)9Ik~H9D~`=j?Y`PUZffbamt%XEhF1&@+_HSIG01@nK+Y*`LT%Jsln78s31(x z+8^QW@So??V{H`OsI$n8btB0%o-tr*4Y^;-g5V3p0}Oh6?+r3IkZC!<$?QR~9UX<_ zWFXqE6`G`6BKq{z)vMyMz6bx1d%ATbqS>~WTkOk&*-agUhm> z6!(e=CCd5yh52=ofZDIo>%;>5A@>mxQuaYWzb})KsE?3(Y#j^e;+8nV+5VLc%6Nid z0nkpUXCVCcBlJ&-|05S=!UG-h+aKi-p+Mxk8ZF*R)A=BnMtss7 z8@qqc0W;KWeQm%Mz=cI~y>?L#wHB~NSxE=WPjZ8d0Y-;Vba??(aS{uVM7Fi&puS04 zi3-*Z(I%$u^ve9KZE&5NUe9`Wf}1Eb?79*MSd=f(DwqAXKp2Sm2!z2*l62RpaB^qoL^({Jz~@5X z|Kjz_`Dj1fKkxaQ)xfXDa5Kdo7UUfcHBqieZMFy!IW&_;BrqF~&!MT8dEQ={XG&|_ zb3l3ppqJf%Ff4`VDGVOXSiBR4!n}O#S^19Vb;4+o!GUMX0kxxSzEqCoAP&N;PnN8R9UI0-DmpP*Z~p4rE3S=)yw~!Zaah zfUe3vF)TW$i!}v~&y2!Mn7 zYN&HF^25@u8sYJ3Hoh?F7tZNWq(#Uz1s=QF#qxx*hpmt`-zLj)B!1}Icu-(J2-+8N zjx|G&_Wi)Wo(YH${}$HiE?+ZVq4gJ|HdX;q zQWQa6enJ(1_S%ypxmAx4i+8y2k#okuKXJhAxfMy`1oE~q)#ADx1sa5OGhauD`|FkJ zSm9k8{LS{O_E1Fz##=mn;i@I`Iq_W9Zj)Bk^}x7t!>(lhrT=jz5yaEsJu3I%Rm8mh@*@6 z&xLc~Vz&Xs#=_KF4rr9yn1KIlFUW z;%}f76?rkSFQ=+}e`GeP;o>zqRy1UkKFZ7N%-Qw8sK`6Zr7b(hWXPFUhR%;^RUs9Um#b zi4Qr(T^#tdM?5-5*nBanhhC#(@%0;H>U%vh9{aeg4m@ zEW*MDV6UOrM_Y;nvg>*Mat=|;5ez+In%F`w;hIT!a`+Bp?Ahy4W*A2X2mXlzjO)bw zW^W^NX%hLNz_kJ<)6>VUtW1V}nUUnhNnx zeRwM8@4J1ILFjG-CXn%>3|pd?58in$4hz7?<5q?ybVL0_XW@&cg;+2W?a}flEI6EA zh7AV>85}S;K(H!ibz9sQbMg>JjN>E83$acj#b*LPV+Nu0^qly)%q{RrxfJMc>3vr8 zeaf322ne=5?<0b_D(nEo9qr`@NY-tS&!R$5jav=v9}~OpOJXAn*0uwtl}Kzx{lBZZ z$Zsf7+u5~h_;yBYcvRM_rdNnc+$cQ6rzAq3$|x(eUo3-r_Zj|QNV$hx5D|t1sP(5I zSWX8{fCbADfSMP@JULT5U(SJtuv%?^8Ec=zNcFcd@m)UVQVG1IYvV?bV?&c1PtFDzyu0^ytmF?>@MIaO* z*KXfjFlhohgSy+50T7sqNpU9(1102QYlH)04p4884|F4N+bp$IUds}r{J7C zStk39R(805j7R)Wv|bbNXTl8Y?>_Gxn>0@W8az&Z#n5J3Lx)AahxMkj=*or}SrO6y zad`iI@931#N`nJWl>=^^PeN12tDGdYcJ|=cFgH@;kEbq+#`eHx!;&qK?P^UcqeZyM zaxkY})8@}e%$=<8K05B6Qe&UJ0iYy201$-bVEGF}QpFe;0NP_g*k0M`yOw*Y1jY=+ z<_he(@SG^~AlPCHx~e5p(sqp(g9DkI1Emlf>zX3IWC6+1(@|MsA*POQoJ?$r=xwQu>8in#?H|X|GBw9M#e0>pwL)=OAewP3t=|PC&FhczgVZX3%hRso zQDamX0EnV)%Z6~QkH|B})EI``nI+4bi@`(IEvSe2B2kBJFXRu{3@c`b{4th%nD;s= z-zd>A41`%L({y}12Y&~Mw=j&V71c$Ij1t2{YotkKi2eiqeJDi*H4oBSBvGw|^HvDD z{2VO6|KHIgV*KA-V4QoZ9B?n>fcQPo6estOReru?Ip5nv4I^^@6;C{fmdzH`NR zgc5x0>_qq;dlu&ic@T}646{QtuHS;+=auD}yKHGn!*TAvp~D*BpTFTdUN2;6`GbaB zy}{f8Wrfu1T3NJ1FM4?d10cK0E-1}SN&fcztn8cifPqEOn@My;;=XDW>fRa~g9DkI z1B#92vb+|#!WITp8?~xc_)pUu%vbwLF7U~vF{dTJig3ttrLgn(JgZ&u^j>)GeYUie z{@{riihpNT{FC2t6k-AQ!olmlHcZr{_|#%ss#;y0t;MF5K(1QE92sPAz~Dd{4wTX& z3voiuVgrD^gFOohv$=>NM5^YW;bRS$=z7D8Ugcvsk{cj-R9UV4{WL);9J2@uazSoG z$NyK5^U*#9jmssbivMBnEC8jdwm!c1nVH-7a_R0+T9A~+AVgHe08tR7!~hXgEIF~Q%X8-*UZ`9Z=E{>$a~*=qTu7gS<37=XHKrWW3B(H+IqAguD;rO zkLLQ;7=52yKl8G{J7ST|jKQUTEr*xP_}au!b!+m`JMFUE_h?iFZ2s!0pBhh%xCYnN zd1bEfA+Q!oF7!nh^I7fL6)Q(Ph-d0{>+zrn7-Iex&X-sDH2iH_@}c;F%;DO(AV?01*MqOi!oMWaoAVp8M6{eP~o-KTrJ`EPWpsE 
z*A&k%Rx>#w((!t_k65SR0@hac?x=a-cmKx#0zYmYm%g6MU6DL806_3+#@r?8+;{FH zn_6YH;J`<6y{j2-Rffmx()1(WLeJx5ox)?w8l;)Gtqmlx%g?&m*6t+$Mc`L6!EEHb z!;CxWsb2m&8;U;Z*nh}bSvpd*kykiLb`Tahxg&>s(PupOjjE*pD`_n=-^ICEc}nP_ z0@Rr&*qg~a^>oyj9P)DRW4atT&m3s0=F5B%zS=Wf3^+$VOshfglG?Uv_t1>za8#OA zs#^F##?ke9jZ|+26!PY-5vMhch>0 zaI@ZDns_BH5nc04(g???j|PUCc#gZ1<-}Ps{?5(ab6pNxTpZA45nn`5Q2=DgcmOQ% zCzx!k{u?KTNHcD-f6tjO9?jFArqt`Dy0+>puR?xn1U@qt%hUI4b_$AF*;Q0ey=&OX z=`-62yDSCkuvmq4^GqSFn2H?ixlEFaW^VDnDz1zRKTYgn50lgVUTl!6*bUJQ^*E&p z`@**~HLIIz*LWfv0q4$|O}>h%j0`O%b6AM{YU;&xmhT~7b{dK6k*YGp`YT?+EPUFLAcW}SvBo0XLkcGVf zJ|0*IL*Z<>zKwsx^F9hl&s~%J@^@~ip2e#}sxEx!SSy#L$MQQNG%bM5Dl`u1Q@H zzy<&3ch+^w06F-)?9ZR14J7w!|HSVC$d{|U$tCP#a#mKHB9x%Yq}WW2^T=5H*?R9QSy%wBd>aYn5?PqsPg%v62}q1m4r76Nb?#&5J-=_H+b#>pQp z=r8g&k$EXdrU!m*Gx+n|komSjd+=1#6;&yITmYOS$5PFP#&6+Nc)Cw;s-8 z1VB&)AG8o$`d?n-rLdz|@H6s8catcx_pujC;ruJTAI}o&z9IGH{KV&hc3nhyNR% zEw4)`--#7ek$;+yzd*q!EEJ_0KV9~F#DPc=Ye-GFhW2CoX9aMi=j(^X_Fm*jzJ=Z=vrqC?B-HA)5;hK1s?+@I z2FKnE0I=bBfdK2rjK<3vrPBZQy|RBsW)DKRrtTORXiZ4L)9VwmmdW|yEhV=~t!??6 zS=AJ_%}%Ib66?2-L-pKNK;4#bmIW!0?Z6q7arUxO^dIh{5&(B`6 zr9h;xLzC(;gOxQkSX04zjYW)t(oU4tK9b&^Q~7RxMWp=<=TlC%A8-GoJUg64r4u|$ zYTx`qzaycLI5dt`|D+&4EXZtWC% zUdD9Ov2Wjd(A#5AC^KyO_B;I`|9B1A&%j`mSA6stYJdMmQBD=O`sgyw^|E=hVbMjr z)yG6?mhMuFPPB4efaz8aQFf%=df5D{Xz+9r;hWmz@4@VYuqe<-2)tLSu@S4rIT(UreJv6(I$< z6?HNH%C5P$B+@nMEht<#;-sGI4LvW{1w8g^lkyox;w?;vVPu;7vS?*mU^4w=iajx0Hg zVD*pW+!BXRqF0w&l5NhPZ@FT>dr)8FyV4AKSsG$HeL{ajIDAgLa&kHN3*_Bwl#NrT z^=JTU$-g1~%L~|8Jm@8x^83EcjN`G6_(*Hc9DH$R2pwKBk?z@n5ho&qC&i`1cMUKp zrD}`uU|=tk6~}ZBY)&($?k#4v1Ix^ze?0>DdX#a|(l-lt(Gxq}9-U~65p;bF42+3O zF;I(=wLl9gJ@`yIbv;lZ`=f(1qRIRv4XNCZ9|uc;AVM(eKrvyWOI6Et6#!0(-NnC# zSMi$1CMZ=SIoo1wsQdi8rBQ8sZ<%jt1vLU;fu?> z{qa(wT&Lj{j1*OBo;0#H#2KF3+;a@r>XO({E^%}JH1NMG*d$Q%>Ior0cS z7L(P^yH$+=H8Az7g_Ch4>8~+Pt;>(H=-au&BR;JLCut@qt?5TC%Qh|On`0?uK96pD zQ`*$Me?4F3iQo>E4JpboAFQdHhZG2II(O?qxpEZO?N~B=Yro^0*xOajU=P8r57c0z z0#X0~SDVi(XJ%&T_a3iq#Lf!0irL*G=p;d^*c;L&KPEFnQvMmT6bMt>?Fmx#L3J{< zPe)8n4btC&={T=b6x6ZUPao!G$}RgIJ?J zJ$dJ2X_nbEv2GO}``y0CGfy~j6L0g!vjJ}W4H3aWIimVss7DZzXRL5vrd4S&K|}6m zyPO*p6H;~OOL%IOJmCAkDsx--ZVHK@C(a{Ajx0sH zt2`A6ZLrYkHfM)1_P&VTn5x5ErM{hhkA@G)t!Q6WdE3S=m&&-HzfG{TE4r74#%!N> z9yT|XS5>^)Mu)8fJ(#=!!C@Y8;?qV0XTUtihPjO+JC03YRi*=+kn?W2LmdKm;O4dvqV9!Nd@*m20t*}E$P4id7+~MBUks`A zE16?|NZTK&Q%$Rj#{HMhA4!E9%TFJB5v0?)-NkqcyzO$Zfv8j?(e2?Qn*8(_ zhCc$M+P?qAJ}KjP=e9`J40UVOxSvlZ#331TAC~HvraDXCgO{0x8N^JolZ_rHj9cP| z#*?iUlI{Q{N)T~LTy+ps_fA;}-B$B)Dg$H`OH~e4d`XLf>|u9{E-z_=hsvfB)rUtB zwo{d}?f?v06vSp>?sk%wJr^#E-N`RL`NBUK9jm3HMbULhvIL1R%jLyiYLlEf;X|2h z%j`Sb;OBfn+m;r>yBmV1c%iQNt0C5$TArRTJc1;cxdJ>s-De>pVnSDq)ueIE@Yj+e z8_h7X=<6PK&F)++6vTe3S z4$;7g0J(L%lRCz>l(amc{N5~Vy}Pdje{|ja8>fI@oEG#neu@@CrXoV1Z`q6p&q{5z zw-_d-Ed3ctMfF}7bKLAGry+vAho(%&Vqqvh0-g(+j@_YYW^$=|ZWLyOPl#zXHD1K- zn#D05C!+77?7^GTw4|6ow|<+3dw7ubkb9=ox3u(&q)a@tyH1-KI#2>QYRF z0d!JEFx}sBhX|M;&nFlh$ou|7uT}{hAQk!fGH3KNma&E2m9XI3L30AEmgzJzM1{koi7{dSX1>Pl4fd>Nws_-P?_^O7V6wp)ODUbQ3{eQd~f}O8ZJhz8Ot3Dr%- zwEq+4`La_+Oji-UpmKn~a@85ObF$iT-wXn{3K84XLCRfD>j-L>o!WqQ@b6vLR?$#C(UlHct^^ku8`D<+vV{)1R(xh7&6fawu^9E ztv^j8FHlCjzzAPEv(?250QIw4N6iMrn(U8NjGIs;`CmG{LnEZN)RKai4q84HM~uH7 z;)N+%ahsv!Cy+x|^IZ(+jXDV3U}sLKI=b~^p2Jjmo8kc9nEa468$yxMXO%Z zXE1p*WD}4GnK9q|fah(vF$X=LLwQV|Zn(>WX;&lr0c8Dmnccdo`yXqSW7xTUm@{l* z>f=E%_{aFqkuzskO43nBvm;wvmJz>iKFhBSmG62gl(wo`=%Zv!H`|$ZYY*8KLac1Z zLcC;GdQf~m?KY@eaoJ07fmL{J*Lq10q(57$Cj{y+{kn2L#JNZAhbj6H89zAFa&Tk( z!1$N)pL+rcbyHazv$N(x-k|B4<*HnUOs`cuy7_)>Arjh`W`at=finrdI|VO{gYXL= z=nw2mT6=crciB`~k-C^GR2^MI;Aj0ovQxa{{3UYhLNSkc55r~r3g>k1$Jec{AqN=& 
z)8{?d{atfiy!>lR4ZS}7xXCKdDHLRQ*eim7*aH%z>(*%h|5b{29E4OBlh9UJ5vjXJ zRMk(b1C-);T`jT;@o2=%L-#+we(f%ana0JqHffaA4!;0aPzFfbH<2^%#X^R%?E(Qikrvw zraEYQO%}oFo{>);Aqy{b;4!I1W?>gVb9qSSew9k_*A4 zc|k?&+*Lo9WVqygABOD1^e21WoQTajvaFO(nL*I0E9SM9ORvS_I-7Ptt)Jq)Q#iOklH%+o{pW}VerA~$jC#a zc`xRtM!@1M$-(dpj=6LP5}mS7%$-sa`D(M9#>6T=+eMwFCUFbq3;Ks2#ash#H5DZm z8rpBOl^cRH{&J5a>!?DKd|f%MxX(TudhNS<5qg(yOH@U5--+d)*q9lA(THtX__(Z^ zSu7X?jsM?C*5b##){{jG;x2sNELr6m^*f9&^CxwCL_B1AAfo#;tVBRT)h#k#k=X$p z3bi<{CADCk{n;_?9+8#0Q3RdF#JD2xZ~NVA&Z`u8>o^ipGJF#OndlmBq`mAYpIK<< zysaM;@*0XF0fFe4-@#CbRQoHG1cuvs$gzN(>@({ZZn|5tV}20Jr>gSr`H#`m;951; zGwjA-^F3_mLk6=&jQecZi`D#y_V$hzioY;@MQ%*tHure^@@rEjcuCB3i=}Y#ZUW35V~9=szrq&rW2? zcei3;tpGVC!F{Y!h*P+t;-jrKQvOJ#E{q$GDr5?}p5>6D=(cWZ$HKLCTw9LoPkTMa zIdJgChq@^=RlOo6#B_zg8OQq+HfDcQr;-heWo~!j2((mTdzddHiBJH)L0{>-D=F~}?TKsW2z07|m zXC1js8>XD*I!c|^nRYr~hp+0Ah{{+mFu;ZNDu6ck#(5xA4lh2GC4x6&?Yj6Snt9YU zErlujZ9`eNtiASA{@n@iUG8T+4}s_UU{*YCa2o*~G#xtvByJB8jJ6p3*p=OFsvP&rob;2srNwFDjZ+;J8Hpsk9%RIykm8Sr=BTu7tUv<5lUW9fu z!M^=Y1~I%7u8ekQpN*OeDtA#STFYV=&4=jHQYJ*_cbm@6{&n^)sy5H^N&Utjs)LeU z+*0?MQrPl6`AvE=akZEz+l~1)vC)~5#anNQst!msnXHk6iesJLsL>zm}OxU5&`lz0v zgwv37y}y!dHdz%G_?MO}#2YK-nM{fRR(7N)hcMi3vdnLF7ytVM$T{@93UrdpiQm{v zi?Hr@;&FC;>Hsr)tHOgPqm>ojNVvmr<|_-cpb52-~`IDfImd zKTO5F!96l%z3oQ-_rb{!c{c>^y`i~h-pcP@!T>RaJ6z1aTQ9)|Hq&eeMO48p8BNL$ zD&>D#z&E^FkcfM2r-jybwpq&52NPc(!#La$td&peaJ+dFv#Bt3w`O&P%NE6*T3$@# zqg_Yfcj9!ZoE&mbLew}mxM&bL;OYtRD z#o%WZkF@=7KpI9eGP4@|P~w12S$tyC=F#r(w-fvAe~0{s5Y1MLRLY+PT1}Pip+_k- z8aW5U+gpFqljc)_emAtdFt>vtuP$N_i1?I4cnPK2ZO;7te{ z$@)K9!0oUt1?R5+?gN;qGH|cxBh%%{+e|1YC@yRBEA=Y6JMEevD?h&*JbgtX{Z`uX z@77lBtfIy|`KDHZJe%FF%_mQxC^aPN8R1L@ z#hGMYA;H0KP+JbB(xYemFcW5d_EkptqYrwm6~0Hn_Y}2@kA0b6OkmUbZjm@xS7EoL zQWX!I?@zsd1MN3(d^ED`le%4O7xJrf$2|@W>o+MOvw6TCTW7M-U307RCoX6|*|`d| z#(hBmIyVIFU{h%(y3aSB{w#BTsuz+sFy96OrVQxvhNk5pRWvN6)~(Y#=$(QgLJAoiU3ot6+6x=4&K2~mjXk)3Or6G;PAQS}$@1z^hV#8^ zrw?XY^I-`o>V^1^ax#n z$|1@NtYjl#h~X|T2SacFl2yJA0gVFh-k4?(Q+|@(;C>-*{&otMu5I9xU318t%@5yR zm^TMXhYPpmS7B+sCmOCNK(E4e_;>t9tXBPB{=@7F@Y;URGlxQh@893GgQ6SjZ4}GJ zPD0tcP(_YR3?(^8$V|evo{!?k{UqZcW>;!c+@k+*LOzaAcX_d3bP4H}Y?y`qt;k^W zNjy?4+Qi*>JM9!Zzkx6~Q>$Yrhx;73#U#DU5LZxC`=fgLP|m?66y8I+g@OW?|3Aj0 zwT>GK5JS@={FAqYU+%1G^5*S@R-#pR?4m+2hrl)+ugq^&(>#cx-=&v?3h-s=3`I>p z(sP^_I_DNNQ5M9#I zb7adXwL_Y@6vF2RvcL)CU$oqEW_bQ0DCt2UPgAqkVl4a?eiP&A8Un1*eIax-c0IK$ zEQYimgC`fXH*OL~5^l5ft5(j2Dg|X!D)a_Z>Ld??iH*fKhQbOB>TECH8)c|%K2=h~ z$Mt`T=81Q?e=AIWzEp@=?uM%N(8VFiD&$|WJ>GtP|2NC{noB1&F++WbFMy$K&tWTW zpZn_}U{FvsYxPH&L=b$#w=*u}k+%%mZ(v@UlAjWGJ6=1W>XWuK!0uL57uZ^+(bXd- z>&G7&kBXi34YZZ_tUF>cj79&8$MWn3+mJDjuZ|MgO4oWy_W*}&aZoo!o=D{@r*aIs zs`r`x*j@L4vyHY43s02ljsRe%QTtN$zZ9iIM##i6pGV^&TM4ZUcW)qZ@eF7~c*!C* zh9DCj`I?`vyi_a~{D@Z#!OPPS+tr>PI8ZTv$rakWFxY>0UtUra{J|4Zwa|`!J4Ktw zg)XGeFpSI;+ne-XEPt^hC5|VUK9gs-B$;~}|9!;>?%E{j;Y@b_5#l=)!G|I0-h`6c z{r!TQu(hjkMFjF|NxEPt_)+R%?q$IJ3jOLh8spcSvVK^YKD#{^mWsf<896bd=D#8< z9sf@0)yE&i=5c;veBGd0iMT*oOE=|6h&aT&4{}geCuY`Qip_-e$A{+rk$hV3$+i^veP8ntThAi;x=n(j{R_ML+ z2n+b`UnOhLLQhBJ{{zRiU-C}xf49Sz8vX@L4?-X3p+a|HZ4yiyDQ-vhRL~z_*GNij zqy1_gWTRe(x3R7TPNHLmWQP2b-3Pnb~K z7ey^=f~n_9>jPc9e=jn_j%{%MGM}Qw5tA>Pv_J} z7Sw$e&_+(_G?)0BU^wu%8Ol|fj7BzRBb0xwIt9!U-w61V%75ZX>lY*VFR1@@dDPGk ze&CkJ7facH^YjgjCm7_FNWeyd_Df#cN`y=K#G2-1`N@XyIj<(=vqizeW+=J`sGG)d z{~1-QD~f=9kW&(T?Ge#yUyngK{TtNj3KZ7bS;@uWw4YbIiHxMUDyz2Hd9A9PY{$1> zQgu&A$SA~jR=9_~%sx0q-Q?^2c1!VJ+Fr9W2wbFWuk+hDtcyRE^x4bv$yPmBzi}8M z5P{==b_Cvphio>HZYoDc9UL+J`JHb&1rO!FFw7?ZY(tRJ8AVDQl$hS-e2`!^z1 z<(57KiqIgM>d@vfkab5to}!ZWbaJ_e4<+?%S9U&hGTYsJwiQGd+Ujm#Cl!lsqbN~u 
zUU6Qrav)0V>v{l!(N=9JM*1Cs3_RRU6=NYgzDm^ppX>h*g2^nQTryp!m+9&%lHq(r%_=%Ko=$@`wq%(gzjj-7=F0yj!pCW*Q)jL14ij z!+FLVxslhY`@WIV`o&da^d}fkL!V*>vDKUM(v^VsRt?Ya>STqBEvSX3OfDqB~=E)H9CdGhi%-}55_vz=)<{IB99+Ea`%4U%EO7pktk z<=)&%{FufA5Uzw0C0LqEHtP=?mKJ~M;}y@pm%qIXrcjB6Xb%hk75+ro)pEY*A8mQT zw1rAy!axgFo8&=}@wu54ntI7#c>Hv<@WUtRMTx#c&w}wF?~;f;O0HyRjV<#~Cee=F z;yMmTAYKnR@V|T58yE|aGIX-}$wB3qcuBTY_s=dn#`q0PYf5;wTj~IHt=%VH`(N!f zsZWyGtl`2U(MiR35C=1z8PycKoH@J9lf>i`sr}Eyj*x|Pr>idFaqhd)9_`vyB3aC$ zTeipzqbg%RiAcK3lOm<2J@jn&twN>AmN73Z9ROio!xPyJ()F*#O%+!lJE_%=idn=) z`V-1i^VQg0g7TJ&pGWp;2i{uSxInU;^-$j+5m76pwX~Nc&gy6h1{b*P?i{}Z)!3&y z2|B(|^fliR{|HhXa$_%j?`O&kU_sQ207;9SzFwpK zMX05zLs}~jB4)WI)m?)2vSG*XepSNC{B5lX>lpq7LLnX;J$6`U{8>$3PD|PFiUn-H z(`M<4fZ#a}E^(`YaVA8?cuF3=kPOgOxDR z?DpiEjhSNM73w=Cf!hP~Y3v7}Griv_GsBlZGb&=NA-uRK=^B$taT-5YxeCwx^H{F> zQX)li7ZiQescv2brpI6>W`k?GWs2`S1R?}y+kr4t`^`pw>e>FK9N;3TC6z{nc-!mQ z{l=2=JiU}5d-CWu_WVUNJMDt~qbOk%iZHzAel~XJ-R94Y&)z)qhke8k3ZUz&$OhP2 z&+V2qo7~)w5#%#E$sGFO-&!T|^WYGvz~=h0g-LRG)-;%}Q2}-&4Tkz4>_Jj2qVRUn zvux}j&iCKb_DW7g9l7yWCK@@IYR26^sk^f{lpjoqC@@?FXwI_9eZkfKfNwF9)#PtU zv-T_lHPj;}yan8uLrRO48kN)1Ori?Zv@D*GNte^|jRjRcAe-h_=q@O5aio(;i)iGy zu96n@i>lML8e>wRwv$uBe|z8)Vf1L1dGTsJ!XdCJ4LW6R3M%0=N*{SU)g2kzNZ+cs zqbyM3wsb|vo~>%RsGS5ofB7OHOl>AZOvf!KZ{#`8_-JO(MN_X?W-dJ51g11p*7=@X z-9Ip8%X?HiK{oU}Upu$A1-B|1dzE`saJ7lCEh0mT4(TrwYcL%WUKp>WakFR=UZC7dHp^$_KrI)P)$M^;H~DD(X0 z`xqsahz4~2P!#^VNbAJLuYIr*fO0)fdGn3CQkpb`qcwFTRM1r~f87O@$8d{excnQ2 z;=+B^UdtObsvrYrnbJ1ZKp`81o3u`~LsAn`jd)YTcwVyKiA2>(8+pG^B+c2`W73_E zuyvDu$PzRF^0CX+G9Of|wceBVYbzqadtF@^m`xOq3B#+rE$_Mq!dTyO zm1fVsw(88Mxs+jG#>oFL_5cO?=%zf+5uThb0DfKEOp!bdiI8t;Ybmyc#W~FxABRQz z&43j*gJcOMxx?mOB@#Gw^@naL0G&25?#K{N5>-%Fz*RBrI6nrep~#v0C@jv#3|CSu za?-WEJLRchDU=sNOucR}+cb3cKjQ6*XO?uZnU5^NB3h*SR@wgMKi_=$CLYVnT&)L( za2sAJ(KT63ItTf~H&IrGvBy6UNrp5@%oGcF7ZMnHDolurRQ?%oijm@tT+cJ$8PEky{?L$S0_*u&jJ0`%{n`+9-eFz0-5OeFoHQiTs!Qd ztjDYyy&6VzAc~wjdz*vg^%@dSXm`y$KFeSEK-k?D!h%}d@a}UPfDHUU5WS;G*Od=P z@5atoQ{I-4v=(N96jr9R=b~TX%Cu9);K+XU5Y5K`U$tVAhMfAsh(#3=`;6^Ok`nRY z$G#-j^z7{xkMyDA0R3;7Tp7Ir@xxGJ6Y@oLFkt!d(6%pq@un3segQ&A%0HmmV$&j| z{Jj^nCrg#=kC;7fX7oKR?T{*C=xOE&ODK00_gp&qWFk!9vx}C?6UYi7a7a|_h9Ivy z(L3fl^X!f5dU21O_VH4|xp+N3nT-^lMFUeMSF?k*i8K{%Po0mA3jOOtfpF0ftJQ9J& zkVk74U;3S1{d^AX3Y(;d;3f+c!uo#;_h4pVaNz~{b++HEVg;rr&;7=|zhHkR975(v z8v3CaheQi;tHR+4%|5@nX|sF>MFK0jsOKV9Cxap+1M9a3cZfBcRJH|~W?e-Bm|ROx zzGg&;QJm^s<-N{K;T_2)W4& z#ctH;-gncRY3YqB6#KEzuDw`xznGTr{b|EzTkQ=nOok>+X~mrVG|j=w-pjsbi11K2 zmA1MCvUSu$oz9K z@GD2-$fNpZ&30wG70~;wv&R)@#G%y|z}*Tt0-VW^xGI&JsvSiDMvM2f3rPG$kl`I(<}? zI-aijyFHzich1y6XW$@pzt3?m={3k7dO5s|25it{5DL=d2I}n2o{6czk;>#<2+? z>_U4bP8J=g$_GbUW;wT^oXk^F6#Sn5hdRF-O&{DxzKak8OB;>!1s0={$UVY6aElM- z_AfMX^$OiI-+zf=fe#g;0?SwKHGK?C7zOD{&R;30*}#*;K@-2@REx?4KRJ$g^n=;! 
z3XJbP4CFapGdGmcKyUdLJ@|FM%c3VUY{E^}RhJdY^`EP_r>}sCMe4HO(%bS~PxRX< zclf_p4?7z!a6t$$NrXD)o_K6f1+X4IMSB^GZX}x11$X>mE;TmqHj0<%Zu}9Sb~15e zCtaCggnJP)Zm*Sbk|+A>Oz#Uuw(9>F?V7)fr@)?#;JMi6Y~vcJtoLO12C3a1)V^+O zy;O(>`SjX)bcylM3r&3l<;x=y0XGSpvP`kg_PU=%HUl!Zcsc_M7_!_genU8}jp7Yv zIX#2fkG@ZJkJW1kX#UVXX|#Zu9L?p4tMVmMI)swyy@#|U1V7Y{lwGtq47;8BKq9m+ z&L1ZF`l9C=Tp*a)L8&$x1~=z?y}0VMJrhIf>8z%Kb>qd`P58_f3~NZt^=z4co;r zyp*=Qs)y>yS`z2Gh=xVKa8HqqL^+RK*>FDgH)0i{67JPjJtaKI0%NzKmugYOG&Ng_ z6P-Yv-=n-DP)H(y7z$_sVJIoJ<(xJZd9`UR6Ub2^RRP7qPD*&0eEGtaR+24R zA>6azs~*G}XdTnw3&@%L0o`gp)JuRZ{N7!+KUF*nKy@K)#La*kt?aFR zE>7mQ02>$@Yzzr)tCpospSxaSC zE#uny@g|3ICUS4J(;)6sOqCMlv4kzK^tOV_m;eLg!Ij@L2g?4HK}tGF2jw||z!j8O zH;nXNlCDyZfQOSlzjo-3%B#`#%ZeQIKGF&HZ%er(QC$J=8+viBiD$}Nms(&*()D6f z3hm>hu$&YP@(EOEGk10Y)1{&Lz~t>mrBWH0h-ruwS(Eh1Zkwmq5AgA8Rf?ZW9FnKs z&)K4mtPDQ)TWwqZ(PCLhYzb95sjrfPBKGuak@yJPy5W#A$pSX|KWU)__vr5pHBFpe z+XA@q81@c7dE?a|n{f>@eHm8%nnQp>#~ooWfGqQMsnvQ(QU)G~9A*HNXK$r_5knH` z)y$tnR*{mumgI$Bz7p7RKuv&$oz%Yu^B-AAmxP>^UX#rFtUOP zJwM`9M$5X`r{pEjjwkV4#b2v3z9bNUsns6JzHKy8@HyGvKM2-$P7HpmG5#QFz)|HtqxM<{0JPP|aAZvPAOf^PEbfS`+x#j|zWXNfzY5!>VZH1Q z%)$UO$2ACAXRG^sTglgJ-12(iOk&gm38U>#@^|=0P~-^4!BCSna?Z zo z)@$m26j)QOwAfdxFKFbp21NI|OHWOPl$Ov1Dn*n27)I^64auM7Gcfd;;F8xIxK}~o zhzI~XEuDX;vjty!dzjV0zFwbCVOaUbc!a_gW)1y=mQy`XKc=qArcj}#^ANC~;MTh8 zxS`~}>cd|EQ={V4O>scDUV%9}(Gk5J6OgpKcDl1c-WH96L$~@Y@(H=XUa@9(>!isQeeQkJI z&8x4thS7I9)^^3Fnb!VtIa{ltpW{6ziq7zITf2sVTow84qX*Wxa;K!&xgw^6(>-AD zE)or|N!%Nha9a?xP7ZXHb?P1;*1pxZVI<4SWHa`C$anixQByJ2NiXSfjIXTfOX`4x zK0LC*+M2>8S5!v(RBwzA(q5IbM(+NvbM+M^(vV=8KLL$me5BL-&eU(}F;j&J6~;(3 zylq}H>QO~GCZEK;$)0dll61H$GR8^qtV$xewD_~oY9T6hcc!b|hdT1>`2;XmRBdHc z&h#5e#&DG#jU?5qSegyW^X5gHABSrlGLfpMAc7^d33rvU-=JHJ`f5?!s7Y*tA5SWY zmwW5v5J9qHmnZ*3`u!5KKK|ShwGB06$keIK60}ddGvGmpT@Jk4Q6i$atRIvQ96z~j zQ~{*9zTOT|%BiBN<4tADX#Sh}Fgl97mAWDqg@a2v$SORl>(+K|3?v|O@Th65CZ@os z#CfSU@q>#*u(TKM1V^xb#I!lhcK&NesK$%P_A%=(Hu!#dXYsQZiW-rwA#GVFb?G!f z3@zF^!}<~p5fEGAF(I@_U$WUZ&!SBl{9|cgZIHOsX5I$*sG<_5YB#!QBZinhVIwWn z7FV$vi1Hug(X}TeMG8aoEIJQW z+zMLQkNfe($_ZD^vbKQco~*%EjgH&rB|j6~nbrAeXNN`!h;m9B3lRLQv4qXAE@1HI zu079Juc_JRIU+r%M8G}!rf`ddOrcw5Q7;-NWR4V0?hwp_7fMfgk>V{haEx>#abec- zXn>>=F+X?zP#RK6vrikm5gut=82J|55IEy!x$l{MlSgh~w_`l|P`4(vW1nM;t3Oa- zf*vt_9Z956vRn+MuBIkQ=Iylm{QWx*gh!a4h#**JIi;?5-wkutqegjP;9NF`Pwn`B zv#EW4Rgo%uw@xng3ll|r;^an|lV)|hElRKYM<)d&2?UDeFOT(a-0;15TVw7c^7q4E z5ADPExZ>Nu2h%paD3*KV`|)l$&r+B1jD0Vsnz=c3%q-LWjc}lCf%i=r8BoXQ>W}uO znTZJ14%)S5o8#@RTeoM}?8Q~#Z0*0EpFG@>M;0dUo|h(e-c`?|Py#Z!Ie8}W2iViP zKoVdd@fI0VC`|o~74=A1pa@LvxFOKYAM2D@oFrkpCThTU_kHyv{q5po#?C2 zDbDB*Zq7IFMKDzHXEh95xQIQltK#+|@Xu?R`xZ42xg+bx!Nl)ya2t3>eh@_bJNNVB zQcym*%q<7QVXnAjekeONp+>I4h__TOwO$t)0Ie?A z_Zb?#50%@^vs?IGU@hLDb|N8|}d{uH!9Qk(D;=Z*3fKrLx_v*i2W1jPQ0 zPa3=U=^xopyUcQ7r>8^ZS@b0Rz>ViFUYEi1k1Qm6g|YWSUeL8Rc$i&D8+XG4%4)$Jv#DDzc2i* z8M*!y(x7JI8}I|>Gmv}gDCbiQf*om$->GGLkH5O#AxmsDl7+4Q!)c;ra#h$!X1#L- z>(9rePfZ$#tGh=}*>+51T6$+ts0u;Pv1x~oHBMJjVIu}j7au?xX{2c8kkSS z=+fMF|Iu_zat05I=9-Zkxz3EWc0hbW48g#F;e?xh`8OSrDVZq$MLn1!uDMWsAVm}1 zDCQb3EQ~2TWP&~sEH6X7RY`97Ju=bGSkMZh>it`K&@pF9^{|S_ELmj7W8!OG$QNi3 z|LaaPDR_{H+qf5bJP~it7@um6V~ffH`F2ya+oMW8@E#Oz6Jx`XJhl9~XB=&TAOad$Z-^P(_g1V70m=Pg)a|x*UAk9eA{9mxX6tM1#}#aKQyz}>o7YKI6@)MN>bY!_F)pr$lqNDs@qEm zL3Q=%^V8yp0cXvFd&K%F-nm8#txb5xtph(s(G)5c8Wa7ZdscwAhkXlDjPL!eAEU{4 z5C}?;*vKoXX|V`*8VvM9;O9NGY@v)ZKs~8j{{ArBLIUkUe*`AZE6UD3Rz2zAELzm@ zIHbYy^L4io8x9k3EWb`nOQ9-B2UkomHK9Qj9NwyjOzPzdhJ)jRfBFbc277*TRITGY-`Z0Mo~vo_n-S^kx0g{!a} zu9+p)GwxZuFdm#HF8XQE+&_R9Tr43MXqq6f0Ea5@SMjq~F#4Y#rB#h|isr9q5hKU3 z2Wlx&Z>o2^J5q{Bv9oR5BzDCD{KwNWqSP>yb5meS+#5VE+>3eqkH!CS<=-?lAJ 
zJkC^0nCXgn?%s$$H@=4)v=vm!qjlkK{e9c{VojA%w$WriF0eLFWI&8Oyc<1z{uz9jzpUlp9; zZ(Vjd;OG=rkQ+MxF27wi-G!CN-o{3TST^f6CQFa_%1+4I*ouyv7KZ#&ia0OaOU+3i zWSO@p?a!unZ?P1O{B9$3;iPP-MHCbBHqT?fqNopmX5mbxMSJ|Y*Jl5W6y~?6>uRn( z0>fA%>?5VmhQ2}nj<*p6y2{KnHFi$6%aYbyaET8XQ)1e~oh}B0R6^S?QRuB}W4EdF zM#v!!%F$O$bO^~G&}W`_xcwHzFB>p< z{bi5zyfGfDK?8n!sS3wLVg2=oX{I*=?NvsyXLy46J@_oVVEhl0D?BN#8mL&WUSVFm z`}o^A(PFFQy(Ioraltak?+Fq5rIxT#xsZitVtgm!Tn9^DW!>UoJxmx+8wl&TOcbzeM8me&^NJW#ODx~G5ZY5D7+!?pZ=IPYx*J_po!s#G zA#R*<5sG!0{n0To<1Km)K?`O^|ID1mdV>GJ#03&gh1VYL@TnW5#Gb>7EKHDLcp|Ui zq!A;t*X7V6t7>n+b3G+l@`b^5kksCS9L z?w(Yqce_L02->JB@Na$84jUeaw_#33x0a(Qmsa)9lGG_UxN}5r{qE3zwfNW!S-2MX+qZXT|O;jQlQ? zEaIgwZ)}Q?djP(vW52?(<8T6P zB}Gso20;S{HrKvG=-9eXDc)LwQ=p^7O)i&AHW@c z@jV)0lFwdEh6oHU5d1wMg=g%1gx6tyo#swFdDxRCTUb5h;=3t+Ss;i2Dx0Isua!=7 zt&(jZU%f~5-2vk+(NyoUII%eX{MyoG50vsr;HecOMK7>Gf)gtBqhD!#Tdz8VgO+$$?(>Unj zpdxVfvhEFQEOeJhyT{oXtYPoLsoJjExSM3pB%jWg>a(v7IY$D--yVE? znWbAyW&JE3{F_QGwNKj~fV-nB^%lM-Q_wNI9BxI-kNOW{kP{#mX_t|WbxjH{jR8+` zc~a*80d7E%zY?ZCp6}#ovhw2ws;u&Q19{YyuFZmx8GOpWe%HVFn(k@moC8M?6zHrG zZ+aT(`v9OmI$G6^Q0l$a?IvK0Q{}4{E8&xl`4diyTlxF{sPC6@EC?EZRMgJ-gSi*H zBsmaL{C`H=L?K3Ot12$7x*k&P)ec9nV>yHvdW!rv1`dmhd|+YV17jh;*0K(Y#rUW{ zG2TXz{8_Ipo&Fn1gWueDQ)VB;Q;M>*2nj$i$hgBU?En}**|(D4A7~gC`{T4>ECHg(uOQV>ENuWmB8k=$w9khu z=j^(uD6~*cM6G}i5qJ2m$7Z4u9@?`Y@D8hE?6GxOvz<4-Mhtr9Me~f5jdIp07XY}Z zy25#R8Cj=9Ar6v%NJd#(`rIhHsU-7KPoW-*95a*vpkQAN9(r*$p?6G2>XYx!5U9jF z!}Jmj)T%PV12IPAzP3)0M1{5e4u^jaU{Obb_TH_3WoA;jmxVQyDmu)06~ywXo2d^h z=Dkw=qOX?R1m53}7q{^{F+AdlkWs~dz`mY(*l9~=rmv(V(2LVz(vtc8Xx1^ulk~gT z&o%S>6*m{2;^8{i7cJ}Y@LVxNBW@%8s0wQqvC7i{PlHZ{<9$N`5E3h|h+rNl)GyC| z4*-Gx7TXE6@;3F{sX5=f-_uKgR=tHkeiiO>CViD1f&qxw#4&)K`MbOAa^SCUKvGvD ze}u^DOg4A{9JALjReNvco{`AYL%z6naKvtgU)gyh(u>ibH|5!hcdbglDKNl{(cJO*3aI+G}Ay+ad`(?gl&hmM6D+ z{>eYh1%B8QL`EW6`>xaCM6A?DS!2ltp_-4V`ALK*IO81rT$!%qLY_`P*{knzI;|v0 zJ58{x((=OSMG%T;O zij^UzxrA`R))GspzO*NryJ+jnKFENW^Y?{NM_h6_@LxD^#ww++AQ!-8&N+P~1Vu=sTC7`MimeAShPf1zf=RaQR35<~|2PFL`gC=EmbWmg88K4-Nh|0@{IXj z_m=s+aQ_1B5W!W{UEyzu!Grgj-ucz_6_N%9hM!4Rv zGI5~ILfP9>uW)!@vX^empAhzC@GXC!e;m}?C3cWS+*SK8mOuW5_#Q1)Zzp?nDz5(3 z0Uoo`9+ueDJF2|<&WJZN{abvG?3; zRbeqzp1DMjqKMb+9aG(@$9hB^x5UE!rG5b40%h78556tRA8BqEm;2;TQS4Lry&pxo z1c^;n!yEy4ED`6u+4f`q$_78Ps{=SwC;Me)bm^~iz#~RTPCvnCx^o%S1L$%heFZ29 zkqN5*p>4NyIq;u2Agk5sBlAZhZjZ6AZg3s@U&s3Yr)STJzl9VZ0X>D*s-2RP1y}qi z$-$DT0@28vD$^5q*8xVOm`$;ak^1j(vDjN%JoE7~RrM$5eo^OTa`lpXo%z^YO%1Bq zC{hhnaU<99*bT*VAhloMAZfHz^5BH^hIxN;rfj?H&Yl0i1Q{5b7fzO>d|< zKKf2PT^5$wox=FocFT!7?Un#pNxq4a(}KmYwj+PY!7k-^4ls}P7f;qB4+a*9K`OIW zT9l1pBkVfw6FMKScaRlH0D!tApUrdjuY2|YfOJq6M&i6XIj#pIvTs_yZlw+3LnY=v z%%hf}_y^4UP7LY+a9SV7%mV=MpZx>6XIz*Z2-|EGRW<0CQxbL=7Qi}#$~^5YjOX98 zQE8+zQT8^IpQTw@>3Y9-2DtS9a6szF3MDaZi|`YLW26Y5-=Tnt0Fz>>i;Jb{N|t$A z;>0(J;Gz4G1g$*ntxN-r*?%W$t0<@vn<#Xtxq^{}_?crt<=l6zxB3QOiTkrNRGnqEIIW-D0U`jlXBOPzY%g*6FIm4&_i{R2iQq7o&M^mald4R3 zJl*<4QwdKh#kCjs-Y?Kgh2mHE7A;0|YAmghOazb#lskx!6C zJ2}-^0?gjx>fc9aisz7i(i0_eOhLc5pjA?-(%;7=lKxcfDuPFb*gYwQ9F7W(vX;Ubq@ikVkys%;$<_+YLJV)co zTH}<;S1EFjE=%O!%SZEx+!6PLuMxcD2Vih9K)7^KaKO4jZ<3HywJGCQ@w6`*8Q;WW zJ_k^;FVtDvuuY)I#b-!Fd9LNmL+k3K;8?s7nbM`#NJkR`YeCj#z z?4VjoOS2k*dZ67za#wAwoN$j}{cZE>RbNGYcj|r2*93GOVq1Xr)8(C0hg{g-LW+Fo zDF*{&?}HC3OsaTFkTSP|ljEYH>Zx{e&^}-kS^wUoX!AkZ@~Wx&m3AcN9TXF*lwP79 zpsl47Bx#*hgOE@Z73)f1SZU+*6!EmvExAf9&WlH?0eC^)i3Kc>GUDT?JfX}QlL@GT zOCCuUH`1=g)R9RWqMRv}gr5Gdw_DMsM2;?p2U2eM^dx4BpXq)l>|`-uc+<6G`8U62 z=j8h9H*(Ls5INxQPFZPy$NbMnJ*zPQZngH`>YqzUK=A{6vt<3r)bF44N|N#UE0E_| z$f$oF00>^xlq0fsCpQYxq1#TJ2!OQS*71YfCqN3IU?f)sPIS+4L?d2 z-Ktd<`eqUeP(LA^3eqZUtjDXXbDZUfAvx`B-RBMVQSTRsmIcYxFfX--X)0Ngi{0+o 
zPxkOni%j;FflcMH?=7Rh-_L4voy#_t1AmSKx(yyxrzv>N@2y!n@(wxI?(Bp#TFl}o zFaVA#CAl5n|GX^q=N`d5>Yw01aLo<-)OuBSRBC5~q#k5#QGKp#&?g+lHgbAoInTUP z55h5bsZwnJ(a-DD0T=qWkOlumHQG_=pCv%82=n8Wq^fOB>qUORQcnh__vJm!Bj$d` zgY!|Ue4+3YXFOLz?2c4QXcNG2@X3NHZzke;JZjFjZyc{lvc_KDy?Fd7wiCZ=eY`KJ z5{gc*lZZmUYU#E+GV^o(F!i}pR(Go7XD5b^d*nX4nhtZSYwRIybVcr4h=97!8EvCX zq~mdOJ3x&nI3WsEVF+zstZmF`03&j=eqL8~JSlpTnIaB8O~!m7hKhK{&-!BJ!X^7& zS>x?+$&qsT-k7@jDy2T0x8|uVJgwg@`a`Rv(f~ox_Hrj_s&EK7NR0uYO#bui5-je| zk>)TI8pN$MNy1{DA?M^s0PrFCx_Mq#>b}EO-}NGRJ>G22&WvpcV$&llkseU_mjxHb zEC)yYbN6zOy5KpWSCQ{GOdiu7A{{WSzITG-f)aZ2I_5rSqTk8HE`8@>qg2E`0XBX^ zkxr)#Wo_ST3OKIF4`N{l7^CiROPmH_0Zh0S!0y+;Ktb3ORnqr?oA6ScpHy1XA5GTF z+NN=xmCZNZnlTmqb#2;4yETAb*ol0GakTyIgke%%juG`E&UY&8`?wu?5y|d1#~n?0 z7)OBUCU-{tLb;@IO~=LF!+Ov+##l@oy6+#EpH@EeV;R56O;TzESZMRt%>AIJ9hbcy zT$$(K+15$u0gFtP9OW_{ik)jqG!%lndC%+==uscirFa{4?Z`UaqoyhKTK8Y0GrQ7j zTbf2vK8`+YoNUrNydUpDz>+I18isf`9=m+|lE_tEp3^_VhjIH5f9kUr~E<oYC~w_J zq|28Plk}6Ur{AdC{E~Up+#s!f%<-IgU+v76 z_`{0mc_dlAT5a%SddiYE}+sUJ9Gx$wSFiW#40e%ZOIi$JRigJ<-+WKLMuYzyFv1{7~71Q?V!p zkqqoa`eKCchSgE+?tb8C7B|{3d2jo`t9bB~jcTQFEeM`$G=p zK-LNSIeG*=E;?F&haTMgP~O1D#7w_IS+zARBs)F*u{9$xQCIg6?VGBn#DZmO!n7AU zglQ{SL0g2Pc>r;08w^^j4Q%ZuXa9DdEIm)qvqTOf-CYww_at1Ho*xRe(uxH0|zLJwLcRTn=}4cL)}o1PSgW!6mqaK#-t;1b5fq z?(Xhx2OQ_RGX1|byTg(E-xo+e5^j%a9(HDKXJ)5sx~ogxsv^rRi>zTEkI?@j?z{L& zEPUs6Ir$MRz(=H}6iVZ}$*`C#@MR|Xg-K*c_@0|6w#Pcf{*x#Hgx)XFO(}^{yO>ok zqhN>|mtv1KHY;O4ZEN_#1Mef*6VFD{L|E(#L@bvJFm`hE-Nn`lFvK@IaV0(&+3&j# zn&a0S3pA#wkqiL60o)^q-&VF%@dS9n3_uI3t2Baq3eYJNzo*0+G8RaWUM0Hd*Y4k; zp9NOvBeuYdGF+}9@_{|2f`UjMkT%XRP!`35bob9%W&=?X{<;`2w=7_U;YS27GBFzG#VD7IQ9iYgq(Oljj?fNk zStLJII8U-7yc+a~L$|(*hl4QR;#@`DpqKkB@Mt8>N0*nfRZ7mPy6iJ$9r8i2jn984 zh5o;P^@nfPvjd7PZV*@?c!xaB0XbQ*bl8j~&L9L0jzM!O9 z$~ZRu;a=LI%Tplkoc%GkXu=bQ{iiMFX0a_2$`qt#1N1?4nQHCE!OSBarWC&r{UC~gTw8;zDZV(n7=nZ?vq$dF{8Gnqqy*B|F+cwrc}j>ZNdZk2Wa zrqCwSa@YY@r|N2eVGC`O@>w4By8W=HDgFvMw2$|sfG^Cs#0TrM5yUsumSt>)coPKe zM)wP+GgJhh-|~E+9SV}qOj9ww0)|B{2M8!eT1(^<2!!7l!l|wq%B^9&esVp&vzP_N zH&?C~ra@Sx%X_VrZel$?Eecb$7fJAQWax3(HUxfi7)GVTQhQSTE!}J{2uXOf*|m|D zCye=-OO?=NL+-GpaxpQLvCW+>4vvF7F>~6UfLJJ;7~y$1ycu393>0XONG_AT<>^#LPiid1B5g8a6j*;U?7?{K2u(F(mgAB{ z)70dN&h<)MhSawP{bIZv82~O@ZXAnkjH+w%f+Fn|Ab)R7_efU`h|Wy5@dEEfX@BDJ zO*H+~9_0Nf;B-VrZ?dPS_gnrmzg<#qgeaNTeH$|t_eutN*UdfN_hhb4JcH0_Xiu5QL0AQ)& zCHN&cm+`gJvnpd>;6KICeGJB{CX-Pk-H*e*N{iJNwu_)~Vhqv*L4m+>-a^ z%BL|;$ynv8IlAk>nB#iMdh$5OnQS9Pa-7#+_C%TwSPG!flY+lNnQhonBJ`;$z)U)z z#4k?8G+OaxHo+6e;d#@s17!J)UuI7sV5pG7xDujM!J7GwZXYd-Sc1VA>{&%DLE`Nb zP_*tOt_zQ@Bc5Qv-Tvs7Wguqn0^40$AdTns>y((kji}oJdH`#ryj>_zALDIN&;MR4 zjzVyH^fQHMji@LJ1z1Go_{x>dKiy4H4cA3R{|0(J6c5E7eD8&{rr>j^wK{-2$#0Aw zfA80|t3EUZoJbr9=`ZNkURw=kXk4HE_U(qe8c15GPsL=d=DH03rx4P~6?2<=eC`6kxW-;DIL_p8f%MG*e zGZE52C5Hh07@GJA5!Jh~-ZOrbl9&Tlokj7KHbKr0ALW#f^YKT;zXHrK8}a-Hr5+3h znXn2`J!BYe*1bBFJFX65!e{&4lkog;spVJ7iZ~EH4x%fWGspm=wIoFWplk+GNUqHS zX;qmy?qhiy#`1R#+-Ud;U9AV#AetBt1PUdX!i2RNg4d6dP|D8|*kwMfV zhz(64YR1v}j{F+!Ssdj7&miSkO&pelB|>Ea^tINK76shE_AvxvzOkhi3s45IjDIfa zIjW*{TNM*|03iXfxemEO(ap3y^(j3RD*1t^%oc*71p+MA2Bc7vxct+GfIl!+D7A_~ zO7;<**djbnDc+M}|B#Tcw^CM*K=5gC+ywv8PF%j)EDqm4|M7VnOB7>2H#;yQ77Fv; zL03*$55lAVJYXNcf>_XwriJ_w%5e_$cuHCQb#b2Or7T!Dr^&sEL5C%`>=Uf+ zj`PCN8P<@L+rT|;z66b1*yLVIdpp=S>R!$qGY1PVE7Zsxr^&g*Ag~Z7NmwYm&~03^ z@saR6G{dvU0~&P%0>iQl-HhWH8{4yDF|&@b@AB>H6T2Dcx;!bmNqFxu5jL}!f%Ea5 z))El~Xm!CjH_VbkkOql8?G`-@*)~{%mc?2bmP25TPTk1}K7@9bK45k{r_Aeo&V`Ep zs5OlO?@Ixv!3n1;d{!DOwNW784QSOsjCl64Jw3rsi%dJ%A_fXf#7gT`q9s+q4A*hkbGJ7Va$GCkYGLoEPJtu}x2%?o)aw*s7 
z5KxEfrOq%DsB0tL_wP2<5fkm0U!XE&<6-uFayM}T^2>5aTQu_`;m7X}xoVKtIjIs{tF1j_pZz%1KeQ!1j6NA=Dx1-~ZlD|Fv>eGY?l!`3QKVm< zM{bz8$>w>sm*AK}%d31kKLi)m8R&ts%4L*XNUKrPAT_@Hr;!4Gl!HG5pdWPS9r^Dh z!cBb+zKF~N|4_Xfq1MchSIVXCtjh<3&1qX#0fQCrwS=jlrtm9ePy0wZ6j%$l6l~CZ zDyo6eJg#exqbg$hO?r+Q@uu;~Fa-GFum>>AFpwdkW)=rl`OPW6+mU{E^q&rBA>{a? zHI~1Nkc&;y5+o8~H+>$hv$`SHc*Ad}qH@C`*D$cF1g06)(HaRW1Pw!NApR$MCY8U% zI22Ys&XIJ5XF=3ec__la+H;BffR2T)D-#Ar3y*UR>awc%%UeYtY{3}0SVm}E5rh4Z z&5GwzM;>i&BWE*Npxwl`Pyv~!Dq6zV%OnE;n{AXIEHn6cqFB zXC{98@)Q)}#?z07EMctSvT4;KFwS4U47q0q!>uqh^F7^5C?1R5KIZ|L7nOJ(6l!7C zXn8PYiH_T8zf6o7p7-4p3WE%~jyeGj|2VqHFTd` z5C)E&K@Y88Bs>KGXEs`hV5D%A!Atncwf4Hg;E^2fJvl-(o`Qj9bUc3LFN<|{Z;j(l zZW$0Q$ShUu8OV@@Fb{|5m!JT+R)3S~(-O4Q2L|WQB~OBi-pm}HOW<8ro0B)IsR42% zrAvNRH&1!bm0)CJyfJ2F#iU9MtzZ8l1!$FGh)7J8<7rZ4hmO}iRTrFS31D<9e zSm!8Jbc%?sq!1!;Z&@;|yKJLC78s<^Ywx%Fv0~@md{MO^2b9sDqO!tr4VGU?ifeT` zB>wFVtNMaPd0Lu3Zi;+Z29AKR z16>b5q<8vLnnU(==?z1oyW#cu{d6=8s(zQN6Ux1U!aV!h#<=jJkoPJ@-IU8hKKmJ3 z)g9bV@Lg2S5Ka$8@t2Fsd)P1>k3awje*aC2fy9bS{%7g!4~jp^Lg%97e(>7fRg!17 zL=}d{a5Qy1+bbrDu{QPAREzG1eilg;p@ZQ&I=Ag}pj8=P0#~aU*L^H`@(p3Moqv=9 z&i896Y(Go6F4AB0%!9(4Y-+BxGsB^ehrEm`4U7(u$84^G9j0*`{q84vw*i1ArofF- z!e96nBfOgW!Dkjss12#^2H1)tSy?$_d%g~i-D+YA#olP`6QKZKn#6jFpM zUdmQemkJB!5xtdf#sbpJI$oe3_8qX4pU3YkM$8#4wEJ;3ZFTrhjpj~bY5@-9MsyzA zeN^^kMRulbEJY$a{`mc~V>cj~Z03;a_B5c=OMa4)IEs}B6xf!SI`9{t#hLtvN5fQA zAJ_hd3=3e4#lOhi!8{+i0G{P2Vt{4+MY{OqtTx8y4Ad?OziGpr5P+8 zLiyjE&leE8>zwDeFQaAv(0rhWR}c&c)7GqJp&rQPG4_r%9`vzvv$hkk1crJR4Fi6f-0~ysNmjPUAUg#=2M$*Tg!yu8^WK2Ak*8Uj^$g@Qz5rpKA8;xhlkvS{mLPd)ghCz|w zZB$iEZKA#yMh>rcHK<1KXzqwYcE>pjuKTshLG&^N^XO(gHQfPvf%iVp@P z0JpiEw(mcR9F7aM&!j~G%DPUi09YZG%E(7nZ-IeX?rQ3bx+-8`dDd=j~Z;Yq?VcShb_0kp*e23=L! zrYLl0pvIP=uhFvN{{h-bXttcFR=XZa_%dG%i*1K&zi02wkAtPR^UFPX;=uS}I}5d6 z1cU96>NhC^lHu6B3h)ZJ-%u#@Rv0!yaA?sA1RUn;fxQ5Xg?7}8Rxn7jbl)F+by$b*o@%5|HQ?}^Ux z6ePZ5d=8lCG?W=0c&%UT&y4LXe!G(mp2G^aG7i@@zMvd>-3@b*7;?vKiom)~ zdpc+xw7wsO0v(3wRu<9Hqj1~+p$`yb(i}DqqJE{#!Rt$Ky5Tty}3C$Qy1`9K=yp&r&v4Dx!F9VxEz*+2w5 zN-;r)bsKFMeef%MF6H__>#ThKmydTF0FXYL2C&#Fj+5t1<)zyCfY1-$M~#hD=XIoME_ZYnUbZfOnu;bH12yn-MuUpD7wqHKbdJCNjC}$l}m!L zpGe~+L2%PmPM2Lb1^QhZ>u_Sg?0e+mCxOk3WdjBc7$Kl$prytFLySLOu|MMnmA|^| z2fQE|$xs)2%vMP(LiY5sen+KDj4gY-zh_KG04p-)Ugw5Bas$@`Qvr?x%R4o1C1sAX z>rx+{Fx3#hLRTpXquO$ZWf@_fhmem^rE{CcVVrchG}clSNp?J2__Jy=fsO>_4E|P2=iro^zc~(*fHeSa%L!yAvK5%x{d+ z|H44xGlx^N(pn^;e6nVu3@U^}0zeABOl{#6F1P704yx&7gx;NshgQw$Cq4T5n znc~(V*T?RRN20+gGui@Tbu%!pmlq!C=8x;@=c;4Ee0CdPs?JM;o{voFbdZ}ohP!>w zw}I|)kg%F6M1(qz-+v;C(KdQbOgec#97`K5b>zGNj6OYIJQnm`xOQ)<7`&%#g&vo+ z;k}f6m~0FOf3b#$sys8;?NiZ$szmWIsG(rQjmYtaerfH~q5wa^DpHR_tJqHxa@vQR zw;_0id2{z+QWtzdNE33@kAdP3e)q1XC~faLN%=k*_aw^$n&R=b7O}Sy4e5{VVch1O?QEXSo9kFXiNSDEUNWCfi_2s#bctcXz2R7~n43G=YV;z2Ca* zQ48?H{Z{!zK{)E!w^Uj)7-xgB#Z#DJk=7qTO2Han)LUV>e~$B_@M!{L&A@*I%%43> z^C|_>V6fE@r+dWDK`6@Uo52<{bwn;kSzbNe&3h3I!+c*$NB9>=B6HX6HS$<6Q!%f( zI*i-sNLwz-J}bYlmQ*~wHIQ2qPMX!eDc6D7XaMXy zTM)quj@Eo)Wus?mBzZi3>Olzpu0Q2-8XS#4C?gcZL01Gq2=yk~_8+H!heKV`9pmkI z4tR6|tUdzBoib;#lrgjd*k?kzNF(O6+C)F`8@(Uy4+YIj{c955s{B^9$UnenwSZq9 z2k7?@=+Kg%AwG#c3WH+TVjs-441@XeV(1t|tFzc8@f`(3(~zc7t>ixNkj+i@oIHU% zt8NwL#K_z@quB(5ZK_wv=IPoc&=DEaJ;{U0GlLscz~8Ax5X}oR0J|bTNa)(1@2P$B z#3}?i2r=J% zwM%6%@284yC`91Vpx}M>@__cYxxUL%6Moa)cN$x99iC(&r4`M}%z1z+Wo6PQFDT(S z?gv>%Nu1>g;sxq9G5$HYjS;86##ZRph~3h9x1^Bs`uwU#L8ChJ$E+_3(=JG+w;U8#)SrX zEM;xS9WFSAJffu>7OiBCDij{0DVIQ=Z<6oC*Xm}iMIj1c?!IIJ=2kFNJRdYYh|g#yCwo;6lJJMBK=ecaj{ydgSbbS4 z19WtMa6!2{9I4rl?~D2Y3mb62g1KZF7DkDG^XH(VL{2j&lv(O6rBP!12-|QM z?$F(N+qq%feXB3t&$t6ffJIlIS)M`>s^k7G$M~=nmcJxx00hLCMBc+B@p$3@mM}EW zMo{-5cpcbuucw?5$jv7mx_Nwo7`T;0HPReuUMK 
zfi^}FtvvyhCw`h#o0D(ypLuy!4xQ|c(RmV)VLt`#GycR_rS+NT>tDc-*zZz0YbxZB zISQN#Ru;LvFewJ+(4=mu#<~q8z*%%cPMbPNuYxY{s)L$O5U zc`3!PZ-W#M`RJ2r&Lr08vmQ-x^S z$A6)K!#&AFNoX%J#>Yb52rf%NtrZ+k!bK_qCgCstaSuzn)-($I0~AmdOh<$kWp!1S z7qJ>KDzHphv&a?!1>YJ3h#oAP=@)s-m*7FkF**BWr2Arg3#(3oByzWxR+6X4NZ$si z6fB{~NF%kb7Hz3aUEDJJ5MtDEi|k7I7!-GTLb00owQp%5#5D#5K~NgGesMjnb&Poj zz>hi(G2s(#5AZfX0eeZ&nieAOVW!#?8+@wH&4-ruKx9m-yjF|@*}io@!;gbbDG`x& z1KZ~S%;m(^)o=i640B~6aXaTmPKC`*oNXa=iG0LIfmFlqYJ3-&12!U5zii@B38| z7qqs}%CQS#I_*1Kqwg!!^n@XG>b-N7LcRuF2C^emUJYKFPGUc{fr{BhJzd*=XbLz} zfAVA~r8wKP2)`H6-T*yP~*W>cjDp7IwLYvC{~AKC|oIS1}lu`GP&C@y769->^rC!RV6& z^A-348kx60-KO#_RdL!6H07*naRA69@)7vMD zi@1IW+ow?8$TBGg*m$HEq%nwWqKd>!ioe!03j9M9(7*S6Dc0|;CUOB#{aE=!A?2HJ zSR`oRste>>=W4qe1>P*I@eX7FDB=(ieZf}2Xsuo4{)7++w_8FKPe~T`K%p8ID`LE8 z(TW1EhuY?aBTA)G9$OQjT)x=GV>HrvCIHlF@m8~1!-$}@KcRq0I30ADL8mrFQ0hjE(yB$X9Wmu-KRJJxkEQ*irUgVinljw{K?>a9uJax+nLC zJsa~^7mw@{@VuNk5mQ=zfk>fEh24#Pq33^*VT%##j%Wx!&04&N^}Hh^9-#`rNi`;2 zHK;nvr;0)dTzX6KzmV>xS6po<`cKDLL%zT~9+EtYL(wsvb#E%o1$#V&`r5Ss`45jT zj*2M^{%a46d01XCt%-jQgW`{I4slhF#91uhsJSlcrLc6jJ^FA6u**j-BHnMJzC*Zt!Yzir)C@nW#@wkRM5obxBQ2$Z~Rk<^h;xA?T^U^q;3&Xii zT1o&_ryS~;@dC}>+>>{ zJZ}H4v%{l-)H%B$EL_fw*z)7vQza;-mSeIkpon4=+MNafSL^Y|d0s+FP}5_a(Ky-X zsG?%i$tIGYY@YHFMSgwmz^|f+q}oy{@@XXSQY63HaR)*QNZ8VS)cdLiDD^6Bt5M)> z3dCdNpn+?pC%s#26Py2|iRkTe=x<;DhyC9v^?&&8_lKcKE2R9ilka#JeDi#5-b7STL$NQKJd+sljwcu0W0SrG@I<^wlt9|lo zN{%QEky0y)|Bvwd1ts2Yz4OTo&V{!zkR}k!#b;tTc|!Ryc=W&(o3m=yytV@? z965AH`3M-So&XuO1XO%1I3ZnlVThk3a4Ke3Y}K+~Hm5;02U1Ic{#JiEg6eKZ_Od<^ zlaVdFYw}#uDo$ZfzmI*?QkA99Rf=Iw3lPf5L?Bs_Pj5J{3O zAFWeVEd=9Q9xTR_JgT1opf>nN_~*9$6sp>9$rrDDV8GSo<5G4rh*TIVe9vM!}Nc9*moNM ztPl~bU@V^_*QIyDZC;)X8*C>_7So%_cP9Ia>Iy>*{ABVkClEv1(yBfFit`&Z~a>L-Lo)ua#ipy@g1O}ZFRAT%OlIZNim3O)#g9U8~1d0 z?pPauDt#cJhsmYIOhDGF`gGS}92O~&Bexlz>^JSaBL)s)zaxh8%@;{*N6o$S8^Gq6 zZ+sC(A0bwpZDZR5pGSrx=Kor7Df?A;IV>7#1|ef&vN zp|IrP#&t;j`x#7SEIh9zm*|L4!wLb*B_IVx+}oQub_-)0TOAk{c}BTkGJ+h7?@{z_ zGmh2wq9U>WI(VAn7g&c1K!l}p{4Rx9 zxumk0(7nX7=SoQ6-7@hs1Nbv_^rA9sQu9*O^vv9Vtr9u(gd2D~MHc_knN;`-Wi zkzr3sbxejMP((~fb;A-+n+bH)ujVuU(N-Fx(T8N5CWndB4V_t6(^On zjYa_%p@0f4x{CBS5M<@2qOJmEW6~u`j85lr<&B%I?sJ8TZh3?DPK?}X-Al2(r~y!g6jO&kU%Bq@ zecW~wo066zM*tiFvBCCB;vkaV?eUz+?<2Oj+Kywq0zi}TTD^?dyYd8i^?6S}`RBqLD zTN*$~6fxFa)8m{tsQ&g-DC>{D4nIant*BxaUu3ml>9!^BKshSD$XxZjmtVr^B8BLs zB)zP}iYdM{6$zGbifipHfMF0w25GpIf;{|canJqn=d7_8sVmZUWKm+)dI)sLj>Fcd zTS0~?GTUow77<8m8UFd9B&p_*!1pvno>sPUB9e-uTemwK;omo z1DNjsA(BssstEdFIe#_RpiqGAPcJNqhXDJmba03ro)bO@P|8LS^w*#|!zpybRUz1a z6oR(Eb_#|HSZG=X1vKUAWbwLb$ko%W(FrdIP-m;X7>)Fk%qreRr-#xT&B#gO`ZvlDUTO^lpUEB zMp6m|vy$*#lG;-}lXhpXDge|+hWMe5ts z(xJw`*At@yxdmEB5#9BhrA7qxzmPs62PJErQKaz=PasrN2m4pmSHXKJ4~)jocn>f6 zSuBjiVI2m|~P zX_4RA-pvY3V6R}Xfc4FR_l3#H=)n*w$R)l3`V87+)@%O^v2h^g?LE+`siM2u>Vr-A8=7jU=P z@UOz7EZO-wrREARTHzxutyRF>X>_=%cl8%?1?rE8{&m97AwaB6*cG(3yJ zMZ${M>p;<3gd7FEpm?nn58keJet)5~?(-2kkuLdgsRI{G+=paT;nedL`TXr1|!O2GI8nB!xRm<^!e`C{V+wduiEC0Fw8+{e~JpL zyvirsnNNbb7KYn5%~cK7D&I3<++aqEAd!_~{nfT^^?|RV9@7!`MK%N`uVe!`5PdwH zRZvEnlHo%=Tia?B&?w*%6etGdBeZpc7k`SCgT3&NNkL^kzZucU&Bp_JcQ$iFs_vl_ zOW%neeFMy3z`J|)ZUcZB;xGxsu@LW}7@!vTSS5zddGCm8(J`_j8lb*9V*jL7D1VUx z57H0yFJXz)nnnSQ0xm`Yn$M}>Zv=OU>0~QHF}($9r!Yu^pog{ES2m^`5I-{_w#o3Y z@C|%18>}#%>{JgMUM)3>*u^^Sd|b-1PVERdT@DCPlY~2Lsmyp&e}SN*9SVN_oi_w| z^xV#$$&+Od7d-?)#+mU5MieWZvp{P% zD7KE2T^L_yfV&`<)@O)-H(~3l0z7r`&ziE9;)Kk|#eWIw=_Dx)%`Zzo+eyUtv1Rh^ zaZ#k`Gx0=J1by827~TL!8(Dt^i9%Vu|C#16O*nGjGkH16V?D(0?vdK^ujvP42g=D9 zTi9jdJrznWK!3+Od`L;wbkn6cWCviKPgSkx zJ{jc|r-v;-x>05xqbtO}M7N&-Z$)%b-@EHm;-Bq-1Zsg^2sCL$a;7O*+D-i6!MfOy 
z-xk+EAnlN>p|ghZtGnimE%w^oBFt`B;RH(zWE{~!1Luk6drV1zG|jO0jVnd%!|^-R z(84hRe#&(5I*Mt7NKLnej39Z$0uCyAlu}ON<=u-2b?c7!;^1PV%s8^)GGDlR<&>hK zdljsoj**pT0VfzE;R5lRv=}iVO{%P54fc&l*5lajb-gW*y!A(`U_ET_Y!*Ox<<>Oa zy;y|fu|`1;39Q_MM6LcB5g@R2NfRrNK?weI4J%~o0wX4Rtij0lNerIO@knkh!1KF9 zv^WJVBP>WBOS#YpPShiP8(JrURIaO5;=%UYuf@-OG_+!~FCvvHewz0Z+CX#bh+P~M z6mbbX;|+mF0NUl)9;rypY*>WB#4hU@zVQ6+C(4NP#JB~AKrjy0{K9tLI2_v?++yrG zXNhG_U)P3r0dqW48Iy>1&K3PRJ(-MFTl_LoZ_F;Z8XZ{xJehZ$ni9E*_y5>rJWiE9 zfYbKx?Th2}$+_Cdx|p5rE_wV#h5;~Xx6cM0y?td~4p81YDFQ{VwOpawZ+5EeB|u#H z?4P(5`Q}Cm7bs+#Wc33dKKmgXM#2~Q@zZ#q)K)EnP~OuPbu?^`j9gSiN~?ZA-@`g@ z{Cu=JnyS|N;RYyV&5Fl)&1R!+Tl?6WPRy660LhhFzHg(D&dpI<)GHAu`8C9=(VP_a z&2La%pSrJ)|JFHF1$MKE;Z{t7ZI`CLZdoTxQ7X;gO(yrlE>xkV$WJd$W$=!gNcI2S za%t*iPEFXF9Hz=cXu)LlJtHU{OJAASKyP&S7ake>O{`^?0Nx zAwN z79(2_c{%s?#+^Ajql7st#=jgjb8paCwKx{o$L@mCyFdHl9MQ;m$f+ON3?;l3x?w){ zKE!^Om5ukN0<}pOxt=&2`{Bp?1l7R;zN$e5&+OGWnG3a#J+jsIWn98nH6d~H^Fdnuvc6+HQkxp&fAU@PJ3T> zWIO77<+Y!dG%bfZ&(EUIUu?2p#?J;s@w;uErRAukh;iUNdet2XZSV)h{!t6GP4D-1 z>f;_dNwJcz)aHp%iVx7Dejch53tkN?_%DzjG8Wk1^@B=LZ4nR#JCT zWehWgY4PGM`rrAAeoNQCtPZKZTQXR9bPhPAuG07?MyI4RFS`45DFHQVS>2-rVH`ZM0*1!qYNH0q@)C%2 zA3Yzg5_f{Z znJ%t)-qk)GNy%ZI)9KjWNDHa_7t}&`0r6hG(Dp>3SQygdhsJ6I0Os$jRg48bYH)Y8 zDZMP8Z+De#6&fvdPA?k8c*1v7ez+dH__tNd)z>XZIhi~yd!L;MU^}E>V#c9EutqPU z_&M*010W`N2WV$B2ODF{$mQNHs@CgKgr$#$S8YEfU&7X5a?01wrvuDrjtq2N%Y}PQ z3;w_4`(5DdWb#asy0FTe%t^}J&IMYs{*B3{N%`;MMV{f>zo9)fEVG;Q6eohH@WW-9 zSBe0o5Ggf7c1eA@T2i!Fs>NC*oDj;QgiZH+9@9rM64@jLX-9| z-JnpI0P+J;Q4ytKT)+>H+n}F!DvKfS;D^Q-W1iGI5#Vf+AY9Whiy?az$=Nx^?1D1Q z)~g}!OGD){MBMz7Rm4in`_dD+vi`g40VHp9v|Xl4eka}f2gvanMezW4n+9Yo;dlS7te$)I~*tO)6bnZr^s zKuT-<79p;JA$rue4U>O#4VhgrV- z)N89ASgs?M8%8!tSh$iaho5L$F%&5nrfPb2;P1#{g0f-R=8nP$M2Ri6D!%H)Y8cs3 zbe=?HNK$7loLrjan~hNccjOA?r*5!^(~R{u_3mE?t}lsXzZ4FXZzU$* zeQ4mqJjZ^xzES_wJrv;2ZFt_3W4PIxYCazi$ZZs5-7XBcPxbf`usOp|t`5AOjNE#* zgg0~qgs%AD1q>j$V(}nCc%TBgH|W;DZnc11DRN)5(ZW?@+XEXXJt8hb(DpON@kGQr;>)D$d-^nv&C-zjZ7^-kI)LUhXLUB0gSTYO^ImifM# z7mr0ee3=XHPJGIPs8N%hu4b`xM`SWw>kKF`PT(N&+j}H**YCFHEnsqCR_#j2ns5~G zL#H=5uB;hezbziQdyA??J4xJTxzh^$mX8-D_f{YX%mv7l4HNxVdzt-Z%c@}g3K3vo z;GDsH0Ndd1+W>CHCRTZ}gEm{Y5nL9t3aBQFcq=3{)wO?>xqh#%^npu>z>Lhc%IkYZ z8M7r`)7oE_Xb79P4A|8G$lU(5JZQiH#ScieR)ybu3?^kyr#<3ZeVLm_lq=r7X?aYP zSalVuMK%!uCa&BXO>>vHTBX)%Iq_H8kCH4&p=kuEqj}`5!8O`R>JK4>rD%pzhn9Uy&IBuF~y{l8i1Lo7Ol-ADCehs^& zyvslTC-0v9n53O43ED&-_^93H8Wb}Z|I`c7jNfm{%;F8{PvrjV%UKCEb52__8tVU3 zK)6QtKiNAIwl@Q?S8v4sO_~2=l>fQz;Ir3p-fv{^u=W4O{PzZlN-7)1?r^!kqI;9) Q_i+#F%XSv;&F((=7r6j^pa1{> literal 0 HcmV?d00001 diff --git a/static/manifest.webmanifest b/static/manifest.webmanifest new file mode 100644 index 0000000000..8ae9a16ab3 --- /dev/null +++ b/static/manifest.webmanifest @@ -0,0 +1,22 @@ +{ + "short_name": "Kubernetes", + "name": "Kubernetes", + "description": "An open-source system for automating deployment, scaling, and management of containerized applications", + "icons": [ + { + "src": "/images/kubernetes-192x192.png", + "type": "image/png", + "sizes": "192x192" + }, + { + "src": "/images/kubernetes-512x512.png", + "type": "image/png", + "sizes": "512x512" + } + ], + "start_url": "/", + "background_color": "#326ce5", + "display": "browser", + "scope": "/", + "theme_color": "#326ce5" +} From ded8bdb23850cbafdc37b2f0599b2ab6253ec10b Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Sun, 8 Dec 2019 18:11:39 +0000 Subject: [PATCH 050/105] Update Casandra StatefulSet tutorial MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Retitle to use StatefulSet term * Use full name: Apache Cassandra * 
Always use StatefulSet from apps/v1 * (It's safe to assume readers are using Kubernetes ≥ v1.9) * Update to match website style guidelines * Move note about environment variables out of overview * Tweak wording for clarity & tidy whitespace * Tweak chained shell commands for clarity * Drop duplicated / unnecessary task prerequisites --- .../stateful-application/cassandra.md | 172 ++++++++++-------- 1 file changed, 93 insertions(+), 79 deletions(-) diff --git a/content/en/docs/tutorials/stateful-application/cassandra.md b/content/en/docs/tutorials/stateful-application/cassandra.md index a20d93926d..f55a852abb 100644 --- a/content/en/docs/tutorials/stateful-application/cassandra.md +++ b/content/en/docs/tutorials/stateful-application/cassandra.md @@ -1,5 +1,5 @@ --- -title: "Example: Deploying Cassandra with Stateful Sets" +title: "Example: Deploying Cassandra with a StatefulSet" reviewers: - ahmetb content_template: templates/tutorial @@ -7,79 +7,66 @@ weight: 30 --- {{% capture overview %}} -This tutorial shows you how to develop a native cloud [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. In this example, a custom Cassandra *SeedProvider* enables Cassandra to discover new Cassandra nodes as they join the cluster. +This tutorial shows you how to run [Apache Cassandra](http://cassandra.apache.org/) on Kubernetes. Cassandra, a database, needs persistent storage to provide data durability (application _state_). In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster. -*StatefulSets* make it easier to deploy stateful applications within a clustered environment. For more information on the features used in this tutorial, see the [*StatefulSet*](/docs/concepts/workloads/controllers/statefulset/) documentation. +*StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster. For more information on the features used in this tutorial, see [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). -**Cassandra on Docker** - -The *Pods* in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) -image from Google's [container registry](https://cloud.google.com/container-registry/docs/). -The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) -and includes OpenJDK 8. - -This image includes a standard Cassandra installation from the Apache Debian repo. -By using environment variables you can change values that are inserted into `cassandra.yaml`. - -| ENV VAR | DEFAULT VALUE | -| ------------- |:-------------: | -| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | -| `CASSANDRA_NUM_TOKENS` | `32` | -| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | +{{< note >}} +Cassandra and Kubernetes both use the term _node_ to mean a member of a cluster. In this +tutorial, the Pods that belong to the StatefulSet are Cassandra nodes and are members +of the Cassandra cluster (called a _ring_). When those Pods run in your Kubernetes cluster, +the Kubernetes control plane schedules those Pods onto Kubernetes +{{< glossary_tooltip text="Nodes" term_id="node" >}}. +When a Cassandra node starts, it uses a _seed list_ to bootstrap discovery of other +nodes in the ring. +This tutorial deploys a custom Cassandra seed provider that lets the database discover +new Cassandra Pods as they appear inside your Kubernetes cluster. 
+{{< /note >}} {{% /capture %}} {{% capture objectives %}} -* Create and validate a Cassandra headless [*Service*](/docs/concepts/services-networking/service/). -* Use a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) to create a Cassandra ring. -* Validate the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). -* Modify the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). -* Delete the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) and its [Pods](/docs/concepts/workloads/pods/pod/). +* Create and validate a Cassandra headless {{< glossary_tooltip text="Service" term_id="service" >}}. +* Use a {{< glossary_tooltip term_id="StatefulSet" >}} to create a Cassandra ring. +* Validate the StatefulSet. +* Modify the StatefulSet. +* Delete the StatefulSet and its {{< glossary_tooltip text="Pods" term_id="pod" >}}. {{% /capture %}} {{% capture prerequisites %}} -To complete this tutorial, you should already have a basic familiarity with [Pods](/docs/concepts/workloads/pods/pod/), [Services](/docs/concepts/services-networking/service/), and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/). In addition, you should: +{{< include "task-tutorial-prereqs.md" >}} -* [Install and Configure](/docs/tasks/tools/install-kubectl/) the *kubectl* command-line tool +To complete this tutorial, you should already have a basic familiarity with {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Services" term_id="service" >}}, and {{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}. -* Download [`cassandra-service.yaml`](/examples/application/cassandra/cassandra-service.yaml) - and [`cassandra-statefulset.yaml`](/examples/application/cassandra/cassandra-statefulset.yaml) - -* Have a supported Kubernetes cluster running - -{{< note >}} -Please read the [setup](/docs/setup/) if you do not already have a cluster. -{{< /note >}} - -### Additional Minikube Setup Instructions +### Additional Minikube setup instructions {{< caution >}} -[Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings: +[Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MiB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings: ```shell minikube start --memory 5120 --cpus=4 ``` {{< /caution >}} - + {{% /capture %}} {{% capture lessoncontent %}} -## Creating a Cassandra Headless Service +## Creating a headless Service for Cassandra {#creating-a-cassandra-headless-service} -A Kubernetes [Service](/docs/concepts/services-networking/service/) describes a set of [Pods](/docs/concepts/workloads/pods/pod/) that perform the same task. +In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task. -The following `Service` is used for DNS lookups between Cassandra Pods and clients within the Kubernetes cluster. +The following Service is used for DNS lookups between Cassandra Pods and clients within your cluster: {{< codenew file="application/cassandra/cassandra-service.yaml" >}} -1. 
Launch a terminal window in the directory you downloaded the manifest files. -1. Create a Service to track all Cassandra StatefulSet nodes from the `cassandra-service.yaml` file: +Create a Service to track all Cassandra StatefulSet members from the `cassandra-service.yaml` file: - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml - ``` +```shell +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml +``` -### Validating (optional) + +### Validating (optional) {#validating} Get the Cassandra Service. @@ -94,9 +81,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cassandra ClusterIP None 9042/TCP 45s ``` -Service creation failed if anything else is returned. Read [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) for common issues. +If you don't see a Service named `cassandra`, that means creation failed. Read [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) for help troubleshooting common issues. -## Using a StatefulSet to Create a Cassandra Ring +## Using a StatefulSet to create a Cassandra ring The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods. @@ -106,14 +93,23 @@ This example uses the default provisioner for Minikube. Please update the follow {{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}} -1. Update the StatefulSet if necessary. -1. Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file: +Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file: - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml - ``` +```shell +# Use this if you are able to apply cassandra-statefulset.yaml unmodified +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml +``` -## Validating The Cassandra StatefulSet +If you need to modify `cassandra-statefulset.yaml` to suit your cluster, download +https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply +that manifest, from the folder you saved the modified version into: +```shell +# Use this if you needed to modify cassandra-statefulset.yaml locally +kubectl apply -f cassandra-statefulset.yaml +``` + + +## Validating the Cassandra StatefulSet 1. Get the Cassandra StatefulSet: @@ -121,31 +117,32 @@ This example uses the default provisioner for Minikube. Please update the follow kubectl get statefulset cassandra ``` - The response should be: + The response should be similar to: ``` NAME DESIRED CURRENT AGE cassandra 3 0 13s ``` - The `StatefulSet` resource deploys Pods sequentially. + The `StatefulSet` resource deploys Pods sequentially. 1. Get the Pods to see the ordered creation status: ```shell kubectl get pods -l="app=cassandra" ``` - - The response should be: - + + The response should be similar to: + ```shell NAME READY STATUS RESTARTS AGE cassandra-0 1/1 Running 0 1m cassandra-1 0/1 ContainerCreating 0 8s ``` - - It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns: - + + It can take several minutes for all three Pods to deploy. Once they are deployed, the same command + returns output similar to: + ``` NAME READY STATUS RESTARTS AGE cassandra-0 1/1 Running 0 10m @@ -153,13 +150,14 @@ This example uses the default provisioner for Minikube. Please update the follow cassandra-2 1/1 Running 0 8m ``` -3. 
Run the Cassandra [nodetool](https://wiki.apache.org/cassandra/NodeTool) to display the status of the ring. +3. Run the Cassandra [nodetool](https://cwiki.apache.org/confluence/display/CASSANDRA2/NodeTool) inside the first Pod, to + display the status of the ring. ```shell kubectl exec -it cassandra-0 -- nodetool status ``` - The response should look something like this: + The response should look something like: ``` Datacenter: DC1-K8Demo @@ -174,7 +172,7 @@ This example uses the default provisioner for Minikube. Please update the follow ## Modifying the Cassandra StatefulSet -Use `kubectl edit` to modify the size of a Cassandra StatefulSet. +Use `kubectl edit` to modify the size of a Cassandra StatefulSet. 1. Run the following command: @@ -182,14 +180,14 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet. kubectl edit statefulset cassandra ``` - This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the `StatefulSet` file: + This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the StatefulSet file: ```yaml # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # - apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 + apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: 2016-08-13T18:40:58Z @@ -204,50 +202,66 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet. replicas: 3 ``` -1. Change the number of replicas to 4, and then save the manifest. +1. Change the number of replicas to 4, and then save the manifest. - The `StatefulSet` now contains 4 Pods. + The StatefulSet now scales to run with 4 Pods. -1. Get the Cassandra StatefulSet to verify: +1. Get the Cassandra StatefulSet to verify your change: ```shell kubectl get statefulset cassandra ``` - The response should be + The response should be similar to: ``` NAME DESIRED CURRENT AGE cassandra 4 4 36m ``` - + {{% /capture %}} {{% capture cleanup %}} -Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources. +Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources. {{< warning >}} Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes to also be deleted. Never assume you’ll be able to access data if its volume claims are deleted. {{< /warning >}} -1. Run the following commands (chained together into a single command) to delete everything in the Cassandra `StatefulSet`: +1. 
Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet: ```shell - grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ + grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ && kubectl delete statefulset -l app=cassandra \ - && echo "Sleeping $grace" \ + && echo "Sleeping ${grace} seconds" 1>&2 \ && sleep $grace \ - && kubectl delete pvc -l app=cassandra + && kubectl delete persistentvolumeclaim -l app=cassandra ``` -1. Run the following command to delete the Cassandra Service. +1. Run the following command to delete the Service you set up for Cassandra: ```shell kubectl delete service -l app=cassandra ``` -{{% /capture %}} +## Cassandra container environment variables +The Pods in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) +image from Google's [container registry](https://cloud.google.com/container-registry/docs/). +The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) +and includes OpenJDK 8. + +This image includes a standard Cassandra installation from the Apache Debian repo. +By using environment variables you can change values that are inserted into `cassandra.yaml`. + +| Environment variable | Default value | +| ------------------------ |:---------------: | +| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | +| `CASSANDRA_NUM_TOKENS` | `32` | +| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | + + +{{% /capture %}} {{% capture whatsnext %}} * Learn how to [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/). From a105a3142794eb8ce8c93a9b9b130fd89510ddc4 Mon Sep 17 00:00:00 2001 From: varadaprasanth <48983411+varadaprasanth@users.noreply.github.com> Date: Mon, 6 Apr 2020 17:58:39 +0530 Subject: [PATCH 051/105] Fix on PersistentVolume Modified PersistentVolume and PersitentVolumeClaim terms as per the standards --- content/en/docs/concepts/storage/persistent-volumes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 99c21105a9..3d444584ee 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -28,9 +28,9 @@ This document describes the current state of _persistent volumes_ in Kubernetes. Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim. -A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. 
+A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
 
-A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
+A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
 
 While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.

From 18260ef37bb8ee709b21162696c1739d7f3f81d5 Mon Sep 17 00:00:00 2001
From: "Julian V. Modesto" 
Date: Sun, 8 Mar 2020 12:05:52 -0400
Subject: [PATCH 052/105] Document where the API supports single resources.

Co-Authored-By: Tim Bannister 
---
 content/en/docs/reference/using-api/api-concepts.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index fbdbf14f87..2eaa7421c0 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -334,6 +334,17 @@ are not vulnerable to ordering changes in the list.
 
 Once the last finalizer is removed, the resource is actually removed from etcd.
 
+## Single resource API
+
+API verbs GET, CREATE, UPDATE, PATCH, DELETE and PROXY support single resources only.
+These verbs do not support submitting multiple resources together in an
+ordered or unordered list or transaction.
+Clients, including kubectl, parse a list of resources and make
+single-resource API requests.
+
+API verbs LIST and WATCH support getting multiple resources, and
+DELETECOLLECTION supports deleting multiple resources.
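To make the distinction concrete, here is an illustrative sketch only (the deployment name and namespace are placeholders, not taken from the patch above): a single-resource request addresses exactly one object by name, a collection request addresses the whole list, and `kubectl apply` turns a multi-resource manifest into a series of single-resource requests.

```shell
# Single-resource GET: one object, addressed by name.
kubectl get --raw /apis/apps/v1/namespaces/default/deployments/my-deployment

# LIST: the same path without a name returns the whole collection.
kubectl get --raw /apis/apps/v1/namespaces/default/deployments

# A manifest containing several documents is split client-side:
# kubectl issues one single-resource request per object it finds.
kubectl apply -f manifest-with-several-resources.yaml
```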
+ ## Dry-run {{< feature-state for_k8s_version="v1.18" state="stable" >}} From 7bd283f8a4901160a12a35c08e6aeead32d0abb0 Mon Sep 17 00:00:00 2001 From: Claudia Nadolny Date: Mon, 6 Apr 2020 10:55:46 -0700 Subject: [PATCH 053/105] Kubernetes 1.18 Blog: API Priority and Fairness Alpha (#20090) --- ...0-04-06-API-Priority-and-Fairness-Alpha.md | 68 +++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md diff --git a/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md new file mode 100644 index 0000000000..8f483c82c1 --- /dev/null +++ b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md @@ -0,0 +1,68 @@ +--- +layout: blog +title: "API Priority and Fairness Alpha" +date: 2020-04-06 +slug: kubernetes-1-18-feature-api-priority-and-fairness-alpha +--- + +**Authors:** Min Kim (Ant Financial), Mike Spreitzer (IBM), Daniel Smith (Google) + +This blog describes “API Priority And Fairness”, a new alpha feature in Kubernetes 1.18. API Priority And Fairness permits cluster administrators to divide the concurrency of the control plane into different weighted priority levels. Every request arriving at a kube-apiserver will be categorized into one of the priority levels and get its fair share of the control plane’s throughput. + +## What problem does this solve? +Today the apiserver has a simple mechanism for protecting itself against CPU and memory overloads: max-in-flight limits for mutating and for readonly requests. Apart from the distinction between mutating and readonly, no other distinctions are made among requests; consequently, there can be undesirable scenarios where one subset of the requests crowds out other requests. + +In short, it is far too easy for Kubernetes workloads to accidentally DoS the apiservers, causing other important traffic--like system controllers or leader elections---to fail intermittently. In the worst cases, a few broken nodes or controllers can push a busy cluster over the edge, turning a local problem into a control plane outage. + +## How do we solve the problem? +The new feature “API Priority and Fairness” is about generalizing the existing max-in-flight request handler in each apiserver, to make the behavior more intelligent and configurable. The overall approach is as follows. + +1. Each request is matched by a _Flow Schema_. The Flow Schema states the Priority Level for requests that match it, and assigns a “flow identifier” to these requests. Flow identifiers are how the system determines whether requests are from the same source or not. +2. Priority Levels may be configured to behave in several ways. Each Priority Level gets its own isolated concurrency pool. Priority levels also introduce the concept of queuing requests that cannot be serviced immediately. +3. To prevent any one user or namespace from monopolizing a Priority Level, they may be configured to have multiple queues. [“Shuffle Sharding”](https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/#What_is_shuffle_sharding.3F) is used to assign each flow of requests to a subset of the queues. +4. Finally, when there is capacity to service a request, a [“Fair Queuing”](https://en.wikipedia.org/wiki/Fair_queuing) algorithm is used to select the next request. Within each priority level the queues compete with even fairness. + +Early results have been very promising! 
Take a look at this [analysis](https://github.com/kubernetes/kubernetes/pull/88177#issuecomment-588945806).
+
+## How do I try this out?
+To try out the feature, you need to prepare the following:
+
+* Download and install kubectl v1.18.0 or later
+* Enable the new API groups with the command line flag `--runtime-config="flowcontrol.apiserver.k8s.io/v1alpha1=true"` on the kube-apiservers
+* Switch on the feature gate with the command line flag `--feature-gates=APIPriorityAndFairness=true` on the kube-apiservers
+
+After successfully starting your kube-apiservers, you will see a few default FlowSchema and PriorityLevelConfiguration resources in the cluster. These default configurations are designed to provide general protection and traffic management for your cluster.
+You can examine and customize the default configuration by running the usual tools, e.g.:
+
+* `kubectl get flowschemas`
+* `kubectl get prioritylevelconfigurations`
+
+
+## How does this work under the hood?
+Upon arrival at the handler, a request is assigned to exactly one priority level and exactly one flow within that priority level. Hence, understanding how FlowSchema and PriorityLevelConfiguration work will help you manage the request traffic going through your kube-apiservers.
+
+* FlowSchema: A FlowSchema identifies a PriorityLevelConfiguration object and the way to compute the request’s “flow identifier”. Currently we support matching requests according to: the identity making the request, the verb, and the target object. The identity can match in terms of: a username, a user group name, or a ServiceAccount. As for the target objects, we can match by apiGroup, resource[/subresource], and namespace.
+  * The flow identifier is used for shuffle sharding, so it’s important that requests have the same flow identifier if they are from the same source! We like to consider scenarios with “elephants” (which send many/heavy requests) vs “mice” (which send few/light requests): it is important to make sure the elephant’s requests all get the same flow identifier, otherwise they will look like many different mice to the system!
+  * See the API Documentation [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)!
+
+* PriorityLevelConfiguration: Defines a priority level.
+  * For apiserver self requests, and any reentrant traffic (e.g., admission webhooks which themselves make API requests), a Priority Level can be marked “exempt”, which means that no queueing or limiting of any sort is done. This is to prevent priority inversions.
+  * Each non-exempt Priority Level is configured with a number of "concurrency shares" and gets an isolated pool of concurrency to use. Requests of that Priority Level run in that pool when it is not full, never anywhere else. Each apiserver is configured with a total concurrency limit (taken to be the sum of the old limits on mutating and readonly requests), and this is then divided among the Priority Levels in proportion to their concurrency shares.
+  * A non-exempt Priority Level may select a number of queues and a "hand size" to use for the shuffle sharding. Shuffle sharding maps flows to queues in a way that is better than consistent hashing. A given flow has access to a small collection of queues, and for each incoming request the shortest queue is chosen. When a Priority Level has queues, it also sets a limit on queue length.
There is also a limit placed on how long a request can wait in its queue; this is a fixed fraction of the apiserver's request timeout. A request that cannot be executed and cannot be queued (any longer) is rejected. + * Alternatively, a non-exempt Priority Level may select immediate rejection instead of waiting in a queue. + * See the [API documentation](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io) for this feature. + +## What’s missing? When will there be a beta? +We’re already planning a few enhancements based on alpha and there will be more as users send feedback to our community. Here’s a list of them: + +* Traffic management for WATCH and EXEC requests +* Adjusting and improving the default set of FlowSchema/PriorityLevelConfiguration +* Enhancing observability on how this feature works +* Join the discussion [here](https://github.com/kubernetes/enhancements/pull/1632) + +Possibly treat LIST requests differently depending on an estimate of how big their result will be. + +## How can I get involved? +As always! Reach us on slack [#sig-api-machinery](https://kubernetes.slack.com/messages/sig-api-machinery), or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery). We have lots of exciting features to build and can use all sorts of help. + +Many thanks to the contributors that have gotten this feature this far: Aaron Prindle, Daniel Smith, Jonathan Tomer, Mike Spreitzer, Min Kim, Bruce Ma, Yu Liao! From 0bd6f53d4af03426d66d8a77a4dde68ce22a9c79 Mon Sep 17 00:00:00 2001 From: Daniel Barclay Date: Mon, 6 Apr 2020 19:39:55 -0400 Subject: [PATCH 054/105] Restored accidentally deleted space. --- .../configure-access-multiple-clusters.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 6e87400937..ed7dcb2892 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -153,7 +153,7 @@ The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above are the placehold for the pathnames of the certificate files. You need change these to the actual pathnames of certificate files in your environment. -Sometimes you may want to use Base64-encoded data embeddedhere instead of separate +Sometimes you may want to use Base64-encoded data embedded here instead of separate certificate files; in that case you need add the suffix `-data` to the keys, for example, `certificate-authority-data`, `client-certificate-data`, `client-key-data`. 
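As a rough illustration of that embedded form (the cluster and user names below are placeholders, and `fake-ca-file`, `fake-cert-file`, and `fake-key-file` stand in for real certificate paths as in the text above), `kubectl config` can generate the `*-data` keys for you instead of hand-editing the file:

```shell
# --embed-certs=true stores Base64-encoded file contents under the *-data keys
# (certificate-authority-data, client-certificate-data, client-key-data)
# rather than recording paths to separate certificate files.
kubectl config set-cluster development \
  --server=https://1.2.3.4 \
  --certificate-authority=fake-ca-file \
  --embed-certs=true

kubectl config set-credentials developer \
  --client-certificate=fake-cert-file \
  --client-key=fake-key-file \
  --embed-certs=true
```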
From bf6f3da3f50730734280b93e20a3d847a1eaa076 Mon Sep 17 00:00:00 2001 From: yue9944882 <291271447@qq.com> Date: Tue, 7 Apr 2020 11:59:49 +0800 Subject: [PATCH 055/105] adding mengyi zhou to apf thank list --- .../blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md index 8f483c82c1..c4c983c543 100644 --- a/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md +++ b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md @@ -65,4 +65,4 @@ Possibly treat LIST requests differently depending on an estimate of how big the ## How can I get involved? As always! Reach us on slack [#sig-api-machinery](https://kubernetes.slack.com/messages/sig-api-machinery), or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery). We have lots of exciting features to build and can use all sorts of help. -Many thanks to the contributors that have gotten this feature this far: Aaron Prindle, Daniel Smith, Jonathan Tomer, Mike Spreitzer, Min Kim, Bruce Ma, Yu Liao! +Many thanks to the contributors that have gotten this feature this far: Aaron Prindle, Daniel Smith, Jonathan Tomer, Mike Spreitzer, Min Kim, Bruce Ma, Yu Liao, Mengyi Zhou! From fb38dd6d50bb4e8d7fe52b77bbda5f2040be933e Mon Sep 17 00:00:00 2001 From: tanjunchen Date: Tue, 7 Apr 2020 14:03:35 +0800 Subject: [PATCH 056/105] add zh/ prefix of links in zh/ directory --- content/zh/docs/tutorials/_index.md | 56 ++++++++--------- .../zh/docs/tutorials/clusters/apparmor.md | 8 +-- .../configure-redis-using-configmap.md | 12 ++-- content/zh/docs/tutorials/hello-minikube.md | 34 +++++------ .../zh/docs/tutorials/services/source-ip.md | 16 ++--- .../basic-stateful-set.md | 60 +++++++++---------- .../stateful-application/cassandra.md | 22 +++---- .../mysql-wordpress-persistent-volume.md | 26 ++++---- .../stateful-application/zookeeper.md | 50 ++++++++-------- .../expose-external-ip-address.md | 24 ++++---- .../stateless-application/guestbook.md | 24 ++++---- 11 files changed, 166 insertions(+), 166 deletions(-) diff --git a/content/zh/docs/tutorials/_index.md b/content/zh/docs/tutorials/_index.md index 0c99aa6d98..4cd5bb8772 100644 --- a/content/zh/docs/tutorials/_index.md +++ b/content/zh/docs/tutorials/_index.md @@ -16,17 +16,17 @@ content_template: templates/concept {{% capture overview %}} -Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](/docs/tasks/)更大的目标。 +Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](zh/docs/tasks/)更大的目标。 通常一个教程有几个部分,每个部分都有一系列步骤。在浏览每个教程之前, -您可能希望将[标准化术语表](/docs/reference/glossary/)页面添加到书签,供以后参考。 +您可能希望将[标准化术语表](zh/docs/reference/glossary/)页面添加到书签,供以后参考。 {{% /capture %}} @@ -39,10 +39,10 @@ Before walking through each tutorial, you may want to bookmark the ## Basics --> -* [Kubernetes 基础知识](/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 +* [Kubernetes 基础知识](zh/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 * [使用 Kubernetes (Udacity) 的可伸缩微服务](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) @@ -57,10 +57,10 @@ Before walking through each tutorial, you may want to bookmark the * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) --> -* [你好 Minikube](/docs/tutorials/hello-minikube/) +* [你好 
Minikube](zh/docs/tutorials/hello-minikube/) ## 配置 @@ -69,10 +69,10 @@ Before walking through each tutorial, you may want to bookmark the ## Configuration --> -* [使用一个 ConfigMap 配置 Redis](/docs/tutorials/configuration/configure-redis-using-configmap/) +* [使用一个 ConfigMap 配置 Redis](zh/docs/tutorials/configuration/configure-redis-using-configmap/) ## 无状态应用程序 @@ -81,16 +81,16 @@ Before walking through each tutorial, you may want to bookmark the ## Stateless Applications --> -* [公开外部 IP 地址访问集群中的应用程序](/docs/tutorials/stateless-application/expose-external-ip-address/) +* [公开外部 IP 地址访问集群中的应用程序](zh/docs/tutorials/stateless-application/expose-external-ip-address/) -* [示例:使用 Redis 部署 PHP 留言板应用程序](/docs/tutorials/stateless-application/guestbook/) +* [示例:使用 Redis 部署 PHP 留言板应用程序](zh/docs/tutorials/stateless-application/guestbook/) ## 有状态应用程序 @@ -99,28 +99,28 @@ Before walking through each tutorial, you may want to bookmark the ## Stateful Applications --> -* [StatefulSet 基础](/docs/tutorials/stateful-application/basic-stateful-set/) +* [StatefulSet 基础](zh/docs/tutorials/stateful-application/basic-stateful-set/) -* [示例:WordPress 和 MySQL 使用持久卷](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) +* [示例:WordPress 和 MySQL 使用持久卷](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) -* [示例:使用有状态集部署 Cassandra](/docs/tutorials/stateful-application/cassandra/) +* [示例:使用有状态集部署 Cassandra](zh/docs/tutorials/stateful-application/cassandra/) -* [运行 ZooKeeper,CP 分布式系统](/docs/tutorials/stateful-application/zookeeper/) +* [运行 ZooKeeper,CP 分布式系统](zh/docs/tutorials/stateful-application/zookeeper/) ## CI/CD 管道 @@ -155,33 +155,33 @@ Before walking through each tutorial, you may want to bookmark the ## 集群 -* [AppArmor](/docs/tutorials/clusters/apparmor/) +* [AppArmor](zh/docs/tutorials/clusters/apparmor/) ## 服务 -* [使用源 IP](/docs/tutorials/services/source-ip/) +* [使用源 IP](zh/docs/tutorials/services/source-ip/) {{% /capture %}} {{% capture whatsnext %}} -如果您想编写教程,请参阅[使用页面模板](/docs/home/contribute/page-templates/) +如果您想编写教程,请参阅[使用页面模板](zh/docs/home/contribute/page-templates/) 以获取有关教程页面类型和教程模板的信息。 diff --git a/content/zh/docs/tutorials/clusters/apparmor.md b/content/zh/docs/tutorials/clusters/apparmor.md index 627a52ede9..e44f1247cb 100644 --- a/content/zh/docs/tutorials/clusters/apparmor.md +++ b/content/zh/docs/tutorials/clusters/apparmor.md @@ -416,23 +416,23 @@ Events: nodes. 
There are lots of ways to setup the profiles though, such as: --> Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。有很多方法可以设置配置文件,例如: - -* 通过在每个节点上运行 Pod 的[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)确保加载了正确的配置文件。可以找到一个示例实现[这里](https://git.k8s.io/kubernetes/test/images/apparmor-loader)。 +* 通过在每个节点上运行 Pod 的[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/)确保加载了正确的配置文件。可以找到一个示例实现[这里](https://git.k8s.io/kubernetes/test/images/apparmor-loader)。 * 在节点初始化时,使用节点初始化脚本(例如 Salt 、Ansible 等)或镜像。 * 通过将配置文件复制到每个节点并通过 SSH 加载它们,如[示例](#example)。 -调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/docs/concepts/configuration/assign pod node/)确保 Pod 在具有所需配置文件的节点上运行。 +调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/zh/docs/concepts/configuration/assign pod node/)确保 Pod 在具有所需配置文件的节点上运行。 ### 使用 PodSecurityPolicy 限制配置文件 diff --git a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md index 7150b572e1..614c7021d8 100644 --- a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -9,9 +9,9 @@ content_template: templates/tutorial {{% capture overview %}} -这篇文档基于[使用 ConfigMap 来配置 Containers](/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。 +这篇文档基于[使用 ConfigMap 来配置 Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。 {{% /capture %}} @@ -40,10 +40,10 @@ This page provides a real world example of how to configure Redis using a Config * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * 此页面上显示的示例适用于 `kubectl` 1.14和在其以上的版本。 -* 理解[使用ConfigMap来配置Containers](/docs/tasks/configure-pod-container/configure-pod-configmap/)。 +* 理解[使用ConfigMap来配置Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 {{% /capture %}} @@ -156,9 +156,9 @@ kubectl delete pod redis {{% capture whatsnext %}} -* 了解有关 [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。 +* 了解有关 [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/hello-minikube.md b/content/zh/docs/tutorials/hello-minikube.md index c10be6b6f9..1086f9a334 100644 --- a/content/zh/docs/tutorials/hello-minikube.md +++ b/content/zh/docs/tutorials/hello-minikube.md @@ -33,16 +33,16 @@ card: -本教程向您展示如何使用 [Minikube](/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 +本教程向您展示如何使用 [Minikube](zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 {{< note >}} -如果您已在本地安装 [Minikube](/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 +如果您已在本地安装 [Minikube](zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}} @@ -117,17 +117,17 @@ For more information on the `docker build` command, read the [Docker documentati ## Create a Deployment -A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more Containers, +A Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. 
A Kubernetes -[*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your +[*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) checks on the health of your Pod and restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. --> ## 创建 Deployment -Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 +Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 - {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](/docs/user-guide/kubectl-overview/)。{{< /note >}} + {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](zh/docs/user-guide/kubectl-overview/)。{{< /note >}} ## 创建 Service -默认情况下,Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](/docs/concepts/services-networking/service/)。 +默认情况下,Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](zh/docs/concepts/services-networking/service/)。 -* 进一步了解 [Deployment 对象](/docs/concepts/workloads/controllers/deployment/)。 -* 学习更多关于 [部署应用](/docs/tasks/run-application/run-stateless-application-deployment/)。 -* 学习更多关于 [Service 对象](/docs/concepts/services-networking/service/)。 +* 进一步了解 [Deployment 对象](zh/docs/concepts/workloads/controllers/deployment/)。 +* 学习更多关于 [部署应用](zh/docs/tasks/run-application/run-stateless-application-deployment/)。 +* 学习更多关于 [Service 对象](zh/docs/concepts/services-networking/service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/services/source-ip.md b/content/zh/docs/tutorials/services/source-ip.md index d8811a57ce..5e1cb34c47 100644 --- a/content/zh/docs/tutorials/services/source-ip.md +++ b/content/zh/docs/tutorials/services/source-ip.md @@ -24,8 +24,8 @@ Kubernetes 集群中运行的应用通过 Service 抽象来互相查找、通信 * [NAT](https://en.wikipedia.org/wiki/Network_address_translation): 网络地址转换 * [Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT): 替换数据包的源 IP, 通常为节点的 IP * [Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT): 替换数据包的目的 IP, 通常为 Pod 的 IP -* [VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP -* [Kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理 +* [VIP](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP +* [Kube-proxy](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理 ## 准备工作 @@ -59,7 +59,7 @@ deployment.apps/source-ip-app created ## Type=ClusterIP 类型 Services 的 Source IP -如果你的 kube-proxy 运行在 [iptables 模式](/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT,这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。 +如果你的 kube-proxy 运行在 [iptables 模式](zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT,这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。 ```console kubectl get nodes @@ -136,7 +136,7 @@ 
command=GET ## Type=NodePort 类型 Services 的 Source IP -从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试: +从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试: ```console kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort @@ -189,7 +189,7 @@ client_address=10.240.0.3 ``` -为了防止这种情况发生,Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints,发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下,你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。 +为了防止这种情况发生,Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints,发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下,你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。 设置 `service.spec.externalTrafficPolicy` 字段如下: @@ -244,7 +244,7 @@ client_address=104.132.1.79 ## Type=LoadBalancer 类型 Services 的 Source IP -从Kubernetes1.5开始,发送给类型为 [Type=LoadBalancer](/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT,这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP(如前面章节所述)。 +从Kubernetes1.5开始,发送给类型为 [Type=LoadBalancer](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT,这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP(如前面章节所述)。 你可以通过在一个 loadbalancer 上暴露这个 source-ip-app 来进行测试。 @@ -390,6 +390,6 @@ $ kubectl delete deployment source-ip-app {{% capture whatsnext %}} -* 学习更多关于 [通过 services 连接应用](/docs/concepts/services-networking/connect-applications-service/) -* 学习更多关于 [负载均衡](/docs/user-guide/load-balancer) +* 学习更多关于 [通过 services 连接应用](zh/docs/concepts/services-networking/connect-applications-service/) +* 学习更多关于 [负载均衡](zh/docs/user-guide/load-balancer) {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md index 2586d8600a..956537c88e 100644 --- a/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md @@ -15,11 +15,11 @@ approvers: -本教程介绍如何了使用 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 +本教程介绍如何了使用 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 {{% /capture %}} @@ -32,13 +32,13 @@ following Kubernetes concepts. 
在开始本教程之前,你应该熟悉以下 Kubernetes 的概念: -* [Pods](/docs/user-guide/pods/single-container/) -* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) +* [Pods](zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](zh/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) -* [kubectl CLI](/docs/user-guide/kubectl/) +* [StatefulSets](zh/docs/concepts/workloads/controllers/statefulset/) +* [kubectl CLI](zh/docs/user-guide/kubectl/) 下载上面的例子并保存为文件 `web.yaml`。 -你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 +你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 ```shell kubectl get pods -w -l app=nginx @@ -110,11 +110,11 @@ kubectl get pods -w -l app=nginx -在另一个终端中,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 +在另一个终端中,使用 [`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 ```shell kubectl apply -f web.yaml @@ -170,9 +170,9 @@ web-1 1/1 Running 0 18s -请注意在 `web-0` Pod 处于 [Running和Ready](/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 +请注意在 `web-0` Pod 处于 [Running和Ready](zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 -如同 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 +如同 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 ### 使用稳定的网络身份标识 -每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 +每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 ```shell for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done @@ -232,13 +232,13 @@ web-1 ``` -使用 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 +使用 [`kubectl run`](zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 ```shell kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm @@ -274,11 +274,11 @@ kubectl get pod -w -l app=nginx ``` -在另一个终端中使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 +在另一个终端中使用 [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 ```shell kubectl delete pod -l app=nginx @@ -382,7 +382,7 @@ www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 
1Gi RWO -StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 +StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 NGINX web 服务器默认会加载位于 `/usr/share/nginx/html/index.html` 的 index 文件。StatefulSets `spec` 中的 `volumeMounts` 字段保证了 `/usr/share/nginx/html` 文件夹由一个 PersistentVolume 支持。 @@ -491,8 +491,8 @@ mounted to the appropriate mount points. ## Scaling a StatefulSet Scaling a StatefulSet refers to increasing or decreasing the number of replicas. This is accomplished by updating the `replicas` field. You can use either -[`kubectl scale`](/docs/reference/generated/kubectl/kubectl-commands/#scale) or -[`kubectl patch`](/docs/reference/generated/kubectl/kubectl-commands/#patch) to scale a StatefulSet. +[`kubectl scale`](zh/docs/reference/generated/kubectl/kubectl-commands/#scale) or +[`kubectl patch`](zh/docs/reference/generated/kubectl/kubectl-commands/#patch) to scale a StatefulSet. ### Scaling Up @@ -504,7 +504,7 @@ In one terminal window, watch the Pods in the StatefulSet. ## 扩容/缩容 StatefulSet -扩容/缩容 StatefulSet 指增加或减少它的副本数。这通过更新 `replicas` 字段完成。你可以使用[`kubectl scale`](/docs/user-guide/kubectl/{{< param "version" >}}/#scale) 或者[`kubectl patch`](/docs/user-guide/kubectl/{{< param "version" >}}/#patch)来扩容/缩容一个 StatefulSet。 +扩容/缩容 StatefulSet 指增加或减少它的副本数。这通过更新 `replicas` 字段完成。你可以使用[`kubectl scale`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#scale) 或者[`kubectl patch`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#patch)来扩容/缩容一个 StatefulSet。 ### 扩容 @@ -1071,13 +1071,13 @@ kubectl get pods -w -l app=nginx ``` -使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 +使用 [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 ```shell kubectl delete statefulset web --cascade=false diff --git a/content/zh/docs/tutorials/stateful-application/cassandra.md b/content/zh/docs/tutorials/stateful-application/cassandra.md index 062171966f..78cbf27c5d 100644 --- a/content/zh/docs/tutorials/stateful-application/cassandra.md +++ b/content/zh/docs/tutorials/stateful-application/cassandra.md @@ -28,18 +28,18 @@ title: "Example: Deploying Cassandra with Stateful Sets" 本示例也使用了Kubernetes的一些核心组件: -- [_Pods_](/docs/user-guide/pods) -- [ _Services_](/docs/user-guide/services) -- [_Replication Controllers_](/docs/user-guide/replication-controller) -- [_Stateful Sets_](/docs/concepts/workloads/controllers/statefulset/) -- [_Daemon Sets_](/docs/admin/daemons) +- [_Pods_](zh/docs/user-guide/pods) +- [ _Services_](zh/docs/user-guide/services) +- [_Replication Controllers_](zh/docs/user-guide/replication-controller) +- [_Stateful Sets_](zh/docs/concepts/workloads/controllers/statefulset/) +- [_Daemon Sets_](zh/docs/admin/daemons) ## 准备工作 -本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](/docs/getting-started-guides/) 获取关于你的平台的安装说明。 +本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](zh/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](zh/docs/getting-started-guides/) 获取关于你的平台的安装说明。 本示例还需要一些代码和配置文件。为了避免手动输入,你可以 `git clone` 
Kubernetes 源到你本地。 @@ -133,7 +133,7 @@ kubectl delete daemonset cassandra ## 步骤 1:创建 Cassandra Headless Service -Kubernetes _[Service](/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 +Kubernetes _[Service](zh/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](zh/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 这个 Service 用于在 Kubernetes 集群内部进行 Cassandra 客户端和 Cassandra Pod 之间的 DNS 查找。 @@ -354,7 +354,7 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces' system_traces system_schema system_auth system system_distributed ``` -你需要使用 `kubectl edit` 来增加或减小 Cassandra StatefulSet 的大小。你可以在[文档](/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 +你需要使用 `kubectl edit` 来增加或减小 Cassandra StatefulSet 的大小。你可以在[文档](zh/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 使用以下命令编辑 StatefulSet。 @@ -429,7 +429,7 @@ $ grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodS ## 步骤 5:使用 Replication Controller 创建 Cassandra 节点 pod -Kubernetes _[Replication Controller](/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 +Kubernetes _[Replication Controller](zh/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 和我们刚才定义的 Service 一起,Replication Controller 能够让我们轻松的构建一个复制的、可扩展的 Cassandra 集群。 @@ -639,7 +639,7 @@ $ kubectl delete rc cassandra ## 步骤 8:使用 DaemonSet 替换 Replication Controller -在 Kubernetes中,[_DaemonSet_](/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 _ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 +在 Kubernetes中,[_DaemonSet_](zh/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 _ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 示范用例:当部署到云平台时,预期情况是实例是短暂的并且随时可能终止。Cassandra 被搭建成为在各个节点间复制数据以便于实现数据冗余。这样的话,即使一个实例终止了,存储在它上面的数据却没有,并且集群会通过重新复制数据到其它运行节点来作为响应。 @@ -802,6 +802,6 @@ $ kubectl delete daemonset cassandra 查看本示例的 [image](https://github.com/kubernetes/examples/tree/master/cassandra/image) 目录,了解如何构建容器的 docker 镜像及其内容。 -你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 +你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](zh/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 [!Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cassandra/README.md?pixel)]() diff --git a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index ed0e8f8f7d..df29be10c9 100644 --- a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -20,11 +20,11 @@ This tutorial shows you how to deploy a WordPress site and a MySQL database usin -[PersistentVolume](/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 
[StorageClass](/docs/concepts/storage/storage-classes) 动态创建的存储。 -[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 +[PersistentVolume](zh/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 [StorageClass](zh/docs/concepts/storage/storage-classes) 动态创建的存储。 +[PersistentVolumeClaim](zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 {{< warning >}} -A [Secret](/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 +A [Secret](zh/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 通过以下命令在`kustomization.yaml`中添加一个 Secret 生成器。您需要用您要使用的密码替换`YOUR_PASSWORD`。 @@ -453,10 +453,10 @@ Do not leave your WordPress installation on this page. If another user finds it, {{% capture whatsnext %}} -* Learn more about [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/) -* Learn more about [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* Learn more about [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) -* Learn how to [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/) +* Learn more about [Introspection and Debugging](zh/docs/tasks/debug-application-cluster/debug-application-introspection/) +* Learn more about [Jobs](zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) +* Learn more about [Port Forwarding](zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +* Learn how to [Get a Shell to a Container](zh/docs/tasks/debug-application-cluster/get-shell-running-container/) --> 1. 运行以下命令以删除您的 Secret,Deployments,Services 和 PersistentVolumeClaims: @@ -468,9 +468,9 @@ Do not leave your WordPress installation on this page. 
If another user finds it, {{% capture whatsnext %}} -* 了解更多关于 [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/) -* 了解更多关于 [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* 了解更多关于 [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) -* 了解如何 [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/) +* 了解更多关于 [Introspection and Debugging](zh/docs/tasks/debug-application-cluster/debug-application-introspection/) +* 了解更多关于 [Jobs](zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) +* 了解更多关于 [Port Forwarding](zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +* 了解如何 [Get a Shell to a Container](zh/docs/tasks/debug-application-cluster/get-shell-running-container/) {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md index a7bcaad1d7..4d0475e629 100644 --- a/content/zh/docs/tutorials/stateful-application/zookeeper.md +++ b/content/zh/docs/tutorials/stateful-application/zookeeper.md @@ -14,23 +14,23 @@ content_template: templates/tutorial {{% capture overview %}} -本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 +本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 {{% /capture %}} {{% capture prerequisites %}} 在开始本教程前,你应该熟悉以下 Kubernetes 概念。 -* [Pods](/docs/user-guide/pods/single-container/) -* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](/docs/concepts/storage/volumes/) +* [Pods](zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](zh/docs/concepts/storage/volumes/) * [PersistentVolume Provisioning](http://releases.k8s.io/{{< param "githubbranch" >}}/examples/persistent-volume-provisioning/) -* [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) -* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) -* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget) -* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) -* [kubectl CLI](/docs/user-guide/kubectl) +* [ConfigMaps](zh/docs/tasks/configure-pod-container/configure-pod-configmap/) +* [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) +* [PodDisruptionBudgets](zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) +* [PodAntiAffinity](zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) +* [kubectl CLI](zh/docs/user-guide/kubectl) @@ -69,14 +69,14 @@ ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被 下面的清单包含一个 -[Headless Service](/docs/concepts/services-networking/service/#headless-services), -一个 [Service](/docs/concepts/services-networking/service/), -一个 
[PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), -和一个 [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)。 +[Headless Service](zh/docs/concepts/services-networking/service/#headless-services), +一个 [Service](zh/docs/concepts/services-networking/service/), +一个 [PodDisruptionBudget](zh/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), +和一个 [StatefulSet](zh/docs/concepts/workloads/controllers/statefulset/)。 {{< codenew file="application/zookeeper/zookeeper.yaml" >}} -打开一个命令行终端,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) +打开一个命令行终端,使用 [`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply) 创建这个清单。 ```shell @@ -92,7 +92,7 @@ poddisruptionbudget.policy/zk-pdb created statefulset.apps/zk created ``` -使用 [`kubectl get`](/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 +使用 [`kubectl get`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 ```shell kubectl get pods -w -l app=zk @@ -130,7 +130,7 @@ StatefulSet 控制器创建了3个 Pods,每个 Pod 包含一个 [ZooKeeper 3.4 由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置,以执行 leader 选举。Ensemble 中的每个服务都需要具有一个独一无二的标识符,所有的服务均需要知道标识符的全集,并且每个标志都需要和一个网络地址相关联。 -使用 [`kubectl exec`](/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 +使用 [`kubectl exec`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done @@ -184,7 +184,7 @@ zk-2.zk-headless.default.svc.cluster.local ``` -[Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 +[Kubernetes DNS](zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。 @@ -320,7 +320,7 @@ numChildren = 0 如同在 [ZooKeeper 基础](#zookeeper-basics) 一节所提到的,ZooKeeper 提交所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化是一种常用的技术,对于普通的存储应用也是如此。 -使用 [`kubectl delete`](/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 +使用 [`kubectl delete`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 ```shell kubectl delete statefulset zk @@ -641,7 +641,7 @@ log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %- 这是在容器里安全记录日志的最简单的方法。由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流的应用日志不会耗尽本地存储媒介。 -使用 [`kubectl logs`](/docs/user-guide/kubectl/{{< param "version" >}}/#logs) 从一个 Pod 中取回最后几行日志。 +使用 [`kubectl logs`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#logs) 从一个 Pod 中取回最后几行日志。 ```shell kubectl logs zk-0 --tail 20 @@ -679,7 +679,7 @@ kubectl logs zk-0 --tail 20 ### 配置非特权用户 -在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 +在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 `zk` StatefulSet 的 Pod 的 `template` 包含了一个 SecurityContext。 @@ -736,7 +736,7 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ### 处理进程故障 -[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 
StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 +[Restart Policies](zh/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 检查 `zk-0` Pod 中运行的 ZooKeeper 服务的进程树。 @@ -947,7 +947,7 @@ kubectl get nodes ``` -使用 [`kubectl cordon`](/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 +使用 [`kubectl cordon`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 ```shell kubectl cordon < node name > @@ -987,7 +987,7 @@ kubernetes-minion-group-i4c4 ``` -使用 [`kubectl drain`](/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 +使用 [`kubectl drain`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 ```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1102,7 +1102,7 @@ numChildren = 0 ``` -使用 [`kubectl uncordon`](/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 +使用 [`kubectl uncordon`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 ```shell kubectl uncordon kubernetes-minion-group-pb41 diff --git a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md index b22d2a365a..16ae8471ef 100644 --- a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -26,21 +26,21 @@ external IP address. {{% capture prerequisites %}} - * 安装 [kubectl](/docs/tasks/tools/install-kubectl/). + * 安装 [kubectl](zh/docs/tasks/tools/install-kubectl/). * 使用 Google Kubernetes Engine 或 Amazon Web Services 等云供应商创建 Kubernetes 群集。 - 本教程创建了一个[外部负载均衡器](/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 + 本教程创建了一个[外部负载均衡器](zh/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 * 配置 `kubectl` 与 Kubernetes API 服务器通信。有关说明,请参阅云供应商文档。 @@ -79,16 +79,16 @@ external IP address. - 前面的命令创建一个 [Deployment](/docs/concepts/workloads/controllers/deployment/) - 对象和一个关联的 [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)对象。 - ReplicaSet 有五个 [Pod](/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 + 前面的命令创建一个 [Deployment](zh/docs/concepts/workloads/controllers/deployment/) + 对象和一个关联的 [ReplicaSet](zh/docs/concepts/workloads/controllers/replicaset/)对象。 + ReplicaSet 有五个 [Pod](zh/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 -了解更多关于[将应用程序与服务连接](/docs/concepts/services-networking/connect-applications-service/)。 +了解更多关于[将应用程序与服务连接](zh/docs/concepts/services-networking/connect-applications-service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateless-application/guestbook.md b/content/zh/docs/tutorials/stateless-application/guestbook.md index 0dac40d655..c970f91d7d 100644 --- a/content/zh/docs/tutorials/stateless-application/guestbook.md +++ b/content/zh/docs/tutorials/stateless-application/guestbook.md @@ -143,9 +143,9 @@ Replace POD-NAME with the name of your Pod. 
### 创建 Redis 主节点的服务 -留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 +留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](zh/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 {{< codenew file="application/guestbook/redis-master-service.yaml" >}} @@ -349,10 +349,10 @@ The guestbook application has a web frontend serving the HTTP requests written i ### 创建前端服务 应用的 `redis-slave` 和 `redis-master` 服务只能在容器集群中访问,因为服务的默认类型是 -[ClusterIP](/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 +[ClusterIP](zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 -* 完成 [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) 交互式教程 -* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) -* 阅读更多关于[连接应用程序](/docs/concepts/services-networking/connect-applications-service/) -* 阅读更多关于[管理资源](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) +* 完成 [Kubernetes Basics](zh/docs/tutorials/kubernetes-basics/) 交互式教程 +* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) +* 阅读更多关于[连接应用程序](zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读更多关于[管理资源](zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) {{% /capture %}} From fb0a183a755d754d3e8a38527c7e7ecea846216a Mon Sep 17 00:00:00 2001 From: varadaprasanth <48983411+varadaprasanth@users.noreply.github.com> Date: Tue, 7 Apr 2020 13:28:47 +0530 Subject: [PATCH 057/105] Pod Object is modified as per standards After Fix - Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling. --- content/en/docs/concepts/storage/storage-classes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 9d6fc1c3c5..e842165763 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -813,10 +813,10 @@ volumeBindingMode: WaitForFirstConsumer ``` Local volumes do not currently support dynamic provisioning, however a StorageClass -should still be created to delay volume binding until pod scheduling. This is +should still be created to delay volume binding until Pod scheduling. This is specified by the `WaitForFirstConsumer` volume binding mode. -Delaying volume binding allows the scheduler to consider all of a pod's +Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim. 
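To illustrate the delayed-binding behaviour described above, here is a sketch only (the `local-storage` class name and the claim below are assumptions, not taken from the patch): a claim that references a `WaitForFirstConsumer` StorageClass stays `Pending` until a Pod that uses it is scheduled.

```shell
# Assumes a StorageClass named "local-storage" with
# volumeBindingMode: WaitForFirstConsumer, as in the excerpt above.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi
EOF

kubectl get persistentvolumeclaim example-local-claim
# The claim is expected to report STATUS Pending until a Pod that mounts it
# is scheduled; only then is a matching PersistentVolume chosen and bound.
```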
From 9335dcda9130374dd13e5ef8687097b35efbc47e Mon Sep 17 00:00:00 2001 From: soolaugust Date: Tue, 7 Apr 2020 16:36:54 +0800 Subject: [PATCH 058/105] fix older content of 1.17.x --- .../tasks/administer-cluster/kubeadm/kubeadm-upgrade.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 9fc79c1e12..42c8a4c786 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -50,18 +50,18 @@ The upgrade workflow at high level is the following: ## Determine which version to upgrade to -1. Find the latest stable 1.17 version: +1. Find the latest stable 1.18 version: {{< tabs name="k8s_install_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache madison kubeadm - # find the latest 1.17 version in the list + # find the latest 1.18 version in the list # it should look like 1.18.x-00, where x is the latest patch {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # find the latest 1.17 version in the list + # find the latest 1.18 version in the list # it should look like 1.18.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}} From c11c8b38c40de37ce96609d179260c733af82a10 Mon Sep 17 00:00:00 2001 From: MengZeLee Date: Tue, 7 Apr 2020 09:28:46 +0800 Subject: [PATCH 059/105] run command is deprecated can't not create deployment with kubectl run. --- .../docs/concepts/cluster-administration/manage-deployment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index eedafce1a3..f4d511fc99 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -427,7 +427,7 @@ We'll guide you through how to create and update applications with Deployments. Let's say you were running version 1.14.2 of nginx: ```shell -kubectl run my-nginx --image=nginx:1.14.2 --replicas=3 +kubectl create deployment my-nginx --image=nginx:1.14.2 ``` ```shell deployment.apps/my-nginx created From 4d3b7a06da70e2f5aa18144e032e5e0c6299c152 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 7 Apr 2020 16:54:30 +0100 Subject: [PATCH 060/105] Suggest scaling deployment to 3 before rollout of update --- .../concepts/cluster-administration/manage-deployment.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index f4d511fc99..1f93b54d45 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -433,6 +433,14 @@ kubectl create deployment my-nginx --image=nginx:1.14.2 deployment.apps/my-nginx created ``` +with 3 replicas (so the old and new revisions can coexist): +```shell +kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 +``` +``` +deployment.apps/my-nginx scaled +``` + To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above. 
```shell From f0cded013f551288808606125fa4a8f6882e2090 Mon Sep 17 00:00:00 2001 From: Celeste Horgan Date: Thu, 2 Apr 2020 16:03:51 -0700 Subject: [PATCH 061/105] Restructure to reduce duplication Signed-off-by: Celeste Horgan --- content/en/docs/contribute/_index.md | 85 +- content/en/docs/contribute/advanced.md | 48 +- content/en/docs/contribute/intermediate.md | 967 ------------------ content/en/docs/contribute/localization.md | 31 +- .../en/docs/contribute/new-content/_index.md | 4 + .../new-content/blogs-case-studies.md | 60 ++ .../contribute/new-content/new-features.md | 134 +++ .../docs/contribute/new-content/open-a-pr.md | 484 +++++++++ .../docs/contribute/new-content/overview.md | 73 ++ content/en/docs/contribute/participating.md | 12 +- content/en/docs/contribute/review/_index.md | 14 + .../docs/contribute/review/for-approvers.md | 228 +++++ .../docs/contribute/review/reviewing-prs.md | 98 ++ content/en/docs/contribute/start.md | 421 -------- .../en/docs/contribute/style/content-guide.md | 4 - .../contribute/style/hugo-shortcodes/index.md | 2 +- .../en/docs/contribute/style/style-guide.md | 14 +- .../docs/contribute/style/write-new-topic.md | 11 +- .../contribute/suggesting-improvements.md | 65 ++ static/_redirects | 7 +- 20 files changed, 1274 insertions(+), 1488 deletions(-) delete mode 100644 content/en/docs/contribute/intermediate.md create mode 100644 content/en/docs/contribute/new-content/_index.md create mode 100644 content/en/docs/contribute/new-content/blogs-case-studies.md create mode 100644 content/en/docs/contribute/new-content/new-features.md create mode 100644 content/en/docs/contribute/new-content/open-a-pr.md create mode 100644 content/en/docs/contribute/new-content/overview.md create mode 100644 content/en/docs/contribute/review/_index.md create mode 100644 content/en/docs/contribute/review/for-approvers.md create mode 100644 content/en/docs/contribute/review/reviewing-prs.md delete mode 100644 content/en/docs/contribute/start.md create mode 100644 content/en/docs/contribute/suggesting-improvements.md diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index c58c72f28f..606a361c0c 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -4,59 +4,74 @@ title: Contribute to Kubernetes docs linktitle: Contribute main_menu: true weight: 80 +card: + name: contribute + weight: 10 + title: Start contributing --- {{% capture overview %}} -If you would like to help contribute to the Kubernetes documentation or website, -we're happy to have your help! Anyone can contribute, whether you're new to the -project or you've been around a long time, and whether you self-identify as a -developer, an end user, or someone who just can't stand seeing typos. +This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs). + +Kubernetes documentation contributors: + +- Improve existing content +- Create new content +- Translate the documentation +- Manage and publish the documentation parts of the Kubernetes release cycle + +Kubernetes documentation welcomes improvements from all contributors, new and experienced! {{% /capture %}} {{% capture body %}} -## Getting Started +## Getting started -Anyone can open an issue describing problems or desired improvements with documentation, or contribute a change with a pull request (PR). -Some tasks require more trust and need more access in the Kubernetes organization. 
-See [Participating in SIG Docs](/docs/contribute/participating/) for more details about -of roles and permissions. - -Kubernetes documentation resides in a GitHub repository. While we welcome -contributions from anyone, you do need basic comfort with git and GitHub to -operate effectively in the Kubernetes community. +Anyone can open an issue about documentation, or contribute a change with a pull request (PR) to the [`kubernetes/website` GitHub repository](https://github.com/kubernetes/website). You need to be comfortable with [git](https://git-scm.com/) and [GitHub](https://lab.github.com/) to operate effectively in the Kubernetes community. To get involved with documentation: 1. Sign the CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md). 2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website) and the website's [static site generator](https://gohugo.io). -3. Make sure you understand the basic processes for [improving content](https://kubernetes.io/docs/contribute/start/#improve-existing-content) and [reviewing changes](https://kubernetes.io/docs/contribute/start/#review-docs-pull-requests). +3. Make sure you understand the basic processes for [opening a pull request](/docs/contribute/new-content/open-a-pr/) and [reviewing changes](/docs/contribute/review/reviewing-prs/). -## Contributions best practices +Some tasks require more trust and more access in the Kubernetes organization. +See [Participating in SIG Docs](/docs/contribute/participating/) for more details about +roles and permissions. -- Do write clear and meaningful GIT commit messages. -- Make sure to include _Github Special Keywords_ which references the issue and automatically closes the issue when PR is merged. -- When you make a small change to a PR like fixing a typo, any style change, or changing grammar. Make sure you squash your commits so that you dont get a large number of commits for a relatively small change. -- Make sure you include a nice PR description depicting the code you have changes, why to change a following piece of code and ensuring there is sufficient information for the reviewer to understand your PR. -- Additional Readings : - - [chris.beams.io/posts/git-commit/](https://chris.beams.io/posts/git-commit/) - - [github.com/blog/1506-closing-issues-via-pull-requests ](https://github.com/blog/1506-closing-issues-via-pull-requests ) - - [davidwalsh.name/squash-commits-git ](https://davidwalsh.name/squash-commits-git ) +## Your first contribution + +- Read the [Contribution overview](/docs/contribute/new-content/overview/) to learn about the different ways you can contribute. +- [Open a pull request using GitHub](/docs/contribute/new-content/new-content/#changes-using-github) to existing documentation and learn more about filing issues in GitHub. +- [Review pull requests](/docs/contribute/review/reviewing-prs/) from other Kubernetes community members for accuracy and language. +- Read the Kubernetes [content](/docs/contribute/style/content-guide/) and [style guides](/docs/contribute/style/style-guide/) so you can leave informed comments. +- Learn how to [use page templates](/docs/contribute/style/page-templates/) and [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) to make bigger changes. + +## Next steps + +- Learn to [work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the repository. +- Document [features in a release](/docs/contribute/new-content/new-features/). 
+- Participate in [SIG Docs](/docs/contribute/participating/), and become a [member or reviewer](/docs/contribute/participating/#roles-and-responsibilities).
+- Start or help with a [localization](/docs/contribute/localization/).
+
+## Get involved with SIG Docs
+
+[SIG Docs](/docs/contribute/participating/) is the group of contributors who publish and maintain Kubernetes documentation and the website. Getting involved with SIG Docs is a great way for Kubernetes contributors (feature development or otherwise) to have a large impact on the Kubernetes project.
+
+SIG Docs communicates with different methods:
+
+- [Join `#sig-docs` on the Kubernetes Slack instance](http://slack.k8s.io/). Make sure to
+  introduce yourself!
+- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
+  where broader discussions take place and official decisions are recorded.
+- Join the [weekly SIG Docs video meeting](https://github.com/kubernetes/community/tree/master/sig-docs). Meetings are always announced on `#sig-docs` and added to the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). You'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone.
 
 ## Other ways to contribute
 
-- To contribute to the Kubernetes community through online forums like Twitter or Stack Overflow, or learn about local meetups and Kubernetes events, visit the [Kubernetes community site](/community/).
-- To contribute to feature development, read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get started.
+- Visit the [Kubernetes community site](/community/). Participate on Twitter or Stack Overflow, learn about local Kubernetes meetups and events, and more.
+- Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development.
+- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).
 
-{{% /capture %}}
-
-{{% capture whatsnext %}}
-
-- For more information about the basics of contributing to documentation, read [Start contributing](/docs/contribute/start/).
-- Follow the [Kubernetes documentation style guide](/docs/contribute/style/style-guide/) when proposing changes.
-- For more information about SIG Docs, read [Participating in SIG Docs](/docs/contribute/participating/).
- -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md index c8ccd252f4..6574bba709 100644 --- a/content/en/docs/contribute/advanced.md +++ b/content/en/docs/contribute/advanced.md @@ -2,14 +2,14 @@ title: Advanced contributing slug: advanced content_template: templates/concept -weight: 30 +weight: 98 --- {{% capture overview %}} -This page assumes that you've read and mastered the -[Start contributing](/docs/contribute/start/) and -[Intermediate contributing](/docs/contribute/intermediate/) topics and are ready +This page assumes that you understand how to +[contribute to new content](/docs/contribute/new-content/overview) and +[review others' work](/docs/contribute/review/reviewing-prs/), and are ready to learn about more ways to contribute. You need to use the Git command line client and other tools for some of these tasks. @@ -19,7 +19,7 @@ client and other tools for some of these tasks. ## Be the PR Wrangler for a week -SIG Docs [approvers](/docs/contribute/participating/#approvers) take regular turns as the PR wrangler for the repository and are added to the [PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers#2019-schedule-q1q2) for weekly rotations. +SIG Docs [approvers](/docs/contribute/participating/#approvers) take week-long turns [wrangling PRs](https://github.com/kubernetes/website/wiki/PR-Wranglers) for the repository. The PR wrangler’s duties include: @@ -37,9 +37,9 @@ The PR wrangler’s duties include: - Assign `Docs Review` and `Tech Review` labels to indicate the PR's review status. - Assign`Needs Doc Review` or `Needs Tech Review` for PRs that haven't yet been reviewed. - Assign `Doc Review: Open Issues` or `Tech Review: Open Issues` for PRs that have been reviewed and require further input or action before merging. - - Assign `/lgtm` and `/approve` labels to PRs that can be merged. + - Assign `/lgtm` and `/approve` labels to PRs that can be merged. - Merge PRs when they are ready, or close PRs that shouldn’t be accepted. -- Triage and tag incoming issues daily. See [Intermediate contributing](/docs/contribute/intermediate/) for guidelines on how SIG Docs uses metadata. +- Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. ### Helpful GitHub queries for wranglers @@ -60,9 +60,9 @@ reviewed is usually small. These queries specifically exclude localization PRs, ### When to close Pull Requests -Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. +Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. -- Close any PR where the CLA hasn’t been signed for two weeks. +- Close any PR where the CLA hasn’t been signed for two weeks. PR authors can reopen the PR after signing the CLA, so this is a low-risk way to make sure nothing gets merged without a signed CLA. - Close any PR where the author has not responded to comments or feedback in 2 or more weeks. @@ -82,7 +82,7 @@ An automated service, [`fejta-bot`](https://github.com/fejta-bot) automatically SIG Docs [members](/docs/contribute/participating/#members) can propose improvements. 
After you've been contributing to the Kubernetes documentation for a while, you -may have ideas for improvement to the [Style Guide](/docs/contribute/style/style-guide/) +may have ideas for improving the [Style Guide](/docs/contribute/style/style-guide/) , the [Content Guide](/docs/contribute/style/content-guide/), the toolchain used to build the documentation, the website style, the processes for reviewing and merging pull requests, or other aspects of the documentation. For maximum transparency, @@ -134,21 +134,21 @@ rotated among SIG Docs approvers. ## Serve as a New Contributor Ambassador SIG Docs [approvers](/docs/contribute/participating/#approvers) can serve as -New Contributor Ambassadors. +New Contributor Ambassadors. -New Contributor Ambassadors work together to welcome new contributors to SIG-Docs, +New Contributor Ambassadors welcome new contributors to SIG-Docs, suggest PRs to new contributors, and mentor new contributors through their first -few PR submissions. +few PR submissions. -Responsibilities for New Contributor Ambassadors include: +Responsibilities for New Contributor Ambassadors include: -- Being available on the [Kubernetes #sig-docs channel](https://kubernetes.slack.com) to answer questions from new contributors. -- Working with PR wranglers to identify good first issues for new contributors. -- Mentoring new contributors through their first few PRs to the docs repo. +- Monitoring the [#sig-docs Slack channel](https://kubernetes.slack.com) for questions from new contributors. +- Working with PR wranglers to identify good first issues for new contributors. +- Mentoring new contributors through their first few PRs to the docs repo. - Helping new contributors create the more complex PRs they need to become Kubernetes members. -- [Sponsoring contributors](/docs/contribute/advanced/#sponsor-a-new-contributor) on their path to becoming Kubernetes members. +- [Sponsoring contributors](/docs/contribute/advanced/#sponsor-a-new-contributor) on their path to becoming Kubernetes members. -Current New Contributor Ambassadors are announced at each SIG-Docs meeting, and in the [Kubernetes #sig-docs channel](https://kubernetes.slack.com). +Current New Contributor Ambassadors are announced at each SIG-Docs meeting, and in the [Kubernetes #sig-docs channel](https://kubernetes.slack.com). ## Sponsor a new contributor @@ -180,12 +180,12 @@ Approvers must meet the following requirements to be a co-chair: - Have been a SIG Docs approver for at least 6 months - Have [led a Kubernetes docs release](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) or shadowed two releases - Understand SIG Docs workflows and tooling: git, Hugo, localization, blog subproject -- Understand how other Kubernetes SIGs and repositories affect the SIG Docs workflow, including: [teams in k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), [process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs), plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). 
+- Understand how other Kubernetes SIGs and repositories affect the SIG Docs workflow, including: [teams in k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), [process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs), plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). - Commit at least 5 hours per week (and often more) to the role for a minimum of 6 months ### Responsibilities -The role of co-chair is primarily one of service: co-chairs handle process and policy, schedule and run meetings, schedule PR wranglers, and generally do the things that no one else wants to do in order to build contributor capacity. +The role of co-chair is one of service: co-chairs build contributor capacity, handle process and policy, schedule and run meetings, schedule PR wranglers, advocate for docs in the Kubernetes community, make sure that docs succeed in Kubernetes release cycles, and keep SIG Docs focused on effective priorities. Responsibilities include: @@ -228,7 +228,7 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings **Honor folks' time**: -- Begin and end meetings punctually +Begin and end meetings on time. **Use Zoom effectively**: @@ -240,9 +240,9 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings ### Recording meetings on Zoom When you’re ready to start the recording, click Record to Cloud. - + When you’re ready to stop recording, click Stop. The video uploads automatically to YouTube. -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md deleted file mode 100644 index 9e477a90a4..0000000000 --- a/content/en/docs/contribute/intermediate.md +++ /dev/null @@ -1,967 +0,0 @@ ---- -title: Intermediate contributing -slug: intermediate -content_template: templates/concept -weight: 20 -card: - name: contribute - weight: 50 ---- - -{{% capture overview %}} - -This page assumes that you've read and mastered the tasks in the -[start contributing](/docs/contribute/start/) topic and are ready to -learn about more ways to contribute. - -{{< note >}} -Some tasks require you to use the Git command line client and other tools. -{{< /note >}} - -{{% /capture %}} - -{{% capture body %}} - -Now that you've gotten your feet wet and helped out with the Kubernetes docs in -the ways outlined in the [start contributing](/docs/contribute/start/) topic, -you may feel ready to do more. These tasks assume that you have, or are willing -to gain, deeper knowledge of the following topic areas: - -- Kubernetes concepts -- Kubernetes documentation workflows -- Where and how to find information about upcoming Kubernetes features -- Strong research skills in general - -These tasks are not as sequential as the beginner tasks. There is no expectation -that one person does all of them all of the time. - -## Learn about Prow - -[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is -the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow -enables chatbot-style commands to handle GitHub actions across the Kubernetes -organization. You can perform a variety of actions such as [adding and removing -labels](#add-and-remove-labels), closing issues, and assigning an approver. 
Type -the Prow command into a comment field using the `/` format. Some common -commands are: - -- `/lgtm` (looks good to me): adds the `lgtm` label, signalling that a reviewer has finished reviewing the PR -- `/approve`: approves a PR so it can merge (approver use only) -- `/assign`: assigns a person to review or approve a PR -- `/close`: closes an issue or PR -- `/hold`: adds the `do-not-merge/hold` label, indicating the PR cannot be automatically merged -- `/hold cancel`: removes the `do-not-merge/hold` label - -{{% note %}} -Not all commands are available to every user. The Prow bot will tell you if you -try to execute a command beyond your authorization level. -{{% /note %}} - -Familiarize yourself with the [list of Prow -commands](https://prow.k8s.io/command-help) before you review PRs or triage issues. - - -## Review pull requests - -In any given week, a specific docs approver volunteers to do initial triage -and review of [pull requests and issues](#triage-and-categorize-issues). This -person is the "PR Wrangler" for the week. The schedule is maintained using the -[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers). -To be added to this list, attend the weekly SIG Docs meeting and volunteer. Even -if you are not on the schedule for the current week, you can still review pull -requests (PRs) that are not already under active review. - -In addition to the rotation, an automated system comments on each new PR and -suggests reviewers and approvers for the PR, based on the list of approvers and -reviewers in the affected files. The PR author is expected to follow the -guidance of the bot, and this also helps PRs to get reviewed quickly. - -We want to get pull requests (PRs) merged and published as quickly as possible. -To ensure the docs are accurate and up to date, each PR needs to be reviewed by -people who understand the content, as well as people with experience writing -great documentation. - -Reviewers and approvers need to provide actionable and constructive feedback to -keep contributors engaged and help them to improve. Sometimes helping a new -contributor get their PR ready to merge takes more time than just rewriting it -yourself, but the project is better in the long term when we have a diversity of -active participants. - -Before you start reviewing PRs, make sure you are familiar with the -[Documentation Content Guide](/docs/contribute/style/content-guide/), the -[Documentation Style Guide](/docs/contribute/style/style-guide/), -and the [code of conduct](/community/code-of-conduct/). - -### Find a PR to review - -To see all open PRs, go to the **Pull Requests** tab in the GitHub repository. -A PR is eligible for review when it meets all of the following criteria: - -- Has the `cncf-cla:yes` tag -- Does not have WIP in the description -- Does not a have tag including the phrase `do-not-merge` -- Has no merge conflicts -- Is based against the correct branch (usually `master` unless the PR relates to - a feature that has not yet been released) -- Is not being actively reviewed by another docs person (other technical - reviewers are fine), unless that person has explicitly asked for your help. In - particular, leaving lots of new comments after other review cycles have - already been completed on a PR can be discouraging and counter-productive. - -If a PR is not eligible to merge, leave a comment to let the author know about -the problem and offer to help them fix it. 
If they've been informed and have not -fixed the problem in several weeks or months, eventually their PR will be closed -without merging. - -If you're new to reviewing, or you don't have a lot of bandwidth, look for PRs -with the `size/XS` or `size/S` tag set. The size is automatically determined by -the number of lines the PR changes. - -#### Reviewers and approvers - -The Kubernetes website repo operates differently than some of the Kubernetes -code repositories when it comes to the roles of reviewers and approvers. For -more information about the responsibilities of reviewers and approvers, see -[Participating](/docs/contribute/participating/). Here's an overview. - -- A reviewer reviews pull request content for technical accuracy. A reviewer - indicates that a PR is technically accurate by leaving a `/lgtm` comment on - the PR. - - {{< note >}}Don't add a `/lgtm` unless you are confident in the technical - accuracy of the documentation modified or introduced in the PR.{{< /note >}} - -- An approver reviews pull request content for docs quality and adherence to - SIG Docs guidelines found in the Content and Style guides. Only people listed as - approvers in the - [`OWNERS`](https://github.com/kubernetes/website/blob/master/OWNERS) file can - approve a PR. To approve a PR, leave an `/approve` comment on the PR. - -A PR is merged when it has both a `/lgtm` comment from anyone in the Kubernetes -organization and an `/approve` comment from an approver in the -`sig-docs-maintainers` group, as long as it is not on hold and the PR author -has signed the CLA. - -{{< note >}} - -The ["Participating"](/docs/contribute/participating/#approvers) section contains more information for reviewers and approvers, including specific responsibilities for approvers. - -{{< /note >}} - -### Review a PR - -1. Read the PR description and read any attached issues or links, if - applicable. "Drive-by reviewing" is sometimes more harmful than helpful, so - make sure you have the right knowledge to provide a meaningful review. - -2. If someone else is the best person to review this particular PR, let them - know by adding a comment with `/assign @`. If you have - asked a non-docs person for technical review but still want to review the PR - from a docs point of view, keep going. - -3. Go to the **Files changed** tab. Look over all the changed lines. Removed - content has a red background, and those lines also start with a `-` symbol. - Added content has a green background, and those lines also start with a `+` - symbol. Within a line, the actual modified content has a slightly darker - green background than the rest of the line. - - - Especially if the PR uses tricky formatting or changes CSS, Javascript, - or other site-wide elements, you can preview the website with the PR - applied. Go to the **Conversation** tab and click the **Details** link - for the `deploy/netlify` test, near the bottom of the page. It opens in - the same browser window by default, so open it in a new window so you - don't lose your partial review. Switch back to the **Files changed** tab - to resume your review. - - Make sure the PR complies with the Content and Style guides; link the - author to the relevant part of the guide(s) if it doesn't. - - If you have a question, comment, or other feedback about a given - change, hover over a line and click the blue-and-white `+` symbol that - appears. Type your comment and click **Start a review**. - - If you have more comments, leave them in the same way. 
- - By convention, if you see a small problem that does not have to do with - the main purpose of the PR, such as a typo or whitespace error, you can - call it out, prefixing your comment with `nit:` so that the author knows - you consider it trivial. They should still address it. - - When you've reviewed everything, or if you didn't have any comments, go - back to the top of the page and click **Review changes**. Choose either - **Comment** or **Request Changes**. Add a summary of your review, and - add appropriate - [Prow commands](https://prow.k8s.io/command-help) to separate lines in - the Review Summary field. SIG Docs follows the - [Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process). - All of your comments will be sent to the PR author in a single - notification. - - - If you think the PR is ready to be merged, add the text `/approve` to - your summary. - - If the PR does not need additional technical review, add the - text `/lgtm` as well. - - If the PR *does* need additional technical review, add the text - `/assign` with the GitHub username of the person who needs to - provide technical review. Look at the `reviewers` field in the - front-matter at the top of a given Markdown file to see who can - provide technical review. - - To prevent the PR from being merged, add `/hold`. This sets the - label `do-not-merge/hold`. - - If a PR has no conflicts and has the `lgtm` and `approve` labels but - no `hold` label, it is merged automatically. - - If a PR has the `lgtm` and/or `approve` labels and new changes are - detected, these labels are removed automatically. - - See - [the list of all available slash commands](https://prow.k8s.io/command-help) - that can be used in PRs. - - - If you previously selected **Request changes** and the PR author has - addressed your concerns, you can change your review status either in the - **Files changed** tab or at the bottom of the **Conversation** tab. Be - sure to add the `/approve` tag and assign technical reviewers if necessary, - so that the PR can be merged. - -### Commit into another person's PR - -Leaving PR comments is helpful, but there may be times when you need to commit -into another person's PR, rather than just leaving a review. - -Resist the urge to "take over" for another person unless they explicitly ask -you to, or you want to resurrect a long-abandoned PR. While it may be faster -in the short term, it deprives the person of the chance to contribute. - -The process you use depends on whether you need to edit a file that is already -in the scope of the PR or a file that the PR has not yet touched. - -You can't commit into someone else's PR if either of the following things is -true: - -- If the PR author pushed their branch directly to the - [https://github.com/kubernetes/website/](https://github.com/kubernetes/website/) - repository, only a reviewer with push access can commit into their PR. - Authors should be encouraged to push their branch to their fork before - opening the PR. -- If the PR author explicitly disallowed edits from approvers, you can't - commit into their PR unless they change this setting. - -#### If the file is already changed by the PR - -This method uses the GitHub UI. If you prefer, you can use the command line -even if the file you want to change is part of the PR, if you are more -comfortable working that way. - -1. Click the **Files changed** tab. -2. 
Scroll down to the file you want to edit, and click the pencil icon for - that file. -3. Make your changes, add a commit message in the field below the editor, and - click **Commit changes**. - -Your commit is now pushed to the branch the PR represents (probably on the -author's fork) and now shows up in the PR and your changes are reflected in -the **Files changed** tab. Leave a comment letting the PR author know you -changed the PR. - -If the author is using the command line rather than the GitHub UI to work on -this PR, they need to fetch their fork's changes and rebase their local branch -on the branch in their fork, before doing additional work on the PR. - -#### If the file has not yet been changed by the PR - -If changes need to be made to a file that is not yet included in the PR, you -need to use the command line. You can always use this method, if you prefer it -to the GitHub UI. - -1. Get the URL for the author's fork. You can find it near the bottom of the - **Conversation** tab. Look for the text **Add more commits by pushing to**. - The first link after this phrase is to the branch, and the second link is - to the fork. Copy the second link. Note the name of the branch for later. - -2. Add the fork as a remote. In your terminal, go to your clone of the - repository. Decide on a name to give the remote (such as the author's - GitHub username), and add the remote using the following syntax: - - ```bash - git remote add - ``` - -3. Fetch the remote. This doesn't change any local files, but updates your - clone's notion of the remote's objects (such as branches and tags) and - their current state. - - ```bash - git remote fetch - ``` - -4. Check out the remote branch. This command will fail if you already have a - local branch with the same name. - - ```bash - git checkout - ``` - -5. Make your changes, use `git add` to add them, and commit them. - -6. Push your changes to the author's remote. - - ```bash - git push - ``` - -7. Go back to the GitHub IU and refresh the PR. Your changes appear. Leave the - PR author a comment letting them know you changed the PR. - -If the author is using the command line rather than the GitHub UI to work on -this PR, they need to fetch their fork's changes and rebase their local branch -on the branch in their fork, before doing additional work on the PR. - -## Work from a local clone - -For changes that require multiple files or changes that involve creating new -files or moving files around, working from a local Git clone makes more sense -than relying on the GitHub UI. These instructions use the `git` command and -assume that you have it installed locally. You can adapt them to use a local -graphical Git client instead. - -### Clone the repository - -You only need to clone the repository once per physical system where you work -on the Kubernetes documentation. - -1. Create a fork of the `kubernetes/website` repository on GitHub. In your - web browser, go to - [https://github.com/kubernetes/website](https://github.com/kubernetes/website) - and click the **Fork** button. After a few seconds, you are redirected to - the URL for your fork, which is `https://github.com//website`. - -2. In a terminal window, use `git clone` to clone the your fork. - - ```bash - git clone git@github.com//website - ``` - - The new directory `website` is created in your current directory, with - the contents of your GitHub repository. Your fork is your `origin`. - -3. Change to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote. 
- - ```bash - cd website - - git remote add upstream https://github.com/kubernetes/website.git - ``` - -4. Confirm your `origin` and `upstream` repositories. - - ```bash - git remote -v - ``` - - Output is similar to: - - ```bash - origin git@github.com:/website.git (fetch) - origin git@github.com:/website.git (push) - upstream https://github.com/kubernetes/website (fetch) - upstream https://github.com/kubernetes/website (push) - ``` - -### Work on the local repository - -Before you start a new unit of work on your local repository, you need to figure -out which branch to base your work on. The answer depends on what you are doing, -but the following guidelines apply: - -- For general improvements to existing content, start from `master`. -- For new content that is about features that already exist in a released - version of Kubernetes, start from `master`. -- For long-running efforts that multiple SIG Docs contributors will collaborate on, - such as content reorganization, use a specific feature branch created for that - effort. -- For new content that relates to upcoming but unreleased Kubernetes versions, - use the pre-release feature branch created for that Kubernetes version. - -For more guidance, see -[Choose which branch to use](/docs/contribute/start/#choose-which-git-branch-to-use). - -After you decide which branch to start your work (or _base it on_, in Git -terminology), use the following workflow to be sure your work is based on the -most up-to-date version of that branch. - -1. There are three different copies of the repository when you work locally: - `local`, `upstream`, and `origin`. Fetch both the `origin` and `upstream` remotes. This - updates your cache of the remotes without actually changing any of the copies. - - ```bash - git fetch origin - git fetch upstream - ``` - - This workflow deviates from the one defined in the Community's [GitHub - Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). - In this workflow, you do not need to merge your local copy of `master` with `upstream/master` before - pushing the updates to your fork. That step is not required in - `kubernetes/website` because you are basing your branch on the upstream repository. - -2. Create a local working branch based on the most appropriate upstream branch: - `upstream/dev-1.xx` for feature developers or `upstream/master` for all other - contributors. This example assumes you are basing your work on - `upstream/master`. Because you didn't update your local `master` to match - `upstream/master` in the previous step, you need to explicitly create your - branch off of `upstream/master`. - - ```bash - git checkout -b upstream/master - ``` - -3. With your new branch checked out, make your changes using a text editor. - At any time, use the `git status` command to see what you've changed. - -4. When you are ready to submit a pull request, commit your changes. First - use `git status` to see what changes need to be added to the changeset. - There are two important sections: `Changes staged for commit` and - `Changes not staged for commit`. Any files that show up in the latter - section under `modified` or `untracked` need to be added if you want them to - be part of this commit. For each file that needs to be added, use `git add`. 
- - ```bash - git add example-file.md - ``` - - When all your intended changes are included, create a commit using the - `git commit` command: - - ```bash - git commit -m "Your commit message" - ``` - - {{< note >}} - Do not reference a GitHub issue or pull request by ID or URL in the - commit message. If you do, it will cause that issue or pull request to get - a notification every time the commit shows up in a new Git branch. You can - link issues and pull requests together later in the GitHub UI. - {{< /note >}} - -5. Optionally, you can test your change by staging the site locally using the - `hugo` command. See [View your changes locally](#view-your-changes-locally). - You'll be able to view your changes after you submit the pull request, as - well. - -6. Before you can create a pull request which includes your local commit, you - need to push the branch to your fork, which is the endpoint for the `origin` - remote. - - ```bash - git push origin - ``` - - Technically, you can omit the branch name from the `push` command, but - the behavior in that case depends upon the version of Git you are using. - The results are more repeatable if you include the branch name. - -7. Go to https://github.com/kubernetes/website in your web browser. GitHub - detects that you pushed a new branch to your fork and offers to create a pull - request. Fill in the pull request template. - - - The title should be no more than 50 characters and summarize the intent - of the change. - - The long-form description should contain more information about the fix, - including a line like `Fixes #12345` if the pull request fixes a GitHub - issue. This will cause the issue to be closed automatically when the - pull request is merged. - - You can add labels or other metadata and assign reviewers. See - [Triage and categorize issues](#triage-and-categorize-issues) for the - syntax. - - Click **Create pull request**. - -8. Several automated tests will run against the state of the website with your - changes applied. If any of the tests fail, click the **Details** link for - more information. If the Netlify test completes successfully, its - **Details** link goes to a staged version of the Kubernetes website with - your changes applied. This is how reviewers will check your changes. - -9. When you need to make more changes, address the feedback locally and amend - your original commit. - - ```bash - git commit -a --amend - ``` - - - `-a`: commit all changes - - `--amend`: amend the previous commit, rather than creating a new one - - An editor will open so you can update your commit message if necessary. - - If you use `git commit -m` as in Step 4, you will create a new commit rather - than amending changes to your original commit. Creating a new commit means - you must squash your commits before your pull request can be merged. - - Follow the instructions in Step 6 to push your commit. The commit is added - to your pull request and the tests run again, including re-staging the - Netlify staged site. - -10. If a reviewer adds changes to your pull request, you need to fetch those - changes from your fork before you can add more changes. Use the following - commands to do this, assuming that your branch is currently checked out. - - ```bash - git fetch origin - git rebase origin/ - ``` - - After rebasing, you need to add the `--force-with-lease` flag to - force push the branch's new changes to your fork. - - ```bash - git push --force-with-lease origin - ``` - -11. 
If someone else's change is merged into the branch your work is based on, - and you have made changes to the same parts of the same files, a conflict - might occur. If the pull request shows that there are conflicts to resolve, - you can resolve them using the GitHub UI or you can resolve them locally. - - First, do step 10 to be sure that your fork and your local branch are in - the same state. - - Next, fetch `upstream` and rebase your branch on the branch it was - originally based on, like `upstream/master`. - - ```bash - git fetch upstream - git rebase upstream/master - ``` - - If there are conflicts Git can't automatically resolve, you can see the - conflicted files using the `git status` command. For each conflicted file, - edit it and look for the conflict markers `>>>`, `<<<`, and `===`. Resolve - the conflict and remove the conflict markers. Then add the changes to the - changeset using `git add ` and continue the rebase using - `git rebase --continue`. When all commits have been applied and there are - no more conflicts, `git status` will show that you are not in a rebase and - there are no changes that need to be committed. At that point, force-push - the branch to your fork, and the pull request should no longer show any - conflicts. - -12. If your PR still has multiple commits after amending previous commits, you - must squash multiple commits into a single commit before your PR can be merged. - You can check the number of commits on your PR's `Commits` tab or by running - `git log` locally. Squashing commits is a form of rebasing. - - ```bash - git rebase -i HEAD~ - ``` - - The `-i` switch tells git you want to rebase interactively. This enables - you to tell git which commits to squash into the first one. For - example, you have 3 commits on your branch: - - ``` - 12345 commit 4 (2 minutes ago) - 6789d commit 3 (30 minutes ago) - 456df commit 2 (1 day ago) - ``` - - You must squash your last three commits into the first one. - - ``` - git rebase -i HEAD~3 - ``` - - That command opens an editor with the following: - - ``` - pick 456df commit 2 - pick 6789d commit 3 - pick 12345 commit 4 - ``` - - Change `pick` to `squash` on the commits you want to squash, and make sure - the one `pick` commit is at the top of the editor. - - ``` - pick 456df commit 2 - squash 6789d commit 3 - squash 12345 commit 4 - ``` - - Save and close your editor. Then push your squashed - commit with `git push --force-with-lease origin `. - - -If you're having trouble resolving conflicts or you get stuck with -anything else related to your pull request, ask for help on the `#sig-docs` -Slack channel or the -[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). - -### View your changes locally - -{{< tabs name="tab_with_hugo" >}} -{{% tab name="Hugo in a container" %}} - -If you aren't ready to create a pull request but you want to see what your -changes look like, you can build and run a docker image to generate all the documentation and -serve it locally. - -1. Build the image locally: - - ```bash - make docker-image - ``` - -2. Once the `kubernetes-hugo` image has been built locally, you can build and serve the site: - - ```bash - make docker-serve - ``` - -3. In your browser's address bar, enter `localhost:1313`. Hugo will watch the - filesystem for changes and rebuild the site as needed. - -4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C` - or just close the terminal window. 
-{{% /tab %}} -{{% tab name="Hugo locally" %}} - -Alternatively, you can install and use the `hugo` command on your development machine: - -1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml). - -2. In a terminal, go to the root directory of your clone of the Kubernetes - docs, and enter this command: - - ```bash - hugo server - ``` - -3. In your browser’s address bar, enter `localhost:1313`. - -4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C` - or just close the terminal window. -{{% /tab %}} -{{< /tabs >}} - -## Triage and categorize issues - -People in SIG Docs are responsible only for triaging and categorizing -documentation issues. General website issues are also filed in the -`kubernetes/website` repository. - -When you triage an issue, you: - -- Validate the issue - - Make sure the issue is about website documentation. Some issues can be closed quickly by - answering a question or pointing the reporter to a resource. See the - [Support requests or code bug reports](#support-requests-or-code-bug-reports) section for details. - - Assess whether the issue has merit. Add the `triage/needs-information` label if the issue doesn't have enough - detail to be actionable or the template is not filled out adequately. - Close the issue if it has both the `lifecycle/stale` and `triage/needs-information` labels. -- Add a priority label (the - [Issue Triage Guidelines](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority) - define Priority labels in detail) - - `priority/critical-urgent` - do this right now - - `priority/important-soon` - do this within 3 months - - `priority/important-longterm` - do this within 6 months - - `priority/backlog` - this can be deferred indefinitely; lowest priority; - do this when resources are available - - `priority/awaiting-more-evidence` - placeholder for a potentially good issue - so it doesn't get lost -- Optionally, add a `help` or `good first issue` label if the issue is suitable - for someone with very little Kubernetes or SIG Docs experience. Consult - [Help Wanted and Good First Issue Labels](https://github.com/kubernetes/community/blob/master/contributors/guide/help-wanted.md) - for guidance. -- At your discretion, take ownership of an issue and submit a PR for it - (especially if it is quick or relates to work you were already doing). - -This GitHub Issue [filter](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority%2Fbacklog+-label%3Apriority%2Fimportant-longterm+-label%3Apriority%2Fimportant-soon+-label%3Atriage%2Fneeds-information+-label%3Atriage%2Fsupport+sort%3Acreated-asc) -finds all the issues that need to be triaged. - -If you have questions about triaging an issue, ask in `#sig-docs` on Slack or -the [kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). - -### Add and remove labels - -To add a label, leave a comment like `/` or `/ `. The label must -already exist. If you try to add a label that does not exist, the command is -silently ignored. - -Examples: - -- `/triage needs-information` -- `/priority important-soon` -- `/language ja` -- `/help` -- `/good-first-issue` -- `/lifecycle frozen` - -To remove a label, leave a comment like `/remove-` or `/remove- `. 
- -Examples: - -- `/remove-triage needs-information` -- `/remove-priority important-soon` -- `/remove-language ja` -- `/remove-help` -- `/remove-good-first-issue` -- `/remove-lifecycle frozen` - -The list of all the labels used across Kubernetes is -[here](https://github.com/kubernetes/kubernetes/labels). Not all labels -are used by SIG Docs. - -### More about labels - -- An issue can have multiple labels. -- Some labels use slash notation for grouping, which can be thought of like - "sub-labels". For instance, many `sig/` labels exist, such as `sig/cli` and - `sig/api-machinery` ([full list](https://github.com/kubernetes/website/labels?utf8=%E2%9C%93&q=sig%2F)). -- Some labels are automatically added based on metadata in the files involved - in the issue, slash commands used in the comments of the issue, or - information in the issue text. -- Additional labels are manually added by the person triaging the issue (or the person - reporting the issue) - - `kind/bug`, `kind/feature`, and `kind/documentation`: A bug is a problem with existing content or - functionality, and a feature is a request for new content or functionality. - The `kind/documentation` label is seldom used. - - `language/ja`, `language/ko` and similar [language - labels](https://github.com/kubernetes/website/labels?utf8=%E2%9C%93&q=language) - if the issue is about localized content. - -### Issue lifecycle - -Issues are generally opened and closed within a relatively short time span. -However, sometimes an issue may not have associated activity after it is -created. Other times, an issue may need to remain open for longer than 90 days. - -`lifecycle/stale`: after 90 days with no activity, an issue is automatically -labeled as stale. The issue will be automatically closed if the lifecycle is not -manually reverted using the `/remove-lifecycle stale` command. - -`lifecycle/frozen`: an issue with this label will not become stale after 90 days -of inactivity. A user manually adds this label to issues that need to remain -open for much longer than 90 days, such as those with a -`priority/important-longterm` label. - - -### Handling special issue types - -We encounter the following types of issues often enough to document how -to handle them. - -#### Duplicate issues - -If a single problem has one or more issues open for it, the problem should be -consolidated into a single issue. You should decide which issue to keep open (or -open a new issue), port over all relevant information and link related issues. -Finally, label all other issues that describe the same problem with -`triage/duplicate` and close them. Only having a single issue to work on will -help reduce confusion and avoid duplicating work on the same problem. - -#### Dead link issues - -Depending on where the dead link is reported, different actions are required to -resolve the issue. Dead links in the API and Kubectl docs are automation issues -and should be assigned `/priority critical-urgent` until the problem can be fully understood. All other -dead links are issues that need to be manually fixed and can be assigned `/priority important-longterm`. - -#### Blog issues - -[Kubernetes Blog](https://kubernetes.io/blog/) entries are expected to become -outdated over time, so we maintain only blog entries that are less than one year old. -If an issue is related to a blog entry that is more than one year old, it should be closed -without fixing. 
- -#### Support requests or code bug reports - -Some issues opened for docs are instead issues with the underlying code, or -requests for assistance when something (like a tutorial) didn’t work. For issues -unrelated to docs, close the issue with the `triage/support` label and a comment -directing the requester to support venues (Slack, Stack Overflow) and, if -relevant, where to file an issue for bugs with features (kubernetes/kubernetes -is a great place to start). - -Sample response to a request for support: - -```none -This issue sounds more like a request for support and less -like an issue specifically for docs. I encourage you to bring -your question to the `#kubernetes-users` channel in -[Kubernetes slack](http://slack.k8s.io/). You can also search -resources like -[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) -for answers to similar questions. - -You can also open issues for Kubernetes functionality in - https://github.com/kubernetes/kubernetes. - -If this is a documentation issue, please re-open this issue. -``` - -Sample code bug report response: - -```none -This sounds more like an issue with the code than an issue with -the documentation. Please open an issue at -https://github.com/kubernetes/kubernetes/issues. - -If this is a documentation issue, please re-open this issue. -``` - -## Document new features - -Each major Kubernetes release includes new features, and many of them need -at least a small amount of documentation to show people how to use them. - -Often, the SIG responsible for a feature submits draft documentation for the -feature as a pull request to the appropriate release branch of -`kubernetes/website` repository, and someone on the SIG Docs team provides -editorial feedback or edits the draft directly. - -### Find out about upcoming features - -To find out about upcoming features, attend the weekly sig-release meeting (see -the [community](https://kubernetes.io/community/) page for upcoming meetings) -and monitor the release-specific documentation -in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/) -repository. Each release has a sub-directory under the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases) -directory. Each sub-directory contains a release schedule, a draft of the release -notes, and a document listing each person on the release team. - -- The release schedule contains links to all other documents, meetings, - meeting minutes, and milestones relating to the release. It also contains - information about the goals and timeline of the release, and any special - processes in place for this release. Near the bottom of the document, several - release-related terms are defined. - - This document also contains a link to the **Feature tracking sheet**, which is - the official way to find out about all new features scheduled to go into the - release. -- The release team document lists who is responsible for each release role. If - it's not clear who to talk to about a specific feature or question you have, - either attend the release meeting to ask your question, or contact the release - lead so that they can redirect you. -- The release notes draft is a good place to find out a little more about - specific features, changes, deprecations, and more about the release. The - content is not finalized until late in the release cycle, so use caution. 
- -#### The feature tracking sheet - -The feature tracking sheet -[for a given Kubernetes release](https://github.com/kubernetes/sig-release/tree/master/releases) lists each feature that is planned for a release. -Each line item includes the name of the feature, a link to the feature's main -GitHub issue, its stability level (Alpha, Beta, or Stable), the SIG and -individual responsible for implementing it, whether it -needs docs, a draft release note for the feature, and whether it has been -merged. Keep the following in mind: - -- Beta and Stable features are generally a higher documentation priority than - Alpha features. -- It's hard to test (and therefore, document) a feature that hasn't been merged, - or is at least considered feature-complete in its PR. -- Determining whether a feature needs documentation is a manual process and - just because a feature is not marked as needing docs doesn't mean it doesn't - need them. - -### Document a feature - -As stated above, draft content for new features is usually submitted by the SIG -responsible for implementing the new feature. This means that your role may be -more of a shepherding role for a given feature than developing the documentation -from scratch. - -After you've chosen a feature to document/shepherd, ask about it in the `#sig-docs` -Slack channel, in a weekly sig-docs meeting, or directly on the PR filed by the -feature SIG. If you're given the go-ahead, you can edit into the PR using one of -the techniques described in -[Commit into another person's PR](#commit-into-another-persons-pr). - -If you need to write a new topic, the following links are useful: - -- [Writing a New Topic](/docs/contribute/style/write-new-topic/) -- [Using Page Templates](/docs/contribute/style/page-templates/) -- [Documentation Style Guide](/docs/contribute/style/style-guide/) -- [Documentation Content Guide](/docs/contribute/style/content-guide/) - -### SIG members documenting new features - -If you are a member of a SIG developing a new feature for Kubernetes, you need -to work with SIG Docs to be sure your feature is documented in time for the -release. Check the -[feature tracking spreadsheet](https://github.com/kubernetes/sig-release/tree/master/releases) -or check in the #sig-release Slack channel to verify scheduling details and -deadlines. Some deadlines related to documentation are: - -- **Docs deadline - Open placeholder PRs**: Open a pull request against the - `release-X.Y` branch in the `kubernetes/website` repository, with a small - commit that you will amend later. Use the Prow command `/milestone X.Y` to - assign the PR to the relevant milestone. This alerts the docs person managing - this release that the feature docs are coming. If your feature does not need - any documentation changes, make sure the sig-release team knows this, by - mentioning it in the #sig-release Slack channel. If the feature does need - documentation but the PR is not created, the feature may be removed from the - milestone. -- **Docs deadline - PRs ready for review**: Your PR now needs to contain a first - draft of the documentation for your feature. Don't worry about formatting or - polishing. Just describe what the feature does and how to use it. The docs - person managing the release will work with you to get the content into shape - to be published. If your feature needs documentation and the first draft - content is not received, the feature may be removed from the milestone. 
-- **Docs complete - All PRs reviewed and ready to merge**: If your PR has not - yet been merged into the `release-X.Y` branch by this deadline, work with the - docs person managing the release to get it in. If your feature needs - documentation and the docs are not ready, the feature may be removed from the - milestone. - -If your feature is an Alpha feature and is behind a feature gate, make sure you -add it to [Feature gates](/docs/reference/command-line-tools-reference/feature-gates/) -as part of your pull request. If your feature is moving to Beta -or to General Availability, update the feature gates file. - -## Contribute to other repos - -The [Kubernetes project](https://github.com/kubernetes) contains more than 50 -individual repositories. Many of these repositories contain code or content that -can be considered documentation, such as user-facing help text, error messages, -user-facing text in API references, or even code comments. - -If you see text and you aren't sure where it comes from, you can use GitHub's -search tool at the level of the Kubernetes organization to search through all -repositories for that text. This can help you figure out where to submit your -issue or PR. - -Each repository may have its own processes and procedures. Before you file an -issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and -`code-of-conduct.md`, if they exist. - -Most repositories use issue and PR templates. Have a look through some open -issues and PRs to get a feel for that team's processes. Make sure to fill out -the templates with as much detail as possible when you file issues or PRs. - -## Localize content - -The Kubernetes documentation is written in English first, but we want people to -be able to read it in their language of choice. If you are comfortable -writing in another language, especially in the software domain, you can help -localize the Kubernetes documentation or provide feedback on existing localized -content. See [Localization](/docs/contribute/localization/) and ask on the -[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) -or in `#sig-docs` on Slack if you are interested in helping out. - -### Working with localized content - -Follow these guidelines for working with localized content: - -- Limit PRs to a single language. - - Each language has its own reviewers and approvers. - -- Reviewers, verify that PRs contain changes to only one language. - - If a PR contains changes to source in more than one language, ask the PR contributor to open separate PRs for each language. - -{{% /capture %}} - -{{% capture whatsnext %}} - -When you are comfortable with all of the tasks discussed in this topic and you -want to engage with the Kubernetes docs team in even deeper ways, read the -[advanced docs contributor](/docs/contribute/advanced/) topic. 
- -{{% /capture %}} diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 2522a832f3..bdd27c536e 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -1,13 +1,14 @@ --- -title: Localizing Kubernetes Documentation +title: Localizing Kubernetes documentation content_template: templates/concept approvers: - remyleone - rlenferink - zacharysarah +weight: 50 card: name: contribute - weight: 30 + weight: 50 title: Translating the docs --- @@ -21,9 +22,9 @@ This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i ## Getting started -Because contributors can't approve their own pull requests, you need at least two contributors to begin a localization. +Because contributors can't approve their own pull requests, you need at least two contributors to begin a localization. -All localization teams must be self-sustaining with their own resources. We're happy to host your work, but we can't translate it for you. +All localization teams must be self-sustaining with their own resources. The Kubernetes website is happy to host your work, but it's up to you to translate it. ### Find your two-letter language code @@ -31,7 +32,7 @@ First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/p ### Fork and clone the repo -First, [create your own fork](/docs/contribute/start/#improve-existing-content) of the [kubernetes/website](https://github.com/kubernetes/website) repository. +First, [create your own fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository. Then, clone your fork and `cd` into it: @@ -42,12 +43,12 @@ cd website ### Open a pull request -Next, [open a pull request](/docs/contribute/start/#submit-a-pull-request) (PR) to add a localization to the `kubernetes/website` repository. +Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) to add a localization to the `kubernetes/website` repository. The PR must include all of the [minimum required content](#minimum-required-content) before it can be approved. For an example of adding a new localization, see the PR to enable [docs in French](https://github.com/kubernetes/website/pull/12548). - + ### Join the Kubernetes GitHub organization Once you've opened a localization PR, you can become members of the Kubernetes GitHub organization. Each person on the team needs to create their own [Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose) in the `kubernetes/org` repository. @@ -74,7 +75,7 @@ For an example of adding a label, see the PR for adding the [Italian language la Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). Other localization teams are happy to help you get started and answer any questions you have. -You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605). +You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605). 
## Minimum required content @@ -105,11 +106,11 @@ Add a language-specific subdirectory to the [`content`](https://github.com/kuber mkdir content/de ``` -### Localize the Community Code of Conduct +### Localize the community code of conduct Open a PR against the [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) repository to add the code of conduct in your language. -### Add a localized README +### Add a localized README file To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of k/website, where `**` is the two-letter language code. For example, a German README file would be `README-de.md`. @@ -192,10 +193,10 @@ mkdir -p content/de/docs/tutorials cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md ``` -Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text. +Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text. {{< caution >}} -Machine-generated translation alone does not meet the minimum standard of quality and requires extensive human review to meet that standard. +Machine-generated translation is insufficient on its own. Localization requires extensive human review to meet minimum standards of quality. {{< /caution >}} To ensure accuracy in grammar and meaning, members of your localization team should carefully review all machine-generated translations before publishing. @@ -211,7 +212,7 @@ To find source files for the most recent release: The latest version is {{< latest-version >}}, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}). -### Site strings in i18n/ +### Site strings in i18n Localizations must include the contents of [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) in a new language-specific file. Using German as an example: `i18n/de.toml`. @@ -264,7 +265,7 @@ Teams must merge localized content into the same release branch from which the c An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch. -At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous development branch and the current development branch. +At the beginning of every team milestone, it's helpful to open an issue [comparing upstream changes](https://github.com/kubernetes/website/blob/master/scripts/upstream_changes.py) between the previous development branch and the current development branch. While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required. @@ -272,7 +273,7 @@ For more information about working from forks or directly from the repository, s ## Upstream contributions -SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/intermediate#localize-content) to the English source. +SIG Docs welcomes upstream contributions and corrections to the English source. 
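As a quick sketch of the site strings step described above, a team starting a German (`de`) localization might seed the language-specific strings file from the English one and then translate the values in place; the language code here is illustrative:

```bash
# Copy the English site strings as a starting point (German is used as an example).
cp i18n/en.toml i18n/de.toml
# Then edit i18n/de.toml, translating each string value while keeping the keys unchanged.
```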
## Help an existing localization
diff --git a/content/en/docs/contribute/new-content/_index.md b/content/en/docs/contribute/new-content/_index.md
new file mode 100644
index 0000000000..4992a37654
--- /dev/null
+++ b/content/en/docs/contribute/new-content/_index.md
@@ -0,0 +1,4 @@
+---
+title: Contributing new content
+weight: 20
+---
diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md
new file mode 100644
index 0000000000..8bbad41840
--- /dev/null
+++ b/content/en/docs/contribute/new-content/blogs-case-studies.md
@@ -0,0 +1,60 @@
+---
+title: Submitting blog posts and case studies
+linktitle: Blogs and case studies
+slug: blogs-case-studies
+content_template: templates/concept
+weight: 30
+---
+
+
+{{% capture overview %}}
+
+Anyone can write a blog post and submit it for review.
+Case studies require extensive review before they're approved.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Write a blog post
+
+Blog posts should not be
+vendor pitches. They must contain content that applies broadly to
+the Kubernetes community. The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) manages the review process for blog posts. For more information, see [Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post).
+
+To submit a blog post, you can either:
+
+- Use the
+[Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSdMpMoSIrhte5omZbTE7nB84qcGBy8XnnXhDFoW0h7p2zwXrw/viewform)
+- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. Create new blog posts in the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts) directory.
+
+If you open a pull request, ensure that your blog post follows the correct naming conventions and frontmatter information:
+
+- The markdown file name must follow the format `YYYY-MM-DD-Your-Title-Here.md`. For example, `2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`.
+- The front matter must include the following:
+
+```yaml
+---
+layout: blog
+title: "Your Title Here"
+date: YYYY-MM-DD
+slug: text-for-URL-link-here-no-spaces
+---
+```
+
+## Submit a case study
+
+Case studies highlight how organizations are using Kubernetes to solve
+real-world problems. The Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} collaborate with you on all case studies.
+
+Have a look at the source for the
+[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies).
+
+Use the [Kubernetes case study submission form](https://www.cncf.io/people/end-user-community/)
+to submit your proposal.
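As a small illustration of the blog post naming convention above, creating the file for a new post in a local clone might look like the following sketch; the branch name, date, and title are purely illustrative:

```bash
# Create a working branch and an empty post file named YYYY-MM-DD-Your-Title-Here.md.
git checkout -b my-blog-post
touch content/en/blog/_posts/2020-04-01-My-Example-Post.md
# Add the blog front matter shown above, write the post, then commit and open a PR.
```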
+ +{{% /capture %}} + +{{% capture whatsnext %}} + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md new file mode 100644 index 0000000000..2b886b350d --- /dev/null +++ b/content/en/docs/contribute/new-content/new-features.md @@ -0,0 +1,134 @@ +--- +title: Documenting a feature for a release +linktitle: Documenting for a release +content_template: templates/concept +main_menu: true +weight: 20 +card: + name: contribute + weight: 45 + title: Documenting a feature for a release +--- +{{% capture overview %}} + +Each major Kubernetes release introduces new features that require documentation. New releases also bring updates to existing features and documentation (such as upgrading a feature from alpha to beta). + +Generally, the SIG responsible for a feature submits draft documentation of the +feature as a pull request to the appropriate release branch of the +`kubernetes/website` repository, and someone on the SIG Docs team provides +editorial feedback or edits the draft directly. This section covers the branching +conventions and process used during a release by both groups. + +{{% /capture %}} + +{{% capture body %}} + +## For documentation contributors + +In general, documentation contributors don't write content from scratch for a release. +Instead, they work with the SIG creating a new feature to refine the draft documentation and make it release ready. + +After you've chosen a feature to document or assist, ask about it in the `#sig-docs` +Slack channel, in a weekly SIG Docs meeting, or directly on the PR filed by the +feature SIG. If you're given the go-ahead, you can edit into the PR using one of +the techniques described in +[Commit into another person's PR](/docs/contribute/review/for-approvers/#commit-into-another-persons-pr). + +### Find out about upcoming features + +To find out about upcoming features, attend the weekly SIG Release meeting (see +the [community](https://kubernetes.io/community/) page for upcoming meetings) +and monitor the release-specific documentation +in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/) +repository. Each release has a sub-directory in the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases) +directory. The sub-directory contains a release schedule, a draft of the release +notes, and a document listing each person on the release team. + +The release schedule contains links to all other documents, meetings, +meeting minutes, and milestones relating to the release. It also contains +information about the goals and timeline of the release, and any special +processes in place for this release. Near the bottom of the document, several +release-related terms are defined. + +This document also contains a link to the **Feature tracking sheet**, which is +the official way to find out about all new features scheduled to go into the +release. + +The release team document lists who is responsible for each release role. If +it's not clear who to talk to about a specific feature or question you have, +either attend the release meeting to ask your question, or contact the release +lead so that they can redirect you. + +The release notes draft is a good place to find out about +specific features, changes, deprecations, and more about the release. The +content is not finalized until late in the release cycle, so use caution. 
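If you prefer to browse these release documents locally rather than on GitHub, a quick sketch:

```bash
# Clone the SIG Release repository and list the per-release sub-directories.
git clone https://github.com/kubernetes/sig-release.git
ls sig-release/releases/
```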
+ +### Feature tracking sheet + +The feature tracking sheet [for a given Kubernetes release](https://github.com/kubernetes/sig-release/tree/master/releases) +lists each feature that is planned for a release. +Each line item includes the name of the feature, a link to the feature's main +GitHub issue, its stability level (Alpha, Beta, or Stable), the SIG and +individual responsible for implementing it, whether it +needs docs, a draft release note for the feature, and whether it has been +merged. Keep the following in mind: + +- Beta and Stable features are generally a higher documentation priority than + Alpha features. +- It's hard to test (and therefore to document) a feature that hasn't been merged, + or is at least considered feature-complete in its PR. +- Determining whether a feature needs documentation is a manual process and + just because a feature is not marked as needing docs doesn't mean it doesn't + need them. + +## For developers or other SIG members + +This section is information for members of other Kubernetes SIGs documenting new features +for a release. + +If you are a member of a SIG developing a new feature for Kubernetes, you need +to work with SIG Docs to be sure your feature is documented in time for the +release. Check the +[feature tracking spreadsheet](https://github.com/kubernetes/sig-release/tree/master/releases) +or check in the `#sig-release` Kubernetes Slack channel to verify scheduling details and +deadlines. + +### Open a placeholder PR + +1. Open a pull request against the +`release-X.Y` branch in the `kubernetes/website` repository, with a small +commit that you will amend later. +2. Use the Prow command `/milestone X.Y` to +assign the PR to the relevant milestone. This alerts the docs person managing +this release that the feature docs are coming. + +If your feature does not need +any documentation changes, make sure the sig-release team knows this, by +mentioning it in the `#sig-release` Slack channel. If the feature does need +documentation but the PR is not created, the feature may be removed from the +milestone. + +### PR ready for review + +When ready, populate your placeholder PR with feature documentation. + +Do your best to describe your feature and how to use it. If you need help structuring your documentation, ask in the `#sig-docs` slack channel. + +When you complete your content, the documentation person assigned to your feature reviews it. Use their suggestions to get the content to a release ready state. + +If your feature needs documentation and the first draft +content is not received, the feature may be removed from the milestone. + +### All PRs reviewed and ready to merge + +If your PR has not yet been merged into the `release-X.Y` branch by the release deadline, work with the +docs person managing the release to get it in by the deadline. If your feature needs +documentation and the docs are not ready, the feature may be removed from the +milestone. + +If your feature is an Alpha feature and is behind a feature gate, make sure you +add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table +as part of your pull request. If your feature is moving out of Alpha, make sure to +remove it from that table. 
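Putting the placeholder step above together, a minimal sequence might look like the following sketch; the release version, branch name, and file path are illustrative assumptions, not fixed conventions:

```bash
# Create a placeholder branch from the release branch (release-1.18 is illustrative).
git fetch upstream
git checkout -b my-feature-placeholder upstream/release-1.18
# Add a stub page that you will amend later (path and filename are illustrative).
touch content/en/docs/concepts/my-feature.md
git add content/en/docs/concepts/my-feature.md
git commit -s -m "Placeholder for my-feature documentation"
git push origin my-feature-placeholder
# Open a PR against release-1.18 in kubernetes/website, then comment /milestone 1.18 on it.
```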
+ +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md new file mode 100644 index 0000000000..d9f670ca44 --- /dev/null +++ b/content/en/docs/contribute/new-content/open-a-pr.md @@ -0,0 +1,484 @@ +--- +title: Opening a pull request +slug: new-content +content_template: templates/concept +weight: 10 +card: + name: contribute + weight: 40 +--- + +{{% capture overview %}} + +{{< note >}} +**Code developers**: If you are documenting a new feature for an +upcoming Kubernetes release, see +[Document a new feature](/docs/contribute/new-content/new-features/). +{{< /note >}} + +To contribute new content pages or improve existing content pages, open a pull request (PR). Make sure you follow all the requirements in the [Before you begin](#before-you-begin) section. + +If your change is small, or you're unfamiliar with git, read [Changes using GitHub](#changes-using-github) to learn how to edit a page. + +If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make changes locally on your computer. + +{{% /capture %}} + +{{% capture body %}} + +## Changes using GitHub + +If you're less experienced with git workflows, here's an easier method of +opening a pull request. + +1. On the page where you see the issue, select the pencil icon at the top right. + You can also scroll to the bottom of the page and select **Edit this page**. + +2. Make your changes in the GitHub markdown editor. + +3. Below the editor, fill in the **Propose file change** + form. In the first field, give your commit message a title. In + the second field, provide a description. + + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request + description later. + {{< /note >}} + +4. Select **Propose file change**. + +5. Select **Create pull request**. + +6. The **Open a pull request** screen appears. Fill in the form: + + - The **Subject** field of the pull request defaults to the commit summary. + You can change it if needed. + - The **Body** contains your extended commit message, if you have one, + and some template text. Add the + details the template text asks for, then delete the extra template text. + - Leave the **Allow edits from maintainers** checkbox selected. + + {{< note >}} + PR descriptions are a great way to help reviewers understand your change. For more information, see [Opening a PR](#open-a-pr). + {{}} + +7. Select **Create pull request**. + +### Addressing feedback in GitHub + +Before merging a pull request, Kubernetes community members review and +approve it. The `k8s-ci-robot` suggests reviewers based on the nearest +owner mentioned in the pages. If you have someone specific in mind, +leave a comment with their GitHub username in it. + +If a reviewer asks you to make changes: + +1. Go to the **Files changed** tab. +2. Select the pencil (edit) icon on any files changed by the +pull request. +3. Make the changes requested. +4. Commit the changes. + +If you are waiting on a reviewer, reach out once every 7 days. You can also post a message in the `#sig-docs` Slack channel. + +When your review is complete, a reviewer merges your PR and your changes go live a few minutes later. 
+ +## Work from a local fork {#fork-the-repo} + +If you're more experienced with git, or if your changes are larger than a few lines, +work from a local fork. + +Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed on your computer. You can also use a git UI application. + +### Fork the kubernetes/website repository + +1. Navigate to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. +2. Select **Fork**. + +### Create a local clone and set the upstream + +3. In a terminal window, clone your fork: + + ```bash + git clone git@github.com//website + ``` + +4. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote: + + ```bash + cd website + + git remote add upstream https://github.com/kubernetes/website.git + ``` + +5. Confirm your `origin` and `upstream` repositories: + + ```bash + git remote -v + ``` + + Output is similar to: + + ```bash + origin git@github.com:/website.git (fetch) + origin git@github.com:/website.git (push) + upstream https://github.com/kubernetes/website (fetch) + upstream https://github.com/kubernetes/website (push) + ``` + +6. Fetch commits from your fork's `origin/master` and `kubernetes/website`'s `upstream/master`: + + ```bash + git fetch origin + git fetch upstream + ``` + + This makes sure your local repository is up to date before you start making changes. + + {{< note >}} + This workflow is different than the [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). You do not need to merge your local copy of `master` with `upstream/master` before pushing updates to your fork. + {{< /note >}} + +### Create a branch + +1. Decide which branch base to your work on: + + - For improvements to existing content, use `upstream/master`. + - For new content about existing features, use `upstream/master`. + - For localized content, use the localization's conventions. For more information, see [localizing Kubernetes documentation](/docs/contribute/localization/). + - For new features in an upcoming Kubernetes release, use the feature branch. For more information, see [documenting for a release](/docs/contribute/new-content/new-features/). + - For long-running efforts that multiple SIG Docs contributors collaborate on, + like content reorganization, use a specific feature branch created for that + effort. + + If you need help choosing a branch, ask in the `#sig-docs` Slack channel. + +2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is `upstream/master`: + + ```bash + git checkout -b upstream/master + ``` + +3. Make your changes using a text editor. + +At any time, use the `git status` command to see what files you've changed. + +### Commit your changes + +When you are ready to submit a pull request, commit your changes. + +1. In your local repository, check which files you need to commit: + + ```bash + git status + ``` + + Output is similar to: + + ```bash + On branch + Your branch is up to date with 'origin/'. + + Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git checkout -- ..." to discard changes in working directory) + + modified: content/en/docs/contribute/new-content/contributing-content.md + + no changes added to commit (use "git add" and/or "git commit -a") + ``` + +2. 
Add the files listed under **Changes not staged for commit** to the commit: + + ```bash + git add + ``` + + Repeat this for each file. + +3. After adding all the files, create a commit: + + ```bash + git commit -m "Your commit message" + ``` + + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request + description later. + {{< /note >}} + +4. Push your local branch and its new commit to your remote fork: + + ```bash + git push origin + ``` + +### Preview your changes locally {#preview-locally} + +It's a good idea to preview your changes locally before pushing them or opening a pull request. A preview lets you catch build errors or markdown formatting problems. + +You can either build the website's docker image or run Hugo locally. Building the docker image is slower but displays [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/), which can be useful for debugging. + +{{< tabs name="tab_with_hugo" >}} +{{% tab name="Hugo in a container" %}} + +1. Build the image locally: + + ```bash + make docker-image + ``` + +2. After building the `kubernetes-hugo` image locally, build and serve the site: + + ```bash + make docker-serve + ``` + +3. In a web browser, navigate to `https://localhost:1313`. Hugo watches the + changes and rebuilds the site as needed. + +4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. + +{{% /tab %}} +{{% tab name="Hugo on the command line" %}} + +Alternately, install and use the `hugo` command on your computer: + +5. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml). + +6. In a terminal, go to your Kubernetes website repository and start the Hugo server: + + ```bash + cd /website + hugo server + ``` + +7. In your browser’s address bar, enter `https://localhost:1313`. + +8. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. + +{{% /tab %}} +{{< /tabs >}} + +### Open a pull request from your fork to kubernetes/website {#open-a-pr} + +1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. +2. Select **New Pull Request**. +3. Select **compare across forks**. +4. From the **head repository** drop-down menu, select your fork. +5. From the **compare** drop-down menu, select your branch. +6. Select **Create Pull Request**. +7. Add a description for your pull request: + - **Title** (50 characters or less): Summarize the intent of the change. + - **Description**: Describe the change in more detail. + - If there is a related GitHub issue, include `Fixes #12345` or `Closes #12345` in the description. GitHub's automation closes the mentioned issue after merging the PR if used. If there are other related PRs, link those as well. + - If you want advice on something specific, include any questions you'd like reviewers to think about in your description. + +8. Select the **Create pull request** button. + + Congratulations! Your pull request is available in [Pull requests](https://github.com/kubernetes/website/pulls). + + +After opening a PR, GitHub runs automated tests and tries to deploy a preview using [Netlify](https://www.netlify.com/). 
+ + - If the Netlify build fails, select **Details** for more information. + - If the Netlify build succeeds, select **Details** opens a staged version of the Kubernetes website with your changes applied. This is how reviewers check your changes. + +GitHub also automatically assigns labels to a PR, to help reviewers. You can add them too, if needed. For more information, see [Adding and removing issue labels](/docs/contribute/review/for-approvers/#adding-and-removing-issue-labels). + +### Addressing feedback locally + +1. After making your changes, amend your previous commit: + + ```bash + git commit -a --amend + ``` + + - `-a`: commits all changes + - `--amend`: amends the previous commit, rather than creating a new one + +2. Update your commit message if needed. + +3. Use `git push origin ` to push your changes and re-run the Netlify tests. + + {{< note >}} + If you use `git commit -m` instead of amending, you must [squash your commits](#squashing-commits) before merging. + {{< /note >}} + +#### Changes from reviewers + +Sometimes reviewers commit to your pull request. Before making any other changes, fetch those commits. + +1. Fetch commits from your remote fork and rebase your working branch: + + ```bash + git fetch origin + git rebase origin/ + ``` + +2. After rebasing, force-push new changes to your fork: + + ```bash + git push --force-with-lease origin + ``` + +#### Merge conflicts and rebasing + +{{< note >}} +For more information, see [Git Branching - Basic Branching and Merging](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts), [Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), or ask in the `#sig-docs` Slack channel for help. +{{< /note >}} + +If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR. + +1. Update your fork and rebase your local branch: + + ```bash + git fetch origin + git rebase origin/ + ``` + + Then force-push the changes to your fork: + + ```bash + git push --force-with-lease origin + ``` + +2. Fetch changes from `kubernetes/website`'s `upstream/master` and rebase your branch: + + ```bash + git fetch upstream + git rebase upstream/master + ``` + +3. Inspect the results of the rebase: + + ```bash + git status + ``` + + This results in a number of files marked as conflicted. + +4. Open each conflicted file and look for the conflict markers: `>>>`, `<<<`, and `===`. Resolve the conflict and delete the conflict marker. + + {{< note >}} + For more information, see [How conflicts are presented](https://git-scm.com/docs/git-merge#_how_conflicts_are_presented). + {{< /note >}} + +5. Add the files to the changeset: + + ```bash + git add + ``` +6. Continue the rebase: + + ```bash + git rebase --continue + ``` + +7. Repeat steps 2 to 5 as needed. + + After applying all commits, the `git status` command shows that the rebase is complete. + +8. Force-push the branch to your fork: + + ```bash + git push --force-with-lease origin + ``` + + The pull request no longer shows any conflicts. + + +### Squashing commits + +{{< note >}} +For more information, see [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History), or ask in the `#sig-docs` Slack channel for help. +{{< /note >}} + +If your PR has multiple commits, you must squash them into a single commit before merging your PR. 
You can check the number of commits on your PR's **Commits** tab or by running the `git log` command locally.
+
+{{< note >}}
+This topic assumes `vim` as the command line text editor.
+{{< /note >}}
+
+1. Start an interactive rebase:
+
+    ```bash
+    git rebase -i HEAD~<n_of_commits>
+    ```
+
+    Squashing commits is a form of rebasing. The `-i` switch tells git you want to rebase interactively. `HEAD~<n_of_commits>` indicates how many commits to include in the rebase.
+
+    {{< note >}}
+    For more information, see [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode).
+    {{< /note >}}
+
+2. Start editing the file.
+
+    Change the original text:
+
+    ```bash
+    pick d875112ca Original commit
+    pick 4fa167b80 Address feedback 1
+    pick 7d54e15ee Address feedback 2
+    ```
+
+    To:
+
+    ```bash
+    pick d875112ca Original commit
+    squash 4fa167b80 Address feedback 1
+    squash 7d54e15ee Address feedback 2
+    ```
+
+    This squashes commits `4fa167b80 Address feedback 1` and `7d54e15ee Address feedback 2` into `d875112ca Original commit`, leaving only `d875112ca Original commit` as a part of the timeline.
+
+3. Save and exit your file.
+
+4. Push your squashed commit:
+
+    ```bash
+    git push --force-with-lease origin
+    ```
+
+## Contribute to other repos
+
+The [Kubernetes project](https://github.com/kubernetes) contains 50+ repositories. Many of these repositories contain documentation: user-facing help text, error messages, API references or code comments.
+
+If you see text you'd like to improve, use GitHub to search all repositories in the Kubernetes organization.
+This can help you figure out where to submit your issue or PR.
+
+Each repository has its own processes and procedures. Before you file an
+issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and
+`code-of-conduct.md`, if they exist.
+
+Most repositories use issue and PR templates. Have a look through some open
+issues and PRs to get a feel for that team's processes. Make sure to fill out
+the templates with as much detail as possible when you file issues or PRs.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+- Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process.
+
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/en/docs/contribute/new-content/overview.md b/content/en/docs/contribute/new-content/overview.md
new file mode 100644
index 0000000000..7a5043d56c
--- /dev/null
+++ b/content/en/docs/contribute/new-content/overview.md
@@ -0,0 +1,73 @@
+---
+title: Contributing new content overview
+linktitle: Overview
+content_template: templates/concept
+main_menu: true
+weight: 5
+---
+
+{{% capture overview %}}
+
+This section contains information you should know before contributing new content.
+
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Contributing basics
+
+- Write Kubernetes documentation in Markdown and build the Kubernetes site using [Hugo](https://gohugo.io/).
+- The source is in [GitHub](https://github.com/kubernetes/website). You can find Kubernetes documentation at `/content/en/docs/`. Some of the reference documentation is automatically generated from scripts in the `update-imported-docs/` directory.
+- [Page templates](/docs/contribute/style/page-templates/) control the presentation of documentation content in Hugo.
+- In addition to the standard Hugo shortcodes, we use a number of [custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) in our documentation to control the presentation of content.
+- Documentation source is available in multiple languages in `/content/`.
Each language has its own folder with a two-letter code determined by the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php). For example, English documentation source is stored in `/content/en/docs/`. +- For more information about contributing to documentation in multiple languages or starting a new translation, see [localization](/docs/contribute/localization). + +## Before you begin {#before-you-begin} + +### Sign the CNCF CLA {#sign-the-cla} + +All Kubernetes contributors **must** read the [Contributor guide](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) and [sign the Contributor License Agreement (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md). + +Pull requests from contributors who haven't signed the CLA fail the automated tests. + +### Configure commit signoffs + +All commits to Kubernetes repositories must be _signed off_ using the Git `--signoff` or `-s` flag. +The signoff acknowledges that you have the rights to submit contributions under the same +license and [Developer Certificate of Origin](https://developercertificate.org/). + +If you're using a Git UI app, you can use the app's commit template functionality if it +exists, or add the following to your commit message body: + +``` +Signed-off-by: Your Name +``` + +In both cases, the name and email you provide must match those found in your `git config`, and your git name and email must match those used for the CNCF CLA. + +### Choose which Git branch to use + +When opening a pull request, you need to know in advance which branch to base your work on. + +Scenario | Branch +:---------|:------------ +Existing or new English language content for the current release | `master` +Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-release-`. For example, if a feature changes in the `{{< latest-version >}}` release, then add documentation changes to the ``dev-{{< release-branch >}}`` branch. +Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branching-strategy) for more information. + + +If you're still not sure which branch to choose, ask in `#sig-docs` on Slack. + +{{< note >}} +If you already submitted your pull request and you know that the base branch +was wrong, you (and only you, the submitter) can change it. +{{< /note >}} + +### Languages per PR + +Limit pull requests to one language per PR. If you need to make an identical change to the same code sample in multiple languages, open a separate PR for each language. + + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md index ac384c8eed..3f491dc856 100644 --- a/content/en/docs/contribute/participating.md +++ b/content/en/docs/contribute/participating.md @@ -1,9 +1,10 @@ --- title: Participating in SIG Docs content_template: templates/concept +weight: 60 card: name: contribute - weight: 40 + weight: 60 --- {{% capture overview %}} @@ -35,7 +36,7 @@ aspects of Kubernetes -- the Kubernetes website and documentation. ## Roles and responsibilities -- **Anyone** can contribute to Kubernetes documentation. To contribute, you must [sign the CLA](/docs/contribute/start#sign-the-cla) and have a GitHub account. +- **Anyone** can contribute to Kubernetes documentation. 
To contribute, you must [sign the CLA](/docs/contribute/new-content/overview/#sign-the-cla) and have a GitHub account. - **Members** of the Kubernetes organization are contributors who have spent time and effort on the Kubernetes project, usually by opening pull requests with accepted changes. See [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for membership criteria. - A SIG Docs **Reviewer** is a member of the Kubernetes organization who has expressed interest in reviewing documentation pull requests, and has been @@ -61,7 +62,7 @@ Anyone can do the following: If you are not a member of the Kubernetes organization, using `/lgtm` has no effect on automated systems. {{< /note >}} -After [signing the CLA](/docs/contribute/start#sign-the-cla), anyone can also: +After [signing the CLA](/docs/contribute/new-content/overview/#sign-the-cla), anyone can also: - Open a pull request to improve existing content, add new content, or write a blog post or case study. ## Members @@ -307,7 +308,8 @@ SIG Docs approvers. Here's how it works. For more information about contributing to the Kubernetes documentation, see: -- [Start contributing](/docs/contribute/start/) -- [Documentation style](/docs/contribute/style/) +- [Contributing new content](/docs/contribute/overview/) +- [Reviewing content](/docs/contribute/review/reviewing-prs) +- [Documentation style guide](/docs/contribute/style/) {{% /capture %}} diff --git a/content/en/docs/contribute/review/_index.md b/content/en/docs/contribute/review/_index.md new file mode 100644 index 0000000000..bc70e3c6f1 --- /dev/null +++ b/content/en/docs/contribute/review/_index.md @@ -0,0 +1,14 @@ +--- +title: Reviewing changes +weight: 30 +--- + +{{% capture overview %}} + +This section describes how to review content. + +{{% /capture %}} + +{{% capture body %}} + +{{% /capture %}} diff --git a/content/en/docs/contribute/review/for-approvers.md b/content/en/docs/contribute/review/for-approvers.md new file mode 100644 index 0000000000..f2be84f10d --- /dev/null +++ b/content/en/docs/contribute/review/for-approvers.md @@ -0,0 +1,228 @@ +--- +title: Reviewing for approvers and reviewers +linktitle: For approvers and reviewers +slug: for-approvers +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +SIG Docs [Reviewers](/docs/contribute/participating/#reviewers) and [Approvers](/docs/contribute/participating/#approvers) do a few extra things when reviewing a change. + +Every week a specific docs approver volunteers to triage +and review pull requests. This +person is the "PR Wrangler" for the week. See the +[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) for more information. To become a PR Wrangler, attend the weekly SIG Docs meeting and volunteer. Even if you are not on the schedule for the current week, you can still review pull +requests (PRs) that are not already under active review. + +In addition to the rotation, a bot assigns reviewers and approvers +for the PR based on the owners for the affected files. + +{{% /capture %}} + + +{{% capture body %}} + +## Reviewing a PR + +Kubernetes documentation follows the [Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process). 
+ +Everything described in [Reviewing a pull request](/docs/contribute/review/reviewing-prs) applies, but Reviewers and Approvers should also do the following: + +- Using the `/assign` Prow command to assign a specific reviewer to a PR as needed. This is extra important +when it comes to requesting technical review from code contributors. + + {{< note >}} + Look at the `reviewers` field in the front-matter at the top of a Markdown file to see who can + provide technical review. + {{< /note >}} + +- Making sure the PR follows the [Content](/docs/contribute/style/content-guide/) and [Style](/docs/contribute/style/style-guide/) guides; link the author to the relevant part of the guide(s) if it doesn't. +- Using the GitHub **Request Changes** option when applicable to suggest changes to the PR author. +- Changing your review status in GitHub using the `/approve` or `/lgtm` Prow commands, if your suggestions are implemented. + +## Commit into another person's PR + +Leaving PR comments is helpful, but there might be times when you need to commit +into another person's PR instead. + +Do not "take over" for another person unless they explicitly ask +you to, or you want to resurrect a long-abandoned PR. While it may be faster +in the short term, it deprives the person of the chance to contribute. + +The process you use depends on whether you need to edit a file that is already +in the scope of the PR, or a file that the PR has not yet touched. + +You can't commit into someone else's PR if either of the following things is +true: + +- If the PR author pushed their branch directly to the + [https://github.com/kubernetes/website/](https://github.com/kubernetes/website/) + repository. Only a reviewer with push access can commit to another user's PR. + + {{< note >}} + Encourage the author to push their branch to their fork before + opening the PR next time. + {{< /note >}} + +- The PR author explicitly disallows edits from approvers. + +## Prow commands for reviewing + +[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is +the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow +enables chatbot-style commands to handle GitHub actions across the Kubernetes +organization, like [adding and removing +labels](#add-and-remove-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/` format. + +The most common prow commands reviewers and approvers use are: + +{{< table caption="Prow commands for reviewing" >}} +Prow Command | Role Restrictions | Description +:------------|:------------------|:----------- +`/lgtm` | Anyone, but triggers automation if a Reviewer or Approver uses it | Signals that you've finished reviewing a PR and are satisfied with the changes. +`/approve` | Approvers | Approves a PR for merging. +`/assign` | Reviewers or Approvers | Assigns a person to review or approve a PR +`/close` | Reviewers or Approvers | Closes an issue or PR. +`/hold` | Anyone | Adds the `do-not-merge/hold` label, indicating the PR cannot be automatically merged. +`/hold cancel` | Anyone | Removes the `do-not-merge/hold` label. +{{< /table >}} + +See [the Prow command reference](https://prow.k8s.io/command-help) to see the full list +of commands you can use in a PR. + +## Triage and categorize issues + + +In general, SIG Docs follows the [Kubernetes issue triage](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md) process and uses the same labels. 
+
+
+This GitHub Issue [filter](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority%2Fbacklog+-label%3Apriority%2Fimportant-longterm+-label%3Apriority%2Fimportant-soon+-label%3Atriage%2Fneeds-information+-label%3Atriage%2Fsupport+sort%3Acreated-asc)
+finds issues that might need triage.
+
+### Triaging an issue
+
+1. Validate the issue
+   - Make sure the issue is about website documentation. Some issues can be closed quickly by
+     answering a question or pointing the reporter to a resource. See the
+     [Support requests or code bug reports](#support-requests-or-code-bug-reports) section for details.
+   - Assess whether the issue has merit.
+   - Add the `triage/needs-information` label if the issue doesn't have enough
+     detail to be actionable or the template is not filled out adequately.
+   - Close the issue if it has both the `lifecycle/stale` and `triage/needs-information` labels.
+
+2. Add a priority label (the
+   [Issue Triage Guidelines](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority) define priority labels in detail)
+
+   {{< table caption="Issue labels" >}}
+   Label | Description
+   :------------|:------------------
+   `priority/critical-urgent` | Do this right now.
+   `priority/important-soon` | Do this within 3 months.
+   `priority/important-longterm` | Do this within 6 months.
+   `priority/backlog` | Deferrable indefinitely. Do when resources are available.
+   `priority/awaiting-more-evidence` | Placeholder for a potentially good issue so it doesn't get lost.
+   `help` or `good first issue` | Suitable for someone with very little Kubernetes or SIG Docs experience. See [Help Wanted and Good First Issue Labels](https://github.com/kubernetes/community/blob/master/contributors/guide/help-wanted.md) for more information.
+
+   {{< /table >}}
+
+   At your discretion, take ownership of an issue and submit a PR for it
+   (especially if it's quick or relates to work you're already doing).
+
+If you have questions about triaging an issue, ask in `#sig-docs` on Slack or
+the [kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+
+## Adding and removing issue labels
+
+To add a label, leave a comment in one of the following formats:
+
+- `/<label-to-add>` (for example, `/good-first-issue`)
+- `/<label-category> <label-to-add>` (for example, `/triage needs-information` or `/language ja`)
+
+To remove a label, leave a comment in one of the following formats:
+
+- `/remove-<label-to-remove>` (for example, `/remove-help`)
+- `/remove-<label-category> <label-to-remove>` (for example, `/remove-triage needs-information`)
+
+In both cases, the label must already exist. If you try to add a label that does not exist, the command is
+silently ignored.
+
+For a list of all labels, see the [website repository's Labels section](https://github.com/kubernetes/website/labels). Not all labels are used by SIG Docs.
+
+### Issue lifecycle labels
+
+Issues are generally opened and closed quickly.
+However, sometimes an issue is inactive after it's opened.
+Other times, an issue may need to remain open for longer than 90 days.
+
+{{< table caption="Issue lifecycle labels" >}}
+Label | Description
+:------------|:------------------
+`lifecycle/stale` | After 90 days with no activity, an issue is automatically labeled as stale. The issue will be automatically closed if the lifecycle is not manually reverted using the `/remove-lifecycle stale` command.
+`lifecycle/frozen` | An issue with this label will not become stale after 90 days of inactivity.
A user manually adds this label to issues that need to remain open for much longer than 90 days, such as those with a `priority/important-longterm` label. +{{< /table >}} + +## Handling special issue types + +SIG Docs encounters the following types of issues often enough to document how +to handle them. + +### Duplicate issues + +If a single problem has one or more issues open for it, combine them into a single issue. +You should decide which issue to keep open (or +open a new issue), then move over all relevant information and link related issues. +Finally, label all other issues that describe the same problem with +`triage/duplicate` and close them. Only having a single issue to work on reduces confusion +and avoids duplicate work on the same problem. + +### Dead link issues + +If the dead link issue is in the API or `kubectl` documentation, assign them `/priority critical-urgent` until the problem is fully understood. Assign all other dead link issues `/priority important-longterm`, as they must be manually fixed. + +### Blog issues + +We expect [Kubernetes Blog](https://kubernetes.io/blog/) entries to become +outdated over time. Therefore, we only maintain blog entries less than a year old. +If an issue is related to a blog entry that is more than one year old, +close the issue without fixing. + +### Support requests or code bug reports + +Some docs issues are actually issues with the underlying code, or requests for +assistance when something, for example a tutorial, doesn't work. +For issues unrelated to docs, close the issue with the `triage/support` label and a comment +directing the requester to support venues (Slack, Stack Overflow) and, if +relevant, the repository to file an issue for bugs with features (`kubernetes/kubernetes` +is a great place to start). + +Sample response to a request for support: + +```none +This issue sounds more like a request for support and less +like an issue specifically for docs. I encourage you to bring +your question to the `#kubernetes-users` channel in +[Kubernetes slack](http://slack.k8s.io/). You can also search +resources like +[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) +for answers to similar questions. + +You can also open issues for Kubernetes functionality in +https://github.com/kubernetes/kubernetes. + +If this is a documentation issue, please re-open this issue. +``` + +Sample code bug report response: + +```none +This sounds more like an issue with the code than an issue with +the documentation. Please open an issue at +https://github.com/kubernetes/kubernetes/issues. + +If this is a documentation issue, please re-open this issue. +``` + + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/review/reviewing-prs.md b/content/en/docs/contribute/review/reviewing-prs.md new file mode 100644 index 0000000000..cb432a97ba --- /dev/null +++ b/content/en/docs/contribute/review/reviewing-prs.md @@ -0,0 +1,98 @@ +--- +title: Reviewing pull requests +content_template: templates/concept +main_menu: true +weight: 10 +--- + +{{% capture overview %}} + +Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests. + +Reviewing documentation pull requests is a +great way to introduce yourself to the Kubernetes community. +It helps you learn the code base and build trust with other contributors. 
+ +Before reviewing, it's a good idea to: + +- Read the [content guide](/docs/contribute/style/content-guide/) and +[style guide](/docs/contribute/style/style-guide/) so you can leave informed comments. +- Understand the different [roles and responsibilities](/docs/contribute/participating/#roles-and-responsibilities) in the Kubernetes documentation community. + +{{% /capture %}} + +{{% capture body %}} + +## Before you begin + +Before you start a review: + +- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and ensure that you abide by it at all times. +- Be polite, considerate, and helpful. +- Comment on positive aspects of PRs as well as changes. +- Be empathetic and mindful of how your review may be received. +- Assume good intent and ask clarifying questions. +- Experienced contributors, consider pairing with new contributors whose work requires extensive changes. + +## Review process + +In general, review pull requests for content and style in English. + +1. Go to + [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). + You see a list of every open pull request against the Kubernetes website and + docs. + +2. Filter the open PRs using one or all of the following labels: + - `cncf-cla: yes` (Recommended): PRs submitted by contributors who have not signed the CLA cannot be merged. See [Sign the CLA](/docs/contribute/new-content/overview/#sign-the-cla) for more information. + - `language/en` (Recommended): Filters for English language PRs only. + - `size/<size>`: filters for PRs of a certain size. If you're new, start with smaller PRs. + + Additionally, ensure the PR isn't marked as a work in progress. PRs using the `work in progress` label are not ready for review yet. + +3. Once you've selected a PR to review, understand the change by: + - Reading the PR description to understand the changes made, and read any linked issues + - Reading any comments by other reviewers + - Clicking the **Files changed** tab to see the files and lines changed + - Previewing the changes in the Netlify preview build by scrolling to the PR's build check section at the bottom of the **Conversation** tab and clicking the **deploy/netlify** line's **Details** link. + +4. Go to the **Files changed** tab to start your review. + 1. Click on the `+` symbol beside the line you want to comment on. + 2. Fill in any comments you have about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make). + 3. When finished, click **Review changes** at the top of the page. Here, you can + add a summary of your review (and leave some positive comments for the contributor!), + approve the PR, comment or request changes as needed. New contributors should always + choose **Comment**. + +## Reviewing checklist + +When reviewing, use the following as a starting point. + +### Language and grammar + +- Are there any obvious errors in language or grammar? Is there a better way to phrase something? +- Are there any complicated or archaic words which could be replaced with a simpler word? +- Are there any words, terms or phrases in use which could be replaced with a non-discriminatory alternative? +- Does the word choice and its capitalization follow the [style guide](/docs/contribute/style/style-guide/)? +- Are there long sentences which could be shorter or less complex? +- Are there any long paragraphs which might work better as a list or table?
+ +### Content + +- Does similar content exist elsewhere on the Kubernetes site? +- Does the content excessively link to off-site, individual vendor or non-open source documentation? + +### Website + +- Did this PR change or remove a page title, slug/alias or anchor link? If so, are there broken links as a result of this PR? Is there another option, like changing the page title without changing the slug? +- Does the PR introduce a new page? If so: + - Is the page using the right [page template](/docs/contribute/style/page-templates/) and associated Hugo shortcodes? + - Does the page appear correctly in the section's side navigation (or at all)? + - Should the page appear on the [Docs Home](/docs/home/) listing? +- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes and images. + +### Other + +For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical. + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md deleted file mode 100644 index 181e359682..0000000000 --- a/content/en/docs/contribute/start.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -title: Start contributing -slug: start -content_template: templates/concept -weight: 10 -card: - name: contribute - weight: 10 ---- - -{{% capture overview %}} - -If you want to get started contributing to the Kubernetes documentation, this -page and its linked topics can help you get started. You don't need to be a -developer or a technical writer to make a big impact on the Kubernetes -documentation and user experience! All you need for the topics on this page is -a [GitHub account](https://github.com/join) and a web browser. - -If you're looking for information on how to start contributing to Kubernetes -code repositories, refer to -[the Kubernetes community guidelines](https://github.com/kubernetes/community/blob/master/governance.md). - -{{% /capture %}} - - -{{% capture body %}} - -## The basics about our docs - -The Kubernetes documentation is written in Markdown and processed and deployed using Hugo. The source is in GitHub at [https://github.com/kubernetes/website](https://github.com/kubernetes/website). Most of the documentation source is stored in `/content/en/docs/`. Some of the reference documentation is automatically generated from scripts in the `update-imported-docs/` directory. - -You can file issues, edit content, and review changes from others, all from the -GitHub website. You can also use GitHub's embedded history and search tools. - -Not all tasks can be done in the GitHub UI, but these are discussed in the -[intermediate](/docs/contribute/intermediate/) and -[advanced](/docs/contribute/advanced/) docs contribution guides. - -### Participating in SIG Docs - -The Kubernetes documentation is maintained by a -{{< glossary_tooltip text="Special Interest Group" term_id="sig" >}} (SIG) -called SIG Docs. We [communicate](#participate-in-sig-docs-discussions) using a Slack channel, a mailing list, and -weekly video meetings. New participants are welcome. For more information, see -[Participating in SIG Docs](/docs/contribute/participating/). - -### Content guidelines - -The SIG Docs community created guidelines about what kind of content is allowed -in the Kubernetes documentation. 
Look over the [Documentation Content -Guide](/docs/contribute/style/content-guide/) to determine if the content -contribution you want to make is allowed. You can ask questions about allowed -content in the [#sig-docs](#participate-in-sig-docs-discussions) Slack -channel. - -### Style guidelines - -We maintain a [style guide](/docs/contribute/style/style-guide/) with information -about choices the SIG Docs community has made about grammar, syntax, source -formatting, and typographic conventions. Look over the style guide before you -make your first contribution, and use it when you have questions. - -Changes to the style guide are made by SIG Docs as a group. To propose a change -or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the -discussion. See the [advanced contribution](/docs/contribute/advanced/) topic for more -information. - -### Page templates - -We use page templates to control the presentation of our documentation pages. -Be sure to understand how these templates work by reviewing -[Using page templates](/docs/contribute/style/page-templates/). - -### Hugo shortcodes - -The Kubernetes documentation is transformed from Markdown to HTML using Hugo. -We make use of the standard Hugo shortcodes, as well as a few that are custom to -the Kubernetes documentation. See [Custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) for -information about how to use them. - -### Multiple languages - -Documentation source is available in multiple languages in `/content/`. Each language has its own folder with a two-letter code determined by the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php). For example, English documentation source is stored in `/content/en/docs/`. - -For more information about contributing to documentation in multiple languages, see ["Localize content"](/docs/contribute/intermediate#localize-content) in the intermediate contributing guide. - -If you're interested in starting a new localization, see ["Localization"](/docs/contribute/localization/). - -## File actionable issues - -Anyone with a GitHub account can file an issue (bug report) against the -Kubernetes documentation. If you see something wrong, even if you have no idea -how to fix it, [file an issue](#how-to-file-an-issue). The exception to this -rule is a tiny bug like a typo that you intend to fix yourself. In that case, -you can instead [fix it](#improve-existing-content) without filing a bug first. - -### How to file an issue - -- **On an existing page** - - If you see a problem in an existing page in the [Kubernetes docs](/docs/), - go to the bottom of the page and click the **Create an Issue** button. If - you are not currently logged in to GitHub, log in. A GitHub issue form - appears with some pre-populated content. - - Using Markdown, fill in as many details as you can. In places where you see - empty square brackets (`[ ]`), put an `x` between the set of brackets that - represents the appropriate choice. If you have a proposed solution to fix - the issue, add it. - -- **Request a new page** - - If you think content should exist, but you aren't sure where it should go or - you don't think it fits within the pages that currently exist, you can - still file an issue. 
You can either choose an existing page near where you think the - new content should go and file the issue from that page, or go straight to - [https://github.com/kubernetes/website/issues/new/](https://github.com/kubernetes/website/issues/new/) - and file the issue from there. - -### How to file great issues - -To ensure that we understand your issue and can act on it, keep these guidelines -in mind: - -- Use the issue template, and fill out as many details as you can. -- Clearly explain the specific impact the issue has on users. -- Limit the scope of a given issue to a reasonable unit of work. For problems - with a large scope, break them down into smaller issues. - - For instance, "Fix the security docs" is not an actionable issue, but "Add - details to the 'Restricting network access' topic" might be. -- If the issue relates to another issue or pull request, you can refer to it - either by its full URL or by the issue or pull request number prefixed - with a `#` character. For instance, `Introduced by #987654`. -- Be respectful and avoid venting. For instance, "The docs about X suck" is not - helpful or actionable feedback. The - [Code of Conduct](/community/code-of-conduct/) also applies to interactions on - Kubernetes GitHub repositories. - -## Participate in SIG Docs discussions - -The SIG Docs team communicates using the following mechanisms: - -- [Join the Kubernetes Slack instance](http://slack.k8s.io/), then join the - `#sig-docs` channel, where we discuss docs issues in real-time. Be sure to - introduce yourself! -- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), - where broader discussions take place and official decisions are recorded. -- Participate in the [weekly SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) video meeting, which is announced on the Slack channel and the mailing list. Currently, these meetings take place on Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone. - -{{< note >}} -You can also check the SIG Docs weekly meeting on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). -{{< /note >}} - -## Improve existing content - -To improve existing content, you file a _pull request (PR)_ after creating a -_fork_. Those two terms are [specific to GitHub](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/). -For the purposes of this topic, you don't need to know everything about them, -because you can do everything using your web browser. When you continue to the -[intermediate docs contributor guide](/docs/contribute/intermediate/), you will -need more background in Git terminology. - -{{< note >}} -**Kubernetes code developers**: If you are documenting a new feature for an -upcoming Kubernetes release, your process is a bit different. See -[Document a feature](/docs/contribute/intermediate/#sig-members-documenting-new-features) for -process guidelines and information about deadlines. -{{< /note >}} - -### Sign the CNCF CLA {#sign-the-cla} - -Before you can contribute code or documentation to Kubernetes, you **must** read -the [Contributor guide](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) and -[sign the Contributor License Agreement (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md). -Don't worry -- this doesn't take long! 
- -### Find something to work on - -If you see something you want to fix right away, just follow the instructions -below. You don't need to [file an issue](#file-actionable-issues) (although you -certainly can). - -If you want to start by finding an existing issue to work on, go to -[https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues) -and look for issues with the label `good first issue` (you can use -[this](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) shortcut). Read through the comments and make sure there is not an open pull -request against the issue and that nobody has left a comment saying they are -working on the issue recently (3 days is a good rule). Leave a comment saying -that you would like to work on the issue. - -### Choose which Git branch to use - -The most important aspect of submitting pull requests is choosing which branch -to base your work on. Use these guidelines to make the decision: - -- Use `master` for fixing problems in content that is already published, or - making improvements to content that already exists. -- Use `master` to document something that is already part of the current - Kubernetes release, but isn't yet documented. You should write this content - in English first, and then localization teams will pick that change up as a - localization task. -- If you're working on a localization, you should follow the convention for - that particular localization. To find this out, you can look at other - pull requests (tip: search for `is:pr is:merged label:language/xx`) - {{< comment >}}Localization note: when localizing that tip, replace `xx` - with the actual ISO3166 two-letter code for your target locale.{{< /comment >}} - - Some localization teams work with PRs that target `master` - - Some localization teams work with a series of long-lived branches, and - periodically merge these to `master`. This kind of branch has a name like - dev-<version>-<language>.<team milestone>; for example: - `dev-{{< latest-semver >}}-ja.1` -- If you're writing or updating documentation for a feature change release, - then you need to know the major and minor version of Kubernetes that - the change will first appear in. - - For example, if the feature gate JustAnExample is going to move from alpha - to beta in the next minor version, you need to know what the next minor - version number is. - - Find the release branch named for that version. For example, features that - changed in the {{< latest-version >}} release got documented in the branch - named `dev-{{< latest-semver >}}`. - -If you're still not sure which branch to choose, ask in `#sig-docs` on Slack or -attend a weekly SIG Docs meeting to get clarity. - -{{< note >}} -If you already submitted your pull request and you know that the Base Branch -was wrong, you (and only you, the submitter) can change it. -{{< /note >}} - -### Submit a pull request - -Follow these steps to submit a pull request to improve the Kubernetes -documentation. - -1. On the page where you see the issue, click the pencil icon at the top right. - A new GitHub page appears, with some help text. -2. If you have never created a fork of the Kubernetes documentation - repository, you are prompted to do so. Create the fork under your GitHub - username, rather than another organization you may be a member of. The - fork usually has a URL such as `https://github.com/<username>/website`, - unless you already have a repository with a conflicting name.
- - The reason you are prompted to create a fork is that you do not have - access to push a branch directly to the definitive Kubernetes repository. - -3. The GitHub Markdown editor appears with the source Markdown file loaded. - Make your changes. Below the editor, fill in the **Propose file change** - form. The first field is the summary of your commit message and should be - no more than 50 characters long. The second field is optional, but can - include more detail if appropriate. - - {{< note >}} -Do not include references to other GitHub issues or pull -requests in your commit message. You can add those to the pull request -description later. -{{< /note >}} - - Click **Propose file change**. The change is saved as a commit in a - new branch in your fork, which is automatically named something like - `patch-1`. - -4. The next screen summarizes the changes you made, by comparing your new - branch (the **head fork** and **compare** selection boxes) to the current - state of the **base fork** and **base** branch (`master` on the - `kubernetes/website` repository by default). You can change any of the - selection boxes, but don't do that now. Have a look at the difference - viewer on the bottom of the screen, and if everything looks right, click - **Create pull request**. - - {{< note >}} -If you don't want to create the pull request now, you can do it -later, by browsing to the main URL of the Kubernetes website repository or -your fork's repository. The GitHub website will prompt you to create the -pull request if it detects that you pushed a new branch to your fork. -{{< /note >}} - -5. The **Open a pull request** screen appears. The subject of the pull request - is the same as the commit summary, but you can change it if needed. The - body is populated by your extended commit message (if present) and some - template text. Read the template text and fill out the details it asks for, - then delete the extra template text. If you add to the description `fixes #<000000>` - or `closes #<000000>`, where `#<000000>` is the number of an associated issue, - GitHub will automatically close the issue when the PR merges. - Leave the **Allow edits from maintainers** checkbox selected. Click - **Create pull request**. - - Congratulations! Your pull request is available in - [Pull requests](https://github.com/kubernetes/website/pulls). - - After a few minutes, you can preview the website with your PR's changes - applied. Go to the **Conversation** tab of your PR and click the **Details** - link for the `deploy/netlify` test, near the bottom of the page. It opens in - the same browser window by default. - - {{< note >}} - Please limit pull requests to one language per PR. For example, if you need to make an identical change to the same code sample in multiple languages, open a separate PR for each language. - {{< /note >}} - -6. Wait for review. Generally, reviewers are suggested by the `k8s-ci-robot`. - If a reviewer asks you to make changes, you can go to the **Files changed** - tab and click the pencil icon on any files that have been changed by the - pull request. When you save the changed file, a new commit is created in - the branch being monitored by the pull request. If you are waiting on a - reviewer to review the changes, proactively reach out to the reviewer - once every 7 days. You can also drop into #sig-docs Slack channel, - which is a good place to ask for help regarding PR reviews. - -7. 
If your change is accepted, a reviewer merges your pull request, and the - change is live on the Kubernetes website a few minutes later. - -This is only one way to submit a pull request. If you are already a Git and -GitHub advanced user, you can use a local GUI or command-line Git client -instead of using the GitHub UI. Some basics about using the command-line Git -client are discussed in the [intermediate](/docs/contribute/intermediate/) docs -contribution guide. - -## Review docs pull requests - -People who are new to documentation can still review pull requests. You can -learn the code base and build trust with your fellow contributors. English docs -are the authoritative source for content. We communicate in English during -weekly meetings and in community announcements. Contributors' English skills -vary, so use simple and direct language in your reviews. Effective reviews focus -on both small details and a change's potential impact. - -The reviews are not considered "binding", which means that your review alone -won't cause a pull request to be merged. However, it can still be helpful. Even -if you don't leave any review comments, you can get a sense of pull request -conventions and etiquette and get used to the workflow. Familiarize yourself with the -[content guide](/docs/contribute/style/content-guide/) and -[style guide](/docs/contribute/style/style-guide/) before reviewing so you -get an idea of what the content should contain and how it should look. - -### Best practices - -- Be polite, considerate, and helpful -- Comment on positive aspects of PRs as well -- Be empathetic and mindful of how your review may be received -- Assume good intent and ask clarifying questions -- Experienced contributors, consider pairing with new contributors whose work requires extensive changes - -### How to find and review a pull request - -1. Go to - [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). - You see a list of every open pull request against the Kubernetes website and - docs. - -2. By default, the only filter that is applied is `open`, so you don't see - pull requests that have already been closed or merged. It's a good idea to - apply the `cncf-cla: yes` filter, and for your first review, it's a good - idea to add `size/S` or `size/XS`. The `size` label is applied automatically - based on how many lines of code the PR modifies. You can apply filters using - the selection boxes at the top of the page, or use - [this shortcut](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+yes%22+label%3Asize%2FS) for only small PRs. All filters are `AND`ed together, so - you can't search for both `size/XS` and `size/S` in the same query. - -3. Go to the **Files changed** tab. Look through the changes introduced in the - PR, and if applicable, also look at any linked issues. If you see a problem - or room for improvement, hover over the line and click the `+` symbol that - appears. - - You can type a comment, and either choose **Add single comment** or **Start - a review**. Typically, starting a review is better because it allows you to - leave multiple comments and notifies the PR owner only when you have - completed the review, rather than a separate notification for each comment. - -4. When finished, click **Review changes** at the top of the page. You can - summarize your review, and you can choose to comment, approve, or request - changes. New contributors should always choose **Comment**. 
- -Thanks for reviewing a pull request! When you are new to the project, it's a -good idea to ask for feedback on your pull request reviews. The `#sig-docs` -Slack channel is a great place to do this. - -## Write a blog post - -Anyone can write a blog post and submit it for review. Blog posts should not be -commercial in nature and should consist of content that will apply broadly to -the Kubernetes community. - -To submit a blog post, you can either submit it using the -[Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSdMpMoSIrhte5omZbTE7nB84qcGBy8XnnXhDFoW0h7p2zwXrw/viewform), -or follow the steps below. - -1. [Sign the CLA](#sign-the-cla) if you have not yet done so. -2. Have a look at the Markdown format for existing blog posts in the - [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts). -3. Write out your blog post in a text editor of your choice. -4. On the same link from step 2, click the **Create new file** button. Paste - your content into the editor. Name the file to match the proposed title of - the blog post, but don't put the date in the file name. The blog reviewers - will work with you on the final file name and the date the blog will be - published. -5. When you save the file, GitHub will walk you through the pull request - process. -6. A blog post reviewer will review your submission and work with you on - feedback and final details. When the blog post is approved, the blog will be - scheduled for publication. - -## Submit a case study - -Case studies highlight how organizations are using Kubernetes to solve -real-world problems. They are written in collaboration with the Kubernetes -marketing team, which is handled by the {{< glossary_tooltip text="CNCF" term_id="cncf" >}}. - -Have a look at the source for the -[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies). -Use the [Kubernetes case study submission form](https://www.cncf.io/people/end-user-community/) -to submit your proposal. - -{{% /capture %}} - -{{% capture whatsnext %}} - -When you are comfortable with all of the tasks discussed in this topic and you -want to engage with the Kubernetes docs team in deeper ways, read the -[intermediate docs contribution guide](/docs/contribute/intermediate/). - -{{% /capture %}} diff --git a/content/en/docs/contribute/style/content-guide.md b/content/en/docs/contribute/style/content-guide.md index 18d469510c..bc4b5be64a 100644 --- a/content/en/docs/contribute/style/content-guide.md +++ b/content/en/docs/contribute/style/content-guide.md @@ -3,10 +3,6 @@ title: Documentation Content Guide linktitle: Content guide content_template: templates/concept weight: 10 -card: - name: contribute - weight: 20 - title: Documentation Content Guide --- {{% capture overview %}} diff --git a/content/en/docs/contribute/style/hugo-shortcodes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md index 8cd3ed1e0f..60479c7fec 100644 --- a/content/en/docs/contribute/style/hugo-shortcodes/index.md +++ b/content/en/docs/contribute/style/hugo-shortcodes/index.md @@ -243,4 +243,4 @@ Renders to: * Learn about [using page templates](/docs/home/contribute/page-templates/). * Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/) * Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). 
-{{% /capture %}} \ No newline at end of file +{{% /capture %}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 26722e607f..34dce6adac 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -3,10 +3,6 @@ title: Documentation Style Guide linktitle: Style guide content_template: templates/concept weight: 10 -card: - name: contribute - weight: 20 - title: Documentation Style Guide --- {{% capture overview %}} @@ -15,10 +11,12 @@ These are guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. For additional information on creating new content for the Kubernetes -documentation, read the [Documentation Content -Guide](/docs/contribute/style/content-guide/) and follow the instructions on -[using page templates](/docs/contribute/style/page-templates/) and [creating a -documentation pull request](/docs/contribute/start/#improve-existing-content). +documentation, read the [Documentation Content Guide](/docs/contribute/style/content-guide/) and follow the instructions on +[using page templates](/docs/contribute/style/page-templates/) and [creating a documentation pull request](/docs/contribute/new-content/open-a-pr). + +Changes to the style guide are made by SIG Docs as a group. To propose a change +or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the +discussion. {{% /capture %}} diff --git a/content/en/docs/contribute/style/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md index 50e49734a3..65dca22f1a 100644 --- a/content/en/docs/contribute/style/write-new-topic.md +++ b/content/en/docs/contribute/style/write-new-topic.md @@ -10,7 +10,7 @@ This page shows how to create a new topic for the Kubernetes docs. {{% capture prerequisites %}} Create a fork of the Kubernetes documentation repository as described in -[Start contributing](/docs/contribute/start/). +[Open a PR](/docs/contribute/new-content/open-a-pr/). {{% /capture %}} {{% capture steps %}} @@ -24,8 +24,8 @@ Type | Description :--- | :---------- Concept | A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application while it is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials. For an example of a concept topic, see Nodes. Task | A task page shows how to do a single thing. The idea is to give readers a sequence of steps that they can actually do as they read the page. A task page can be short or long, provided it stays focused on one area. In a task page, it is OK to blend brief explanations with the steps to be performed, but if you need to provide a lengthy explanation, you should do that in a concept topic. Related task and concept topics should link to each other. For an example of a short task page, see Configure a Pod to Use a Volume for Storage. For an example of a longer task page, see Configure Liveness and Readiness Probes
For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features. -{{< /table >}} +Tutorial | A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features. +{{< /table >}} Use a template for each new page. Each page type has a [template](/docs/contribute/style/page-templates/) @@ -162,7 +162,6 @@ image format is SVG. {{% /capture %}} {{% capture whatsnext %}} -* Learn about [using page templates](/docs/home/contribute/page-templates/). -* Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/). -* Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). +* Learn about [using page templates](/docs/contribute/page-templates/). +* Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/). {{% /capture %}} diff --git a/content/en/docs/contribute/suggesting-improvements.md b/content/en/docs/contribute/suggesting-improvements.md new file mode 100644 index 0000000000..19133f379b --- /dev/null +++ b/content/en/docs/contribute/suggesting-improvements.md @@ -0,0 +1,65 @@ +--- +title: Suggesting content improvements +slug: suggest-improvements +content_template: templates/concept +weight: 10 +card: + name: contribute + weight: 20 +--- + +{{% capture overview %}} + +If you notice an issue with Kubernetes documentation, or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser. + +In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors +then review, categorize and tag issues as needed. Next, you or another member +of the Kubernetes community open a pull request with changes to resolve the issue. + +{{% /capture %}} + +{{% capture body %}} + +## Opening an issue + +If you want to suggest improvements to existing content, or notice an error, then open an issue. + +1. Go to the bottom of the page and click the **Create an Issue** button. This redirects you + to a GitHub issue page pre-populated with some headers. +2. Describe the issue or suggestion for improvement. Provide as many details as you can. +3. Click **Submit new issue**. + +After submitting, check in on your issue occasionally or turn on GitHub notifications. +Reviewers and other community members might ask questions before +they can take action on your issue. + +## Suggesting new content + +If you have an idea for new content, but you aren't sure where it should go, you can +still file an issue. Either: + +- Choose an existing page in the section you think the content belongs in and click **Create an issue**. +- Go to [GitHub](https://github.com/kubernetes/website/issues/new/) and file the issue directly. + +## How to file great issues + + +Keep the following in mind when filing an issue: + +- Provide a clear issue description. Describe what specifically is missing, out of date, + wrong, or needs improvement. 
+- Explain the specific impact the issue has on users. +- Limit the scope of a given issue to a reasonable unit of work. For problems + with a large scope, break them down into smaller issues. For example, "Fix the security docs" + is too broad, but "Add details to the 'Restricting network access' topic" is specific enough + to be actionable. +- Search the existing issues to see if there's anything related or similar to the + new issue. +- If the new issue relates to another issue or pull request, refer to it + either by its full URL or by the issue or pull request number prefixed + with a `#` character. For example, `Introduced by #987654`. +- Follow the [Code of Conduct](/community/code-of-conduct/). Respect your +fellow contributors. For example, "The docs are terrible" is not + helpful or polite feedback. + +{{% /capture %}} diff --git a/static/_redirects b/static/_redirects index 13cfce2d49..bdc6c1e4d8 100644 --- a/static/_redirects +++ b/static/_redirects @@ -132,8 +132,11 @@ /docs/contribute/review-issues/ /docs/home/contribute/review-issues/ 301 /docs/contribute/stage-documentation-changes/ /docs/home/contribute/stage-documentation-changes/ 301 /docs/contribute/style-guide/ /docs/home/contribute/style-guide/ 301 - /docs/contribute/write-new-topic/ /docs/home/contribute/write-new-topic/ 301 +/docs/contribute/start/ /docs/contribute/ 301 +/docs/contribute/intermediate/ /docs/contribute/ 301 + + /docs/deprecate/ /docs/reference/using-api/deprecation-policy/ 301 /docs/deprecated/ /docs/reference/using-api/deprecation-policy/ 301 /docs/deprecation-policy/ /docs/reference/using-api/deprecation-policy/ 301 @@ -477,4 +480,4 @@ /docs/setup/multiple-zones/ /docs/setup/best-practices/multiple-zones/ 301 /docs/setup/cluster-large/ /docs/setup/best-practices/cluster-large/ 301 /docs/setup/node-conformance/ /docs/setup/best-practices/node-conformance/ 301 -/docs/setup/certificates/ /docs/setup/best-practices/certificates/ 301 +/docs/setup/certificates/ /docs/setup/best-practices/certificates/ 301 \ No newline at end of file From 2d39523ee72de464e0cb225e3dc2731496011f98 Mon Sep 17 00:00:00 2001 From: Karen Bradshaw Date: Tue, 7 Apr 2020 17:09:33 -0400 Subject: [PATCH 062/105] fix page formatting --- .../cluster-administration/flow-control.md | 8 ++--- .../setup-konnectivity/setup-konnectivity.md | 33 ++++++++++++++----- 2 files changed, 28 insertions(+), 13 deletions(-) diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index fd8227bfa0..6540a05e8f 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -18,12 +18,12 @@ potentially crashing the API server, but these flags are not enough to ensure that the most important requests get through in a period of high traffic. The API Priority and Fairness feature (APF) is an alternative that improves upon -aforementioned max-inflight limitations. APF classifies -and isolates requests in a more fine-grained way. It also introduces +aforementioned max-inflight limitations. APF classifies +and isolates requests in a more fine-grained way. It also introduces a limited amount of queuing, so that no requests are rejected in cases of very brief bursts. 
Requests are dispatched from queues using a -fair queuing technique so that, for example, a poorly-behaved {{< -glossary_tooltip text="controller" term_id="controller" >}}) need not +fair queuing technique so that, for example, a poorly-behaved +{{< glossary_tooltip text="controller" term_id="controller" >}} need not starve others (even at the same priority level). {{< caution >}} diff --git a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md index 0fdbd0127d..b5dbd05215 100644 --- a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md +++ b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md @@ -1,30 +1,43 @@ --- -title: Setup Konnectivity Service +title: Set up Konnectivity service content_template: templates/task -weight: 110 +weight: 70 --- +{{% capture overview %}} + The Konnectivity service provides TCP level proxy for the Master → Cluster communication. -You can set it up with the following steps. +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Configure the Konnectivity service First, you need to configure the API Server to use the Konnectivity service to direct its network traffic to cluster nodes: + 1. Set the `--egress-selector-config-file` flag of the API Server, it is the path to the API Server egress configuration file. -2. At the path, create a configuration file. For example, +1. At the path, create a configuration file. For example, {{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}} -Next, you need to deploy the Konnectivity service server and agents. +Next, you need to deploy the Konnectivity server and agents. [kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy) is a reference implementation. -Deploy the Konnectivity server on your master node. The provided yaml assuming -Kubernetes components are deployed as {{< glossary_tooltip text="static pod" -term_id="static-pod" >}} in your cluster. If not , you can deploy it as a -Daemonset to be reliable. +Deploy the Konnectivity server on your master node. The provided yaml assumes +that the Kubernetes components are deployed as a {{< glossary_tooltip text="static Pod" +term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity +server as a DaemonSet. 
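If you take the DaemonSet route instead, a minimal sketch of such a deployment might look like the following. The image tag, flags, and socket path here are assumptions based on the `kubernetes-sigs/apiserver-network-proxy` reference implementation, and the certificate, kubeconfig, and audience flags from the reference manifest are omitted for brevity, so treat this as a starting point rather than a complete configuration.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: konnectivity-server
  namespace: kube-system
  labels:
    k8s-app: konnectivity-server
spec:
  selector:
    matchLabels:
      k8s-app: konnectivity-server
  template:
    metadata:
      labels:
        k8s-app: konnectivity-server
    spec:
      # Schedule only onto control plane nodes, next to the kube-apiserver.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      hostNetwork: true
      containers:
        - name: konnectivity-server
          # Illustrative image and flags (assumed); consult the reference manifest
          # for the full set of certificate, kubeconfig, and audience flags.
          image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8
          command: ["/proxy-server"]
          args:
            - --mode=grpc
            - --server-port=0
            # Must match the UDS name used in the egress selector configuration.
            - --uds-name=/etc/kubernetes/konnectivity-server/konnectivity-server.socket
            - --agent-port=8091
            - --admin-port=8095
          volumeMounts:
            - name: konnectivity-uds
              mountPath: /etc/kubernetes/konnectivity-server
      volumes:
        - name: konnectivity-uds
          hostPath:
            path: /etc/kubernetes/konnectivity-server
            type: DirectoryOrCreate
```

Whichever form you choose, the `--uds-name` value must line up with the UDS transport configured in the egress configuration file, assuming a Unix domain socket transport is used as in the typical single-host setup.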
{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}} @@ -35,3 +48,5 @@ Then deploy the Konnectivity agents in your cluster: Last, if RBAC is enabled in your cluster, create the relevant RBAC rules: {{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}} + +{{% /capture %}} \ No newline at end of file From 2e3f272219e35708e470f860b234fe14269b237f Mon Sep 17 00:00:00 2001 From: viniciusbds Date: Tue, 7 Apr 2020 23:53:15 -0300 Subject: [PATCH 063/105] Add templates in the portuguese docs --- content/pt/docs/templates/feature-state-alpha.txt | 7 +++++++ content/pt/docs/templates/feature-state-beta.txt | 8 ++++++++ .../pt/docs/templates/feature-state-deprecated.txt | 1 + content/pt/docs/templates/feature-state-stable.txt | 4 ++++ content/pt/docs/templates/index.md | 13 +++++++++++++ 5 files changed, 33 insertions(+) create mode 100644 content/pt/docs/templates/feature-state-alpha.txt create mode 100644 content/pt/docs/templates/feature-state-beta.txt create mode 100644 content/pt/docs/templates/feature-state-deprecated.txt create mode 100644 content/pt/docs/templates/feature-state-stable.txt create mode 100644 content/pt/docs/templates/index.md diff --git a/content/pt/docs/templates/feature-state-alpha.txt b/content/pt/docs/templates/feature-state-alpha.txt new file mode 100644 index 0000000000..f013bc3c5a --- /dev/null +++ b/content/pt/docs/templates/feature-state-alpha.txt @@ -0,0 +1,7 @@ +Atualmente, esse recurso está no estado *alpha*, o que significa: + +* Os nomes das versões contêm alfa (ex. v1alpha1). +* Pode estar bugado. A ativação do recurso pode expor bugs. Desabilitado por padrão. +* O suporte ao recurso pode ser retirado a qualquer momento sem aviso prévio. +* A API pode mudar de maneiras incompatíveis em uma versão de software posterior sem aviso prévio. +* Recomendado para uso apenas em clusters de teste de curta duração, devido ao aumento do risco de erros e falta de suporte a longo prazo. diff --git a/content/pt/docs/templates/feature-state-beta.txt b/content/pt/docs/templates/feature-state-beta.txt new file mode 100644 index 0000000000..0a0970f83b --- /dev/null +++ b/content/pt/docs/templates/feature-state-beta.txt @@ -0,0 +1,8 @@ +Atualmente, esse recurso está no estado *beta*, o que significa: + +* Os nomes das versões contêm beta (ex, v2beta3). +* O código está bem testado. A ativação do recurso é considerada segura. Ativado por padrão. +* O suporte para o recurso geral não será descartado, embora os detalhes possam mudar. +* O esquema e/ou semântica dos objetos podem mudar de maneiras incompatíveis em uma versão beta ou estável subsequente. Quando isso acontecer, forneceremos instruções para migrar para a próxima versão. Isso pode exigir a exclusão, edição e recriação de objetos da API. O processo de edição pode exigir alguma reflexão. Isso pode exigir tempo de inatividade para aplicativos que dependem do recurso. +* Recomendado apenas para usos não comerciais, devido ao potencial de alterações incompatíveis nas versões subsequentes. Se você tiver vários clusters que podem ser atualizados independentemente, poderá relaxar essa restrição. +* **Por favor, experimente nossos recursos beta e dê um feedback sobre eles! 
Depois que eles saem da versão beta, pode não ser prático para nós fazer mais alterações.** diff --git a/content/pt/docs/templates/feature-state-deprecated.txt b/content/pt/docs/templates/feature-state-deprecated.txt new file mode 100644 index 0000000000..e75f713b73 --- /dev/null +++ b/content/pt/docs/templates/feature-state-deprecated.txt @@ -0,0 +1 @@ +Este recurso está *obsoleto*. Para obter mais informações sobre esse estado, consulte a [Política de descontinuação do Kubernetes](/docs/reference/deprecation-policy/). diff --git a/content/pt/docs/templates/feature-state-stable.txt b/content/pt/docs/templates/feature-state-stable.txt new file mode 100644 index 0000000000..a9ef64609c --- /dev/null +++ b/content/pt/docs/templates/feature-state-stable.txt @@ -0,0 +1,4 @@ +Esse recurso é *estável*, o que significa: + +* O nome da versão é vX, em que X é um número inteiro. +* Versões estáveis dos recursos aparecerão no software lançado para muitas versões subsequentes. diff --git a/content/pt/docs/templates/index.md b/content/pt/docs/templates/index.md new file mode 100644 index 0000000000..9d7bccd143 --- /dev/null +++ b/content/pt/docs/templates/index.md @@ -0,0 +1,13 @@ +--- +headless: true + +resources: +- src: "*alpha*" + title: "alpha" +- src: "*beta*" + title: "beta" +- src: "*deprecated*" + title: "deprecated" +- src: "*stable*" + title: "stable" +--- From 3fbbd2ae372eed4fe1e7184097ff8e32f2142ca3 Mon Sep 17 00:00:00 2001 From: Ivan Kurnosov Date: Wed, 8 Apr 2020 14:54:08 +1200 Subject: [PATCH 064/105] Update 2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md Fixed wildcard host match examples. --- ...ovements-to-the-Ingress-API-in-Kubernetes-1.18.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md index 713c9821ce..0d78e3206b 100644 --- a/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md +++ b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md @@ -51,11 +51,11 @@ IngressClass resource will ensure that new Ingresses without an `ingressClassNam ## Support for Hostname Wildcards Many Ingress providers have supported wildcard hostname matching like `*.foo.com` matching `app1.foo.com`, but until now the spec assumed an exact FQDN match of the host. Hosts can now be precise matches (for example “`foo.bar.com`”) or a wildcard (for example “`*.foo.com`”). Precise matches require that the http host header matches the Host setting. Wildcard matches require the http host header is equal to the suffix of the wildcard rule. -| Host | Host header | Match? | -| ------------- |-------------| -----| -| `*.foo.com` | `*.foo.com` | Matches based on shared suffix | -| `*.foo.com` | `*.foo.com` | No match, wildcard only covers a single DNS label | -| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label | +| Host | Host header | Match? | +| ----------- |-------------------| --------------------------------------------------| +| `*.foo.com` | `bar.foo.com` | Matches based on shared suffix | +| `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label | +| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label | ### Putting it All Together These new Ingress features allow for much more configurability. 
Here’s an example of an Ingress that makes use of pathType, `ingressClassName`, and a hostname wildcard: @@ -84,4 +84,4 @@ Since these features are new in Kubernetes 1.18, each Ingress controller impleme ## The Future of Ingress The Ingress API is on pace to graduate from beta to a stable API in Kubernetes 1.19. It will continue to provide a simple way to manage inbound network traffic for Kubernetes workloads. This API has intentionally been kept simple and lightweight, but there has been a desire for greater configurability for more advanced use cases. -Work is currently underway on a new highly configurable set of APIs that will provide an alternative to Ingress in the future. These APIs are being referred to as the new “Service APIs”. They are not intended to replace any existing APIs, but instead provide a more configurable alternative for complex use cases. For more information, check out the [Service APIs repo on GitHub](http://github.com/kubernetes-sigs/service-apis). \ No newline at end of file +Work is currently underway on a new highly configurable set of APIs that will provide an alternative to Ingress in the future. These APIs are being referred to as the new “Service APIs”. They are not intended to replace any existing APIs, but instead provide a more configurable alternative for complex use cases. For more information, check out the [Service APIs repo on GitHub](http://github.com/kubernetes-sigs/service-apis). From 1b70be972c8d308c1ab7d072ac41b42e1ab45f1e Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Wed, 8 Apr 2020 11:28:55 +0800 Subject: [PATCH 065/105] update content/zh/docs/concepts/storage/storage-classes.md ref https://github.com/k8smeetup/website-tasks/issues/3502 --- .../docs/concepts/storage/storage-classes.md | 116 +++++++++--------- 1 file changed, 59 insertions(+), 57 deletions(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index b1d8becc60..33f6317e75 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -87,38 +87,38 @@ volumeBindingMode: Immediate ### 存储分配器 -`StorageClass` 有一个分配器,用来决定使用哪个`卷插件`分配`PV`。该字段必须指定。 +每个 `StorageClass` 都有一个分配器,用来决定使用哪个`卷插件`分配 `PV`。该字段必须指定。 -| 卷插件 | 内置分配器 | 配置例子 | -| :--- | :---: | :---: | -| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | -| AzureFile | ✓ | [Azure File](#azure-file) | -| AzureDisk | ✓ | [Azure Disk](#azure-disk) | -| CephFS | - | - | -| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)| -| FC | - | - | -| FlexVolume | - | - | -| Flocker | ✓ | - | -| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | -| Glusterfs | ✓ | [Glusterfs](#glusterfs) | -| iSCSI | - | - | -| Quobyte | ✓ | [Quobyte](#quobyte) | -| NFS | - | - | -| RBD | ✓ | [Ceph RBD](#ceph-rbd) | -| VsphereVolume | ✓ | [vSphere](#vsphere) | -| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) | -| ScaleIO | ✓ | [ScaleIO](#scaleio) | -| StorageOS | ✓ | [StorageOS](#storageos) | -| Local | - | [Local](#local) | +| 卷插件 | 内置分配器 | 配置例子 | +|:---------------------|:----------:|:-------------------------------------:| +| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | +| AzureFile | ✓ | [Azure File](#azure-file) | +| AzureDisk | ✓ | [Azure Disk](#azure-disk) | +| CephFS | - | - | +| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) | +| FC | - | - | +| FlexVolume | - | - | +| Flocker | ✓ | - | +| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | +| Glusterfs | ✓ | [Glusterfs](#glusterfs) | +| iSCSI | - | - | +| Quobyte | ✓ | 
[Quobyte](#quobyte) | +| NFS | - | - | +| RBD | ✓ | [Ceph RBD](#ceph-rbd) | +| VsphereVolume | ✓ | [vSphere](#vsphere) | +| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) | +| ScaleIO | ✓ | [ScaleIO](#scaleio) | +| StorageOS | ✓ | [StorageOS](#storageos) | +| Local | - | [Local](#local) | +| 卷类型 | Kubernetes 版本要求 | +|:---------------------|:--------------------------| +| gcePersistentDisk | 1.11 | +| awsElasticBlockStore | 1.11 | +| Cinder | 1.11 | +| glusterfs | 1.11 | +| rbd | 1.11 | +| Azure File | 1.11 | +| Azure Disk | 1.11 | +| Portworx | 1.11 | +| FlexVolume | 1.13 | +| CSI | 1.14 (alpha), 1.16 (beta) | + +{{< /table >}} {{< note >}} - -此功能不能用于缩小卷。 - +此功能仅可用于扩容卷,不能用于缩小卷。 {{< /note >}} ### 卷绑定模式 -{{< feature-state for_k8s_version="v1.12" state="beta" >}} - - -**注意:** 这个功能特性需要启用 `VolumeScheduling` 参数才能使用。 - 动态配置和预先创建的PVs也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), -但是您需要查看特定CSI驱动程序的文档以查看其支持的拓扑密钥和例子。 必须启用 `CSINodeInfo` 特性。 +但是您需要查看特定CSI驱动程序的文档以查看其支持的拓扑密钥和例子。 -**注意:** 这个特性需要开启 `VolumeScheduling` 特性开关。 - ## 参数 Storage class 具有描述属于卷的参数。取决于分配器,可以接受不同的参数。 例如,参数 type 的值 io1 和参数 iopsPerGB 特定于 EBS PV。当参数被省略时,会使用默认值。 +一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能超过 256 KiB, 包括参数的键和值。 + ### AWS EBS ```yaml @@ -441,11 +441,13 @@ parameters: is specified, volumes are generally round-robin-ed across all active zones where Kubernetes cluster has a node. `zone` and `zones` parameters must not be used at the same time. +* `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system. * `replication-type`: `none` or `regional-pd`. Default: `none`. --> * `type`:`pd-standard` 或者 `pd-ssd`。默认:`pd-standard` * `zone`(弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。 * `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。`zone` 和 `zones` 参数不能同时使用。 +* `fstype`: `ext4` 或 `xfs`。 默认: `ext4`。宿主机操作系统必须支持所定义的文件系统类型。 * `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。 -#### Azure Unmanaged Disk Storage Class(非托管磁盘存储类) +#### Azure Unmanaged Disk Storage Class(非托管磁盘存储类){#azure-unmanaged-disk-storage-class} ```yaml kind: StorageClass @@ -921,9 +923,9 @@ parameters: * `storageAccount`:Azure 存储帐户名称。如果提供存储帐户,它必须位于与集群相同的资源组中,并且 `location` 是被忽略的。如果未提供存储帐户,则会在与群集相同的资源组中创建新的存储帐户。 -#### 新的 Azure 磁盘 Storage Class(从 v1.7.2 开始) +#### 新的 Azure 磁盘 Storage Class(从 v1.7.2 开始){#azure-disk-storage-class} ```yaml kind: StorageClass From 37c24b23c1c82bdf099421c6a8632092f1b54d2e Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Wed, 8 Apr 2020 12:14:47 +0800 Subject: [PATCH 066/105] update content/zh/docs/concepts/storage/storage-classes.md --- .../docs/concepts/storage/storage-classes.md | 68 +++++++++---------- 1 file changed, 34 insertions(+), 34 deletions(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index 33f6317e75..cb4155bd14 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -26,7 +26,7 @@ with [volumes](/docs/concepts/storage/volumes/) and ## 介绍 -`StorageClass` 为管理员提供了描述存储 `"类"` 的方法。 -不同的`类型`可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。 -Kubernetes 本身并不清楚各种`类`代表的什么。这个`类`的概念在其他存储系统中有时被称为"配置文件"。 +StorageClass 为管理员提供了描述存储 "类" 的方法。 +不同的类型可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。 +Kubernetes 本身并不清楚各种类代表的什么。这个类的概念在其他存储系统中有时被称为 "配置文件"。 ## StorageClass 资源 -每个 `StorageClass` 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段, 
-这些字段会在`StorageClass`需要动态分配 `PersistentVolume` 时会使用到。 +每个 StorageClass 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段, +这些字段会在 StorageClass 需要动态分配 `PersistentVolume` 时会使用到。 -`StorageClass` 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 -当创建 `StorageClass` 对象时,管理员设置 StorageClass 对象的命名和其他参数,一旦创建了对象就不能再对其更新。 +StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 +当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数,一旦创建了对象就不能再对其更新。 -管理员可以为没有申请绑定到特定 `StorageClass` 的 PVC 指定一个默认的存储`类` : -更多详情请参阅 [`PersistentVolumeClaim` 章节](/docs/concepts/storage/persistent-volumes/#class-1)。 +管理员可以为没有申请绑定到特定 StorageClass 的 PVC 指定一个默认的存储类 : +更多详情请参阅 [PersistentVolumeClaim 章节](/docs/concepts/storage/persistent-volumes/#class-1)。 ```yaml apiVersion: storage.k8s.io/v1 @@ -87,12 +87,12 @@ volumeBindingMode: Immediate ### 存储分配器 -每个 `StorageClass` 都有一个分配器,用来决定使用哪个`卷插件`分配 `PV`。该字段必须指定。 +每个 StorageClass 都有一个分配器,用来决定使用哪个卷插件分配 PV。该字段必须指定。 -您不限于指定此处列出的"内置"分配器(其名称前缀为 kubernetes.io 并打包在 Kubernetes 中)。 +您不限于指定此处列出的 "内置" 分配器(其名称前缀为 "kubernetes.io" 并打包在 Kubernetes 中)。 您还可以运行和指定外部分配器,这些独立的程序遵循由 Kubernetes 定义的 [规范](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)。 外部供应商的作者完全可以自由决定他们的代码保存于何处、打包方式、运行方式、使用的插件(包括 Flex)等。 代码仓库 [kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) @@ -152,20 +152,20 @@ vendors provide their own external provisioner. ### 回收策略 -由 `StorageClass` 动态创建的持久化卷会在的 `reclaimPolicy` 字段中指定回收策略,可以是 -`Delete` 或者 `Retain`。如果 `StorageClass` 对象被创建时没有指定 `reclaimPolicy` ,它将默认为 `Delete`。 +由 StorageClass 动态创建的 PersistentVolume 会在的 `reclaimPolicy` 字段中指定回收策略,可以是 +`Delete` 或者 `Retain`。如果 StorageClass 对象被创建时没有指定 `reclaimPolicy` ,它将默认为 `Delete`。 -通过 `StorageClass` 手动创建并管理的 Persistent Volume 会使用它们被创建时指定的回收政策。 +通过 StorageClass 手动创建并管理的 PersistentVolume 会使用它们被创建时指定的回收政策。 -永久卷可以配置为可扩展。将此功能设置为 `true` 时,允许用户通过编辑相应的PVC对象来调整卷大小。 +PersistentVolume 可以配置为可扩展。将此功能设置为 `true` 时,允许用户通过编辑相应的 PVC 对象来调整卷大小。 -当基础存储类的 `allowVolumeExpansion` 字段设置为true时,以下类型的卷支持卷扩展。 +当基础存储类的 `allowVolumeExpansion` 字段设置为 true 时,以下类型的卷支持卷扩展。 {{< table caption = "Table of Volume types and the version of Kubernetes they require" >}} @@ -217,7 +217,7 @@ You can only use the volume expansion feature to grow a Volume, not to shrink it ### 挂载选项 -由 `StorageClass` 动态创建的 Persistent Volume 将使用`类`中 `mountOption` 字段指定的挂载选项。 +由 StorageClass 动态创建的 PersistentVolume 将使用类中 `mountOptions` 字段指定的挂载选项。 如果卷插件不支持挂载选项,却指定了该选项,则分配操作会失败。 -挂载选项在 `StorageClass` 和持久卷上都不会做验证,所以如果挂载选项无效,那么这个 PV 就会失败。 +挂载选项在 StorageClass 和 PV 上都不会做验证,所以如果挂载选项无效,那么这个 PV 就会失败。 以下插件支持预创建绑定 PersistentVolume 的 `WaitForFirstConsumer` 模式: -* All of the above +* 上述全部 * [Local](#local) {{< feature-state state="beta" for_k8s_version="1.17" >}} @@ -303,8 +303,8 @@ and pre-created PVs, but you'll need to look at the documentation for a specific to see its supported topology keys and examples. 
--> -动态配置和预先创建的PVs也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), -但是您需要查看特定CSI驱动程序的文档以查看其支持的拓扑密钥和例子。 +动态配置和预先创建的 PV 也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), +但是您需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑密钥和例子。 -这个例子描述了如何将分配卷限的拓扑限制在特定的区域,在使用时应该根据插件支持情况替换 `zone` 和 `zones` 参数。 +这个例子描述了如何将分配卷的拓扑限制在特定的区域,在使用时应该根据插件支持情况替换 `zone` 和 `zones` 参数。 ```yaml apiVersion: storage.k8s.io/v1 From dcaa74cde2a24db4a6bafdefb1b59ec124706b9e Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Wed, 8 Apr 2020 14:22:02 +0800 Subject: [PATCH 067/105] Update content/zh/docs/concepts/storage/storage-classes.md Co-Authored-By: Qiming Teng --- content/zh/docs/concepts/storage/storage-classes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index cb4155bd14..c966c2bed5 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -162,7 +162,7 @@ whatever reclaim policy they were assigned at creation. --> ### 回收策略 -由 StorageClass 动态创建的 PersistentVolume 会在的 `reclaimPolicy` 字段中指定回收策略,可以是 +由 StorageClass 动态创建的 PersistentVolume 会在类的 `reclaimPolicy` 字段中指定回收策略,可以是 `Delete` 或者 `Retain`。如果 StorageClass 对象被创建时没有指定 `reclaimPolicy` ,它将默认为 `Delete`。 通过 StorageClass 手动创建并管理的 PersistentVolume 会使用它们被创建时指定的回收政策。 From 37e50169b14db47830bccbe3ff55dcc9b6455c3a Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Wed, 8 Apr 2020 14:22:27 +0800 Subject: [PATCH 068/105] Update content/zh/docs/concepts/storage/storage-classes.md Co-Authored-By: Qiming Teng --- content/zh/docs/concepts/storage/storage-classes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index c966c2bed5..ee1bb6240d 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -163,7 +163,7 @@ whatever reclaim policy they were assigned at creation. ### 回收策略 由 StorageClass 动态创建的 PersistentVolume 会在类的 `reclaimPolicy` 字段中指定回收策略,可以是 -`Delete` 或者 `Retain`。如果 StorageClass 对象被创建时没有指定 `reclaimPolicy` ,它将默认为 `Delete`。 +`Delete` 或者 `Retain`。如果 StorageClass 对象被创建时没有指定 `reclaimPolicy`,它将默认为 `Delete`。 通过 StorageClass 手动创建并管理的 PersistentVolume 会使用它们被创建时指定的回收政策。 From 4255f98514f9528edb8d35a71f2750d943642529 Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Wed, 8 Apr 2020 14:23:12 +0800 Subject: [PATCH 069/105] Update content/zh/docs/concepts/storage/storage-classes.md Co-Authored-By: Qiming Teng --- content/zh/docs/concepts/storage/storage-classes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index ee1bb6240d..f1296e55df 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -304,7 +304,7 @@ to see its supported topology keys and examples. --> 动态配置和预先创建的 PV 也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), -但是您需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑密钥和例子。 +但是您需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。 -#### 新的 Azure 磁盘 Storage Class(从 v1.7.2 开始){#azure-disk-storage-class} +#### Azure 磁盘 Storage Class(从 v1.7.2 开始){#azure-disk-storage-class} ```yaml kind: StorageClass @@ -947,12 +947,15 @@ parameters: unmanaged disk in the same resource group as the cluster. When `kind` is `managed`, all managed disks are created in the same resource group as the cluster. 
+* `resourceGroup`: Specify the resource group in which the Azure disk will be created. + It must be an existing resource group name. If it is unspecified, the disk will be + placed in the same resource group as the current Kubernetes cluster. --> * `storageaccounttype`:Azure 存储帐户 Sku 层。默认为空。 * `kind`:可能的值是 `shared`(默认)、`dedicated` 和 `managed`。 当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。 当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。 - +* `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。 -Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Генератора подій життєвого циклу Пода ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: +Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: До базових об'єктів Kubernetes належать: -* [Под *(Pod)*](/docs/concepts/workloads/pods/pod-overview/) -* [Сервіс *(Service)*](/docs/concepts/services-networking/service/) +* [Pod](/docs/concepts/workloads/pods/pod-overview/) +* [Service](/docs/concepts/services-networking/service/) * [Volume](/docs/concepts/storage/volumes/) * [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) @@ -75,7 +75,7 @@ Kubernetes оперує певною кількістю абстракцій, щ -## Площина управління Kubernetes (*Kubernetes Control Plane*) +## Площина управління Kubernetes (*Kubernetes Control Plane*) {#площина-управління-kubernetes} diff --git a/content/uk/docs/concepts/configuration/manage-compute-resources-container.md b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md deleted file mode 100644 index a90b224f8c..0000000000 --- a/content/uk/docs/concepts/configuration/manage-compute-resources-container.md +++ /dev/null @@ -1,623 +0,0 @@ ---- -title: Managing Compute Resources for Containers -content_template: templates/concept -weight: 20 -feature: - # title: Automatic bin packing - title: Автоматичне пакування у контейнери - # description: > - # Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources. - description: > - Автоматичне розміщення контейнерів з огляду на їхні потреби у ресурсах та інші обмеження, при цьому не поступаючись доступністю. Поєднання критичних і "найкращих з можливих" робочих навантажень для ефективнішого використання і більшого заощадження ресурсів. ---- - -{{% capture overview %}} - -When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how -much CPU and memory (RAM) each Container needs. 
When Containers have resource -requests specified, the scheduler can make better decisions about which nodes to -place Pods on. And when Containers have their limits specified, contention for -resources on a node can be handled in a specified manner. For more details about -the difference between requests and limits, see -[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). - -{{% /capture %}} - - -{{% capture body %}} - -## Resource types - -*CPU* and *memory* are each a *resource type*. A resource type has a base unit. -CPU is specified in units of cores, and memory is specified in units of bytes. -If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources. -Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory -that are much larger than the default page size. - -For example, on a system where the default page size is 4KiB, you could specify a limit, -`hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a -total of 80 MiB), that allocation fails. - -{{< note >}} -You cannot overcommit `hugepages-*` resources. -This is different from the `memory` and `cpu` resources. -{{< /note >}} - -CPU and memory are collectively referred to as *compute resources*, or just -*resources*. Compute -resources are measurable quantities that can be requested, allocated, and -consumed. They are distinct from -[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and -[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified -through the Kubernetes API server. - -## Resource requests and limits of Pod and Container - -Each Container of a Pod can specify one or more of the following: - -* `spec.containers[].resources.limits.cpu` -* `spec.containers[].resources.limits.memory` -* `spec.containers[].resources.limits.hugepages-` -* `spec.containers[].resources.requests.cpu` -* `spec.containers[].resources.requests.memory` -* `spec.containers[].resources.requests.hugepages-` - -Although requests and limits can only be specified on individual Containers, it -is convenient to talk about Pod resource requests and limits. A -*Pod resource request/limit* for a particular resource type is the sum of the -resource requests/limits of that type for each Container in the Pod. - - -## Meaning of CPU - -Limits and requests for CPU resources are measured in *cpu* units. -One cpu, in Kubernetes, is equivalent to: - -- 1 AWS vCPU -- 1 GCP Core -- 1 Azure vCore -- 1 IBM vCPU -- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading - -Fractional requests are allowed. A Container with -`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much -CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the -expression `100m`, which can be read as "one hundred millicpu". Some people say -"one hundred millicores", and this is understood to mean the same thing. A -request with a decimal point, like `0.1`, is converted to `100m` by the API, and -precision finer than `1m` is not allowed. For this reason, the form `100m` might -be preferred. - -CPU is always requested as an absolute quantity, never as a relative quantity; -0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine. - -## Meaning of memory - -Limits and requests for `memory` are measured in bytes. 
You can express memory as -a plain integer or as a fixed-point integer using one of these suffixes: -E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, -Mi, Ki. For example, the following represent roughly the same value: - -```shell -128974848, 129e6, 129M, 123Mi -``` - -Here's an example. -The following Pod has two Containers. Each Container has a request of 0.25 cpu -and 64MiB (226 bytes) of memory. Each Container has a limit of 0.5 -cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 -MiB of memory, and a limit of 1 cpu and 256MiB of memory. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: frontend -spec: - containers: - - name: db - image: mysql - env: - - name: MYSQL_ROOT_PASSWORD - value: "password" - resources: - requests: - memory: "64Mi" - cpu: "250m" - limits: - memory: "128Mi" - cpu: "500m" - - name: wp - image: wordpress - resources: - requests: - memory: "64Mi" - cpu: "250m" - limits: - memory: "128Mi" - cpu: "500m" -``` - -## How Pods with resource requests are scheduled - -When you create a Pod, the Kubernetes scheduler selects a node for the Pod to -run on. Each node has a maximum capacity for each of the resource types: the -amount of CPU and memory it can provide for Pods. The scheduler ensures that, -for each resource type, the sum of the resource requests of the scheduled -Containers is less than the capacity of the node. Note that although actual memory -or CPU resource usage on nodes is very low, the scheduler still refuses to place -a Pod on a node if the capacity check fails. This protects against a resource -shortage on a node when resource usage later increases, for example, during a -daily peak in request rate. - -## How Pods with resource limits are run - -When the kubelet starts a Container of a Pod, it passes the CPU and memory limits -to the container runtime. - -When using Docker: - -- The `spec.containers[].resources.requests.cpu` is converted to its core value, - which is potentially fractional, and multiplied by 1024. The greater of this number - or 2 is used as the value of the - [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint) - flag in the `docker run` command. - -- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and - multiplied by 100. The resulting value is the total amount of CPU time that a container can use - every 100ms. A container cannot use more than its share of CPU time during this interval. - - {{< note >}} - The default quota period is 100ms. The minimum resolution of CPU quota is 1ms. - {{}} - -- The `spec.containers[].resources.limits.memory` is converted to an integer, and - used as the value of the - [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) - flag in the `docker run` command. - -If a Container exceeds its memory limit, it might be terminated. If it is -restartable, the kubelet will restart it, as with any other type of runtime -failure. - -If a Container exceeds its memory request, it is likely that its Pod will -be evicted whenever the node runs out of memory. - -A Container might or might not be allowed to exceed its CPU limit for extended -periods of time. However, it will not be killed for excessive CPU usage. - -To determine whether a Container cannot be scheduled or is being killed due to -resource limits, see the -[Troubleshooting](#troubleshooting) section. 
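As a rough, back-of-the-envelope illustration of the conversions described above (a sketch only, assuming the Docker runtime and the resource values from the earlier `frontend` Pod example):

```shell
# How the kubelet translates the earlier example's values for Docker:
#   requests.cpu  = 250m  -> 0.25 * 1024 = 256         -> docker run --cpu-shares=256
#   limits.cpu    = 500m  -> 500 * 100   = 50000 (µs)  -> at most 50ms of CPU time per 100ms period
#   limits.memory = 128Mi -> 134217728 bytes           -> docker run --memory=134217728
```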
- -## Monitoring compute resource usage - -The resource usage of a Pod is reported as part of the Pod status. - -If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md) -is configured for your cluster, then Pod resource usage can be retrieved from -the monitoring system. - -## Troubleshooting - -### My Pods are pending with event message failedScheduling - -If the scheduler cannot find any node where a Pod can fit, the Pod remains -unscheduled until a place can be found. An event is produced each time the -scheduler fails to find a place for the Pod, like this: - -```shell -kubectl describe pod frontend | grep -A 3 Events -``` -``` -Events: - FirstSeen LastSeen Count From Subobject PathReason Message - 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others -``` - -In the preceding example, the Pod named "frontend" fails to be scheduled due to -insufficient CPU resource on the node. Similar error messages can also suggest -failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod -is pending with a message of this type, there are several things to try: - -- Add more nodes to the cluster. -- Terminate unneeded Pods to make room for pending Pods. -- Check that the Pod is not larger than all the nodes. For example, if all the - nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will - never be scheduled. - -You can check node capacities and amounts allocated with the -`kubectl describe nodes` command. For example: - -```shell -kubectl describe nodes e2e-test-node-pool-4lw4 -``` -``` -Name: e2e-test-node-pool-4lw4 -[ ... lines removed for clarity ...] -Capacity: - cpu: 2 - memory: 7679792Ki - pods: 110 -Allocatable: - cpu: 1800m - memory: 7474992Ki - pods: 110 -[ ... lines removed for clarity ...] -Non-terminated Pods: (5 in total) - Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits - --------- ---- ------------ ---------- --------------- ------------- - kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) - kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) - kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%) - kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) - kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) -Allocated resources: - (Total limits may be over 100 percent, i.e., overcommitted.) - CPU Requests CPU Limits Memory Requests Memory Limits - ------------ ---------- --------------- ------------- - 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) -``` - -In the preceding output, you can see that if a Pod requests more than 1120m -CPUs or 6.23Gi of memory, it will not fit on the node. - -By looking at the `Pods` section, you can see which Pods are taking up space on -the node. - -The amount of resources available to Pods is less than the node capacity, because -system daemons use a portion of the available resources. The `allocatable` field -[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) -gives the amount of resources that are available to Pods. For more information, see -[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). 
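If you only want the raw numbers, a shorter query along these lines also works (a sketch; substitute one of your own node names for the example name used above):

```shell
# Print capacity versus allocatable for a single node.
kubectl get node e2e-test-node-pool-4lw4 \
  -o jsonpath='capacity: {.status.capacity}{"\n"}allocatable: {.status.allocatable}{"\n"}'
```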
- -The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured -to limit the total amount of resources that can be consumed. If used in conjunction -with namespaces, it can prevent one team from hogging all the resources. - -### My Container is terminated - -Your Container might get terminated because it is resource-starved. To check -whether a Container is being killed because it is hitting a resource limit, call -`kubectl describe pod` on the Pod of interest: - -```shell -kubectl describe pod simmemleak-hra99 -``` -``` -Name: simmemleak-hra99 -Namespace: default -Image(s): saadali/simmemleak -Node: kubernetes-node-tf0f/10.240.216.66 -Labels: name=simmemleak -Status: Running -Reason: -Message: -IP: 10.244.2.75 -Replication Controllers: simmemleak (1/1 replicas created) -Containers: - simmemleak: - Image: saadali/simmemleak - Limits: - cpu: 100m - memory: 50Mi - State: Running - Started: Tue, 07 Jul 2015 12:54:41 -0700 - Last Termination State: Terminated - Exit Code: 1 - Started: Fri, 07 Jul 2015 12:54:30 -0700 - Finished: Fri, 07 Jul 2015 12:54:33 -0700 - Ready: False - Restart Count: 5 -Conditions: - Type Status - Ready False -Events: - FirstSeen LastSeen Count From SubobjectPath Reason Message - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a -``` - -In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` -Container in the Pod was terminated and restarted five times. - -You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status -of previously terminated Containers: - -```shell -kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 -``` -``` -Container Name: simmemleak -LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] -``` - -You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory. - -## Local ephemeral storage -{{< feature-state state="beta" >}} - -Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers. 
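To get a feel for how much space is left on that shared partition, you can check the directories mentioned above directly on a node (a sketch; the paths are the defaults and may differ if your kubelet is configured with another root directory):

```shell
# Both directories normally live on the node's root partition.
df -h /var/lib/kubelet /var/log
```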
- -This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope. - -{{< note >}} -If an optional runtime partition is used, root partition will not hold any image layer or writable layers. -{{< /note >}} - -### Requests and limits setting for local ephemeral storage -Each Container of a Pod can specify one or more of the following: - -* `spec.containers[].resources.limits.ephemeral-storage` -* `spec.containers[].resources.requests.ephemeral-storage` - -Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as -a plain integer or as a fixed-point integer using one of these suffixes: -E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, -Mi, Ki. For example, the following represent roughly the same value: - -```shell -128974848, 129e6, 129M, 123Mi -``` - -For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: frontend -spec: - containers: - - name: db - image: mysql - env: - - name: MYSQL_ROOT_PASSWORD - value: "password" - resources: - requests: - ephemeral-storage: "2Gi" - limits: - ephemeral-storage: "4Gi" - - name: wp - image: wordpress - resources: - requests: - ephemeral-storage: "2Gi" - limits: - ephemeral-storage: "4Gi" -``` - -### How Pods with ephemeral-storage requests are scheduled - -When you create a Pod, the Kubernetes scheduler selects a node for the Pod to -run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). - -The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node. - -### How Pods with ephemeral-storage limits run - -For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted. - -### Monitoring ephemeral-storage consumption - -When local ephemeral storage is used, it is monitored on an ongoing -basis by the kubelet. The monitoring is performed by scanning each -emptyDir volume, log directories, and writable layers on a periodic -basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log -directories or writable layers) may, at the cluster operator's option, -be managed by use of [project -quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html). -Project quotas were originally implemented in XFS, and have more -recently been ported to ext4fs. Project quotas can be used for both -monitoring and enforcement; as of Kubernetes 1.16, they are available -as alpha functionality for monitoring only. - -Quotas are faster and more accurate than directory scanning. 
When a -directory is assigned to a project, all files created under a -directory are created in that project, and the kernel merely has to -keep track of how many blocks are in use by files in that project. If -a file is created and deleted, but with an open file descriptor, it -continues to consume space. This space will be tracked by the quota, -but will not be seen by a directory scan. - -Kubernetes uses project IDs starting from 1048576. The IDs in use are -registered in `/etc/projects` and `/etc/projid`. If project IDs in -this range are used for other purposes on the system, those project -IDs must be registered in `/etc/projects` and `/etc/projid` to prevent -Kubernetes from using them. - -To enable use of project quotas, the cluster operator must do the -following: - -* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true` - feature gate in the kubelet configuration. This defaults to `false` - in Kubernetes 1.16, so must be explicitly set to `true`. - -* Ensure that the root partition (or optional runtime partition) is - built with project quotas enabled. All XFS filesystems support - project quotas, but ext4 filesystems must be built specially. - -* Ensure that the root partition (or optional runtime partition) is - mounted with project quotas enabled. - -#### Building and mounting filesystems with project quotas enabled - -XFS filesystems require no special action when building; they are -automatically built with project quotas enabled. - -Ext4fs filesystems must be built with quotas enabled, then they must -be enabled in the filesystem: - -``` -% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device -% sudo tune2fs -O project -Q prjquota /dev/block_device - -``` - -To mount the filesystem, both ext4fs and XFS require the `prjquota` -option set in `/etc/fstab`: - -``` -/dev/block_device /var/kubernetes_data defaults,prjquota 0 0 -``` - - -## Extended resources - -Extended resources are fully-qualified resource names outside the -`kubernetes.io` domain. They allow cluster operators to advertise and users to -consume the non-Kubernetes-built-in resources. - -There are two steps required to use Extended Resources. First, the cluster -operator must advertise an Extended Resource. Second, users must request the -Extended Resource in Pods. - -### Managing extended resources - -#### Node-level extended resources - -Node-level extended resources are tied to nodes. - -##### Device plugin managed resources -See [Device -Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) -for how to advertise device plugin managed resources on each node. - -##### Other resources -To advertise a new node-level extended resource, the cluster operator can -submit a `PATCH` HTTP request to the API server to specify the available -quantity in the `status.capacity` for a node in the cluster. After this -operation, the node's `status.capacity` will include a new resource. The -`status.allocatable` field is updated automatically with the new resource -asynchronously by the kubelet. Note that because the scheduler uses the node -`status.allocatable` value when evaluating Pod fitness, there may be a short -delay between patching the node capacity with a new resource and the first Pod -that requests the resource to be scheduled on that node. - -**Example:** - -Here is an example showing how to use `curl` to form an HTTP request that -advertises five "example.com/foo" resources on node `k8s-node-1` whose master -is `k8s-master`. 
- -```shell -curl --header "Content-Type: application/json-patch+json" \ ---request PATCH \ ---data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \ -http://k8s-master:8080/api/v1/nodes/k8s-node-1/status -``` - -{{< note >}} -In the preceding request, `~1` is the encoding for the character `/` -in the patch path. The operation path value in JSON-Patch is interpreted as a -JSON-Pointer. For more details, see -[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3). -{{< /note >}} - -#### Cluster-level extended resources - -Cluster-level extended resources are not tied to nodes. They are usually managed -by scheduler extenders, which handle the resource consumption and resource quota. - -You can specify the extended resources that are handled by scheduler extenders -in [scheduler policy -configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31). - -**Example:** - -The following configuration for a scheduler policy indicates that the -cluster-level extended resource "example.com/foo" is handled by the scheduler -extender. - -- The scheduler sends a Pod to the scheduler extender only if the Pod requests - "example.com/foo". -- The `ignoredByScheduler` field specifies that the scheduler does not check - the "example.com/foo" resource in its `PodFitsResources` predicate. - -```json -{ - "kind": "Policy", - "apiVersion": "v1", - "extenders": [ - { - "urlPrefix":"", - "bindVerb": "bind", - "managedResources": [ - { - "name": "example.com/foo", - "ignoredByScheduler": true - } - ] - } - ] -} -``` - -### Consuming extended resources - -Users can consume extended resources in Pod specs just like CPU and memory. -The scheduler takes care of the resource accounting so that no more than the -available amount is simultaneously allocated to Pods. - -The API server restricts quantities of extended resources to whole numbers. -Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of -_invalid_ quantities are `0.5` and `1500m`. - -{{< note >}} -Extended resources replace Opaque Integer Resources. -Users can use any domain name prefix other than `kubernetes.io` which is reserved. -{{< /note >}} - -To consume an extended resource in a Pod, include the resource name as a key -in the `spec.containers[].resources.limits` map in the container spec. - -{{< note >}} -Extended resources cannot be overcommitted, so request and limit -must be equal if both are present in a container spec. -{{< /note >}} - -A Pod is scheduled only if all of the resource requests are satisfied, including -CPU, memory and any extended resources. The Pod remains in the `PENDING` state -as long as the resource request cannot be satisfied. - -**Example:** - -The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource). - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: my-pod -spec: - containers: - - name: my-container - image: myimage - resources: - requests: - cpu: 2 - example.com/foo: 1 - limits: - example.com/foo: 1 -``` - - - -{{% /capture %}} - - -{{% capture whatsnext %}} - -* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/). - -* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). 
- -* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) - -* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) - -{{% /capture %}} diff --git a/content/uk/docs/concepts/configuration/secret.md b/content/uk/docs/concepts/configuration/secret.md deleted file mode 100644 index 6261650692..0000000000 --- a/content/uk/docs/concepts/configuration/secret.md +++ /dev/null @@ -1,1054 +0,0 @@ ---- -reviewers: -- mikedanese -title: Secrets -content_template: templates/concept -feature: - title: Управління секретами та конфігурацією - description: > - Розгортайте та оновлюйте секрети та конфігурацію застосунку без перезбирання образів, не розкриваючи секрети в конфігурацію стека. -weight: 50 ---- - - -{{% capture overview %}} - -Kubernetes `secret` objects let you store and manage sensitive information, such -as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` -is safer and more flexible than putting it verbatim in a -{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. - -{{% /capture %}} - -{{% capture body %}} - -## Overview of Secrets - -A Secret is an object that contains a small amount of sensitive data such as -a password, a token, or a key. Such information might otherwise be put in a -Pod specification or in an image; putting it in a Secret object allows for -more control over how it is used, and reduces the risk of accidental exposure. - -Users can create secrets, and the system also creates some secrets. - -To use a secret, a pod needs to reference the secret. -A secret can be used with a pod in two ways: as files in a -{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of -its containers, or used by kubelet when pulling images for the pod. - -### Built-in Secrets - -#### Service Accounts Automatically Create and Attach Secrets with API Credentials - -Kubernetes automatically creates secrets which contain credentials for -accessing the API and it automatically modifies your pods to use this type of -secret. - -The automatic creation and use of API credentials can be disabled or overridden -if desired. However, if all you need to do is securely access the apiserver, -this is the recommended workflow. - -See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more -information on how Service Accounts work. - -### Creating your own Secrets - -#### Creating a Secret Using kubectl create secret - -Say that some pods need to access a database. The -username and password that the pods should use is in the files -`./username.txt` and `./password.txt` on your local machine. - -```shell -# Create files needed for rest of example. -echo -n 'admin' > ./username.txt -echo -n '1f2d1e2e67df' > ./password.txt -``` - -The `kubectl create secret` command -packages these files into a Secret and creates -the object on the Apiserver. - -```shell -kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt -``` -``` -secret "db-user-pass" created -``` -{{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. 
In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: - -``` -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' -``` - - You do not need to escape special characters in passwords from files (`--from-file`). -{{< /note >}} - -You can check that the secret was created like this: - -```shell -kubectl get secrets -``` -``` -NAME TYPE DATA AGE -db-user-pass Opaque 2 51s -``` -```shell -kubectl describe secrets/db-user-pass -``` -``` -Name: db-user-pass -Namespace: default -Labels: -Annotations: - -Type: Opaque - -Data -==== -password.txt: 12 bytes -username.txt: 5 bytes -``` - -{{< note >}} -`kubectl get` and `kubectl describe` avoid showing the contents of a secret by -default. -This is to protect the secret from being exposed accidentally to an onlooker, -or from being stored in a terminal log. -{{< /note >}} - -See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. - -#### Creating a Secret Manually - -You can also create a Secret in a file first, in json or yaml format, -and then create that object. The -[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contains two maps: -data and stringData. The data field is used to store arbitrary data, encoded using -base64. The stringData field is provided for convenience, and allows you to provide -secret data as unencoded strings. - -For example, to store two strings in a Secret using the data field, convert -them to base64 as follows: - -```shell -echo -n 'admin' | base64 -YWRtaW4= -echo -n '1f2d1e2e67df' | base64 -MWYyZDFlMmU2N2Rm -``` - -Write a Secret that looks like this: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` - -Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): - -```shell -kubectl apply -f ./secret.yaml -``` -``` -secret "mysecret" created -``` - -For certain scenarios, you may wish to use the stringData field instead. This -field allows you to put a non-base64 encoded string directly into the Secret, -and the string will be encoded for you when the Secret is created or updated. - -A practical example of this might be where you are deploying an application -that uses a Secret to store a configuration file, and you want to populate -parts of that configuration file during your deployment process. - -If your application uses the following configuration file: - -```yaml -apiUrl: "https://my.api.com/api/v1" -username: "user" -password: "password" -``` - -You could store this in a Secret using the following: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -stringData: - config.yaml: |- - apiUrl: "https://my.api.com/api/v1" - username: {{username}} - password: {{password}} -``` - -Your deployment tool could then replace the `{{username}}` and `{{password}}` -template variables before running `kubectl apply`. - -stringData is a write-only convenience field. It is never output when -retrieving Secrets. 
For example, if you run the following command: - -```shell -kubectl get secret mysecret -o yaml -``` - -The output will be similar to: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2018-11-15T20:40:59Z - name: mysecret - namespace: default - resourceVersion: "7225" - uid: c280ad2e-e916-11e8-98f2-025000000001 -type: Opaque -data: - config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 -``` - -If a field is specified in both data and stringData, the value from stringData -is used. For example, the following Secret definition: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= -stringData: - username: administrator -``` - -Results in the following secret: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2018-11-15T20:46:46Z - name: mysecret - namespace: default - resourceVersion: "7579" - uid: 91460ecb-e917-11e8-98f2-025000000001 -type: Opaque -data: - username: YWRtaW5pc3RyYXRvcg== -``` - -Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. - -The keys of data and stringData must consist of alphanumeric characters, -'-', '_' or '.'. - -**Encoding Note:** The serialized JSON and YAML values of secret data are -encoded as base64 strings. Newlines are not valid within these strings and must -be omitted. When using the `base64` utility on Darwin/macOS users should avoid -using the `-b` option to split long lines. Conversely Linux users *should* add -the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if -`-w` option is not available. - -#### Creating a Secret from Generator -Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) -since 1.14. With this new feature, -you can also create a Secret from generators and then apply it to create the object on -the Apiserver. The generators -should be specified in a `kustomization.yaml` inside a directory. - -For example, to generate a Secret from files `./username.txt` and `./password.txt` -```shell -# Create a kustomization.yaml file with SecretGenerator -cat <./kustomization.yaml -secretGenerator: -- name: db-user-pass - files: - - username.txt - - password.txt -EOF -``` -Apply the kustomization directory to create the Secret object. -```shell -$ kubectl apply -k . -secret/db-user-pass-96mffmfh4k created -``` - -You can check that the secret was created like this: - -```shell -$ kubectl get secrets -NAME TYPE DATA AGE -db-user-pass-96mffmfh4k Opaque 2 51s - -$ kubectl describe secrets/db-user-pass-96mffmfh4k -Name: db-user-pass -Namespace: default -Labels: -Annotations: - -Type: Opaque - -Data -==== -password.txt: 12 bytes -username.txt: 5 bytes -``` - -For example, to generate a Secret from literals `username=admin` and `password=secret`, -you can specify the secret generator in `kustomization.yaml` as -```shell -# Create a kustomization.yaml file with SecretGenerator -$ cat <./kustomization.yaml -secretGenerator: -- name: db-user-pass - literals: - - username=admin - - password=secret -EOF -``` -Apply the kustomization directory to create the Secret object. -```shell -$ kubectl apply -k . -secret/db-user-pass-dddghtt9b5 created -``` -{{< note >}} -The generated Secrets name has a suffix appended by hashing the contents. This ensures that a new -Secret is generated each time the contents is modified. 
-{{< /note >}} - -#### Decoding a Secret - -Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section: - -```shell -kubectl get secret mysecret -o yaml -``` -``` -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2016-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` - -Decode the password field: - -```shell -echo 'MWYyZDFlMmU2N2Rm' | base64 --decode -``` -``` -1f2d1e2e67df -``` - -#### Editing a Secret - -An existing secret may be edited with the following command: - -```shell -kubectl edit secrets mysecret -``` - -This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field: - -``` -# Please edit the object below. Lines beginning with a '#' will be ignored, -# and an empty file will abort the edit. If an error occurs while saving this file will be -# reopened with the relevant failures. -# -apiVersion: v1 -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -kind: Secret -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: { ... } - creationTimestamp: 2016-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -``` - -## Using Secrets - -Secrets can be mounted as data volumes or be exposed as -{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} -to be used by a container in a pod. They can also be used by other parts of the -system, without being directly exposed to the pod. For example, they can hold -credentials that other parts of the system should use to interact with external -systems on your behalf. - -### Using Secrets as Files from a Pod - -To consume a Secret in a volume in a Pod: - -1. Create a secret or use an existing one. Multiple pods can reference the same secret. -1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the secret object. -1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear. -1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`. - -This is an example of a pod that mounts a secret in a volume: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret -``` - -Each secret you want to use needs to be referred to in `.spec.volumes`. - -If there are multiple containers in the pod, then each container needs its -own `volumeMounts` block, but only one `.spec.volumes` is needed per secret. - -You can package many files into one secret, or use many secrets, whichever is convenient. - -**Projection of secret keys to specific paths** - -We can also control the paths within the volume where Secret keys are projected. 
-You can use `.spec.volumes[].secret.items` field to change target path of each key: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret - items: - - key: username - path: my-group/my-username -``` - -What will happen: - -* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`. -* `password` secret is not projected - -If `.spec.volumes[].secret.items` is used, only keys specified in `items` are projected. -To consume all keys from the secret, all of them must be listed in the `items` field. -All listed keys must exist in the corresponding secret. Otherwise, the volume is not created. - -**Secret files permissions** - -You can also specify the permission mode bits files part of a secret will have. -If you don't specify any, `0644` is used by default. You can specify a default -mode for the whole secret volume and override per key if needed. - -For example, you can specify a default mode like this: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - volumes: - - name: foo - secret: - secretName: mysecret - defaultMode: 256 -``` - -Then, the secret will be mounted on `/etc/foo` and all the files created by the -secret volume mount will have permission `0400`. - -Note that the JSON spec doesn't support octal notation, so use the value 256 for -0400 permissions. If you use yaml instead of json for the pod, you can use octal -notation to specify permissions in a more natural way. - -You can also use mapping, as in the previous example, and specify different -permission for different files like this: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - volumes: - - name: foo - secret: - secretName: mysecret - items: - - key: username - path: my-group/my-username - mode: 511 -``` - -In this case, the file resulting in `/etc/foo/my-group/my-username` will have -permission value of `0777`. Owing to JSON limitations, you must specify the mode -in decimal notation. - -Note that this permission value might be displayed in decimal notation if you -read it later. - -**Consuming Secret Values from Volumes** - -Inside the container that mounts a secret volume, the secret keys appear as -files and the secret values are base-64 decoded and stored inside these files. -This is the result of commands -executed inside the container from the example above: - -```shell -ls /etc/foo/ -``` -``` -username -password -``` - -```shell -cat /etc/foo/username -``` -``` -admin -``` - - -```shell -cat /etc/foo/password -``` -``` -1f2d1e2e67df -``` - -The program in a container is responsible for reading the secrets from the -files. - -**Mounted Secrets are updated automatically** - -When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. -Kubelet is checking whether the mounted secret is fresh on every periodic sync. -However, it is using its local cache for getting the current value of the Secret. 
-The type of the cache is configurable using the (`ConfigMapAndSecretChangeDetectionStrategy` field in -[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). -It can be either propagated via watch (default), ttl-based, or simply redirecting -all requests to directly kube-apiserver. -As a result, the total delay from the moment when the Secret is updated to the moment -when new keys are projected to the Pod can be as long as kubelet sync period + cache -propagation delay, where cache propagation delay depends on the chosen cache type -(it equals to watch propagation delay, ttl of cache, or zero corespondingly). - -{{< note >}} -A container using a Secret as a -[subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive -Secret updates. -{{< /note >}} - -### Using Secrets as Environment Variables - -To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}} -in a pod: - -1. Create a secret or use an existing one. Multiple pods can reference the same secret. -1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`. -1. Modify your image and/or command line so that the program looks for values in the specified environment variables - -This is an example of a pod that uses secrets from environment variables: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: secret-env-pod -spec: - containers: - - name: mycontainer - image: redis - env: - - name: SECRET_USERNAME - valueFrom: - secretKeyRef: - name: mysecret - key: username - - name: SECRET_PASSWORD - valueFrom: - secretKeyRef: - name: mysecret - key: password - restartPolicy: Never -``` - -**Consuming Secret Values from Environment Variables** - -Inside a container that consumes a secret in an environment variables, the secret keys appear as -normal environment variables containing the base-64 decoded values of the secret data. -This is the result of commands executed inside the container from the example above: - -```shell -echo $SECRET_USERNAME -``` -``` -admin -``` -```shell -echo $SECRET_PASSWORD -``` -``` -1f2d1e2e67df -``` - -### Using imagePullSecrets - -An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry -password to the Kubelet so it can pull a private image on behalf of your Pod. - -**Manually specifying an imagePullSecret** - -Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) - -### Arranging for imagePullSecrets to be Automatically Attached - -You can manually create an imagePullSecret, and reference it from -a serviceAccount. Any pods created with that serviceAccount -or that default to use that serviceAccount, will get their imagePullSecret -field set to that of the service account. -See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) - for a detailed explanation of that process. - -### Automatic Mounting of Manually Created Secrets - -Manually created secrets (e.g. 
one containing a token for accessing a github account) -can be automatically attached to pods based on their service account. -See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process. - -## Details - -### Restrictions - -Secret volume sources are validated to ensure that the specified object -reference actually points to an object of type `Secret`. Therefore, a secret -needs to be created before any pods that depend on it. - -Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. -They can only be referenced by pods in that same namespace. - -Individual secrets are limited to 1MiB in size. This is to discourage creation -of very large secrets which would exhaust apiserver and kubelet memory. -However, creation of many smaller secrets could also exhaust memory. More -comprehensive limits on memory usage due to secrets is a planned feature. - -Kubelet only supports use of secrets for Pods it gets from the API server. -This includes any pods created using kubectl, or indirectly via a replication -controller. It does not include pods created via the kubelets -`--manifest-url` flag, its `--config` flag, or its REST API (these are -not common ways to create pods.) - -Secrets must be created before they are consumed in pods as environment -variables unless they are marked as optional. References to Secrets that do -not exist will prevent the pod from starting. - -References via `secretKeyRef` to keys that do not exist in a named Secret -will prevent the pod from starting. - -Secrets used to populate environment variables via `envFrom` that have keys -that are considered invalid environment variable names will have those keys -skipped. The pod will be allowed to start. There will be an event whose -reason is `InvalidVariableNames` and the message will contain the list of -invalid keys that were skipped. The example shows a pod which refers to the -default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad. - -```shell -kubectl get events -``` -``` -LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON -0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. -``` - -### Secret and Pod Lifetime interaction - -When a pod is created via the API, there is no check whether a referenced -secret exists. Once a pod is scheduled, the kubelet will try to fetch the -secret value. If the secret cannot be fetched because it does not exist or -because of a temporary lack of connection to the API server, kubelet will -periodically retry. It will report an event about the pod explaining the -reason it is not started yet. Once the secret is fetched, the kubelet will -create and mount a volume containing it. None of the pod's containers will -start until all the pod's volumes are mounted. - -## Use cases - -### Use-Case: Pod with ssh keys - -Create a kustomization.yaml with SecretGenerator containing some ssh keys: - -```shell -kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub -``` - -``` -secret "ssh-key-secret" created -``` - -{{< caution >}} -Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. 
Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and can revoke if they are compromised. -{{< /caution >}} - - -Now we can create a pod which references the secret with the ssh key and -consumes it in a volume: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: secret-test-pod - labels: - name: secret-test -spec: - volumes: - - name: secret-volume - secret: - secretName: ssh-key-secret - containers: - - name: ssh-test-container - image: mySshImage - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` - -When the container's command runs, the pieces of the key will be available in: - -```shell -/etc/secret-volume/ssh-publickey -/etc/secret-volume/ssh-privatekey -``` - -The container is then free to use the secret data to establish an ssh connection. - -### Use-Case: Pods with prod / test credentials - -This example illustrates a pod which consumes a secret containing prod -credentials and another pod which consumes a secret with test environment -credentials. - -Make the kustomization.yaml with SecretGenerator - -```shell -kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 -``` -``` -secret "prod-db-secret" created -``` - -```shell -kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests -``` -``` -secret "test-db-secret" created -``` -{{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: - -``` -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' -``` - - You do not need to escape special characters in passwords from files (`--from-file`). -{{< /note >}} - -Now make the pods: - -```shell -$ cat < pod.yaml -apiVersion: v1 -kind: List -items: -- kind: Pod - apiVersion: v1 - metadata: - name: prod-db-client-pod - labels: - name: prod-db-client - spec: - volumes: - - name: secret-volume - secret: - secretName: prod-db-secret - containers: - - name: db-client-container - image: myClientImage - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -- kind: Pod - apiVersion: v1 - metadata: - name: test-db-client-pod - labels: - name: test-db-client - spec: - volumes: - - name: secret-volume - secret: - secretName: test-db-secret - containers: - - name: db-client-container - image: myClientImage - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -EOF -``` - -Add the pods to the same kustomization.yaml -```shell -$ cat <> kustomization.yaml -resources: -- pod.yaml -EOF -``` - -Apply all those objects on the Apiserver by - -```shell -kubectl apply -k . -``` - -Both containers will have the following files present on their filesystems with the values for each container's environment: - -```shell -/etc/secret-volume/username -/etc/secret-volume/password -``` - -Note how the specs for the two pods differ only in one field; this facilitates -creating pods with different capabilities from a common pod config template. 
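Assuming both pods are running, a quick way to confirm that each client only sees its own credentials is to read the mounted files back (a sketch using the pod names and mount path from the example above):

```shell
kubectl exec prod-db-client-pod -- cat /etc/secret-volume/username   # prints: produser
kubectl exec test-db-client-pod -- cat /etc/secret-volume/username   # prints: testuser
```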
- -You could further simplify the base pod specification by using two Service Accounts: -one called, say, `prod-user` with the `prod-db-secret`, and one called, say, -`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: prod-db-client-pod - labels: - name: prod-db-client -spec: - serviceAccount: prod-db-client - containers: - - name: db-client-container - image: myClientImage -``` - -### Use-case: Dotfiles in secret volume - -In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply -make that key begin with a dot. For example, when the following secret is mounted into a volume: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: dotfile-secret -data: - .secret-file: dmFsdWUtMg0KDQo= ---- -apiVersion: v1 -kind: Pod -metadata: - name: secret-dotfiles-pod -spec: - volumes: - - name: secret-volume - secret: - secretName: dotfile-secret - containers: - - name: dotfile-test-container - image: k8s.gcr.io/busybox - command: - - ls - - "-l" - - "/etc/secret-volume" - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` - - -The `secret-volume` will contain a single file, called `.secret-file`, and -the `dotfile-test-container` will have this file present at the path -`/etc/secret-volume/.secret-file`. - -{{< note >}} -Files beginning with dot characters are hidden from the output of `ls -l`; -you must use `ls -la` to see them when listing directory contents. -{{< /note >}} - -### Use-case: Secret visible to one container in a pod - -Consider a program that needs to handle HTTP requests, do some complex business -logic, and then sign some messages with an HMAC. Because it has complex -application logic, there might be an unnoticed remote file reading exploit in -the server, which could expose the private key to an attacker. - -This could be divided into two processes in two containers: a frontend container -which handles user interaction and business logic, but which cannot see the -private key; and a signer container that can see the private key, and responds -to simple signing requests from the frontend (e.g. over localhost networking). - -With this partitioned approach, an attacker now has to trick the application -server into doing something rather arbitrary, which may be harder than getting -it to read a file. - - - -## Best practices - -### Clients that use the secrets API - -When deploying applications that interact with the secrets API, access should be -limited using [authorization policies]( -/docs/reference/access-authn-authz/authorization/) such as [RBAC]( -/docs/reference/access-authn-authz/rbac/). - -Secrets often hold values that span a spectrum of importance, many of which can -cause escalations within Kubernetes (e.g. service account tokens) and to -external systems. Even if an individual app can reason about the power of the -secrets it expects to interact with, other apps within the same namespace can -render those assumptions invalid. - -For these reasons `watch` and `list` requests for secrets within a namespace are -extremely powerful capabilities and should be avoided, since listing secrets allows -the clients to inspect the values of all secrets that are in that namespace. The ability to -`watch` and `list` all secrets in a cluster should be reserved for only the most -privileged, system-level components. 
- -Applications that need to access the secrets API should perform `get` requests on -the secrets they need. This lets administrators restrict access to all secrets -while [white-listing access to individual instances]( -/docs/reference/access-authn-authz/rbac/#referring-to-resources) that -the app needs. - -For improved performance over a looping `get`, clients can design resources that -reference a secret then `watch` the resource, re-requesting the secret when the -reference changes. Additionally, a ["bulk watch" API]( -https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md) -to let clients `watch` individual resources has also been proposed, and will likely -be available in future releases of Kubernetes. - -## Security Properties - - -### Protections - -Because `secret` objects can be created independently of the `pods` that use -them, there is less risk of the secret being exposed during the workflow of -creating, viewing, and editing pods. The system can also take additional -precautions with `secret` objects, such as avoiding writing them to disk where -possible. - -A secret is only sent to a node if a pod on that node requires it. -Kubelet stores the secret into a `tmpfs` so that the secret is not written -to disk storage. Once the Pod that depends on the secret is deleted, kubelet -will delete its local copy of the secret data as well. - -There may be secrets for several pods on the same node. However, only the -secrets that a pod requests are potentially visible within its containers. -Therefore, one Pod does not have access to the secrets of another Pod. - -There may be several containers in a pod. However, each container in a pod has -to request the secret volume in its `volumeMounts` for it to be visible within -the container. This can be used to construct useful [security partitions at the -Pod level](#use-case-secret-visible-to-one-container-in-a-pod). - -On most Kubernetes-project-maintained distributions, communication between user -to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. -Secrets are protected when transmitted over these channels. - -{{< feature-state for_k8s_version="v1.13" state="beta" >}} - -You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) -for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}. - -### Risks - - - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; - therefore: - - Administrators should enable encryption at rest for cluster data (requires v1.13 or later) - - Administrators should limit access to etcd to admin users - - Administrators may want to wipe/shred disks used by etcd when no longer in use - - If running etcd in a cluster, administrators should make sure to use SSL/TLS - for etcd peer-to-peer communication. - - If you configure the secret through a manifest (JSON or YAML) file which has - the secret data encoded as base64, sharing this file or checking it in to a - source repository means the secret is compromised. Base64 encoding is _not_ an - encryption method and is considered the same as plain text. - - Applications still need to protect the value of secret after reading it from the volume, - such as not accidentally logging it or transmitting it to an untrusted party. - - A user who can create a pod that uses a secret can also see the value of that secret. 
Even - if apiserver policy does not allow that user to read the secret object, the user could - run a pod which exposes the secret. - - Currently, anyone with root on any node can read _any_ secret from the apiserver, - by impersonating the kubelet. It is a planned feature to only send secrets to - nodes that actually require them, to restrict the impact of a root exploit on a - single node. - - -{{% capture whatsnext %}} - -{{% /capture %}} diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md index 7d484d3bd6..269239c30a 100644 --- a/content/uk/docs/concepts/overview/what-is-kubernetes.md +++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md @@ -1,7 +1,4 @@ --- -reviewers: -- bgrant0607 -- mikedanese title: Що таке Kubernetes? content_template: templates/concept weight: 10 diff --git a/content/uk/docs/concepts/services-networking/dual-stack.md b/content/uk/docs/concepts/services-networking/dual-stack.md deleted file mode 100644 index a4e7bf57af..0000000000 --- a/content/uk/docs/concepts/services-networking/dual-stack.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -reviewers: -- lachie83 -- khenidak -- aramase -title: IPv4/IPv6 dual-stack -feature: - title: Подвійний стек IPv4/IPv6 - description: > - Призначення IPv4- та IPv6-адрес подам і сервісам. - -content_template: templates/concept -weight: 70 ---- - -{{% capture overview %}} - -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} - - IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}. - -If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses. - -{{% /capture %}} - -{{% capture body %}} - -## Supported Features - -Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: - - * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) - * IPv4 and IPv6 enabled Services (each Service must be for a single address family) - * Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces - -## Prerequisites - -The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters: - - * Kubernetes 1.16 or later - * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) - * A network plugin that supports dual-stack (such as Kubenet or Calico) - * Kube-proxy running in mode IPVS - -## Enable IPv4/IPv6 dual-stack - -To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments: - - * kube-controller-manager: - * `--feature-gates="IPv6DualStack=true"` - * `--cluster-cidr=,` eg. `--cluster-cidr=10.244.0.0/16,fc00::/24` - * `--service-cluster-ip-range=,` - * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6 - * kubelet: - * `--feature-gates="IPv6DualStack=true"` - * kube-proxy: - * `--proxy-mode=ipvs` - * `--cluster-cidrs=,` - * `--feature-gates="IPv6DualStack=true"` - -{{< caution >}} -If you specify an IPv6 address block larger than a /24 via `--cluster-cidr` on the command line, that assignment will fail. 
-{{< /caution >}} - -## Services - -If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service. -You can only set this field when creating a new Service. Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field not a requirement for [egress](#egress-traffic) traffic. - -{{< note >}} -The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager. -{{< /note >}} - -You can set `.spec.ipFamily` to either: - - * `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4` - * `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6` - -The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service. - -{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} - -The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service. - -{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} - -For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service. - -{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}} - -### Type LoadBalancer - -On cloud providers which support IPv6 enabled external load balancers, setting the `type` field to `LoadBalancer` in additional to setting `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service. - -## Egress Traffic - -The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters. 
- -## Known Issues - - * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr) - -{{% /capture %}} - -{{% capture whatsnext %}} - -* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking - -{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/endpoint-slices.md b/content/uk/docs/concepts/services-networking/endpoint-slices.md deleted file mode 100644 index f6e918b13c..0000000000 --- a/content/uk/docs/concepts/services-networking/endpoint-slices.md +++ /dev/null @@ -1,188 +0,0 @@ ---- -reviewers: -- freehan -title: EndpointSlices -feature: - title: EndpointSlices - description: > - Динамічне відстеження мережевих вузлів у кластері Kubernetes. - -content_template: templates/concept -weight: 10 ---- - - -{{% capture overview %}} - -{{< feature-state for_k8s_version="v1.17" state="beta" >}} - -_EndpointSlices_ provide a simple way to track network endpoints within a -Kubernetes cluster. They offer a more scalable and extensible alternative to -Endpoints. - -{{% /capture %}} - -{{% capture body %}} - -## EndpointSlice resources {#endpointslice-resource} - -In Kubernetes, an EndpointSlice contains references to a set of network -endpoints. The EndpointSlice controller automatically creates EndpointSlices -for a Kubernetes Service when a {{< glossary_tooltip text="selector" -term_id="selector" >}} is specified. These EndpointSlices will include -references to any Pods that match the Service selector. EndpointSlices group -network endpoints together by unique Service and Port combinations. - -As an example, here's a sample EndpointSlice resource for the `example` -Kubernetes Service. - -```yaml -apiVersion: discovery.k8s.io/v1beta1 -kind: EndpointSlice -metadata: - name: example-abc - labels: - kubernetes.io/service-name: example -addressType: IPv4 -ports: - - name: http - protocol: TCP - port: 80 -endpoints: - - addresses: - - "10.1.2.3" - conditions: - ready: true - hostname: pod-1 - topology: - kubernetes.io/hostname: node-1 - topology.kubernetes.io/zone: us-west2-a -``` - -By default, EndpointSlices managed by the EndpointSlice controller will have no -more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1 -with Endpoints and Services and have similar performance. - -EndpointSlices can act as the source of truth for kube-proxy when it comes to -how to route internal traffic. When enabled, they should provide a performance -improvement for services with large numbers of endpoints. - -### Address Types - -EndpointSlices support three address types: - -* IPv4 -* IPv6 -* FQDN (Fully Qualified Domain Name) - -### Topology - -Each endpoint within an EndpointSlice can contain relevant topology information. -This is used to indicate where an endpoint is, containing information about the -corresponding Node, zone, and region. When the values are available, the -following Topology labels will be set by the EndpointSlice controller: - -* `kubernetes.io/hostname` - The name of the Node this endpoint is on. -* `topology.kubernetes.io/zone` - The zone this endpoint is in. -* `topology.kubernetes.io/region` - The region this endpoint is in. - -The values of these labels are derived from resources associated with each -endpoint in a slice. The hostname label represents the value of the NodeName -field on the corresponding Pod. The zone and region labels represent the value -of the labels with the same names on the corresponding Node. 
- -### Management - -By default, EndpointSlices are created and managed by the EndpointSlice -controller. There are a variety of other use cases for EndpointSlices, such as -service mesh implementations, that could result in other entities or controllers -managing additional sets of EndpointSlices. To ensure that multiple entities can -manage EndpointSlices without interfering with each other, a -`endpointslice.kubernetes.io/managed-by` label is used to indicate the entity -managing an EndpointSlice. The EndpointSlice controller sets -`endpointslice-controller.k8s.io` as the value for this label on all -EndpointSlices it manages. Other entities managing EndpointSlices should also -set a unique value for this label. - -### Ownership - -In most use cases, EndpointSlices will be owned by the Service that it tracks -endpoints for. This is indicated by an owner reference on each EndpointSlice as -well as a `kubernetes.io/service-name` label that enables simple lookups of all -EndpointSlices belonging to a Service. - -## EndpointSlice Controller - -The EndpointSlice controller watches Services and Pods to ensure corresponding -EndpointSlices are up to date. The controller will manage EndpointSlices for -every Service with a selector specified. These will represent the IPs of Pods -matching the Service selector. - -### Size of EndpointSlices - -By default, EndpointSlices are limited to a size of 100 endpoints each. You can -configure this with the `--max-endpoints-per-slice` {{< glossary_tooltip -text="kube-controller-manager" term_id="kube-controller-manager" >}} flag up to -a maximum of 1000. - -### Distribution of EndpointSlices - -Each EndpointSlice has a set of ports that applies to all endpoints within the -resource. When named ports are used for a Service, Pods may end up with -different target port numbers for the same named port, requiring different -EndpointSlices. This is similar to the logic behind how subsets are grouped -with Endpoints. - -The controller tries to fill EndpointSlices as full as possible, but does not -actively rebalance them. The logic of the controller is fairly straightforward: - -1. Iterate through existing EndpointSlices, remove endpoints that are no longer - desired and update matching endpoints that have changed. -2. Iterate through EndpointSlices that have been modified in the first step and - fill them up with any new endpoints needed. -3. If there's still new endpoints left to add, try to fit them into a previously - unchanged slice and/or create new ones. - -Importantly, the third step prioritizes limiting EndpointSlice updates over a -perfectly full distribution of EndpointSlices. As an example, if there are 10 -new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each, -this approach will create a new EndpointSlice instead of filling up the 2 -existing EndpointSlices. In other words, a single EndpointSlice creation is -preferrable to multiple EndpointSlice updates. - -With kube-proxy running on each Node and watching EndpointSlices, every change -to an EndpointSlice becomes relatively expensive since it will be transmitted to -every Node in the cluster. This approach is intended to limit the number of -changes that need to be sent to every Node, even if it may result with multiple -EndpointSlices that are not full. - -In practice, this less than ideal distribution should be rare. 
Most changes -processed by the EndpointSlice controller will be small enough to fit in an -existing EndpointSlice, and if not, a new EndpointSlice is likely going to be -necessary soon anyway. Rolling updates of Deployments also provide a natural -repacking of EndpointSlices with all pods and their corresponding endpoints -getting replaced. - -## Motivation - -The Endpoints API has provided a simple and straightforward way of -tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters -and Services have gotten larger, limitations of that API became more visible. -Most notably, those included challenges with scaling to larger numbers of -network endpoints. - -Since all network endpoints for a Service were stored in a single Endpoints -resource, those resources could get quite large. That affected the performance -of Kubernetes components (notably the master control plane) and resulted in -significant amounts of network traffic and processing when Endpoints changed. -EndpointSlices help you mitigate those issues as well as provide an extensible -platform for additional features such as topological routing. - -{{% /capture %}} - -{{% capture whatsnext %}} - -* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) -* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) - -{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/service-topology.md b/content/uk/docs/concepts/services-networking/service-topology.md deleted file mode 100644 index c1be99267b..0000000000 --- a/content/uk/docs/concepts/services-networking/service-topology.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -reviewers: -- johnbelamaric -- imroc -title: Service Topology -feature: - title: Топологія Сервісів - description: > - Маршрутизація трафіка Сервісом відповідно до топології кластера. - -content_template: templates/concept -weight: 10 ---- - - -{{% capture overview %}} - -{{< feature-state for_k8s_version="v1.17" state="alpha" >}} - -_Service Topology_ enables a service to route traffic based upon the Node -topology of the cluster. For example, a service can specify that traffic be -preferentially routed to endpoints that are on the same Node as the client, or -in the same availability zone. - -{{% /capture %}} - -{{% capture body %}} - -## Introduction - -By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to -any backend address for the Service. Since Kubernetes 1.7 it has been possible -to route "external" traffic to the Pods running on the Node that received the -traffic, but this is not supported for `ClusterIP` Services, and more complex -topologies — such as routing zonally — have not been possible. The -_Service Topology_ feature resolves this by allowing the Service creator to -define a policy for routing traffic based upon the Node labels for the -originating and destination Nodes. - -By using Node label matching between the source and destination, the operator -may designate groups of Nodes that are "closer" and "farther" from one another, -using whatever metric makes sense for that operator's requirements. For many -operators in public clouds, for example, there is a preference to keep service -traffic within the same zone, because interzonal traffic has a cost associated -with it, while intrazonal traffic does not. 
Other common needs include being able -to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to -Nodes connected to the same top-of-rack switch for the lowest latency. - -## Prerequisites - -The following prerequisites are needed in order to enable topology aware service -routing: - - * Kubernetes 1.17 or later - * Kube-proxy running in iptables mode or IPVS mode - * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) - -## Enable Service Topology - -To enable service topology, enable the `ServiceTopology` feature gate for -kube-apiserver and kube-proxy: - -``` ---feature-gates="ServiceTopology=true" -``` - -## Using Service Topology - -If your cluster has Service Topology enabled, you can control Service traffic -routing by specifying the `topologyKeys` field on the Service spec. This field -is a preference-order list of Node labels which will be used to sort endpoints -when accessing this Service. Traffic will be directed to a Node whose value for -the first label matches the originating Node's value for that label. If there is -no backend for the Service on a matching Node, then the second label will be -considered, and so forth, until no labels remain. - -If no match is found, the traffic will be rejected, just as if there were no -backends for the Service at all. That is, endpoints are chosen based on the first -topology key with available backends. If this field is specified and all entries -have no backends that match the topology of the client, the service has no -backends for that client and connections should fail. The special value `"*"` may -be used to mean "any topology". This catch-all value, if used, only makes sense -as the last value in the list. - -If `topologyKeys` is not specified or empty, no topology constraints will be applied. - -Consider a cluster with Nodes that are labeled with their hostname, zone name, -and region name. Then you can set the `topologyKeys` values of a service to direct -traffic as follows. - -* Only to endpoints on the same node, failing if no endpoint exists on the node: - `["kubernetes.io/hostname"]`. -* Preferentially to endpoints on the same node, falling back to endpoints in the - same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname", - "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`. - This may be useful, for example, in cases where data locality is critical. -* Preferentially to the same zone, but fallback on any available endpoint if - none are available within this zone: - `["topology.kubernetes.io/zone", "*"]`. - - - -## Constraints - -* Service topology is not compatible with `externalTrafficPolicy=Local`, and - therefore a Service cannot use both of these features. It is possible to use - both features in the same cluster on different Services, just not on the same - Service. - -* Valid topology keys are currently limited to `kubernetes.io/hostname`, - `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will - be generalized to other node labels in the future. - -* Topology keys must be valid label keys and at most 16 keys may be specified. - -* The catch-all value, `"*"`, must be the last value in the topology keys, if - it is used. 
- - -{{% /capture %}} - -{{% capture whatsnext %}} - -* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology) -* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) - -{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/service.md b/content/uk/docs/concepts/services-networking/service.md deleted file mode 100644 index d6a72fcc63..0000000000 --- a/content/uk/docs/concepts/services-networking/service.md +++ /dev/null @@ -1,1197 +0,0 @@ ---- -reviewers: -- bprashanth -title: Service -feature: - title: Виявлення Сервісів і балансування навантаження - description: > - Не потрібно змінювати ваш застосунок для використання незнайомого механізму виявлення Сервісів. Kubernetes призначає Подам власні IP-адреси, а набору Подів - єдине DNS-ім'я, і балансує навантаження між ними. - -content_template: templates/concept -weight: 10 ---- - - -{{% capture overview %}} - -{{< glossary_definition term_id="service" length="short" >}} - -With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. -Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, -and can load-balance across them. - -{{% /capture %}} - -{{% capture body %}} - -## Motivation - -Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal. -They are born and when they die, they are not resurrected. -If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app, -it can create and destroy Pods dynamically. - -Each Pod gets its own IP address, however in a Deployment, the set of Pods -running in one moment in time could be different from -the set of Pods running that application a moment later. - -This leads to a problem: if some set of Pods (call them “backends”) provides -functionality to other Pods (call them “frontends”) inside your cluster, -how do the frontends find out and keep track of which IP address to connect -to, so that the frontend can use the backend part of the workload? - -Enter _Services_. - -## Service resources {#service-resource} - -In Kubernetes, a Service is an abstraction which defines a logical set of Pods -and a policy by which to access them (sometimes this pattern is called -a micro-service). The set of Pods targeted by a Service is usually determined -by a {{< glossary_tooltip text="selector" term_id="selector" >}} -(see [below](#services-without-selectors) for why you might want a Service -_without_ a selector). - -For example, consider a stateless image-processing backend which is running with -3 replicas. Those replicas are fungible—frontends do not care which backend -they use. While the actual Pods that compose the backend set may change, the -frontend clients should not need to be aware of that, nor should they need to keep -track of the set of backends themselves. - -The Service abstraction enables this decoupling. - -### Cloud-native service discovery - -If you're able to use Kubernetes APIs for service discovery in your application, -you can query the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} -for Endpoints, that get updated whenever the set of Pods in a Service changes. - -For non-native applications, Kubernetes offers ways to place a network port or load -balancer in between your application and the backend Pods. - -## Defining a Service - -A Service in Kubernetes is a REST object, similar to a Pod. 
Like all of the -REST objects, you can `POST` a Service definition to the API server to create -a new instance. - -For example, suppose you have a set of Pods that each listen on TCP port 9376 -and carry a label `app=MyApp`: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-service -spec: - selector: - app: MyApp - ports: - - protocol: TCP - port: 80 - targetPort: 9376 -``` - -This specification creates a new Service object named “my-service”, which -targets TCP port 9376 on any Pod with the `app=MyApp` label. - -Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), -which is used by the Service proxies -(see [Virtual IPs and service proxies](#virtual-ips-and-service-proxies) below). - -The controller for the Service selector continuously scans for Pods that -match its selector, and then POSTs any updates to an Endpoint object -also named “my-service”. - -{{< note >}} -A Service can map _any_ incoming `port` to a `targetPort`. By default and -for convenience, the `targetPort` is set to the same value as the `port` -field. -{{< /note >}} - -Port definitions in Pods have names, and you can reference these names in the -`targetPort` attribute of a Service. This works even if there is a mixture -of Pods in the Service using a single configured name, with the same network -protocol available via different port numbers. -This offers a lot of flexibility for deploying and evolving your Services. -For example, you can change the port numbers that Pods expose in the next -version of your backend software, without breaking clients. - -The default protocol for Services is TCP; you can also use any other -[supported protocol](#protocol-support). - -As many Services need to expose more than one port, Kubernetes supports multiple -port definitions on a Service object. -Each port definition can have the same `protocol`, or a different one. - -### Services without selectors - -Services most commonly abstract access to Kubernetes Pods, but they can also -abstract other kinds of backends. -For example: - - * You want to have an external database cluster in production, but in your - test environment you use your own databases. - * You want to point your Service to a Service in a different - {{< glossary_tooltip term_id="namespace" >}} or on another cluster. - * You are migrating a workload to Kubernetes. Whilst evaluating the approach, - you run only a proportion of your backends in Kubernetes. - -In any of these scenarios you can define a Service _without_ a Pod selector. -For example: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-service -spec: - ports: - - protocol: TCP - port: 80 - targetPort: 9376 -``` - -Because this Service has no selector, the corresponding Endpoint object is *not* -created automatically. You can manually map the Service to the network address and port -where it's running, by adding an Endpoint object manually: - -```yaml -apiVersion: v1 -kind: Endpoints -metadata: - name: my-service -subsets: - - addresses: - - ip: 192.0.2.42 - ports: - - port: 9376 -``` - -{{< note >}} -The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or -link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). - -Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, -because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs -as a destination. -{{< /note >}} - -Accessing a Service without a selector works the same as if it had a selector. 
-In the example above, traffic is routed to the single endpoint defined in -the YAML: `192.0.2.42:9376` (TCP). - -An ExternalName Service is a special case of Service that does not have -selectors and uses DNS names instead. For more information, see the -[ExternalName](#externalname) section later in this document. - -### EndpointSlices -{{< feature-state for_k8s_version="v1.17" state="beta" >}} - -EndpointSlices are an API resource that can provide a more scalable alternative -to Endpoints. Although conceptually quite similar to Endpoints, EndpointSlices -allow for distributing network endpoints across multiple resources. By default, -an EndpointSlice is considered "full" once it reaches 100 endpoints, at which -point additional EndpointSlices will be created to store any additional -endpoints. - -EndpointSlices provide additional attributes and functionality which is -described in detail in [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/). - -## Virtual IPs and service proxies - -Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is -responsible for implementing a form of virtual IP for `Services` of type other -than [`ExternalName`](#externalname). - -### Why not use round-robin DNS? - -A question that pops up every now and then is why Kubernetes relies on -proxying to forward inbound traffic to backends. What about other -approaches? For example, would it be possible to configure DNS records that -have multiple A values (or AAAA for IPv6), and rely on round-robin name -resolution? - -There are a few reasons for using proxying for Services: - - * There is a long history of DNS implementations not respecting record TTLs, - and caching the results of name lookups after they should have expired. - * Some apps do DNS lookups only once and cache the results indefinitely. - * Even if apps and libraries did proper re-resolution, the low or zero TTLs - on the DNS records could impose a high load on DNS that then becomes - difficult to manage. - -### User space proxy mode {#proxy-mode-userspace} - -In this mode, kube-proxy watches the Kubernetes master for the addition and -removal of Service and Endpoint objects. For each Service it opens a -port (randomly chosen) on the local node. Any connections to this "proxy port" -are -proxied to one of the Service's backend Pods (as reported via -Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into -account when deciding which backend Pod to use. - -Lastly, the user-space proxy installs iptables rules which capture traffic to -the Service's `clusterIP` (which is virtual) and `port`. The rules -redirect that traffic to the proxy port which proxies the backend Pod. - -By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm. - -![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg) - -### `iptables` proxy mode {#proxy-mode-iptables} - -In this mode, kube-proxy watches the Kubernetes control plane for the addition and -removal of Service and Endpoint objects. For each Service, it installs -iptables rules, which capture traffic to the Service's `clusterIP` and `port`, -and redirect that traffic to one of the Service's -backend sets. For each Endpoint object, it installs iptables rules which -select a backend Pod. - -By default, kube-proxy in iptables mode chooses a backend at random. 
- -Using iptables to handle traffic has a lower system overhead, because traffic -is handled by Linux netfilter without the need to switch between userspace and the -kernel space. This approach is also likely to be more reliable. - -If kube-proxy is running in iptables mode and the first Pod that's selected -does not respond, the connection fails. This is different from userspace -mode: in that scenario, kube-proxy would detect that the connection to the first -Pod had failed and would automatically retry with a different backend Pod. - -You can use Pod [readiness probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) -to verify that backend Pods are working OK, so that kube-proxy in iptables mode -only sees backends that test out as healthy. Doing this means you avoid -having traffic sent via kube-proxy to a Pod that's known to have failed. - -![Services overview diagram for iptables proxy](/images/docs/services-iptables-overview.svg) - -### IPVS proxy mode {#proxy-mode-ipvs} - -{{< feature-state for_k8s_version="v1.11" state="stable" >}} - -In `ipvs` mode, kube-proxy watches Kubernetes Services and Endpoints, -calls `netlink` interface to create IPVS rules accordingly and synchronizes -IPVS rules with Kubernetes Services and Endpoints periodically. -This control loop ensures that IPVS status matches the desired -state. -When accessing a Service, IPVS directs traffic to one of the backend Pods. - -The IPVS proxy mode is based on netfilter hook function that is similar to -iptables mode, but uses a hash table as the underlying data structure and works -in the kernel space. -That means kube-proxy in IPVS mode redirects traffic with lower latency than -kube-proxy in iptables mode, with much better performance when synchronising -proxy rules. Compared to the other proxy modes, IPVS mode also supports a -higher throughput of network traffic. - -IPVS provides more options for balancing traffic to backend Pods; -these are: - -- `rr`: round-robin -- `lc`: least connection (smallest number of open connections) -- `dh`: destination hashing -- `sh`: source hashing -- `sed`: shortest expected delay -- `nq`: never queue - -{{< note >}} -To run kube-proxy in IPVS mode, you must make the IPVS Linux available on -the node before you starting kube-proxy. - -When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS -kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy -falls back to running in iptables proxy mode. -{{< /note >}} - -![Services overview diagram for IPVS proxy](/images/docs/services-ipvs-overview.svg) - -In these proxy models, the traffic bound for the Service’s IP:Port is -proxied to an appropriate backend without the clients knowing anything -about Kubernetes or Services or Pods. - -If you want to make sure that connections from a particular client -are passed to the same Pod each time, you can select the session affinity based -on the client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP" -(the default is "None"). -You can also set the maximum session sticky time by setting -`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately. -(the default value is 10800, which works out to be 3 hours). - -## Multi-Port Services - -For some Services, you need to expose more than one port. -Kubernetes lets you configure multiple port definitions on a Service object. -When using multiple ports for a Service, you must give all of your ports names -so that these are unambiguous. 
-For example: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-service -spec: - selector: - app: MyApp - ports: - - name: http - protocol: TCP - port: 80 - targetPort: 9376 - - name: https - protocol: TCP - port: 443 - targetPort: 9377 -``` - -{{< note >}} -As with Kubernetes {{< glossary_tooltip term_id="name" text="names">}} in general, names for ports -must only contain lowercase alphanumeric characters and `-`. Port names must -also start and end with an alphanumeric character. - -For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not. -{{< /note >}} - -## Choosing your own IP address - -You can specify your own cluster IP address as part of a `Service` creation -request. To do this, set the `.spec.clusterIP` field. For example, if you -already have an existing DNS entry that you wish to reuse, or legacy systems -that are configured for a specific IP address and difficult to re-configure. - -The IP address that you choose must be a valid IPv4 or IPv6 address from within the -`service-cluster-ip-range` CIDR range that is configured for the API server. -If you try to create a Service with an invalid clusterIP address value, the API -server will return a 422 HTTP status code to indicate that there's a problem. - -## Discovering services - -Kubernetes supports 2 primary modes of finding a Service - environment -variables and DNS. - -### Environment variables - -When a Pod is run on a Node, the kubelet adds a set of environment variables -for each active Service. It supports both [Docker links -compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see -[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49)) -and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, -where the Service name is upper-cased and dashes are converted to underscores. - -For example, the Service `"redis-master"` which exposes TCP port 6379 and has been -allocated cluster IP address 10.0.0.11, produces the following environment -variables: - -```shell -REDIS_MASTER_SERVICE_HOST=10.0.0.11 -REDIS_MASTER_SERVICE_PORT=6379 -REDIS_MASTER_PORT=tcp://10.0.0.11:6379 -REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379 -REDIS_MASTER_PORT_6379_TCP_PROTO=tcp -REDIS_MASTER_PORT_6379_TCP_PORT=6379 -REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11 -``` - -{{< note >}} -When you have a Pod that needs to access a Service, and you are using -the environment variable method to publish the port and cluster IP to the client -Pods, you must create the Service *before* the client Pods come into existence. -Otherwise, those client Pods won't have their environment variables populated. - -If you only use DNS to discover the cluster IP for a Service, you don't need to -worry about this ordering issue. -{{< /note >}} - -### DNS - -You can (and almost always should) set up a DNS service for your Kubernetes -cluster using an [add-on](/docs/concepts/cluster-administration/addons/). - -A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new -Services and creates a set of DNS records for each one. If DNS has been enabled -throughout your cluster then all Pods should automatically be able to resolve -Services by their DNS name. - -For example, if you have a Service called `"my-service"` in a Kubernetes -Namespace `"my-ns"`, the control plane and the DNS Service acting together -create a DNS record for `"my-service.my-ns"`. 
Pods in the `"my-ns"` Namespace -should be able to find it by simply doing a name lookup for `my-service` -(`"my-service.my-ns"` would also work). - -Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names -will resolve to the cluster IP assigned for the Service. - -Kubernetes also supports DNS SRV (Service) records for named ports. If the -`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to -`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover -the port number for `"http"`, as well as the IP address. - -The Kubernetes DNS server is the only way to access `ExternalName` Services. -You can find more information about `ExternalName` resolution in -[DNS Pods and Services](/docs/concepts/services-networking/dns-pod-service/). - -## Headless Services - -Sometimes you don't need load-balancing and a single Service IP. In -this case, you can create what are termed “headless” Services, by explicitly -specifying `"None"` for the cluster IP (`.spec.clusterIP`). - -You can use a headless Service to interface with other service discovery mechanisms, -without being tied to Kubernetes' implementation. - -For headless `Services`, a cluster IP is not allocated, kube-proxy does not handle -these Services, and there is no load balancing or proxying done by the platform -for them. How DNS is automatically configured depends on whether the Service has -selectors defined: - -### With selectors - -For headless Services that define selectors, the endpoints controller creates -`Endpoints` records in the API, and modifies the DNS configuration to return -records (addresses) that point directly to the `Pods` backing the `Service`. - -### Without selectors - -For headless Services that do not define selectors, the endpoints controller does -not create `Endpoints` records. However, the DNS system looks for and configures -either: - - * CNAME records for [`ExternalName`](#externalname)-type Services. - * A records for any `Endpoints` that share a name with the Service, for all - other types. - -## Publishing Services (ServiceTypes) {#publishing-services-service-types} - -For some parts of your application (for example, frontends) you may want to expose a -Service onto an external IP address, that's outside of your cluster. - -Kubernetes `ServiceTypes` allow you to specify what kind of Service you want. -The default is `ClusterIP`. - -`Type` values and their behaviors are: - - * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value - makes the Service only reachable from within the cluster. This is the - default `ServiceType`. - * [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port - (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service - routes, is automatically created. You'll be able to contact the `NodePort` Service, - from outside the cluster, - by requesting `:`. - * [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud - provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external - load balancer routes, are automatically created. - * [`ExternalName`](#externalname): Maps the Service to the contents of the - `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record - - with its value. No proxying of any kind is set up. - {{< note >}} - You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type. 
- {{< /note >}} - -You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address. - -### Type NodePort {#nodeport} - -If you set the `type` field to `NodePort`, the Kubernetes control plane -allocates a port from a range specified by `--service-node-port-range` flag (default: 30000-32767). -Each node proxies that port (the same port number on every Node) into your Service. -Your Service reports the allocated port in its `.spec.ports[*].nodePort` field. - - -If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10. -This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node. - -For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases). - -If you want a specific port number, you can specify a value in the `nodePort` -field. The control plane will either allocate you that port or report that -the API transaction failed. -This means that you need to take care of possible port collisions yourself. -You also have to use a valid port number, one that's inside the range configured -for NodePort use. - -Using a NodePort gives you the freedom to set up your own load balancing solution, -to configure environments that are not fully supported by Kubernetes, or even -to just expose one or more nodes' IPs directly. - -Note that this Service is visible as `:spec.ports[*].nodePort` -and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) - -### Type LoadBalancer {#loadbalancer} - -On cloud providers which support external load balancers, setting the `type` -field to `LoadBalancer` provisions a load balancer for your Service. -The actual creation of the load balancer happens asynchronously, and -information about the provisioned balancer is published in the Service's -`.status.loadBalancer` field. -For example: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-service -spec: - selector: - app: MyApp - ports: - - protocol: TCP - port: 80 - targetPort: 9376 - clusterIP: 10.0.171.239 - type: LoadBalancer -status: - loadBalancer: - ingress: - - ip: 192.0.2.127 -``` - -Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. - -For LoadBalancer type of Services, when there is more than one port defined, all -ports must have the same protocol and the protocol must be one of `TCP`, `UDP`, -and `SCTP`. - -Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created -with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified, -the loadBalancer is set up with an ephemeral IP address. 
If you specify a `loadBalancerIP` -but your cloud provider does not support the feature, the `loadbalancerIP` field that you -set is ignored. - -{{< note >}} -If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the -`LoadBalancer` Service type. -{{< /note >}} - -{{< note >}} - -On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need -to create a static type public IP address resource. This public IP address resource should -be in the same resource group of the other automatically created resources of the cluster. -For example, `MC_myResourceGroup_myAKSCluster_eastus`. - -Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357). - -{{< /note >}} - -#### Internal load balancer -In a mixed environment it is sometimes necessary to route traffic from Services inside the same -(virtual) network address block. - -In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints. - -You can achieve this by adding one the following annotations to a Service. -The annotation to add depends on the cloud Service provider you're using. - -{{< tabs name="service_tabs" >}} -{{% tab name="Default" %}} -Select one of the tabs. -{{% /tab %}} -{{% tab name="GCP" %}} -```yaml -[...] -metadata: - name: my-service - annotations: - cloud.google.com/load-balancer-type: "Internal" -[...] -``` -{{% /tab %}} -{{% tab name="AWS" %}} -```yaml -[...] -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-internal: "true" -[...] -``` -{{% /tab %}} -{{% tab name="Azure" %}} -```yaml -[...] -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" -[...] -``` -{{% /tab %}} -{{% tab name="OpenStack" %}} -```yaml -[...] -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/openstack-internal-load-balancer: "true" -[...] -``` -{{% /tab %}} -{{% tab name="Baidu Cloud" %}} -```yaml -[...] -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" -[...] -``` -{{% /tab %}} -{{% tab name="Tencent Cloud" %}} -```yaml -[...] -metadata: - annotations: - service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx -[...] -``` -{{% /tab %}} -{{< /tabs >}} - - -#### TLS support on AWS {#ssl-support-on-aws} - -For partial TLS / SSL support on clusters running on AWS, you can add three -annotations to a `LoadBalancer` service: - -```yaml -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 -``` - -The first specifies the ARN of the certificate to use. It can be either a -certificate from a third party issuer that was uploaded to IAM or one created -within AWS Certificate Manager. 
- -```yaml -metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp) -``` - -The second annotation specifies which protocol a Pod speaks. For HTTPS and -SSL, the ELB expects the Pod to authenticate itself over the encrypted -connection, using a certificate. - -HTTP and HTTPS selects layer 7 proxying: the ELB terminates -the connection with the user, parses headers, and injects the `X-Forwarded-For` -header with the user's IP address (Pods only see the IP address of the -ELB at the other end of its connection) when forwarding requests. - -TCP and SSL selects layer 4 proxying: the ELB forwards traffic without -modifying the headers. - -In a mixed-use environment where some ports are secured and others are left unencrypted, -you can use the following annotations: - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http - service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443" -``` - -In the above example, if the Service contained three ports, `80`, `443`, and -`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just -be proxied HTTP. - -From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. -To see which policies are available for use, you can use the `aws` command line tool: - -```bash -aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName' -``` - -You can then specify any one of those policies using the -"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`" -annotation; for example: - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01" -``` - -#### PROXY protocol support on AWS - -To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) -support for clusters running on AWS, you can use the following service -annotation: - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" -``` - -Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB -and cannot be configured otherwise. - -#### ELB Access Logs on AWS - -There are several annotations to manage access logs for ELB Services on AWS. - -The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled` -controls whether access logs are enabled. - -The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` -controls the interval in minutes for publishing the access logs. You can specify -an interval of either 5 or 60 minutes. - -The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` -controls the name of the Amazon S3 bucket where load balancer access logs are -stored. - -The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` -specifies the logical hierarchy you created for your Amazon S3 bucket. - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" - # Specifies whether access logs are enabled for the load balancer - service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60" - # The interval for publishing the access logs. 
You can specify an interval of either 5 or 60 (minutes). - service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket" - # The name of the Amazon S3 bucket where the access logs are stored - service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod" - # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod` -``` - -#### Connection Draining on AWS - -Connection draining for Classic ELBs can be managed with the annotation -`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set -to the value of `"true"`. The annotation -`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can -also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. - - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true" - service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60" -``` - -#### Other ELB annotations - -There are other annotations to manage Classic Elastic Load Balancers that are described below. - -```yaml - metadata: - name: my-service - annotations: - service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60" - # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer - - service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" - # Specifies whether cross-zone load balancing is enabled for the load balancer - - service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops" - # A comma-separated list of key-value pairs which will be recorded as - # additional tags in the ELB. - - service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "" - # The number of successive successful health checks required for a backend to - # be considered healthy for traffic. Defaults to 2, must be between 2 and 10 - - service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" - # The number of unsuccessful health checks required for a backend to be - # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10 - - service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20" - # The approximate interval, in seconds, between health checks of an - # individual instance. Defaults to 10, must be between 5 and 300 - service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5" - # The amount of time, in seconds, during which no response means a failed - # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval - # value. Defaults to 5, must be between 2 and 60 - - service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" - # A list of additional security groups to be added to the ELB -``` - -#### Network Load Balancer support on AWS {#aws-nlb-support} - -{{< feature-state for_k8s_version="v1.15" state="beta" >}} - -To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`. 
-
-```yaml
-    metadata:
-      name: my-service
-      annotations:
-        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
-```
-
-{{< note >}}
-NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
-on Elastic Load Balancing for a list of supported instance types.
-{{< /note >}}
-
-Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the
-client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy`
-is set to `Cluster`, the client's IP address is not propagated to the end
-Pods.
-
-By setting `.spec.externalTrafficPolicy` to `Local`, the client IP address is
-propagated to the end Pods, but this could result in uneven distribution of
-traffic. Nodes without any Pods for a particular LoadBalancer Service will fail
-the NLB Target Group's health check on the auto-assigned
-`.spec.healthCheckNodePort` and not receive any traffic.
-
-In order to achieve even traffic, either use a DaemonSet or specify a
-[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
-so that Pods are not co-located on the same node.
-
-You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
-annotation.
-
-In order for client traffic to reach instances behind an NLB, the Node security
-groups are modified with the following IP rules:
-
-| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
-|------|----------|---------|------------|---------------------|
-| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
-| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
-| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
-
-In order to limit which client IPs can access the Network Load Balancer,
-specify `loadBalancerSourceRanges`.
-
-```yaml
-spec:
-  loadBalancerSourceRanges:
-    - "143.231.0.0/16"
-```
-
-{{< note >}}
-If `.spec.loadBalancerSourceRanges` is not set, Kubernetes
-allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have
-public IP addresses, be aware that non-NLB traffic can also reach all instances
-in those modified security groups.
-
-{{< /note >}}
-
-#### Other CLB annotations on Tencent Kubernetes Engine (TKE)
-
-There are other annotations for managing Cloud Load Balancers on TKE as shown below.
-
-```yaml
-    metadata:
-      name: my-service
-      annotations:
-        # Bind Loadbalancers with specified nodes
-        service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2)
-
-        # ID of an existing load balancer
-        service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
-
-        # Custom parameters for the load balancer (LB); modification of the LB type is not yet supported
-        service.kubernetes.io/service.extensiveParameters: ""
-
-        # Custom parameters for the LB listener
-        service.kubernetes.io/service.listenerParameters: ""
-
-        # Specifies the type of load balancer;
-        # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
-        service.kubernetes.io/loadbalance-type: xxxxx
-
-        # Specifies the public network bandwidth billing method;
-        # valid values: TRAFFIC_POSTPAID_BY_HOUR (bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth).
-        service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
-
-        # Specifies the bandwidth value (value range: [1,2000] Mbps).
-        service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
-
-        # When this annotation is set, the load balancers will only register nodes
-        # with a Pod running on them; otherwise all nodes will be registered.
-        service.kubernetes.io/local-svc-only-bind-node-with-pod: true
-```
-
-### Type ExternalName {#externalname}
-
-Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
-`my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter.
-
-This Service definition, for example, maps
-the `my-service` Service in the `prod` namespace to `my.database.example.com`:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-service
-  namespace: prod
-spec:
-  type: ExternalName
-  externalName: my.database.example.com
-```
-{{< note >}}
-ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
-is intended to specify a canonical DNS name. To hardcode an IP address, consider using
-[headless Services](#headless-services).
-{{< /note >}}
-
-When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service
-returns a `CNAME` record with the value `my.database.example.com`. Accessing
-`my-service` works in the same way as other Services but with the crucial
-difference that redirection happens at the DNS level rather than via proxying or
-forwarding. Should you later decide to move your database into your cluster, you
-can start its Pods, add appropriate selectors or endpoints, and change the
-Service's `type`.
-
-{{< warning >}}
-You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
-
-For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
-{{< /warning >}}
-
-{{< note >}}
-This section is indebted to the [Kubernetes Tips - Part
-1](https://akomljen.com/kubernetes-tips-part-1/) blog post from [Alen Komljen](https://akomljen.com/).
-{{< /note >}}
-
-### External IPs
-
-If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those
-`externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port,
-will be routed to one of the Service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility
-of the cluster administrator.
-
-In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`.
-In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`).
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-service
-spec:
-  selector:
-    app: MyApp
-  ports:
-    - name: http
-      protocol: TCP
-      port: 80
-      targetPort: 9376
-  externalIPs:
-    - 80.11.12.10
-```
-
-## Shortcomings
-
-Using the userspace proxy for VIPs works at small to medium scale, but will
-not scale to very large clusters with thousands of Services. The [original
-design proposal for portals](http://issue.k8s.io/1107) has more details on
-this.
-
-Using the userspace proxy obscures the source IP address of a packet accessing
-a Service.
-This makes some kinds of network filtering (firewalling) impossible. The iptables
-proxy mode does not
-obscure in-cluster source IPs, but it does still impact clients coming through
-a load balancer or node-port.
-
-The `Type` field is designed as nested functionality: each level adds to the
-previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
-not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
-but the current API requires it.
-
-## Virtual IP implementation {#the-gory-details-of-virtual-ips}
-
-The previous information should be sufficient for many people who just want to
-use Services. However, there is a lot going on behind the scenes that may be
-worth understanding.
-
-### Avoiding collisions
-
-One of the primary philosophies of Kubernetes is that you should not be
-exposed to situations that could cause your actions to fail through no fault
-of your own. For the design of the Service resource, this means not making
-you choose your own port number if that choice might collide with
-someone else's choice. That is an isolation failure.
-
-In order to allow you to choose a port number for your Services, we must
-ensure that no two Services can collide. Kubernetes does that by allocating each
-Service its own IP address.
-
-To ensure each Service receives a unique IP, an internal allocator atomically
-updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}}
-prior to creating each Service. The map object must exist in the registry for
-Services to get IP address assignments; otherwise, creations will
-fail with a message indicating an IP address could not be allocated.
-
-In the control plane, a background controller is responsible for creating that
-map (needed to support migrating from older versions of Kubernetes that used
-in-memory locking). Kubernetes also uses controllers to check for invalid
-assignments (for example, due to administrator intervention) and for cleaning up allocated
-IP addresses that are no longer used by any Services.
-
-### Service IP addresses {#ips-and-vips}
-
-Unlike Pod IP addresses, which actually route to a fixed destination,
-Service IPs are not actually answered by a single host. Instead, kube-proxy
-uses iptables (packet processing logic in Linux) to define _virtual_ IP addresses
-which are transparently redirected as needed. When clients connect to the
-VIP, their traffic is automatically transported to an appropriate endpoint.
-The environment variables and DNS for Services are actually populated in
-terms of the Service's virtual IP address (and port).
-
-kube-proxy supports three proxy modes (userspace, iptables, and IPVS), which
-each operate slightly differently.
-
-#### Userspace
-
-As an example, consider the image processing application described above.
-When the backend Service is created, the Kubernetes master assigns a virtual
-IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
-Service is observed by all of the kube-proxy instances in the cluster.
-When a proxy sees a new Service, it opens a new random port, establishes an
-iptables redirect from the virtual IP address to this new port, and starts accepting
-connections on it.
-
-When a client connects to the Service's virtual IP address, the iptables
-rule kicks in, and redirects the packets to the proxy's own port.
-The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
-
-This means that Service owners can choose any port they want without risk of
-collision. Clients can simply connect to an IP and port, without being aware
-of which Pods they are actually accessing.
-
-#### iptables
-
-Again, consider the image processing application described above.
-When the backend Service is created, the Kubernetes control plane assigns a virtual
-IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
-Service is observed by all of the kube-proxy instances in the cluster.
-When a proxy sees a new Service, it installs a series of iptables rules which
-redirect from the virtual IP address to per-Service rules. The per-Service
-rules link to per-Endpoint rules which redirect traffic (using destination NAT)
-to the backends.
-
-When a client connects to the Service's virtual IP address, the iptables rule kicks in.
-A backend is chosen (either based on session affinity or randomly) and packets are
-redirected to the backend. Unlike the userspace proxy, packets are never
-copied to userspace, the kube-proxy does not have to be running for the virtual
-IP address to work, and Nodes see traffic arriving from the unaltered client IP
-address.
-
-This same basic flow executes when traffic comes in through a node-port or
-through a load-balancer, though in those cases the client IP does get altered.
-
-#### IPVS
-
-iptables operations slow down dramatically in a large-scale cluster, for example one with 10,000 Services.
-IPVS is designed for load balancing and is based on in-kernel hash tables, so an IPVS-based kube-proxy delivers consistent performance even with a large number of Services. An IPVS-based kube-proxy also supports more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
-
-## API Object
-
-Service is a top-level resource in the Kubernetes REST API. You can find more details
-about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
-
-## Supported protocols {#protocol-support}
-
-### TCP
-
-You can use TCP for any kind of Service, and it's the default network protocol.
-
-### UDP
-
-You can use UDP for most Services. For type=LoadBalancer Services, UDP support
-depends on the cloud provider offering this facility.
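-
-For example, a minimal Service that exposes a UDP port could look like the following sketch
-(the name, selector, and port are placeholders):
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-udp-service
-spec:
-  selector:
-    app: MyApp
-  ports:
-    - protocol: UDP
-      port: 53
-      targetPort: 53
-```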
- -### HTTP - -If your cloud provider supports it, you can use a Service in LoadBalancer mode -to set up external HTTP / HTTPS reverse proxying, forwarded to the Endpoints -of the Service. - -{{< note >}} -You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service -to expose HTTP / HTTPS Services. -{{< /note >}} - -### PROXY protocol - -If your cloud provider supports it (eg, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)), -you can use a Service in LoadBalancer mode to configure a load balancer outside -of Kubernetes itself, that will forward connections prefixed with -[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). - -The load balancer will send an initial series of octets describing the -incoming connection, similar to this example - -``` -PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n -``` -followed by the data from the client. - -### SCTP - -{{< feature-state for_k8s_version="v1.12" state="alpha" >}} - -Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `--feature-gates=SCTPSupport=true,…`. - -When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoint, NetworkPolicy or Pod to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. - -#### Warnings {#caveat-sctp-overview} - -##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} - -{{< warning >}} -The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. - -NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. -{{< /warning >}} - -##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type} - -{{< warning >}} -You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP. -{{< /warning >}} - -##### Windows {#caveat-sctp-windows-os} - -{{< warning >}} -SCTP is not supported on Windows based nodes. -{{< /warning >}} - -##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} - -{{< warning >}} -The kube-proxy does not support the management of SCTP associations when it is in userspace mode. -{{< /warning >}} - -## Future work - -In the future, the proxy policy for Services can become more nuanced than -simple round-robin balancing, for example master-elected or sharded. We also -envision that some Services will have "real" load balancers, in which case the -virtual IP address will simply transport the packets there. - -The Kubernetes project intends to improve support for L7 (HTTP) Services. - -The Kubernetes project intends to have more flexible ingress modes for Services -that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more. 
- - -{{% /capture %}} - -{{% capture whatsnext %}} - -* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -* Read about [Ingress](/docs/concepts/services-networking/ingress/) -* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) - -{{% /capture %}} diff --git a/content/uk/docs/concepts/storage/persistent-volumes.md b/content/uk/docs/concepts/storage/persistent-volumes.md deleted file mode 100644 index e348abb931..0000000000 --- a/content/uk/docs/concepts/storage/persistent-volumes.md +++ /dev/null @@ -1,736 +0,0 @@ ---- -reviewers: -- jsafrane -- saad-ali -- thockin -- msau42 -title: Persistent Volumes -feature: - title: Оркестрація сховищем - description: > - Автоматично монтує систему збереження даних на ваш вибір: з локального носія даних, із хмарного сховища від провайдера публічних хмарних сервісів, як-от GCP чи AWS, або з мережевого сховища, такого як: NFS, iSCSI, Gluster, Ceph, Cinder чи Flocker. - -content_template: templates/concept -weight: 20 ---- - -{{% capture overview %}} - -This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested. - -{{% /capture %}} - - -{{% capture body %}} - -## Introduction - -Managing storage is a distinct problem from managing compute instances. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`. - -A `PersistentVolume` (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. - -A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only). - -While `PersistentVolumeClaims` allow a user to consume abstract storage -resources, it is common that users need `PersistentVolumes` with varying -properties, such as performance, for different problems. Cluster administrators -need to be able to offer a variety of `PersistentVolumes` that differ in more -ways than just size and access modes, without exposing users to the details of -how those volumes are implemented. For these needs, there is the `StorageClass` -resource. - -See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). - - -## Lifecycle of a volume and claim - -PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle: - -### Provisioning - -There are two ways PVs may be provisioned: statically or dynamically. - -#### Static -A cluster administrator creates a number of PVs. 
They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
-
-#### Dynamic
-When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`,
-the cluster may try to dynamically provision a volume specifically for the PVC.
-This provisioning is based on `StorageClasses`: the PVC must request a
-[storage class](/docs/concepts/storage/storage-classes/) and
-the administrator must have created and configured that class for dynamic
-provisioning to occur. Claims that request the class `""` effectively disable
-dynamic provisioning for themselves.
-
-To enable dynamic storage provisioning based on storage class, the cluster administrator
-needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
-on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
-among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
-the API server component. For more information on API server command-line flags,
-check the [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
-
-### Binding
-
-A user creates, or in the case of dynamic provisioning, has already created, a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, `PersistentVolumeClaim` binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping.
-
-Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
-
-### Using
-
-Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod.
-
-Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
-
-### Storage Object in Use Protection
-The purpose of the Storage Object in Use Protection feature is to ensure that Persistent Volume Claims (PVCs) in active use by a Pod and Persistent Volumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.
-
-{{< note >}}
-A PVC is in active use by a Pod when a Pod object exists that is using the PVC.
-{{< /note >}}
-
-If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
- -You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`: - -```shell -kubectl describe pvc hostpath -Name: hostpath -Namespace: default -StorageClass: example-hostpath -Status: Terminating -Volume: -Labels: -Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath - volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath -Finalizers: [kubernetes.io/pvc-protection] -... -``` - -You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too: - -```shell -kubectl describe pv task-pv-volume -Name: task-pv-volume -Labels: type=local -Annotations: -Finalizers: [kubernetes.io/pv-protection] -StorageClass: standard -Status: Terminating -Claim: -Reclaim Policy: Delete -Access Modes: RWO -Capacity: 1Gi -Message: -Source: - Type: HostPath (bare host directory volume) - Path: /tmp/data - HostPathType: -Events: -``` - -### Reclaiming - -When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted. - -#### Retain - -The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps. - -1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted. -1. Manually clean up the data on the associated storage asset accordingly. -1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition. - -#### Delete - -For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations; otherwise, the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/). - -#### Recycle - -{{< warning >}} -The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. -{{< /warning >}} - -If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim. - -However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). 
The custom recycler Pod template must contain a `volumes` specification, as shown in the example below: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: pv-recycler - namespace: default -spec: - restartPolicy: Never - volumes: - - name: vol - hostPath: - path: /any/path/it/will/be/replaced - containers: - - name: pv-recycler - image: "k8s.gcr.io/busybox" - command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] - volumeMounts: - - name: vol - mountPath: /scrub -``` - -However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled. - -### Expanding Persistent Volumes Claims - -{{< feature-state for_k8s_version="v1.11" state="beta" >}} - -Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand -the following types of volumes: - -* gcePersistentDisk -* awsElasticBlockStore -* Cinder -* glusterfs -* rbd -* Azure File -* Azure Disk -* Portworx -* FlexVolumes -* CSI - -You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true. - -``` yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: gluster-vol-default -provisioner: kubernetes.io/glusterfs -parameters: - resturl: "http://192.168.10.100:8080" - restuser: "" - secretNamespace: "" - secretName: "" -allowVolumeExpansion: true -``` - -To request a larger volume for a PVC, edit the PVC object and specify a larger -size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A -new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized. - -#### CSI Volume expansion - -{{< feature-state for_k8s_version="v1.16" state="beta" >}} - -Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information. - - -#### Resizing a volume containing a file system - -You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4. - -When a volume contains a file system, the file system is only resized when a new Pod is using -the `PersistentVolumeClaim` in ReadWrite mode. File system expansion is either done when a Pod is starting up -or when a Pod is running and the underlying file system supports online expansion. - -FlexVolumes allow resize if the driver is set with the `RequiresFSResize` capability to `true`. -The FlexVolume can be resized on Pod restart. - -#### Resizing an in-use PersistentVolumeClaim - -{{< feature-state for_k8s_version="v1.15" state="beta" >}} - -{{< note >}} -Expanding in-use PVCs is available as beta since Kubernetes 1.15, and as alpha since 1.11. The `ExpandInUsePersistentVolumes` feature must be enabled, which is the case automatically for many clusters for beta features. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information. -{{< /note >}} - -In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC. -Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded. -This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that -uses the PVC before the expansion can complete. 
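-
-As a sketch of what requesting a larger size looks like (the claim name `myclaim` and the new
-size are placeholders), you can edit or patch the PVC's storage request and let the volume
-plugin and, where needed, the file system expansion do the rest:
-
-```shell
-kubectl patch pvc myclaim -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'
-```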
- - -Similar to other volume types - FlexVolume volumes can also be expanded when in-use by a Pod. - -{{< note >}} -FlexVolume resize is possible only when the underlying driver supports resize. -{{< /note >}} - -{{< note >}} -Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume quota of one modification every 6 hours. -{{< /note >}} - - -## Types of Persistent Volumes - -`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins: - -* GCEPersistentDisk -* AWSElasticBlockStore -* AzureFile -* AzureDisk -* CSI -* FC (Fibre Channel) -* FlexVolume -* Flocker -* NFS -* iSCSI -* RBD (Ceph Block Device) -* CephFS -* Cinder (OpenStack block storage) -* Glusterfs -* VsphereVolume -* Quobyte Volumes -* HostPath (Single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster) -* Portworx Volumes -* ScaleIO Volumes -* StorageOS - -## Persistent Volumes - -Each PV contains a spec and status, which is the specification and status of the volume. - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: pv0003 -spec: - capacity: - storage: 5Gi - volumeMode: Filesystem - accessModes: - - ReadWriteOnce - persistentVolumeReclaimPolicy: Recycle - storageClassName: slow - mountOptions: - - hard - - nfsvers=4.1 - nfs: - path: /tmp - server: 172.17.0.2 -``` - -### Capacity - -Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`. - -Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc. - -### Volume Mode - -{{< feature-state for_k8s_version="v1.13" state="beta" >}} - -Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume. -Now, you can set the value of `volumeMode` to `block` to use a raw block device, or `filesystem` -to use a filesystem. `filesystem` is the default if the value is omitted. This is an optional API -parameter. - -### Access Modes - -A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. - -The access modes are: - -* ReadWriteOnce -- the volume can be mounted as read-write by a single node -* ReadOnlyMany -- the volume can be mounted read-only by many nodes -* ReadWriteMany -- the volume can be mounted as read-write by many nodes - -In the CLI, the access modes are abbreviated to: - -* RWO - ReadWriteOnce -* ROX - ReadOnlyMany -* RWX - ReadWriteMany - -> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time. 
- - -| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany| -| :--- | :---: | :---: | :---: | -| AWSElasticBlockStore | ✓ | - | - | -| AzureFile | ✓ | ✓ | ✓ | -| AzureDisk | ✓ | - | - | -| CephFS | ✓ | ✓ | ✓ | -| Cinder | ✓ | - | - | -| CSI | depends on the driver | depends on the driver | depends on the driver | -| FC | ✓ | ✓ | - | -| FlexVolume | ✓ | ✓ | depends on the driver | -| Flocker | ✓ | - | - | -| GCEPersistentDisk | ✓ | ✓ | - | -| Glusterfs | ✓ | ✓ | ✓ | -| HostPath | ✓ | - | - | -| iSCSI | ✓ | ✓ | - | -| Quobyte | ✓ | ✓ | ✓ | -| NFS | ✓ | ✓ | ✓ | -| RBD | ✓ | ✓ | - | -| VsphereVolume | ✓ | - | - (works when Pods are collocated) | -| PortworxVolume | ✓ | - | ✓ | -| ScaleIO | ✓ | ✓ | - | -| StorageOS | ✓ | - | - | - -### Class - -A PV can have a class, which is specified by setting the -`storageClassName` attribute to the name of a -[StorageClass](/docs/concepts/storage/storage-classes/). -A PV of a particular class can only be bound to PVCs requesting -that class. A PV with no `storageClassName` has no class and can only be bound -to PVCs that request no particular class. - -In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead -of the `storageClassName` attribute. This annotation is still working; however, -it will become fully deprecated in a future Kubernetes release. - -### Reclaim Policy - -Current reclaim policies are: - -* Retain -- manual reclamation -* Recycle -- basic scrub (`rm -rf /thevolume/*`) -* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted - -Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion. - -### Mount Options - -A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node. - -{{< note >}} -Not all Persistent Volume types support mount options. -{{< /note >}} - -The following volume types support mount options: - -* AWSElasticBlockStore -* AzureDisk -* AzureFile -* CephFS -* Cinder (OpenStack block storage) -* GCEPersistentDisk -* Glusterfs -* NFS -* Quobyte Volumes -* RBD (Ceph Block Device) -* StorageOS -* VsphereVolume -* iSCSI - -Mount options are not validated, so mount will simply fail if one is invalid. - -In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead -of the `mountOptions` attribute. This annotation is still working; however, -it will become fully deprecated in a future Kubernetes release. - -### Node Affinity - -{{< note >}} -For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes. -{{< /note >}} - -A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity. 
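-
-As an illustrative sketch (the volume name, capacity, path, storage class, and node name are
-all placeholders), a `local` PersistentVolume pins itself to the node that holds the disk
-through `nodeAffinity`:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: example-local-pv
-spec:
-  capacity:
-    storage: 100Gi
-  accessModes:
-    - ReadWriteOnce
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: local-storage
-  local:
-    path: /mnt/disks/ssd1
-  nodeAffinity:
-    required:
-      nodeSelectorTerms:
-        - matchExpressions:
-            - key: kubernetes.io/hostname
-              operator: In
-              values:
-                - example-node
-```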
- -### Phase - -A volume will be in one of the following phases: - -* Available -- a free resource that is not yet bound to a claim -* Bound -- the volume is bound to a claim -* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster -* Failed -- the volume has failed its automatic reclamation - -The CLI will show the name of the PVC bound to the PV. - -## PersistentVolumeClaims - -Each PVC contains a spec and status, which is the specification and status of the claim. - -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: myclaim -spec: - accessModes: - - ReadWriteOnce - volumeMode: Filesystem - resources: - requests: - storage: 8Gi - storageClassName: slow - selector: - matchLabels: - release: "stable" - matchExpressions: - - {key: environment, operator: In, values: [dev]} -``` - -### Access Modes - -Claims use the same conventions as volumes when requesting storage with specific access modes. - -### Volume Modes - -Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device. - -### Resources - -Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims. - -### Selector - -Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields: - -* `matchLabels` - the volume must have a label with this value -* `matchExpressions` - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist. - -All of the requirements, from both `matchLabels` and `matchExpressions`, are ANDed together – they must all be satisfied in order to match. - -### Class - -A claim can request a particular class by specifying the name of a -[StorageClass](/docs/concepts/storage/storage-classes/) -using the attribute `storageClassName`. -Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can -be bound to the PVC. - -PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set -equal to `""` is always interpreted to be requesting a PV with no class, so it -can only be bound to PVs with no class (no annotation or one set equal to -`""`). A PVC with no `storageClassName` is not quite the same and is treated differently -by the cluster, depending on whether the -[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) -is turned on. - -* If the admission plugin is turned on, the administrator may specify a - default `StorageClass`. All PVCs that have no `storageClassName` can be bound only to - PVs of that default. Specifying a default `StorageClass` is done by setting the - annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in - a `StorageClass` object. If the administrator does not specify a default, the - cluster responds to PVC creation as if the admission plugin were turned off. If - more than one default is specified, the admission plugin forbids the creation of - all PVCs. -* If the admission plugin is turned off, there is no notion of a default - `StorageClass`. 
All PVCs that have no `storageClassName` can be bound only to PVs that - have no class. In this case, the PVCs that have no `storageClassName` are treated the - same way as PVCs that have their `storageClassName` set to `""`. - -Depending on installation method, a default StorageClass may be deployed -to a Kubernetes cluster by addon manager during installation. - -When a PVC specifies a `selector` in addition to requesting a `StorageClass`, -the requirements are ANDed together: only a PV of the requested class and with -the requested labels may be bound to the PVC. - -{{< note >}} -Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it. -{{< /note >}} - -In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead -of `storageClassName` attribute. This annotation is still working; however, -it won't be supported in a future Kubernetes release. - -## Claims As Volumes - -Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the Pod. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: myfrontend - image: nginx - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - persistentVolumeClaim: - claimName: myclaim -``` - -### A Note on Namespaces - -`PersistentVolumes` binds are exclusive, and since `PersistentVolumeClaims` are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace. - -## Raw Block Volume Support - -{{< feature-state for_k8s_version="v1.13" state="beta" >}} - -The following volume plugins support raw block volumes, including dynamic provisioning where -applicable: - -* AWSElasticBlockStore -* AzureDisk -* FC (Fibre Channel) -* GCEPersistentDisk -* iSCSI -* Local volume -* RBD (Ceph Block Device) -* VsphereVolume (alpha) - -{{< note >}} -Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9. -Support for the additional plugins was added in 1.10. -{{< /note >}} - -### Persistent Volumes using a Raw Block Volume -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: block-pv -spec: - capacity: - storage: 10Gi - accessModes: - - ReadWriteOnce - volumeMode: Block - persistentVolumeReclaimPolicy: Retain - fc: - targetWWNs: ["50060e801049cfd1"] - lun: 0 - readOnly: false -``` -### Persistent Volume Claim requesting a Raw Block Volume -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: block-pvc -spec: - accessModes: - - ReadWriteOnce - volumeMode: Block - resources: - requests: - storage: 10Gi -``` -### Pod specification adding Raw Block Device path in container -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: pod-with-block-volume -spec: - containers: - - name: fc-container - image: fedora:26 - command: ["/bin/sh", "-c"] - args: [ "tail -f /dev/null" ] - volumeDevices: - - name: data - devicePath: /dev/xvda - volumes: - - name: data - persistentVolumeClaim: - claimName: block-pvc -``` - -{{< note >}} -When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path. 
-{{< /note >}} - -### Binding Block Volumes - -If a user requests a raw block volume by indicating this using the `volumeMode` field in the `PersistentVolumeClaim` spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec. -Listed is a table of possible combinations the user and admin might specify for requesting a raw block device. The table indicates if the volume will be bound or not given the combinations: -Volume binding matrix for statically provisioned volumes: - -| PV volumeMode | PVC volumeMode | Result | -| --------------|:---------------:| ----------------:| -| unspecified | unspecified | BIND | -| unspecified | Block | NO BIND | -| unspecified | Filesystem | BIND | -| Block | unspecified | NO BIND | -| Block | Block | BIND | -| Block | Filesystem | NO BIND | -| Filesystem | Filesystem | BIND | -| Filesystem | Block | NO BIND | -| Filesystem | unspecified | BIND | - -{{< note >}} -Only statically provisioned volumes are supported for alpha release. Administrators should take care to consider these values when working with raw block devices. -{{< /note >}} - -## Volume Snapshot and Restore Volume from Snapshot Support - -{{< feature-state for_k8s_version="v1.12" state="alpha" >}} - -Volume snapshot feature was added to support CSI Volume Plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/). - -To enable support for restoring a volume from a volume snapshot data source, enable the -`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager. - -### Create Persistent Volume Claim from Volume Snapshot -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: restore-pvc -spec: - storageClassName: csi-hostpath-sc - dataSource: - name: new-snapshot-test - kind: VolumeSnapshot - apiGroup: snapshot.storage.k8s.io - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi -``` - -## Volume Cloning - -{{< feature-state for_k8s_version="v1.16" state="beta" >}} - -Volume clone feature was added to support CSI Volume Plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/). - -To enable support for cloning a volume from a PVC data source, enable the -`VolumePVCDataSource` feature gate on the apiserver and controller-manager. - -### Create Persistent Volume Claim from an existing pvc -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: cloned-pvc -spec: - storageClassName: my-csi-plugin - dataSource: - name: existing-src-pvc-name - kind: PersistentVolumeClaim - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi -``` - -## Writing Portable Configuration - -If you're writing configuration templates or examples that run on a wide range of clusters -and need persistent storage, it is recommended that you use the following pattern: - -- Include PersistentVolumeClaim objects in your bundle of config (alongside - Deployments, ConfigMaps, etc). -- Do not include PersistentVolume objects in the config, since the user instantiating - the config may not have permission to create PersistentVolumes. -- Give the user the option of providing a storage class name when instantiating - the template. - - If the user provides a storage class name, put that value into the - `persistentVolumeClaim.storageClassName` field. - This will cause the PVC to match the right storage - class if the cluster has StorageClasses enabled by the admin. 
- - If the user does not provide a storage class name, leave the - `persistentVolumeClaim.storageClassName` field as nil. This will cause a - PV to be automatically provisioned for the user with the default StorageClass - in the cluster. Many cluster environments have a default StorageClass installed, - or administrators can create their own default StorageClass. -- In your tooling, watch for PVCs that are not getting bound after some time - and surface this to the user, as this may indicate that the cluster has no - dynamic storage support (in which case the user should create a matching PV) - or the cluster has no storage system (in which case the user cannot deploy - config requiring PVCs). - -{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/controllers/deployment.md b/content/uk/docs/concepts/workloads/controllers/deployment.md deleted file mode 100644 index 4d676e76f0..0000000000 --- a/content/uk/docs/concepts/workloads/controllers/deployment.md +++ /dev/null @@ -1,1152 +0,0 @@ ---- -reviewers: -- janetkuo -title: Deployments -feature: - title: Автоматичне розгортання і відкатування - description: > - Kubernetes вносить зміни до вашого застосунку чи його конфігурації по мірі їх надходження. Водночас система моніторить робочий стан застосунку для того, щоб ці зміни не призвели до одночасної зупинки усіх ваших Подів. У випадку будь-яких збоїв, Kubernetes відкотить зміни назад. Скористайтеся перевагами зростаючої екосистеми інструментів для розгортання застосунків. - -content_template: templates/concept -weight: 30 ---- - -{{% capture overview %}} - -A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and -[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/). - -You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. - -{{< note >}} -Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below. -{{< /note >}} - -{{% /capture %}} - - -{{% capture body %}} - -## Use Case - -The following are typical use cases for Deployments: - -* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not. -* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment. -* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment. -* [Scale up the Deployment to facilitate more load](#scaling-a-deployment). -* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout. -* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck. -* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore. - -## Creating a Deployment - -The following is an example of a Deployment. 
It creates a ReplicaSet to bring up three `nginx` Pods: - -{{< codenew file="controllers/nginx-deployment.yaml" >}} - -In this example: - -* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. -* The Deployment creates three replicated Pods, indicated by the `replicas` field. -* The `selector` field defines how the Deployment finds which Pods to manage. - In this case, you simply select a label that is defined in the Pod template (`app: nginx`). - However, more sophisticated selection rules are possible, - as long as the Pod template itself satisfies the rule. - {{< note >}} - The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map - is equivalent to an element of `matchExpressions`, whose key field is "key" the operator is "In", - and the values array contains only "value". - All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match. - {{< /note >}} - -* The `template` field contains the following sub-fields: - * The Pods are labeled `app: nginx`using the `labels` field. - * The Pod template's specification, or `.template.spec` field, indicates that - the Pods run one container, `nginx`, which runs the `nginx` - [Docker Hub](https://hub.docker.com/) image at version 1.7.9. - * Create one container and name it `nginx` using the `name` field. - - Follow the steps given below to create the above Deployment: - - Before you begin, make sure your Kubernetes cluster is up and running. - - 1. Create the Deployment by running the following command: - - {{< note >}} - You may specify the `--record` flag to write the command executed in the resource annotation `kubernetes.io/change-cause`. It is useful for future introspection. - For example, to see the commands executed in each Deployment revision. - {{< /note >}} - - ```shell - kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml - ``` - - 2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following: - ```shell - NAME READY UP-TO-DATE AVAILABLE AGE - nginx-deployment 0/3 0 0 1s - ``` - When you inspect the Deployments in your cluster, the following fields are displayed: - - * `NAME` lists the names of the Deployments in the cluster. - * `DESIRED` displays the desired number of _replicas_ of the application, which you define when you create the Deployment. This is the _desired state_. - * `CURRENT` displays how many replicas are currently running. - * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state. - * `AVAILABLE` displays how many replicas of the application are available to your users. - * `AGE` displays the amount of time that the application has been running. - - Notice how the number of desired replicas is 3 according to `.spec.replicas` field. - - 3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this: - ```shell - Waiting for rollout to finish: 2 out of 3 new replicas have been updated... - deployment.apps/nginx-deployment successfully rolled out - ``` - - 4. Run the `kubectl get deployments` again a few seconds later. 
The output is similar to this:
-    ```shell
-    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
-    nginx-deployment   3/3     3            3           18s
-    ```
-    Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
-
-  5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:
-    ```shell
-    NAME                          DESIRED   CURRENT   READY   AGE
-    nginx-deployment-75675f5897   3         3         3       18s
-    ```
-    Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is
-    randomly generated and uses the pod-template-hash as a seed.
-
-  6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned:
-    ```shell
-    NAME                                READY     STATUS    RESTARTS   AGE       LABELS
-    nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
-    nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
-    nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
-    ```
-    The created ReplicaSet ensures that there are three `nginx` Pods.
-
-  {{< note >}}
-  You must specify an appropriate selector and Pod template labels in a Deployment (in this case,
-  `app: nginx`). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.
-  {{< /note >}}
-
-### Pod-template-hash label
-
-{{< note >}}
-Do not change this label.
-{{< /note >}}
-
-The `pod-template-hash` label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.
-
-This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the `PodTemplate` of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels,
-and in any existing Pods that the ReplicaSet might have.
-
-## Updating a Deployment
-
-{{< note >}}
-A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`)
-is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
-{{< /note >}}
-
-Follow the steps given below to update your Deployment:
-
-1. Let's update the nginx Pods to use the `nginx:1.9.1` image instead of the `nginx:1.7.9` image.
-
-    ```shell
-    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
-    ```
-    or simply use the following command:
-
-    ```shell
-    kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
-    ```
-
-    The output is similar to this:
-    ```
-    deployment.apps/nginx-deployment image updated
-    ```
-
-    Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
-
-    ```shell
-    kubectl edit deployment.v1.apps/nginx-deployment
-    ```
-
-    The output is similar to this:
-    ```
-    deployment.apps/nginx-deployment edited
-    ```
-
-2. To see the rollout status, run:
-
-    ```shell
-    kubectl rollout status deployment.v1.apps/nginx-deployment
-    ```
-
-    The output is similar to this:
-    ```
-    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
- ``` - or - ``` - deployment.apps/nginx-deployment successfully rolled out - ``` - -Get more details on your updated Deployment: - -* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`. - The output is similar to this: - ``` - NAME READY UP-TO-DATE AVAILABLE AGE - nginx-deployment 3/3 3 3 36s - ``` - -* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it -up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. - - ```shell - kubectl get rs - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1564180365 3 3 3 6s - nginx-deployment-2035384211 0 0 0 36s - ``` - -* Running `get pods` should now show only the new Pods: - - ```shell - kubectl get pods - ``` - - The output is similar to this: - ``` - NAME READY STATUS RESTARTS AGE - nginx-deployment-1564180365-khku8 1/1 Running 0 14s - nginx-deployment-1564180365-nacti 1/1 Running 0 14s - nginx-deployment-1564180365-z9gth 1/1 Running 0 14s - ``` - - Next time you want to update these Pods, you only need to update the Deployment's Pod template again. - - Deployment ensures that only a certain number of Pods are down while they are being updated. By default, - it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). - - Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. - By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge). - - For example, if you look at the above Deployment closely, you will see that it first created a new Pod, - then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of - new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. - It makes sure that at least 2 Pods are available and that at max 4 Pods in total are available. 
- -* Get details of your Deployment: - ```shell - kubectl describe deployments - ``` - The output is similar to this: - ``` - Name: nginx-deployment - Namespace: default - CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 - Labels: app=nginx - Annotations: deployment.kubernetes.io/revision=2 - Selector: app=nginx - Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable - StrategyType: RollingUpdate - MinReadySeconds: 0 - RollingUpdateStrategy: 25% max unavailable, 25% max surge - Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx:1.9.1 - Port: 80/TCP - Environment: - Mounts: - Volumes: - Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True NewReplicaSetAvailable - OldReplicaSets: - NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created) - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3 - Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1 - Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2 - Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2 - Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 - Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 - Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 - ``` - Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) - and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet - (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at - least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down - the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas - in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. - -### Rollover (aka multiple updates in-flight) - -Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up -the desired Pods. If the Deployment is updated, the existing ReplicaSet that controls Pods whose labels -match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new -ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets is scaled to 0. - -If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet -as per the update and start scaling that up, and rolls over the ReplicaSet that it was scaling up previously - -- it will add it to its list of old ReplicaSets and start scaling it down. - -For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`, -but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3 -replicas of `nginx:1.7.9` had been created. In that case, the Deployment immediately starts -killing the 3 `nginx:1.7.9` Pods that it had created, and starts creating -`nginx:1.9.1` Pods. It does not wait for the 5 replicas of `nginx:1.7.9` to be created -before changing course. 
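-
-As a rough sketch of the scenario above (assuming the `nginx-deployment` Deployment from the earlier example, and that the first rollout is still in progress when the last command is issued):
-
-```shell
-# scale out and roll out nginx:1.7.9
-kubectl scale deployment/nginx-deployment --replicas=5
-kubectl set image deployment/nginx-deployment nginx=nginx:1.7.9
-
-# before that rollout finishes, change course to nginx:1.9.1;
-# the ReplicaSet for nginx:1.7.9 is rolled over and scaled down immediately
-kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
-```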
-
-### Label selector updates
-
-It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front.
-In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped
-all of the implications.
-
-{{< note >}}
-In API version `apps/v1`, a Deployment's label selector is immutable after it gets created.
-{{< /note >}}
-
-* Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too,
-otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does
-not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and
-creating a new ReplicaSet.
-* Selector updates change the existing value in a selector key -- they result in the same behavior as additions.
-* Selector removals remove an existing key from the Deployment selector -- they do not require any changes in the
-Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the
-removed label still exists in any existing Pods and ReplicaSets.
-
-## Rolling Back a Deployment
-
-Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping.
-By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want
-(you can change that by modifying the revision history limit).
-
-{{< note >}}
-A Deployment's revision is created when a Deployment's rollout is triggered. This means that the
-new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed,
-for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment,
-do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling.
-This means that when you roll back to an earlier revision, only the Deployment's Pod template part is
-rolled back.
-{{< /note >}}
-
-* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
-
-  ```shell
-  kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
-  ```
-
-  The output is similar to this:
-  ```
-  deployment.apps/nginx-deployment image updated
-  ```
-
-* The rollout gets stuck. You can verify it by checking the rollout status:
-
-  ```shell
-  kubectl rollout status deployment.v1.apps/nginx-deployment
-  ```
-
-  The output is similar to this:
-  ```
-  Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
-  ```
-
-* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
-[read more here](#deployment-status).
-
-* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 2, and the number of new replicas (nginx-deployment-3066724191) is 1.
-
-  ```shell
-  kubectl get rs
-  ```
-
-  The output is similar to this:
-  ```
-  NAME                          DESIRED   CURRENT   READY   AGE
-  nginx-deployment-1564180365   3         3         3       25s
-  nginx-deployment-2035384211   0         0         0       36s
-  nginx-deployment-3066724191   1         1         0       6s
-  ```
-
-* Looking at the Pods created, you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.
- - ```shell - kubectl get pods - ``` - - The output is similar to this: - ``` - NAME READY STATUS RESTARTS AGE - nginx-deployment-1564180365-70iae 1/1 Running 0 25s - nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s - nginx-deployment-1564180365-hysrc 1/1 Running 0 25s - nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s - ``` - - {{< note >}} - The Deployment controller stops the bad rollout automatically, and stops scaling up the new - ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. - Kubernetes by default sets the value to 25%. - {{< /note >}} - -* Get the description of the Deployment: - ```shell - kubectl describe deployment - ``` - - The output is similar to this: - ``` - Name: nginx-deployment - Namespace: default - CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 - Labels: app=nginx - Selector: app=nginx - Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable - StrategyType: RollingUpdate - MinReadySeconds: 0 - RollingUpdateStrategy: 25% max unavailable, 25% max surge - Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx:1.91 - Port: 80/TCP - Host Port: 0/TCP - Environment: - Mounts: - Volumes: - Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True ReplicaSetUpdated - OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) - NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) - Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 - 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 - 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 - 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 - 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 - ``` - - To fix this, you need to rollback to a previous revision of Deployment that is stable. - -### Checking Rollout History of a Deployment - -Follow the steps given below to check the rollout history: - -1. First, check the revisions of this Deployment: - ```shell - kubectl rollout history deployment.v1.apps/nginx-deployment - ``` - The output is similar to this: - ``` - deployments "nginx-deployment" - REVISION CHANGE-CAUSE - 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true - 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true - 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true - ``` - - `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. 
You can specify the `CHANGE-CAUSE` message by:
-
-    * Annotating the Deployment with `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`
-    * Appending the `--record` flag to save the `kubectl` command that is making changes to the resource.
-    * Manually editing the manifest of the resource.
-
-2. To see the details of each revision, run:
-    ```shell
-    kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
-    ```
-
-    The output is similar to this:
-    ```
-    deployments "nginx-deployment" revision 2
-      Labels:       app=nginx
-              pod-template-hash=1159050644
-      Annotations:  kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
-      Containers:
-       nginx:
-        Image:      nginx:1.9.1
-        Port:       80/TCP
-        QoS Tier:
-          cpu:      BestEffort
-          memory:   BestEffort
-        Environment Variables:
-      No volumes.
-    ```
-
-### Rolling Back to a Previous Revision
-Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.
-
-1. Now you've decided to undo the current rollout and roll back to the previous revision:
-    ```shell
-    kubectl rollout undo deployment.v1.apps/nginx-deployment
-    ```
-
-    The output is similar to this:
-    ```
-    deployment.apps/nginx-deployment
-    ```
-    Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
-
-    ```shell
-    kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
-    ```
-
-    The output is similar to this:
-    ```
-    deployment.apps/nginx-deployment
-    ```
-
-    For more details about rollout-related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout).
-
-    The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
-    for rolling back to revision 2 is generated from the Deployment controller.
-
-2. To check if the rollback was successful and the Deployment is running as expected, run:
-    ```shell
-    kubectl get deployment nginx-deployment
-    ```
-
-    The output is similar to this:
-    ```
-    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
-    nginx-deployment   3/3     3            3           30m
-    ```
-3. 
Get the description of the Deployment: - ```shell - kubectl describe deployment nginx-deployment - ``` - The output is similar to this: - ``` - Name: nginx-deployment - Namespace: default - CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 - Labels: app=nginx - Annotations: deployment.kubernetes.io/revision=4 - kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true - Selector: app=nginx - Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable - StrategyType: RollingUpdate - MinReadySeconds: 0 - RollingUpdateStrategy: 25% max unavailable, 25% max surge - Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx:1.9.1 - Port: 80/TCP - Host Port: 0/TCP - Environment: - Mounts: - Volumes: - Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True NewReplicaSetAvailable - OldReplicaSets: - NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 - Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 - Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 - ``` - -## Scaling a Deployment - -You can scale a Deployment by using the following command: - -```shell -kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 -``` -The output is similar to this: -``` -deployment.apps/nginx-deployment scaled -``` - -Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled -in your cluster, you can setup an autoscaler for your Deployment and choose the minimum and maximum number of -Pods you want to run based on the CPU utilization of your existing Pods. - -```shell -kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 -``` -The output is similar to this: -``` -deployment.apps/nginx-deployment scaled -``` - -### Proportional scaling - -RollingUpdate Deployments support running multiple versions of an application at the same time. When you -or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress -or paused), the Deployment controller balances the additional replicas in the existing active -ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*. - -For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2. 
- -* Ensure that the 10 replicas in your Deployment are running. - ```shell - kubectl get deploy - ``` - The output is similar to this: - - ``` - NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE - nginx-deployment 10 10 10 10 50s - ``` - -* You update to a new image which happens to be unresolvable from inside the cluster. - ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment image updated - ``` - -* The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the -`maxUnavailable` requirement that you mentioned above. Check out the rollout status: - ```shell - kubectl get rs - ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1989198191 5 5 0 9s - nginx-deployment-618515232 8 8 8 1m - ``` - -* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas -to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using -proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you -spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the -most replicas and lower proportions go to ReplicaSets with less replicas. Any leftovers are added to the -ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up. - -In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the -new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming -the new replicas become healthy. To confirm this, run: - -```shell -kubectl get deploy -``` - -The output is similar to this: -``` -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -nginx-deployment 15 18 7 8 7m -``` -The rollout status confirms how the replicas were added to each ReplicaSet. -```shell -kubectl get rs -``` - -The output is similar to this: -``` -NAME DESIRED CURRENT READY AGE -nginx-deployment-1989198191 7 7 0 7m -nginx-deployment-618515232 11 11 11 7m -``` - -## Pausing and Resuming a Deployment - -You can pause a Deployment before triggering one or more updates and then resume it. This allows you to -apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. 
- -* For example, with a Deployment that was just created: - Get the Deployment details: - ```shell - kubectl get deploy - ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE - nginx 3 3 3 3 1m - ``` - Get the rollout status: - ```shell - kubectl get rs - ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 3 3 3 1m - ``` - -* Pause by running the following command: - ```shell - kubectl rollout pause deployment.v1.apps/nginx-deployment - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment paused - ``` - -* Then update the image of the Deployment: - ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment image updated - ``` - -* Notice that no new rollout started: - ```shell - kubectl rollout history deployment.v1.apps/nginx-deployment - ``` - - The output is similar to this: - ``` - deployments "nginx" - REVISION CHANGE-CAUSE - 1 - ``` -* Get the rollout status to ensure that the Deployment is updates successfully: - ```shell - kubectl get rs - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 3 3 3 2m - ``` - -* You can make as many updates as you wish, for example, update the resources that will be used: - ```shell - kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment resource requirements updated - ``` - - The initial state of the Deployment prior to pausing it will continue its function, but new updates to - the Deployment will not have any effect as long as the Deployment is paused. - -* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates: - ```shell - kubectl rollout resume deployment.v1.apps/nginx-deployment - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment resumed - ``` -* Watch the status of the rollout until it's done. - ```shell - kubectl get rs -w - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 2 2 2 2m - nginx-3926361531 2 2 0 6s - nginx-3926361531 2 2 1 18s - nginx-2142116321 1 2 2 2m - nginx-2142116321 1 2 2 2m - nginx-3926361531 3 2 1 18s - nginx-3926361531 3 2 1 18s - nginx-2142116321 1 1 1 2m - nginx-3926361531 3 3 1 18s - nginx-3926361531 3 3 2 19s - nginx-2142116321 0 1 1 2m - nginx-2142116321 0 1 1 2m - nginx-2142116321 0 0 0 2m - nginx-3926361531 3 3 3 20s - ``` -* Get the status of the latest rollout: - ```shell - kubectl get rs - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 0 0 0 2m - nginx-3926361531 3 3 3 28s - ``` -{{< note >}} -You cannot rollback a paused Deployment until you resume it. -{{< /note >}} - -## Deployment status - -A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while -rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment). - -### Progressing Deployment - -Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed: - -* The Deployment creates a new ReplicaSet. -* The Deployment is scaling up its newest ReplicaSet. -* The Deployment is scaling down its older ReplicaSet(s). 
-* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)). - -You can monitor the progress for a Deployment by using `kubectl rollout status`. - -### Complete Deployment - -Kubernetes marks a Deployment as _complete_ when it has the following characteristics: - -* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any -updates you've requested have been completed. -* All of the replicas associated with the Deployment are available. -* No old replicas for the Deployment are running. - -You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed -successfully, `kubectl rollout status` returns a zero exit code. - -```shell -kubectl rollout status deployment.v1.apps/nginx-deployment -``` -The output is similar to this: -``` -Waiting for rollout to finish: 2 of 3 updated replicas are available... -deployment.apps/nginx-deployment successfully rolled out -$ echo $? -0 -``` - -### Failed Deployment - -Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur -due to some of the following factors: - -* Insufficient quota -* Readiness probe failures -* Image pull errors -* Insufficient permissions -* Limit ranges -* Application runtime misconfiguration - -One way you can detect this condition is to specify a deadline parameter in your Deployment spec: -([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` denotes the -number of seconds the Deployment controller waits before indicating (in the Deployment status) that the -Deployment progress has stalled. - -The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report -lack of progress for a Deployment after 10 minutes: - -```shell -kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' -``` -The output is similar to this: -``` -deployment.apps/nginx-deployment patched -``` -Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following -attributes to the Deployment's `.status.conditions`: - -* Type=Progressing -* Status=False -* Reason=ProgressDeadlineExceeded - -See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions. - -{{< note >}} -Kubernetes takes no action on a stalled Deployment other than to report a status condition with -`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for -example, rollback the Deployment to its previous version. -{{< /note >}} - -{{< note >}} -If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can -safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the -deadline. -{{< /note >}} - -You may experience transient errors with your Deployments, either due to a low timeout that you have set or -due to any other kind of error that can be treated as transient. For example, let's suppose you have -insufficient quota. 
If you describe the Deployment you will notice the following section: - -```shell -kubectl describe deployment nginx-deployment -``` -The output is similar to this: -``` -<...> -Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True ReplicaSetUpdated - ReplicaFailure True FailedCreate -<...> -``` - -If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this: - -``` -status: - availableReplicas: 2 - conditions: - - lastTransitionTime: 2016-10-04T12:25:39Z - lastUpdateTime: 2016-10-04T12:25:39Z - message: Replica set "nginx-deployment-4262182780" is progressing. - reason: ReplicaSetUpdated - status: "True" - type: Progressing - - lastTransitionTime: 2016-10-04T12:25:42Z - lastUpdateTime: 2016-10-04T12:25:42Z - message: Deployment has minimum availability. - reason: MinimumReplicasAvailable - status: "True" - type: Available - - lastTransitionTime: 2016-10-04T12:25:39Z - lastUpdateTime: 2016-10-04T12:25:39Z - message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota: - object-counts, requested: pods=1, used: pods=3, limited: pods=2' - reason: FailedCreate - status: "True" - type: ReplicaFailure - observedGeneration: 3 - replicas: 2 - unavailableReplicas: 2 -``` - -Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the -reason for the Progressing condition: - -``` -Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing False ProgressDeadlineExceeded - ReplicaFailure True FailedCreate -``` - -You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other -controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota -conditions and the Deployment controller then completes the Deployment rollout, you'll see the -Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`). - -``` -Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True NewReplicaSetAvailable -``` - -`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated -by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment -is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum -required new replicas are available (see the Reason of the condition for the particulars - in our case -`Reason=NewReplicaSetAvailable` means that the Deployment is complete). - -You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status` -returns a non-zero exit code if the Deployment has exceeded the progression deadline. - -```shell -kubectl rollout status deployment.v1.apps/nginx-deployment -``` -The output is similar to this: -``` -Waiting for rollout to finish: 2 out of 3 new replicas have been updated... -error: deployment "nginx" exceeded its progress deadline -$ echo $? -1 -``` - -### Operating on a failed deployment - -All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back -to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template. 
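-
-For illustration, a minimal sketch of such operations, reusing the `nginx-deployment` example and the commands shown earlier on this page:
-
-```shell
-# scale the stuck Deployment up or down
-kubectl scale deployment.v1.apps/nginx-deployment --replicas=5
-
-# roll back to the previous revision
-kubectl rollout undo deployment.v1.apps/nginx-deployment
-
-# or pause it, apply several tweaks, and resume
-kubectl rollout pause deployment.v1.apps/nginx-deployment
-kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
-kubectl rollout resume deployment.v1.apps/nginx-deployment
-```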
-
-## Clean up Policy
-
-You can set the `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
-this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
-it is 10.
-
-{{< note >}}
-Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment,
-so that Deployment will not be able to roll back.
-{{< /note >}}
-
-## Canary Deployment
-
-If you want to roll out releases to a subset of users or servers using the Deployment, you
-can create multiple Deployments, one for each release, following the canary pattern described in
-[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments).
-
-## Writing a Deployment Spec
-
-As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields.
-For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
-configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
-
-A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
-
-### Pod Template
-
-The `.spec.template` and `.spec.selector` are the only required fields of the `.spec`.
-
-The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
-`apiVersion` or `kind`.
-
-In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
-labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).
-
-Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is
-allowed, which is the default if not specified.
-
-### Replicas
-
-`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1.
-
-### Selector
-
-`.spec.selector` is a required field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/)
-for the Pods targeted by this Deployment.
-
-`.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by the API.
-
-In API version `apps/v1`, `.spec.selector` and `.metadata.labels` do not default to `.spec.template.metadata.labels` if not set. So they must be set explicitly. Also note that `.spec.selector` is immutable after creation of the Deployment in `apps/v1`.
-
-A Deployment may terminate Pods whose labels match the selector if their template is different
-from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new
-Pods with `.spec.template` if the number of Pods is less than the desired number.
-
-{{< note >}}
-You should not create other Pods whose labels match this selector, either directly, by creating
-another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
-do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
-{{< /note >}}
-
-If you have multiple controllers that have overlapping selectors, the controllers will fight with each
-other and won't behave correctly.
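-
-For illustration, a minimal fragment (the names and label values are placeholders) showing a selector that matches the Pod template labels, as required:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: example-deployment
-spec:
-  replicas: 3
-  selector:
-    matchLabels:
-      app: example        # must match the template labels below
-  template:
-    metadata:
-      labels:
-        app: example
-    spec:
-      containers:
-      - name: example
-        image: nginx:1.7.9
-```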
-
-### Strategy
-
-`.spec.strategy` specifies the strategy used to replace old Pods with new ones.
-`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
-the default value.
-
-#### Recreate Deployment
-
-All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
-
-#### Rolling Update Deployment
-
-The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
-fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
-the rolling update process.
-
-##### Max Unavailable
-
-`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
-of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
-or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by
-rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
-
-For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
-Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled
-down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
-at all times during the update is at least 70% of the desired Pods.
-
-##### Max Surge
-
-`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
-that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
-percentage of desired Pods (for example, 10%). The value cannot be 0 if `MaxUnavailable` is 0. The absolute number
-is calculated from the percentage by rounding up. The default value is 25%.
-
-For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
-rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
-Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
-total number of Pods running at any time during the update is at most 130% of desired Pods.
-
-### Progress Deadline Seconds
-
-`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
-to wait for your Deployment to progress before the system reports back that the Deployment has
-[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
-and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
-retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
-controller will roll back a Deployment as soon as it observes such a condition.
-
-If specified, this field needs to be greater than `.spec.minReadySeconds`.
-
-### Min Ready Seconds
-
-`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
-created Pod should be ready without any of its containers crashing, for it to be considered available.
-This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
-a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
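-
-A minimal sketch (the values are only illustrative) of how the fields described in this section fit together in a Deployment spec:
-
-```yaml
-spec:
-  minReadySeconds: 5
-  progressDeadlineSeconds: 600
-  strategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxUnavailable: 25%
-      maxSurge: 25%
-```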
- -### Rollback To - -Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1` and `apps/v1beta1`, and is no longer supported in API versions starting `apps/v1beta2`. Instead, `kubectl rollout undo` as introduced in [Rolling Back to a Previous Revision](#rolling-back-to-a-previous-revision) should be used. - -### Revision History Limit - -A Deployment's revision history is stored in the ReplicaSets it controls. - -`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain -to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the frequency and stability of new Deployments. - -More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. -In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up. - -### Paused - -`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between -a paused Deployment and one that is not paused, is that any changes into the PodTemplateSpec of the paused -Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when -it is created. - -## Alternative to Deployments - -### kubectl rolling-update - -[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers -in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have -additional features, such as rolling back to any previous revision even after the rolling update is done. - -{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md deleted file mode 100644 index 36bf7876bc..0000000000 --- a/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ /dev/null @@ -1,480 +0,0 @@ ---- -reviewers: -- erictune -- soltysh -title: Jobs - Run to Completion -content_template: templates/concept -feature: - title: Пакетна обробка - description: > - На додачу до Сервісів, Kubernetes може керувати вашими робочими навантаженнями систем безперервної інтеграції та пакетної обробки, за потреби замінюючи контейнери, що відмовляють. -weight: 70 ---- - -{{% capture overview %}} - -A Job creates one or more Pods and ensures that a specified number of them successfully terminate. -As pods successfully complete, the Job tracks the successful completions. When a specified number -of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up -the Pods it created. - -A simple case is to create one Job object in order to reliably run one Pod to completion. -The Job object will start a new Pod if the first Pod fails or is deleted (for example -due to a node hardware failure or a node reboot). - -You can also use a Job to run multiple Pods in parallel. - -{{% /capture %}} - - -{{% capture body %}} - -## Running an example Job - -Here is an example Job config. It computes π to 2000 places and prints it out. -It takes around 10s to complete. 
- -{{< codenew file="controllers/job.yaml" >}} - -You can run the example with this command: - -```shell -kubectl apply -f https://k8s.io/examples/controllers/job.yaml -``` -``` -job.batch/pi created -``` - -Check on the status of the Job with `kubectl`: - -```shell -kubectl describe jobs/pi -``` -``` -Name: pi -Namespace: default -Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c -Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c - job-name=pi -Annotations: kubectl.kubernetes.io/last-applied-configuration: - {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":... -Parallelism: 1 -Completions: 1 -Start Time: Mon, 02 Dec 2019 15:20:11 +0200 -Completed At: Mon, 02 Dec 2019 15:21:16 +0200 -Duration: 65s -Pods Statuses: 0 Running / 1 Succeeded / 0 Failed -Pod Template: - Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c - job-name=pi - Containers: - pi: - Image: perl - Port: - Host Port: - Command: - perl - -Mbignum=bpi - -wle - print bpi(2000) - Environment: - Mounts: - Volumes: -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 -``` - -To view completed Pods of a Job, use `kubectl get pods`. - -To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: - -```shell -pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}') -echo $pods -``` -``` -pi-5rwd7 -``` - -Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression -that just gets the name from each Pod in the returned list. - -View the standard output of one of the pods: - -```shell -kubectl logs $pods -``` -The output is similar to this: -```shell 
-3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 -``` - -## Writing a Job Spec - -As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. - -A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). - -### Pod Template - -The `.spec.template` is the only required field of the `.spec`. - -The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`. - -In addition to required fields for a Pod, a pod template in a Job must specify appropriate -labels (see [pod selector](#pod-selector)) and an appropriate restart policy. - -Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed. - -### Pod Selector - -The `.spec.selector` field is optional. In almost all cases you should not specify it. -See section [specifying your own pod selector](#specifying-your-own-pod-selector). - - -### Parallel Jobs - -There are three main types of task suitable to run as a Job: - -1. Non-parallel Jobs - - normally, only one Pod is started, unless the Pod fails. - - the Job is complete as soon as its Pod terminates successfully. -1. Parallel Jobs with a *fixed completion count*: - - specify a non-zero positive value for `.spec.completions`. 
- - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`. - - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`. -1. Parallel Jobs with a *work queue*: - - do not specify `.spec.completions`, default to `.spec.parallelism`. - - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue. - - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done. - - when _any_ Pod from the Job terminates with success, no new Pods are created. - - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success. - - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting. - -For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are -unset, both are defaulted to 1. - -For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed. -You can set `.spec.parallelism`, or leave it unset and it will default to 1. - -For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to -a non-negative integer. - -For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section. - - -#### Controlling Parallelism - -The requested parallelism (`.spec.parallelism`) can be set to any non-negative value. -If it is unspecified, it defaults to 1. -If it is specified as 0, then the Job is effectively paused until it is increased. - -Actual parallelism (number of pods running at any instant) may be more or less than requested -parallelism, for a variety of reasons: - -- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of - remaining completions. Higher values of `.spec.parallelism` are effectively ignored. -- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however. -- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react. -- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.), - then there may be fewer pods than requested. -- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job. -- When a Pod is gracefully shut down, it takes time to stop. - -## Handling Pod and Container Failures - -A container in a Pod may fail for a number of reasons, such as because the process in it exited with -a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this -happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays -on the node, but the container is re-run. Therefore, your program needs to handle the case when it is -restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. -See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. 
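-
-For illustration only, a minimal sketch (the name is a placeholder) of where `restartPolicy` sits in a Job's Pod template, here using `OnFailure` so that failed containers are re-run on the same Pod:
-
-```yaml
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: example-job
-spec:
-  template:
-    spec:
-      containers:
-      - name: example
-        image: perl
-        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
-      restartPolicy: OnFailure
-```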
- -An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node -(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the -`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller -starts a new Pod. This means that your application needs to handle the case when it is restarted in a new -pod. In particular, it needs to handle temporary files, locks, incomplete output and the like -caused by previous runs. - -Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and -`.spec.template.spec.restartPolicy = "Never"`, the same program may -sometimes be started twice. - -If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be -multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. - -### Pod backoff failure policy - -There are situations where you want to fail a Job after some amount of retries -due to a logical error in configuration etc. -To do so, set `.spec.backoffLimit` to specify the number of retries before -considering a Job as failed. The back-off limit is set by default to 6. Failed -Pods associated with the Job are recreated by the Job controller with an -exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The -back-off count is reset if no new failed Pods appear before the Job's next -status check. - -{{< note >}} -Issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870) still exists for versions of Kubernetes prior to version 1.12 -{{< /note >}} -{{< note >}} -If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job -will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting -`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output -from failed Jobs is not lost inadvertently. -{{< /note >}} - -## Job Termination and Cleanup - -When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around -allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. -The job object also remains after it is completed so that you can view its status. It is up to the user to delete -old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too. - -By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the -`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated. - -Another way to terminate a Job is by setting an active deadline. -Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds. -The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created. -Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`. - -Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. 
Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached. - -Example: - -```yaml -apiVersion: batch/v1 -kind: Job -metadata: - name: pi-with-timeout -spec: - backoffLimit: 5 - activeDeadlineSeconds: 100 - template: - spec: - containers: - - name: pi - image: perl - command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] - restartPolicy: Never -``` - -Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. - -Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`. -That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve. - -## Clean Up Finished Jobs Automatically - -Finished Jobs are usually no longer needed in the system. Keeping them around in -the system will put pressure on the API server. If the Jobs are managed directly -by a higher level controller, such as -[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be -cleaned up by CronJobs based on the specified capacity-based cleanup policy. - -### TTL Mechanism for Finished Jobs - -{{< feature-state for_k8s_version="v1.12" state="alpha" >}} - -Another way to clean up finished Jobs (either `Complete` or `Failed`) -automatically is to use a TTL mechanism provided by a -[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for -finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of -the Job. - -When the TTL controller cleans up the Job, it will delete the Job cascadingly, -i.e. delete its dependent objects, such as Pods, together with the Job. Note -that when the Job is deleted, its lifecycle guarantees, such as finalizers, will -be honored. - -For example: - -```yaml -apiVersion: batch/v1 -kind: Job -metadata: - name: pi-with-ttl -spec: - ttlSecondsAfterFinished: 100 - template: - spec: - containers: - - name: pi - image: perl - command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] - restartPolicy: Never -``` - -The Job `pi-with-ttl` will be eligible to be automatically deleted, `100` -seconds after it finishes. - -If the field is set to `0`, the Job will be eligible to be automatically deleted -immediately after it finishes. If the field is unset, this Job won't be cleaned -up by the TTL controller after it finishes. - -Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For -more information, see the documentation for -[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for -finished resources. - -## Job Patterns - -The Job object can be used to support reliable parallel execution of Pods. The Job object is not -designed to support closely-communicating parallel processes, as commonly found in scientific -computing. It does support parallel processing of a set of independent but related *work items*. -These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a -NoSQL database to scan, and so on. - -In a complex system, there may be multiple different sets of work items. 
Here we are just -considering one set of work items that the user wants to manage together — a *batch job*. - -There are several different patterns for parallel computation, each with strengths and weaknesses. -The tradeoffs are: - -- One Job object for each work item, vs. a single Job object for all work items. The latter is - better for large numbers of work items. The former creates some overhead for the user and for the - system to manage large numbers of Job objects. -- Number of pods created equals number of work items, vs. each Pod can process multiple work items. - The former typically requires less modification to existing code and containers. The latter - is better for large numbers of work items, for similar reasons to the previous bullet. -- Several approaches use a work queue. This requires running a queue service, - and modifications to the existing program or container to make it use the work queue. - Other approaches are easier to adapt to an existing containerised application. - - -The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. -The pattern names are also links to examples and more detailed description. - -| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? | -| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:| -| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ | -| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ | -| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ | -| Single Job with Static Work Assignment | ✓ | | ✓ | | - -When you specify completions with `.spec.completions`, each Pod created by the Job controller -has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that -all pods for a task will have the same command line and the same -image, the same volumes, and (almost) the same environment variables. These patterns -are different ways to arrange for pods to work on different things. - -This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns. -Here, `W` is the number of work items. - -| Pattern | `.spec.completions` | `.spec.parallelism` | -| -------------------------------------------------------------------- |:-------------------:|:--------------------:| -| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 | -| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any | -| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any | -| Single Job with Static Work Assignment | W | any | - - -## Advanced Usage - -### Specifying your own pod selector - -Normally, when you create a Job object, you do not specify `.spec.selector`. -The system defaulting logic adds this field when the Job is created. -It picks a selector value that will not overlap with any other jobs. - -However, in some cases, you might need to override this automatically set selector. -To do this, you can specify the `.spec.selector` of the Job. - -Be very careful when doing this. 
If you specify a label selector which is not -unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated -job may be deleted, or this Job may count other Pods as completing it, or one or both -Jobs may refuse to create Pods or run to completion. If a non-unique selector is -chosen, then other controllers (e.g. ReplicationController) and their Pods may behave -in unpredictable ways too. Kubernetes will not stop you from making a mistake when -specifying `.spec.selector`. - -Here is an example of a case when you might want to use this feature. - -Say Job `old` is already running. You want existing Pods -to keep running, but you want the rest of the Pods it creates -to use a different pod template and for the Job to have a new name. -You cannot update the Job because these fields are not updatable. -Therefore, you delete Job `old` but _leave its pods -running_, using `kubectl delete jobs/old --cascade=false`. -Before deleting it, you make a note of what selector it uses: - -``` -kubectl get job old -o yaml -``` -``` -kind: Job -metadata: - name: old - ... -spec: - selector: - matchLabels: - controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 - ... -``` - -Then you create a new Job with name `new` and you explicitly specify the same selector. -Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, -they are controlled by Job `new` as well. - -You need to specify `manualSelector: true` in the new Job since you are not using -the selector that the system normally generates for you automatically. - -``` -kind: Job -metadata: - name: new - ... -spec: - manualSelector: true - selector: - matchLabels: - controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 - ... -``` - -The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting -`manualSelector: true` tells the system to that you know what you are doing and to allow this -mismatch. - -## Alternatives - -### Bare Pods - -When the node that a Pod is running on reboots or fails, the pod is terminated -and will not be restarted. However, a Job will create new Pods to replace terminated ones. -For this reason, we recommend that you use a Job rather than a bare Pod, even if your application -requires only a single Pod. - -### Replication Controller - -Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). -A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job -manages Pods that are expected to terminate (e.g. batch tasks). - -As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate -for pods with `RestartPolicy` equal to `OnFailure` or `Never`. -(Note: If `RestartPolicy` is not set, the default value is `Always`.) - -### Single Job starts Controller Pod - -Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort -of custom controller for those Pods. This allows the most flexibility, but may be somewhat -complicated to get started with and offers less integration with Kubernetes. - -One example of this pattern would be a Job which starts a Pod which runs a script that in turn -starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a spark -driver, and then cleans up. 
- -An advantage of this approach is that the overall process gets the completion guarantee of a Job -object, but complete control over what Pods are created and how work is assigned to them. - -## Cron Jobs {#cron-jobs} - -You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. - -{{% /capture %}} diff --git a/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md deleted file mode 100644 index c8a666ac11..0000000000 --- a/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md +++ /dev/null @@ -1,291 +0,0 @@ ---- -reviewers: -- bprashanth -- janetkuo -title: ReplicationController -feature: - title: Самозцілення - anchor: How a ReplicationController Works - description: > - Перезапускає контейнери, що відмовили; заміняє і перерозподіляє контейнери у випадку непрацездатності вузла; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності. - -content_template: templates/concept -weight: 20 ---- - -{{% capture overview %}} - -{{< note >}} -A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication. -{{< /note >}} - -A _ReplicationController_ ensures that a specified number of pod replicas are running at any one -time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is -always up and available. - -{{% /capture %}} - - -{{% capture body %}} - -## How a ReplicationController Works - -If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the -ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a -ReplicationController are automatically replaced if they fail, are deleted, or are terminated. -For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. -For this reason, you should use a ReplicationController even if your application requires -only a single pod. A ReplicationController is similar to a process supervisor, -but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods -across multiple nodes. - -ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in -kubectl commands. - -A simple case is to create one ReplicationController object to reliably run one instance of -a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated -service, such as web servers. - -## Running an example ReplicationController - -This example ReplicationController config runs three copies of the nginx web server. 
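For context, the `controllers/replication.yaml` file referenced just below defines a ReplicationController roughly like the following sketch (reconstructed from the `kubectl describe` output shown later in this section; the copy in the examples repository is authoritative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3                    # keep three copies of the nginx Pod running
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx               # must match .spec.selector
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```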
- -{{< codenew file="controllers/replication.yaml" >}} - -Run the example job by downloading the example file and then running this command: - -```shell -kubectl apply -f https://k8s.io/examples/controllers/replication.yaml -``` -``` -replicationcontroller/nginx created -``` - -Check on the status of the ReplicationController using this command: - -```shell -kubectl describe replicationcontrollers/nginx -``` -``` -Name: nginx -Namespace: default -Selector: app=nginx -Labels: app=nginx -Annotations: -Replicas: 3 current / 3 desired -Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed -Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx - Port: 80/TCP - Environment: - Mounts: - Volumes: -Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message - --------- -------- ----- ---- ------------- ---- ------ ------- - 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m - 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0 - 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v -``` - -Here, three pods are created, but none is running yet, perhaps because the image is being pulled. -A little later, the same command may show: - -```shell -Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed -``` - -To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this: - -```shell -pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name}) -echo $pods -``` -``` -nginx-3ntk0 nginx-4ok8v nginx-qrm3m -``` - -Here, the selector is the same as the selector for the ReplicationController (seen in the -`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option -specifies an expression that just gets the name from each pod in the returned list. - - -## Writing a ReplicationController Spec - -As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields. -For general information about working with config files, see [object management ](/docs/concepts/overview/working-with-objects/object-management/). - -A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). - -### Pod Template - -The `.spec.template` is the only required field of the `.spec`. - -The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`. - -In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate -labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector). - -Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified. - -For local container restarts, ReplicationControllers delegate to an agent on the node, -for example the [Kubelet](/docs/admin/kubelet/) or Docker. - -### Labels on the ReplicationController - -The ReplicationController can itself have labels (`.metadata.labels`). 
Typically, you -would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified -then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be -different, and the `.metadata.labels` do not affect the behavior of the ReplicationController. - -### Pod Selector - -The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController -manages all the pods with labels that match the selector. It does not distinguish -between pods that it created or deleted and pods that another person or process created or -deleted. This allows the ReplicationController to be replaced without affecting the running pods. - -If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will -be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to -`.spec.template.metadata.labels`. - -Also you should not normally create any pods whose labels match this selector, either directly, with -another ReplicationController, or with another controller such as Job. If you do so, the -ReplicationController thinks that it created the other pods. Kubernetes does not stop you -from doing this. - -If you do end up with multiple controllers that have overlapping selectors, you -will have to manage the deletion yourself (see [below](#working-with-replicationcontrollers)). - -### Multiple Replicas - -You can specify how many pods should run concurrently by setting `.spec.replicas` to the number -of pods you would like to have running concurrently. The number running at any time may be higher -or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully -shutdown, and a replacement starts early. - -If you do not specify `.spec.replicas`, then it defaults to 1. - -## Working with ReplicationControllers - -### Deleting a ReplicationController and its Pods - -To delete a ReplicationController and all its pods, use [`kubectl -delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will scale the ReplicationController to zero and wait -for it to delete each pod before deleting the ReplicationController itself. If this kubectl -command is interrupted, it can be restarted. - -When using the REST API or go client library, you need to do the steps explicitly (scale replicas to -0, wait for pod deletions, then delete the ReplicationController). - -### Deleting just a ReplicationController - -You can delete a ReplicationController without affecting any of its pods. - -Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). - -When using the REST API or go client library, simply delete the ReplicationController object. - -Once the original is deleted, you can create a new ReplicationController to replace it. As long -as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. -However, it will not make any effort to make existing pods match a new, different pod template. -To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates). - -### Isolating pods from a ReplicationController - -Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. 
Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). - -## Common usage patterns - -### Rescheduling - -As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent). - -### Scaling - -The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field. - -### Rolling updates - -The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one. - -As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. - -Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time. - -The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. - -Rolling update is implemented in the client tool -[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples. - -### Multiple release tracks - -In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels. - -For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc. - -### Using ReplicationControllers with Services - -Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic -goes to the old version, and some goes to the new version. - -A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services. 
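To make the canary setup described above concrete, here is a sketch of the two ReplicationControllers it calls for (the names and image tags are illustrative, not taken from this page); a Service selecting only `tier: frontend` and `environment: prod` would then send traffic to Pods from both tracks:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable          # illustrative name
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:stable    # placeholder image tag
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary          # illustrative name
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:canary    # placeholder image tag
```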
- -## Writing programs for Replication - -Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself. - -## Responsibilities of the ReplicationController - -The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. - -The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)). - -The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc. - - -## API Object - -Replication controller is a top-level resource in the Kubernetes REST API. More details about the -API object can be found at: -[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core). - -## Alternatives to ReplicationController - -### ReplicaSet - -[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement). 
-It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates. -Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all. - - -### Deployment (Recommended) - -[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods -in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, -because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. - -### Bare Pods - -Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker). - -### Job - -Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own -(that is, batch jobs). - -### DaemonSet - -Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a -machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied -to a machine lifetime: the pod needs to be running on the machine before other pods start, and are -safe to terminate when the machine is otherwise ready to be rebooted/shutdown. - -## For more information - -Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/). - -{{% /capture %}} diff --git a/content/uk/docs/contribute/localization_uk.md b/content/uk/docs/contribute/localization_uk.md index 67a9c1789b..0235b56a32 100644 --- a/content/uk/docs/contribute/localization_uk.md +++ b/content/uk/docs/contribute/localization_uk.md @@ -22,17 +22,15 @@ anchors: * У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity. -* Співзвучні слова передаємо транслітерацією зі збереженням написання (Service -> Сервіс). +* Назви об'єктів Kubernetes залишаємо без перекладу і пишемо з великої літери: Service, Pod, Deployment, Volume, Namespace, за винятком терміна node (вузол). Назви об'єктів Kubernetes вважаємо за іменники ч.р. і відмінюємо за допомогою апострофа: Pod'ів, Deployment'ами. + Для слів, що закінчуються на приголосний, у родовому відмінку однини використовуємо закінчення -а: Pod'а, Deployment'а. + Слова, що закінчуються на голосний, не відмінюємо: доступ до Service, за допомогою Namespace. У множині використовуємо англійську форму: користуватися Services, спільні Volumes. -* Реалії Kubernetes пишемо з великої літери: Сервіс, Под, але вузол (node). 
- -* Для слів з великих літер, які не мають трансліт-аналогу, використовуємо англійські слова (Deployment, Volume, Namespace). +* Частовживані і усталені за межами Kubernetes слова перекладаємо українською і пишемо з малої літери (label -> мітка). У випадку, якщо термін для означення об'єкта Kubernetes вживається у своєму загальному значенні поза контекстом Kubernetes (service як службова програма, deployment як розгортання), перекладаємо його і пишемо з малої літери, наприклад: service discovery -> виявлення сервісу, continuous deployment -> безперервне розгортання. * Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver). -* Частовживані і усталені за межами K8s слова перекладаємо українською і пишемо з маленької літери (label -> мітка). - -* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Більшість необхідних нам термінів є словами іншомовного походження, які у родовому відмінку однини приймають закінчення -а: Пода, Deployment'а. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv). +* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv). ## Словник {#словник} @@ -41,24 +39,24 @@ English | Українська | addon | розширення | application | застосунок | backend | бекенд | -build | збирати (процес) | build | збирання (результат) | +build | збирати (процес) | cache | кеш | CLI | інтерфейс командного рядка | cloud | хмара; хмарний провайдер | containerized | контейнеризований | -Continuous development | безперервна розробка | -Continuous integration | безперервна інтеграція | -Continuous deployment | безперервне розгортання | +continuous deployment | безперервне розгортання | +continuous development | безперервна розробка | +continuous integration | безперервна інтеграція | contribute | робити внесок (до проекту), допомагати (проекту) | contributor | контриб'ютор, учасник проекту | control plane | площина управління | controller | контролер | -CPU | ЦПУ | +CPU | ЦП | dashboard | дашборд | data plane | площина даних | -default settings | типові налаштування | default (by) | за умовчанням | +default settings | типові налаштування | Deployment | Deployment | deprecated | застарілий | desired state | бажаний стан | @@ -81,8 +79,8 @@ label | мітка | lifecycle | життєвий цикл | logging | логування | maintenance | обслуговування | -master | master | map | спроектувати, зіставити, встановити відповідність | +master | master | monitor | моніторити | monitoring | моніторинг | Namespace | Namespace | @@ -91,7 +89,7 @@ node | вузол | orchestrate | оркеструвати | output | вивід | patch | патч | -Pod | Под | +Pod | Pod | production | прод | pull request | pull request | release | реліз | @@ -101,13 +99,14 @@ rolling update | послідовне оновлення | rollout (new updates) | викатка (оновлень) | run | запускати | scale | масштабувати | -schedule | розподіляти (Поди по вузлах) | -scheduler | scheduler | -secret | секрет | -selector | селектор | +schedule | розподіляти (Pod'и по вузлах) | +Scheduler | Scheduler | +Secret | Secret | +Selector | Селектор | self-healing | самозцілення | self-restoring | 
самовідновлення | -service | сервіс | +Service | Service (як об'єкт Kubernetes) | +service | сервіс (як службова програма) | service discovery | виявлення сервісу | source code | вихідний код | stateful app | застосунок зі станом | @@ -115,7 +114,7 @@ stateless app | застосунок без стану | task | завдання | terminated | зупинений | traffic | трафік | -VM (virtual machine) | ВМ | +VM (virtual machine) | ВМ (віртуальна машина) | Volume | Volume | workload | робоче навантаження | YAML | YAML | diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md index d72f91478f..5a8cfc3c51 100644 --- a/content/uk/docs/home/_index.md +++ b/content/uk/docs/home/_index.md @@ -1,12 +1,10 @@ --- -approvers: -- chenopis title: Документація Kubernetes noedit: true cid: docsHome layout: docsportal_home -class: gridPage -linkTitle: "Home" +class: gridPage gridPageHome +linkTitle: "Головна" main_menu: true weight: 10 hide_feedback: true diff --git a/content/uk/docs/reference/glossary/cluster-operations.md b/content/uk/docs/reference/glossary/cluster-operations.md index e274bb4f7f..0b8ec9f510 100644 --- a/content/uk/docs/reference/glossary/cluster-operations.md +++ b/content/uk/docs/reference/glossary/cluster-operations.md @@ -7,8 +7,7 @@ full_link: # short_description: > # Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster. short_description: > -Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. - + Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. aka: tags: - operations diff --git a/content/uk/docs/reference/glossary/cluster.md b/content/uk/docs/reference/glossary/cluster.md index 58fc3bd6fd..4748e61236 100644 --- a/content/uk/docs/reference/glossary/cluster.md +++ b/content/uk/docs/reference/glossary/cluster.md @@ -19,4 +19,4 @@ tags: -На робочих вузлах розміщуються Поди, які є складовими застосунку. Площина управління керує робочими вузлами і Подами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності. +На робочих вузлах розміщуються Pod'и, які є складовими застосунку. Площина управління керує робочими вузлами і Pod'ами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності. diff --git a/content/uk/docs/reference/glossary/control-plane.md b/content/uk/docs/reference/glossary/control-plane.md index da9fd4c08a..1fb28e40a4 100644 --- a/content/uk/docs/reference/glossary/control-plane.md +++ b/content/uk/docs/reference/glossary/control-plane.md @@ -7,7 +7,7 @@ full_link: # short_description: > # The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. short_description: > -Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. 
+ Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. aka: tags: diff --git a/content/uk/docs/reference/glossary/deployment.md b/content/uk/docs/reference/glossary/deployment.md index 5e62f7f784..4c9c5ba3b9 100644 --- a/content/uk/docs/reference/glossary/deployment.md +++ b/content/uk/docs/reference/glossary/deployment.md @@ -8,7 +8,7 @@ full_link: /docs/concepts/workloads/controllers/deployment/ short_description: > Об'єкт API, що керує реплікованим застосунком. -aka: +aka: tags: - fundamental - core-object @@ -17,7 +17,7 @@ tags: Об'єкт API, що керує реплікованим застосунком. - + -Кожна репліка являє собою {{< glossary_tooltip term_id="Под" >}}; Поди розподіляються між вузлами кластера. +Кожна репліка являє собою {{< glossary_tooltip term_id="pod" text="Pod" >}}; Pod'и розподіляються між вузлами кластера. diff --git a/content/uk/docs/reference/glossary/index.md b/content/uk/docs/reference/glossary/index.md index 3cbc4533ba..fd57553ef8 100644 --- a/content/uk/docs/reference/glossary/index.md +++ b/content/uk/docs/reference/glossary/index.md @@ -1,7 +1,7 @@ --- approvers: -- chenopis -- abiogenesis-now +- maxymvlasov +- anastyakulyk # title: Standardized Glossary title: Глосарій layout: glossary diff --git a/content/uk/docs/reference/glossary/kube-proxy.md b/content/uk/docs/reference/glossary/kube-proxy.md index 5086226f8e..a6db7e07de 100644 --- a/content/uk/docs/reference/glossary/kube-proxy.md +++ b/content/uk/docs/reference/glossary/kube-proxy.md @@ -17,7 +17,7 @@ tags: network proxy that runs on each node in your cluster, implementing part of the Kubernetes {{< glossary_tooltip term_id="service">}} concept. --> -[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="сервісу">}}. +[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="service" text="Service">}}. @@ -25,7 +25,7 @@ the Kubernetes {{< glossary_tooltip term_id="service">}} concept. network communication to your Pods from network sessions inside or outside of your cluster. --> -kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Подів всередині чи поза межами кластера. +kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Pod'ів всередині чи поза межами кластера. -Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. +Компонент площини управління, що відстежує створені Pod'и, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. diff --git a/content/uk/docs/reference/glossary/kubelet.md b/content/uk/docs/reference/glossary/kubelet.md index c1178ddf45..d46d18610c 100644 --- a/content/uk/docs/reference/glossary/kubelet.md +++ b/content/uk/docs/reference/glossary/kubelet.md @@ -6,7 +6,7 @@ full_link: /docs/reference/generated/kubelet # short_description: > # An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. short_description: > - Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах. + Агент, що запущений на кожному вузлі кластера. 
Забезпечує запуск і роботу контейнерів у Pod'ах. aka: tags: @@ -14,7 +14,7 @@ tags: - core-object --- -Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах. +Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Pod'ах. diff --git a/content/uk/docs/reference/glossary/pod.md b/content/uk/docs/reference/glossary/pod.md index b205c0bd1d..7ee87bb1af 100644 --- a/content/uk/docs/reference/glossary/pod.md +++ b/content/uk/docs/reference/glossary/pod.md @@ -1,13 +1,13 @@ --- # title: Pod -title: Под +title: Pod id: pod date: 2018-04-12 full_link: /docs/concepts/workloads/pods/pod-overview/ # short_description: > # The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster. short_description: > - Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу контейнерів, що запущені у вашому кластері. + Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу контейнерів, що запущені у вашому кластері. aka: tags: @@ -15,9 +15,9 @@ tags: - fundamental --- - Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері. + Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері. -Як правило, в одному Поді запускається один контейнер. У Поді також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Подами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}. +Як правило, в одному Pod'і запускається один контейнер. У Pod'і також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Pod'ами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}. diff --git a/content/uk/docs/reference/glossary/service.md b/content/uk/docs/reference/glossary/service.md index 91407b199a..d813d7056c 100755 --- a/content/uk/docs/reference/glossary/service.md +++ b/content/uk/docs/reference/glossary/service.md @@ -1,11 +1,11 @@ --- -title: Сервіс +title: Service id: service date: 2018-04-12 full_link: /docs/concepts/services-networking/service/ # A way to expose an application running on a set of Pods as a network service. short_description: > - Спосіб відкрити доступ до застосунку, що запущений на декількох Подах у вигляді мережевої служби. + Спосіб відкрити доступ до застосунку, що запущений на декількох Pod'ах у вигляді мережевої служби. aka: tags: @@ -15,10 +15,10 @@ tags: -Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Подів" term_id="pod" >}} у вигляді мережевої служби. +Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Pod'ів" term_id="pod" >}} у вигляді мережевої служби. -Переважно група Подів визначається як Сервіс за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Подів змінить групу Подів, визначених селектором. Сервіс забезпечує надходження мережевого трафіка до актуальної групи Подів для підтримки робочого навантаження. +Переважно група Pod'ів визначається як Service за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Pod'ів змінить групу Pod'ів, визначених селектором. 
Service забезпечує надходження мережевого трафіка до актуальної групи Pod'ів для підтримки робочого навантаження. diff --git a/content/uk/docs/setup/best-practices/_index.md b/content/uk/docs/setup/best-practices/_index.md new file mode 100644 index 0000000000..696ad54d32 --- /dev/null +++ b/content/uk/docs/setup/best-practices/_index.md @@ -0,0 +1,5 @@ +--- +#title: Best practices +title: Найкращі практики +weight: 40 +--- diff --git a/content/uk/docs/setup/learning-environment/_index.md b/content/uk/docs/setup/learning-environment/_index.md new file mode 100644 index 0000000000..879c4eb9f6 --- /dev/null +++ b/content/uk/docs/setup/learning-environment/_index.md @@ -0,0 +1,5 @@ +--- +# title: Learning environment +title: Навчальне оточення +weight: 20 +--- diff --git a/content/uk/docs/setup/production-environment/_index.md b/content/uk/docs/setup/production-environment/_index.md new file mode 100644 index 0000000000..81463114fd --- /dev/null +++ b/content/uk/docs/setup/production-environment/_index.md @@ -0,0 +1,5 @@ +--- +#title: Production environment +title: Прод оточення +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/on-premises-vm/_index.md b/content/uk/docs/setup/production-environment/on-premises-vm/_index.md new file mode 100644 index 0000000000..d672032037 --- /dev/null +++ b/content/uk/docs/setup/production-environment/on-premises-vm/_index.md @@ -0,0 +1,5 @@ +--- +# title: On-Premises VMs +title: Менеджери віртуалізації +weight: 40 +--- diff --git a/content/uk/docs/setup/production-environment/tools/_index.md b/content/uk/docs/setup/production-environment/tools/_index.md new file mode 100644 index 0000000000..8891b3ef4f --- /dev/null +++ b/content/uk/docs/setup/production-environment/tools/_index.md @@ -0,0 +1,5 @@ +--- +# title: Installing Kubernetes with deployment tools +title: Встановлення Kubernetes за допомогою інструментів розгортання +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md b/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md new file mode 100644 index 0000000000..b5cd8e5426 --- /dev/null +++ b/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md @@ -0,0 +1,5 @@ +--- +# title: "Bootstrapping clusters with kubeadm" +title: "Запуск кластерів з kubeadm" +weight: 10 +--- diff --git a/content/uk/docs/setup/production-environment/turnkey/_index.md b/content/uk/docs/setup/production-environment/turnkey/_index.md new file mode 100644 index 0000000000..4251bca2c4 --- /dev/null +++ b/content/uk/docs/setup/production-environment/turnkey/_index.md @@ -0,0 +1,5 @@ +--- +# title: Turnkey Cloud Solutions +title: Хмарні рішення під ключ +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/windows/_index.md b/content/uk/docs/setup/production-environment/windows/_index.md new file mode 100644 index 0000000000..a0d1574f6c --- /dev/null +++ b/content/uk/docs/setup/production-environment/windows/_index.md @@ -0,0 +1,5 @@ +--- +# title: "Windows in Kubernetes" +title: "Windows в Kubernetes" +weight: 50 +--- diff --git a/content/uk/docs/setup/release/_index.md b/content/uk/docs/setup/release/_index.md new file mode 100755 index 0000000000..5fed3d9c90 --- /dev/null +++ b/content/uk/docs/setup/release/_index.md @@ -0,0 +1,5 @@ +--- +#title: "Release notes and version skew" +title: "Зміни в релізах нових версій" +weight: 10 +--- diff --git a/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md 
b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md deleted file mode 100644 index 90dbfdb914..0000000000 --- a/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md +++ /dev/null @@ -1,293 +0,0 @@ ---- -reviewers: -- fgrzadkowski -- jszczepkowski -- directxman12 -title: Horizontal Pod Autoscaler -feature: - title: Горизонтальне масштабування - description: > - Масштабуйте ваш застосунок за допомогою простої команди, інтерфейсу користувача чи автоматично, виходячи із навантаження на ЦПУ. - -content_template: templates/concept -weight: 90 ---- - -{{% capture overview %}} - -The Horizontal Pod Autoscaler automatically scales the number of pods -in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with -[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md) -support, on some other application-provided metrics). Note that Horizontal -Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets. - -The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. -The resource determines the behavior of the controller. -The controller periodically adjusts the number of replicas in a replication controller or deployment -to match the observed average CPU utilization to the target specified by user. - -{{% /capture %}} - - -{{% capture body %}} - -## How does the Horizontal Pod Autoscaler work? - -![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg) - -The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled -by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default -value of 15 seconds). - -During each period, the controller manager queries the resource utilization against the -metrics specified in each HorizontalPodAutoscaler definition. The controller manager -obtains the metrics from either the resource metrics API (for per-pod resource metrics), -or the custom metrics API (for all other metrics). - -* For per-pod resource metrics (like CPU), the controller fetches the metrics - from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler. - Then, if a target utilization value is set, the controller calculates the utilization - value as a percentage of the equivalent resource request on the containers in - each pod. If a target raw value is set, the raw metric values are used directly. - The controller then takes the mean of the utilization or the raw value (depending on the type - of target specified) across all targeted pods, and produces a ratio used to scale - the number of desired replicas. - - Please note that if some of the pod's containers do not have the relevant resource request set, - CPU utilization for the pod will not be defined and the autoscaler will - not take any action for that metric. See the [algorithm - details](#algorithm-details) section below for more information about - how the autoscaling algorithm works. - -* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics, - except that it works with raw values, not utilization values. - -* For object metrics and external metrics, a single metric is fetched, which describes - the object in question. This metric is compared to the target - value, to produce a ratio as above. 
In the `autoscaling/v2beta2` API - version, this value can optionally be divided by the number of pods before the - comparison is made. - -The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`, -`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by -metrics-server, which needs to be launched separately. See -[metrics-server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server) -for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. - -{{< note >}} -{{< feature-state state="deprecated" for_k8s_version="1.11" >}} -Fetching metrics from Heapster is deprecated as of Kubernetes 1.11. -{{< /note >}} - -See [Support for metrics APIs](#support-for-metrics-apis) for more details. - -The autoscaler accesses corresponding scalable controllers (such as replication controllers, deployments, and replica sets) -by using the scale sub-resource. Scale is an interface that allows you to dynamically set the number of replicas and examine -each of their current states. More details on scale sub-resource can be found -[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource). - -### Algorithm Details - -From the most basic perspective, the Horizontal Pod Autoscaler controller -operates on the ratio between desired metric value and current metric -value: - -``` -desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )] -``` - -For example, if the current metric value is `200m`, and the desired value -is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 == -2.0` If the current value is instead `50m`, we'll halve the number of -replicas, since `50.0 / 100.0 == 0.5`. We'll skip scaling if the ratio is -sufficiently close to 1.0 (within a globally-configurable tolerance, from -the `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1). - -When a `targetAverageValue` or `targetAverageUtilization` is specified, -the `currentMetricValue` is computed by taking the average of the given -metric across all Pods in the HorizontalPodAutoscaler's scale target. -Before checking the tolerance and deciding on the final values, we take -pod readiness and missing metrics into consideration, however. - -All Pods with a deletion timestamp set (i.e. Pods in the process of being -shut down) and all failed Pods are discarded. - -If a particular Pod is missing metrics, it is set aside for later; Pods -with missing metrics will be used to adjust the final scaling amount. - -When scaling on CPU, if any pod has yet to become ready (i.e. it's still -initializing) *or* the most recent metric point for the pod was before it -became ready, that pod is set aside as well. - -Due to technical constraints, the HorizontalPodAutoscaler controller -cannot exactly determine the first time a pod becomes ready when -determining whether to set aside certain CPU metrics. Instead, it -considers a Pod "not yet ready" if it's unready and transitioned to -unready within a short, configurable window of time since it started. -This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30 -seconds. Once a pod has become ready, it considers any transition to -ready to be the first if it occurred within a longer, configurable time -since it started. 
This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its -default is 5 minutes. - -The `currentMetricValue / desiredMetricValue` base scale ratio is then -calculated using the remaining pods not set aside or discarded from above. - -If there were any missing metrics, we recompute the average more -conservatively, assuming those pods were consuming 100% of the desired -value in case of a scale down, and 0% in case of a scale up. This dampens -the magnitude of any potential scale. - -Furthermore, if any not-yet-ready pods were present, and we would have -scaled up without factoring in missing metrics or not-yet-ready pods, we -conservatively assume the not-yet-ready pods are consuming 0% of the -desired metric, further dampening the magnitude of a scale up. - -After factoring in the not-yet-ready pods and missing metrics, we -recalculate the usage ratio. If the new ratio reverses the scale -direction, or is within the tolerance, we skip scaling. Otherwise, we use -the new ratio to scale. - -Note that the *original* value for the average utilization is reported -back via the HorizontalPodAutoscaler status, without factoring in the -not-yet-ready pods or missing metrics, even when the new usage ratio is -used. - -If multiple metrics are specified in a HorizontalPodAutoscaler, this -calculation is done for each metric, and then the largest of the desired -replica counts is chosen. If any of these metrics cannot be converted -into a desired replica count (e.g. due to an error fetching the metrics -from the metrics APIs) and a scale down is suggested by the metrics which -can be fetched, scaling is skipped. This means that the HPA is still capable -of scaling up if one or more metrics give a `desiredReplicas` greater than -the current value. - -Finally, just before HPA scales the target, the scale recommendation is recorded. The -controller considers all recommendations within a configurable window choosing the -highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. -This means that scaledowns will occur gradually, smoothing out the impact of rapidly -fluctuating metric values. - -## API Object - -The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group. -The current stable version, which only includes support for CPU autoscaling, -can be found in the `autoscaling/v1` API version. - -The beta version, which includes support for scaling on memory and custom metrics, -can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2` -are preserved as annotations when working with `autoscaling/v1`. - -More details about the API object can be found at -[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). - -## Support for Horizontal Pod Autoscaler in kubectl - -Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`. -We can create a new autoscaler using `kubectl create` command. -We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`. -Finally, we can delete an autoscaler using `kubectl delete hpa`. - -In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler. 
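For comparison, the same kind of autoscaler can also be written declaratively. Here is a minimal sketch in the stable `autoscaling/v1` API, equivalent to the `kubectl autoscale` command shown next (the target replica set name `foo` is just the placeholder used in that example):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet             # could equally target a Deployment or ReplicationController
    name: foo
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```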
-For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80` -will create an autoscaler for replication set *foo*, with target CPU utilization set to `80%` -and the number of replicas between 2 and 5. -The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). - - -## Autoscaling during rolling update - -Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly, -or by using the deployment object, which manages the underlying replica sets for you. -Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object, -it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets. - -Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers, -i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`). -The reason this doesn't work is that when rolling update creates a new replication controller, -the Horizontal Pod Autoscaler will not be bound to the new replication controller. - -## Support for cooldown/delay - -When managing the scale of a group of replicas using the Horizontal Pod Autoscaler, -it is possible that the number of replicas keeps fluctuating frequently due to the -dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*. - -Starting from v1.6, a cluster operator can mitigate this problem by tuning -the global HPA settings exposed as flags for the `kube-controller-manager` component: - -Starting from v1.12, a new algorithmic update removes the need for the -upscale delay. - -- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a - duration that specifies how long the autoscaler has to wait before another - downscale operation can be performed after the current one has completed. - The default value is 5 minutes (`5m0s`). - -{{< note >}} -When tuning these parameter values, a cluster operator should be aware of the possible -consequences. If the delay (cooldown) value is set too long, there could be complaints -that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if -the delay value is set too short, the scale of the replicas set may keep thrashing as -usual. -{{< /note >}} - -## Support for multiple metrics - -Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API -version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod -Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the -proposed scales will be used as the new scale. - -## Support for custom metrics - -{{< note >}} -Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations. -Support for these annotations was removed in Kubernetes 1.6 in favor of the new autoscaling API. 
While the old method for collecting -custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former -annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller. -{{< /note >}} - -Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler. -You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API. -Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics. - -See [Support for metrics APIs](#support-for-metrics-apis) for the requirements. - -## Support for metrics APIs - -By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these -APIs, cluster administrators must ensure that: - -* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled. - -* The corresponding APIs are registered: - - * For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server). - It can be launched as a cluster addon. - - * For custom metrics, this is the `custom.metrics.k8s.io` API. It's provided by "adapter" API servers provided by metrics solution vendors. - Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api). - If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started. - - * For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above. - -* The `--horizontal-pod-autoscaler-use-rest-clients` is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated. - -For more information on these different metrics paths and how they differ please see the relevant design proposals for -[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md), -[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) -and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md). - -For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics) -and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). - -{{% /capture %}} - -{{% capture whatsnext %}} - -* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md). -* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). -* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). 
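The sections above describe the `autoscaling/v2beta2` API and the metrics pipeline only in prose, so a minimal, hedged sketch may help tie them together. The Deployment name `php-apache`, the 50% CPU target and the replica bounds are illustrative assumptions rather than values taken from this page, and metrics-server (or another `metrics.k8s.io` provider) is assumed to be running.

```shell
# Create an autoscaling/v2beta2 HorizontalPodAutoscaler for an existing Deployment.
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache       # assumed to exist already
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF

# Inspect the autoscaler and the current vs. target metric values it reports.
kubectl get hpa php-apache
kubectl describe hpa php-apache
```

`kubectl describe hpa php-apache` then shows the observed and desired utilization, which is the ratio the algorithm description above works from.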
- -{{% /capture %}} diff --git a/content/uk/docs/tutorials/_index.md b/content/uk/docs/tutorials/_index.md index ad03de23df..5c30bc87ff 100644 --- a/content/uk/docs/tutorials/_index.md +++ b/content/uk/docs/tutorials/_index.md @@ -41,13 +41,13 @@ Before walking through each tutorial, you may want to bookmark the * [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) -## Застосунки без стану (Stateless Applications) +## Застосунки без стану (Stateless Applications) {#застосунки-без-стану} * [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) * [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) -## Застосунки зі станом (Stateful Applications) +## Застосунки зі станом (Stateful Applications) {#застосунки-зі-станом} * [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) diff --git a/content/uk/docs/tutorials/hello-minikube.md b/content/uk/docs/tutorials/hello-minikube.md index 356426112b..b887c8f4cd 100644 --- a/content/uk/docs/tutorials/hello-minikube.md +++ b/content/uk/docs/tutorials/hello-minikube.md @@ -109,12 +109,12 @@ tutorial has only one Container. A Kubernetes Pod and restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. --> -[*Под*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Под має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Пода і перезапускає контейнер Пода, якщо контейнер перестає працювати. Створювати і масштабувати Поди рекомендується за допомогою Deployment'ів. +[*Pod*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Pod має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Pod'а і перезапускає контейнер Pod'а, якщо контейнер перестає працювати. Створювати і масштабувати Pod'и рекомендується за допомогою Deployment'ів. -1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Подом. Под запускає контейнер на основі наданого Docker образу. +1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Pod'ом. Pod запускає контейнер на основі наданого Docker образу. ```shell kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node @@ -139,7 +139,7 @@ Pod runs a Container based on the provided Docker image. -3. Перегляньте інформацію про запущені Поди: +3. Перегляньте інформацію про запущені Pod'и: ```shell kubectl get pods @@ -176,18 +176,18 @@ Pod runs a Container based on the provided Docker image. -## Створення Сервісу +## Створення Service -За умовчанням, Под доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Под необхідно відкрити як Kubernetes [*Сервіс*](/docs/concepts/services-networking/service/). +За умовчанням, Pod доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. 
Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Pod необхідно відкрити як Kubernetes [*Service*](/docs/concepts/services-networking/service/). -1. Відкрийте Под для публічного доступу з інтернету за допомогою команди `kubectl expose`: +1. Відкрийте Pod для публічного доступу з інтернету за допомогою команди `kubectl expose`: ```shell kubectl expose deployment hello-node --type=LoadBalancer --port=8080 @@ -196,11 +196,11 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). - Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Сервісу за межами кластера. + Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Service за межами кластера. -2. Перегляньте інформацію за Сервісом, який ви щойно створили: +2. Перегляньте інформацію про Service, який ви щойно створили: ```shell kubectl get services @@ -221,7 +221,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). the `LoadBalancer` type makes the Service accessible through the `minikube service` command. --> - Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Сервісу надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Сервіс доступним ззовні за допомогою команди `minikube service`. + Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Service надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Service доступним ззовні за допомогою команди `minikube service`. @@ -301,7 +301,7 @@ Minikube має ряд вбудованих {{< glossary_tooltip text="розш -3. Перегляньте інформацію про Под і Сервіс, які ви щойно створили: +3. Перегляньте інформацію про Pod і Service, які ви щойно створили: ```shell kubectl get pod,svc -n kube-system @@ -389,6 +389,6 @@ minikube delete * Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/). -* Дізнайтеся більше про [об'єкти сервісу](/docs/concepts/services-networking/service/). +* Дізнайтеся більше про [об'єкти Service](/docs/concepts/services-networking/service/). {{% /capture %}} diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index ce9229ca85..3cdf8dfd93 100644 --- a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -43,16 +43,16 @@ weight: 10

-->

- Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Поди для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Поди по окремих вузлах кластера.
+ Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Pod'и для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Pod'и по окремих вузлах кластера.

-

Після створення Поди застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Под, зупинив роботу або був видалений, Deployment контролер переміщає цей Под на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.

+

Після створення Pod'и застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Pod, зупинив роботу або був видалений, Deployment контролер переміщає цей Pod на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.

-

До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Подів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.

+

До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Pod'ів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.

@@ -74,7 +74,7 @@ weight: 10

-->

- Deployment відповідає за створення і оновлення Подів для вашого застосунку
+ Deployment відповідає за створення і оновлення Pod'ів для вашого застосунку
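A short, hedged sketch of the Deployment workflow described in this module; the Deployment name `nginx-demo` and the `nginx` image are placeholders rather than values used by the tutorial.

```shell
# Create a Deployment that tells Kubernetes how to create and update Pods.
kubectl create deployment nginx-demo --image=nginx

# The control plane schedules the Deployment's Pod(s) onto available nodes.
kubectl get deployments
kubectl get pods -o wide
```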

diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html index 93899fd77f..b40ac6e1ef 100644 --- a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -1,5 +1,5 @@ --- -title: Ознайомлення з Подами і вузлами (nodes) +title: Ознайомлення з Pod'ами і вузлами (nodes) weight: 10 --- @@ -25,7 +25,7 @@ weight: 10
    -
  • Дізнатися, що таке Поди Kubernetes.
  • +
  • Дізнатися, що таке Pod'и Kubernetes.
  • Дізнатися, що таке вузли Kubernetes.
  • @@ -38,35 +38,35 @@ weight: 10
    -

    Поди Kubernetes

    +

    Pod'и Kubernetes

    -

    Коли ви створили Deployment у модулі 2, Kubernetes створив Под, щоб розмістити ваш застосунок. Под - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:

    +

    Коли ви створили Deployment у модулі 2, Kubernetes створив Pod, щоб розмістити ваш застосунок. Pod - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:

    • Спільні сховища даних, або Volumes
    • -
    • Мережа, адже кожен Под у кластері має унікальну IP-адресу
    • +
    • Мережа, адже кожен Pod у кластері має унікальну IP-адресу
    • Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів
    -

    Под моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Поді може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Пода мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.

    +

    Pod моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Pod'і може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Pod'а мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.

    -

    Под є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Поди вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Под прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Поди розподіляються по інших доступних вузлах кластера.

    +

    Pod є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Pod'и вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Pod прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Pod'и розподіляються по інших доступних вузлах кластера.

    Зміст:

      -
    • Поди
    • +
    • Pod'и
    • Вузли
    • Основні команди kubectl
    @@ -77,7 +77,7 @@ weight: 10

    -->

    - Под - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити. + Pod - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити.

    @@ -86,7 +86,7 @@ weight: 10
    -

    Узагальнена схема Подів

    +

    Узагальнена схема Pod'ів

    @@ -104,7 +104,7 @@ weight: 10

    Вузли

    -

    Под завжди запускається на вузлі. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Подів. Kubernetes master автоматично розподіляє Поди по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.

    +

    Pod завжди запускається на вузлі. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Pod'ів. Kubernetes master автоматично розподіляє Pod'и по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.

    @@ -112,7 +112,7 @@ weight: 10
      -
    • kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Поди і контейнери, запущені на машині.
    • +
    • kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Pod'и і контейнери, запущені на машині.
    • оточення для контейнерів (таке як Docker, rkt), що забезпечує завантаження образу контейнера з реєстру, розпакування контейнера і запуск застосунку.
    • @@ -124,7 +124,7 @@ weight: 10
      -

      Контейнери повинні бути разом в одному Поді, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск.

      +

      Контейнери повинні бути разом в одному Pod'і, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск.

      @@ -161,10 +161,10 @@ weight: 10
    • kubectl describe - показати детальну інформацію про ресурс
    • -
    • kubectl logs - вивести логи контейнера, розміщеного в Поді
    • +
    • kubectl logs - вивести логи контейнера, розміщеного в Pod'і
    • -
    • kubectl exec - виконати команду в контейнері, розміщеному в Поді
    • +
    • kubectl exec - виконати команду в контейнері, розміщеному в Pod'і
    -

    Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Подів.

    +

    Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Pod'ів.
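As a hedged companion to the kubectl commands listed in this module (`kubectl get`, `describe`, `logs`, `exec`), here is one way they are commonly chained together; the jsonpath helper simply picks the first Pod in the current namespace, and a single-container Pod is assumed.

```shell
# Pick the first Pod in the current namespace (illustrative helper only).
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')

kubectl get pods                      # list resources
kubectl describe pod "$POD_NAME"      # detailed information about one resource
kubectl logs "$POD_NAME"              # logs from the Pod's container
kubectl exec "$POD_NAME" -- env       # run a command inside the container
```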

    diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html index 0ddad3b8da..61521e394b 100644 --- a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -41,35 +41,35 @@ weight: 10 -

    Поди Kubernetes "смертні" і мають власний життєвий цикл. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Поди, запущені на ньому. ReplicaSet здатна динамічно повернути кластер до бажаного стану шляхом створення нових Подів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Поду. Водночас, кожний Под у Kubernetes кластері має унікальну IP-адресу, навіть Поди на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Подами для того, щоб ваші застосунки продовжували працювати.

    +

    Pod'и Kubernetes "смертні" і мають власний життєвий цикл. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Pod'и, запущені на ньому. ReplicaSet здатна динамічно повернути кластер до бажаного стану шляхом створення нових Pod'ів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Pod'а. Водночас, кожний Pod у Kubernetes кластері має унікальну IP-адресу, навіть Pod'и на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Pod'ами для того, щоб ваші застосунки продовжували працювати.

    -

    Сервіс у Kubernetes - це абстракція, що визначає логічний набір Подів і політику доступу до них. Сервіси уможливлюють слабку зв'язаність між залежними Подами. Для визначення Сервісу використовують YAML-файл (рекомендовано) або JSON, як для решти об'єктів Kubernetes. Набір Подів, призначених для Сервісу, зазвичай визначається через LabelSelector (нижче пояснюється, чому параметр selector іноді не включають у специфікацію сервісу).

    +

    Service у Kubernetes - це абстракція, що визначає логічний набір Pod'ів і політику доступу до них. Services уможливлюють слабку зв'язаність між залежними Pod'ами. Для визначення Service використовують YAML-файл (рекомендовано) або JSON, як для решти об'єктів Kubernetes. Набір Pod'ів, призначених для Service, зазвичай визначається через LabelSelector (нижче пояснюється, чому параметр selector іноді не включають у специфікацію Service).

    -

    Попри те, що кожен Под має унікальний IP, ці IP-адреси не видні за межами кластера без Сервісу. Сервіси уможливлюють надходження трафіка до ваших застосунків. Відкрити Сервіс можна по-різному, вказавши потрібний type у ServiceSpec:

    +

    Попри те, що кожен Pod має унікальний IP, ці IP-адреси не видні за межами кластера без Service. Services уможливлюють надходження трафіка до ваших застосунків. Відкрити Service можна по-різному, вказавши потрібний type у ServiceSpec:

      -
    • ClusterIP (типове налаштування) - відкриває доступ до Сервісу у кластері за внутрішнім IP. Цей тип робить Сервіс доступним лише у межах кластера.
    • +
    • ClusterIP (типове налаштування) - відкриває доступ до Service у кластері за внутрішнім IP. Цей тип робить Service доступним лише у межах кластера.
    • -
    • NodePort - відкриває доступ до Сервісу на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Сервіс доступним поза межами кластера, використовуючи <NodeIP>:<NodePort>. Є надмножиною відносно ClusterIP.
    • +
    • NodePort - відкриває доступ до Service на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Service доступним поза межами кластера, використовуючи <NodeIP>:<NodePort>. Є надмножиною відносно ClusterIP.
    • -
    • LoadBalancer - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Сервісу статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.
    • +
    • LoadBalancer - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Service статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.
    • -
    • ExternalName - відкриває доступ до Сервісу, використовуючи довільне ім'я (визначається параметром externalName у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії kube-dns 1.7 і вище.
    • +
    • ExternalName - відкриває доступ до Service, використовуючи довільне ім'я (визначається параметром externalName у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії kube-dns 1.7 і вище.
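A hedged illustration of how the Service types above are typically exercised with `kubectl expose`; the Deployment name `hello-node`, the port `8080` and the extra Service names are illustrative assumptions.

```shell
# ClusterIP is the default type; NodePort and LoadBalancer open the Service up further.
kubectl expose deployment hello-node --port=8080                                   # ClusterIP
kubectl expose deployment hello-node --port=8080 --type=NodePort --name=hello-np   # NodePort
kubectl expose deployment hello-node --port=8080 --type=LoadBalancer --name=hello-lb

kubectl get services
```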
    -

    Більше інформації про різні типи Сервісів ви знайдете у навчальному матеріалі Використання вихідної IP-адреси. Дивіться також Поєднання застосунків з Сервісами.

    +

    Більше інформації про різні типи Services ви знайдете у навчальному матеріалі Використання вихідної IP-адреси. Дивіться також Поєднання застосунків з Services.

    -

    Також зауважте, що для деяких сценаріїв використання Сервісів параметр selector не задається у специфікації Сервісу. Сервіс, створений без визначення параметра selector, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Сервіс на конкретні кінцеві точки (endpoints). Інший випадок, коли селектор може бути не потрібний - використання строго заданого параметра type: ExternalName.

    +

    Також зауважте, що для деяких сценаріїв використання Services параметр selector не задається у специфікації Service. Service, створений без визначення параметра selector, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Service на конкретні кінцеві точки (endpoints). Інший випадок, коли Селектор може бути не потрібний - використання строго заданого параметра type: ExternalName.

    @@ -79,10 +79,10 @@ weight: 10
      -
    • Відкриття Подів для зовнішнього трафіка
    • +
    • Відкриття Pod'ів для зовнішнього трафіка
    • -
    • Балансування навантаження трафіка між Подами
    • +
    • Балансування навантаження трафіка між Pod'ами
    • Використання міток
    • @@ -91,7 +91,7 @@ weight: 10
      -

      Сервіс Kubernetes - це шар абстракції, який визначає логічний набір Подів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Подів.

      +

      Service Kubernetes - це шар абстракції, який визначає логічний набір Pod'ів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Pod'ів.

    @@ -101,7 +101,7 @@ weight: 10
    -

    Сервіси і мітки

    +

    Services і мітки

    @@ -115,10 +115,10 @@ weight: 10
    -

    Сервіс маршрутизує трафік між Подами, що входять до його складу. Сервіс - це абстракція, завдяки якій Поди в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Сервіси в Kubernetes здійснюють виявлення і маршрутизацію між залежними Подами (як наприклад, фронтенд- і бекенд-компоненти застосунку).

    +

    Service маршрутизує трафік між Pod'ами, що входять до його складу. Service - це абстракція, завдяки якій Pod'и в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Services в Kubernetes здійснюють виявлення і маршрутизацію між залежними Pod'ами (як наприклад, фронтенд- і бекенд-компоненти застосунку).

    -

    Сервіси співвідносяться з набором Подів за допомогою міток і селекторів -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:

    +

    Services співвідносяться з набором Pod'ів за допомогою міток і Селекторів -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:

      @@ -136,7 +136,7 @@ weight: 10
      -

      Ви можете створити Сервіс одночасно із Deployment, виконавши команду
      --expose в kubectl.

      +

      Ви можете створити Service одночасно із Deployment, виконавши команду
      --expose в kubectl.

    @@ -153,7 +153,7 @@ weight: 10
    -

    Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Сервісу і прикріпимо мітки.

    +

    Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Service і прикріпимо мітки.
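To complement the description of labels and selectors above, a small hedged sketch; the label key/value, the first-Pod helper and the `hello-node` Service name are placeholders.

```shell
# Attach a label to a running Pod and then select Pods by that label.
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
kubectl label pods "$POD_NAME" version=v1
kubectl get pods -l version=v1

# A Service created with `kubectl expose` matches its Pods through the same label mechanism.
kubectl describe services hello-node
```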


    diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html index c2318da8ed..0add32ddc1 100644 --- a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -1,5 +1,5 @@ --- -title: Запуск вашого застосунку на декількох Подах +title: Запуск вашого застосунку на декількох Pod'ах weight: 10 --- @@ -35,7 +35,7 @@ weight: 10 -

    У попередніх модулях ми створили Deployment і відкрили його для зовнішнього трафіка за допомогою Сервісу. Deployment створив лише один Под для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.

    +

    У попередніх модулях ми створили Deployment і відкрили його для зовнішнього трафіка за допомогою Service. Deployment створив лише один Pod для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.

    @@ -56,7 +56,7 @@ weight: 10
    -

    Кількість Подів можна вказати одразу при створенні Deployment'а за допомогою параметра --replicas, під час запуску команди kubectl run

    +

    Кількість Pod'ів можна вказати одразу при створенні Deployment'а за допомогою параметра --replicas, під час запуску команди kubectl run

    @@ -104,11 +104,11 @@ weight: 10 -

    Масштабування Deployment'а забезпечує створення нових Подів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Подів відповідно до нового бажаного стану. Kubernetes також підтримує автоматичне масштабування, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Подів у визначеному Deployment'і.

    +

    Масштабування Deployment'а забезпечує створення нових Pod'ів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Pod'ів відповідно до нового бажаного стану. Kubernetes також підтримує автоматичне масштабування, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Pod'ів у визначеному Deployment'і.

    -

    Запустивши застосунок на декількох Подах, необхідно розподілити між ними трафік. Сервіси мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Подами відкритого Deployment'а. Сервіси безперервно моніторять запущені Поди за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Поди.

    +

    Запустивши застосунок на декількох Pod'ах, необхідно розподілити між ними трафік. Services мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Pod'ами відкритого Deployment'а. Services безперервно моніторять запущені Pod'и за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Pod'и.
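A hedged sketch of the scale-out flow described above; the Deployment name `hello-node` and the replica counts are illustrative.

```shell
# Scale the Deployment out to four replicas, then watch the Pods spread across nodes.
kubectl scale deployments/hello-node --replicas=4
kubectl get deployments
kubectl get pods -o wide

# Scaling to zero removes all Pods managed by the Deployment.
kubectl scale deployments/hello-node --replicas=0
```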

    diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html index 5630eeaf81..089e567c35 100644 --- a/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html +++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html @@ -36,12 +36,12 @@ weight: 10 -

    Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. Послідовні оновлення дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Подів іншими. Нові Поди розподіляються по вузлах з доступними ресурсами.

    +

    Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. Послідовні оновлення дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Pod'ів іншими. Нові Pod'и розподіляються по вузлах з доступними ресурсами.

    -

    У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Подах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Подів, недоступних під час оновлення, і максимальна кількість нових Подів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Подів) еквіваленті. +

    У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Pod'ах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Pod'ів, недоступних під час оновлення, і максимальна кількість нових Pod'ів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Pod'ів) еквіваленті. У Kubernetes оновлення версіонуються, тому кожне оновлення Deployment'а можна відкотити до попередньої (стабільної) версії.

    @@ -59,7 +59,7 @@ weight: 10
    -

    Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Подів іншими.

    +

    Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Pod'ів іншими.

    @@ -115,7 +115,7 @@ weight: 10 -

    Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Сервіс розподілятиме трафік лише на доступні Поди. Під доступним мається на увазі Под, готовий до експлуатації користувачами застосунку.

    +

    Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и. Під доступним мається на увазі Pod, готовий до експлуатації користувачами застосунку.

    @@ -138,7 +138,7 @@ weight: 10
    -

    Якщо Deployment "відкритий у світ", то під час оновлення сервіс розподілятиме трафік лише на доступні Поди.

    +

    Якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и.
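A hedged sketch of a rolling update and rollback as described above; the Deployment name, container name and image tag are placeholders.

```shell
# Roll out a new image version; Pods are replaced incrementally, without downtime.
kubectl set image deployments/hello-node hello-node=hello-node:v2
kubectl rollout status deployments/hello-node

# If the new version misbehaves, revert to the previous revision.
kubectl rollout undo deployments/hello-node
```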

    diff --git a/static/_redirects b/static/_redirects index 13cfce2d49..9bbf580a54 100644 --- a/static/_redirects +++ b/static/_redirects @@ -16,6 +16,7 @@ /pl/docs/ /pl/docs/home/ 301! /pt/docs/ /pt/docs/home/ 301! /ru/docs/ /ru/docs/home/ 301! +/uk/docs/ /uk/docs/home/ 301! /vi/docs/ /vi/docs/home/ 301! /zh/docs/ /zh/docs/home/ 301! /blog/2018/03/kubernetes-1.10-stabilizing-storage-security-networking/ /blog/2018/03/26/kubernetes-1.10-stabilizing-storage-security-networking/ 301! From c0950b5094ced6e3c8cac519c77ddb598661c43f Mon Sep 17 00:00:00 2001 From: Alpha Date: Sat, 28 Mar 2020 18:07:10 +0800 Subject: [PATCH 073/105] update kubectl create commend --- .../en/docs/reference/kubectl/conventions.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/content/en/docs/reference/kubectl/conventions.md b/content/en/docs/reference/kubectl/conventions.md index bfa40b3be2..598db3bfcb 100644 --- a/content/en/docs/reference/kubectl/conventions.md +++ b/content/en/docs/reference/kubectl/conventions.md @@ -33,6 +33,25 @@ For `kubectl run` to satisfy infrastructure as code: * Switch to configuration files checked into source control for features that are needed, but not expressible via `kubectl run` flags. #### Generators +You can generate the following resources with a kubectl command, `kubectl create --dry-run -o yaml`: +``` + clusterrole Create a ClusterRole. + clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole. + configmap Create a configmap from a local file, directory or literal value. + cronjob Create a cronjob with the specified name. + deployment Create a deployment with the specified name. + job Create a job with the specified name. + namespace Create a namespace with the specified name. + poddisruptionbudget Create a pod disruption budget with the specified name. + priorityclass Create a priorityclass with the specified name. + quota Create a quota with the specified name. + role Create a role with single rule. + rolebinding Create a RoleBinding for a particular Role or ClusterRole. + secret Create a secret using specified subcommand. + service Create a service using specified subcommand. + serviceaccount Create a service account with the specified name. +``` + You can create the following resources using `kubectl run` with the `--generator` flag: From 67b6941cbdb5021862f49b5e570fa48ccdcc2f88 Mon Sep 17 00:00:00 2001 From: Vinicius Barbosa Date: Wed, 8 Apr 2020 11:20:25 -0300 Subject: [PATCH 074/105] Update content/pt/docs/templates/feature-state-alpha.txt Co-Authored-By: Jhon Mike --- content/pt/docs/templates/feature-state-alpha.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt/docs/templates/feature-state-alpha.txt b/content/pt/docs/templates/feature-state-alpha.txt index f013bc3c5a..891096f123 100644 --- a/content/pt/docs/templates/feature-state-alpha.txt +++ b/content/pt/docs/templates/feature-state-alpha.txt @@ -1,7 +1,7 @@ Atualmente, esse recurso está no estado *alpha*, o que significa: * Os nomes das versões contêm alfa (ex. v1alpha1). -* Pode estar bugado. A ativação do recurso pode expor bugs. Desabilitado por padrão. +* Pode estar com bugs. A ativação do recurso pode expor bugs. Desabilitado por padrão. * O suporte ao recurso pode ser retirado a qualquer momento sem aviso prévio. * A API pode mudar de maneiras incompatíveis em uma versão de software posterior sem aviso prévio. 
* Recomendado para uso apenas em clusters de teste de curta duração, devido ao aumento do risco de erros e falta de suporte a longo prazo. From cb4ad13b5ea2c4ab57303535af150651387326a3 Mon Sep 17 00:00:00 2001 From: Senthil Kumar Sekar Date: Wed, 8 Apr 2020 23:43:18 +0530 Subject: [PATCH 075/105] Update _index.md Typo correction. --- content/en/docs/contribute/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index 606a361c0c..16eaee6039 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -58,7 +58,7 @@ roles and permissions. ## Get involved with SIG Docs -[SIG Docs](/docs/contribute/participating/) is the group of contributors who publish and maintain Kubernetes documentation and the webwsite. Getting involved with SIG Docs is a great way for Kubernetes contributors (feature development or otherwise) to have a large impact on the Kubernetes project. +[SIG Docs](/docs/contribute/participating/) is the group of contributors who publish and maintain Kubernetes documentation and the website. Getting involved with SIG Docs is a great way for Kubernetes contributors (feature development or otherwise) to have a large impact on the Kubernetes project. SIG Docs communicates with different methods: @@ -74,4 +74,4 @@ SIG Docs communicates with different methods: - Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development. - Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/). -{{% /capture %}} \ No newline at end of file +{{% /capture %}} From fce5bbabfc34e4f5b5291ca99196b3a00d273043 Mon Sep 17 00:00:00 2001 From: mikejoh Date: Wed, 8 Apr 2020 20:51:24 +0200 Subject: [PATCH 076/105] Fix broken links --- content/en/docs/contribute/_index.md | 2 +- content/en/docs/contribute/localization.md | 2 +- content/en/docs/contribute/new-content/blogs-case-studies.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index 606a361c0c..a80e27390c 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -51,7 +51,7 @@ roles and permissions. ## Next steps -- Learn to [work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the repository. +- Learn to [work from a local clone](/docs/contribute/new-content/new-content/#fork-the-repo) of the repository. - Document [features in a release](/docs/contribute/new-content/new-features/). - Participate in [SIG Docs](/docs/contribute/participating/), and become a [member or reviewer](/docs/contribute/participating/#roles-and-responsibilities). - Start or help with a [localization](/docs/contribute/localization/). diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index bdd27c536e..bd561ed515 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -32,7 +32,7 @@ First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/p ### Fork and clone the repo -First, [create your own fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository. 
+First, [create your own fork](/docs/contribute/new-content/new-content/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository. Then, clone your fork and `cd` into it: diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 8bbad41840..2bfc7259ed 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -26,7 +26,7 @@ To submit a blog post, you can either: - Use the [Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSdMpMoSIrhte5omZbTE7nB84qcGBy8XnnXhDFoW0h7p2zwXrw/viewform) -- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. Create new blog posts in the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts) directory. +- [Open a pull request](/docs/contribute/new-content/new-content/#fork-the-repo) with a new blog post. Create new blog posts in the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts) directory. If you open a pull request, ensure that your blog post follows the correct naming conventions and frontmatter information: From 056ab08b5a4b67d957922f854ae9dacaae998eb3 Mon Sep 17 00:00:00 2001 From: Vinicius Barbosa Date: Wed, 8 Apr 2020 16:02:17 -0300 Subject: [PATCH 077/105] Update container-runtime.md --- content/en/docs/reference/glossary/container-runtime.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/glossary/container-runtime.md b/content/en/docs/reference/glossary/container-runtime.md index c45bed0f7b..89bd4ae7ad 100644 --- a/content/en/docs/reference/glossary/container-runtime.md +++ b/content/en/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: Container Runtime id: container-runtime date: 2019-06-05 -full_link: /docs/reference/generated/container-runtime +full_link: /docs/setup/production-environment/container-runtimes short_description: > The container runtime is the software that is responsible for running containers. From bc5ac6c7ecd641ef549126d3cc1cf5eb2b5de141 Mon Sep 17 00:00:00 2001 From: Gaurav Sofat Date: Thu, 9 Apr 2020 06:13:04 +0530 Subject: [PATCH 078/105] Modify RBAC Authorizer log message --- content/en/docs/reference/access-authn-authz/rbac.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 1ca0b98b7c..0ee1e61381 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -1182,7 +1182,7 @@ allowed by *either* the RBAC or ABAC policies is allowed. When the kube-apiserver is run with a log level of 5 or higher for the RBAC component (`--vmodule=rbac*=5` or `--v=5`), you can see RBAC denials in the API server log -(prefixed with `RBAC DENY:`). +(prefixed with `RBAC:`). You can use that information to determine which roles need to be granted to which users, groups, or service accounts. 
Once you have [granted roles to service accounts](#service-account-permissions) and workloads From 128f1f2ae2780ccec5f6cc9b708b651bf11471dc Mon Sep 17 00:00:00 2001 From: Amim Knabben Date: Wed, 8 Apr 2020 22:49:14 -0400 Subject: [PATCH 079/105] Exec on correct pod name --- .../docs/tasks/debug-application-cluster/debug-running-pod.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index 95065ca595..a812640555 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -107,7 +107,7 @@ If you attempt to use `kubectl exec` to create a shell you will see an error because there is no shell in this container image. ```shell -kubectl exec -it pause -- sh +kubectl exec -it ephemeral-demo -- sh ``` ``` From 09b1eb29d3a5c37b0d60be31d4c56caad48c38f7 Mon Sep 17 00:00:00 2001 From: Gaurav Sofat Date: Thu, 9 Apr 2020 09:05:57 +0530 Subject: [PATCH 080/105] Update content/en/docs/reference/access-authn-authz/rbac.md Co-Authored-By: Jordan Liggitt --- content/en/docs/reference/access-authn-authz/rbac.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 0ee1e61381..eab71e4560 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -1182,7 +1182,7 @@ allowed by *either* the RBAC or ABAC policies is allowed. When the kube-apiserver is run with a log level of 5 or higher for the RBAC component (`--vmodule=rbac*=5` or `--v=5`), you can see RBAC denials in the API server log -(prefixed with `RBAC:`). +(prefixed with `RBAC`). You can use that information to determine which roles need to be granted to which users, groups, or service accounts. Once you have [granted roles to service accounts](#service-account-permissions) and workloads From 26a63656ae35325c1fe01f112d7043b4376ef14e Mon Sep 17 00:00:00 2001 From: ljnaresh Date: Thu, 9 Apr 2020 01:48:08 +0200 Subject: [PATCH 081/105] Email address placeholder localization --- i18n/uk.toml | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/i18n/uk.toml b/i18n/uk.toml index a86a980e7a..ef59e46cfa 100644 --- a/i18n/uk.toml +++ b/i18n/uk.toml @@ -71,6 +71,10 @@ other = "Чи була ця сторінка корисною?" # other = "Yes" other = "Так" +[input_placeholder_email_address] +# other = "email address" +other = "електронна адреса" + [latest_version] # other = "latest version." other = "остання версія." 
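Circling back to the RBAC guidance above (reading denials from the API server log and deciding which roles to grant), here is a hedged sketch of how an operator might verify and then fix a service account's access; the namespace `dev`, the service account `my-app` and the binding name are placeholders.

```shell
# Check whether the workload's service account can already do what it needs.
kubectl auth can-i list pods --as=system:serviceaccount:dev:my-app -n dev

# If not, grant the minimum required access, e.g. read-only rights via the built-in "view" ClusterRole.
kubectl create rolebinding my-app-view \
  --clusterrole=view \
  --serviceaccount=dev:my-app \
  -n dev
```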
From 1fff4f60dbe6a28b9899d8ea6eed04db0215b66c Mon Sep 17 00:00:00 2001
From: Dominic Yin
Date: Thu, 9 Apr 2020 16:26:32 +0800
Subject: [PATCH 082/105] update kubevirt link ref #19829

---
 content/zh/docs/concepts/cluster-administration/addons.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/zh/docs/concepts/cluster-administration/addons.md b/content/zh/docs/concepts/cluster-administration/addons.md
index be54c6f48b..c58f49b1ac 100644
--- a/content/zh/docs/concepts/cluster-administration/addons.md
+++ b/content/zh/docs/concepts/cluster-administration/addons.md
@@ -89,12 +89,12 @@ Add-ons 扩展了 Kubernetes 的功能。
 ## 基础设施

-* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) 是可以让 Kubernetes 运行虚拟机的 add-ons 。通常运行在裸机群集上。
+* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) 是可以让 Kubernetes 运行虚拟机的 add-ons 。通常运行在裸机群集上。

-Kubernetes v1.6 contains a new binary called cloud-controller-manager. cloud-controller-manager is a daemon that embeds cloud-specific control loops. These cloud-specific control loops were originally in the kube-controller-manager. Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently from the core Kubernetes code.
+Originally part of the kube-controller-manager, the cloud-controller-manager is responsible for decoupling the interoperability logic between Kubernetes and the underlying cloud infrastructure, enabling cloud providers to release features at a different pace compared to the main project.

diff --git a/content/en/docs/reference/glossary/host-aliases.md b/content/en/docs/reference/glossary/host-aliases.md
index 47bd22d433..dc848960c3 100644
--- a/content/en/docs/reference/glossary/host-aliases.md
+++ b/content/en/docs/reference/glossary/host-aliases.md
@@ -2,7 +2,7 @@ title: HostAliases
 id: HostAliases
 date: 2019-01-31
-full_link: /docs/reference/generated/kubernetes-api/v1.13/#hostalias-v1-core
+full_link: /docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core
 short_description: >
   A HostAliases is a mapping between the IP address and hostname to be injected into a Pod's hosts file.
@@ -14,4 +14,4 @@ tags:
-[HostAliases](/docs/reference/generated/kubernetes-api/v1.13/#hostalias-v1-corev) is an optional list of hostnames and IP addresses that will be injected into the Pod's hosts file if specified. This is only valid for non-hostNetwork Pods.
+[HostAliases](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core) is an optional list of hostnames and IP addresses that will be injected into the Pod's hosts file if specified. This is only valid for non-hostNetwork Pods.
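The HostAliases glossary entry above describes the field only in prose, so a minimal hedged sketch of what it looks like in a Pod spec follows; the Pod name, IP address and hostnames are illustrative only.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:            # extra entries injected into the Pod's hosts file
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF

# The injected entries appear in the container's /etc/hosts.
kubectl logs hostaliases-demo
```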
From 264869ae3a75efd2bcef103c21bac4c031ae4fdc Mon Sep 17 00:00:00 2001 From: "Yuk, Yongsu" Date: Tue, 31 Mar 2020 09:24:40 +0900 Subject: [PATCH 091/105] First Korean l10n Work For Release 1.18 * Update to Outdated files in the dev-1.18-ko.1 branch (#19886) * Translate reference/glossary/service-catalog.md in Korean (#20055) * Translate storage/dynamic-provisioning.md in Korean (#19952) * Translate extend-kubernetes/extend-cluster.md in Korean (#20031) * Translate storage/persistent-volumes.md in Korean (#19906) * Translate partners/_index.html in Korean (#19967) * Translate extend-kubernetes/operator.md in Korean (#20037) * Translate assign-pods-nodes to Korean (#20035) * Translate scheduling/kube-scheduler.md in Korean (#19879) * Translate concepts/containers/overview in Korean (#19885) * Translate policy/pod-security-policy.md in Korean (#19922) * Translate policy/limit-range.md in Korean (#19912) * Translate policy/resource-quotas.md in Korean (#19931) * Translate extend-kubernetes/api-extension/custom-resources.md in Korean (#20093) * Translate reference/glossary/managed-service.md in Korean (#20150) Co-authored-by: Jerry Park Co-authored-by: heechang lee Co-authored-by: Yuk, Yongsu Co-authored-by: Seokho Son Co-authored-by: coolguyhong Co-authored-by: santachopa Co-authored-by: Jesang Myung Co-authored-by: June Yi --- .../ko/docs/concepts/architecture/nodes.md | 2 +- .../cluster-administration-overview.md | 2 +- .../concepts/configuration/assign-pod-node.md | 2 +- ...-variables.md => container-environment.md} | 0 .../containers/container-lifecycle-hooks.md | 2 +- .../ko/docs/concepts/containers/overview.md | 43 + .../docs/concepts/containers/runtime-class.md | 68 +- .../api-extension/custom-resources.md | 254 ++++++ .../extend-kubernetes/extend-cluster.md | 205 +++++ .../concepts/extend-kubernetes/operator.md | 133 +++ .../working-with-objects/annotations.md | 2 +- .../overview/working-with-objects/labels.md | 2 +- .../overview/working-with-objects/names.md | 2 +- content/ko/docs/concepts/policy/_index.md | 4 + .../ko/docs/concepts/policy/limit-range.md | 388 +++++++++ .../concepts/policy/pod-security-policy.md | 635 +++++++++++++++ .../docs/concepts/policy/resource-quotas.md | 600 ++++++++++++++ .../concepts/scheduling/kube-scheduler.md | 97 +++ .../concepts/services-networking/ingress.md | 79 ++ .../concepts/services-networking/service.md | 11 + .../concepts/storage/dynamic-provisioning.md | 131 +++ .../concepts/storage/persistent-volumes.md | 757 ++++++++++++++++++ .../concepts/storage/volume-pvc-datasource.md | 2 +- content/ko/docs/concepts/storage/volumes.md | 16 +- .../workloads/controllers/cron-jobs.md | 7 +- .../workloads/controllers/deployment.md | 42 +- .../workloads/pods/ephemeral-containers.md | 46 +- .../pods/pod-topology-spread-constraints.md | 51 +- content/ko/docs/home/_index.md | 4 +- content/ko/docs/reference/_index.md | 4 +- .../reference/glossary/managed-service.md | 18 + .../glossary/replication-controller.md | 14 +- .../reference/glossary/service-catalog.md | 18 + content/ko/docs/setup/_index.md | 59 +- .../setup/learning-environment/minikube.md | 16 +- .../container-runtimes.md | 54 +- .../windows/user-guide-windows-containers.md | 2 +- .../windows/user-guide-windows-nodes.md | 354 -------- .../administer-cluster/cluster-management.md | 2 +- .../tasks/configure-pod-container/_index.md | 4 + .../assign-pods-nodes.md | 104 +++ .../declarative-config.md | 38 +- .../imperative-command.md | 6 +- .../kustomization.md | 6 + .../horizontal-pod-autoscale.md | 2 +- 
.../ko/docs/tasks/tools/install-minikube.md | 10 +- .../resource/limit-mem-cpu-container.yaml | 19 + .../admin/resource/limit-mem-cpu-pod.yaml | 10 + .../resource/limit-memory-ratio-pod.yaml | 9 + .../admin/resource/limit-range-pod-1.yaml | 37 + .../admin/resource/limit-range-pod-2.yaml | 37 + .../admin/resource/limit-range-pod-3.yaml | 13 + .../admin/resource/pvc-limit-greater.yaml | 10 + .../admin/resource/pvc-limit-lower.yaml | 10 + .../admin/resource/storagelimits.yaml | 11 + .../ko/examples/application/deployment.yaml | 6 +- .../application/simple_deployment.yaml | 2 +- .../application/update_deployment.yaml | 2 +- .../ko/examples/controllers/daemonset.yaml | 2 + .../controllers/nginx-deployment.yaml | 2 +- .../pods/pod-nginx-specific-node.yaml | 10 + content/ko/examples/policy/example-psp.yaml | 17 + .../ko/examples/policy/privileged-psp.yaml | 27 + .../ko/examples/policy/restricted-psp.yaml | 48 ++ content/ko/partners/_index.html | 91 +++ 65 files changed, 4046 insertions(+), 615 deletions(-) rename content/ko/docs/concepts/containers/{container-environment-variables.md => container-environment.md} (100%) create mode 100644 content/ko/docs/concepts/containers/overview.md create mode 100644 content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md create mode 100644 content/ko/docs/concepts/extend-kubernetes/extend-cluster.md create mode 100644 content/ko/docs/concepts/extend-kubernetes/operator.md create mode 100644 content/ko/docs/concepts/policy/_index.md create mode 100644 content/ko/docs/concepts/policy/limit-range.md create mode 100644 content/ko/docs/concepts/policy/pod-security-policy.md create mode 100644 content/ko/docs/concepts/policy/resource-quotas.md create mode 100644 content/ko/docs/concepts/scheduling/kube-scheduler.md create mode 100644 content/ko/docs/concepts/storage/dynamic-provisioning.md create mode 100644 content/ko/docs/concepts/storage/persistent-volumes.md create mode 100644 content/ko/docs/reference/glossary/managed-service.md create mode 100644 content/ko/docs/reference/glossary/service-catalog.md delete mode 100644 content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md create mode 100644 content/ko/docs/tasks/configure-pod-container/_index.md create mode 100644 content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md create mode 100644 content/ko/examples/admin/resource/limit-mem-cpu-container.yaml create mode 100644 content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml create mode 100644 content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml create mode 100644 content/ko/examples/admin/resource/limit-range-pod-1.yaml create mode 100644 content/ko/examples/admin/resource/limit-range-pod-2.yaml create mode 100644 content/ko/examples/admin/resource/limit-range-pod-3.yaml create mode 100644 content/ko/examples/admin/resource/pvc-limit-greater.yaml create mode 100644 content/ko/examples/admin/resource/pvc-limit-lower.yaml create mode 100644 content/ko/examples/admin/resource/storagelimits.yaml create mode 100644 content/ko/examples/pods/pod-nginx-specific-node.yaml create mode 100644 content/ko/examples/policy/example-psp.yaml create mode 100644 content/ko/examples/policy/privileged-psp.yaml create mode 100644 content/ko/examples/policy/restricted-psp.yaml create mode 100644 content/ko/partners/_index.html diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md index 423004b27e..42f14c04ef 100644 --- 
a/content/ko/docs/concepts/architecture/nodes.md +++ b/content/ko/docs/concepts/architecture/nodes.md @@ -180,7 +180,7 @@ kubelet은 `NodeStatus` 와 리스 오브젝트를 생성하고 업데이트 할 보다 훨씬 길다). - kubelet은 10초마다 리스 오브젝트를 생성하고 업데이트 한다(기본 업데이트 주기). 리스 업데이트는 `NodeStatus` 업데이트와는 - 독립적으로 발생한다. + 독립적으로 발생한다. 리스 업데이트가 실패하면 kubelet에 의해 재시도하며 7초로 제한된 지수 백오프를 200 밀리초에서 부터 시작한다. #### 안정성 diff --git a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md index 2ae19a2961..bcae9a4a1d 100644 --- a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -41,7 +41,7 @@ weight: 10 * [인증서](/docs/concepts/cluster-administration/certificates/)는 다른 툴 체인을 이용하여 인증서를 생성하는 방법을 설명한다. -* [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment-variables/)은 쿠버네티스 노드에서 Kubelet에 의해 관리되는 컨테이너 환경에 대해 설명한다. +* [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment/)은 쿠버네티스 노드에서 Kubelet에 의해 관리되는 컨테이너 환경에 대해 설명한다. * [쿠버네티스 API에 대한 접근 제어](/docs/reference/access-authn-authz/controlling-access/)는 사용자와 서비스 계정에 어떻게 권한 설정을 하는지 설명한다. diff --git a/content/ko/docs/concepts/configuration/assign-pod-node.md b/content/ko/docs/concepts/configuration/assign-pod-node.md index 027a7d38d3..5136fddeeb 100644 --- a/content/ko/docs/concepts/configuration/assign-pod-node.md +++ b/content/ko/docs/concepts/configuration/assign-pod-node.md @@ -315,7 +315,7 @@ spec: topologyKey: "kubernetes.io/hostname" containers: - name: web-app - image: nginx:1.12-alpine + image: nginx:1.16-alpine ``` 만약 위의 두 디플로이먼트를 생성하면 세 개의 노드가 있는 클러스터는 다음과 같아야 한다. diff --git a/content/ko/docs/concepts/containers/container-environment-variables.md b/content/ko/docs/concepts/containers/container-environment.md similarity index 100% rename from content/ko/docs/concepts/containers/container-environment-variables.md rename to content/ko/docs/concepts/containers/container-environment.md diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md index 6967e70fc0..6264621a24 100644 --- a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md @@ -113,7 +113,7 @@ Events: {{% capture whatsnext %}} -* [컨테이너 환경](/ko/docs/concepts/containers/container-environment-variables/)에 대해 더 배우기. +* [컨테이너 환경](/ko/docs/concepts/containers/container-environment/)에 대해 더 배우기. * [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) 실습 경험하기. diff --git a/content/ko/docs/concepts/containers/overview.md b/content/ko/docs/concepts/containers/overview.md new file mode 100644 index 0000000000..11d29a18ce --- /dev/null +++ b/content/ko/docs/concepts/containers/overview.md @@ -0,0 +1,43 @@ +--- +title: 컨테이너 개요 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +컨테이너는 런타임에 필요한 종속성과 애플리케이션의 +컴파일 된 코드를 패키징 하는 기술이다. 실행되는 각각의 +컨테이너는 반복해서 사용 가능하다. 종속성이 포함된 표준화를 +통해 컨테이너가 실행되는 환경과 무관하게 항상 동일하게 +동작한다. + +컨테이너는 기본 호스트 인프라 환경에서 애플리케이션의 실행환경을 분리한다. +따라서 다양한 클라우드 환경이나 운영체제에서 쉽게 배포 할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +## 컨테이너 이미지 +[컨테이너 이미지](/ko/docs/concepts/containers/images/) 는 즉시 실행할 수 있는 +소프트웨어 패키지이며, 애플리케이션을 실행하는데 필요한 모든 것 +(필요한 코드와 런타임, 애플리케이션 및 시스템 라이브러리 등의 모든 필수 설정에 대한 기본값) +을 포함한다. + +원칙적으로, 컨테이너는 변경되지 않는다. 
이미 구동 중인 컨테이너의 +코드를 변경할 수 없다. 컨테이너화 된 애플리케이션이 있고 그 +애플리케이션을 변경하려는 경우, 변경사항을 포함하여 만든 +새로운 이미지를 통해 컨테이너를 다시 생성해야 한다. + + +## 컨테이너 런타임 + +{{< glossary_definition term_id="container-runtime" length="all" >}} + +{{% /capture %}} +{{% capture whatsnext %}} +* [컨테이너 이미지](/ko/docs/concepts/containers/images/)에 대해 읽어보기 +* [파드](/ko/docs/concepts/workloads/pods/)에 대해 읽어보기 +{{% /capture %}} diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md index befcefcfe4..eab78a19fb 100644 --- a/content/ko/docs/concepts/containers/runtime-class.md +++ b/content/ko/docs/concepts/containers/runtime-class.md @@ -10,22 +10,14 @@ weight: 20 이 페이지는 런타임 클래스(RuntimeClass) 리소스와 런타임 선택 메커니즘에 대해서 설명한다. -{{< warning >}} -런타임클래스는 v1.14 베타 업그레이드에서 *중대한* 변화를 포함한다. -런타임클래스를 v1.14 이전부터 사용하고 있었다면, -[런타임 클래스를 알파에서 베타로 업그레이드하기](#upgrading-runtimeclass-from-alpha-to-beta)를 확인한다. -{{< /warning >}} +런타임클래스는 컨테이너 런타임을 구성을 선택하는 기능이다. 컨테이너 런타임 +구성은 파드의 컨테이너를 실행하는데 사용된다. {{% /capture %}} {{% capture body %}} -## 런타임 클래스 - -런타임 클래스는 컨테이너 런타임 설정을 선택하는 기능이다. -이 컨테이너 런타임 설정은 파드의 컨테이너를 실행할 때에 이용한다. - ## 동기 서로 다른 파드간에 런타임 클래스를 설정하여 @@ -38,7 +30,7 @@ weight: 20 또한 런타임 클래스를 사용하여 컨테이너 런타임이 같으나 설정이 다른 여러 파드를 실행할 수 있다. -### 셋업 +## 셋업 RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 특징 게이트 활성화에 대한 설명은 [특징 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 @@ -47,7 +39,7 @@ RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 1. CRI 구현(implementation)을 노드에 설정(런타임에 따라서) 2. 상응하는 런타임 클래스 리소스 생성 -#### 1. CRI 구현을 노드에 설정 +### 1. CRI 구현을 노드에 설정 런타임 클래스를 통한 가능한 구성은 컨테이너 런타임 인터페이스(CRI) 구현에 의존적이다. 사용자의 CRI 구현에 따른 설정 방법은 @@ -62,7 +54,7 @@ RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 해당 설정은 상응하는 `handler` 이름을 가지며, 이는 런타임 클래스에 의해서 참조된다. 런타임 핸들러는 유효한 DNS 1123 서브도메인(알파-숫자 + `-`와 `.`문자)을 가져야 한다. -#### 2. 상응하는 런타임 클래스 리소스 생성 +### 2. 상응하는 런타임 클래스 리소스 생성 1단계에서 셋업 한 설정은 연관된 `handler` 이름을 가져야 하며, 이를 통해서 설정을 식별할 수 있다. 각 런타임 핸들러(그리고 선택적으로 비어있는 `""` 핸들러)에 대해서, 상응하는 런타임 클래스 오브젝트를 생성한다. @@ -88,7 +80,7 @@ handler: myconfiguration # 상응하는 CRI 설정의 이름임 더 자세한 정보는 [권한 개요](/docs/reference/access-authn-authz/authorization/)를 참고한다. {{< /note >}} -### 사용 +## 사용 클러스터를 위해서 런타임 클래스를 설정하고 나면, 그것을 사용하는 것은 매우 간단하다. 파드 스펙에 `runtimeClassName`를 명시한다. 예를 들면 다음과 같다. @@ -147,13 +139,13 @@ https://github.com/containerd/cri/blob/master/docs/config.md [100]: https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md -### 스케줄 +## 스케줄 {{< feature-state for_k8s_version="v1.16" state="beta" >}} 쿠버네티스 v1.16 부터, 런타임 클래스는 `scheduling` 필드를 통해 이종의 클러스터 지원을 포함한다. 이 필드를 사용하면, 이 런타임 클래스를 갖는 파드가 이를 지원하는 노드로 스케줄된다는 것을 보장할 수 있다. -이 스케줄링 기능을 사용하려면, 런타임 클래스 [어드미션(admission) 컨트롤러][]를 활성화(1.16 부터 기본 값)해야 한다. +이 스케줄링 기능을 사용하려면, [런타임 클래스 어드미션(admission) 컨트롤러][]를 활성화(1.16 부터 기본 값)해야 한다. 파드가 지정된 런타임 클래스를 지원하는 노드에 안착한다는 것을 보장하려면, 해당 노드들은 `runtimeClass.scheduling.nodeSelector` 필드에서 선택되는 공통 레이블을 가져야한다. @@ -168,50 +160,24 @@ https://github.com/containerd/cri/blob/master/docs/config.md 노드 셀렉터와 톨러레이션 설정에 대해 더 배우려면 [노드에 파드 할당](/ko/docs/concepts/configuration/assign-pod-node/)을 참고한다. -[어드미션 컨트롤러]: /docs/reference/access-authn-authz/admission-controllers/ +[런타임 클래스 어드미션 컨트롤러]: /docs/reference/access-authn-authz/admission-controllers/#runtimeclass ### 파드 오버헤드 -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} -쿠버네티스 v1.16 부터는, 런타임 클래스에는 구동 중인 파드와 관련된 오버헤드를 -지정할 수 있는 기능이 [`PodOverhead`](/docs/concepts/configuration/pod-overhead) 기능을 통해 지원된다. 
-`PodOverhead`를 사용하려면, PodOverhead [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 -활성화 시켜야 한다. (기본 값으로는 비활성화 되어 있다.) +파드 실행과 연관되는 _오버헤드_ 리소스를 지정할 수 있다. 오버헤드를 선언하면 +클러스터(스케줄러 포함)가 파드와 리소스에 대한 결정을 내릴 때 처리를 할 수 있다. +PodOverhead를 사용하려면, PodOverhead [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) +를 활성화 시켜야 한다. (기본으로 활성화 되어 있다.) -파드 오버헤드는 런타임 클래스에서 `Overhead` 필드를 통해 정의된다. 이 필드를 사용하면, +파드 오버헤드는 런타임 클래스에서 `overhead` 필드를 통해 정의된다. 이 필드를 사용하면, 해당 런타임 클래스를 사용해서 구동 중인 파드의 오버헤드를 특정할 수 있고 이 오버헤드가 쿠버네티스 내에서 처리된다는 것을 보장할 수 있다. -### 런타임 클래스를 알파에서 베타로 업그레이드 {#upgrading-runtimeclass-from-alpha-to-beta} - -런타임 클래스 베타 기능은 다음의 변화를 포함한다. - -- `node.k8s.io` API 그룹과 `runtimeclasses.node.k8s.io` 리소스는 CustomResourceDefinition에서 - 내장 API로 이전되었다. -- 런타임 클래스 정의에서 `spec`을 직접 사용할 수 있다. - (즉, 더 이상 RuntimeClassSpec는 없다). -- `runtimeHandler` 필드는 `handler`로 이름이 바뀌었다. -- `handler` 필드는 이제 모두 API 버전에서 요구된다. 이는 알파 API에서도 `runtimeHandler` 필드가 - 필요하다는 의미이다. -- `handler` 필드는 반드시 올바른 DNS 레이블([RFC 1123](https://tools.ietf.org/html/rfc1123))으로, - 이는 더 이상 `.` 캐릭터(모든 버전에서)를 포함할 수 없다 의미이다. 올바른 핸들러는 - 다음의 정규 표현식을 따른다. `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`. - -**작업 필요** 다음 작업은 알파 버전의 런타임 기능을 -베타 버전으로 업그레이드하기 위해 진행되어야 한다. - -- 런타임 클래스 리소스는 v1.14로 업그레이드 *후에* 반드시 재생성되어야 하고, - `runtimeclasses.node.k8s.io` CRD는 다음과 같이 수동으로 지워야 한다. - ``` - kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io - ``` -- 지정되지 않았거나 비어 있는 `runtimeHandler` 이거나 핸들러 내에 `.` 캐릭터를 사용한 알파 런타임 클래스는 - 더 이상 올바르지 않으며, 반드시 올바른 핸들러 구성으로 이전헤야 한다 - (위를 참조). - -### 더 읽기 +{{% /capture %}} +{{% capture whatsnext %}} - [런타임 클래스 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) - [런타임 클래스 스케줄링 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md) diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md new file mode 100644 index 0000000000..e7e380d6ba --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -0,0 +1,254 @@ +--- +title: 커스텀 리소스 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +*커스텀 리소스* 는 쿠버네티스 API의 익스텐션이다. 이 페이지에서는 쿠버네티스 클러스터에 +커스텀 리소스를 추가할 시기와 독립형 서비스를 사용하는 시기에 대해 설명한다. 커스텀 리소스를 +추가하는 두 가지 방법과 이들 중에서 선택하는 방법에 대해 설명한다. + +{{% /capture %}} + +{{% capture body %}} +## 커스텀 리소스 + +*리소스* 는 [쿠버네티스 API](/ko/docs/reference/using-api/api-overview/)에서 특정 종류의 +[API 오브젝트](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/) 모음을 저장하는 엔드포인트이다. 예를 들어 빌트인 *파드* 리소스에는 파드 오브젝트 모음이 포함되어 있다. + +*커스텀 리소스* 는 쿠버네티스 API의 익스텐션으로, 기본 쿠버네티스 설치에서 반드시 +사용할 수 있는 것은 아니다. 이는 특정 쿠버네티스 설치에 수정이 가해졌음을 나타낸다. 그러나 +많은 핵심 쿠버네티스 기능은 이제 커스텀 리소스를 사용하여 구축되어, 쿠버네티스를 더욱 모듈화한다. + +동적 등록을 통해 실행 중인 클러스터에서 커스텀 리소스가 나타나거나 사라질 수 있으며 +클러스터 관리자는 클러스터 자체와 독립적으로 커스텀 리소스를 업데이트 할 수 있다. +커스텀 리소스가 설치되면 사용자는 *파드* 와 같은 빌트인 리소스와 마찬가지로 +[kubectl](/docs/user-guide/kubectl-overview/)을 사용하여 해당 오브젝트를 생성하고 +접근할 수 있다. + +## 커스텀 컨트롤러 + +자체적으로 커스텀 리소스를 사용하면 구조화된 데이터를 저장하고 검색할 수 있다. +커스텀 리소스를 *커스텀 컨트롤러* 와 결합하면, 커스텀 리소스가 진정한 +_선언적(declarative) API_ 를 제공하게 된다. + +[선언적 API](/ko/docs/concepts/overview/kubernetes-api/)는 리소스의 의도한 상태를 +_선언_ 하거나 지정할 수 있게 해주며 쿠버네티스 오브젝트의 현재 상태를 의도한 상태와 +동기화 상태로 유지하려고 한다. 컨트롤러는 구조화된 데이터를 사용자가 +원하는 상태의 레코드로 해석하고 지속적으로 +이 상태를 유지한다. + +클러스터 라이프사이클과 관계없이 실행 중인 클러스터에 커스텀 컨트롤러를 배포하고 +업데이트할 수 있다. 커스텀 컨트롤러는 모든 종류의 리소스와 함께 작동할 수 있지만 +커스텀 리소스와 결합할 때 특히 효과적이다. 
+[오퍼레이터 패턴](https://coreos.com/blog/introducing-operators.html)은 사용자 정의 +리소스와 커스텀 컨트롤러를 결합한다. 커스텀 컨트롤러를 사용하여 특정 애플리케이션에 대한 도메인 지식을 +쿠버네티스 API의 익스텐션으로 인코딩할 수 있다. + +## 쿠버네티스 클러스터에 커스텀 리소스를 추가해야 하나? + +새로운 API를 생성할 때 [쿠버네티스 클러스터 API와 생성한 API를 애그리게이트](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)할 것인지 아니면 생성한 API를 독립적으로 유지할 것인지 고려하자. + +| API 애그리게이트를 고려할 경우 | 독립 API를 고려할 경우 | +| ---------------------------- | ---------------------------- | +| API가 [선언적](#선언적-api)이다. | API가 [선언적](#선언적-api) 모델에 맞지 않다. | +| `kubectl`을 사용하여 새로운 타입을 읽고 쓸 수 있기를 원한다.| `kubectl` 지원이 필요하지 않다. | +| 쿠버네티스 UI(예: 대시보드)에서 빌트인 타입과 함께 새로운 타입을 보길 원한다. | 쿠버네티스 UI 지원이 필요하지 않다. | +| 새로운 API를 개발 중이다. | 생성한 API를 제공하는 프로그램이 이미 있고 잘 작동하고 있다. | +| API 그룹 및 네임스페이스와 같은 REST 리소스 경로에 적용하는 쿠버네티스의 형식 제한을 기꺼이 수용한다. ([API 개요](/ko/docs/concepts/overview/kubernetes-api/)를 참고한다.) | 이미 정의된 REST API와 호환되도록 특정 REST 경로가 있어야 한다. | +| 자체 리소스는 자연스럽게 클러스터 또는 클러스터의 네임스페이스로 범위가 지정된다. | 클러스터 또는 네임스페이스 범위의 리소스는 적합하지 않다. 특정 리소스 경로를 제어해야 한다. | +| [쿠버네티스 API 지원 기능](#일반적인-기능)을 재사용하려고 한다. | 이러한 기능이 필요하지 않다. | + +### 선언적 API + +선언적 API에서는 다음의 특성이 있다. + + - API는 상대적으로 적은 수의 상대적으로 작은 오브젝트(리소스)로 구성된다. + - 오브젝트는 애플리케이션 또는 인프라의 구성을 정의한다. + - 오브젝트는 비교적 드물게 업데이트 된다. + - 사람이 종종 오브젝트를 읽고 쓸 필요가 있다. + - 오브젝트의 주요 작업은 CRUD-y(생성, 읽기, 업데이트 및 삭제)이다. + - 오브젝트 간 트랜잭션은 필요하지 않다. API는 정확한(exact) 상태가 아니라 의도한 상태를 나타낸다. + +명령형 API는 선언적이지 않다. +자신의 API가 선언적이지 않을 수 있다는 징후는 다음과 같다. + + - 클라이언트는 "이 작업을 수행한다"라고 말하고 완료되면 동기(synchronous) 응답을 받는다. + - 클라이언트는 "이 작업을 수행한다"라고 말한 다음 작업 ID를 다시 가져오고 별도의 오퍼레이션(operation) 오브젝트를 확인하여 요청의 완료 여부를 결정해야 한다. + - RPC(원격 프로시저 호출)에 대해 얘기한다. + - 대량의 데이터를 직접 저장한다(예: > 오브젝트별 몇 kB 또는 >1000개의 오브젝트). + - 높은 대역폭 접근(초당 10개의 지속적인 요청)이 필요하다. + - 최종 사용자 데이터(예: 이미지, PII 등) 또는 애플리케이션에서 처리한 기타 대규모 데이터를 저장한다. + - 오브젝트에 대한 자연스러운 조작은 CRUD-y가 아니다. + - API는 오브젝트로 쉽게 모델링되지 않는다. + - 작업 ID 또는 작업 오브젝트로 보류 중인 작업을 나타내도록 선택했다. + +## 컨피그맵을 사용해야 하나, 커스텀 리소스를 사용해야 하나? + +다음 중 하나에 해당하면 컨피그맵을 사용하자. + +* `mysql.cnf` 또는 `pom.xml`과 같이 잘 문서화된 기존 구성 파일 형식이 있다. +* 전체 구성 파일을 컨피그맵의 하나의 키에 넣고 싶다. +* 구성 파일의 주요 용도는 클러스터의 파드에서 실행 중인 프로그램이 파일을 사용하여 자체 구성하는 것이다. +* 파일 사용자는 쿠버네티스 API가 아닌 파드의 환경 변수 또는 파드의 파일을 통해 사용하는 것을 선호한다. +* 파일이 업데이트될 때 디플로이먼트 등을 통해 롤링 업데이트를 수행하려고 한다. + +{{< note >}} +민감한 데이터에는 [시크릿](/docs/concepts/configuration/secret/)을 사용하자. 이는 컨피그맵과 비슷하지만 더 안전한다. +{{< /note >}} + +다음 중 대부분이 적용되는 경우 커스텀 리소스(CRD 또는 애그리게이트 API(aggregated API))를 사용하자. + +* 쿠버네티스 클라이언트 라이브러리 및 CLI를 사용하여 새 리소스를 만들고 업데이트하려고 한다. +* kubectl의 최상위 지원을 원한다(예: `kubectl get my-object object-name`). +* 새 오브젝트에 대한 업데이트를 감시한 다음 다른 오브젝트를 CRUD하거나 그 반대로 하는 새로운 자동화를 구축하려고 한다. +* 오브젝트의 업데이트를 처리하는 자동화를 작성하려고 한다. +* `.spec`, `.status` 및 `.metadata`와 같은 쿠버네티스 API 규칙을 사용하려고 한다. +* 제어된 리소스의 콜렉션 또는 다른 리소스의 요약에 대한 오브젝트가 되기를 원한다. + +## 커스텀 리소스 추가 + +쿠버네티스는 클러스터에 커스텀 리소스를 추가하는 두 가지 방법을 제공한다. + +- CRD는 간단하며 프로그래밍 없이 만들 수 있다. +- [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)에는 프로그래밍이 필요하지만, 데이터 저장 방법 및 API 버전 간 변환과 같은 API 동작을 보다 강력하게 제어할 수 있다. + +쿠버네티스는 다양한 사용자의 요구를 충족시키기 위해 이 두 가지 옵션을 제공하므로 사용의 용이성이나 유연성이 저하되지 않는다. + +애그리게이트 API는 기본 API 서버 뒤에 있는 하위 API 서버이며 프록시 역할을 한다. 이 배치를 [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)(AA)이라고 한다. 사용자에게는 쿠버네티스 API가 확장된 것과 같다. + +CRD를 사용하면 다른 API 서버를 추가하지 않고도 새로운 타입의 리소스를 생성할 수 있다. CRD를 사용하기 위해 API 애그리게이션을 이해할 필요는 없다. + +설치 방법에 관계없이 새 리소스는 커스텀 리소스라고 하며 빌트인 쿠버네티스 리소스(파드 등)와 구별된다. 
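To make the preceding paragraphs concrete, below is a minimal sketch of a CustomResourceDefinition manifest that declares a new resource type without running an extra API server. The API group `stable.example.com`, the kind `CronTab`, and the `cronSpec`/`image` fields are illustrative placeholders, not names defined anywhere in this documentation.

```yaml
# Minimal CRD sketch; group, kind, and field names are placeholders.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
```

Once such a CRD is applied, objects of kind `CronTab` can be created and listed with `kubectl get crontabs`, much like built-in resources.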
+ +## 커스텀리소스데피니션 + +[커스텀리소스데피니션](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) +API 리소스를 사용하면 커스텀 리소스를 정의할 수 있다. +CRD 오브젝트를 정의하면 지정한 이름과 스키마를 사용하여 새 커스텀 리소스가 만들어진다. +쿠버네티스 API는 커스텀 리소스의 스토리지를 제공하고 처리한다. +CRD 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +따라서 커스텀 리소스를 처리하기 위해 자신의 API 서버를 작성할 수 없지만 +구현의 일반적인 특성으로 인해 +[API 서버 애그리게이션](#api-서버-애그리게이션)보다 유연성이 떨어진다. + +새 커스텀 리소스를 등록하고 새 리소스 타입의 인스턴스에 대해 작업하고 +컨트롤러를 사용하여 이벤트를 처리하는 방법에 대한 예제는 +[커스텀 컨트롤러 예제](https://github.com/kubernetes/sample-controller)를 참고한다. + +## API 서버 애그리게이션 + +일반적으로 쿠버네티스 API의 각 리소스에는 REST 요청을 처리하고 오브젝트의 퍼시스턴트 스토리지를 관리하는 코드가 필요하다. 주요 쿠버네티스 API 서버는 *파드* 및 *서비스* 와 같은 빌트인 리소스를 처리하고, 일반적으로 [CRD](#커스텀리소스데피니션)를 통해 커스텀 리소스를 처리할 수 ​​있다. + +[애그리게이션 레이어](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)를 사용하면 자체 독립형 API 서버를 +작성하고 배포하여 커스텀 리소스에 대한 특수한 구현을 제공할 수 있다. +기본 API 서버는 처리하는 커스텀 리소스에 대한 요청을 사용자에게 위임하여 +모든 클라이언트가 사용할 수 있게 한다. + +## 커스텀 리소스를 추가할 방법 선택 + +CRD는 사용하기가 더 쉽다. 애그리게이트 API가 더 유연하다. 자신의 요구에 가장 잘 맞는 방법을 선택하자. + +일반적으로 CRD는 다음과 같은 경우에 적합하다. + +* 필드가 몇 개 되지 않는다 +* 회사 내에서 또는 소규모 오픈소스 프로젝트의 일부인(상용 제품이 아닌) 리소스를 사용하고 있다. + +### 사용 편의성 비교 + +CRD는 애그리게이트 API보다 생성하기가 쉽다. + +| CRD | 애그리게이트 API | +| --------------------------- | -------------- | +| 프로그래밍이 필요하지 않다. 사용자는 CRD 컨트롤러에 대한 모든 언어를 선택할 수 있다. | Go로 프로그래밍하고 바이너리와 이미지를 빌드해야 한다. | +| 실행할 추가 서비스가 없다. CR은 API 서버에서 처리한다. | 추가 서비스를 생성하면 실패할 수 있다. | +| CRD가 생성된 후에는 지속적인 지원이 없다. 모든 버그 픽스는 일반적인 쿠버네티스 마스터 업그레이드의 일부로 선택된다. | 업스트림에서 버그 픽스를 주기적으로 선택하고 애그리게이트 API 서버를 다시 빌드하고 업데이트해야 할 수 있다. | +| 여러 버전의 API를 처리할 필요가 없다. 예를 들어, 이 리소스에 대한 클라이언트를 제어할 때 API와 동기화하여 업그레이드할 수 있다. | 인터넷에 공유할 익스텐션을 개발할 때와 같이 여러 버전의 API를 처리해야 한다. | + +### 고급 기능 및 유연성 + +애그리게이트 API는 보다 고급 API 기능과 스토리지 레이어와 같은 다른 기능의 사용자 정의를 제공한다. + +| 기능 | 설명 | CRD | 애그리게이트 API | +| ------- | ----------- | ---- | -------------- | +| 유효성 검사 | 사용자가 오류를 방지하고 클라이언트와 독립적으로 API를 발전시킬 수 있도록 도와준다. 이러한 기능은 동시에 많은 클라이언트를 모두 업데이트할 수 없는 경우에 아주 유용하다. | 예. [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation)를 사용하여 CRD에서 대부분의 유효성 검사를 지정할 수 있다. [웹훅 유효성 검사](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9)를 추가해서 다른 모든 유효성 검사를 지원한다. | 예, 임의의 유효성 검사를 지원한다. | +| 기본 설정 | 위를 참고하자. | 예, [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#defaulting)의 `default` 키워드(1.17에서 GA) 또는 [웹훅 변형(mutating)](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)(이전 오브젝트의 etcd에서 읽을 때는 실행되지 않음)을 통해 지원한다. | 예 | +| 다중 버전 관리 | 두 가지 API 버전을 통해 동일한 오브젝트를 제공할 수 있다. 필드 이름 바꾸기와 같은 API 변경을 쉽게 할 수 있다. 클라이언트 버전을 제어하는 ​​경우는 덜 중요하다. | [예](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning) | 예 | +| 사용자 정의 스토리지 | 다른 성능 모드(예를 들어, 키-값 저장소 대신 시계열 데이터베이스)나 보안에 대한 격리(예를 들어, 암호화된 시크릿이나 다른 암호화) 기능을 가진 스토리지가 필요한 경우 | 아니오 | 예 | +| 사용자 정의 비즈니스 로직 | 오브젝트를 생성, 읽기, 업데이트 또는 삭제를 할 때 임의의 점검 또는 조치를 수행한다. | 예, [웹훅](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)을 사용한다. | 예 | +| 서브리소스 크기 조정 | HorizontalPodAutoscaler 및 PodDisruptionBudget과 같은 시스템이 새로운 리소스와 상호 작용할 수 있다. | [예](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#scale-subresource) | 예 | +| 서브리소스 상태 | 사용자가 스펙 섹션을 작성하고 컨트롤러가 상태 섹션을 작성하는 세분화된 접근 제어를 허용한다. 커스텀 리소스 데이터 변형 시 오브젝트 생성을 증가시킨다(리소스에서 별도의 스펙과 상태 섹션 필요). 
| [예](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource) | 예 | +| 기타 서브리소스 | "logs" 또는 "exec"과 같은 CRUD 이외의 작업을 추가한다. | 아니오 | 예 | +| strategic-merge-patch | 새로운 엔드포인트는 `Content-Type: application/strategic-merge-patch+json` 형식의 PATCH를 지원한다. 로컬 및 서버 양쪽에서 수정할 수도 있는 오브젝트를 업데이트하는 데 유용하다. 자세한 내용은 ["kubectl 패치를 사용한 API 오브젝트 업데이트"](/docs/tasks/run-application/update-api-object-kubectl-patch/)를 참고한다. | 아니오 | 예 | +| 프로토콜 버퍼 | 새로운 리소스는 프로토콜 버퍼를 사용하려는 클라이언트를 지원한다. | 아니오 | 예 | +| OpenAPI 스키마 | 서버에서 동적으로 가져올 수 있는 타입에 대한 OpenAPI(스웨거(swagger)) 스키마가 있는가? 허용된 필드만 설정하여 맞춤법이 틀린 필드 이름으로부터 사용자를 보호하는가? 타입이 적용되는가(즉, `string` 필드에 `int`를 넣지 않는가?) | 예, [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation)를 기반으로 하는 스키마(1.16에서 GA) | 예 | + +### 일반적인 기능 + +CRD 또는 AA를 통해 커스텀 리소스를 생성하면 쿠버네티스 플랫폼 외부에서 구현하는 것과 비교하여 API에 대한 많은 기능이 제공된다. + +| 기능 | 설명 | +| ------- | ------------ | +| CRUD | 새로운 엔드포인트는 HTTP 및 `kubectl`을 통해 CRUD 기본 작업을 지원한다. | +| 감시 | 새로운 엔드포인트는 HTTP를 통해 쿠버네티스 감시 작업을 지원한다. | +| 디스커버리 | kubectl 및 대시보드와 같은 클라이언트는 리소스에 대해 목록, 표시 및 필드 수정 작업을 자동으로 제공한다. | +| json-patch | 새로운 엔드포인트는 `Content-Type: application/json-patch+json` 형식의 PATCH를 지원한다. | +| merge-patch | 새로운 엔드포인트는 `Content-Type: application/merge-patch+json` 형식의 PATCH를 지원한다. | +| HTTPS | 새로운 엔드포인트는 HTTPS를 사용한다. | +| 빌트인 인증 | 익스텐션에 대한 접근은 인증을 위해 기본 API 서버(애그리게이션 레이어)를 사용한다. | +| 빌트인 권한 부여 | 익스텐션에 접근하면 기본 API 서버(예: RBAC)에서 사용하는 권한을 재사용할 수 있다. | +| Finalizer | 외부 정리가 발생할 때까지 익스텐션 리소스의 삭제를 차단한다. | +| 어드미션 웹훅 | 생성/업데이트/삭제 작업 중에 기본값을 설정하고 익스텐션 리소스의 유효성 검사를 한다. | +| UI/CLI 디스플레이 | Kubectl, 대시보드는 익스텐션 리소스를 표시할 수 있다. | +| 설정하지 않음(unset)과 비어있음(empty) | 클라이언트는 값이 없는 필드 중에서 설정되지 않은 필드를 구별할 수 있다. | +| 클라이언트 라이브러리 생성 | 쿠버네티스는 일반 클라이언트 라이브러리와 타입별 클라이언트 라이브러리를 생성하는 도구를 제공한다. | +| 레이블 및 어노테이션 | 공통 메타데이터는 핵심 및 커스텀 리소스를 수정하는 방법을 알고 있는 도구이다. | + +## 커스텀 리소스 설치 준비 + +클러스터에 커스텀 리소스를 추가하기 전에 알아야 할 몇 가지 사항이 있다. + +### 써드파티 코드 및 새로운 장애 포인트 + +CRD를 생성해도 새로운 장애 포인트(예를 들어, API 서버에서 장애를 유발하는 써드파티 코드가 실행됨)가 자동으로 추가되지는 않지만, 패키지(예: 차트(Charts)) 또는 기타 설치 번들에는 CRD 및 새로운 커스텀 리소스에 대한 비즈니스 로직을 구현하는 써드파티 코드의 디플로이먼트가 포함되는 경우가 종종 있다. + +애그리게이트 API 서버를 설치하려면 항상 새 디플로이먼트를 실행해야 한다. + +### 스토리지 + +커스텀 리소스는 컨피그맵과 동일한 방식으로 스토리지 공간을 사용한다. 너무 많은 커스텀 리소스를 생성하면 API 서버의 스토리지 공간이 과부하될 수 있다. + +애그리게이트 API 서버는 기본 API 서버와 동일한 스토리지를 사용할 수 있으며 이 경우 동일한 경고가 적용된다. + +### 인증, 권한 부여 및 감사 + +CRD는 항상 API 서버의 빌트인 리소스와 동일한 인증, 권한 부여 및 감사 로깅을 사용한다. + +권한 부여에 RBAC를 사용하는 경우 대부분의 RBAC 역할은 새로운 리소스에 대한 접근 권한을 부여하지 않는다(클러스터 관리자 역할 또는 와일드 카드 규칙으로 생성된 역할 제외). 새로운 리소스에 대한 접근 권한을 명시적으로 부여해야 한다. CRD 및 애그리게이트 API는 종종 추가하는 타입에 대한 새로운 역할 정의와 함께 제공된다. + +애그리게이트 API 서버는 기본 API 서버와 동일한 인증, 권한 부여 및 감사를 사용하거나 사용하지 않을 수 있다. + +## 커스텀 리소스에 접근 + +쿠버네티스 [클라이언트 라이브러리](/docs/reference/using-api/client-libraries/)를 사용하여 커스텀 리소스에 접근할 수 있다. 모든 클라이언트 라이브러리가 커스텀 리소스를 지원하는 것은 아니다. Go와 python 클라이언트 라이브러리가 지원한다. + +커스텀 리소스를 추가하면 다음을 사용하여 접근할 수 있다. + +- kubectl +- 쿠버네티스 동적 클라이언트 +- 작성한 REST 클라이언트 +- [쿠버네티스 클라이언트 생성 도구](https://github.com/kubernetes/code-generator)를 사용하여 생성된 클라이언트(하나를 생성하는 것은 고급 기능이지만, 일부 프로젝트는 CRD 또는 AA와 함께 클라이언트를 제공할 수 있다). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [애그리게이션 레이어(aggregation layer)로 쿠버네티스 API 확장](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)하는 방법에 대해 배우기. + +* [커스텀리소스데피니션으로 쿠버네티스 API 확장](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)하는 방법에 대해 배우기. 
+ +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md new file mode 100644 index 0000000000..a2a7c1745c --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md @@ -0,0 +1,205 @@ +--- +title: 쿠버네티스 클러스터 확장 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +쿠버네티스는 매우 유연하게 구성할 수 있고 확장 가능하다. 결과적으로 +쿠버네티스 프로젝트를 포크하거나 코드에 패치를 제출할 필요가 +거의 없다. + +이 가이드는 쿠버네티스 클러스터를 사용자 정의하기 위한 옵션을 설명한다. +쿠버네티스 클러스터를 업무 환경의 요구에 맞게 +조정하는 방법을 이해하려는 {{< glossary_tooltip text="클러스터 운영자" term_id="cluster-operator" >}}를 대상으로 한다. +잠재적인 {{< glossary_tooltip text="플랫폼 개발자" term_id="platform-developer" >}} 또는 쿠버네티스 프로젝트 {{< glossary_tooltip text="컨트리뷰터" term_id="contributor" >}}인 개발자에게도 +어떤 익스텐션 포인트와 패턴이 있는지, +그리고 그것들의 트레이드오프와 제약에 대한 소개 자료로 유용할 것이다. + +{{% /capture %}} + + +{{% capture body %}} + +## 개요 + +사용자 정의 방식은 크게 플래그, 로컬 구성 파일 또는 API 리소스 변경만 포함하는 *구성* 과 추가 프로그램이나 서비스 실행과 관련된 *익스텐션* 으로 나눌 수 있다. 이 문서는 주로 익스텐션에 관한 것이다. + +## 구성 + +*구성 파일* 및 *플래그* 는 온라인 문서의 레퍼런스 섹션에 각 바이너리 별로 문서화되어 있다. + +* [kubelet](/docs/admin/kubelet/) +* [kube-apiserver](/docs/admin/kube-apiserver/) +* [kube-controller-manager](/docs/admin/kube-controller-manager/) +* [kube-scheduler](/docs/admin/kube-scheduler/). + +호스팅된 쿠버네티스 서비스 또는 매니지드 설치 환경의 배포판에서 플래그 및 구성 파일을 항상 변경할 수 있는 것은 아니다. 변경 가능한 경우 일반적으로 클러스터 관리자만 변경할 수 있다. 또한 향후 쿠버네티스 버전에서 변경될 수 있으며, 이를 설정하려면 프로세스를 다시 시작해야 할 수도 있다. 이러한 이유로 다른 옵션이 없는 경우에만 사용해야 한다. + +[리소스쿼터](/ko/docs/concepts/policy/resource-quotas/), [PodSecurityPolicy](/ko/docs/concepts/policy/pod-security-policy/), [네트워크폴리시](/ko/docs/concepts/services-networking/network-policies/) 및 역할 기반 접근 제어([RBAC](/docs/reference/access-authn-authz/rbac/))와 같은 *빌트인 정책 API(built-in Policy API)* 는 기본적으로 제공되는 쿠버네티스 API이다. API는 일반적으로 호스팅된 쿠버네티스 서비스 및 매니지드 쿠버네티스 설치 환경과 함께 사용된다. 그것들은 선언적이며 파드와 같은 다른 쿠버네티스 리소스와 동일한 규칙을 사용하므로, 새로운 클러스터 구성을 반복할 수 있고 애플리케이션과 동일한 방식으로 관리할 수 ​​있다. 또한, 이들 API가 안정적인 경우, 다른 쿠버네티스 API와 같이 [정의된 지원 정책](/docs/reference/deprecation-policy/)을 사용할 수 있다. 이러한 이유로 인해 구성 파일과 플래그보다 선호된다. + +## 익스텐션(Extension) {#익스텐션} + +익스텐션은 쿠버네티스를 확장하고 쿠버네티스와 긴밀하게 통합되는 소프트웨어 컴포넌트이다. +이들 컴포넌트는 쿠버네티스가 새로운 유형과 새로운 종류의 하드웨어를 지원할 수 있게 해준다. + +대부분의 클러스터 관리자는 쿠버네티스의 호스팅 또는 배포판 인스턴스를 사용한다. +결과적으로 대부분의 쿠버네티스 사용자는 익스텐션 기능을 설치할 필요가 있고 +새로운 익스텐션 기능을 작성할 필요가 있는 사람은 더 적다. + +## 익스텐션 패턴 + +쿠버네티스는 클라이언트 프로그램을 작성하여 자동화 되도록 설계되었다. +쿠버네티스 API를 읽고 쓰는 프로그램은 유용한 자동화를 제공할 수 있다. +*자동화* 는 클러스터 상에서 또는 클러스터 밖에서 실행할 수 있다. 이 문서의 지침에 따라 +고가용성과 강력한 자동화를 작성할 수 있다. +자동화는 일반적으로 호스트 클러스터 및 매니지드 설치 환경을 포함한 모든 +쿠버네티스 클러스터에서 작동한다. + +쿠버네티스와 잘 작동하는 클라이언트 프로그램을 작성하기 위한 특정 패턴은 *컨트롤러* 패턴이라고 한다. +컨트롤러는 일반적으로 오브젝트의 `.spec`을 읽고, 가능한 경우 수행한 다음 +오브젝트의 `.status`를 업데이트 한다. + +컨트롤러는 쿠버네티스의 클라이언트이다. 쿠버네티스가 클라이언트이고 +원격 서비스를 호출할 때 이를 *웹훅(Webhook)* 이라고 한다. 원격 서비스를 +*웹훅 백엔드* 라고 한다. 컨트롤러와 마찬가지로 웹훅은 장애 지점을 +추가한다. + +웹훅 모델에서 쿠버네티스는 원격 서비스에 네트워크 요청을 한다. +*바이너리 플러그인* 모델에서 쿠버네티스는 바이너리(프로그램)를 실행한다. +바이너리 플러그인은 kubelet(예: +[Flex Volume 플러그인](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)과 +[네트워크 플러그인](/docs/concepts/cluster-administration/network-plugins/))과 +kubectl에서 +사용한다. + +아래는 익스텐션 포인트가 쿠버네티스 컨트롤 플레인과 상호 작용하는 방법을 +보여주는 다이어그램이다. + + + + + + +## 익스텐션 포인트 + +이 다이어그램은 쿠버네티스 시스템의 익스텐션 포인트를 보여준다. + + + + + +1. 사용자는 종종 `kubectl`을 사용하여 쿠버네티스 API와 상호 작용한다. [Kubectl 플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)은 kubectl 바이너리를 확장한다. 개별 사용자의 로컬 환경에만 영향을 미치므로 사이트 전체 정책을 적용할 수는 없다. +2. apiserver는 모든 요청을 처리한다. 
apiserver의 여러 유형의 익스텐션 포인트는 요청을 인증하거나, 콘텐츠를 기반으로 요청을 차단하거나, 콘텐츠를 편집하고, 삭제 처리를 허용한다. 이 내용은 [API 접근 익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#api-접근-익스텐션) 섹션에 설명되어 있다. +3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](/ko/docs/concepts/extend-kubernetes/extend-cluster/#사용자-정의-유형) 섹션에 설명된대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다. +4. 쿠버네티스 스케줄러는 파드를 배치할 노드를 결정한다. 스케줄링을 확장하는 몇 가지 방법이 있다. 이들은 [스케줄러 익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#스케줄러-익스텐션) 섹션에 설명되어 있다. +5. 쿠버네티스의 많은 동작은 API-Server의 클라이언트인 컨트롤러(Controller)라는 프로그램으로 구현된다. 컨트롤러는 종종 커스텀 리소스와 함께 사용된다. +6. kubelet은 서버에서 실행되며 파드가 클러스터 네트워크에서 자체 IP를 가진 가상 서버처럼 보이도록 한다. [네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/extend-cluster/#네트워크-플러그인)을 사용하면 다양한 파드 네트워킹 구현이 가능하다. +7. kubelet은 컨테이너의 볼륨을 마운트 및 마운트 해제한다. 새로운 유형의 스토리지는 [스토리지 플러그인](/ko/docs/concepts/extend-kubernetes/extend-cluster/#스토리지-플러그인)을 통해 지원될 수 있다. + +어디서부터 시작해야 할지 모르겠다면, 이 플로우 차트가 도움이 될 수 있다. 일부 솔루션에는 여러 유형의 익스텐션이 포함될 수 있다. + + + + + + +## API 익스텐션 +### 사용자 정의 유형 + +새 컨트롤러, 애플리케이션 구성 오브젝트 또는 기타 선언적 API를 정의하고 `kubectl`과 같은 쿠버네티스 도구를 사용하여 관리하려면 쿠버네티스에 커스텀 리소스를 추가하자. + +애플리케이션, 사용자 또는 모니터링 데이터의 데이터 저장소로 커스텀 리소스를 사용하지 않는다. + +커스텀 리소스에 대한 자세한 내용은 [커스텀 리소스 개념 가이드](/docs/concepts/api-extension/custom-resources/)를 참고하길 바란다. + + +### 새로운 API와 자동화의 결합 + +사용자 정의 리소스 API와 컨트롤 루프의 조합을 [오퍼레이터(operator) 패턴](/docs/concepts/extend-kubernetes/operator/)이라고 한다. 오퍼레이터 패턴은 특정 애플리케이션, 일반적으로 스테이트풀(stateful) 애플리케이션을 관리하는 데 사용된다. 이러한 사용자 정의 API 및 컨트롤 루프를 사용하여 스토리지나 정책과 같은 다른 리소스를 제어할 수도 있다. + +### 빌트인 리소스 변경 + +사용자 정의 리소스를 추가하여 쿠버네티스 API를 확장하면 추가된 리소스는 항상 새로운 API 그룹에 속한다. 기존 API 그룹을 바꾸거나 변경할 수 없다. +API를 추가해도 기존 API(예: 파드)의 동작에 직접 영향을 미치지는 않지만 API 접근 익스텐션은 영향을 준다. + + +### API 접근 익스텐션 + +요청이 쿠버네티스 API 서버에 도달하면 먼저 인증이 되고, 그런 다음 승인된 후 다양한 유형의 어드미션 컨트롤이 적용된다. 이 흐름에 대한 자세한 내용은 [쿠버네티스 API에 대한 접근 제어](/docs/reference/access-authn-authz/controlling-access/)를 참고하길 바란다. + +이러한 각 단계는 익스텐션 포인트를 제공한다. + +쿠버네티스에는 이를 지원하는 몇 가지 빌트인 인증 방법이 있다. 또한 인증 프록시 뒤에 있을 수 있으며 인증 헤더에서 원격 서비스로 토큰을 전송하여 확인할 수 있다(웹훅). 이러한 방법은 모두 [인증 설명서](/docs/reference/access-authn-authz/authentication/)에 설명되어 있다. + +### 인증 + +[인증](/docs/reference/access-authn-authz/authentication/)은 모든 요청의 헤더 또는 인증서를 요청하는 클라이언트의 사용자 이름에 매핑한다. + +쿠버네티스는 몇 가지 빌트인 인증 방법과 필요에 맞지 않는 경우 [인증 웹훅](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) 방법을 제공한다. + + +### 승인 + +[승인](/docs/reference/access-authn-authz/webhook/)은 특정 사용자가 API 리소스에서 읽고, 쓰고, 다른 작업을 수행할 수 있는지를 결정한다. 전체 리소스 레벨에서 작동하며 임의의 오브젝트 필드를 기준으로 구별하지 않는다. 빌트인 인증 옵션이 사용자의 요구를 충족시키지 못하면 [인증 웹훅](/docs/reference/access-authn-authz/webhook/)을 통해 사용자가 제공한 코드를 호출하여 인증 결정을 내릴 수 있다. + + +### 동적 어드미션 컨트롤 + +요청이 승인된 후, 쓰기 작업인 경우 [어드미션 컨트롤](/docs/reference/access-authn-authz/admission-controllers/) 단계도 수행된다. 빌트인 단계 외에도 몇 가지 익스텐션이 있다. + +* [이미지 정책 웹훅](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)은 컨테이너에서 실행할 수 있는 이미지를 제한한다. +* 임의의 어드미션 컨트롤 결정을 내리기 위해 일반적인 [어드미션 웹훅](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)을 사용할 수 있다. 어드미션 웹훅은 생성 또는 업데이트를 거부할 수 있다. + +## 인프라스트럭처 익스텐션 + + +### 스토리지 플러그인 + +[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md)을 사용하면 +Kubelet이 바이너리 플러그인을 호출하여 볼륨을 마운트하도록 함으로써 +빌트인 지원 없이 볼륨 유형을 마운트 할 수 있다. 
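As a sketch of the user-facing side of the Flex Volume mechanism described above, a pod references a FlexVolume driver by name in its volume spec. The `example/lvm` driver below is a placeholder, and the matching plugin binary is assumed to already be installed on the node.

```yaml
# Sketch only: "example/lvm" is a placeholder FlexVolume driver name.
apiVersion: v1
kind: Pod
metadata:
  name: flexvolume-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "example/lvm"   # vendor/driver; must match a plugin deployed on the node
      fsType: "ext4"
```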
+ + +### 장치 플러그인 + +장치 플러그인은 노드가 [장치 플러그인](/docs/concepts/cluster-administration/device-plugins/)을 +통해 새로운 노드 리소스(CPU 및 메모리와 같은 빌트인 자원 외에)를 +발견할 수 있게 해준다. + + +### 네트워크 플러그인 + +노드-레벨의 [네트워크 플러그인](/docs/admin/network-plugins/)을 통해 다양한 네트워킹 패브릭을 지원할 수 있다. + +### 스케줄러 익스텐션 + +스케줄러는 파드를 감시하고 파드를 노드에 할당하는 특수한 유형의 +컨트롤러이다. 다른 쿠버네티스 컴포넌트를 계속 사용하면서 +기본 스케줄러를 완전히 교체하거나, +[여러 스케줄러](/docs/tasks/administer-cluster/configure-multiple-schedulers/)를 +동시에 실행할 수 있다. + +이것은 중요한 부분이며, 거의 모든 쿠버네티스 사용자는 스케줄러를 수정할 +필요가 없다는 것을 알게 된다. + +스케줄러는 또한 웹훅 백엔드(스케줄러 익스텐션)가 +파드에 대해 선택된 노드를 필터링하고 우선 순위를 지정할 수 있도록 하는 +[웹훅](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md)을 +지원한다. + +{{% /capture %}} + + +{{% capture whatsnext %}} + +* [커스텀 리소스](/docs/concepts/api-extension/custom-resources/)에 대해 더 알아보기 +* [동적 어드미션 컨트롤](/docs/reference/access-authn-authz/extensible-admission-controllers/)에 대해 알아보기 +* 인프라스트럭처 익스텐션에 대해 더 알아보기 + * [네트워크 플러그인](/docs/concepts/cluster-administration/network-plugins/) + * [장치 플러그인](/docs/concepts/cluster-administration/device-plugins/) +* [kubectl 플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)에 대해 알아보기 +* [오퍼레이터 패턴](/docs/concepts/extend-kubernetes/operator/)에 대해 알아보기 + +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/operator.md b/content/ko/docs/concepts/extend-kubernetes/operator.md new file mode 100644 index 0000000000..91c6b4165b --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/operator.md @@ -0,0 +1,133 @@ +--- +title: 오퍼레이터(operator) 패턴 +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +오퍼레이터(Operator)는 +[사용자 정의 리소스](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)를 +사용하여 애플리케이션 및 해당 컴포넌트를 관리하는 쿠버네티스의 소프트웨어 익스텐션이다. 오퍼레이터는 +쿠버네티스 원칙, 특히 [컨트롤 루프](/ko/docs/concepts/#쿠버네티스-컨트롤-플레인)를 따른다. + +{{% /capture %}} + + +{{% capture body %}} + +## 동기 부여 + +오퍼레이터 패턴은 서비스 또는 서비스 셋을 관리하는 운영자의 +주요 목표를 포착하는 것을 목표로 한다. 특정 애플리케이션 및 +서비스를 돌보는 운영자는 시스템의 작동 방식, 배포 방법 및 문제가 있는 경우 +대처 방법에 대해 깊이 알고 있다. + +쿠버네티스에서 워크로드를 실행하는 사람들은 종종 반복 가능한 작업을 처리하기 위해 +자동화를 사용하는 것을 좋아한다. 오퍼레이터 패턴은 쿠버네티스 자체가 제공하는 것 이상의 +작업을 자동화하기 위해 코드를 작성하는 방법을 포착한다. + +## 쿠버네티스의 오퍼레이터 + +쿠버네티스는 자동화를 위해 설계되었다. 기본적으로 쿠버네티스의 중추를 통해 많은 +빌트인 자동화 기능을 사용할 수 있다. 쿠버네티스를 사용하여 워크로드 배포 +및 실행을 자동화할 수 있고, *또한* 쿠버네티스가 수행하는 방식을 +자동화할 수 있다. + +쿠버네티스의 {{< glossary_tooltip text="컨트롤러" term_id="controller" >}} +개념을 통해 쿠버네티스 코드 자체를 수정하지 않고도 클러스터의 동작을 +확장할 수 있다. +오퍼레이터는 [사용자 정의 리소스](/docs/concepts/api-extension/custom-resources/)의 +컨트롤러 역할을 하는 쿠버네티스 API의 클라이언트이다. + +## 오퍼레이터 예시 {#example} + +오퍼레이터를 사용하여 자동화할 수 있는 몇 가지 사항은 다음과 같다. + +* 주문형 애플리케이션 배포 +* 해당 애플리케이션의 상태를 백업하고 복원 +* 데이터베이스 스키마 또는 추가 구성 설정과 같은 관련 변경 사항에 따른 + 애플리케이션 코드 업그레이드 처리 +* 쿠버네티스 API를 지원하지 않는 애플리케이션에 서비스를 + 게시하여 검색을 지원 +* 클러스터의 전체 또는 일부에서 장애를 시뮬레이션하여 가용성 테스트 +* 내부 멤버 선출 절차없이 분산 애플리케이션의 + 리더를 선택 + +오퍼레이터의 모습을 더 자세하게 볼 수 있는 방법은 무엇인가? 자세한 예는 +다음과 같다. + +1. 클러스터에 구성할 수 있는 SampleDB라는 사용자 정의 리소스. +2. 오퍼레이터의 컨트롤러 부분이 포함된 파드의 실행을 + 보장하는 디플로이먼트. +3. 오퍼레이터 코드의 컨테이너 이미지. +4. 컨트롤 플레인을 쿼리하여 어떤 SampleDB 리소스가 구성되어 있는지 + 알아내는 컨트롤러 코드. +5. 오퍼레이터의 핵심은 API 서버에 구성된 리소스와 현재 상태를 + 일치시키는 방법을 알려주는 코드이다. + * 새 SampleDB를 추가하면 오퍼레이터는 퍼시스턴트볼륨클레임을 + 설정하여 내구성있는 데이터베이스 스토리지, SampleDB를 실행하는 스테이트풀셋 및 + 초기 구성을 처리하는 잡을 제공한다. + * SampleDB를 삭제하면 오퍼레이터는 스냅샷을 생성한 다음 스테이트풀셋과 볼륨도 + 제거되었는지 확인한다. +6. 오퍼레이터는 정기적인 데이터베이스 백업도 관리한다. 오퍼레이터는 각 SampleDB + 리소스에 대해 데이터베이스에 연결하고 백업을 수행할 수 있는 파드를 생성하는 + 시기를 결정한다. 이 파드는 데이터베이스 연결 세부 정보 및 자격 증명이 있는 + 컨피그맵 및 / 또는 시크릿에 의존한다. +7. 
오퍼레이터는 관리하는 리소스에 견고한 자동화를 제공하는 것을 목표로 하기 때문에 + 추가 지원 코드가 있다. 이 예제에서 코드는 데이터베이스가 이전 버전을 실행 중인지 + 확인하고, 업그레이드된 경우 이를 업그레이드하는 + 잡 오브젝트를 생성한다. + +## 오퍼레이터 배포 + +오퍼레이터를 배포하는 가장 일반적인 방법은 +커스텀 리소스 데피니션의 정의 및 연관된 컨트롤러를 클러스터에 추가하는 것이다. +컨테이너화된 애플리케이션을 실행하는 것처럼 +컨트롤러는 일반적으로 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}} +외부에서 실행된다. +예를 들어 클러스터에서 컨트롤러를 디플로이먼트로 실행할 수 있다. + +## 오퍼레이터 사용 {#using-operators} + +오퍼레이터가 배포되면 오퍼레이터가 사용하는 리소스의 종류를 추가, 수정 또는 +삭제하여 사용한다. 위의 예에 따라 오퍼레이터 자체에 대한 +디플로이먼트를 설정한 후 다음을 수행한다. + +```shell +kubectl get SampleDB # 구성된 데이터베이스 찾기 + +kubectl edit SampleDB/example-database # 일부 설정을 수동으로 변경하기 +``` + +…이것으로 끝이다! 오퍼레이터는 변경 사항을 적용하고 기존 서비스를 +양호한 상태로 유지한다. + +## 자신만의 오퍼레이터 작성 {#writing-operator} + +에코시스템에 원하는 동작을 구현하는 오퍼레이터가 없다면 직접 코딩할 수 있다. +[다음 내용](#다음-내용)에서는 클라우드 네이티브 오퍼레이터를 작성하는 데 +사용할 수 있는 라이브러리 및 도구에 대한 몇 가지 링크를 +찾을 수 있다. + +또한 [쿠버네티스 API의 클라이언트](/docs/reference/using-api/client-libraries/) +역할을 할 수 있는 모든 언어 / 런타임을 사용하여 오퍼레이터(즉, 컨트롤러)를 구현한다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [사용자 정의 리소스](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에 대해 더 알아보기 +* [OperatorHub.io](https://operatorhub.io/)에서 유스케이스에 맞는 이미 만들어진 오퍼레이터 찾기 +* 기존 도구를 사용하여 자신만의 오퍼레이터를 작성해보자. 다음은 예시이다. + * [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) 사용하기 + * [kubebuilder](https://book.kubebuilder.io/) 사용하기 + * 웹훅(WebHook)과 함께 [Metacontroller](https://metacontroller.app/)를 + 사용하여 직접 구현하기 + * [오퍼레이터 프레임워크](https://github.com/operator-framework/getting-started) 사용하기 +* 다른 사람들이 사용할 수 있도록 자신의 오퍼레이터를 [게시](https://operatorhub.io/)하기 +* 오퍼레이터 패턴을 소개한 [CoreOS 원본 기사](https://coreos.com/blog/introducing-operators.html) 읽기 +* 오퍼레이터 구축을 위한 모범 사례에 대한 구글 클라우드(Google Cloud)의 [기사](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) 읽기 + +{{% /capture %}} + diff --git a/content/ko/docs/concepts/overview/working-with-objects/annotations.md b/content/ko/docs/concepts/overview/working-with-objects/annotations.md index dfac3521d1..4b238bf313 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/annotations.md +++ b/content/ko/docs/concepts/overview/working-with-objects/annotations.md @@ -82,7 +82,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md index a76972a874..7c9983093b 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md @@ -67,7 +67,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/docs/concepts/overview/working-with-objects/names.md b/content/ko/docs/concepts/overview/working-with-objects/names.md index 069c49d908..3841e76c1e 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/names.md +++ b/content/ko/docs/concepts/overview/working-with-objects/names.md @@ -62,7 +62,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 ``` diff --git a/content/ko/docs/concepts/policy/_index.md b/content/ko/docs/concepts/policy/_index.md new file mode 100644 index 0000000000..ae03c565c1 --- /dev/null +++ b/content/ko/docs/concepts/policy/_index.md @@ -0,0 +1,4 @@ +--- +title: "정책" +weight: 90 +--- diff --git 
a/content/ko/docs/concepts/policy/limit-range.md b/content/ko/docs/concepts/policy/limit-range.md new file mode 100644 index 0000000000..527acb63ef --- /dev/null +++ b/content/ko/docs/concepts/policy/limit-range.md @@ -0,0 +1,388 @@ +--- +title: 리밋 레인지(Limit Range) +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +기본적으로 컨테이너는 쿠버네티스 클러스터에서 무제한 [컴퓨팅 리소스](/docs/user-guide/compute-resources)로 실행된다. +리소스 쿼터을 사용하면 클러스터 관리자는 네임스페이스별로 리소스 사용과 생성을 제한할 수 있다. +네임스페이스 내에서 파드나 컨테이너는 네임스페이스의 리소스 쿼터에 정의된 만큼의 CPU와 메모리를 사용할 수 있다. 하나의 파드 또는 컨테이너가 사용 가능한 모든 리소스를 독점할 수 있다는 우려가 있다. 리밋레인지는 네임스페이스에서 리소스 할당(파드 또는 컨테이너)을 제한하는 정책이다. + +{{% /capture %}} + + +{{% capture body %}} + +_리밋레인지_ 는 다음과 같은 제약 조건을 제공한다. + +- 네임스페이스에서 파드 또는 컨테이너별 최소 및 최대 컴퓨팅 리소스 사용량을 지정한다. +- 네임스페이스에서 스토리지클래스별 최소 및 최대 스토리지 요청을 지정한다. +- 네임스페이스에서 리소스에 대한 요청과 제한 사이의 비율을 지정한다. +- 네임스페이스에서 컴퓨팅 리소스에 대한 기본 요청/제한을 설정하고 런타임에 있는 컨테이너에 자동으로 설정한다. + +## 리밋레인지 활성화 + +많은 쿠버네티스 배포판에 리밋레인지 지원이 기본적으로 활성화되어 있다. apiserver `--enable-admission-plugins=` 플래그의 인수 중 하나로 `LimitRanger` 어드미션 컨트롤러가 있는 경우 활성화된다. + +해당 네임스페이스에 리밋레인지 오브젝트가 있는 경우 특정 네임스페이스에 리밋레인지가 지정된다. + +리밋레인지 오브젝트의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야한다. + +### 범위 제한의 개요 + +- 관리자는 하나의 네임스페이스에 하나의 `LimitRange`를 만든다. +- 사용자는 네임스페이스에서 파드, 컨테이너 및 퍼시스턴트볼륨클레임과 같은 리소스를 생성한다. +- `LimitRanger` 어드미션 컨트롤러는 컴퓨팅 리소스 요청 사항을 설정하지 않은 모든 파드와 컨테이너에 대한 기본값과 제한을 지정하고 네임스페이스의 리밋레인지에 정의된 리소스의 최소, 최대 및 비율을 초과하지 않도록 사용량을 추적한다. +- 리밋레인지 제약 조건을 위반하는 리소스(파드, 컨테이너, 퍼시스턴트볼륨클레임)를 생성하거나 업데이트하는 경우 HTTP 상태 코드 `403 FORBIDDEN` 및 위반된 제약 조건을 설명하는 메시지와 함께 API 서버에 대한 요청이 실패한다. +- `cpu`, `memory`와 같은 컴퓨팅 리소스의 네임스페이스에서 리밋레인지가 활성화된 경우 사용자는 해당 값에 대한 요청 또는 제한을 지정해야 한다. 그렇지 않으면 시스템에서 파드 생성이 거부될 수 있다. +- 리밋레인지 유효성 검사는 파드 실행 단계가 아닌 파드 어드미션 단계에서만 발생한다. + +범위 제한을 사용하여 생성할 수 있는 정책의 예는 다음과 같다. + +- 용량이 8GiB RAM과 16 코어인 2 노드 클러스터에서 네임스페이스의 파드를 제한하여 CPU의 최대 제한이 500m인 CPU 100m를 요청하고 메모리의 최대 제한이 600M인 메모리 200Mi를 요청하라. +- 스펙에 CPU 및 메모리 요청없이 시작된 컨테이너에 대해 기본 CPU 제한 및 요청을 150m로, 메모리 기본 요청을 300Mi로 정의하라. + +네임스페이스의 총 제한이 파드/컨테이너의 제한 합보다 작은 경우 리소스에 대한 경합이 있을 수 있다. +이 경우 컨테이너 또는 파드가 생성되지 않는다. + +경합이나 리밋레인지 변경은 이미 생성된 리소스에 영향을 미치지 않는다. + +## 컨테이너 컴퓨팅 리소스 제한 + +다음 절에서는 컨테이너 레벨에서 작동하는 리밋레인지 생성에 대해 설명한다. +4개의 컨테이너가 있는 파드가 먼저 생성된다. 파드 내의 각 컨테이너에는 특정 `spec.resource` 구성이 있다. +파드 내의 각 컨테이너는 `LimitRanger` 어드미션 컨트롤러에 의해 다르게 처리된다. + +다음 kubectl 명령을 사용하여 네임스페이스 `limitrange-demo`를 생성한다. + +```shell +kubectl create namespace limitrange-demo +``` + +kubectl 명령에서 네임스페이스 대상인 `limitrange-demo`를 빠트리지 않으려면 다음 명령으로 컨텍스트를 변경한다. + +```shell +kubectl config set-context --current --namespace=limitrange-demo +``` + +다음은 리밋레인지 오브젝트의 구성 파일이다. +{{< codenew file="admin/resource/limit-mem-cpu-container.yaml" >}} + +이 오브젝트는 컨테이너에 적용할 최소 및 최대 CPU/메모리 제한, 기본 CPU/메모리 요청과 CPU/메모리 리소스에 대한 기본 제한을 정의한다. + +다음 kubectl 명령을 사용하여 `limit-mem-cpu-per-container` 리밋레인지를 생성한다. + +```shell +kubectl create -f https://k8s.io/examples/admin/resource/limit-mem-cpu-container.yaml +``` + +```shell +kubectl describe limitrange/limit-mem-cpu-per-container +``` + +```shell +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +Container cpu 100m 800m 110m 700m - +Container memory 99Mi 1Gi 111Mi 900Mi - +``` +다음은 4개의 컨테이너가 포함된 파드의 구성 파일로 리밋레인지 기능을 보여준다. +{{< codenew file="admin/resource/limit-range-pod-1.yaml" >}} + +`busybox1` 파드를 생성한다. 
+ +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-1.yaml +``` + +### 유효한 CPU/메모리 요청과 제한이 있는 컨테이너 스펙 + +`busybox-cnt01`의 리소스 구성을 보자. + +```shell +kubectl get po/busybox1 -o json | jq ".spec.containers[0].resources" +``` + +```json +{ + "limits": { + "cpu": "500m", + "memory": "200Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } +} +``` + +- `busybox` 파드 내의 `busybox-cnt01` 컨테이너는 `requests.cpu=100m`와 `requests.memory=100Mi`로 정의됐다. +- `100m <= 500m <= 800m`, 컨테이너 CPU 제한(500m)은 승인된 CPU 리밋레인지 내에 있다. +- `99Mi <= 200Mi <= 1Gi`, 컨테이너 메모리 제한(200Mi)은 승인된 메모리 리밋레인지 내에 있다. +- CPU/메모리에 대한 요청/제한 비율 검증이 없으므로 컨테이너가 유효하며 생성되었다. + + +### 유효한 CPU/메모리 요청은 있지만 제한이 없는 컨테이너 스펙 + +`busybox-cnt02`의 리소스 구성을 보자. + +```shell +kubectl get po/busybox1 -o json | jq ".spec.containers[1].resources" +``` + +```json +{ + "limits": { + "cpu": "700m", + "memory": "900Mi" + }, + "requests": { + "cpu": "100m", + "memory": "100Mi" + } +} +``` +- `busybox1` 파드 내의 `busybox-cnt02` 컨테이너는 `requests.cpu=100m`와 `requests.memory=100Mi`를 정의했지만 CPU와 메모리에 대한 제한은 없다. +- 컨테이너에 제한 섹션이 없다. `limit-mem-cpu-per-container` 리밋레인지 오브젝트에 정의된 기본 제한은 `limits.cpu=700mi` 및 `limits.memory=900Mi`로 이 컨테이너에 설정된다. +- `100m <= 700m <= 800m`, 컨테이너 CPU 제한(700m)이 승인된 CPU 제한 범위 내에 있다. +- `99Mi <= 900Mi <= 1Gi`, 컨테이너 메모리 제한(900Mi)이 승인된 메모리 제한 범위 내에 있다. +- 요청/제한 비율이 설정되지 않았으므로 컨테이너가 유효하며 생성되었다. + +### 유효한 CPU/메모리 제한은 있지만 요청은 없는 컨테이너 스펙 + +`busybox-cnt03`의 리소스 구성을 보자. + +```shell +kubectl get po/busybox1 -o json | jq ".spec.containers[2].resources" +``` +```json +{ + "limits": { + "cpu": "500m", + "memory": "200Mi" + }, + "requests": { + "cpu": "500m", + "memory": "200Mi" + } +} +``` + +- `busybox1` 파드 내의 `busybox-cnt03` 컨테이너는 `limits.cpu=500m`와 `limits.memory=200Mi`를 정의했지만 CPU와 메모리에 대한 요청은 없다. +- 컨테이너에 요청 섹션이 정의되지 않았다. `limit-mem-cpu-per-container` 리밋레인지에 정의된 기본 요청은 제한 섹션을 채우는 데 사용되지 않지만 컨테이너에 의해 정의된 제한은 `limits.cpu=500m` 및 `limits.memory=200Mi`로 설정된다. +- `100m <= 500m <= 800m`, 컨테이너 CPU 제한(500m)은 승인된 CPU 제한 범위 내에 있다. +- `99Mi <= 200Mi <= 1Gi`, 컨테이너 메모리 제한(200Mi)은 승인된 메모리 제한 범위 내에 있다. +- 요청/제한 비율이 설정되지 않았으므로 컨테이너가 유효하며 생성되었다. + +### CPU/메모리 요청/제한이 없는 컨테이너 스펙 + +`busybox-cnt04`의 리소스 구성을 보자. + +```shell +kubectl get po/busybox1 -o json | jq ".spec.containers[3].resources" +``` + +```json +{ + "limits": { + "cpu": "700m", + "memory": "900Mi" + }, + "requests": { + "cpu": "110m", + "memory": "111Mi" + } +} +``` + +- `busybox1`의 `busybox-cnt04` 컨테이너는 제한이나 요청을 정의하지 않았다. +- 컨테이너는 제한 섹션을 정의하지 않으며 `limit-mem-cpu-per-container` 리밋레인지에 정의된 기본 제한은 `limit.cpu=700m` 및 `limits.memory=900Mi`로 설정된다. +- 컨테이너는 요청 섹션을 정의하지 않으며 `limit-mem-cpu-per-container` 리밋레인지에 정의된 defaultRequest는 `requests.cpu=110m` 및 `requests.memory=111Mi`로 설정된다. +- `100m <= 700m <= 800m`, 컨테이너 CPU 제한(700m)은 승인된 CPU 제한 범위 내에 있다. +- `99Mi <= 900Mi <= 1Gi`, 컨테이너 메모리 제한(900Mi)은 승인된 메모리 제한 범위 내에 있다. +- 요청/제한 비율이 설정되지 않았으므로 컨테이너가 유효하며 생성되었다. + +`busybox` 파드에 정의된 모든 컨테이너는 리밋레인지 유효성 검사를 통과했으므로 이 파드는 유효하며 네임스페이스에서 생성된다. + +## 파드 컴퓨팅 리소스 제한 + +다음 절에서는 파드 레벨에서 리소스를 제한하는 방법에 대해 설명한다. + +{{< codenew file="admin/resource/limit-mem-cpu-pod.yaml" >}} + +`busybox1` 파드를 삭제하지 않고 `limitrange-demo` 네임스페이스에 `limit-mem-cpu-pod` 리밋레인지를 생성한다. + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/limit-mem-cpu-pod.yaml +``` +리밋레인지가 생성되고 파드별로 CPU가 2 코어로, 메모리가 2Gi로 제한된다. + +```shell +limitrange/limit-mem-cpu-per-pod created +``` + +다음 kubectl 명령을 사용하여 `limit-mem-cpu-per-pod` 리밋레인지 오브젝트의 정보를 나타낸다. 
+ +```shell +kubectl describe limitrange/limit-mem-cpu-per-pod +``` + +```shell +Name: limit-mem-cpu-per-pod +Namespace: limitrange-demo +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +Pod cpu - 2 - - - +Pod memory - 2Gi - - - +``` + +이제 `busybox2` 파드를 생성한다. + +{{< codenew file="admin/resource/limit-range-pod-2.yaml" >}} + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-2.yaml +``` + +`busybox2` 파드 정의는 `busybox1`과 동일하지만 이제 파드 리소스가 제한되어 있으므로 오류가 보고된다. + +```shell +Error from server (Forbidden): error when creating "limit-range-pod-2.yaml": pods "busybox2" is forbidden: [maximum cpu usage per Pod is 2, but limit is 2400m., maximum memory usage per Pod is 2Gi, but limit is 2306867200.] +``` + +```shell +kubectl get po/busybox1 -o json | jq ".spec.containers[].resources.limits.memory" +"200Mi" +"900Mi" +"200Mi" +"900Mi" +``` + +해당 컨테이너의 총 메모리 제한이 리밋레인지에 정의된 제한보다 크므로 `busybox2` 파드는 클러스터에서 허용되지 않는다. +`busyRange1`은 리밋레인지를 생성하기 전에 클러스터에서 생성되고 허용되므로 제거되지 않는다. + +## 스토리지 리소스 제한 + +리밋레인지를 사용하여 네임스페이스에서 각 퍼시스턴트볼륨클레임이 요청할 수 있는 [스토리지 리소스](/ko/docs/concepts/storage/persistent-volumes/)의 최소 및 최대 크기를 지정할 수 있다. + +{{< codenew file="admin/resource/storagelimits.yaml" >}} + +`kubectl create`를 사용하여 YAML을 적용한다. + +```shell +kubectl create -f https://k8s.io/examples/admin/resource/storagelimits.yaml +``` + +```shell +limitrange/storagelimits created +``` + +생성된 오브젝트의 정보를 나타낸다. + +```shell +kubectl describe limits/storagelimits +``` + +출력은 다음과 같다. + +```shell +Name: storagelimits +Namespace: limitrange-demo +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +PersistentVolumeClaim storage 1Gi 2Gi - - - +``` + +{{< codenew file="admin/resource/pvc-limit-lower.yaml" >}} + +```shell +kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-lower.yaml +``` + +`requests.storage`가 리밋레인지의 Min 값보다 낮은 PVC를 만드는 동안 서버에서 발생하는 오류는 다음과 같다. + +```shell +Error from server (Forbidden): error when creating "pvc-limit-lower.yaml": persistentvolumeclaims "pvc-limit-lower" is forbidden: minimum storage usage per PersistentVolumeClaim is 1Gi, but request is 500Mi. +``` + +`requests.storage`가 리밋레인지의 Max 값보다 큰 경우에도 동일한 동작이 나타난다. + +{{< codenew file="admin/resource/pvc-limit-greater.yaml" >}} + +```shell +kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-greater.yaml +``` + +```shell +Error from server (Forbidden): error when creating "pvc-limit-greater.yaml": persistentvolumeclaims "pvc-limit-greater" is forbidden: maximum storage usage per PersistentVolumeClaim is 2Gi, but request is 5Gi. +``` + +## 제한/요청 비율 + +`LimitRangeSpec`에 `LimitRangeItem.maxLimitRequestRatio`가 지정되어 있으면 명명된 리소스는 제한을 요청으로 나눈 값이 열거된 값보다 작거나 같은 0이 아닌 값을 요청과 제한 모두 가져야 한다. + +다음의 리밋레인지는 메모리 제한이 네임스페이스의 모든 파드에 대한 메모리 요청 양의 최대 두 배가 되도록 한다. + +{{< codenew file="admin/resource/limit-memory-ratio-pod.yaml" >}} + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/limit-memory-ratio-pod.yaml +``` + +다음의 kubectl 명령으로 `limit-memory-ratio-pod` 리밋레인지의 정보를 나타낸다. 
+ +```shell +kubectl describe limitrange/limit-memory-ratio-pod +``` + +```shell +Name: limit-memory-ratio-pod +Namespace: limitrange-demo +Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio +---- -------- --- --- --------------- ------------- ----------------------- +Pod memory - - - - 2 +``` + + +`requests.memory=100Mi` 및 `limits.memory=300Mi`로 파드를 생성한다. + +{{< codenew file="admin/resource/limit-range-pod-3.yaml" >}} + +```shell +kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-3.yaml +``` + +위 예에서 제한/요청 비율(`3`)이 `limit-memory-ratio-pod` 리밋레인지에 지정된 제한 비율(`2`)보다 커서 파드 생성에 실패했다. + +``` +Error from server (Forbidden): error when creating "limit-range-pod-3.yaml": pods "busybox3" is forbidden: memory max limit to request ratio per Pod is 2, but provided ratio is 3.000000. +``` + +## 정리 + +모든 리소스를 해제하려면 `limitrange-demo` 네임스페이스를 삭제한다. + +```shell +kubectl delete ns limitrange-demo +``` +다음 명령을 사용하여 컨텍스트를 `default` 네임스페이스로 변경한다. + +```shell +kubectl config set-context --current --namespace=default +``` + +## 예제 + +- [네임스페이스별 컴퓨팅 리소스를 제한하는 방법에 대한 튜토리얼](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)을 참고하길 바란다. +- [스토리지 사용을 제한하는 방법](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)을 확인하라. +- [네임스페이스별 쿼터에 대한 자세한 예](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)를 참고하길 바란다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +보다 자세한 내용은 [LimitRanger 설계 문서](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)를 참고하길 바란다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/policy/pod-security-policy.md b/content/ko/docs/concepts/policy/pod-security-policy.md new file mode 100644 index 0000000000..13d363aa39 --- /dev/null +++ b/content/ko/docs/concepts/policy/pod-security-policy.md @@ -0,0 +1,635 @@ +--- +title: 파드 시큐리티 폴리시 +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< feature-state state="beta" >}} + +파드 시큐리티 폴리시를 사용하면 파드 생성 및 업데이트에 대한 세분화된 권한을 +부여할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +## 파드 시큐리티 폴리시란? + +_Pod Security Policy_ 는 파드 명세의 보안 관련 측면을 제어하는 ​​클러스터-레벨의 +리소스이다. [파드시큐리티폴리시](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) 오브젝트는 +관련 필드에 대한 기본값뿐만 아니라 시스템에 적용하기 위해 파드가 실행해야만 하는 +조건 셋을 정의한다. 관리자는 +다음을 제어할 수 있다. 
+ +| 제어 측면 | 필드 이름 | +| ----------------------------------------------------| ------------------------------------------- | +| 특권을 가진(privileged) 컨테이너의 실행 | [`privileged`](#privileged) | +| 호스트 네임스페이스의 사용 | [`hostPID`, `hostIPC`](#host-namespaces) | +| 호스트 네트워킹과 포트의 사용 | [`hostNetwork`, `hostPorts`](#host-namespaces) | +| 볼륨 유형의 사용 | [`volumes`](#volumes-and-file-systems) | +| 호스트 파일시스템의 사용 | [`allowedHostPaths`](#volumes-and-file-systems) | +| FlexVolume 드라이버의 화이트리스트 | [`allowedFlexVolumes`](#flexvolume-drivers) | +| 파드 볼륨을 소유한 FSGroup 할당 | [`fsGroup`](#volumes-and-file-systems) | +| 읽기 전용 루트 파일시스템 사용 필요 | [`readOnlyRootFilesystem`](#volumes-and-file-systems) | +| 컨테이너의 사용자 및 그룹 ID | [`runAsUser`, `runAsGroup`, `supplementalGroups`](#users-and-groups) | +| 루트 특권으로의 에스컬레이션 제한 | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#privilege-escalation) | +| 리눅스 기능 | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#capabilities) | +| 컨테이너의 SELinux 컨텍스트 | [`seLinux`](#selinux) | +| 컨테이너에 허용된 Proc 마운트 유형 | [`allowedProcMountTypes`](#allowedprocmounttypes) | +| 컨테이너가 사용하는 AppArmor 프로파일 | [어노테이션](#apparmor) | +| 컨테이너가 사용하는 seccomp 프로파일 | [어노테이션](#seccomp) | +| 컨테이너가 사용하는 sysctl 프로파일 | [`forbiddenSysctls`,`allowedUnsafeSysctls`](#sysctl) | + + +## 파드 시큐리티 폴리시 활성화 + +파드 시큐리티 폴리시 제어는 선택 사항(하지만 권장함)인 +[어드미션 +컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy)로 +구현된다. [어드미션 컨트롤러 활성화](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)하면 +파드시큐리티폴리시가 적용되지만, +정책을 승인하지 않고 활성화하면 클러스터에 +**파드가 생성되지 않는다.** + +파드 시큐리티 폴리시 API(`policy/v1beta1/podsecuritypolicy`)는 +어드미션 컨트롤러와 독립적으로 활성화되므로 기존 클러스터의 경우 +어드미션 컨트롤러를 활성화하기 전에 정책을 추가하고 권한을 +부여하는 것이 좋다. + +## 정책 승인 + +파드시큐리티폴리시 리소스가 생성되면 아무 것도 수행하지 않는다. 이를 사용하려면 +요청 사용자 또는 대상 파드의 +[서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/)는 +정책에서 `use` 동사를 허용하여 정책을 사용할 권한이 있어야 한다. + +대부분의 쿠버네티스 파드는 사용자가 직접 만들지 않는다. 대신 일반적으로 +컨트롤러 관리자를 통해 +[디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/), +[레플리카셋](/ko/docs/concepts/workloads/controllers/replicaset/), 또는 기타 +템플릿 컨트롤러의 일부로 간접적으로 생성된다. 컨트롤러에 정책에 대한 접근 권한을 부여하면 +해당 컨트롤러에 의해 생성된 *모든* 파드에 대한 접근 권한이 부여되므로 정책을 승인하는 +기본 방법은 파드의 서비스 어카운트에 대한 접근 권한을 +부여하는 것이다([예](#다른-파드를-실행) 참고). + +### RBAC을 통한 방법 + +[RBAC](/docs/reference/access-authn-authz/rbac/)은 표준 쿠버네티스 권한 부여 모드이며, +정책 사용 권한을 부여하는 데 쉽게 사용할 수 있다. + +먼저, `Role` 또는 `ClusterRole`은 원하는 정책을 `use` 하려면 접근 권한을 부여해야 한다. +접근 권한을 부여하는 규칙은 다음과 같다. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +``` + +그런 다음 `(Cluster)Role`이 승인된 사용자에게 바인딩된다. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +# Authorize specific service accounts: +- kind: ServiceAccount + name: + namespace: +# Authorize specific users (not recommended): +- kind: User + apiGroup: rbac.authorization.k8s.io + name: +``` + +`RoleBinding`(`ClusterRoleBinding` 아님)을 사용하는 경우, 바인딩과 동일한 네임스페이스에서 +실행되는 파드에 대해서만 사용 권한을 부여한다. 네임스페이스에서 실행되는 모든 파드에 접근 권한을 +부여하기 위해 시스템 그룹과 쌍을 이룰 수 있다. 
+```yaml +# Authorize all service accounts in a namespace: +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:serviceaccounts +# Or equivalently, all authenticated users in a namespace: +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:authenticated +``` + +RBAC 바인딩에 대한 자세한 예는, +[역할 바인딩 예제](/docs/reference/access-authn-authz/rbac#role-binding-examples)를 참고하길 바란다. +파드시큐리티폴리시 인증에 대한 전체 예제는 +[아래](#예제)를 참고하길 바란다. + + +### 문제 해결 + +- [컨트롤러 관리자](/docs/admin/kube-controller-manager/)는 +[보안 API 포트](/docs/reference/access-authn-authz/controlling-access/)에 대해 실행해야 하며, +슈퍼유저 권한이 없어야 한다. 그렇지 않으면 요청이 인증 및 권한 부여 모듈을 우회하고, +모든 파드시큐리티폴리시 오브젝트가 허용되며 +사용자는 특권있는 컨테이너를 만들 수 있다. 컨트롤러 관리자 권한 구성에 대한 자세한 +내용은 [컨트롤러 역할](/docs/reference/access-authn-authz/rbac/#controller-roles)을 +참고하길 바란다. + +## 정책 순서 + +파드 생성 및 업데이트를 제한할 뿐만 아니라 파드 시큐리티 폴리시를 사용하여 +제어하는 ​​많은 필드에 기본값을 제공할 수도 있다. 여러 정책을 +사용할 수 있는 경우 파드 시큐리티 폴리시 컨트롤러는 +다음 기준에 따라 정책을 선택한다. + +1. 기본 설정을 변경하거나 파드를 변경하지 않고 파드를 있는 그대로 허용하는 파드시큐리티폴리시가 + 선호된다. 이러한 비-변이(non-mutating) 파드시큐리티폴리시의 + 순서는 중요하지 않다. +2. 파드를 기본값으로 설정하거나 변경해야 하는 경우, 파드를 허용할 첫 번째 파드시큐리티폴리시 + (이름순)가 선택된다. + +{{< note >}} +업데이트 작업 중(파드 스펙에 대한 변경이 허용되지 않는 동안) 비-변이 파드시큐리티폴리시만 +파드의 유효성을 검사하는 데 사용된다. +{{< /note >}} + +## 예제 + +_이 예에서는 파드시큐리티폴리시 어드미션 컨트롤러가 활성화된 클러스터가 실행 중이고 +클러스터 관리자 권한이 있다고 가정한다._ + +### 설정 + +이 예제와 같이 네임스페이스와 서비스 어카운트를 설정한다. +이 서비스 어카운트를 사용하여 관리자가 아닌 사용자를 조정한다. + +```shell +kubectl create namespace psp-example +kubectl create serviceaccount -n psp-example fake-user +kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user +``` + +어떤 사용자로 활동하고 있는지 명확하게 하고 입력 내용을 저장하려면 2개의 별칭(alias)을 +만든다. + +```shell +alias kubectl-admin='kubectl -n psp-example' +alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example' +``` + +### 정책과 파드 생성 + +파일에서 예제 파드시큐리티폴리시 오브젝트를 정의한다. 이는 특권있는 파드를 +만들지 못하게 하는 정책이다. +파드시큐리티폴리시 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +{{< codenew file="policy/example-psp.yaml" >}} + +그리고 kubectl로 생성한다. + +```shell +kubectl-admin create -f example-psp.yaml +``` + +이제 권한이 없는 사용자로서 간단한 파드를 생성해보자. + +```shell +kubectl-user create -f- <}} +이 방법은 권장하지 않는다! 선호하는 방법은 [다음 절](#다른-파드를-실행)을 +참고하길 바란다. +{{< /note >}} + +```shell +kubectl-admin create role psp:unprivileged \ + --verb=use \ + --resource=podsecuritypolicy \ + --resource-name=example +role "psp:unprivileged" created + +kubectl-admin create rolebinding fake-user:psp:unprivileged \ + --role=psp:unprivileged \ + --serviceaccount=psp-example:fake-user +rolebinding "fake-user:psp:unprivileged" created + +kubectl-user auth can-i use podsecuritypolicy/example +yes +``` + +이제 파드 생성을 다시 시도하자. + +```shell +kubectl-user create -f- <}} + +다음은 권한이 없는 사용자로서의 실행을 필요로 하고, 루트로의 에스컬레이션(escalation) 가능성을 차단하고, +여러 보안 메커니즘을 사용을 필요로 하는 제한적 +정책의 예제이다. + +{{< codenew file="policy/restricted-psp.yaml" >}} + +## 정책 레퍼런스 + +### 특권을 가진 + +**Privileged** - 파드의 컨테이너가 특권 모드를 사용할 수 있는지 여부를 결정한다. +기본적으로 컨테이너는 호스트의 모든 장치에 접근할 수 없지만 +"특권을 가진" 컨테이너는 호스트의 모든 장치에 접근할 수 있다. 이것은 +컨테이너가 호스트에서 실행되는 프로세스와 거의 동일한 접근을 허용한다. +이것은 네트워크 스택 조작 및 장치 접근과 같은 +리눅스 기능을 사용하려는 컨테이너에 유용하다. + +### 호스트 네임스페이스 + +**HostPID** - 파드 컨테이너가 호스트 프로세스 ID 네임스페이스를 공유할 수 있는지 여부를 +제어한다. ptrace와 함께 사용하면 컨테이너 외부로 권한을 에스컬레이션하는 데 사용할 수 +있다(ptrace는 기본적으로 금지되어 있음). + +**HostIPC** - 파드 컨테이너가 호스트 IPC 네임스페이스를 공유할 수 있는지 여부를 +제어한다. + +**HostNetwork** - 파드가 노드 네트워크 네임스페이스를 사용할 수 있는지 여부를 제어한다. 
+이렇게 하면 파드에 루프백 장치에 접근 권한을 주고, 서비스는 로컬호스트(localhost)를 리스닝할 수 있으며, +동일한 노드에 있는 다른 파드의 네트워크 활동을 스누핑(snoop)하는 데 +사용할 수 있다. + +**HostPorts** - 호스트 네트워크 네임스페이스에 허용되는 포트 범위의 화이트리스트(whitelist)를 +제공한다. `min`과 `max`를 포함하여 `HostPortRange`의 목록으로 정의된다. +기본값은 허용하는 호스트 포트 없음(no allowed host ports)이다. + +### 볼륨 및 파일시스템 + +**Volumes** - 허용되는 볼륨 유형의 화이트리스트를 제공한다. 허용 가능한 값은 +볼륨을 생성할 때 정의된 볼륨 소스에 따른다. 볼륨 유형의 전체 목록은 +[볼륨 유형들](/ko/docs/concepts/storage/volumes/#볼륨-유형들)에서 참고한다. +또한 `*`를 사용하여 모든 볼륨 유형을 +허용할 수 있다. + +새 PSP에 허용되는 볼륨의 **최소 권장 셋** 은 다음과 같다. + +- 컨피그맵 +- 다운워드API +- emptyDir +- 퍼시스턴트볼륨클레임 +- 시크릿 +- 프로젝티드(projected) + +{{< warning >}} +파드시큐리티폴리시는 `PersistentVolumeClaim`이 참조할 수 있는 `PersistentVolume` +오브젝트의 유형을 제한하지 않으며 hostPath 유형 +`PersistentVolumes`은 읽기-전용 접근 모드를 지원하지 않는다. 신뢰할 수 있는 사용자만 +`PersistentVolume` 오브젝트를 생성할 수 있는 권한을 부여 받아야 한다. +{{< /warning >}} + +**FSGroup** - 일부 볼륨에 적용되는 보충 그룹(supplemental group)을 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - 하나 이상의 `range`를 지정해야 한다. 기본값을 제공하지 않고 +`FSGroups`을 설정하지 않은 상태로 둘 수 있다. `FSGroups`이 설정된 경우 모든 범위에 대해 +유효성을 검사한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `fsGroup` ID의 지정도 허용한다. + +**AllowedHostPaths** - hostPath 볼륨에서 사용할 수 있는 호스트 경로의 화이트리스트를 +지정한다. 빈 목록은 사용되는 호스트 경로에 제한이 없음을 의미한다. +이는 단일 `pathPrefix` 필드가 있는 오브젝트 목록으로 정의되며, hostPath 볼륨은 +허용된 접두사로 시작하는 경로를 마운트할 수 있으며 `readOnly` 필드는 +읽기-전용으로 마운트 되어야 함을 나타낸다. +예를 들면 다음과 같습니다. + +```yaml +allowedHostPaths: + # 이 정책은 "/foo", "/foo/", "/foo/bar" 등을 허용하지만, + # "/fool", "/etc/foo" 등은 허용하지 않는다. + # "/foo/../" 는 절대 유효하지 않다. + - pathPrefix: "/foo" + readOnly: true # 읽기 전용 마운트만 허용 +``` + +{{< warning >}}호스트 파일시스템에 제한없는 접근을 부여하며, 컨테이너가 특권을 에스컬레이션 +(다른 컨테이너들에 있는 데이터를 읽고, 시스템 서비스의 자격 증명을 어뷰징(abusing)하는 등)할 +수 있도록 만드는 다양한 방법이 있다. 예를 들면, Kubelet과 같다. + +쓰기 가능한 hostPath 디렉토리 볼륨을 사용하면, 컨테이너가 `pathPrefix` 외부의 +호스트 파일시스템에 대한 통행을 허용하는 방식으로 컨테이너의 파일시스템 쓰기(write)를 허용한다. +쿠버네티스 1.11 이상 버전에서 사용 가능한 `readOnly: true`는 지정된 `pathPrefix`에 대한 +접근을 효과적으로 제한하기 위해 **모든** `allowedHostPaths`에서 사용해야 한다. +{{< /warning >}} + +**ReadOnlyRootFilesystem** - 컨테이너는 읽기-전용 루트 파일시스템(즉, 쓰기 가능한 레이어 없음)으로 +실행해야 한다. + +### FlexVolume 드라이버 + +flexvolume에서 사용할 수 있는 FlexVolume 드라이버의 화이트리스트를 지정한다. +빈 목록 또는 nil은 드라이버에 제한이 없음을 의미한다. +[`volumes`](#볼륨-및-파일시스템) 필드에 `flexVolume` 볼륨 유형이 포함되어 +있는지 확인한다. 그렇지 않으면 FlexVolume 드라이버가 허용되지 않는다. + +예를 들면 다음과 같다. + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: allow-flex-volumes +spec: + # ... 다른 스펙 필드 + volumes: + - flexVolume + allowedFlexVolumes: + - driver: example/lvm + - driver: example/cifs +``` + +### 사용자 및 그룹 + +**RunAsUser** - 컨테이너를 실행할 사용자 ID를 제어힌다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MustRunAsNonRoot* - 파드가 0이 아닌 `runAsUser`로 제출되거나 +이미지에 `USER` 지시문이 정의되어 있어야 한다(숫자 UID 사용). `runAsNonRoot` 또는 +`runAsUser` 설정을 지정하지 않은 파드는 `runAsNonRoot=true`를 설정하도록 +변경되므로 컨테이너에 0이 아닌 숫자가 정의된 `USER` 지시문이 +필요하다. 기본값은 제공되지 않는다. +이 전략에서는 `allowPrivilegeEscalation=false`를 설정하는 것이 좋다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `runAsUser`의 지정도 허용한다. + +**RunAsGroup** - 컨테이너가 실행될 기본 그룹 ID를 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - `RunAsGroup`을 지정할 필요가 없다. 그러나 `RunAsGroup`을 지정하면 +정의된 범위에 속해야 한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `runAsGroup`의 지정도 허용한다. + + +**SupplementalGroups** - 컨테이너가 추가할 그룹 ID를 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - 하나 이상의 `range`를 지정해야 한다. 
`supplementalGroups`에 +기본값을 제공하지 않고 설정하지 않은 상태로 둘 수 있다. +`supplementalGroups`가 설정된 경우 모든 범위에 대해 유효성을 검증한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `supplementalGroups`의 지정도 +허용한다. + +### 권한 에스컬레이션 + +이 옵션은 `allowPrivilegeEscalation` 컨테이너 옵션을 제어한다. 이 bool은 +컨테이너 프로세스에서 +[`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) +플래그가 설정되는지 여부를 직접 제어한다. 이 플래그는 `setuid` 바이너리가 +유효 사용자 ID를 변경하지 못하게 하고 파일에 추가 기능을 활성화하지 못하게 +한다(예: `ping` 도구 사용을 못하게 함). `MustRunAsNonRoot`를 효과적으로 +강제하려면 이 동작이 필요하다. + +**AllowPrivilegeEscalation** - 사용자가 컨테이너의 보안 컨텍스트를 +`allowPrivilegeEscalation=true`로 설정할 수 있는지 여부를 게이트한다. +이 기본값은 setuid 바이너리를 중단하지 않도록 허용한다. 이를 `false`로 설정하면 +컨테이너의 하위 프로세스가 상위 프로세스보다 더 많은 권한을 얻을 수 없다. + +**DefaultAllowPrivilegeEscalation** - `allowPrivilegeEscalation` 옵션의 +기본값을 설정한다. 이것이 없는 기본 동작은 setuid 바이너리를 중단하지 않도록 +권한 에스컬레이션을 허용하는 것이다. 해당 동작이 필요하지 않은 경우 이 필드를 사용하여 +기본적으로 허용하지 않도록 설정할 수 있지만 파드는 여전히 `allowPrivilegeEscalation`을 +명시적으로 요청할 수 있다. + +### 기능 + +리눅스 기능은 전통적으로 슈퍼유저와 관련된 권한을 보다 세밀하게 분류한다. +이러한 기능 중 일부는 권한 에스컬레이션 또는 컨테이너 분류에 사용될 수 있으며 +파드시큐리티폴리시에 의해 제한될 수 있다. 리눅스 기능에 대한 자세한 내용은 +[기능(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html)을 +참고하길 바란다. + +다음 필드는 대문자로 표기된 기능 이름 목록을 +`CAP_` 접두사 없이 가져온다. + +**AllowedCapabilities** - 컨테이너에 추가될 수 있는 기능의 화이트리스트를 +제공한다. 기본적인 기능 셋은 암시적으로 허용된다. 비어있는 셋은 +기본 셋을 넘어서는 추가 기능이 추가되지 않는 것을 +의미한다. `*`는 모든 기능을 허용하는 데 사용할 수 있다. + +**RequiredDropCapabilities** - 컨테이너에서 삭제해야 하는 기능이다. +이러한 기능은 기본 셋에서 제거되며 추가해서는 안된다. +`RequiredDropCapabilities`에 나열된 기능은 `AllowedCapabilities` 또는 +`DefaultAddCapabilities`에 포함되지 않아야 한다. + +**DefaultAddCapabilities** - 런타임 기본값 외에 기본적으로 컨테이너에 추가되는 기능이다. +도커 런타임을 사용할 때 기본 기능 목록은 +[도커 문서](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities)를 +참고하길 바란다. + +### SELinux + +- *MustRunAs* - `seLinuxOptions`을 구성해야 한다. +`seLinuxOptions`을 기본값으로 사용한다. `seLinuxOptions`에 대해 유효성을 검사한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `seLinuxOptions`의 지정도 +허용한다. + +### AllowedProcMountTypes + +`allowedProcMountTypes`는 허용된 ProcMountTypes의 화이트리스트이다. +비어 있거나 nil은 `DefaultProcMountType`만 사용할 수 있음을 나타낸다. + +`DefaultProcMount`는 /proc의 읽기 전용 및 마스킹(masking)된 경로에 컨테이너 런타임 +기본값을 사용한다. 대부분의 컨테이너 런타임은 특수 장치나 정보가 실수로 보안에 +노출되지 않도록 /proc의 특정 경로를 마스킹한다. 이것은 문자열 +`Default`로 표시된다. + +유일하게 다른 ProcMountType은 `UnmaskedProcMount`로, 컨테이너 런타임의 +기본 마스킹 동작을 무시하고 새로 작성된 /proc 컨테이너가 수정없이 +그대로 유지되도록 한다. 이 문자열은 +`Unmasked`로 표시된다. + +### AppArmor + +파드시큐리티폴리시의 어노테이션을 통해 제어된다. [AppArmor +문서](/docs/tutorials/clusters/apparmor/#podsecuritypolicy-annotations)를 참고하길 바란다. + +### Seccomp + +파드에서 seccomp 프로파일의 사용은 파드시큐리티폴리시의 어노테이션을 통해 +제어할 수 있다. Seccomp는 쿠버네티스의 알파 기능이다. + +**seccomp.security.alpha.kubernetes.io/defaultProfileName** - 컨테이너에 +적용할 기본 seccomp 프로파일을 지정하는 어노테이션이다. 가능한 값은 +다음과 같다. + +- `unconfined` - 대안이 제공되지 않으면 Seccomp가 컨테이너 프로세스에 적용되지 + 않는다(쿠버네티스의 기본값임). +- `runtime/default` - 기본 컨테이너 런타임 프로파일이 사용된다. +- `docker/default` - 도커 기본 seccomp 프로파일이 사용된다. 쿠버네티스 1.11 부터 사용 중단(deprecated) + 되었다. 대신 `runtime/default` 사용을 권장한다. +- `localhost/` - `/`에 있는 노드에서 파일을 프로파일로 + 지정한다. 여기서 ``는 Kubelet의 `--seccomp-profile-root` 플래그를 + 통해 정의된다. + +**seccomp.security.alpha.kubernetes.io/allowedProfileNames** - 파드 seccomp +어노테이션에 허용되는 값을 지정하는 어노테이션. 쉼표로 구분된 +허용된 값의 목록으로 지정된다. 가능한 값은 위에 나열된 값과 +모든 프로파일을 허용하는 `*` 이다. +이 주석이 없으면 기본값을 변경할 수 없다. + +### Sysctl + +기본적으로 모든 안전한 sysctls가 허용된다. + +- `forbiddenSysctls` - 특정 sysctls를 제외한다. 목록에서 안전한 것과 안전하지 않은 sysctls의 조합을 금지할 수 있다. 모든 sysctls 설정을 금지하려면 자체적으로 `*`를 사용한다. 
+- `allowedUnsafeSysctls` - `forbiddenSysctls`에 나열되지 않는 한 기본 목록에서 허용하지 않은 특정 sysctls를 허용한다. + +[Sysctl 문서]( +/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy)를 참고하길 바란다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +API 세부 정보는 [파드 시큐리티 폴리시 레퍼런스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) 참조 + +{{% /capture %}} diff --git a/content/ko/docs/concepts/policy/resource-quotas.md b/content/ko/docs/concepts/policy/resource-quotas.md new file mode 100644 index 0000000000..d86685d986 --- /dev/null +++ b/content/ko/docs/concepts/policy/resource-quotas.md @@ -0,0 +1,600 @@ +--- +title: 리소스 쿼터 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +여러 사용자나 팀이 정해진 수의 노드로 클러스터를 공유할 때 +한 팀이 공정하게 분배된 리소스보다 많은 리소스를 사용할 수 있다는 우려가 있다. + +리소스 쿼터는 관리자가 이 문제를 해결하기 위한 도구이다. + +{{% /capture %}} + + +{{% capture body %}} + +`ResourceQuota` 오브젝트로 정의된 리소스 쿼터는 네임스페이스별 총 리소스 사용을 제한하는 +제약 조건을 제공한다. 유형별로 네임스페이스에서 만들 수 있는 오브젝트 수와 +해당 프로젝트의 리소스가 사용할 수 있는 총 컴퓨트 리소스의 양을 +제한할 수 있다. + +리소스 쿼터는 다음과 같이 작동한다. + +- 다른 팀은 다른 네임스페이스에서 작동한다. 현재 이것은 자발적이지만 ACL을 통해 이 필수 사항을 + 적용하기 위한 지원이 계획되어 있다. +- 관리자는 각 네임스페이스에 대해 하나의 `ResourceQuota`를 생성한다. +- 사용자는 네임스페이스에서 리소스(파드, 서비스 등)를 생성하고 쿼터 시스템은 + 사용량을 추적하여 `ResourceQuota`에 정의된 하드(hard) 리소스 제한을 초과하지 않도록 한다. +- 리소스를 생성하거나 업데이트할 때 쿼터 제약 조건을 위반하면 위반된 제약 조건을 설명하는 + 메시지와 함께 HTTP 상태 코드 `403 FORBIDDEN`으로 요청이 실패한다. +- `cpu`, `memory`와 같은 컴퓨트 리소스에 대해 네임스페이스에서 쿼터가 활성화된 경우 + 사용자는 해당값에 대한 요청 또는 제한을 지정해야 한다. 그렇지 않으면 쿼터 시스템이 + 파드 생성을 거부할 수 있다. 힌트: 컴퓨트 리소스 요구 사항이 없는 파드를 기본값으로 설정하려면 `LimitRanger` 어드미션 컨트롤러를 사용하자. + 이 문제를 회피하는 방법에 대한 예제는 [연습](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)을 참고하길 바란다. + +`ResourceQuota` 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +네임스페이스와 쿼터를 사용하여 만들 수 있는 정책의 예는 다음과 같다. + +- 용량이 32GiB RAM, 16 코어인 클러스터에서 A 팀이 20GiB 및 10 코어를 사용하고 + B 팀은 10GiB 및 4 코어를 사용하게 하고 2GiB 및 2 코어를 향후 할당을 위해 보유하도록 한다. +- "testing" 네임스페이스를 1 코어 및 1GiB RAM을 사용하도록 제한한다. + "production" 네임스페이스에는 원하는 양을 사용하도록 한다. + +클러스터의 총 용량이 네임스페이스의 쿼터 합보다 작은 경우 리소스에 대한 경합이 있을 수 있다. +이것은 선착순으로 처리된다. + +경합이나 쿼터 변경은 이미 생성된 리소스에 영향을 미치지 않는다. + +## 리소스 쿼터 활성화 + +많은 쿠버네티스 배포판에 기본적으로 리소스 쿼터 지원이 활성화되어 있다. +API 서버 `--enable-admission-plugins=` 플래그의 인수 중 하나로 +`ResourceQuota`가 있는 경우 활성화된다. + +해당 네임스페이스에 `ResourceQuota`가 있는 경우 특정 네임스페이스에 리소스 쿼터가 적용된다. + +## 컴퓨트 리소스 쿼터 + +지정된 네임스페이스에서 요청할 수 있는 총 [컴퓨트 리소스](/docs/user-guide/compute-resources) 합을 제한할 수 있다. + +다음과 같은 리소스 유형이 지원된다. + +| 리소스 이름 | 설명 | +| --------------------- | ----------------------------------------------------------- | +| `limits.cpu` | 터미널이 아닌 상태의 모든 파드에서 CPU 제한의 합은 이 값을 초과할 수 없음 | +| `limits.memory` | 터미널이 아닌 상태의 모든 파드에서 메모리 제한의 합은 이 값을 초과할 수 없음 | +| `requests.cpu` | 터미널이 아닌 상태의 모든 파드에서 CPU 요청의 합은 이 값을 초과할 수 없음 | +| `requests.memory` | 터미널이 아닌 상태의 모든 파드에서 메모리 요청의 합은 이 값을 초과할 수 없음 | + +### 확장된 리소스에 대한 리소스 쿼터 + +위에서 언급한 리소스 외에도 릴리스 1.10에서는 +[확장된 리소스](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)에 대한 쿼터 지원이 추가되었다. + +확장된 리소스에는 오버커밋(overcommit)이 허용되지 않으므로 하나의 쿼터에서 +동일한 확장된 리소스에 대한 `requests`와 `limits`을 모두 지정하는 것은 의미가 없다. 따라서 확장된 +리소스의 경우 지금은 접두사 `requests.`이 있는 쿼터 항목만 허용된다. + +예를 들어, 리소스 이름이 `nvidia.com/gpu`이고 네임스페이스에서 요청된 총 GPU 수를 4개로 제한하려는 경우, +GPU 리소스를 다음과 같이 쿼터를 정의할 수 있다. + +* `requests.nvidia.com/gpu: 4` + +자세한 내용은 [쿼터 보기 및 설정](#쿼터-보기-및-설정)을 참고하길 바란다. 
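+
+다음은 이 쿼터를 하나의 매니페스트로 표현해 본 예시이다. 여기에 사용한 이름 `gpu-quota`와
+네임스페이스 `gpu-team`은 설명을 위해 가정한 임의의 값이다.
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: gpu-quota      # 설명을 위해 가정한 이름
+  namespace: gpu-team  # 설명을 위해 가정한 네임스페이스
+spec:
+  hard:
+    # 이 네임스페이스의 모든 파드가 요청하는 nvidia.com/gpu의 합을 4로 제한한다.
+    requests.nvidia.com/gpu: 4
+```
+
+이 매니페스트를 적용하면 해당 네임스페이스에서 요청된 GPU의 합이 4를 초과하는 파드 생성은 거부된다.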
+ + +## 스토리지 리소스 쿼터 + +지정된 네임스페이스에서 요청할 수 있는 총 [스토리지 리소스](/ko/docs/concepts/storage/persistent-volumes/) 합을 제한할 수 있다. + +또한 연관된 ​​스토리지 클래스를 기반으로 스토리지 리소스 사용을 제한할 수 있다. + +| 리소스 이름 | 설명 | +| --------------------- | ----------------------------------------------------------- | +| `requests.storage` | 모든 퍼시스턴트 볼륨 클레임에서 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `persistentvolumeclaims` | 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | +| `.storageclass.storage.k8s.io/requests.storage` | storage-class-name과 관련된 모든 퍼시스턴트 볼륨 클레임에서 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `.storageclass.storage.k8s.io/persistentvolumeclaims` | storage-class-name과 관련된 모든 퍼시스턴트 볼륨 클레임에서 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | + +예를 들어, 운영자가 `bronze` 스토리지 클래스와 별도로 `gold` 스토리지 클래스를 사용하여 스토리지에 쿼터를 지정하려는 경우 운영자는 다음과 같이 +쿼터를 정의할 수 있다. + +* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi` +* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi` + +릴리스 1.8에서는 로컬 임시 스토리지에 대한 쿼터 지원이 알파 기능으로 추가되었다. + +| 리소스 이름 | 설명 | +| ------------------------------- |----------------------------------------------------------- | +| `requests.ephemeral-storage` | 네임스페이스의 모든 파드에서 로컬 임시 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `limits.ephemeral-storage` | 네임스페이스의 모든 파드에서 로컬 임시 스토리지 제한의 합은 이 값을 초과할 수 없음 | + +## 오브젝트 수 쿼터 + +1.9 릴리스는 다음 구문을 사용하여 모든 표준 네임스페이스 리소스 유형에 쿼터를 지정하는 지원을 추가했다. + +* `count/.` + +다음은 사용자가 오브젝트 수 쿼터 아래에 배치하려는 리소스 셋의 예이다. + +* `count/persistentvolumeclaims` +* `count/services` +* `count/secrets` +* `count/configmaps` +* `count/replicationcontrollers` +* `count/deployments.apps` +* `count/replicasets.apps` +* `count/statefulsets.apps` +* `count/jobs.batch` +* `count/cronjobs.batch` +* `count/deployments.extensions` + +1.15 릴리스는 동일한 구문을 사용하여 사용자 정의 리소스에 대한 지원을 추가했다. +예를 들어 `example.com` API 그룹에서 `widgets` 사용자 정의 리소스에 대한 쿼터를 생성하려면 `count/widgets.example.com`을 사용한다. + +`count/*` 리소스 쿼터를 사용할 때 서버 스토리지 영역에 있다면 오브젝트는 쿼터에 대해 과금된다. +이러한 유형의 쿼터는 스토리지 리소스 고갈을 방지하는 데 유용하다. 예를 들어, +크기가 큰 서버에서 시크릿 수에 쿼터를 지정할 수 있다. 클러스터에 시크릿이 너무 많으면 실제로 서버와 +컨트롤러가 시작되지 않을 수 있다! 네임스페이스에 너무 많은 작업을 생성하는 +잘못 구성된 크론 잡으로 인해 서비스 거부를 유발하는 것으로부터 보호하기 위해 작업의 쿼터를 지정하도록 선택할 수 있다. + +1.9 릴리스 이전에는 제한된 리소스 셋에서 일반 오브젝트 수 쿼터를 적용할 수 있었다. +또한, 특정 리소스에 대한 쿼터를 유형별로 추가로 제한할 수 있다. + +다음 유형이 지원된다. + +| 리소스 이름 | 설명 | +| ------------------------------- | ------------------------------------------------- | +| `configmaps` | 네임스페이스에 존재할 수 있는 총 구성 맵 수 | +| `persistentvolumeclaims` | 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | +| `pods` | 네임스페이스에 존재할 수 있는 터미널이 아닌 상태의 파드의 총 수. `.status.phase in (Failed, Succeeded)`가 true인 경우 파드는 터미널 상태임 | +| `replicationcontrollers` | 네임스페이스에 존재할 수 있는 총 레플리케이션 컨트롤러 수 | +| `resourcequotas` | 네임스페이스에 존재할 수 있는 총 [리소스 쿼터](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) 수 | +| `services` | 네임스페이스에 존재할 수 있는 총 서비스 수 | +| `services.loadbalancers` | 네임스페이스에 존재할 수 있는 로드 밸런서 유형의 총 서비스 수 | +| `services.nodeports` | 네임스페이스에 존재할 수 있는 노드 포트 유형의 총 서비스 수 | +| `secrets` | 네임스페이스에 존재할 수 있는 총 시크릿 수 | + +예를 들어, `pods` 쿼터는 터미널이 아닌 단일 네임스페이스에서 생성된 `pods` 수를 계산하고 최대값을 적용한다. +사용자가 작은 파드를 많이 생성하여 클러스터의 파드 IP 공급이 고갈되는 경우를 피하기 위해 +네임스페이스에 `pods` 쿼터를 설정할 수 있다. + +## 쿼터 범위 + +각 쿼터에는 연결된 범위 셋이 있을 수 있다. 쿼터는 열거된 범위의 교차 부분과 일치하는 경우에만 +리소스 사용량을 측정한다. + +범위가 쿼터에 추가되면 해당 범위와 관련된 리소스를 지원하는 리소스 수가 제한된다. +허용된 셋 이외의 쿼터에 지정된 리소스는 유효성 검사 오류가 발생한다. 
+ +| 범위 | 설명 | +| ----- | ----------- | +| `Terminating` | `.spec.activeDeadlineSeconds >= 0`에 일치하는 파드 | +| `NotTerminating` | `.spec.activeDeadlineSeconds is nil`에 일치하는 파드 | +| `BestEffort` | 최상의 서비스 품질을 제공하는 파드 | +| `NotBestEffort` | 서비스 품질이 나쁜 파드 | + +`BestEffort` 범위는 다음의 리소스(파드)를 추적하도록 쿼터를 제한한다. + +`Terminating`, `NotTerminating` 및 `NotBestEffort` 범위는 쿼터를 제한하여 다음의 리소스를 추적한다. + +* `cpu` +* `limits.cpu` +* `limits.memory` +* `memory` +* `pods` +* `requests.cpu` +* `requests.memory` + +### PriorityClass별 리소스 쿼터 + +{{< feature-state for_k8s_version="1.12" state="beta" >}} + +특정 [우선 순위](/docs/concepts/configuration/pod-priority-preemption/#pod-priority)로 파드를 생성할 수 있다. +쿼터 스펙의 `scopeSelector` 필드를 사용하여 파드의 우선 순위에 따라 파드의 시스템 리소스 사용을 +제어할 수 있다. + +쿼터 스펙의 `scopeSelector`가 파드를 선택한 경우에만 쿼터가 일치하고 사용된다. + +이 예에서는 쿼터 오브젝트를 생성하여 특정 우선 순위의 파드와 일치시킨다. +예제는 다음과 같이 작동한다. + +- 클러스터의 파드는 "low(낮음)", "medium(중간)", "high(높음)"의 세 가지 우선 순위 클래스 중 하나를 가진다. +- 각 우선 순위마다 하나의 쿼터 오브젝트가 생성된다. + +다음 YAML을 `quota.yml` 파일에 저장한다. + +```yaml +apiVersion: v1 +kind: List +items: +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-high + spec: + hard: + cpu: "1000" + memory: 200Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["high"] +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-medium + spec: + hard: + cpu: "10" + memory: 20Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["medium"] +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-low + spec: + hard: + cpu: "5" + memory: 10Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["low"] +``` + +`kubectl create`를 사용하여 YAML을 적용한다. + +```shell +kubectl create -f ./quota.yml +``` + +```shell +resourcequota/pods-high created +resourcequota/pods-medium created +resourcequota/pods-low created +``` + +`kubectl describe quota`를 사용하여 `Used` 쿼터가 `0`인지 확인하자. + +```shell +kubectl describe quota +``` + +```shell +Name: pods-high +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 1k +memory 0 200Gi +pods 0 10 + + +Name: pods-low +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 5 +memory 0 10Gi +pods 0 10 + + +Name: pods-medium +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 10 +memory 0 20Gi +pods 0 10 +``` + +우선 순위가 "high"인 파드를 생성한다. 다음 YAML을 +`high-priority-pod.yml` 파일에 저장한다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: high-priority +spec: + containers: + - name: high-priority + image: ubuntu + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello; sleep 10;done"] + resources: + requests: + memory: "10Gi" + cpu: "500m" + limits: + memory: "10Gi" + cpu: "500m" + priorityClassName: high +``` + +`kubectl create`로 적용하자. + +```shell +kubectl create -f ./high-priority-pod.yml +``` + +"high" 우선 순위 쿼터가 적용된 `pods-high`에 대한 "Used" 통계가 변경되었고 +다른 두 쿼터는 변경되지 않았는지 확인한다. + +```shell +kubectl describe quota +``` + +```shell +Name: pods-high +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 500m 1k +memory 10Gi 200Gi +pods 1 10 + + +Name: pods-low +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 5 +memory 0 10Gi +pods 0 10 + + +Name: pods-medium +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 10 +memory 0 20Gi +pods 0 10 +``` + +`scopeSelector`는 `operator` 필드에서 다음 값을 지원한다. 
+ +* `In` +* `NotIn` +* `Exist` +* `DoesNotExist` + +## 요청과 제한의 비교 {#requests-vs-limits} + +컴퓨트 리소스를 할당할 때 각 컨테이너는 CPU 또는 메모리에 대한 요청과 제한값을 지정할 수 있다. +쿼터는 값에 대한 쿼터를 지정하도록 구성할 수 있다. + +쿼터에 `requests.cpu`나 `requests.memory`에 지정된 값이 있으면 들어오는 모든 +컨테이너가 해당 리소스에 대한 명시적인 요청을 지정해야 한다. 쿼터에 `limits.cpu`나 +`limits.memory`에 지정된 값이 있으면 들어오는 모든 컨테이너가 해당 리소스에 대한 명시적인 제한을 지정해야 한다. + +## 쿼터 보기 및 설정 + +Kubectl은 쿼터 생성, 업데이트 및 보기를 지원한다. + +```shell +kubectl create namespace myspace +``` + +```shell +cat < compute-resources.yaml +apiVersion: v1 +kind: ResourceQuota +metadata: + name: compute-resources +spec: + hard: + requests.cpu: "1" + requests.memory: 1Gi + limits.cpu: "2" + limits.memory: 2Gi + requests.nvidia.com/gpu: 4 +EOF +``` + +```shell +kubectl create -f ./compute-resources.yaml --namespace=myspace +``` + +```shell +cat < object-counts.yaml +apiVersion: v1 +kind: ResourceQuota +metadata: + name: object-counts +spec: + hard: + configmaps: "10" + persistentvolumeclaims: "4" + pods: "4" + replicationcontrollers: "20" + secrets: "10" + services: "10" + services.loadbalancers: "2" +EOF +``` + +```shell +kubectl create -f ./object-counts.yaml --namespace=myspace +``` + +```shell +kubectl get quota --namespace=myspace +``` + +```shell +NAME AGE +compute-resources 30s +object-counts 32s +``` + +```shell +kubectl describe quota compute-resources --namespace=myspace +``` + +```shell +Name: compute-resources +Namespace: myspace +Resource Used Hard +-------- ---- ---- +limits.cpu 0 2 +limits.memory 0 2Gi +requests.cpu 0 1 +requests.memory 0 1Gi +requests.nvidia.com/gpu 0 4 +``` + +```shell +kubectl describe quota object-counts --namespace=myspace +``` + +```shell +Name: object-counts +Namespace: myspace +Resource Used Hard +-------- ---- ---- +configmaps 0 10 +persistentvolumeclaims 0 4 +pods 0 4 +replicationcontrollers 0 20 +secrets 1 10 +services 0 10 +services.loadbalancers 0 2 +``` + +Kubectl은 `count/.` 구문을 사용하여 모든 표준 네임스페이스 리소스에 대한 +오브젝트 수 쿼터를 지원한다. + +```shell +kubectl create namespace myspace +``` + +```shell +kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 --namespace=myspace +``` + +```shell +kubectl run nginx --image=nginx --replicas=2 --namespace=myspace +``` + +```shell +kubectl describe quota --namespace=myspace +``` + +```shell +Name: test +Namespace: myspace +Resource Used Hard +-------- ---- ---- +count/deployments.extensions 1 2 +count/pods 2 3 +count/replicasets.extensions 1 4 +count/secrets 1 4 +``` + +## 쿼터 및 클러스터 용량 + +`ResourceQuotas`는 클러스터 용량과 무관하다. 그것들은 절대 단위로 표현된다. +따라서 클러스터에 노드를 추가해도 각 네임스페이스에 더 많은 리소스를 +사용할 수 있는 기능이 자동으로 부여되지는 *않는다*. + +가끔 다음과 같은 보다 복잡한 정책이 필요할 수 있다. + + - 여러 팀으로 전체 클러스터 리소스를 비례적으로 나눈다. + - 각 테넌트가 필요에 따라 리소스 사용량을 늘릴 수 있지만, 실수로 리소스가 고갈되는 것을 + 막기 위한 충분한 제한이 있다. + - 하나의 네임스페이스에서 요구를 감지하고 노드를 추가하며 쿼터를 늘린다. + +이러한 정책은 쿼터 사용을 감시하고 다른 신호에 따라 각 네임스페이스의 쿼터 하드 제한을 +조정하는 "컨트롤러"를 작성하여 `ResourceQuotas`를 구성 요소로 +사용하여 구현할 수 있다. + +리소스 쿼터는 통합된 클러스터 리소스를 분할하지만 노드에 대한 제한은 없다. +여러 네임스페이스의 파드가 동일한 노드에서 실행될 수 있다. + +## 기본적으로 우선 순위 클래스 소비 제한 + +파드가 특정 우선 순위, 예를 들어 일치하는 쿼터 오브젝트가 존재하는 경우에만 "cluster-services"가 네임스페이스에 허용되어야 힌다. + +이 메커니즘을 통해 운영자는 특정 우선 순위가 높은 클래스의 사용을 제한된 수의 네임스페이스로 제한할 수 있으며 모든 네임스페이스가 기본적으로 이러한 우선 순위 클래스를 사용할 수 있는 것은 아니다. + +이를 적용하려면 kube-apiserver 플래그 `--admission-control-config-file`을 사용하여 다음 구성 파일의 경로를 전달해야 한다. 
+ +{{< tabs name="example1" >}} +{{% tab name="apiserver.config.k8s.io/v1" %}} +```yaml +apiVersion: apiserver.config.k8s.io/v1 +kind: AdmissionConfiguration +plugins: +- name: "ResourceQuota" + configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: ResourceQuotaConfiguration + limitedResources: + - resource: pods + matchScopes: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` +{{% /tab %}} +{{% tab name="apiserver.k8s.io/v1alpha1" %}} +```yaml +# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 +apiVersion: apiserver.k8s.io/v1alpha1 +kind: AdmissionConfiguration +plugins: +- name: "ResourceQuota" + configuration: + # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, ResourceQuotaConfiguration + apiVersion: resourcequota.admission.k8s.io/v1beta1 + kind: Configuration + limitedResources: + - resource: pods + matchScopes: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` +{{% /tab %}} +{{< /tabs >}} + +이제 "cluster-services" 파드는 `scopeSelector`와 일치하는 쿼터 오브젝트가 있는 네임스페이스에서만 허용된다. +예를 들면 다음과 같다. +```yaml + scopeSelector: + matchExpressions: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` + +자세한 내용은 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)와 [우선 순위 클래스에 대한 쿼터 지원 디자인 문서](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)를 참고하길 바란다. + +## 예제 + +[리소스 쿼터를 사용하는 방법에 대한 자세한 예](/docs/tasks/administer-cluster/quota-api-object/)를 참고하길 바란다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +자세한 내용은 [리소스쿼터 디자인 문서](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)를 참고하길 바란다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/scheduling/kube-scheduler.md b/content/ko/docs/concepts/scheduling/kube-scheduler.md new file mode 100644 index 0000000000..a7c33265f2 --- /dev/null +++ b/content/ko/docs/concepts/scheduling/kube-scheduler.md @@ -0,0 +1,97 @@ +--- +title: 쿠버네티스 스케줄러 +content_template: templates/concept +weight: 50 +--- + +{{% capture overview %}} + +쿠버네티스에서 _스케줄링_ 은 {{< glossary_tooltip term_id="kubelet" >}}이 +파드를 실행할 수 있도록 {{< glossary_tooltip text="파드" term_id="pod" >}}가 +{{< glossary_tooltip text="노드" term_id="node" >}}에 적합한지 확인하는 것을 말한다. + +{{% /capture %}} + +{{% capture body %}} + +## 스케줄링 개요 {#scheduling} + +스케줄러는 노드가 할당되지 않은 새로 생성된 파드를 감시한다. +스케줄러가 발견한 모든 파드에 대해 스케줄러는 해당 파드가 실행될 +최상의 노드를 찾는 책임을 진다. 스케줄러는 +아래 설명된 스케줄링 원칙을 고려하여 이 배치 결정을 +하게 된다. + +파드가 특정 노드에 배치되는 이유를 이해하려고 하거나 +사용자 정의된 스케줄러를 직접 구현하려는 경우 이 +페이지를 통해서 스케줄링에 대해 배울 수 있을 것이다. + +## kube-scheduler + +[kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)는 +쿠버네티스의 기본 스케줄러이며 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}의 +일부로 실행된다. +kube-scheduler는 원하거나 필요에 따라 자체 스케줄링 컴포넌트를 +만들고 대신 사용할 수 있도록 설계되었다. + +새로 생성된 모든 파드 또는 예약되지 않은 다른 파드에 대해 kube-scheduler는 +실행할 최적의 노드를 선택한다. 그러나 파드의 모든 컨테이너에는 +리소스에 대한 요구사항이 다르며 모든 파드에도 +요구사항이 다르다. 따라서 기존 노드들은 +특정 스케줄링 요구사항에 따라 필터링 되어야 한다. + +클러스터에서 파드에 대한 스케줄링 요구사항을 충족하는 노드를 +_실행 가능한(feasible)_ 노드라고 한다. 적합한 노드가 없으면 스케줄러가 +배치할 수 있을 때까지 파드가 스케줄 되지 않은 상태로 유지된다. + +스케줄러는 파드가 실행 가능한 노드를 찾은 다음 실행 가능한 노드의 +점수를 측정하는 기능 셋을 수행하고 실행 가능한 노드 중에서 가장 높은 점수를 +가진 노드를 선택하여 파드를 실행한다. 그런 다음 스케줄러는 +_바인딩_ 이라는 프로세스에서 이 결정에 대해 API 서버에 알린다. + +스케줄링 결정을 위해 고려해야 할 요소에는 +개별 및 집단 리소스 요구사항, 하드웨어 / 소프트웨어 / +정책 제한조건, 어피니티 및 안티-어피니티 명세, 데이터 +지역성(data locality), 워크로드 간 간섭 등이 포함된다. 
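+
+예를 들어, 다음은 이러한 요소 중 리소스 요구사항과 노드 어피니티를 함께 지정해 본 간단한 파드 예시이다.
+여기에 사용한 `disktype=ssd` 노드 레이블은 설명을 위해 가정한 값이며, 스케줄러는 이 요구사항을
+만족하는 노드만 실행 가능한 노드로 간주한다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: scheduling-example   # 설명을 위해 가정한 이름
+spec:
+  containers:
+  - name: app
+    image: nginx
+    resources:
+      requests:              # 필터링 단계에서 고려되는 리소스 요구사항
+        cpu: "500m"
+        memory: "256Mi"
+  affinity:
+    nodeAffinity:            # 필터링 단계에서 고려되는 어피니티 명세
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: disktype    # 설명을 위해 가정한 노드 레이블
+            operator: In
+            values:
+            - ssd
+```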
+ +### kube-scheduler에서 노드 선택 {#kube-scheduler-implementation} + +kube-scheduler는 2단계 작업에서 파드에 대한 노드를 선택한다. + +1. 필터링 +1. 스코어링(scoring) + +_필터링_ 단계는 파드를 스케줄링 할 수 있는 노드 셋을 +찾는다. 예를 들어 PodFitsResources 필터는 +후보 노드가 파드의 특정 리소스 요청을 충족시키기에 충분한 가용 리소스가 +있는지 확인한다. 이 단계 다음에 노드 목록에는 적합한 노드들이 +포함된다. 하나 이상의 노드가 포함된 경우가 종종 있을 것이다. 목록이 비어 있으면 +해당 파드는 (아직) 스케줄링 될 수 없다. + +_스코어링_ 단계에서 스케줄러는 목록에 남아있는 노드의 순위를 지정하여 +가장 적합한 파드 배치를 선택한다. 스케줄러는 사용 중인 스코어링 규칙에 따라 +이 점수를 기준으로 필터링에서 통과된 각 노드에 대해 점수를 지정한다. + +마지막으로 kube-scheduler는 파드를 순위가 가장 높은 노드에 할당한다. +점수가 같은 노드가 두 개 이상인 경우 kube-scheduler는 +이들 중 하나를 임의로 선택한다. + +스케줄러의 필터링 및 스코어링 동작을 구성하는 데 지원되는 두 가지 +방법이 있다. + +1. [스케줄링 정책](/docs/reference/scheduling/policies)을 사용하면 + 필터링을 위한 _단정(Predicates)_ 및 스코어링을 위한 _우선순위(Priorities)_ 를 구성할 수 있다. +1. [스케줄링 프로파일](/docs/reference/scheduling/profiles)을 사용하면 + `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 등의 + 다른 스케줄링 단계를 구현하는 플러그인을 구성할 수 있다. 다른 프로파일을 실행하도록 + kube-scheduler를 구성할 수도 있다. + +{{% /capture %}} +{{% capture whatsnext %}} +* [스케줄러 성능 튜닝](/ko/docs/concepts/scheduling/scheduler-perf-tuning/)에 대해 읽기 +* [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)에 대해 읽기 +* kube-scheduler의 [레퍼런스 문서](/docs/reference/command-line-tools-reference/kube-scheduler/) 읽기 +* [멀티 스케줄러 구성하기](/docs/tasks/administer-cluster/configure-multiple-schedulers/)에 대해 배우기 +* [토폴로지 관리 정책](/docs/tasks/administer-cluster/topology-manager/)에 대해 배우기 +* [파드 오버헤드](/docs/concepts/configuration/pod-overhead/)에 대해 배우기 +{{% /capture %}} diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md index 03d8700cd2..dac88a68c3 100644 --- a/content/ko/docs/concepts/services-networking/ingress.md +++ b/content/ko/docs/concepts/services-networking/ingress.md @@ -71,6 +71,7 @@ spec: - http: paths: - path: /testpath + pathType: Prefix backend: serviceName: test servicePort: 80 @@ -115,6 +116,84 @@ spec: 만약 인그레스 오브젝트의 HTTP 요청과 일치하는 호스트 또는 경로가 없으면, 트래픽은 기본 백엔드로 라우팅 된다. +### 경로(Path) 유형 + +인그레스의 각 경로에는 해당하는 경로 유형이 있다. 지원되는 세 가지의 경로 +유형이 있다. + +* _`ImplementationSpecific`_ (기본): 이 경로 유형의 일치 여부는 IngressClass에 따라 + 달라진다. 이를 구현할 때 별도 pathType으로 처리하거나, `Prefix` 또는 `Exact` + 경로 유형과 같이 동일하게 처리할 수 있다. + +* _`Exact`_: URL 경로의 대소문자를 엄격하게 일치시킨다. + +* _`Prefix`_: URL 경로의 접두사를 `/` 를 기준으로 분리한 값과 일치시킨다. + 일치는 대소문자를 구분하고, + 요소별로 경로 요소에 대해 수행한다. + 모든 _p_ 가 요청 경로의 요소별 접두사가 _p_ 인 경우 + 요청은 _p_ 경로에 일치한다. + {{< note >}} + 경로의 마지막 요소가 요청 경로에 있는 마지막 요소의 + 하위 문자열인 경우에는 일치하지 않는다(예시: + `/foo/bar` 와 `/foo/bar/baz` 와 일치하지만, `/foo/barbaz` 는 일치하지 않는다). + {{< /note >}} + +#### 다중 일치 +경우에 따라 인그레스의 여러 경로가 요청과 일치할 수 있다. +이 경우 가장 긴 일치하는 경로가 우선하게 된다. 두 개의 경로가 +여전히 동일하게 일치하는 경우 접두사(prefix) 경로 유형보다 +정확한(exact) 경로 유형을 가진 경로가 사용 된다. + +## 인그레스 클래스 + +인그레스는 서로 다른 컨트롤러에 의해 구현될 수 있으며, 종종 다른 구성으로 +구현될 수 있다. 각 인그레스에서는 클래스를 구현해야하는 컨트롤러 +이름을 포함하여 추가 구성이 포함된 IngressClass +리소스에 대한 참조 클래스를 지정해야 한다. + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: IngressClass +metadata: + name: external-lb +spec: + controller: example.com/ingress-controller + parameters: + apiGroup: k8s.example.com/v1alpha + kind: IngressParameters + name: external-lb +``` + +IngressClass 리소스에는 선택적인 파라미터 필드가 있다. 이 클래스에 대한 +추가 구성을 참조하는데 사용할 수 있다. + +### 사용중단(Deprecated) 어노테이션 + +쿠버네티스 1.18에 IngressClass 리소스 및 `ingressClassName` 필드가 추가되기 +전에 인그레스 클래스는 인그레스에서 +`kubernetes.io/ingress.class` 어노테이션으로 지정되었다. 이 어노테이션은 +공식적으로 정의된 것은 아니지만, 인그레스 컨트롤러에서 널리 지원되었었다. + +인그레스의 최신 `ingressClassName` 필드는 해당 어노테이션을 +대체하지만, 직접적으로 해당하는 것은 아니다. 
어노테이션은 일반적으로 +인그레스를 구현해야 하는 인그레스 컨트롤러의 이름을 참조하는 데 사용되었지만, +이 필드는 인그레스 컨트롤러의 이름을 포함하는 추가 인그레스 구성이 +포함된 인그레스 클래스 리소스에 대한 참조이다. + +### 기본 인그레스 클래스 + +특정 IngressClass를 클러스터의 기본 값으로 표시할 수 있다. IngressClass +리소스에서 `ingressclass.kubernetes.io/is-default-class` 를 `true` 로 +설정하면 `ingressClassName` 필드가 지정되지 않은 +새 인그레스에게 기본 IngressClass가 할당된다. + +{{< caution >}} +클러스터의 기본값으로 표시된 IngressClass가 두 개 이상 있는 경우 +어드미션 컨트롤러에서 `ingressClassName` 이 지정되지 않은 +새 인그레스 오브젝트를 생성할 수 없다. 클러스터에서 최대 1개의 IngressClass가 +기본값으로 표시하도록 해서 이 문제를 해결할 수 있다. +{{< /caution >}} + ## 인그레스 유형들 ### 단일 서비스 인그레스 diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index 3072c277a0..1b19121672 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -202,6 +202,17 @@ API 리소스이다. 개념적으로 엔드포인트와 매우 유사하지만, 엔드포인트슬라이스는 [엔드포인트슬라이스](/ko/docs/concepts/services-networking/endpoint-slices/)에서 자세하게 설명된 추가적인 속성 및 기능을 제공한다. +### 애플리케이션 프로토콜 + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +AppProtocol 필드는 각 서비스 포트에 사용될 애플리케이션 프로토콜을 +지정하는 방법을 제공한다. + +알파 기능으로 이 필드는 기본적으로 활성화되어 있지 않다. 이 필드를 사용하려면, +[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)에서 +`ServiceAppProtocol` 을 활성화해야 한다. + ## 가상 IP와 서비스 프록시 쿠버네티스 클러스터의 모든 노드는 `kube-proxy`를 실행한다. `kube-proxy`는 diff --git a/content/ko/docs/concepts/storage/dynamic-provisioning.md b/content/ko/docs/concepts/storage/dynamic-provisioning.md new file mode 100644 index 0000000000..11564490ec --- /dev/null +++ b/content/ko/docs/concepts/storage/dynamic-provisioning.md @@ -0,0 +1,131 @@ +--- +title: 동적 볼륨 프로비저닝 +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +동적 볼륨 프로비저닝을 통해 온-디맨드 방식으로 스토리지 볼륨을 생성할 수 있다. +동적 프로비저닝이 없으면 클러스터 관리자는 클라우드 또는 스토리지 +공급자에게 수동으로 요청해서 새 스토리지 볼륨을 생성한 다음, 쿠버네티스에 +표시하기 위해 [`PersistentVolume` 오브젝트](/ko/docs/concepts/storage/persistent-volumes/)를 +생성해야 한다. 동적 프로비저닝 기능을 사용하면 클러스터 관리자가 +스토리지를 사전 프로비저닝 할 필요가 없다. 대신 사용자가 +스토리지를 요청하면 자동으로 프로비저닝 한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 배경 + +동적 볼륨 프로비저닝의 구현은 `storage.k8s.io` API 그룹의 `StorageClass` +API 오브젝트를 기반으로 한다. 클러스터 관리자는 볼륨을 프로비전하는 +*볼륨 플러그인* (프로비저너라고도 알려짐)과 프로비저닝시에 프로비저너에게 +전달할 파라미터 집합을 지정하는 `StorageClass` +오브젝트를 필요한 만큼 정의할 수 있다. +클러스터 관리자는 클러스터 내에서 사용자 정의 파라미터 집합을 +사용해서 여러 가지 유형의 스토리지 (같거나 다른 스토리지 시스템들)를 +정의하고 노출시킬 수 있다. 또한 이 디자인을 통해 최종 사용자는 +스토리지 프로비전 방식의 복잡성과 뉘앙스에 대해 걱정할 필요가 없다. 하지만, +여전히 여러 스토리지 옵션들을 선택할 수 있다. + +스토리지 클래스에 대한 자세한 정보는 +[여기](/docs/concepts/storage/storage-classes/)에서 찾을 수 있다. + +## 동적 프로비저닝 활성화하기 + +동적 프로비저닝을 활성화하려면 클러스터 관리자가 사용자를 위해 하나 이상의 StorageClass +오브젝트를 사전 생성해야 한다. +StorageClass 오브젝트는 동적 프로비저닝이 호출될 때 사용할 프로비저너와 +해당 프로비저너에게 전달할 파라미터를 정의한다. +StorageClass 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다. + +다음 매니페스트는 표준 디스크와 같은 퍼시스턴트 디스크를 프로비전하는 +스토리지 클래스 "slow"를 만든다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard +``` + +다음 매니페스트는 SSD와 같은 퍼시스턴트 디스크를 프로비전하는 +스토리지 클래스 "fast"를 만든다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: fast +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-ssd +``` + +## 동적 프로비저닝 사용하기 + +사용자는 `PersistentVolumeClaim` 에 스토리지 클래스를 포함시켜 동적으로 프로비전된 +스토리지를 요청한다. 쿠버네티스 v1.6 이전에는 `volume.beta.kubernetes.io/storage-class` +어노테이션을 통해 수행되었다. 그러나 이 어노테이션은 +v1.6부터 더 이상 사용하지 않는다. 
사용자는 이제 `PersistentVolumeClaim` 오브젝트의 +`storageClassName` 필드를 사용할 수 있기에 대신하여 사용해야 한다. 이 필드의 값은 +관리자가 구성한 `StorageClass` 의 이름과 +일치해야 한다. ([아래](#동적-프로비저닝-활성화하기)를 참고) + +예를 들어 “fast” 스토리지 클래스를 선택하려면 다음과 +같은 `PersistentVolumeClaim` 을 생성한다. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: claim1 +spec: + accessModes: + - ReadWriteOnce + storageClassName: fast + resources: + requests: + storage: 30Gi +``` + +이 클레임의 결과로 SSD와 같은 퍼시스턴트 디스크가 자동으로 +프로비전 된다. 클레임이 삭제되면 볼륨이 삭제된다. + +## 기본 동작 + +스토리지 클래스가 지정되지 않은 경우 모든 클레임이 동적으로 +프로비전이 되도록 클러스터에서 동적 프로비저닝을 활성화 할 수 있다. 클러스터 관리자는 +이 방법으로 활성화 할 수 있다. + +- 하나의 `StorageClass` 오브젝트를 *default* 로 표시한다. +- API 서버에서 [`DefaultStorageClass` 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)를 + 사용하도록 설정한다. + +관리자는 `storageclass.kubernetes.io/is-default-class` 어노테이션을 +추가해서 특정 `StorageClass` 를 기본으로 표시할 수 있다. +기본 `StorageClass` 가 클러스터에 존재하고 사용자가 +`storageClassName` 를 지정하지 않은 `PersistentVolumeClaim` 을 +작성하면, `DefaultStorageClass` 어드미션 컨트롤러가 디폴트 +스토리지 클래스를 가리키는 `storageClassName` 필드를 자동으로 추가한다. + +클러스터에는 최대 하나의 *default* 스토리지 클래스가 있을 수 있다. 그렇지 않은 경우 +`storageClassName` 을 명시적으로 지정하지 않은 `PersistentVolumeClaim` 을 +생성할 수 없다. + +## 토폴로지 인식 + +[다중 영역](/ko/docs/setup/best-practices/multiple-zones/) 클러스터에서 파드는 한 지역 내 +여러 영역에 걸쳐 분산될 수 있다. 파드가 예약된 영역에서 단일 영역 스토리지 백엔드를 +프로비전 해야 한다. [볼륨 바인딩 모드](/docs/concepts/storage/storage-classes/#volume-binding-mode)를 +설정해서 수행할 수 있다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/persistent-volumes.md b/content/ko/docs/concepts/storage/persistent-volumes.md new file mode 100644 index 0000000000..879ef3b939 --- /dev/null +++ b/content/ko/docs/concepts/storage/persistent-volumes.md @@ -0,0 +1,757 @@ +--- +title: 퍼시스턴트 볼륨 +feature: + title: 스토리지 오케스트레이션 + description: > + 로컬 스토리지, GCPAWS와 같은 퍼블릭 클라우드 공급자 또는 NFS, iSCSI, Gluster, Ceph, Cinder나 Flocker와 같은 네트워크 스토리지 시스템에서 원하는 스토리지 시스템을 자동으로 마운트한다. + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +이 페이지는 쿠버네티스의 _퍼시스턴트 볼륨_ 의 현재 상태를 설명한다. [볼륨](/ko/docs/concepts/storage/volumes/)에 대해 익숙해지는 것을 추천한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 소개 + +스토리지 관리는 컴퓨트 인스턴스 관리와는 별개의 문제다. 퍼시스턴트볼륨 서브시스템은 사용자 및 관리자에게 스토리지 사용 방법에서부터 스토리지가 제공되는 방법에 대한 세부 사항을 추상화하는 API를 제공한다. 이를 위해 퍼시스턴트볼륨 및 퍼시스턴트볼륨클레임이라는 두 가지 새로운 API 리소스를 소개한다. + +퍼시스턴트볼륨(PV)은 관리자가 프로비저닝하거나 [스토리지 클래스](/docs/concepts/storage/storage-classes/)를 사용하여 동적으로 프로비저닝한 클러스터의 스토리지이다. 노드가 클러스터 리소스인 것처럼 PV는 클러스터 리소스이다. PV는 Volumes와 같은 볼륨 플러그인이지만, PV를 사용하는 개별 파드와는 별개의 라이프사이클을 가진다. 이 API 오브젝트는 NFS, iSCSI 또는 클라우드 공급자별 스토리지 시스템 등 스토리지 구현에 대한 세부 정보를 담아낸다. + +퍼시스턴트볼륨클레임(PVC)은 사용자의 스토리지에 대한 요청이다. 파드와 비슷하다. 파드는 노드 리소스를 사용하고 PVC는 PV 리소스를 사용한다. 파드는 특정 수준의 리소스(CPU 및 메모리)를 요청할 수 있다. 클레임은 특정 크기 및 접근 모드를 요청할 수 있다(예: 한 번 읽기/쓰기 또는 여러 번 읽기 전용으로 마운트 할 수 있음). + +퍼시스턴트볼륨클레임을 사용하면 사용자가 추상화된 스토리지 리소스를 사용할 수 있지만, 다른 문제들 때문에 성능과 같은 다양한 속성을 가진 퍼시스턴트볼륨이 필요한 경우가 일반적이다. 클러스터 관리자는 사용자에게 해당 볼륨의 구현 방법에 대한 세부 정보를 제공하지 않고 단순히 크기와 접근 모드와는 다른 방식으로 다양한 퍼시스턴트볼륨을 제공할 수 있어야 한다. 이러한 요구에는 _스토리지클래스_ 리소스가 있다. + +[실습 예제와 함께 상세한 내용](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)을 참고하길 바란다. + +## 볼륨과 클레임 라이프사이클 + +PV는 클러스터 리소스이다. PVC는 해당 리소스에 대한 요청이며 리소스에 대한 클레임 검사 역할을 한다. PV와 PVC 간의 상호 작용은 다음 라이프사이클을 따른다. + +### 프로비저닝 + +PV를 프로비저닝 할 수 있는 두 가지 방법이 있다: 정적(static) 프로비저닝과 동적(dynamic) 프로비저닝 + +#### 정적 프로비저닝 + +클러스터 관리자는 여러 PV를 만든다. 클러스터 사용자가 사용할 수 있는 실제 스토리지의 세부 사항을 제공한다. 이 PV들은 쿠버네티스 API에 존재하며 사용할 수 있다. 
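+
+예를 들어, 관리자는 다음과 같은 매니페스트로 PV를 미리 만들어 둘 수 있다. 아래의 이름과 경로는
+설명을 위해 가정한 값이며, `hostPath` 유형은 단일 노드 테스트 용도로만 적합하다.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-static-pv    # 설명을 위해 가정한 이름
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:                  # 단일 노드 테스트 용도로만 적합한 볼륨 유형
+    path: /mnt/data          # 설명을 위해 가정한 경로
+```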
+ +#### 동적 프로비저닝 + +관리자가 생성한 정적 PV가 사용자의 퍼시스턴트볼륨클레임과 일치하지 않으면 +클러스터는 PVC를 위해 특별히 볼륨을 동적으로 프로비저닝 하려고 시도할 수 있다. +이 프로비저닝은 스토리지클래스를 기반으로 한다. PVC는 +[스토리지 클래스](/docs/concepts/storage/storage-classes/)를 +요청해야 하며 관리자는 동적 프로비저닝이 발생하도록 해당 클래스를 생성하고 구성해야 한다. +`""` 클래스를 요청하는 클레임은 동적 프로비저닝을 효과적으로 +비활성화한다. + +스토리지 클래스를 기반으로 동적 스토리지 프로비저닝을 사용하려면 클러스터 관리자가 API 서버에서 +`DefaultStorageClass` [어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)를 사용하도록 설정해야 한다. +예를 들어 API 서버 컴포넌트의 `--enable-admission-plugins` 플래그에 대한 쉼표로 구분되어 +정렬된 값들의 목록 중에 `DefaultStorageClass`가 포함되어 있는지 확인하여 설정할 수 있다. +API 서버 커맨드라인 플래그에 대한 자세한 정보는 +[kube-apiserver](/docs/admin/kube-apiserver/) 문서를 확인하면 된다. + +### 바인딩 + +사용자는 원하는 특정 용량의 스토리지와 특정 접근 모드로 퍼시스턴트볼륨클레임을 생성하거나 동적 프로비저닝의 경우 이미 생성한 상태다. 마스터의 컨트롤 루프는 새로운 PVC를 감시하고 일치하는 PV(가능한 경우)를 찾아 서로 바인딩한다. PV가 새 PVC에 대해 동적으로 프로비저닝된 경우 루프는 항상 해당 PV를 PVC에 바인딩한다. 그렇지 않으면 사용자는 항상 최소한 그들이 요청한 것을 얻지만 볼륨은 요청된 것을 초과할 수 있다. 일단 바인딩되면 퍼시스턴트볼륨클레임은 어떻게 바인딩되었는지 상관없이 배타적으로 바인딩된다. PVC 대 PV 바인딩은 일대일 매핑으로, 퍼시스턴트볼륨과 퍼시스턴트볼륨클레임 사이의 양방향 바인딩인 ClaimRef를 사용한다. + +일치하는 볼륨이 없는 경우 클레임은 무한정 바인딩되지 않은 상태로 남아 있다. 일치하는 볼륨이 제공되면 클레임이 바인딩된다. 예를 들어 많은 수의 50Gi PV로 프로비저닝된 클러스터는 100Gi를 요청하는 PVC와 일치하지 않는다. 100Gi PV가 클러스터에 추가되면 PVC를 바인딩할 수 있다. + +### 사용 중 + +파드는 클레임을 볼륨으로 사용한다. 클러스터는 클레임을 검사하여 바인딩된 볼륨을 찾고 해당 볼륨을 파드에 마운트한다. 여러 접근 모드를 지원하는 볼륨의 경우 사용자는 자신의 클레임을 파드에서 볼륨으로 사용할 때 원하는 접근 모드를 지정한다. + +일단 사용자에게 클레임이 있고 그 클레임이 바인딩되면, 바인딩된 PV는 사용자가 필요로 하는 한 사용자에게 속한다. 사용자는 파드의 `volumes` 블록에 `persistentVolumeClaim`을 포함하여 파드를 스케줄링하고 클레임한 PV에 접근한다. 이에 대한 자세한 내용은 [볼륨으로 클레임하기](#볼륨으로-클레임하기)를 참고하길 바란다. + +### 사용 중인 스토리지 오브젝트 ​​보호 +사용 중인 스토리지 오브젝트 ​​보호 기능의 목적은 PVC에 바인딩된 파드와 퍼시스턴트볼륨(PV)이 사용 중인 퍼시스턴트볼륨클레임(PVC)을 시스템에서 삭제되지 않도록 하는 것이다. 삭제되면 이로 인해 데이터의 손실이 발생할 수 있기 때문이다. + +{{< note >}} +PVC를 사용하는 파드 오브젝트가 존재하면 파드가 PVC를 사용하고 있는 상태이다. +{{< /note >}} + +사용자가 파드에서 활발하게 사용 중인 PVC를 삭제하면 PVC는 즉시 삭제되지 않는다. PVC가 더 이상 파드에서 적극적으로 사용되지 않을 때까지 PVC 삭제가 연기된다. 또한 관리자가 PVC에 바인딩된 PV를 삭제하면 PV는 즉시 삭제되지 않는다. PV가 더 이상 PVC에 바인딩되지 않을 때까지 PV 삭제가 연기된다. + +PVC의 상태가 `Terminating`이고 `Finalizers` 목록에 `kubernetes.io/pvc-protection`이 포함되어 있으면 PVC가 보호된 것으로 볼 수 있다. + +```shell +kubectl describe pvc hostpath +Name: hostpath +Namespace: default +StorageClass: example-hostpath +Status: Terminating +Volume: +Labels: +Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath + volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath +Finalizers: [kubernetes.io/pvc-protection] +... +``` + +마찬가지로 PV 상태가 `Terminating`이고 `Finalizers` 목록에 `kubernetes.io/pv-protection`이 포함되어 있으면 PV가 보호된 것으로 볼 수 있다. + +```shell +kubectl describe pv task-pv-volume +Name: task-pv-volume +Labels: type=local +Annotations: +Finalizers: [kubernetes.io/pv-protection] +StorageClass: standard +Status: Terminating +Claim: +Reclaim Policy: Delete +Access Modes: RWO +Capacity: 1Gi +Message: +Source: + Type: HostPath (bare host directory volume) + Path: /tmp/data + HostPathType: +Events: +``` + +### 반환(Reclaiming) + +사용자가 볼륨을 다 사용하고나면 리소스를 반환할 수 있는 API를 사용하여 PVC 오브젝트를 삭제할 수 있다. 퍼시스턴트볼륨의 반환 정책은 볼륨에서 클레임을 해제한 후 볼륨에 수행할 작업을 클러스터에 알려준다. 현재 볼륨에 대한 반환 정책은 Retain, Recycle, 그리고 Delete가 있다. + +#### Retain(보존) + +`Retain` 반환 정책은 리소스를 수동으로 반환할 수 있게 한다. 퍼시스턴트볼륨클레임이 삭제되면 퍼시스턴트볼륨은 여전히 존재하며 볼륨은 "릴리스 된" 것으로 간주된다. 그러나 이전 요청자의 데이터가 여전히 볼륨에 남아 있기 때문에 다른 요청에 대해서는 아직 사용할 수 없다. 관리자는 다음 단계에 따라 볼륨을 수동으로 반환할 수 있다. + +1. 퍼시스턴트볼륨을 삭제한다. PV가 삭제된 후에도 외부 인프라(예: AWS EBS, GCE PD, Azure Disk 또는 Cinder 볼륨)의 관련 스토리지 자산이 존재한다. +1. 관련 스토리지 자산의 데이터를 수동으로 삭제한다. +1. 
연결된 스토리지 자산을 수동으로 삭제하거나 동일한 스토리지 자산을 재사용하려는 경우 스토리지 자산 정의로 새 퍼시스턴트볼륨을 생성한다. + +#### Delete(삭제) + +`Delete` 반환 정책을 지원하는 볼륨 플러그인의 경우, 삭제는 쿠버네티스에서 퍼시스턴트볼륨 오브젝트와 외부 인프라(예: AWS EBS, GCE PD, Azure Disk 또는 Cinder 볼륨)의 관련 스토리지 자산을 모두 삭제한다. 동적으로 프로비저닝된 볼륨은 [스토리지클래스의 반환 정책](#반환-정책)을 상속하며 기본값은 `Delete`이다. 관리자는 사용자의 기대에 따라 스토리지클래스를 구성해야 한다. 그렇지 않으면 PV를 생성한 후 PV를 수정하거나 패치해야 한다. [퍼시스턴트볼륨의 반환 정책 변경](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)을 참고하길 바란다. + +#### Recycle(재활용) + +{{< warning >}} +`Recycle` 반환 정책은 더 이상 사용하지 않는다. 대신 권장되는 방식은 동적 프로비저닝을 사용하는 것이다. +{{< /warning >}} + +기본 볼륨 플러그인에서 지원하는 경우 `Recycle` 반환 정책은 볼륨에서 기본 스크럽(`rm -rf /thevolume/*`)을 수행하고 새 클레임에 다시 사용할 수 있도록 한다. + +그러나 관리자는 [여기](/docs/admin/kube-controller-manager/)에 설명된대로 쿠버네티스 컨트롤러 관리자 커맨드라인 인자(command line arguments)를 사용하여 사용자 정의 재활용 파드 템플릿을 구성할 수 있다. 사용자 정의 재활용 파드 템플릿에는 아래 예와 같이 `volumes` 명세가 포함되어야 한다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pv-recycler + namespace: default +spec: + restartPolicy: Never + volumes: + - name: vol + hostPath: + path: /any/path/it/will/be/replaced + containers: + - name: pv-recycler + image: "k8s.gcr.io/busybox" + command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] + volumeMounts: + - name: vol + mountPath: /scrub +``` + +그러나 `volumes` 부분의 사용자 정의 재활용 파드 템플릿에 지정된 특정 경로는 재활용되는 볼륨의 특정 경로로 바뀐다. + +### 퍼시스턴트 볼륨 클레임 확장 + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +이제 퍼시스턴트볼륨클레임(PVC) 확장 지원이 기본적으로 활성화되어 있다. 다음 유형의 +볼륨을 확장할 수 있다. + +* gcePersistentDisk +* awsElasticBlockStore +* Cinder +* glusterfs +* rbd +* Azure File +* Azure Disk +* Portworx +* FlexVolumes +* CSI + +스토리지 클래스의 `allowVolumeExpansion` 필드가 true로 설정된 경우에만 PVC를 확장할 수 있다. + +``` yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gluster-vol-default +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://192.168.10.100:8080" + restuser: "" + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +PVC에 대해 더 큰 볼륨을 요청하려면 PVC 오브젝트를 수정하여 더 큰 용량을 +지정한다. 이는 기본 퍼시스턴트볼륨을 지원하는 볼륨의 확장을 트리거한다. 클레임을 만족시키기 위해 +새로운 퍼시스턴트볼륨이 생성되지 않고 기존 볼륨의 크기가 조정된다. + +#### CSI 볼륨 확장 + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +CSI 볼륨 확장 지원은 기본적으로 활성화되어 있지만 볼륨 확장을 지원하려면 특정 CSI 드라이버도 필요하다. 자세한 내용은 특정 CSI 드라이버 문서를 참고한다. + + +#### 파일시스템을 포함하는 볼륨 크기 조정 + +파일시스템이 XFS, Ext3 또는 Ext4 인 경우에만 파일시스템을 포함하는 볼륨의 크기를 조정할 수 있다. + +볼륨에 파일시스템이 포함된 경우 새 파드가 `ReadWrite` 모드에서 퍼시스턴트볼륨클레임을 사용하는 +경우에만 파일시스템의 크기가 조정된다. 파일시스템 확장은 파드가 시작되거나 +파드가 실행 중이고 기본 파일시스템이 온라인 확장을 지원할 때 수행된다. + +FlexVolumes는 `RequiresFSResize` 기능으로 드라이버가 `true`로 설정된 경우 크기 조정을 허용한다. +FlexVolume은 파드 재시작 시 크기를 조정할 수 있다. + +#### 사용 중인 퍼시스턴트볼륨클레임 크기 조정 + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +{{< note >}} +사용 중인 PVC 확장은 쿠버네티스 1.15 이후 버전에서는 베타로, 1.11 이후 버전에서는 알파로 제공된다. `ExpandInUsePersistentVolumes` 기능을 사용하도록 설정해야 한다. 베타 기능의 경우 여러 클러스터에서 자동으로 적용된다. 자세한 내용은 [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) 문서를 참고한다. +{{< /note >}} + +이 경우 기존 PVC를 사용하는 파드 또는 디플로이먼트를 삭제하고 다시 만들 필요가 없다. +파일시스템이 확장되자마자 사용 중인 PVC가 파드에서 자동으로 사용 가능하다. +이 기능은 파드나 디플로이먼트에서 사용하지 않는 PVC에는 영향을 미치지 않는다. 확장을 완료하기 전에 +PVC를 사용하는 파드를 만들어야 한다. + + +다른 볼륨 유형과 비슷하게 FlexVolume 볼륨도 파드에서 사용 중인 경우 확장할 수 있다. + +{{< note >}} +FlexVolume의 크기 조정은 기본 드라이버가 크기 조정을 지원하는 경우에만 가능하다. +{{< /note >}} + +{{< note >}} +EBS 볼륨 확장은 시간이 많이 걸리는 작업이다. 또한 6시간마다 한 번의 수정을 할 수 있는 볼륨별 쿼터(quota)가 있다. +{{< /note >}} + + +## 퍼시스턴트 볼륨의 유형 + +퍼시스턴트볼륨 유형은 플러그인으로 구현된다. 
쿠버네티스는 현재 다음의 플러그인을 지원한다. + +* GCEPersistentDisk +* AWSElasticBlockStore +* AzureFile +* AzureDisk +* CSI +* FC (파이버 채널) +* FlexVolume +* Flocker +* NFS +* iSCSI +* RBD (Ceph Block Device) +* CephFS +* Cinder (OpenStack 블록 스토리지) +* Glusterfs +* VsphereVolume +* Quobyte Volumes +* HostPath (단일 노드 테스트 전용 – 로컬 스토리지는 어떤 방식으로도 지원되지 않으며 다중-노드 클러스터에서 작동하지 않음) +* Portworx Volumes +* ScaleIO Volumes +* StorageOS + +## 퍼시스턴트 볼륨 + +각 PV에는 스펙과 상태(볼륨의 명세와 상태)가 포함된다. +퍼시스턴트볼륨 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 +``` + +{{< note >}} +클러스터 내에서 퍼시스턴트볼륨을 사용하려면 볼륨 유형과 관련된 헬퍼(Helper) 프로그램이 필요할 수 있다. 이 예에서 퍼시스턴트볼륨은 NFS 유형이며 NFS 파일시스템 마운트를 지원하려면 헬퍼 프로그램인 /sbin/mount.nfs가 필요하다. +{{< /note >}} + +### 용량 + +일반적으로 PV는 특정 저장 용량을 가진다. 이것은 PV의 `capacity` 속성을 사용하여 설정된다. `capacity`가 사용하는 단위를 이해하려면 쿠버네티스 [리소스 모델](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)을 참고한다. + +현재 스토리지 용량 크기는 설정하거나 요청할 수 있는 유일한 리소스이다. 향후 속성에 IOPS, 처리량 등이 포함될 수 있다. + +### 볼륨 모드 + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +쿠버네티스는 퍼시스턴트볼륨의 두 가지 `volumeModes`인 `Filesystem`과 `Block`을 지원한다. + +`volumeMode`는 선택적 API 파라미터이다. +`Filesystem`은 `volumeMode` 파라미터가 생략될 때 사용되는 기본 모드이다. + +`volumeMode: Filesystem`이 있는 볼륨은 파드의 디렉터리에 *마운트* 된다. 볼륨이 장치에 +의해 지원되고 그 장치가 비어 있으면 쿠버네티스는 장치를 +처음 마운트하기 전에 장치에 파일시스템을 만든다. + +볼륨을 원시 블록 장치로 사용하려면 `volumeMode`의 값을 `Block`으로 설정할 수 있다. +이러한 볼륨은 파일시스템이 없는 블록 장치로 파드에 제공된다. +이 모드는 파드와 볼륨 사이에 파일시스템 계층 없이도 볼륨에 액세스하는 +가장 빠른 방법을 파드에 제공하는 데 유용하다. 반면에 파드에서 실행되는 애플리케이션은 +원시 블록 장치를 처리하는 방법을 알아야 한다. +파드에서 `volumeMode: Block`으로 볼륨을 사용하는 방법에 대한 예는 +[원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)를 참조하십시오. + +### 접근 모드 + +리소스 제공자가 지원하는 방식으로 호스트에 퍼시스턴트볼륨을 마운트할 수 있다. 아래 표에서 볼 수 있듯이 제공자들은 서로 다른 기능을 가지며 각 PV의 접근 모드는 해당 볼륨에서 지원하는 특정 모드로 설정된다. 예를 들어 NFS는 다중 읽기/쓰기 클라이언트를 지원할 수 있지만 특정 NFS PV는 서버에서 읽기 전용으로 export할 수 있다. 각 PV는 특정 PV의 기능을 설명하는 자체 접근 모드 셋을 갖는다. + +접근 모드는 다음과 같다. + +* ReadWriteOnce -- 하나의 노드에서 볼륨을 읽기-쓰기로 마운트할 수 있다 +* ReadOnlyMany -- 여러 노드에서 볼륨을 읽기 전용으로 마운트할 수 있다 +* ReadWriteMany -- 여러 노드에서 볼륨을 읽기-쓰기로 마운트할 수 있다 + +CLI에서 접근 모드는 다음과 같이 약어로 표시된다. + +* RWO - ReadWriteOnce +* ROX - ReadOnlyMany +* RWX - ReadWriteMany + +> __중요!__ 볼륨이 여러 접근 모드를 지원하더라도 한 번에 하나의 접근 모드를 사용하여 마운트할 수 있다. 예를 들어 GCEPersistentDisk는 하나의 노드가 ReadWriteOnce로 마운트하거나 여러 노드가 ReadOnlyMany로 마운트할 수 있지만 동시에는 불가능하다. + + +| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany| +| :--- | :---: | :---: | :---: | +| AWSElasticBlockStore | ✓ | - | - | +| AzureFile | ✓ | ✓ | ✓ | +| AzureDisk | ✓ | - | - | +| CephFS | ✓ | ✓ | ✓ | +| Cinder | ✓ | - | - | +| CSI | 드라이버에 따라 다름 | 드라이버에 따라 다름 | 드라이버에 따라 다름 | +| FC | ✓ | ✓ | - | +| FlexVolume | ✓ | ✓ | 드라이버에 따라 다름 | +| Flocker | ✓ | - | - | +| GCEPersistentDisk | ✓ | ✓ | - | +| Glusterfs | ✓ | ✓ | ✓ | +| HostPath | ✓ | - | - | +| iSCSI | ✓ | ✓ | - | +| Quobyte | ✓ | ✓ | ✓ | +| NFS | ✓ | ✓ | ✓ | +| RBD | ✓ | ✓ | - | +| VsphereVolume | ✓ | - | - (파드가 병치될(collocated) 때 작동) | +| PortworxVolume | ✓ | - | ✓ | +| ScaleIO | ✓ | ✓ | - | +| StorageOS | ✓ | - | - | + +### 클래스 + +PV는 `storageClassName` 속성을 +[스토리지클래스](/docs/concepts/storage/storage-classes/)의 +이름으로 설정하여 지정하는 클래스를 가질 수 있다. 
+특정 클래스의 PV는 해당 클래스를 요청하는 PVC에만 바인딩될 수 있다. +`storageClassName`이 없는 PV에는 클래스가 없으며 특정 클래스를 요청하지 않는 PVC에만 +바인딩할 수 있다. + +이전에는 `volume.beta.kubernetes.io/storage-class` 어노테이션이 +`storageClassName` 속성 대신 사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서 완전히 사용 중단(deprecated)이 될 예정이다. + +### 반환 정책 + +현재 반환 정책은 다음과 같다. + +* Retain(보존) -- 수동 반환 +* Recycle(재활용) -- 기본 스크럽 (`rm -rf /thevolume/*`) +* Delete(삭제) -- AWS EBS, GCE PD, Azure Disk 또는 OpenStack Cinder 볼륨과 같은 관련 스토리지 자산이 삭제됨 + +현재 NFS 및 HostPath만 재활용을 지원한다. AWS EBS, GCE PD, Azure Disk 및 Cinder 볼륨은 삭제를 지원한다. + +### 마운트 옵션 + +쿠버네티스 관리자는 퍼시스턴트 볼륨이 노드에 마운트될 때 추가 마운트 옵션을 지정할 수 있다. + +{{< note >}} +모든 퍼시스턴트 볼륨 유형이 마운트 옵션을 지원하는 것은 아니다. +{{< /note >}} + +다음 볼륨 유형은 마운트 옵션을 지원한다. + +* AWSElasticBlockStore +* AzureDisk +* AzureFile +* CephFS +* Cinder (OpenStack 블록 스토리지) +* GCEPersistentDisk +* Glusterfs +* NFS +* Quobyte Volumes +* RBD (Ceph Block Device) +* StorageOS +* VsphereVolume +* iSCSI + +마운트 옵션의 유효성이 검사되지 않으므로 마운트 옵션이 유효하지 않으면 마운트가 실패한다. + +이전에는 `mountOptions` 속성 대신 `volume.beta.kubernetes.io/mount-options` 어노테이션이 +사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서 완전히 사용 중단(deprecated)이 될 예정이다. + +### 노드 어피니티(affinity) + +{{< note >}} +대부분의 볼륨 유형의 경우 이 필드를 설정할 필요가 없다. [AWS EBS](/ko/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/ko/docs/concepts/storage/volumes/#gcepersistentdisk) 및 [Azure Disk](/ko/docs/concepts/storage/volumes/#azuredisk) 볼륨 블록 유형에 자동으로 채워진다. [로컬](/ko/docs/concepts/storage/volumes/#local) 볼륨에 대해서는 이를 명시적으로 설정해야 한다. +{{< /note >}} + +PV는 [노드 어피니티](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core)를 지정하여 이 볼륨에 접근할 수 있는 노드를 제한하는 제약 조건을 정의할 수 있다. PV를 사용하는 파드는 노드 어피니티에 의해 선택된 노드로만 스케줄링된다. + +### 단계(Phase) + +볼륨은 다음 단계 중 하나이다. + +* Available(사용 가능) -– 아직 클레임에 바인딩되지 않은 사용할 수 있는 리소스 +* Bound(바인딩) –- 볼륨이 클레임에 바인딩됨 +* Released(릴리스) –- 클레임이 삭제되었지만 클러스터에서 아직 리소스를 반환하지 않음 +* Failed(실패) –- 볼륨이 자동 반환에 실패함 + +CLI는 PV에 바인딩된 PVC의 이름을 표시한다. + +## 퍼시스턴트볼륨클레임 + +각 PVC에는 스펙과 상태(클레임의 명세와 상태)가 포함된다. +퍼시스턴트볼륨클레임 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 +한다. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + volumeMode: Filesystem + resources: + requests: + storage: 8Gi + storageClassName: slow + selector: + matchLabels: + release: "stable" + matchExpressions: + - {key: environment, operator: In, values: [dev]} +``` + +### 접근 모드 + +클레임은 특정 접근 모드로 저장소를 요청할 때 볼륨과 동일한 규칙을 사용한다. + +### 볼륨 모드 + +클레임은 볼륨과 동일한 규칙을 사용하여 파일시스템 또는 블록 장치로 볼륨을 사용함을 나타낸다. + +### 리소스 + +파드처럼 클레임은 특정 수량의 리소스를 요청할 수 있다. 이 경우는 스토리지에 대한 요청이다. 동일한 [리소스 모델](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)이 볼륨과 클레임 모두에 적용된다. + +### 셀렉터 + +클레임은 볼륨 셋을 추가로 필터링하기 위해 [레이블 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)를 지정할 수 있다. 레이블이 셀렉터와 일치하는 볼륨만 클레임에 바인딩할 수 있다. 셀렉터는 두 개의 필드로 구성될 수 있다. + +* `matchLabels` - 볼륨에 이 값의 레이블이 있어야함 +* `matchExpressions` - 키, 값의 목록, 그리고 키와 값에 관련된 연산자를 지정하여 만든 요구 사항 목록. 유효한 연산자에는 In, NotIn, Exists 및 DoesNotExist가 있다. + +`matchLabels` 및 `matchExpressions`의 모든 요구 사항이 AND 조건이다. 일치하려면 모두 충족해야 한다. + +### 클래스 + +클레임은 `storageClassName` 속성을 사용하여 +[스토리지클래스](/docs/concepts/storage/storage-classes/)의 이름을 지정하여 +특정 클래스를 요청할 수 있다. +요청된 클래스의 PV(PVC와 동일한 `storageClassName`을 갖는 PV)만 PVC에 +바인딩될 수 있다. + +PVC는 반드시 클래스를 요청할 필요는 없다. 
`storageClassName`이 `""`로 설정된 +PVC는 항상 클래스가 없는 PV를 요청하는 것으로 해석되므로 +클래스가 없는 PV(어노테이션이 없거나 `""`와 같은 하나의 셋)에만 바인딩될 수 +있다. `storageClassName`이 없는 PVC는 +[`DefaultStorageClass` 어드미션 플러그인](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)이 +켜져 있는지 여부에 따라 동일하지 않으며 +클러스터에 따라 다르게 처리된다. + +* 어드미션 플러그인이 켜져 있으면 관리자가 기본 스토리지클래스를 지정할 수 있다. + `storageClassName`이 없는 모든 PVC는 해당 기본값의 PV에만 바인딩할 수 있다. 기본 + 스토리지클래스 지정은 스토리지클래스 오브젝트에서 어노테이션 + `storageclass.kubernetes.io/is-default-class`를 `true`로 + 설정하여 수행된다. 관리자가 기본값을 지정하지 않으면 어드미션 플러그인이 꺼져 있는 것처럼 + 클러스터가 PVC 생성에 응답한다. 둘 이상의 기본값이 지정된 경우 어드미션 + 플러그인은 모든 PVC 생성을 + 금지한다. +* 어드미션 플러그인이 꺼져 있으면 기본 스토리지클래스에 대한 기본값 자체가 없다. + `storageClassName`이 없는 모든 PVC는 클래스가 없는 PV에만 바인딩할 수 있다. 이 경우 + `storageClassName`이 없는 PVC는 `storageClassName`이 `""`로 설정된 PVC와 + 같은 방식으로 처리된다. + +설치 방법에 따라 설치 중에 애드온 관리자가 기본 스토리지클래스를 쿠버네티스 클러스터에 +배포할 수 있다. + +PVC가 스토리지클래스를 요청하는 것 외에도 `selector`를 지정하면 요구 사항들이 +AND 조건으로 동작한다. 요청된 클래스와 요청된 레이블이 있는 PV만 PVC에 +바인딩될 수 있다. + +{{< note >}} +현재 비어 있지 않은 `selector`가 있는 PVC에는 PV를 동적으로 프로비저닝할 수 없다. +{{< /note >}} + +이전에는 `volume.beta.kubernetes.io/storage-class` 어노테이션이 `storageClassName` +속성 대신 사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서는 지원되지 않는다. + +## 볼륨으로 클레임하기 + +클레임을 볼륨으로 사용해서 파드가 스토리지에 접근한다. 클레임은 클레임을 사용하는 파드와 동일한 네임스페이스에 있어야 한다. 클러스터는 파드의 네임스페이스에서 클레임을 찾고 이를 사용하여 클레임과 관련된 퍼시스턴트볼륨을 얻는다. 그런 다음 볼륨이 호스트와 파드에 마운트된다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: myfrontend + image: nginx + volumeMounts: + - mountPath: "/var/www/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myclaim +``` + +### 네임스페이스에 대한 참고 사항 + +퍼시스턴트볼륨 바인딩은 배타적이며, 퍼시스턴트볼륨클레임은 네임스페이스 오브젝트이므로 "다중" 모드(`ROX`, `RWX`)를 사용한 클레임은 하나의 네임스페이스 내에서만 가능하다. + +## 원시 블록 볼륨 지원 + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +다음 볼륨 플러그인에 해당되는 경우 동적 프로비저닝을 포함하여 원시 블록 볼륨을 +지원한다. + +* AWSElasticBlockStore +* AzureDisk +* CSI +* FC (파이버 채널) +* GCEPersistentDisk +* iSCSI +* Local volume +* OpenStack Cinder +* RBD (Ceph Block Device) +* VsphereVolume + +### 원시 블록 볼륨을 사용하는 퍼시스턴트볼륨 {#persistent-volume-using-a-raw-block-volume} + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: block-pv +spec: + capacity: + storage: 10Gi + accessModes: + - ReadWriteOnce + volumeMode: Block + persistentVolumeReclaimPolicy: Retain + fc: + targetWWNs: ["50060e801049cfd1"] + lun: 0 + readOnly: false +``` +### 원시 블록 볼륨을 요청하는 퍼시스턴트볼륨클레임 {#persistent-volume-claim-requesting-a-raw-block-volume} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: block-pvc +spec: + accessModes: + - ReadWriteOnce + volumeMode: Block + resources: + requests: + storage: 10Gi +``` + +### 컨테이너에 원시 블록 장치 경로를 추가하는 파드 명세 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-block-volume +spec: + containers: + - name: fc-container + image: fedora:26 + command: ["/bin/sh", "-c"] + args: [ "tail -f /dev/null" ] + volumeDevices: + - name: data + devicePath: /dev/xvda + volumes: + - name: data + persistentVolumeClaim: + claimName: block-pvc +``` + +{{< note >}} +파드에 대한 원시 블록 장치를 추가할 때 마운트 경로 대신 컨테이너에 장치 경로를 지정한다. +{{< /note >}} + +### 블록 볼륨 바인딩 + +사용자가 퍼시스턴트볼륨클레임 스펙에서 `volumeMode` 필드를 사용하여 이를 나타내는 원시 블록 볼륨을 요청하는 경우 바인딩 규칙은 스펙의 일부분으로 이 모드를 고려하지 않은 이전 릴리스에 비해 약간 다르다. +사용자와 관리자가 원시 블록 장치를 요청하기 위해 지정할 수 있는 가능한 조합의 표가 아래 나열되어 있다. 이 테이블은 볼륨이 바인딩되는지 여부를 나타낸다. +정적 프로비저닝된 볼륨에 대한 볼륨 바인딩 매트릭스이다. 
+ +| PV volumeMode | PVC volumeMode | Result | +| --------------|:---------------:| ----------------:| +| 지정되지 않음 | 지정되지 않음 | BIND | +| 지정되지 않음 | Block | NO BIND | +| 지정되지 않음 | Filesystem | BIND | +| Block | 지정되지 않음 | NO BIND | +| Block | Block | BIND | +| Block | Filesystem | NO BIND | +| Filesystem | Filesystem | BIND | +| Filesystem | Block | NO BIND | +| Filesystem | 지정되지 않음 | BIND | + +{{< note >}} +알파 릴리스에서는 정적으로 프로비저닝된 볼륨만 지원된다. 관리자는 원시 블록 장치로 작업할 때 이러한 값을 고려해야 한다. +{{< /note >}} + +## 볼륨 스냅샷 및 스냅샷 지원에서 볼륨 복원 + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +CSI 볼륨 플러그인만 지원하도록 볼륨 스냅샷 기능이 추가되었다. 자세한 내용은 [볼륨 스냅샷](/docs/concepts/storage/volume-snapshots/)을 참고한다. + +볼륨 스냅샷 데이터 소스에서 볼륨 복원을 지원하려면 apiserver와 controller-manager에서 +`VolumeSnapshotDataSource` 기능 게이트를 활성화한다. + +### 볼륨 스냅샷에서 퍼시스턴트볼륨클레임 생성 {#create-persistent-volume-claim-from-volume-snapshot} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: restore-pvc +spec: + storageClassName: csi-hostpath-sc + dataSource: + name: new-snapshot-test + kind: VolumeSnapshot + apiGroup: snapshot.storage.k8s.io + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## 볼륨 복제 + +[볼륨 복제](/docs/concepts/storage/volume-pvc-datasource/)는 CSI 볼륨 플러그인만 사용할 수 있다. + +### 기존 pvc에서 퍼시스턴트볼륨클레임 생성 {#create-persistent-volume-claim-from-an-existing-pvc} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cloned-pvc +spec: + storageClassName: my-csi-plugin + dataSource: + name: existing-src-pvc-name + kind: PersistentVolumeClaim + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## 포터블 구성 작성 + +광범위한 클러스터에서 실행되고 퍼시스턴트 스토리지가 필요한 +구성 템플릿 또는 예제를 작성하는 경우 다음 패턴을 사용하는 것이 좋다. + +- 구성 번들(디플로이먼트, 컨피그맵 등)에 퍼시스턴트볼륨클레임 + 오브젝트를 포함시킨다. +- 구성을 인스턴스화 하는 사용자에게 퍼시스턴트볼륨을 생성할 권한이 없을 수 있으므로 + 퍼시스턴트볼륨 오브젝트를 구성에 포함하지 않는다. +- 템플릿을 인스턴스화 할 때 스토리지 클래스 이름을 제공하는 옵션을 + 사용자에게 제공한다. + - 사용자가 스토리지 클래스 이름을 제공하는 경우 해당 값을 + `permanentVolumeClaim.storageClassName` 필드에 입력한다. + 클러스터에서 관리자가 스토리지클래스를 활성화한 경우 + PVC가 올바른 스토리지 클래스와 일치하게 된다. + - 사용자가 스토리지 클래스 이름을 제공하지 않으면 + `permanentVolumeClaim.storageClassName` 필드를 nil로 남겨둔다. + 그러면 클러스터에 기본 스토리지클래스가 있는 사용자에 대해 PV가 자동으로 프로비저닝된다. + 많은 클러스터 환경에 기본 스토리지클래스가 설치되어 있거나 관리자가 + 고유한 기본 스토리지클래스를 생성할 수 있다. +- 도구(tooling)에서 일정 시간이 지나도 바인딩되지 않는 PVC를 관찰하여 사용자에게 + 노출시킨다. 이는 클러스터가 동적 스토리지를 지원하지 + 않거나(이 경우 사용자가 일치하는 PV를 생성해야 함), + 클러스터에 스토리지 시스템이 없음을 나타낸다(이 경우 + 사용자는 PVC가 필요한 구성을 배포할 수 없음). 
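+
+위의 패턴을 따르는 퍼시스턴트볼륨클레임은 예를 들어 다음과 같은 형태가 될 수 있다.
+여기에 사용한 이름 `my-app-data`와 스토리지 클래스 이름 `fast`는 설명을 위해 가정한 값이다.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-app-data          # 설명을 위해 가정한 이름
+spec:
+  accessModes:
+    - ReadWriteOnce
+  # 사용자가 스토리지 클래스 이름을 제공한 경우에만 이 필드를 채운다.
+  # 제공하지 않았다면 이 필드를 생략해서 기본 스토리지클래스가 사용되도록 한다.
+  storageClassName: fast     # 설명을 위해 가정한 값
+  resources:
+    requests:
+      storage: 8Gi
+```
+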
+{{% /capture %}} + {{% capture whatsnext %}} + +* [퍼시스턴트볼륨 생성](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume)에 대해 자세히 알아보기 +* [퍼시스턴트볼륨클레임 생성](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim)에 대해 자세히 알아보기 +* [퍼시스턴트 스토리지 설계 문서](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md) 읽기 + +### 참고 + +* [퍼시스턴트볼륨](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core) +* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core) +* [퍼시스턴트볼륨클레임](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) +* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/volume-pvc-datasource.md b/content/ko/docs/concepts/storage/volume-pvc-datasource.md index 92ea37f8cc..b58b882d6d 100644 --- a/content/ko/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/ko/docs/concepts/storage/volume-pvc-datasource.md @@ -6,7 +6,6 @@ weight: 30 {{% capture overview %}} -{{< feature-state for_k8s_version="v1.16" state="beta" >}} 이 문서에서는 쿠버네티스의 기존 CSI 볼륨 복제의 개념을 설명한다. [볼륨] (/ko/docs/concepts/storage/volumes)을 숙지하는 것을 추천한다. @@ -32,6 +31,7 @@ weight: 30 * 복제는 동일한 스토리지 클래스 내에서만 지원된다. - 대상 볼륨은 소스와 동일한 스토리지 클래스여야 한다. - 기본 스토리지 클래스를 사용할 수 있으며, 사양에 storageClassName을 생략할 수 있다. +* 동일한 VolumeMode 설정을 사용하는 두 볼륨에만 복제를 수행할 수 있다(블록 모드 볼륨을 요청하는 경우에는 반드시 소스도 블록 모드여야 한다). ## 프로비저닝 diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index 95395f7587..9d79a5ff13 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -1327,19 +1327,13 @@ CSI 호환 볼륨 드라이버가 쿠버네티스 클러스터에 배포되면 #### CSI 원시(raw) 블록 볼륨 지원 -{{< feature-state for_k8s_version="v1.14" state="beta" >}} +{{< feature-state for_k8s_version="v1.18" state="stable" >}} -1.11 버전부터 CSI는 이전 버전의 쿠버네티스에서 도입된 원시 -블록 볼륨 기능에 의존하는 원시 블록 볼륨에 대한 지원을 -도입했다. 이 기능을 사용하면 외부 CSI 드라이버가 있는 벤더들이 쿠버네티스 -워크로드에서 원시 블록 볼륨 지원을 구현할 수 있다. +외부 CSI 드라이버가 있는 벤더들은 쿠버네티스 워크로드에서 원시(raw) 블록 볼륨 +지원을 구현할 수 있다. -CSI 블록 볼륨은 기능 게이트로 지원하지만, 기본적으로 활성화되어있다. 이 -기능을 위해 활성화 되어야하는 두개의 기능 게이트는 `BlockVolume` 과 -`CSIBlockVolume` 이다. - -[원시 블록 볼륨 지원으로 PV/PVC 설정](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support) -방법을 알아본다. +CSI 설정 변경 없이 평소와 같이 +[원시 블록 볼륨 지원으로 PV/PVC 설정](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)을 할 수 있다. #### CSI 임시(ephemeral) 볼륨 diff --git a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md index 4bbcf22ebb..b72607d41a 100644 --- a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md @@ -14,12 +14,7 @@ _크론 잡은_ 시간 기반의 일정에 따라 [잡](/ko/docs/concepts/worklo {{< caution >}} -모든 **크론잡** `일정:` 시간은 {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} -의 시간대를 기준으로 한다. - -컨트롤 플레인이 파드 또는 베어 컨테이너에서 kube-controller-manager를 -실행하는 경우 kube-controller-manager 컨테이너의 설정된 시간대는 크론 잡 컨트롤러가 -사용하는 시간대로 설정한다. +모든 **크론잡** `일정:` 시간은 UTC로 표시된다. 
{{< /caution >}} 크론잡 리소스에 대한 매니페스트를 생성할때에는 제공하는 이름이 diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md index 2d9097c718..810b750a49 100644 --- a/content/ko/docs/concepts/workloads/controllers/deployment.md +++ b/content/ko/docs/concepts/workloads/controllers/deployment.md @@ -61,7 +61,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 * `template` 필드에는 다음 하위 필드가 포함되어있다. * 파드는 `labels` 필드를 사용해서 `app: nginx` 이라는 레이블을 붙인다. * 파드 템플릿의 사양 또는 `.template.spec` 필드는 - 파드가 [도커 허브](https://hub.docker.com/)의 `nginx` 1.7.9 버전 이미지를 실행하는 + 파드가 [도커 허브](https://hub.docker.com/)의 `nginx` 1.14.2 버전 이미지를 실행하는 `nginx` 컨테이너 1개를 실행하는 것을 나타낸다. * 컨테이너 1개를 생성하고, `name` 필드를 사용해서 `nginx` 이름을 붙인다. @@ -151,15 +151,15 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 다음 단계에 따라 디플로이먼트를 업데이트한다. -1. `nginx:1.7.9` 이미지 대신 `nginx:1.9.1` 이미지를 사용하도록 nginx 파드를 업데이트 한다. +1. `nginx:1.14.2` 이미지 대신 `nginx:1.16.1` 이미지를 사용하도록 nginx 파드를 업데이트 한다. ```shell - kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` 또는 간단하게 다음의 명령어를 사용한다. ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record + kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record ``` 이와 유사하게 출력된다. @@ -167,7 +167,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 deployment.apps/nginx-deployment image updated ``` - 대안으로 디플로이먼트를 `edit` 해서 `.spec.template.spec.containers[0].image` 를 `nginx:1.7.9` 에서 `nginx:1.9.1` 로 변경한다. + 대안으로 디플로이먼트를 `edit` 해서 `.spec.template.spec.containers[0].image` 를 `nginx:1.14.2` 에서 `nginx:1.16.1` 로 변경한다. ```shell kubectl edit deployment.v1.apps/nginx-deployment @@ -263,7 +263,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 Labels: app=nginx Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP Environment: Mounts: @@ -304,11 +304,11 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 스케일 업하기 시작한다. 그리고 이전에 스케일 업 하던 레플리카셋에 롤오버 한다. --이것은 기존 레플리카셋 목록에 추가하고 스케일 다운을 할 것이다. -예를 들어 디플로이먼트로 `nginx:1.7.9` 레플리카를 5개 생성을 한다. -하지만 `nginx:1.7.9` 레플리카 3개가 생성되었을 때 디플로이먼트를 업데이트해서 `nginx:1.9.1` +예를 들어 디플로이먼트로 `nginx:1.14.2` 레플리카를 5개 생성을 한다. +하지만 `nginx:1.14.2` 레플리카 3개가 생성되었을 때 디플로이먼트를 업데이트해서 `nginx:1.16.1` 레플리카 5개를 생성성하도록 업데이트를 한다고 가정한다. 이 경우 디플로이먼트는 즉시 생성된 3개의 -`nginx:1.7.9` 파드 3개를 죽이기 시작하고 `nginx:1.9.1` 파드를 생성하기 시작한다. -이것은 과정이 변경되기 전 `nginx:1.7.9` 레플리카 5개가 +`nginx:1.14.2` 파드 3개를 죽이기 시작하고 `nginx:1.16.1` 파드를 생성하기 시작한다. +이것은 과정이 변경되기 전 `nginx:1.14.2` 레플리카 5개가 생성되는 것을 기다리지 않는다. ### 레이블 셀렉터 업데이트 @@ -345,10 +345,10 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 롤백된다는 것을 의미한다. {{< /note >}} -* 디플로이먼트를 업데이트하는 동안 이미지 이름을 `nginx:1.9.1` 이 아닌 `nginx:1.91` 로 입력해서 오타를 냈다고 가정한다. +* 디플로이먼트를 업데이트하는 동안 이미지 이름을 `nginx:1.16.1` 이 아닌 `nginx:1.161` 로 입력해서 오타를 냈다고 가정한다. ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` 이와 유사하게 출력된다. 
@@ -425,7 +425,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 Labels: app=nginx Containers: nginx: - Image: nginx:1.91 + Image: nginx:1.161 Port: 80/TCP Host Port: 0/TCP Environment: @@ -466,13 +466,13 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true - 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true - 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` `CHANGE-CAUSE` 는 수정 생성시 디플로이먼트 주석인 `kubernetes.io/change-cause` 에서 복사한다. 다음에 대해 `CHANGE-CAUSE` 메시지를 지정할 수 있다. - * 디플로이먼트에 `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"` 로 주석을 단다. + * 디플로이먼트에 `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"` 로 주석을 단다. * `kubectl` 명령어 이용시 `--record` 플래그를 추가해서 리소스 변경을 저장한다. * 수동으로 리소스 매니페스트 편집. @@ -486,10 +486,10 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 - Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP QoS Tier: cpu: BestEffort @@ -547,7 +547,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 Labels: app=nginx Annotations: deployment.kubernetes.io/revision=4 - kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true Selector: app=nginx Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate @@ -557,7 +557,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 Labels: app=nginx Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP Host Port: 0/TCP Environment: @@ -720,7 +720,7 @@ nginx-deployment-618515232 11 11 11 7m * 그런 다음 디플로이먼트의 이미지를 업데이트 한다. ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` 이와 유사하게 출력된다. diff --git a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md index c394c58ab7..dd061e5130 100644 --- a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md @@ -15,10 +15,8 @@ weight: 80 {{< warning >}} 임시 컨테이너는 초기 알파 상태이며, 프로덕션 클러스터에는 -적합하지 않다. 사용자는 컨테이너 네임스페이스를 대상으로 하는 경우와 -같은 어떤 상황에서 기능이 작동하지 않을 것으로 예상해야 한다. [쿠버네티스 -사용중단(deprecation) 정책](/docs/reference/using-api/deprecation-policy/)에 따라 이 알파 -기능은 향후 크게 변경되거나, 완전히 제거될 수 있다. +적합하지 않다. [쿠버네티스 사용중단(deprecation) 정책](/docs/reference/using-api/deprecation-policy/)에 따라 +이 알파 기능은 향후 크게 변경되거나, 완전히 제거될 수 있다. 
{{< /warning >}} {{% /capture %}} @@ -75,7 +73,11 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 공유](/docs/tasks/configure-pod-container/share-process-namespace/)를 활성화하면 다른 컨테이너 안의 프로세스를 보는데 도움이 된다. -### 예시 +임시 컨테이너를 사용해서 문제를 해결하는 예시는 +[임시 디버깅 컨테이너로 디버깅하기] +(/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)를 참조한다. + +## 임시 컨테이너 API {{< note >}} 이 섹션의 예시는 `EphemeralContainers` [기능 @@ -84,8 +86,9 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 {{< /note >}} 이 섹션의 에시는 임시 컨테이너가 어떻게 API에 나타나는지 -보여준다. 사용자는 일반적으로 자동화하는 단계의 문제 해결을 위해 `kubectl` -플러그인을 사용했을 것이다. +보여준다. 일반적으로 `kubectl alpha debug` 또는 +다른 `kubectl` [플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)을 +사용해서 API를 직접 호출하지 않고 이런 단계들을 자동화 한다. 임시 컨테이너는 파드의 `ephemeralcontainers` 하위 리소스를 사용해서 생성되며, `kubectl --raw` 를 사용해서 보여준다. 먼저 @@ -177,35 +180,12 @@ Ephemeral Containers: ... ``` -사용자는 `kubectl attach` 를 사용해서 새로운 임시 컨테이너에 붙을 수 있다. +예시와 같이 `kubectl attach`, `kubectl exec`, 그리고 `kubectl logs` 를 사용해서 +다른 컨테이너와 같은 방식으로 새로운 임시 컨테이너와 +상호작용할 수 있다. ```shell kubectl attach -it example-pod -c debugger ``` -만약 프로세스 네임스페이스를 공유를 활성화하면, 사용자는 해당 파드 안의 모든 컨테이너의 프로세스를 볼 수 있다. -예를 들어, 임시 컨테이너에 붙은 이후에 디버거 컨테이너에서 `ps` 를 실행한다. - -```shell -# "디버거" 임시 컨테이너 내부 쉘에서 이것을 실행한다. -ps auxww -``` -다음과 유사하게 출력된다. -``` -PID USER TIME COMMAND - 1 root 0:00 /pause - 6 root 0:00 nginx: master process nginx -g daemon off; - 11 101 0:00 nginx: worker process - 12 101 0:00 nginx: worker process - 13 101 0:00 nginx: worker process - 14 101 0:00 nginx: worker process - 15 101 0:00 nginx: worker process - 16 101 0:00 nginx: worker process - 17 101 0:00 nginx: worker process - 18 101 0:00 nginx: worker process - 19 root 0:00 /pause - 24 root 0:00 sh - 29 root 0:00 ps auxww -``` - {{% /capture %}} diff --git a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 8ebdab2345..7bde7139a9 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -6,7 +6,7 @@ weight: 50 {{% capture overview %}} -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} 사용자는 _토폴로지 분배 제약 조건_ 을 사용해서 지역, 영역, 노드 그리고 기타 사용자-정의 토폴로지 도메인과 같이 장애-도메인으로 설정된 클러스터에 걸쳐 파드가 분산되는 방식을 제어할 수 있다. 이를 통해 고가용성뿐만 아니라, 효율적인 리소스 활용의 목적을 이루는 데 도움이 된다. @@ -18,11 +18,10 @@ weight: 50 ### 기능 게이트 활성화 -`EvenPodsSpread` 기능 게이트의 활성화가 되었는지 확인한다(기본적으로 1.16에서는 -비활성화되어있다). 기능 게이트의 활성화에 대한 설명은 [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) 를 참조한다. {{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} **와** {{< glossary_tooltip text="스케줄러" term_id="kube-scheduler" >}}에 -대해 `EvenPodsSpread` 기능 게이트가 활성화되어야 한다. +대해 `EvenPodsSpread` +[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)가 활성화되어야 한다. ### 노드 레이블 @@ -184,6 +183,46 @@ spec: {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} +### 클러스터 수준의 기본 제약 조건 + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +클러스터에 대한 기본 토폴로지 분배 제약 조건을 설정할 수 있다. 기본 +토폴로지 분배 제약 조건은 다음과 같은 경우에만 파드에 적용된다. + +- `.spec.topologySpreadConstraints` 에는 어떠한 제약도 정의되어 있지 않는 경우. +- 서비스, 레플리케이션 컨트롤러, 레플리카 셋 또는 스테이트풀 셋에 속해있는 경우. + +기본 제약 조건은 [스케줄링 프로파일](/docs/reference/scheduling/profiles)에서 +`PodTopologySpread` 플러그인의 일부로 설정할 수 있다. 
+제약 조건은 `labelSelector` 가 비어 있어야 한다는 점을 제외하고, [위와 동일한 API](#api)로 +제약 조건을 지정한다. 셀렉터는 파드가 속한 서비스, 레플리케이션 컨트롤러, +레플리카 셋 또는 스테이트풀 셋에서 계산한다. + +예시 구성은 다음과 같다. + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha2 +kind: KubeSchedulerConfiguration + +profiles: + pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: failure-domain.beta.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway +``` + +{{< note >}} +기본 스케줄링 제약 조건에 의해 생성된 점수는 +[`DefaultPodTopologySpread` 플러그인](/docs/reference/scheduling/profiles/#scheduling-plugins)에 +의해 생성된 점수와 충돌 할 수 있다. +`PodTopologySpread` 에 대한 기본 제약 조건을 사용할 때 스케줄링 프로파일에서 +이 플러그인을 비활성화 하는 것을 권장한다. +{{< /note >}} + ## 파드어피니티(PodAffinity)/파드안티어피니티(PodAntiAffinity)와의 비교 쿠버네티스에서 "어피니티(Affinity)"와 관련된 지침은 파드가 @@ -201,9 +240,9 @@ spec: ## 알려진 제한사항 -1.16을 기준으로 이 기능은 알파(Alpha)이며, 몇 가지 알려진 제한사항이 있다. +1.18을 기준으로 이 기능은 베타(Beta)이며, 몇 가지 알려진 제한사항이 있다. -- `Deployment` 를 스케일링 다운하면 그 결과로 파드의 분포가 불균형이 될 수 있다. +- 디플로이먼트를 스케일링 다운하면 그 결과로 파드의 분포가 불균형이 될 수 있다. - 파드와 일치하는 테인트(taint)가 된 노드가 존중된다. [이슈 80921](https://github.com/kubernetes/kubernetes/issues/80921)을 본다. {{% /capture %}} diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md index 1a12f2028a..94927fbfc1 100644 --- a/content/ko/docs/home/_index.md +++ b/content/ko/docs/home/_index.md @@ -3,7 +3,7 @@ title: 쿠버네티스 문서 noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "홈" main_menu: true weight: 10 @@ -40,7 +40,7 @@ cards: button: "태스크 보기" button_path: "/ko/docs/tasks" - name: training - title: 교육" + title: "교육" description: "공인 쿠버네티스 인증을 획득하고 클라우드 네이티브 프로젝트를 성공적으로 수행하세요!" button: "교육 보기" button_path: "/training" diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md index ada3246855..a9ce09b988 100644 --- a/content/ko/docs/reference/_index.md +++ b/content/ko/docs/reference/_index.md @@ -36,13 +36,15 @@ content_template: templates/concept * [JSONPath](/docs/reference/kubectl/jsonpath/) - kubectl에서 [JSONPath 표현](http://goessner.net/articles/JsonPath/)을 사용하기 위한 문법 가이드. * [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - 안정적인 쿠버네티스 클러스터를 쉽게 프로비전하기 위한 CLI 도구. -## 설정 레퍼런스 +## 컴포넌트 레퍼런스 * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 각 노드에서 구동되는 주요한 *노드 에이전트*. kubelet은 PodSpecs 집합을 가지며 기술된 컨테이너가 구동되고 있는지, 정상 작동하는지를 보장한다. * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - 파드, 서비스, 레플리케이션 컨트롤러와 같은 API 오브젝트에 대한 검증과 구성을 수행하는 REST API. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - 쿠버네티스에 탑재된 핵심 제어 루프를 포함하는 데몬. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 간단한 TCP/UDP 스트림 포워딩이나 백-엔드 집합에 걸쳐서 라운드-로빈 TCP/UDP 포워딩을 할 수 있다. * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - 가용성, 성능 및 용량을 관리하는 스케줄러. + * [kube-scheduler 정책](/docs/reference/scheduling/policies) + * [kube-scheduler 프로파일](/docs/reference/scheduling/profiles) ## 설계 문서 diff --git a/content/ko/docs/reference/glossary/managed-service.md b/content/ko/docs/reference/glossary/managed-service.md new file mode 100644 index 0000000000..282bc7e9d6 --- /dev/null +++ b/content/ko/docs/reference/glossary/managed-service.md @@ -0,0 +1,18 @@ +--- +title: 매니지드 서비스 +id: managed-service +date: 2018-04-12 +full_link: +short_description: > + 타사 공급자가 유지보수하는 소프트웨어. + +aka: + +tags: +- extension +--- + 타사 공급자가 유지보수하는 소프트웨어. 
+ + + +매니지드 서비스의 몇 가지 예시로 AWS EC2, Azure SQL Database 그리고 GCP Pub/Sub이 있으나, 애플리케이션에서 사용할 수 있는 모든 소프트웨어 제품이 될 수 있다. [서비스 카탈로그](/docs/concepts/service-catalog/)는 {{< glossary_tooltip text="서비스 브로커" term_id="service-broker" >}}가 제공하는 매니지드 서비스의 목록과 프로비전, 바인딩하는 방법을 제공한다. diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md index 30f2aabf94..352529f4ce 100755 --- a/content/ko/docs/reference/glossary/replication-controller.md +++ b/content/ko/docs/reference/glossary/replication-controller.md @@ -1,19 +1,25 @@ --- -title: 레플리케이션 컨트롤러(Replication Controller) +title: 레플리케이션 컨트롤러(ReplicationController) id: replication-controller date: 2018-04-12 full_link: short_description: > - 특정 수의 파드 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. + (사용 중단된) 복제된 애플리케이션을 관리하는 API 오브젝트 aka: tags: - workload - core-object --- - 특정 수의 {{< glossary_tooltip text="파드" term_id="pod" >}} 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. + 특정한 수의 {{< glossary_tooltip text="파드" term_id="pod" >}} 인스턴스가 +실행 중인지 확인하면서 복제된 애플리케이션을 관리하는 워크로드 리소스이다. -레플리케이션 컨트롤러는 파드에 설정된 값에 따라서, 동작하는 파드의 인스턴스를 자동으로 추가하거나 제거할 것이다. 파드가 삭제되거나 실수로 너무 많은 수의 파드가 시작된 경우, 파드가 지정된 수의 인스턴스로 돌아갈 수 있게 허용한다. +컨트롤 플레인은 일부 파드에 장애가 발생하거나, 수동으로 파드를 삭제하거나, +실수로 너무 많은 수의 파드가 시작된 경우에도 정의된 수량의 파드가 실행되도록 한다. +{{< note >}} +레플리케이션컨트롤러는 사용 중단되었다. 유사한 +것으로는 {{< glossary_tooltip text="디플로이먼트" term_id="deployment" >}}를 본다. +{{< /note >}} diff --git a/content/ko/docs/reference/glossary/service-catalog.md b/content/ko/docs/reference/glossary/service-catalog.md new file mode 100644 index 0000000000..25f1667853 --- /dev/null +++ b/content/ko/docs/reference/glossary/service-catalog.md @@ -0,0 +1,18 @@ +--- +title: 서비스 카탈로그(Service Catalog) +id: service-catalog +date: 2018-04-12 +full_link: +short_description: > + 쿠버네티스 클러스터 내에서 실행되는 응용 프로그램이 클라우드 공급자가 제공하는 데이터 저장소 서비스와 같은 외부 관리 소프트웨어 제품을 쉽게 사용할 수 있도록하는 확장 API이다. + +aka: +tags: +- extension +--- + 쿠버네티스 클러스터 내에서 실행되는 응용 프로그램이 클라우드 공급자가 제공하는 데이터 저장소 서비스와 같은 외부 관리 소프트웨어 제품을 쉽게 사용할 수 있도록하는 확장 API이다. + + + +서비스 생성 또는 관리에 대한 자세한 지식 없이도 {{< glossary_tooltip text="서비스 브로커" term_id="service-broker" >}}를 통해 외부의 {{< glossary_tooltip text="매니지드 서비스" term_id="managed-service" >}}의 목록과 프로비전, 바인딩하는 방법을 제공한다. + diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md index fe3e81d84f..5196fb724f 100644 --- a/content/ko/docs/setup/_index.md +++ b/content/ko/docs/setup/_index.md @@ -49,63 +49,6 @@ card: 운영 환경을 위한 솔루션을 평가할 때에는, 쿠버네티스 클러스터 운영에 대한 어떤 측면(또는 _추상적인 개념_)을 스스로 관리하기를 원하는지, 제공자에게 넘기기를 원하는지 고려하자. -몇 가지 가능한 쿠버네티스 클러스터의 추상적인 개념은 {{< glossary_tooltip text="애플리케이션" term_id="applications" >}}, {{< glossary_tooltip text="데이터 플레인" term_id="data-plane" >}}, {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}, {{< glossary_tooltip text="클러스터 인프라스트럭처" term_id="cluster-infrastructure" >}}, 및 {{< glossary_tooltip text="클러스터 운영" term_id="cluster-operations" >}}이다. - -다음의 다이어그램은 쿠버네티스 클러스터에 대해 가능한 추상적인 개념을 나열하고, 각 추상적인 개념을 사용자 스스로 관리하는지 제공자에 의해 관리되는지를 보여준다. - -운영 환경 솔루션![운영 환경 솔루션](/images/docs/KubernetesSolutions.svg) - -{{< table caption="제공자와 솔루션을 나열한 운영 환경 솔루션 표." >}} -다음 운영 환경 솔루션 표는 제공자와 솔루션을 나열한다. 
- -|제공자 | 매니지드 | 턴키 클라우드 | 온-프렘(on-prem) 데이터센터 | 커스텀 (클라우드) | 커스텀 (온-프레미스 VMs)| 커스텀 (베어 메탈) | -| --------- | ------ | ------ | ------ | ------ | ------ | ----- | -| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | | -| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | | -| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | | -| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | | -| [APPUiO](https://appuio.ch/)  | ✔ | ✔ | ✔ | | | | -| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ | -| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | | -| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | | -| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ | -| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| -| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ -| [Containership](https://containership.io) | ✔ |✔ | | | | -| [D2iQ](https://d2iq.com/) | | [Kommander](https://docs.d2iq.com/ksphere/kommander/) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | -| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ -| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | -| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ -| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [사용자 정의 확장](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) | -| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | | -| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | -| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | | -| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | -| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | -| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | -| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ | -| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| [KubeSail](https://kubesail.com/) | ✔ | | | | | -| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | -| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | -| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | | -| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | | -| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | | -| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | | -| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix 
AHV](https://www.nutanix.com/products/acropolis/virtualization) | -| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | | -| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/) -| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | | -| [oVirt](https://www.ovirt.org/) | | | | | ✔ | -| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | | -| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔ -| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/) -| [Supergiant](https://supergiant.io/) | |✔ | | | | -| [SUSE](https://www.suse.com/) | | ✔ | | | | -| [SysEleven](https://www.syseleven.io/) | ✔ | | | | | -| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ | -| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | | -| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) -| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | | +[공인 쿠버네티스](https://github.com/cncf/k8s-conformance/#certified-kubernetes) 공급자의 목록과 "[파트너](https://kubernetes.io/partners/#conformance)"를 참조한다. {{% /capture %}} diff --git a/content/ko/docs/setup/learning-environment/minikube.md b/content/ko/docs/setup/learning-environment/minikube.md index f6af768b68..82d3f4f26d 100644 --- a/content/ko/docs/setup/learning-environment/minikube.md +++ b/content/ko/docs/setup/learning-environment/minikube.md @@ -183,23 +183,25 @@ Minikube는 또한 "minikube" 컨텍스트를 생성하고 이를 kubectl의 기 minikube start --kubernetes-version {{< param "fullversion" >}} ``` #### VM 드라이버 지정하기 -`minikube start` 코멘드에 `--vm-driver=` 플래그를 추가해서 VM 드라이버를 변경할 수 있다. +`minikube start` 코멘드에 `--driver=` 플래그를 추가해서 VM 드라이버를 변경할 수 있다. 코멘드를 예를 들면 다음과 같다. ```shell -minikube start --vm-driver= +minikube start --driver= ``` Minikube는 다음의 드라이버를 지원한다. {{< note >}} - 지원되는 드라이버와 플러그인 설치 방법에 대한 보다 상세한 정보는 [드라이버](https://git.k8s.io/minikube/docs/drivers.md)를 참조한다. + 지원되는 드라이버와 플러그인 설치 방법에 대한 보다 상세한 정보는 [드라이버](https://minikube.sigs.k8s.io/docs/reference/drivers/)를 참조한다. 
{{< /note >}} * virtualbox * vmwarefusion -* kvm2 ([드라이버 설치](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver)) -* hyperkit ([드라이버 설치](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver)) -* hyperv ([드라이버 설치](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver)) +* docker (EXPERIMENTAL) +* kvm2 ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/)) +* hyperkit ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/)) +* hyperv ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/)) 다음 IP는 동적이며 변경할 수 있다. `minikube ip`로 알아낼 수 있다. -* vmware ([드라이버 설치](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver) +* vmware ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/)) (VMware unified driver) +* parallels ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/parallels/)) * none (쿠버네티스 컴포넌트를 가상 머신이 아닌 호스트 상에서 구동한다. 리눅스를 실행중이어야 하고, {{< glossary_tooltip term_id="docker" >}}가 설치되어야 한다.) {{< caution >}} diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md index 4429049bfb..0331440eac 100644 --- a/content/ko/docs/setup/production-environment/container-runtimes.md +++ b/content/ko/docs/setup/production-environment/container-runtimes.md @@ -62,7 +62,7 @@ kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다. ## Docker 각 머신들에 대해서, Docker를 설치한다. -버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. +버전 19.03.8이 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. 쿠버네티스 릴리스 노트를 통해서, 최신에 검증된 Docker 버전의 지속적인 파악이 필요하다. 시스템에 Docker를 설치하기 위해서 아래의 커맨드들을 사용한다. @@ -86,9 +86,9 @@ add-apt-repository \ ## Docker CE 설치. apt-get update && apt-get install -y \ - containerd.io=1.2.10-3 \ - docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) + containerd.io=1.2.13-1 \ + docker-ce=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) \ + docker-ce-cli=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) # 데몬 설정. 
cat > /etc/docker/daemon.json <}} -{{< tab name="Ubuntu 16.04" codelang="bash" >}} +{{< tab name="Debian" codelang="bash" >}} +# Debian 개발 배포본(Unstable/Sid) +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add - -# 선행 조건 설치 -apt-get update -apt-get install -y software-properties-common +# Debian Testing +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add - -add-apt-repository ppa:projectatomic/ppa -apt-get update +# Debian 10 +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add - + +# Raspbian 10 +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add - # CRI-O 설치 -apt-get install -y cri-o-1.15 - +sudo apt-get install cri-o-1.17 {{< /tab >}} -{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} +{{< tab name="Ubuntu 18.04, 19.04 and 19.10" codelang="bash" >}} +# 리포지터리 설치 +. /etc/os-release +sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add - +sudo apt-get update + +# CRI-O 설치 +sudo apt-get install cri-o-1.17 +{{< /tab >}} + +{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} # 선행 조건 설치 yum-config-manager --add-repo=https://cbs.centos.org/repos/paas7-crio-115-release/x86_64/os/ # CRI-O 설치 yum install --nogpgcheck -y cri-o +{{< tab name="openSUSE Tumbleweed" codelang="bash" >}} +sudo zypper install cri-o {{< /tab >}} {{< /tabs >}} @@ -304,4 +324,4 @@ kubeadm을 사용하는 경우에도 마찬가지로, 수동으로 자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart)를 참고한다. -{{% /capture %}} \ No newline at end of file +{{% /capture %}} diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md index c12f9e68a3..c1ba8d3646 100644 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -19,7 +19,7 @@ weight: 75 ## 시작하기 전에 -* [윈도우 서버에서 운영하는 마스터와 워커 노드](../user-guide-windows-nodes)를 포함한 쿠버네티스 클러스터를 생성한다. +* [윈도우 서버에서 운영하는 마스터와 워커 노드](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)를 포함한 쿠버네티스 클러스터를 생성한다. * 쿠버네티스에서 서비스와 워크로드를 생성하고 배포하는 것은 리눅스나 윈도우 컨테이너 모두 비슷한 방식이라는 것이 중요하다. [Kubectl 커맨드](/docs/reference/kubectl/overview/)로 클러스터에 접속하는 것은 동일하다. 아래 단원의 예시는 윈도우 컨테이너를 경험하기 위해 제공한다. 
## 시작하기: 윈도우 컨테이너 배포하기 diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md deleted file mode 100644 index e6720c9b0b..0000000000 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ /dev/null @@ -1,354 +0,0 @@ ---- -reviewers: -title: 쿠버네티스에서 윈도우 노드 추가 가이드 -min-kubernetes-server-version: v1.14 -content_template: templates/tutorial -weight: 70 ---- - -{{% capture overview %}} - -쿠버네티스 플랫폼은 이제 리눅스와 윈도우 컨테이너 모두 운영할 수 있다. 윈도우 노드도 클러스터에 등록할 수 있다. 이 페이지에서는 어떻게 하나 또는 그 이상의 윈도우 노드를 클러스터에 등록할 수 있는지 보여준다. -{{% /capture %}} - - -{{% capture prerequisites %}} - -* 윈도우 컨테이너를 호스트하는 윈도우 노드를 구성하려면 [윈도우 서버 2019 라이선스](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing)를 소유해야 한다. 클러스터를 위해서 소속 기관의 라이선스를 사용하거나, Microsoft, 리셀러로 부터 취득할 수 있으며, GCP, AWS, Azure와 같은 주요 클라우드 제공자의 마켓플레이스를 통해 윈도우 서버를 운영하는 가상머신을 프로비저닝하여 취득할 수도 있다. [사용시간이 제한된 시험판](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial)도 활용 가능하다. - -* 컨트롤 플레인에 접근할 수 있는 리눅스 기반의 쿠버네티스 클러스터를 구축한다.(몇 가지 예시는 [kubeadm으로 단일 컨트롤플레인 클러스터 만들기](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/docs/setup/production-environment/turnkey/azure/), [GCE](/docs/setup/production-environment/turnkey/gce/), [AWS](/docs/setup/production-environment/turnkey/aws/)를 포함한다) - -{{% /capture %}} - - -{{% capture objectives %}} - -* 윈도우 노드를 클러스터에 등록하기 -* 리눅스와 윈도우에서 동작하는 파드와 서비스가 상호 간에 통신할 수 있게 네트워크를 구성하기 - -{{% /capture %}} - - -{{% capture lessoncontent %}} - -## 시작하기: 사용자 클러스터에 윈도우 노드 추가하기 - -### IP 주소 체계 설계하기 - -쿠버네티스 클러스터 관리를 위해 실수로 네트워크 충돌을 일으키지 않도록 IP 주소에 대해 신중히 설계해야 한다. 이 가이드는 [쿠버네티스 네트워킹 개념](/docs/concepts/cluster-administration/networking/)에 익숙하다 가정한다. - -클러스터를 배포하려면 다음 주소 공간이 필요하다. - -| 서브넷 / 주소 범위 | 비고 | 기본값 | -| --- | --- | --- | -| 서비스 서브넷 | 라우트 불가한 순수한 가상 서브넷으로 네트워크 토플로지에 관계없이 파드에서 서비스로 단일화된 접근을 제공하기 위해 사용한다. 서비스 서브넷은 노드에서 실행 중인 `kube-proxy`에 의해서 라우팅 가능한 주소 공간으로(또는 반대로) 번역된다. | 10.96.0.0/12 | -| 클러스터 서브넷 | 클러스터 내에 모든 파드에 사용되는 글로벌 서브넷이다. 각 노드에는 파드가 사용하기 위한 /24 보다 작거나 같은 서브넷을 할당한다. 서브넷은 클러스터 내에 모든 파드를 수용할 수 있을 정도로 충분히 큰 값이어야 한다. *최소 서브넷*의 크기를 계산하려면: `(노드의 개수) + (노드의 개수 * 구성하려는 노드 당 최대 파드 개수)`. 예: 노드 당 100개 파드인 5 노드짜리 클러스터 = `(5) + (5 * 100) = 505.` | 10.244.0.0/16 | -| 쿠버네티스 DNS 서비스 IP | DNS 확인 및 클러스터 서비스 검색에 사용되는 서비스인 `kube-dns`의 IP 주소이다. | 10.96.0.10 | - -클러스터에 IP 주소를 얼마나 할당해야 할지 결정하기 위해 '쿠버네티스에서 윈도우 컨테이너: 지원되는 기능: 네트워킹'에서 소개한 네트워킹 선택 사항을 검토하자. - -### 윈도우에서 실행되는 구성 요소 - -쿠버네티스 컨트롤 플레인이 리눅스 노드에서 운영되는 반면, 다음 요소는 윈도우 노드에서 구성되고 운영된다. - -1. kubelet -2. kube-proxy -3. kubectl (선택적) -4. 컨테이너 런타임 - -v1.14 이후의 최신 바이너리를 [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases)에서 받아온다. kubeadm, kubectl, kubelet, kube-proxy의 Windows-amd64 바이너리는 CHANGELOG 링크에서 찾아볼 수 있다. - -### 네트워크 구성 - -리눅스 기반의 쿠버네티스 컨트롤 플레인("마스터") 노드를 가지고 있다면 네트워킹 솔루션을 선택할 준비가 된 것이다. 이 가이드는 단순화를 위해 VXLAN 방식의 플라넬(Flannel)을 사용하여 설명한다. - -#### 리눅스 컨트롤 플레인에서 VXLAN 방식으로 플라넬 구성하기 - -1. 플라넬을 위해 쿠버네티스 마스터를 준비한다. - - 클러스터의 쿠버네티스 마스터에서 사소한 준비를 권장한다. 플라넬을 사용할 때에 iptables 체인으로 IPv4 트래픽을 브릿지할 수 있게 하는 것은 추천한다. 이는 다음 커맨드를 이용하여 수행할 수 있다. - - ```bash - sudo sysctl net.bridge.bridge-nf-call-iptables=1 - ``` - -1. 플라넬 다운로드 받고 구성하기 - - 가장 최신의 플라넬 메니페스트를 다운로드한다. - - ```bash - wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml - ``` - - VXLAN 네트워킹 벡엔드를 가능하게 하기 위해 수정할 곳은 두 곳이다. 
- - 아래 단계를 적용하면 `kube-flannel.yml`의 `net-conf.json`부분을 다음과 같게 된다. - - ```json - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan", - "VNI" : 4096, - "Port": 4789 - } - } - ``` - - {{< note >}}리눅스의 플라넬과 윈도우의 플라넬이 상호운용하기 위해서 `VNI`는 반드시 4096이고, `Port`는 4789여야 한다. 다른 VNI는 곧 지원될 예정이다. [VXLAN 문서](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)에서 - 이 필드의 설명 부분을 보자.{{< /note >}} - -1. `kube-flannel.yml`의 `net-conf.json` 부분을 거듭 확인하자. - 1. 클러스터 서브넷(예, "10.244.0.0/16")은 IP 주소 설계에 따라 설정되어야 한다. - * VNI 4096 은 벡엔드에 설정한다. - * Port 4789 는 벡엔드에 설정한다. - 1. `kube-flannel.yml`의 `cni-conf.json` 부분에서 네트워크 이름을 `vxlan0`로 바꾼다. - - `cni-conf.json`는 다음과 같다. - - ```json - cni-conf.json: | - { - "name": "vxlan0", - "plugins": [ - { - "type": "flannel", - "delegate": { - "hairpinMode": true, - "isDefaultGateway": true - } - }, - { - "type": "portmap", - "capabilities": { - "portMappings": true - } - } - ] - } - ``` - -1. 플라넬 매니페스트를 적용하고 확인하기 - - 플라넬 구성을 적용하자. - - ```bash - kubectl apply -f kube-flannel.yml - ``` - - 몇 분 뒤에 플라넬 파드 네트워크가 배포되었다면, 모든 파드에서 운영 중인 것을 확인할 수 있다. - - ```bash - kubectl get pods --all-namespaces - ``` - - 결과는 다음과 같다. - - ``` - NAMESPACE NAME READY STATUS RESTARTS AGE - kube-system etcd-flannel-master 1/1 Running 0 1m - kube-system kube-apiserver-flannel-master 1/1 Running 0 1m - kube-system kube-controller-manager-flannel-master 1/1 Running 0 1m - kube-system kube-dns-86f4d74b45-hcx8x 3/3 Running 0 12m - kube-system kube-flannel-ds-54954 1/1 Running 0 1m - kube-system kube-proxy-Zjlxz 1/1 Running 0 1m - kube-system kube-scheduler-flannel-master 1/1 Running 0 1m - ``` - - 플라넬 데몬셋에 노드 셀렉터가 적용되었음을 확인한다. - - ```bash - kubectl get ds -n kube-system - ``` - - 결과는 다음과 같다. 노드 셀렉터 `beta.kubernetes.io/os=linux`가 적용되었다. - - ``` - NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE - kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux 21d - kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 26d - ``` - - - -### 윈도우 워커 노드 추가하기 - -이번 단원은 맨 땅에서부터 온프레미스 클러스터에 가입하기까지 윈도우 노드 구성을 다룬다. 클러스터가 클라우드상에 있다면, [퍼블릭 클라우드 제공자 단원](#퍼블릭-클라우드-제공자)에 있는 클라우드에 특정한 가이드를 따르도록 된다. - -#### 윈도우 노드 준비하기 - -{{< note >}} -윈도우 단원에서 모든 코드 부분은 윈도우 워커 노드에서 높은 권한(Administrator)으로 파워쉘(PowerShell) 환경에서 구동한다. -{{< /note >}} - -1. 설치 및 참여(join) 스크립트가 포함된 [SIG Windows tools](https://github.com/kubernetes-sigs/sig-windows-tools) 리포지터리를 내려받는다. - ```PowerShell - [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 - Start-BitsTransfer https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip - tar -xvf .\master.zip --strip-components 3 sig-windows-tools-master/kubeadm/v1.15.0/* - Remove-Item .\master.zip - ``` - -1. 쿠버네티스 [구성 파일](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclustervxlan.json)을 커스터마이즈한다. 
- - ``` - { - "Cri" : { // Contains values for container runtime and base container setup - "Name" : "dockerd", // Container runtime name - "Images" : { - "Pause" : "mcr.microsoft.com/k8s/core/pause:1.2.0", // Infrastructure container image - "Nanoserver" : "mcr.microsoft.com/windows/nanoserver:1809", // Base Nanoserver container image - "ServerCore" : "mcr.microsoft.com/windows/servercore:ltsc2019" // Base ServerCore container image - } - }, - "Cni" : { // Contains values for networking executables - "Name" : "flannel", // Name of network fabric - "Source" : [{ // Contains array of objects containing values for network daemon(s) - "Name" : "flanneld", // Name of network daemon - "Url" : "https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld.exe" // Direct URL pointing to network daemon executable - } - ], - "Plugin" : { // Contains values for CNI network plugin - "Name": "vxlan" // Backend network mechanism to use: ["vxlan" | "bridge"] - }, - "InterfaceName" : "Ethernet" // Designated network interface name on Windows node to use as container network - }, - "Kubernetes" : { // Contains values for Kubernetes node binaries - "Source" : { // Contains values for Kubernetes node binaries - "Release" : "1.15.0", // Version of Kubernetes node binaries - "Url" : "https://dl.k8s.io/v1.15.0/kubernetes-node-windows-amd64.tar.gz" // Direct URL pointing to Kubernetes node binaries tarball - }, - "ControlPlane" : { // Contains values associated with Kubernetes control-plane ("Master") node - "IpAddress" : "kubemasterIP", // IP address of control-plane ("Master") node - "Username" : "localadmin", // Username on control-plane ("Master") node with remote SSH access - "KubeadmToken" : "token", // Kubeadm bootstrap token - "KubeadmCAHash" : "discovery-token-ca-cert-hash" // Kubeadm CA key hash - }, - "KubeProxy" : { // Contains values for Kubernetes network proxy configuration - "Gates" : "WinOverlay=true" // Comma-separated key-value pairs passed to kube-proxy feature gate flag - }, - "Network" : { // Contains values for IP ranges in CIDR notation for Kubernetes networking - "ServiceCidr" : "10.96.0.0/12", // Service IP subnet used by Services in CIDR notation - "ClusterCidr" : "10.244.0.0/16" // Cluster IP subnet used by Pods in CIDR notation - } - }, - "Install" : { // Contains values and configurations for Windows node installation - "Destination" : "C:\\ProgramData\\Kubernetes" // Absolute DOS path where Kubernetes will be installed on the Windows node - } -} - ``` - -{{< note >}} -사용자는 쿠버네티스 컨트롤 플레인("마스터") 노드에서 `kubeadm token create --print-join-command`를 실행해서 `ControlPlane.KubeadmToken`과 `ControlPlane.KubeadmCAHash` 필드를 위한 값을 생성할 수 있다. -{{< /note >}} - -1. 컨테이너와 쿠버네티스를 설치 (시스템 재시작 필요) - -기존에 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 쿠버네티스를 윈도우 서버 컨테이너 호스트에 설치한다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install - ``` - 이 때 `-ConfigFile`는 쿠버네티스 구성 파일의 경로를 가리킨다. - -{{< note >}} -아래 예제에서, 우리는 오버레이 네트워킹 모드를 사용한다. 이는 [KB4489899](https://support.microsoft.com/help/4489899)를 포함한 윈도우 서버 버전 2019와 최소 쿠버네티스 v1.14 이상이 필요하다. 이 요구사항을 만족시키기 어려운 사용자는 구성 파일의 [플러그인](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclusterbridge.json#L18)으로 `bridge`를 선택하지 말고 `L2bridge` 네트워킹을 사용해야만 한다. -{{< /note >}} - - ![alt_text](../kubecluster.ps1-install.gif "KubeCluster.ps1 install output") - - -대상으로 하는 윈도우 노드에서, 본 단계는 - -1. 윈도우 서버 컨테이너 역할을 활성화(및 재시작) 한다. -1. 
선택된 컨테이너 런타임을 내려받아 설치한다. -1. 필요한 컨테이너 이미지를 모두 내려받는다. -1. 쿠버네티스 바이너리를 내려받아서 `$PATH` 환경 변수에 추가한다. -1. 쿠버네티스 구성 파일에서 선택한 내용을 기반으로 CNI 플러그인을 내려받는다. -1. (선택적으로) 참여(join) 중에 컨트롤 플레인("마스터") 노드에 접속하기 위한 새로운 SSH 키를 생성한다. - - {{< note >}}또한, SSH 키 생성 단계에서 생성된 공개 SSH 키를 (리눅스) 컨트롤 플레인 노드의 `authorized_keys` 파일에 추가해야 한다. 이는 한 번만 수행하면 된다. 스크립트가 출력물의 마지막 부분에 이를 따라 할 수 있도록 단계를 출력해 준다.{{< /note >}} - -일단 설치가 완료되면, 생성된 모든 구성 파일이나 바이너리는 윈도우 노드가 참여하기 전에 수정될 수 있다. - -#### 윈도우 노드를 쿠버네티스 클러스터에 참여시키기 - -이 섹션에서는 클러스터를 구성하기 위해서 [쿠버네티스가 설치된 윈도우 노드](#윈도우-노드-준비하기)를 기존의 (리눅스) 컨트롤 플레인에 참여시키는 방법을 다룬다. - -앞서 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 윈도우 노드를 클러스터에 참여시킨다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join - ``` - 이 때 `-ConfigFile` 쿠버네티스 구성 파일의 경로를 가리킨다. - -![alt_text](../kubecluster.ps1-join.gif "KubeCluster.ps1 join output") - -{{< note >}} -어떤 이유에서든 부트스트랩 동안이나 참여 과정에서 스크립트가 실패하면, 뒤따르는 참여 시도를 시작하기 전에 신규 PowerShell 세션을 시작해야한다. -{{< /note >}} - -본 단계는 다음의 행위를 수행한다. - -1. 컨트롤 플레인("마스터") 노드에 SSH로 접속해서 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 얻어온다. -1. kubelet을 윈도우 서비스로 등록한다. -1. CNI 네트워크 플러그인을 구성한다. -1. 선택된 네트워크 인터페이스 상에서 HNS 네트워크를 생성한다. - {{< note >}} - 이는 vSwitch가 생성되는 동안 몇 초간의 네트워크 순단현상을 야기할 수 있다. - {{< /note >}} -1. (vxlan 플러그인을 선택한 경우) 오버레이 트래픽을 위해서 인바운드(inbound) 방화벽의 UDP 포트 4789를 열어준다. -1. flanneld를 윈도우 서비스로 등록한다. -1. kube-proxy를 윈도우 서비스로 등록한다. - -이제 클러스터에서 다음의 명령을 실행해서 윈도우 노드를 볼 수 있다. - -```bash -kubectl get nodes -``` - -#### 윈도우 노드를 쿠버네티스 클러스터에서 제거하기 -이 섹션에서는 윈도우 노드를 쿠버네티스 클러스터에서 제거하는 방법을 다룬다. - -앞서 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 클러스터에서 윈도우 노드를 제거한다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset - ``` - 이 때 `-ConfigFile` 쿠버네티스 구성 파일의 경로를 가리킨다. - -![alt_text](../kubecluster.ps1-reset.gif "KubeCluster.ps1 reset output") - -본 단계는 다음의 행위를 대상이되는 윈도우 노드에서 수행한다. - -1. 윈도우 노드를 쿠버네티스 클러스터에서 삭제한다. -1. 구동 중인 모든 컨테이너를 중지시킨다. -1. 모든 컨테이너 네트워킹(HNS) 자원을 삭제한다. -1. 등록된 모든 쿠버네티스 서비스(flanneld, kubelet, kube-proxy)를 해지한다. -1. 쿠버네티스 바이너리(kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe)를 모두 삭제한다. -1. CNI 네트워크 플러그인 바이너리를 모두 삭제한다. -1. 쿠버네티스 클러스터에 접근하기 위한 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 삭제한다. - - -### 퍼블릭 클라우드 제공자 - -#### Azure - -AKS-Engine은 완전하고, 맞춤 설정이 가능한 쿠버네티스 클러스터를 리눅스와 윈도우 노드에 배포할 수 있다. 단계별 안내가 [GitHub에 있는 문서](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md)로 제공된다. - -#### GCP - -사용자가 [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md)에 있는 단계별 안내를 따라서 완전한 쿠버네티스 클러스터를 GCE 상에 쉽게 배포할 수 있다. - -#### kubeadm과 클러스터 API로 배포하기 - -Kubeadm은 쿠버네티스 클러스터를 배포하는 사용자에게 산업 표준이 되었다. Kubeadm에서 윈도우 노드 지원은 쿠버네티스 v1.16 이후 부터 알파 기능이다. 또한 윈도우 노드가 올바르게 프로비저닝되도록 클러스터 API에 투자하고 있다. 보다 자세한 내용은, [Windows KEP를 위한 kubeadm](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/20190424-kubeadm-for-windows.md)을 통해 상담하도록 하자. - - -### 다음 단계 - -이제 클러스터 내에 윈도우 컨테이너를 실행하도록 윈도우 워커를 구성했으니, 리눅스 컨테이너를 실행할 리눅스 노드를 1개 이상 추가할 수 있다. 이제 윈도우 컨테이너를 클러스터에 스케줄링할 준비가 됬다. 
- -{{% /capture %}} - diff --git a/content/ko/docs/tasks/administer-cluster/cluster-management.md b/content/ko/docs/tasks/administer-cluster/cluster-management.md index 4d1446526f..1fd99c0895 100644 --- a/content/ko/docs/tasks/administer-cluster/cluster-management.md +++ b/content/ko/docs/tasks/administer-cluster/cluster-management.md @@ -21,7 +21,7 @@ content_template: templates/concept ## 클러스터 업그레이드 -클러스터 업그레이드 상태의 현황은 제공자에 따라 달라지며, 몇몇 릴리스들은 업그레이드에 각별한 주의를 요하기도 한다. 관리자들에게는 클러스터 업그레이드에 앞서 [릴리스 노트](https://git.k8s.io/kubernetes/CHANGELOG.md)와 버전에 맞는 업그레이드 노트 모두를 검토하도록 권장하고 있다. +클러스터 업그레이드 상태의 현황은 제공자에 따라 달라지며, 몇몇 릴리스들은 업그레이드에 각별한 주의를 요하기도 한다. 관리자들에게는 클러스터 업그레이드에 앞서 [릴리스 노트](https://git.k8s.io/kubernetes/CHANGELOG/README.md)와 버전에 맞는 업그레이드 노트 모두를 검토하도록 권장하고 있다. ### Azure Kubernetes Service (AKS) 클러스터 업그레이드 diff --git a/content/ko/docs/tasks/configure-pod-container/_index.md b/content/ko/docs/tasks/configure-pod-container/_index.md new file mode 100644 index 0000000000..560261ecad --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/_index.md @@ -0,0 +1,4 @@ +--- +title: "파드와 컨테이너 설정" +weight: 20 +--- diff --git a/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md new file mode 100644 index 0000000000..8ce67986bc --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -0,0 +1,104 @@ +--- +title: 노드에 파드 할당 +content_template: templates/task +weight: 120 +--- + +{{% capture overview %}} +이 문서는 쿠버네티스 클러스터의 특정 노드에 쿠버네티스 파드를 할당하는 +방법을 설명한다. +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## 노드에 레이블 추가 + +1. 클러스터의 {{< glossary_tooltip term_id="node" text="노드" >}}를 레이블과 함께 나열하자. + + ```shell + kubectl get nodes --show-labels + ``` + + 결과는 아래와 같다. + + ```shell + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` +1. 노드 한 개를 선택하고, 레이블을 추가하자. + + ```shell + kubectl label nodes disktype=ssd + ``` + + ``는 선택한 노드의 이름이다. + +1. 선택한 노드가 `disktype=ssd` 레이블을 갖고 있는지 확인하자. + + ```shell + kubectl get nodes --show-labels + ``` + + 결과는 아래와 같다. + + ```shell + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` + + 위의 결과에서, `worker0` 노드에 `disktype=ssd` 레이블이 있는 것을 + 확인할 수 있다. + +## 선택한 노드에 스케줄되도록 파드 생성하기 + +이 파드 구성 파일은 `disktype: ssd`라는 선택하는 노드 셀렉터를 가진 파드를 +설명한다. +즉, `disktype=ssd` 레이블이 있는 노드에 파드가 스케줄될 것이라는 +것을 의미한다. + +{{< codenew file="pods/pod-nginx.yaml" >}} + +1. 구성 파일을 사용해서 선택한 노드로 스케줄되도록 파드를 + 생성하자. + + ```shell + kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml + ``` + +1. 파드가 선택한 노드에서 실행 중인지 확인하자. + + ```shell + kubectl get pods --output=wide + ``` + + 결과는 아래와 같다. + + ```shell + NAME READY STATUS RESTARTS AGE IP NODE + nginx 1/1 Running 0 13s 10.200.0.4 worker0 + ``` + +## 특정 노드에 스케줄되도록 파드 생성하기 + +`nodeName` 설정을 통해 특정 노드로 파드를 배포할 수 있다. + +{{< codenew file="pods/pod-nginx-specific-node.yaml" >}} + +설정 파일을 사용해 `foo-node` 노드에 파드를 스케줄되도록 만들어 보자. + +{{% /capture %}} + +{{% capture whatsnext %}} +* [레이블과 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/)에 대해 배우기. 
+* [노드](/ko/docs/concepts/architecture/nodes/)에 대해 배우기. +{{% /capture %}} diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md index dde6650e20..013854884a 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -117,7 +117,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -134,7 +134,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -197,7 +197,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -214,7 +214,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -253,7 +253,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -271,7 +271,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -279,7 +279,7 @@ spec: # ... ``` -`nginx:1.7.9`에서 `nginx:1.11.9`로 이미지를 변경하기 위해 `simple_deployment.yaml` +`nginx:1.14.2`에서 `nginx:1.16.1`로 이미지를 변경하기 위해 `simple_deployment.yaml` 구성 파일을 업데이트 하고, `minReadySeconds` 필드를 삭제한다. {{< codenew file="application/update_deployment.yaml" >}} @@ -301,7 +301,7 @@ kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yam * `replicas` 필드는 `kubectl scale`에 의해 설정된 값 2를 유지한다. 이는 구성 파일에서 생략되었기 때문에 가능하다. -* `image` 필드는 `nginx:1.7.9`에서 `nginx:1.11.9`로 업데이트되었다. +* `image` 필드는 `nginx:1.14.2`에서 `nginx:1.16.1`로 업데이트되었다. * `last-applied-configuration` 어노테이션은 새로운 이미지로 업데이트되었다. * `minReadySeconds` 필드는 지워졌다. * `last-applied-configuration` 어노테이션은 더 이상 `minReadySeconds` 필드를 포함하지 않는다. @@ -318,7 +318,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -336,7 +336,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.11.9 # Set by `kubectl apply` + - image: nginx:1.16.1 # Set by `kubectl apply` # ... 
name: nginx ports: @@ -458,7 +458,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -476,7 +476,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -516,7 +516,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -534,7 +534,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.11.9 # Set by `kubectl apply` + - image: nginx:1.16.1 # Set by `kubectl apply` # ... name: nginx ports: @@ -777,7 +777,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 imagePullPolicy: IfNotPresent # defaulted by apiserver name: nginx ports: @@ -817,7 +817,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -832,7 +832,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -850,7 +850,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -868,7 +868,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 ``` diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md index 695ec57d09..7f2d17ca5f 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md @@ -139,10 +139,10 @@ TODO(pwittrock): 구현이 이루어지면 주석을 해제한다. 다음은 관련 예제이다. ```sh -kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f - +kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f - ``` -1. `kubectl create service -o yaml --dry-run` 커맨드는 서비스에 대한 구성을 생성하지만, 이를 쿠버네티스 API 서버에 전송하는 대신 YAML 형식으로 stdout에 출력한다. +1. `kubectl create service -o yaml --dry-run=client` 커맨드는 서비스에 대한 구성을 생성하지만, 이를 쿠버네티스 API 서버에 전송하는 대신 YAML 형식으로 stdout에 출력한다. 1. `kubectl set selector --local -f - -o yaml` 커맨드는 stdin으로부터 구성을 읽어, YAML 형식으로 stdout에 업데이트된 구성을 기록한다. 1. `kubectl create -f -` 커맨드는 stdin을 통해 제공된 구성을 사용하여 오브젝트를 생성한다. @@ -152,7 +152,7 @@ kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | k 다음은 관련 예제이다. 
```sh -kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run > /tmp/srv.yaml +kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client > /tmp/srv.yaml kubectl create --edit -f /tmp/srv.yaml ``` diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md index 6cd56ae881..87ca926908 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -791,6 +791,12 @@ kubectl get -k ./ kubectl describe -k ./ ``` +다음 명령을 실행해서 디플로이먼트 오브젝트 `dev-my-nginx` 를 매니페스트가 적용된 경우의 클러스터 상태와 비교한다. + +```shell +kubectl diff -k ./ +``` + 디플로이먼트 오브젝트 `dev-my-nginx`를 삭제하려면 다음 명령어를 실행한다. ```shell diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md index 136ff556e9..9ab479baf3 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -291,7 +291,7 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다. ## 구성가능한 스케일링 동작 지원 -[v1.17](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) +[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) 부터 `v2beta2` API는 HPA `behavior` 필드를 통해 스케일링 동작을 구성할 수 있다. 동작은 `behavior` 필드 아래의 `scaleUp` 또는 `scaleDown` diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md index 82a899bba0..41ff7092ec 100644 --- a/content/ko/docs/tasks/tools/install-minikube.md +++ b/content/ko/docs/tasks/tools/install-minikube.md @@ -26,7 +26,7 @@ grep -E --color 'vmx|svm' /proc/cpuinfo {{% tab name="맥OS" %}} 맥OS에서 가상화 지원 여부를 확인하려면, 아래 명령어를 터미널에서 실행한다. ``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' +sysctl -a | grep -E --color 'machdep.cpu.features|VMX' ``` 만약 출력 중에 (색상으로 강조된) `VMX`를 볼 수 있다면, VT-x 기능이 머신에서 활성화된 것이다. {{% /tab %}} @@ -74,7 +74,7 @@ kubectl이 설치되었는지 확인한다. kubectl은 [kubectl 설치하고 설 • [VirtualBox](https://www.virtualbox.org/wiki/Downloads) -Minikube는 쿠버네티스 컴포넌트를 VM이 아닌 호스트에서도 동작하도록 `--vm-driver=none` 옵션도 지원한다. +Minikube는 쿠버네티스 컴포넌트를 VM이 아닌 호스트에서도 동작하도록 `--driver=none` 옵션도 지원한다. 이 드라이버를 사용하려면 [도커](https://www.docker.com/products/docker-desktop) 와 Linux 환경이 필요하지만, 하이퍼바이저는 필요하지 않다. 데비안(Debian) 또는 파생된 배포판에서 `none` 드라이버를 사용하는 경우, @@ -83,7 +83,7 @@ Minikube에서는 동작하지 않는 스냅 패키지 대신 도커용 `.deb` {{< caution >}} `none` VM 드라이버는 보안과 데이터 손실 이슈를 일으킬 수 있다. -`--vm-driver=none` 을 사용하기 전에 [이 문서](https://minikube.sigs.k8s.io/docs/reference/drivers/none/)를 참조해서 더 자세한 내용을 본다. +`--driver=none` 을 사용하기 전에 [이 문서](https://minikube.sigs.k8s.io/docs/reference/drivers/none/)를 참조해서 더 자세한 내용을 본다. {{< /caution >}} Minikube는 도커 드라이브와 비슷한 `vm-driver=podman` 도 지원한다. 슈퍼사용자 권한(root 사용자)으로 실행되는 Podman은 컨테이너가 시스템에서 사용 가능한 모든 기능에 완전히 접근할 수 있는 가장 좋은 방법이다. @@ -214,12 +214,12 @@ Minikube 설치를 마친 후, 현재 CLI 세션을 닫고 재시작한다. Mini {{< note >}} -`minikube start` 시 `--vm-driver` 를 설정하려면, 아래에 `` 로 소문자로 언급된 곳에 설치된 하이퍼바이저의 이름을 입력한다. `--vm-driver` 값의 전체 목록은 [VM driver 문서에서 지정하기](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)에서 확인할 수 있다. +`minikube start` 시 `--driver` 를 설정하려면, 아래에 `` 로 소문자로 언급된 곳에 설치된 하이퍼바이저의 이름을 입력한다. 
`--driver` 값의 전체 목록은 [VM driver 문서에서 지정하기](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)에서 확인할 수 있다. {{< /note >}} ```shell -minikube start --vm-driver= +minikube start --driver= ``` `minikube start` 가 완료되면, 아래 명령을 실행해서 클러스터의 상태를 확인한다. diff --git a/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml b/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml new file mode 100644 index 0000000000..3c2b30f29c --- /dev/null +++ b/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-mem-cpu-per-container +spec: + limits: + - max: + cpu: "800m" + memory: "1Gi" + min: + cpu: "100m" + memory: "99Mi" + default: + cpu: "700m" + memory: "900Mi" + defaultRequest: + cpu: "110m" + memory: "111Mi" + type: Container diff --git a/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml b/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml new file mode 100644 index 0000000000..0ce0f69ac8 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-mem-cpu-per-pod +spec: + limits: + - max: + cpu: "2" + memory: "2Gi" + type: Pod diff --git a/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml b/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml new file mode 100644 index 0000000000..859fc20ece --- /dev/null +++ b/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-memory-ratio-pod +spec: + limits: + - maxLimitRequestRatio: + memory: 2 + type: Pod diff --git a/content/ko/examples/admin/resource/limit-range-pod-1.yaml b/content/ko/examples/admin/resource/limit-range-pod-1.yaml new file mode 100644 index 0000000000..0457792af9 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-1.yaml @@ -0,0 +1,37 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox1 +spec: + containers: + - name: busybox-cnt01 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt02 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + - name: busybox-cnt03 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] + resources: + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt04 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/ko/examples/admin/resource/limit-range-pod-2.yaml b/content/ko/examples/admin/resource/limit-range-pod-2.yaml new file mode 100644 index 0000000000..efac440269 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-2.yaml @@ -0,0 +1,37 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox2 +spec: + containers: + - name: busybox-cnt01 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt02 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] + resources: + requests: + 
memory: "100Mi" + cpu: "100m" + - name: busybox-cnt03 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] + resources: + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt04 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/ko/examples/admin/resource/limit-range-pod-3.yaml b/content/ko/examples/admin/resource/limit-range-pod-3.yaml new file mode 100644 index 0000000000..8afdb6379c --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-3.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox3 +spec: + containers: + - name: busybox-cnt01 + image: busybox + resources: + limits: + memory: "300Mi" + requests: + memory: "100Mi" diff --git a/content/ko/examples/admin/resource/pvc-limit-greater.yaml b/content/ko/examples/admin/resource/pvc-limit-greater.yaml new file mode 100644 index 0000000000..2d92bf92b3 --- /dev/null +++ b/content/ko/examples/admin/resource/pvc-limit-greater.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: pvc-limit-greater +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi diff --git a/content/ko/examples/admin/resource/pvc-limit-lower.yaml b/content/ko/examples/admin/resource/pvc-limit-lower.yaml new file mode 100644 index 0000000000..ef819b6292 --- /dev/null +++ b/content/ko/examples/admin/resource/pvc-limit-lower.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: pvc-limit-lower +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi diff --git a/content/ko/examples/admin/resource/storagelimits.yaml b/content/ko/examples/admin/resource/storagelimits.yaml new file mode 100644 index 0000000000..7f597e4dfe --- /dev/null +++ b/content/ko/examples/admin/resource/storagelimits.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: storagelimits +spec: + limits: + - type: PersistentVolumeClaim + max: + storage: 2Gi + min: + storage: 1Gi diff --git a/content/ko/examples/application/deployment.yaml b/content/ko/examples/application/deployment.yaml index 68ab8289b5..2cd599218d 100644 --- a/content/ko/examples/application/deployment.yaml +++ b/content/ko/examples/application/deployment.yaml @@ -1,4 +1,4 @@ -apiVersion: apps/v1 # apps/v1beta2를 사용하는 1.9.0보다 더 이전의 버전용 +apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx-deployment @@ -6,7 +6,7 @@ spec: selector: matchLabels: app: nginx - replicas: 2 # 템플릿에 매칭되는 파드 2개를 구동하는 디플로이먼트임 + replicas: 2 # tells deployment to run 2 pods matching the template template: metadata: labels: @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/application/simple_deployment.yaml b/content/ko/examples/application/simple_deployment.yaml index 10fa1ddf29..d9c74af8c5 100644 --- a/content/ko/examples/application/simple_deployment.yaml +++ b/content/ko/examples/application/simple_deployment.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/application/update_deployment.yaml b/content/ko/examples/application/update_deployment.yaml index d53aa3e6d2..2d7603acb9 100644 --- a/content/ko/examples/application/update_deployment.yaml +++ 
b/content/ko/examples/application/update_deployment.yaml @@ -13,6 +13,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.11.9 # update the image + image: nginx:1.16.1 # update the image ports: - containerPort: 80 diff --git a/content/ko/examples/controllers/daemonset.yaml b/content/ko/examples/controllers/daemonset.yaml index 1bfa082833..f291b750c1 100644 --- a/content/ko/examples/controllers/daemonset.yaml +++ b/content/ko/examples/controllers/daemonset.yaml @@ -15,6 +15,8 @@ spec: name: fluentd-elasticsearch spec: tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods - key: node-role.kubernetes.io/master effect: NoSchedule containers: diff --git a/content/ko/examples/controllers/nginx-deployment.yaml b/content/ko/examples/controllers/nginx-deployment.yaml index f7f95deebb..685c17aa68 100644 --- a/content/ko/examples/controllers/nginx-deployment.yaml +++ b/content/ko/examples/controllers/nginx-deployment.yaml @@ -16,6 +16,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/pods/pod-nginx-specific-node.yaml b/content/ko/examples/pods/pod-nginx-specific-node.yaml new file mode 100644 index 0000000000..27ead0118a --- /dev/null +++ b/content/ko/examples/pods/pod-nginx-specific-node.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + nodeName: foo-node # 특정 노드에 파드 스케줄 + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/ko/examples/policy/example-psp.yaml b/content/ko/examples/policy/example-psp.yaml new file mode 100644 index 0000000000..7c7a19343f --- /dev/null +++ b/content/ko/examples/policy/example-psp.yaml @@ -0,0 +1,17 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: example +spec: + privileged: false # 특권을 가진 파드는 허용금지! + # 나머지는 일부 필수 필드를 채운다. 
+ seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + runAsUser: + rule: RunAsAny + fsGroup: + rule: RunAsAny + volumes: + - '*' diff --git a/content/ko/examples/policy/privileged-psp.yaml b/content/ko/examples/policy/privileged-psp.yaml new file mode 100644 index 0000000000..915c8d37b5 --- /dev/null +++ b/content/ko/examples/policy/privileged-psp.yaml @@ -0,0 +1,27 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: privileged + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' +spec: + privileged: true + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + volumes: + - '*' + hostNetwork: true + hostPorts: + - min: 0 + max: 65535 + hostIPC: true + hostPID: true + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'RunAsAny' + fsGroup: + rule: 'RunAsAny' diff --git a/content/ko/examples/policy/restricted-psp.yaml b/content/ko/examples/policy/restricted-psp.yaml new file mode 100644 index 0000000000..cbaf2758c0 --- /dev/null +++ b/content/ko/examples/policy/restricted-psp.yaml @@ -0,0 +1,48 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: restricted + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default' + apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' + seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default' + apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' +spec: + privileged: false + # 루트로의 에스컬레이션을 방지하는데 필요하다. + allowPrivilegeEscalation: false + # 이것은 루트가 아닌 사용자 + 권한 에스컬레이션을 허용하지 않는 것으로 중복이지만, + # 심층 방어를 위해 이를 제공한다. + requiredDropCapabilities: + - ALL + # 기본 볼륨 유형을 허용한다. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # 클러스터 관리자가 설정한 퍼시스턴트볼륨을 사용하는 것이 안전하다고 가정한다. + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # 루트 권한없이 컨테이너를 실행해야 한다. + rule: 'MustRunAsNonRoot' + seLinux: + # 이 정책은 노드가 SELinux가 아닌 AppArmor를 사용한다고 가정한다. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # 루트 그룹을 추가하지 않는다. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # 루트 그룹을 추가하지 않는다. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false diff --git a/content/ko/partners/_index.html b/content/ko/partners/_index.html new file mode 100644 index 0000000000..2ac7e6945c --- /dev/null +++ b/content/ko/partners/_index.html @@ -0,0 +1,91 @@ +--- +title: 파트너 +bigheader: 쿠버네티스 파트너 +abstract: 쿠버네티스 생태계의 성장 +class: gridPage +cid: partners +--- + +
    +
    +
    쿠버네티스는 파트너와 협력하여 다양하게 보완하는 플랫폼을 지원하는 강력하고 활기찬 코드베이스를 만들어갑니다.
    +
    +
    +
    +
    + 공인 쿠버네티스 서비스 공급자(Kubernetes Certified Service Providers, KCSP) +
    +
    기업들이 쿠버네티스를 성공적으로 채택하도록 도와주는 풍부한 경험을 가진 노련한 서비스 공급자입니다. +


    + +

    KCSP에 관심이 있으신가요? +
    +
    +
    +
    +
    + 공인 쿠버네티스 배포, 호스트된 플랫폼 그리고 설치 프로그램 +
    소프트웨어 적합성은 모든 벤더의 쿠버네티스 버전이 필요한 API를 지원하도록 보장합니다. +


    + +

    공인 쿠버네티스에 관심이 있으신가요? +
    +
    +
    +
    +
    쿠버네티스 교육 파트너(Kubernetes Training Partners, KTP)
    +
    클라우드 네이티브 기술 교육 경험이 풍부하고 노련한 교육 공급자입니다. +



    + +

    KTP에 관심이 있으신가요? +
    +
    +
    + + + +
    + + +
    + +
    +
    + + + + From ace502b542fab812bc3ec1a1e423e293bd30ca99 Mon Sep 17 00:00:00 2001 From: Jordan Liggitt Date: Thu, 9 Apr 2020 13:19:41 -0400 Subject: [PATCH 092/105] Make feature-state tag usage consistent --- .../configuration/pod-priority-preemption.md | 4 ++-- .../concepts/configuration/resource-bin-packing.md | 2 +- .../concepts/configuration/taint-and-toleration.md | 2 +- content/en/docs/concepts/policy/resource-quotas.md | 2 +- .../concepts/scheduling/scheduler-perf-tuning.md | 2 +- .../docs/concepts/scheduling/scheduling-framework.md | 2 +- content/en/docs/concepts/storage/storage-classes.md | 4 ++-- content/en/docs/concepts/storage/volume-snapshots.md | 2 +- .../tools/kubeadm/control-plane-flags.md | 2 +- .../tools/kubeadm/kubelet-integration.md | 2 +- .../custom-resource-definition-versioning.md | 2 +- .../custom-resources/custom-resource-definitions.md | 12 ++++++------ .../developing-cloud-controller-manager.md | 2 +- .../administer-cluster/highly-available-master.md | 2 +- content/en/docs/tasks/manage-gpus/scheduling-gpus.md | 2 +- .../run-application/horizontal-pod-autoscale.md | 2 +- 16 files changed, 23 insertions(+), 23 deletions(-) diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md index 3e47d1f0b7..e3ec1ee12f 100644 --- a/content/en/docs/concepts/configuration/pod-priority-preemption.md +++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md @@ -9,7 +9,7 @@ weight: 70 {{% capture overview %}} -{{< feature-state for_k8s_version="1.14" state="stable" >}} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} [Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the @@ -145,7 +145,7 @@ description: "This priority class should be used for XYZ service pods only." ## Non-preempting PriorityClass {#non-preempting-priority-class} -{{< feature-state for_k8s_version="1.15" state="alpha" >}} +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} Pods with `PreemptionPolicy: Never` will be placed in the scheduling queue ahead of lower-priority pods, diff --git a/content/en/docs/concepts/configuration/resource-bin-packing.md b/content/en/docs/concepts/configuration/resource-bin-packing.md index 207460d7ad..eac4a37655 100644 --- a/content/en/docs/concepts/configuration/resource-bin-packing.md +++ b/content/en/docs/concepts/configuration/resource-bin-packing.md @@ -10,7 +10,7 @@ weight: 10 {{% capture overview %}} -{{< feature-state for_k8s_version="1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} The kube-scheduler can be configured to enable bin packing of resources along with extended resources using `RequestedToCapacityRatioResourceAllocation` priority function. Priority functions can be used to fine-tune the kube-scheduler as per custom needs. diff --git a/content/en/docs/concepts/configuration/taint-and-toleration.md b/content/en/docs/concepts/configuration/taint-and-toleration.md index 2026390eff..0ee6d63f21 100644 --- a/content/en/docs/concepts/configuration/taint-and-toleration.md +++ b/content/en/docs/concepts/configuration/taint-and-toleration.md @@ -202,7 +202,7 @@ when there are node problems, which is described in the next section. 
## Taint based Evictions -{{< feature-state for_k8s_version="1.18" state="stable" >}} +{{< feature-state for_k8s_version="v1.18" state="stable" >}} Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already running on the node as follows diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index d48c2db88a..8ae3111323 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -197,7 +197,7 @@ The `Terminating`, `NotTerminating`, and `NotBestEffort` scopes restrict a quota ### Resource Quota Per PriorityClass -{{< feature-state for_k8s_version="1.12" state="beta" >}} +{{< feature-state for_k8s_version="v1.12" state="beta" >}} Pods can be created at a specific [priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority). You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector` diff --git a/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md index cfa5a4521c..245963ac88 100644 --- a/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md +++ b/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md @@ -8,7 +8,7 @@ weight: 70 {{% capture overview %}} -{{< feature-state for_k8s_version="1.14" state="beta" >}} +{{< feature-state for_k8s_version="v1.14" state="beta" >}} [kube-scheduler](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler) is the Kubernetes default scheduler. It is responsible for placement of Pods diff --git a/content/en/docs/concepts/scheduling/scheduling-framework.md b/content/en/docs/concepts/scheduling/scheduling-framework.md index ddc2225cac..0d4c633391 100644 --- a/content/en/docs/concepts/scheduling/scheduling-framework.md +++ b/content/en/docs/concepts/scheduling/scheduling-framework.md @@ -8,7 +8,7 @@ weight: 60 {{% capture overview %}} -{{< feature-state for_k8s_version="1.15" state="alpha" >}} +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} The scheduling framework is a pluggable architecture for Kubernetes Scheduler that makes scheduler customizations easy. It adds a new set of "plugin" APIs to diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index e842165763..ab1233e09c 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -185,7 +185,7 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent * All of the above * [Local](#local) -{{< feature-state state="stable" for_k8s_version="1.17" >}} +{{< feature-state state="stable" for_k8s_version="v1.17" >}} [CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver to see its supported topology keys and examples. @@ -410,7 +410,7 @@ parameters: round-robin-ed across all active zones where Kubernetes cluster has a node. {{< note >}} -{{< feature-state state="deprecated" for_k8s_version="1.11" >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} This internal provisioner of OpenStack is deprecated. Please use [the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack). 
{{< /note >}} diff --git a/content/en/docs/concepts/storage/volume-snapshots.md b/content/en/docs/concepts/storage/volume-snapshots.md index d29f5b52bf..dc8d5749d9 100644 --- a/content/en/docs/concepts/storage/volume-snapshots.md +++ b/content/en/docs/concepts/storage/volume-snapshots.md @@ -13,7 +13,7 @@ weight: 20 {{% capture overview %}} -{{< feature-state for_k8s_version="1.17" state="beta" >}} +{{< feature-state for_k8s_version="v1.17" state="beta" >}} In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). {{% /capture %}} diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index af58f4f5a3..e2ae7267bc 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -8,7 +8,7 @@ weight: 40 {{% capture overview %}} -{{< feature-state for_k8s_version="1.12" state="stable" >}} +{{< feature-state for_k8s_version="v1.12" state="stable" >}} The kubeadm `ClusterConfiguration` object exposes the field `extraArgs` that can override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler. The components are defined using the following fields: diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index d6e421b2ba..641d349440 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -8,7 +8,7 @@ weight: 80 {{% capture overview %}} -{{< feature-state for_k8s_version="1.11" state="stable" >}} +{{< feature-state for_k8s_version="v1.11" state="stable" >}} The lifecycle of the kubeadm CLI tool is decoupled from the [kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index 184e870fc3..ec35dd88e8 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -275,7 +275,7 @@ the version. ## Webhook conversion -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} {{< note >}} Webhook conversion is available as beta since 1.15, and as alpha since Kubernetes 1.13. 
The diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md index dd96f2d6d6..ddcf7d4875 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md @@ -243,7 +243,7 @@ If you later recreate the same CustomResourceDefinition, it will start out empty ## Specifying a structural schema -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} CustomResources traditionally store arbitrary JSON (next to `apiVersion`, `kind` and `metadata`, which is validated by the API server implicitly). With [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) a schema can be specified, which is validated during creation and updates, compare below for details and limits of such a schema. @@ -364,7 +364,7 @@ Structural schemas are a requirement for `apiextensions.k8s.io/v1`, and disables ### Pruning versus preserving unknown fields -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} CustomResourceDefinitions traditionally store any (possibly validated) JSON as is in etcd. This means that unspecified fields (if there is a [OpenAPI v3.0 validation schema](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) at all) are persisted. This is in contrast to native Kubernetes resources such as a pod where unknown fields are dropped before being persisted to etcd. We call this "pruning" of unknown fields. @@ -604,7 +604,7 @@ meaning all finalizers have been executed. ### Validation -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} Validation of custom objects is possible via [OpenAPI v3 schemas](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) or [validatingadmissionwebhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook). In `apiextensions.k8s.io/v1` schemas are required, in `apiextensions.k8s.io/v1beta1` they are optional. @@ -781,7 +781,7 @@ crontab "my-new-cron-object" created ### Defaulting -{{< feature-state state="stable" for_kubernetes_version="1.17" >}} +{{< feature-state state="stable" for_k8s_version="v1.17" >}} {{< note >}} To use defaulting, your CustomResourceDefinition must use API version `apiextensions.k8s.io/v1`. @@ -866,7 +866,7 @@ Default values for `metadata` fields of `x-kubernetes-embedded-resources: true` ### Publish Validation Schema in OpenAPI v2 -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} {{< note >}} OpenAPI v2 Publishing is available as beta since 1.15, and as alpha since 1.14. The @@ -1051,7 +1051,7 @@ The column's `format` controls the style used when `kubectl` prints the value. ### Subresources -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} Custom resources support `/status` and `/scale` subresources. 
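The hunk above ends on the note that custom resources support `/status` and `/scale` subresources. As a rough illustration only (this sketch is not part of the patch series, and the `crontabs.stable.example.com` resource, its group, and its field names are hypothetical), a CustomResourceDefinition using the `apiextensions.k8s.io/v1` API might enable both subresources like this:

```yaml
# Illustrative sketch, not part of the patch series.
# The resource name, group, and schema fields below are hypothetical.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
            status:
              type: object
              properties:
                replicas:
                  type: integer
                labelSelector:
                  type: string
      subresources:
        # Enables GET/PUT/PATCH on the /status endpoint for this resource.
        status: {}
        # Enables the /scale subresource so that `kubectl scale` and the
        # HorizontalPodAutoscaler can act on .spec.replicas.
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.labelSelector
```

With a definition along these lines applied, a command such as `kubectl scale crontab my-crontab --replicas=3` would update `.spec.replicas` through the `/scale` subresource rather than through the main resource endpoint.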
diff --git a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md index 0b79ef581c..fc1975bc82 100644 --- a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md +++ b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md @@ -14,7 +14,7 @@ In upcoming releases, Cloud Controller Manager will be the preferred way to integrate Kubernetes with any cloud. This will ensure cloud providers can develop their features independently from the core Kubernetes release cycles. -{{< feature-state for_k8s_version="1.8" state="alpha" >}} +{{< feature-state for_k8s_version="v1.8" state="alpha" >}} Before going into how to build your own cloud controller manager, some background on how it works under the hood is helpful. The cloud controller manager is code from `kube-controller-manager` utilizing Go interfaces to allow implementations from any cloud to be plugged in. Most of the scaffolding and generic controller implementations will be in core, but it will always exec out to the cloud interfaces it is provided, so long as the [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go#L42-L62) is satisfied. diff --git a/content/en/docs/tasks/administer-cluster/highly-available-master.md b/content/en/docs/tasks/administer-cluster/highly-available-master.md index 1a0a36d875..e5529da7c7 100644 --- a/content/en/docs/tasks/administer-cluster/highly-available-master.md +++ b/content/en/docs/tasks/administer-cluster/highly-available-master.md @@ -7,7 +7,7 @@ content_template: templates/task {{% capture overview %}} -{{< feature-state for_k8s_version="1.5" state="alpha" >}} +{{< feature-state for_k8s_version="v1.5" state="alpha" >}} You can replicate Kubernetes masters in `kube-up` or `kube-down` scripts for Google Compute Engine. This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE. diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 97500e6beb..4c0b9f9bc3 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -7,7 +7,7 @@ title: Schedule GPUs {{% capture overview %}} -{{< feature-state state="beta" for_k8s_version="1.10" >}} +{{< feature-state state="beta" for_k8s_version="v1.10" >}} Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index b5f7612d75..059efbab7e 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -75,7 +75,7 @@ metrics-server, which needs to be launched separately. See for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. {{< note >}} -{{< feature-state state="deprecated" for_k8s_version="1.11" >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} Fetching metrics from Heapster is deprecated as of Kubernetes 1.11. 
{{< /note >}} From 3366db71c77b541bc288fca963d36ead5c369ac7 Mon Sep 17 00:00:00 2001 From: Rob Scott Date: Thu, 9 Apr 2020 11:54:21 -0700 Subject: [PATCH 093/105] Adding quotes to examples in Ingress blog post --- ...s-to-the-Ingress-API-in-Kubernetes-1.18.md | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md index 0d78e3206b..6fdb291c5b 100644 --- a/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md +++ b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md @@ -24,17 +24,17 @@ The new concept of a path type allows you to specify how a path should be matche The Ingress resource was designed with simplicity in mind, providing a simple set of fields that would be applicable in all use cases. Over time, as use cases evolved, implementations began to rely on a long list of custom annotations for further configuration. The new `IngressClass` resource provides a way to replace some of those annotations. Each `IngressClass` specifies which controller should implement Ingresses of the class and can reference a custom resource with additional parameters. -``` -apiVersion: networking.k8s.io/v1beta1 -kind: IngressClass +```yaml +apiVersion: "networking.k8s.io/v1beta1" +kind: "IngressClass" metadata: - name: external-lb + name: "external-lb" spec: - controller: example.com/ingress-controller + controller: "example.com/ingress-controller" parameters: - apiGroup: k8s.example.com/v1alpha - kind: IngressParameters - name: external-lb + apiGroup: "k8s.example.com/v1alpha" + kind: "IngressParameters" + name: "external-lb" ``` ### Specifying the Class of an Ingress @@ -60,21 +60,21 @@ Many Ingress providers have supported wildcard hostname matching like `*.foo.com ### Putting it All Together These new Ingress features allow for much more configurability. 
Here’s an example of an Ingress that makes use of pathType, `ingressClassName`, and a hostname wildcard: -``` -apiVersion: networking.k8s.io/v1beta1 -kind: Ingress +```yaml +apiVersion: "networking.k8s.io/v1beta1" +kind: "Ingress" metadata: - name: example-ingress + name: "example-ingress" spec: - ingressClassName: external-lb + ingressClassName: "external-lb" rules: - - host: *.example.com + - host: "*.example.com" http: paths: - - path: /example - pathType: Prefix + - path: "/example" + pathType: "Prefix" backend: - serviceName: example-service + serviceName: "example-service" servicePort: 80 ``` From cb5a3cd3f20c21ca08ec6c521694797869f3b608 Mon Sep 17 00:00:00 2001 From: tanjunchen Date: Fri, 10 Apr 2020 10:50:57 +0800 Subject: [PATCH 094/105] replace zh to /zh in content/zh/docs/tutorials/ directory --- content/zh/docs/tutorials/_index.md | 56 +-- .../zh/docs/tutorials/clusters/apparmor.md | 8 +- .../configure-redis-using-configmap.md | 32 +- content/zh/docs/tutorials/hello-minikube.md | 34 +- .../zh/docs/tutorials/services/source-ip.md | 16 +- .../basic-stateful-set.md | 360 +++++++++--------- .../stateful-application/cassandra.md | 22 +- .../mysql-wordpress-persistent-volume.md | 90 ++--- .../stateful-application/zookeeper.md | 50 +-- .../expose-external-ip-address.md | 32 +- .../stateless-application/guestbook.md | 56 +-- 11 files changed, 374 insertions(+), 382 deletions(-) diff --git a/content/zh/docs/tutorials/_index.md b/content/zh/docs/tutorials/_index.md index 4cd5bb8772..ea43094cbf 100644 --- a/content/zh/docs/tutorials/_index.md +++ b/content/zh/docs/tutorials/_index.md @@ -16,17 +16,17 @@ content_template: templates/concept {{% capture overview %}} -Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](zh/docs/tasks/)更大的目标。 +Kubernetes 文档的这一部分包含教程。一个教程展示了如何完成一个比单个[任务](/zh/docs/tasks/)更大的目标。 通常一个教程有几个部分,每个部分都有一系列步骤。在浏览每个教程之前, -您可能希望将[标准化术语表](zh/docs/reference/glossary/)页面添加到书签,供以后参考。 +您可能希望将[标准化术语表](/zh/docs/reference/glossary/)页面添加到书签,供以后参考。 {{% /capture %}} @@ -39,10 +39,10 @@ Before walking through each tutorial, you may want to bookmark the ## Basics --> -* [Kubernetes 基础知识](zh/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 +* [Kubernetes 基础知识](/zh/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 * [使用 Kubernetes (Udacity) 的可伸缩微服务](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) @@ -57,10 +57,10 @@ Before walking through each tutorial, you may want to bookmark the * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) --> -* [你好 Minikube](zh/docs/tutorials/hello-minikube/) +* [你好 Minikube](/zh/docs/tutorials/hello-minikube/) ## 配置 @@ -69,10 +69,10 @@ Before walking through each tutorial, you may want to bookmark the ## Configuration --> -* [使用一个 ConfigMap 配置 Redis](zh/docs/tutorials/configuration/configure-redis-using-configmap/) +* [使用一个 ConfigMap 配置 Redis](/zh/docs/tutorials/configuration/configure-redis-using-configmap/) ## 无状态应用程序 @@ -81,16 +81,16 @@ Before walking through each tutorial, you may want to bookmark the ## Stateless Applications --> -* [公开外部 IP 地址访问集群中的应用程序](zh/docs/tutorials/stateless-application/expose-external-ip-address/) +* [公开外部 IP 地址访问集群中的应用程序](/zh/docs/tutorials/stateless-application/expose-external-ip-address/) -* [示例:使用 Redis 部署 PHP 留言板应用程序](zh/docs/tutorials/stateless-application/guestbook/) +* [示例:使用 Redis 部署 PHP 留言板应用程序](/zh/docs/tutorials/stateless-application/guestbook/) ## 
有状态应用程序 @@ -99,28 +99,28 @@ Before walking through each tutorial, you may want to bookmark the ## Stateful Applications --> -* [StatefulSet 基础](zh/docs/tutorials/stateful-application/basic-stateful-set/) +* [StatefulSet 基础](/zh/docs/tutorials/stateful-application/basic-stateful-set/) -* [示例:WordPress 和 MySQL 使用持久卷](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) +* [示例:WordPress 和 MySQL 使用持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) -* [示例:使用有状态集部署 Cassandra](zh/docs/tutorials/stateful-application/cassandra/) +* [示例:使用有状态集部署 Cassandra](/zh/docs/tutorials/stateful-application/cassandra/) -* [运行 ZooKeeper,CP 分布式系统](zh/docs/tutorials/stateful-application/zookeeper/) +* [运行 ZooKeeper,CP 分布式系统](/zh/docs/tutorials/stateful-application/zookeeper/) ## CI/CD 管道 @@ -155,33 +155,33 @@ Before walking through each tutorial, you may want to bookmark the ## 集群 -* [AppArmor](zh/docs/tutorials/clusters/apparmor/) +* [AppArmor](/zh/docs/tutorials/clusters/apparmor/) ## 服务 -* [使用源 IP](zh/docs/tutorials/services/source-ip/) +* [使用源 IP](/zh/docs/tutorials/services/source-ip/) {{% /capture %}} {{% capture whatsnext %}} -如果您想编写教程,请参阅[使用页面模板](zh/docs/home/contribute/page-templates/) +如果您想编写教程,请参阅[使用页面模板](/zh/docs/home/contribute/page-templates/) 以获取有关教程页面类型和教程模板的信息。 diff --git a/content/zh/docs/tutorials/clusters/apparmor.md b/content/zh/docs/tutorials/clusters/apparmor.md index e44f1247cb..204ce7964e 100644 --- a/content/zh/docs/tutorials/clusters/apparmor.md +++ b/content/zh/docs/tutorials/clusters/apparmor.md @@ -88,7 +88,7 @@ Apparmor 是一个 Linux 内核安全模块,它补充了标准的基于 Linux kernel, including patches that add additional hooks and features. Kubernetes has only been tested with the upstream version, and does not promise support for other features. {{< /note >}} --> -2. AppArmor 内核模块已启用 -- 要使 Linux 内核强制执行 AppArmor 配置文件,必须安装并且启动 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查 +2. AppArmor 内核模块已启用 -- 要使 Linux 内核强制执行 AppArmor 配置文件,必须安装并且启动 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查 `/sys/module/apparmor/parameters/enabled` 文件: ```shell cat /sys/module/apparmor/parameters/enabled @@ -416,7 +416,7 @@ Events: nodes. There are lots of ways to setup the profiles though, such as: --> Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载到节点上。有很多方法可以设置配置文件,例如: - 调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/zh/docs/concepts/configuration/assign pod node/)确保 Pod 在具有所需配置文件的节点上运行。 @@ -525,7 +525,7 @@ logs or through `journalctl`. 
More information is provided in 想要调试 AppArmor 的问题,您可以检查系统日志,查看具体拒绝了什么。AppArmor 将详细消息记录到 `dmesg` ,错误通常可以在系统日志中或通过 `journalctl` 找到。更多详细信息见[AppArmor 失败](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures)。 -## API 参考 +## API 参考 ### Pod 注释 diff --git a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md index 614c7021d8..8cc92c076e 100644 --- a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -9,7 +9,7 @@ content_template: templates/tutorial {{% capture overview %}} 这篇文档基于[使用 ConfigMap 来配置 Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。 @@ -40,7 +40,7 @@ This page provides a real world example of how to configure Redis using a Config * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * 此页面上显示的示例适用于 `kubectl` 1.14和在其以上的版本。 * 理解[使用ConfigMap来配置Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 @@ -77,8 +77,8 @@ configMapGenerator: EOF ``` - 将 pod 的资源配置添加到 `kustomization.yaml` 文件中: @@ -93,8 +93,8 @@ resources: EOF ``` - 应用整个 kustomization 文件夹以创建 ConfigMap 和 Pod 对象: @@ -102,8 +102,8 @@ Apply the kustomization directory to create both the ConfigMap and Pod objects: kubectl apply -k . ``` - 使用以下命令检查创建的对象 @@ -116,20 +116,20 @@ NAME READY STATUS RESTARTS AGE pod/redis 1/1 Running 0 52s ``` - 在示例中,配置卷挂载在 `/redis-master` 下。 它使用 `path` 将 `redis-config` 密钥添加到名为 `redis.conf` 的文件中。 因此,redis配置的文件路径为 `/redis-master/redis.conf`。 这是镜像将在其中查找 redis master 的配置文件的位置。 - 使用 `kubectl exec` 进入 pod 并运行 `redis-cli` 工具来验证配置已正确应用: @@ -143,8 +143,8 @@ kubectl exec -it redis redis-cli 2) "allkeys-lru" ``` - 删除创建的 pod: ```shell @@ -155,8 +155,8 @@ kubectl delete pod redis {{% capture whatsnext %}} - * 了解有关 [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。 diff --git a/content/zh/docs/tutorials/hello-minikube.md b/content/zh/docs/tutorials/hello-minikube.md index 1086f9a334..bafa1340f2 100644 --- a/content/zh/docs/tutorials/hello-minikube.md +++ b/content/zh/docs/tutorials/hello-minikube.md @@ -33,16 +33,16 @@ card: -本教程向您展示如何使用 [Minikube](zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 +本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 {{< note >}} -如果您已在本地安装 [Minikube](zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 +如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}} @@ -117,17 +117,17 @@ For more information on the `docker build` command, read the [Docker documentati ## Create a Deployment -A Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) is a group of one or more Containers, +A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. A Kubernetes -[*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) checks on the health of your +[*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your Pod and restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. 
--> ## 创建 Deployment -Kubernetes [*Pod*](zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 +Kubernetes [*Pod*](/zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](/zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 - {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](zh/docs/user-guide/kubectl-overview/)。{{< /note >}} + {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](/zh/docs/user-guide/kubectl-overview/)。{{< /note >}} ## 创建 Service -默认情况下,Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](zh/docs/concepts/services-networking/service/)。 +默认情况下,Pod 只能通过 Kubernetes 集群中的内部 IP 地址访问。要使得 `hello-node` 容器可以从 Kubernetes 虚拟网络的外部访问,您必须将 Pod 暴露为 Kubernetes [*Service*](/zh/docs/concepts/services-networking/service/)。 -* 进一步了解 [Deployment 对象](zh/docs/concepts/workloads/controllers/deployment/)。 -* 学习更多关于 [部署应用](zh/docs/tasks/run-application/run-stateless-application-deployment/)。 -* 学习更多关于 [Service 对象](zh/docs/concepts/services-networking/service/)。 +* 进一步了解 [Deployment 对象](/zh/docs/concepts/workloads/controllers/deployment/)。 +* 学习更多关于 [部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)。 +* 学习更多关于 [Service 对象](/zh/docs/concepts/services-networking/service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/services/source-ip.md b/content/zh/docs/tutorials/services/source-ip.md index 5e1cb34c47..098f5d760f 100644 --- a/content/zh/docs/tutorials/services/source-ip.md +++ b/content/zh/docs/tutorials/services/source-ip.md @@ -24,8 +24,8 @@ Kubernetes 集群中运行的应用通过 Service 抽象来互相查找、通信 * [NAT](https://en.wikipedia.org/wiki/Network_address_translation): 网络地址转换 * [Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT): 替换数据包的源 IP, 通常为节点的 IP * [Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT): 替换数据包的目的 IP, 通常为 Pod 的 IP -* [VIP](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP -* [Kube-proxy](zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理 +* [VIP](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP +* [Kube-proxy](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理 ## 准备工作 @@ -59,7 +59,7 @@ deployment.apps/source-ip-app created ## Type=ClusterIP 类型 Services 的 Source IP -如果你的 kube-proxy 运行在 [iptables 模式](zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT,这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。 +如果你的 kube-proxy 运行在 [iptables 模式](/zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT,这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。 ```console kubectl get nodes @@ -136,7 +136,7 @@ command=GET ## Type=NodePort 类型 Services 的 Source IP -从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试: +从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](/zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 
NAT。你可以通过创建一个 `NodePort` Service 来进行测试: ```console kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort @@ -189,7 +189,7 @@ client_address=10.240.0.3 ``` -为了防止这种情况发生,Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints,发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下,你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。 +为了防止这种情况发生,Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints,发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下,你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。 设置 `service.spec.externalTrafficPolicy` 字段如下: @@ -244,7 +244,7 @@ client_address=104.132.1.79 ## Type=LoadBalancer 类型 Services 的 Source IP -从Kubernetes1.5开始,发送给类型为 [Type=LoadBalancer](zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT,这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP(如前面章节所述)。 +从Kubernetes1.5开始,发送给类型为 [Type=LoadBalancer](/zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT,这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP(如前面章节所述)。 你可以通过在一个 loadbalancer 上暴露这个 source-ip-app 来进行测试。 @@ -390,6 +390,6 @@ $ kubectl delete deployment source-ip-app {{% capture whatsnext %}} -* 学习更多关于 [通过 services 连接应用](zh/docs/concepts/services-networking/connect-applications-service/) -* 学习更多关于 [负载均衡](zh/docs/user-guide/load-balancer) +* 学习更多关于 [通过 services 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/) +* 学习更多关于 [负载均衡](/zh/docs/user-guide/load-balancer) {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md index 956537c88e..e48f5f2528 100644 --- a/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md @@ -13,37 +13,37 @@ approvers: {{% capture overview %}} - -本教程介绍如何了使用 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 +本教程介绍如何了使用 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 {{% /capture %}} {{% capture prerequisites %}} - 在开始本教程之前,你应该熟悉以下 Kubernetes 的概念: -* [Pods](zh/docs/user-guide/pods/single-container/) -* [Cluster DNS](zh/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](zh/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](zh/docs/concepts/storage/persistent-volumes/) +* [Pods](/zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) -* [StatefulSets](zh/docs/concepts/workloads/controllers/statefulset/) -* 
[kubectl CLI](zh/docs/user-guide/kubectl/) +* [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/) +* [kubectl CLI](/zh/docs/user-guide/kubectl/) - @@ -53,11 +53,11 @@ tutorial. {{% capture objectives %}} - 下载上面的例子并保存为文件 `web.yaml`。 -你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 +你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 ```shell kubectl get pods -w -l app=nginx ``` -在另一个终端中,使用 [`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 +在另一个终端中,使用 [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 ```shell kubectl apply -f web.yaml @@ -123,8 +123,8 @@ statefulset.apps/web created ``` @@ -144,9 +144,9 @@ web 2 1 20s ### Ordered Pod Creation -For a StatefulSet with N replicas, when Pods are being deployed, they are -created sequentially, in order from {0..N-1}. Examine the output of the -`kubectl get` command in the first terminal. Eventually, the output will +For a StatefulSet with N replicas, when Pods are being deployed, they are +created sequentially, in order from {0..N-1}. Examine the output of the +`kubectl get` command in the first terminal. Eventually, the output will look like the example below. --> @@ -165,14 +165,14 @@ web-0 1/1 Running 0 19s web-1 0/1 Pending 0 0s web-1 0/1 Pending 0 0s web-1 0/1 ContainerCreating 0 0s -web-1 1/1 Running 0 18s +web-1 1/1 Running 0 18s ``` -请注意在 `web-0` Pod 处于 [Running和Ready](zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 +请注意在 `web-0` Pod 处于 [Running和Ready](/zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 -如同 [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 +如同 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 ### 使用稳定的网络身份标识 -每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 +每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](/zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 ```shell for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done @@ -231,17 +231,17 @@ web-0 web-1 ``` - -使用 [`kubectl run`](zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 +使用 [`kubectl run`](/zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 ```shell -kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm +kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm nslookup web-0.nginx Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -258,11 +258,11 @@ Address 1: 10.244.2.6 ``` headless service 的 CNAME 指向 SRV 记录(记录每个 Running 和 Ready 状态的 Pod)。SRV 记录指向一个包含 Pod IP 地址的记录表项。 @@ -274,11 +274,11 @@ kubectl get pod -w -l app=nginx ``` -在另一个终端中使用 [`kubectl 
delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 +在另一个终端中使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 ```shell kubectl delete pod -l app=nginx @@ -287,7 +287,7 @@ pod "web-1" deleted ``` @@ -306,7 +306,7 @@ web-1 1/1 Running 0 34s ``` @@ -317,7 +317,7 @@ for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done web-0 web-1 -kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh +kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh nslookup web-0.nginx Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -333,23 +333,23 @@ Name: web-1.nginx Address 1: 10.244.2.8 ``` @@ -381,20 +381,20 @@ www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO ``` -StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 +StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](/zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 NGINX web 服务器默认会加载位于 `/usr/share/nginx/html/index.html` 的 index 文件。StatefulSets `spec` 中的 `volumeMounts` 字段保证了 `/usr/share/nginx/html` 文件夹由一个 PersistentVolume 支持。 @@ -451,7 +451,7 @@ pod "web-0" deleted pod "web-1" deleted ``` @@ -482,17 +482,17 @@ web-1 ``` 在另一个终端窗口使用 `kubectl scale` 扩展副本数为 5。 @@ -527,7 +527,7 @@ kubectl scale sts web --replicas=5 statefulset.apps/web scaled ``` @@ -556,8 +556,8 @@ web-4 1/1 Running 0 19s 在另一个终端使用 `kubectl patch` 将 StatefulSet 缩容回三个副本。 @@ -614,11 +614,11 @@ web-3 1/1 Terminating 0 42s ### 顺序终止 Pod @@ -641,16 +641,16 @@ www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO ``` @@ -740,15 +740,15 @@ web-0 1/1 Running 0 10s ``` @@ -1026,22 +1026,22 @@ k8s.gcr.io/nginx-slim:0.7 ``` -使用 [`kubectl delete`](zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 +使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 ```shell kubectl delete statefulset web --cascade=false @@ -1141,7 +1141,7 @@ kubectl get pods -w -l app=nginx 在另一个终端里重新创建 StatefulSet。请注意,除非你删除了 `nginx` Service (你不应该这样做),你将会看到一个错误,提示 Service 已经存在。 @@ -1154,7 +1154,7 @@ service/nginx unchanged ``` @@ -1181,14 +1181,14 @@ web-2 0/1 Terminating 0 3m ``` @@ -1204,10 +1204,10 @@ web-1 ``` @@ -1260,12 +1260,12 @@ web-1 0/1 Terminating 0 29m ``` @@ -1294,7 +1294,7 @@ statefulset.apps/web created ``` @@ -1307,8 +1307,8 @@ web-1 ``` @@ -1371,7 +1371,7 @@ Pod. 
@@ -1453,7 +1453,7 @@ web-3 1/1 Running 0 26s ``` @@ -1522,8 +1522,8 @@ kubectl delete svc nginx diff --git a/content/zh/docs/tutorials/stateful-application/cassandra.md b/content/zh/docs/tutorials/stateful-application/cassandra.md index 78cbf27c5d..98d0d6ab44 100644 --- a/content/zh/docs/tutorials/stateful-application/cassandra.md +++ b/content/zh/docs/tutorials/stateful-application/cassandra.md @@ -28,18 +28,18 @@ title: "Example: Deploying Cassandra with Stateful Sets" 本示例也使用了Kubernetes的一些核心组件: -- [_Pods_](zh/docs/user-guide/pods) -- [ _Services_](zh/docs/user-guide/services) -- [_Replication Controllers_](zh/docs/user-guide/replication-controller) -- [_Stateful Sets_](zh/docs/concepts/workloads/controllers/statefulset/) -- [_Daemon Sets_](zh/docs/admin/daemons) +- [_Pods_](/zh/docs/user-guide/pods) +- [ _Services_](/zh/docs/user-guide/services) +- [_Replication Controllers_](/zh/docs/user-guide/replication-controller) +- [_Stateful Sets_](/zh/docs/concepts/workloads/controllers/statefulset/) +- [_Daemon Sets_](/zh/docs/admin/daemons) ## 准备工作 -本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](zh/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](zh/docs/getting-started-guides/) 获取关于你的平台的安装说明。 +本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](/zh/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](/zh/docs/getting-started-guides/) 获取关于你的平台的安装说明。 本示例还需要一些代码和配置文件。为了避免手动输入,你可以 `git clone` Kubernetes 源到你本地。 @@ -133,7 +133,7 @@ kubectl delete daemonset cassandra ## 步骤 1:创建 Cassandra Headless Service -Kubernetes _[Service](zh/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](zh/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 +Kubernetes _[Service](/zh/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](/zh/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 这个 Service 用于在 Kubernetes 集群内部进行 Cassandra 客户端和 Cassandra Pod 之间的 DNS 查找。 @@ -354,7 +354,7 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces' system_traces system_schema system_auth system system_distributed ``` -你需要使用 `kubectl edit` 来增加或减小 Cassandra StatefulSet 的大小。你可以在[文档](zh/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 +你需要使用 `kubectl edit` 来增加或减小 Cassandra StatefulSet 的大小。你可以在[文档](/zh/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 使用以下命令编辑 StatefulSet。 @@ -429,7 +429,7 @@ $ grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodS ## 步骤 5:使用 Replication Controller 创建 Cassandra 节点 pod -Kubernetes _[Replication Controller](zh/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 +Kubernetes _[Replication Controller](/zh/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 和我们刚才定义的 Service 一起,Replication Controller 能够让我们轻松的构建一个复制的、可扩展的 Cassandra 集群。 @@ -639,7 +639,7 @@ $ kubectl delete rc cassandra ## 步骤 8:使用 DaemonSet 替换 Replication Controller -在 Kubernetes中,[_DaemonSet_](zh/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 _ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 +在 Kubernetes中,[_DaemonSet_](/zh/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 
_ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 示范用例:当部署到云平台时,预期情况是实例是短暂的并且随时可能终止。Cassandra 被搭建成为在各个节点间复制数据以便于实现数据冗余。这样的话,即使一个实例终止了,存储在它上面的数据却没有,并且集群会通过重新复制数据到其它运行节点来作为响应。 @@ -802,6 +802,6 @@ $ kubectl delete daemonset cassandra 查看本示例的 [image](https://github.com/kubernetes/examples/tree/master/cassandra/image) 目录,了解如何构建容器的 docker 镜像及其内容。 -你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](zh/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 +你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](/zh/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 [!Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cassandra/README.md?pixel)]() diff --git a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index df29be10c9..d05f07d135 100644 --- a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -12,23 +12,23 @@ card: {{% capture overview %}} - 本示例描述了如何通过 Minikube 在 Kubernetes 上安装 WordPress 和 MySQL。这两个应用都使用 PersistentVolumes 和 PersistentVolumeClaims 保存数据。 -[PersistentVolume](zh/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 [StorageClass](zh/docs/concepts/storage/storage-classes) 动态创建的存储。 -[PersistentVolumeClaim](zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 +[PersistentVolume](/zh/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 [StorageClass](/zh/docs/concepts/storage/storage-classes) 动态创建的存储。 +[PersistentVolumeClaim](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 {{< warning >}} deployment 在生产场景中并不适合,它使用单实例 WordPress 和 MySQL Pods。考虑使用 [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) 在生产场景中部署 WordPress。 @@ -53,7 +53,7 @@ deployment 在生产场景中并不适合,它使用单实例 WordPress 和 MyS * MySQL resource configs * WordPress resource configs * Apply the kustomization directory by `kubectl apply -k ./` -* Clean up +* Clean up --> * 创建 PersistentVolumeClaims 和 PersistentVolumes @@ -77,7 +77,7 @@ Download the following configuration files: 1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml) -1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) +1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) --> 此例在`kubectl` 1.14 或者更高版本有效。 @@ -86,14 +86,14 @@ Download the following configuration files: 1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml) -2. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) - +2. 
[wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) + {{% /capture %}} {{% capture lessoncontent %}} ## 创建 PersistentVolumeClaims 和 PersistentVolumes @@ -113,15 +113,15 @@ MySQL 和 Wordpress 都需要一个 PersistentVolume 来存储数据。他们的 {{< warning >}} 在本地群集中,默认的 StorageClass 使用`hostPath`供应器。 `hostPath`卷仅适用于开发和测试。使用 `hostPath` 卷,您的数据位于 Pod 调度到的节点上的`/tmp`中,并且不会在节点之间移动。如果 Pod 死亡并被调度到群集中的另一个节点,或者该节点重新启动,则数据将丢失。 {{< /warning >}} {{< note >}} - 如果要建立需要使用`hostPath`设置程序的集群,则必须在 controller-manager 组件中设置`--enable-hostpath-provisioner`标志。 @@ -133,25 +133,25 @@ If you are bringing up a cluster that needs to use the `hostPath` provisioner, t 如果你已经有运行在 Google Kubernetes Engine 的集群,请参考 [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk)。 {{< /note >}} - ## 创建 kustomization.yaml - ### 创建 Secret 生成器 -A [Secret](zh/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 +A [Secret](/zh/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 通过以下命令在`kustomization.yaml`中添加一个 Secret 生成器。您需要用您要使用的密码替换`YOUR_PASSWORD`。 @@ -164,13 +164,13 @@ secretGenerator: EOF ``` - ## 补充 MySQL 和 WordPress 的资源配置 - @@ -178,11 +178,11 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c {{< codenew file="application/wordpress/mysql-deployment.yaml" >}} - 以下 manifest 文件描述了单实例 WordPress 部署。WordPress 容器将网站数据文件位于`/var/www/html`的 PersistentVolume。`WORDPRESS_DB_HOST`环境变量集上面定义的 MySQL Service 的名称,WordPress 将通过 Service 访问数据库。`WORDPRESS_DB_PASSWORD`环境变量设置从 Secret kustomize 生成的数据库密码。 @@ -234,8 +234,8 @@ the name of the MySQL Service defined above, and WordPress will access the datab ``` - ## 应用和验证 @@ -348,7 +348,7 @@ kubectl apply -k ./ ``` 响应应如下所示: - + ```shell NAME TYPE DATA AGE mysql-pass-c57bb4t7mf Opaque 1 9s @@ -419,7 +419,7 @@ kubectl apply -k ./ ``` 6. 复制 IP 地址,然后将页面加载到浏览器中来查看您的站点。 - + 您应该看到类似于以下屏幕截图的 WordPress 设置页面。 ![wordpress-init](https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/WordPress.png) @@ -427,8 +427,8 @@ kubectl apply -k ./ {{% /capture %}} {{< warning >}} - 不要在此页面上保留 WordPress 安装。如果其他用户找到了它,他们可以在您的实例上建立一个网站并使用它来提供恶意内容。

    通过创建用户名和密码来安装 WordPress 或删除您的实例。 @@ -453,24 +453,16 @@ Do not leave your WordPress installation on this page. If another user finds it, {{% capture whatsnext %}} -* Learn more about [Introspection and Debugging](zh/docs/tasks/debug-application-cluster/debug-application-introspection/) -* Learn more about [Jobs](zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* Learn more about [Port Forwarding](zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) -* Learn how to [Get a Shell to a Container](zh/docs/tasks/debug-application-cluster/get-shell-running-container/) + -1. 运行以下命令以删除您的 Secret,Deployments,Services 和 PersistentVolumeClaims: - ```shell - kubectl delete -k ./ - ``` - -{{% /capture %}} - -{{% capture whatsnext %}} - -* 了解更多关于 [Introspection and Debugging](zh/docs/tasks/debug-application-cluster/debug-application-introspection/) -* 了解更多关于 [Jobs](zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* 了解更多关于 [Port Forwarding](zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) -* 了解如何 [Get a Shell to a Container](zh/docs/tasks/debug-application-cluster/get-shell-running-container/) +* 了解更多关于 [Introspection and Debugging](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/) +* 了解更多关于 [Jobs](/zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) +* 了解更多关于 [Port Forwarding](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +* 了解如何 [Get a Shell to a Container](/zh/docs/tasks/debug-application-cluster/get-shell-running-container/) {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md index 4d0475e629..2c5189ac4a 100644 --- a/content/zh/docs/tutorials/stateful-application/zookeeper.md +++ b/content/zh/docs/tutorials/stateful-application/zookeeper.md @@ -14,23 +14,23 @@ content_template: templates/tutorial {{% capture overview %}} -本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 +本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 {{% /capture %}} {{% capture prerequisites %}} 在开始本教程前,你应该熟悉以下 Kubernetes 概念。 -* [Pods](zh/docs/user-guide/pods/single-container/) -* [Cluster DNS](zh/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](zh/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](zh/docs/concepts/storage/volumes/) +* [Pods](/zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](/zh/docs/concepts/storage/volumes/) * [PersistentVolume Provisioning](http://releases.k8s.io/{{< param "githubbranch" >}}/examples/persistent-volume-provisioning/) -* [ConfigMaps](zh/docs/tasks/configure-pod-container/configure-pod-configmap/) -* [StatefulSets](zh/docs/concepts/abstractions/controllers/statefulsets/) -* [PodDisruptionBudgets](zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) -* 
[PodAntiAffinity](zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) -* [kubectl CLI](zh/docs/user-guide/kubectl) +* [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) +* [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) +* [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) +* [PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) +* [kubectl CLI](/zh/docs/user-guide/kubectl) @@ -69,14 +69,14 @@ ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被 下面的清单包含一个 -[Headless Service](zh/docs/concepts/services-networking/service/#headless-services), -一个 [Service](zh/docs/concepts/services-networking/service/), -一个 [PodDisruptionBudget](zh/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), -和一个 [StatefulSet](zh/docs/concepts/workloads/controllers/statefulset/)。 +[Headless Service](/zh/docs/concepts/services-networking/service/#headless-services), +一个 [Service](/zh/docs/concepts/services-networking/service/), +一个 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), +和一个 [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)。 {{< codenew file="application/zookeeper/zookeeper.yaml" >}} -打开一个命令行终端,使用 [`kubectl apply`](zh/docs/reference/generated/kubectl/kubectl-commands/#apply) +打开一个命令行终端,使用 [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply) 创建这个清单。 ```shell @@ -92,7 +92,7 @@ poddisruptionbudget.policy/zk-pdb created statefulset.apps/zk created ``` -使用 [`kubectl get`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 +使用 [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 ```shell kubectl get pods -w -l app=zk @@ -130,7 +130,7 @@ StatefulSet 控制器创建了3个 Pods,每个 Pod 包含一个 [ZooKeeper 3.4 由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置,以执行 leader 选举。Ensemble 中的每个服务都需要具有一个独一无二的标识符,所有的服务均需要知道标识符的全集,并且每个标志都需要和一个网络地址相关联。 -使用 [`kubectl exec`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 +使用 [`kubectl exec`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done @@ -184,7 +184,7 @@ zk-2.zk-headless.default.svc.cluster.local ``` -[Kubernetes DNS](zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 +[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。 @@ -320,7 +320,7 @@ numChildren = 0 如同在 [ZooKeeper 基础](#zookeeper-basics) 一节所提到的,ZooKeeper 提交所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化是一种常用的技术,对于普通的存储应用也是如此。 -使用 [`kubectl delete`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 +使用 [`kubectl delete`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 ```shell kubectl delete statefulset zk @@ -641,7 +641,7 @@ log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %- 这是在容器里安全记录日志的最简单的方法。由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流的应用日志不会耗尽本地存储媒介。 -使用 [`kubectl logs`](zh/docs/user-guide/kubectl/{{< 
param "version" >}}/#logs) 从一个 Pod 中取回最后几行日志。 +使用 [`kubectl logs`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#logs) 从一个 Pod 中取回最后几行日志。 ```shell kubectl logs zk-0 --tail 20 @@ -679,7 +679,7 @@ kubectl logs zk-0 --tail 20 ### 配置非特权用户 -在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 +在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 `zk` StatefulSet 的 Pod 的 `template` 包含了一个 SecurityContext。 @@ -736,7 +736,7 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ### 处理进程故障 -[Restart Policies](zh/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 +[Restart Policies](/zh/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 检查 `zk-0` Pod 中运行的 ZooKeeper 服务的进程树。 @@ -947,7 +947,7 @@ kubectl get nodes ``` -使用 [`kubectl cordon`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 +使用 [`kubectl cordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 ```shell kubectl cordon < node name > @@ -987,7 +987,7 @@ kubernetes-minion-group-i4c4 ``` -使用 [`kubectl drain`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 +使用 [`kubectl drain`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 ```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1102,7 +1102,7 @@ numChildren = 0 ``` -使用 [`kubectl uncordon`](zh/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 +使用 [`kubectl uncordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 ```shell kubectl uncordon kubernetes-minion-group-pb41 diff --git a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md index 16ae8471ef..371ba45f34 100644 --- a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -26,21 +26,21 @@ external IP address. {{% capture prerequisites %}} - * 安装 [kubectl](zh/docs/tasks/tools/install-kubectl/). + * 安装 [kubectl](/zh/docs/tasks/tools/install-kubectl/). * 使用 Google Kubernetes Engine 或 Amazon Web Services 等云供应商创建 Kubernetes 群集。 - 本教程创建了一个[外部负载均衡器](zh/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 + 本教程创建了一个[外部负载均衡器](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 * 配置 `kubectl` 与 Kubernetes API 服务器通信。有关说明,请参阅云供应商文档。 @@ -79,16 +79,16 @@ external IP address. 
- 前面的命令创建一个 [Deployment](zh/docs/concepts/workloads/controllers/deployment/) - 对象和一个关联的 [ReplicaSet](zh/docs/concepts/workloads/controllers/replicaset/)对象。 - ReplicaSet 有五个 [Pod](zh/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 + 前面的命令创建一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) + 对象和一个关联的 [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/)对象。 + ReplicaSet 有五个 [Pod](/zh/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 成功请求的响应是一条问候消息: @@ -249,9 +249,9 @@ the Hello World application, enter this command: -了解更多关于[将应用程序与服务连接](zh/docs/concepts/services-networking/connect-applications-service/)。 +了解更多关于[将应用程序与服务连接](/zh/docs/concepts/services-networking/connect-applications-service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateless-application/guestbook.md b/content/zh/docs/tutorials/stateless-application/guestbook.md index c970f91d7d..8eea7b7922 100644 --- a/content/zh/docs/tutorials/stateless-application/guestbook.md +++ b/content/zh/docs/tutorials/stateless-application/guestbook.md @@ -25,8 +25,8 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica 一个简单的多层 web 应用程序。本例由以下组件组成: @@ -143,14 +143,14 @@ Replace POD-NAME with the name of your Pod. ### 创建 Redis 主节点的服务 -留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](zh/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 +留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](/zh/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 {{< codenew file="application/guestbook/redis-master-service.yaml" >}} 1. 使用下面的 `redis-master-service.yaml` 文件创建 Redis 主节点的服务: @@ -181,7 +181,7 @@ The guestbook applications needs to communicate to the Redis master to write its {{< note >}} 这个清单文件创建了一个名为 `Redis-master` 的 Service,其中包含一组与前面定义的标签匹配的标签,因此服务将网络流量路由到 Redis 主节点 Pod 上。 @@ -205,12 +205,12 @@ Although the Redis master is a single pod, you can make it highly available to m ### 创建 Redis 从节点 Deployment Deployments 根据清单文件中设置的配置进行伸缩。在这种情况下,Deployment 对象指定两个副本。 如果没有任何副本正在运行,则此 Deployment 将启动容器集群上的两个副本。相反, 如果有两个以上的副本在运行,那么它的规模就会缩小,直到运行两个副本为止。 @@ -349,20 +349,20 @@ The guestbook application has a web frontend serving the HTTP requests written i ### 创建前端服务 应用的 `redis-slave` 和 `redis-master` 服务只能在容器集群中访问,因为服务的默认类型是 -[ClusterIP](zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 +[ClusterIP](/zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 如果您希望客人能够访问您的留言板,您必须将前端服务配置为外部可见的,以便客户机可以从容器集群之外请求服务。Minikube 只能通过 `NodePort` 公开服务。 {{< note >}} 一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine,支持外部负载均衡器。如果您的云提供商支持负载均衡器,并且您希望使用它, 只需删除或注释掉 `type: NodePort`,并取消注释 `type: LoadBalancer` 即可。 @@ -386,14 +386,14 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su 2. 查询服务列表以验证前端服务正在运行: ```shell - kubectl get services + kubectl get services ``` 响应应该与此类似: - + ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend ClusterIP 10.0.0.112 80:31323/TCP 6s @@ -472,7 +472,7 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y 2. 
复制外部 IP 地址,然后在浏览器中加载页面以查看留言板。 ## 扩展 Web 前端 @@ -501,7 +501,7 @@ Scaling up or down is easy because your servers are defined as a Service that us ``` 响应应该类似于这样: @@ -548,7 +548,7 @@ Scaling up or down is easy because your servers are defined as a Service that us redis-slave-2005841000-fpvqc 1/1 Running 0 1h redis-slave-2005841000-phfv9 1/1 Running 0 1h ``` - + {{% /capture %}} {{% capture cleanup %}} @@ -580,7 +580,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels deployment.apps "redis-slave" deleted service "redis-master" deleted service "redis-slave" deleted - deployment.apps "frontend" deleted + deployment.apps "frontend" deleted service "frontend" deleted ``` @@ -594,9 +594,9 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels ``` - 响应应该是: + 响应应该是: ``` No resources found. @@ -607,16 +607,16 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels {{% capture whatsnext %}} -* 完成 [Kubernetes Basics](zh/docs/tutorials/kubernetes-basics/) 交互式教程 -* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) -* 阅读更多关于[连接应用程序](zh/docs/concepts/services-networking/connect-applications-service/) -* 阅读更多关于[管理资源](zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) +* 完成 [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) 交互式教程 +* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) +* 阅读更多关于[连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读更多关于[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) {{% /capture %}} From d2a0440d02bbfa1ee06c6e827f0a42974a719dd6 Mon Sep 17 00:00:00 2001 From: Dominic Yin Date: Fri, 10 Apr 2020 09:46:31 +0800 Subject: [PATCH 095/105] update content/zh/docs/concepts/configuration/manage-compute-resources-container.md for release-1.18 --- .../manage-compute-resources-container.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/content/zh/docs/concepts/configuration/manage-compute-resources-container.md b/content/zh/docs/concepts/configuration/manage-compute-resources-container.md index c3e6b6af4d..e10ac8f7f8 100644 --- a/content/zh/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/zh/docs/concepts/configuration/manage-compute-resources-container.md @@ -275,16 +275,18 @@ resource limits, see the The resource usage of a Pod is reported as part of the Pod status. -If [optional monitoring](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) -is configured for your cluster, then Pod resource usage can be retrieved from -the monitoring system. +If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) +are available in your cluster, then Pod resource usage can be retrieved either +from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) +directly or from your monitoring tools. 
--> ## 监控计算资源使用 Pod 的资源使用情况被报告为 Pod 状态的一部分。 -如果为集群配置了 [可选监控](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/),则可以从监控系统检索 Pod 资源的使用情况。 +如果为集群配置了可选 [监控工具](/docs/tasks/debug-application-cluster/resource-usage-monitoring/),则可以直接从 +[指标 API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) 或者监控工具检索 Pod 资源的使用情况。 + +Указанное имя может иметь только один объект определённого типа. Но если вы удалите этот объект, вы можете создать новый с таким же именем diff --git a/content/ru/docs/reference/glossary/uid.md b/content/ru/docs/reference/glossary/uid.md new file mode 100755 index 0000000000..fe050683ba --- /dev/null +++ b/content/ru/docs/reference/glossary/uid.md @@ -0,0 +1,17 @@ +--- +title: UID +id: uid +date: 2018-04-12 +full_link: /docs/concepts/overview/working-with-objects/names +short_description: > + Уникальная строка, сгенерированная самим Kubernetes, для идентификации объектов. + +aka: +tags: +- fundamental +--- + Уникальная строка, сгенерированная самим Kubernetes, для идентификации объектов. + + + +У каждого объекта, созданного в течение всего периода работы кластера Kubernetes, есть собственный уникальный идентификатор (UID). Он предназначен для выяснения различий между событиями похожих сущностей. \ No newline at end of file diff --git a/content/ru/examples/application/deployment.yaml b/content/ru/examples/application/deployment.yaml new file mode 100644 index 0000000000..f7a4886e4e --- /dev/null +++ b/content/ru/examples/application/deployment.yaml @@ -0,0 +1,19 @@ +apiVersion: apps/v1 # до версии 1.9.0 нужно использовать apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 # запускает 2 пода, созданных по шаблону + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 From 7c839798a18dd16656c3405b816b4da2ab2b7ff8 Mon Sep 17 00:00:00 2001 From: Alexey Pyltsyn Date: Fri, 10 Apr 2020 13:33:13 +0300 Subject: [PATCH 101/105] Translate comments --- content/ru/docs/concepts/overview/kubernetes-api.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/ru/docs/concepts/overview/kubernetes-api.md b/content/ru/docs/concepts/overview/kubernetes-api.md index 5aea5818af..c3812c8469 100644 --- a/content/ru/docs/concepts/overview/kubernetes-api.md +++ b/content/ru/docs/concepts/overview/kubernetes-api.md @@ -40,8 +40,8 @@ Kubernetes как таковой состоит из множества комп Заголовок | Возможные значения ------ | --------------- -Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (the default content-type is `application/json` for `*/*` or not passing this header) -Accept-Encoding | `gzip` (not passing this header is acceptable) +Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (по умолчанию заголовок Content-Type установлен в `application/json` с `*/*`, допустимо также пропускать этот заголовок) +Accept-Encoding | `gzip` (можно не передавать этот заголовок) До версии 1.14 конечные точки с форматом (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`) предоставляли спецификацию OpenAPI в разных форматах. Эти конечные точки были объявлены устаревшими и удалены в Kubernetes 1.14. 
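
As a minimal sketch of how the header table above is used in practice: the OpenAPI spec is served from the API server's `/openapi/v2` endpoint, and the `Accept` / `Accept-Encoding` values in the table are ordinary HTTP request headers. The example assumes a reachable cluster and the default `kubectl proxy` port (8001); adjust both to your environment.

```shell
# Minimal sketch; assumes kubectl is configured for your cluster and that
# `kubectl proxy` listens on its default port, 8001.
kubectl proxy --port=8001 &

# JSON is the default format for the OpenAPI spec.
curl -H 'Accept: application/json' \
  http://localhost:8001/openapi/v2 -o openapi.json

# The same endpoint serves the protobuf encoding listed in the table,
# and it honors `Accept-Encoding: gzip` as well.
curl -H 'Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf' \
  http://localhost:8001/openapi/v2 -o openapi.pb
```

When no `Accept` header is sent, the response defaults to JSON, matching the table above.
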
From ba7708fa7adb458143a49ba0b1d0319993bcbe98 Mon Sep 17 00:00:00 2001 From: Tsahi Duek Date: Fri, 10 Apr 2020 14:41:52 +0300 Subject: [PATCH 102/105] Revert "Fix identations in Conventions paragraph" This reverts commit 69bb47153ae8592b93219396e92cfca4474336f2. --- .../pods/pod-topology-spread-constraints.md | 65 ++++++++++--------- 1 file changed, 33 insertions(+), 32 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 35a373473b..bb1db1907f 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te ### Enable Feature Gate -The `EvenPodsSpread` [feature gate] (/docs/reference/command-line-tools-reference/feature-gates/) +The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled for the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and** {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}. @@ -62,10 +62,10 @@ metadata: name: mypod spec: topologySpreadConstraints: - - maxSkew: - topologyKey: - whenUnsatisfiable: - labelSelector: + - maxSkew: + topologyKey: + whenUnsatisfiable: + labelSelector: ``` You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: @@ -73,8 +73,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s - **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero. - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - - `DoNotSchedule` (default) tells the scheduler not to schedule it. - - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. + - `DoNotSchedule` (default) tells the scheduler not to schedule it. + - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. @@ -160,29 +160,30 @@ There are some implicit conventions worth noting here: - Only the Pods holding the same namespace as the incoming Pod can be matching candidates. - Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that: - 1. 
the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". + + 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". + 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". - Be aware of what will happen if the incomingPod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels. - If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed. - Suppose you have a 5-node cluster ranging from zoneA to zoneC: + Suppose you have a 5-node cluster ranging from zoneA to zoneC: - ``` - +---------------+---------------+-------+ - | zoneA | zoneB | zoneC | - +-------+-------+-------+-------+-------+ - | node1 | node2 | node3 | node4 | node5 | - +-------+-------+-------+-------+-------+ - | P | P | P | | | - +-------+-------+-------+-------+-------+ - ``` + ``` + +---------------+---------------+-------+ + | zoneA | zoneB | zoneC | + +-------+-------+-------+-------+-------+ + | node1 | node2 | node3 | node4 | node5 | + +-------+-------+-------+-------+-------+ + | P | P | P | | | + +-------+-------+-------+-------+-------+ + ``` - and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. + and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. 
+ + {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - ### Cluster-level default constraints {{< feature-state for_k8s_version="v1.18" state="alpha" >}} @@ -207,16 +208,16 @@ kind: KubeSchedulerConfiguration profiles: pluginConfig: - - name: PodTopologySpread - args: - defaultConstraints: - - maxSkew: 1 - topologyKey: failure-domain.beta.kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: failure-domain.beta.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway ``` {{< note >}} -The score produced by default scheduling constraints might conflict with the +The score produced by default scheduling constraints might conflict with the score produced by the [`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins). It is recommended that you disable this plugin in the scheduling profile when @@ -229,14 +230,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are scheduled - more packed or more scattered. - For `PodAffinity`, you can try to pack any number of Pods into qualifying -topology domain(s) + topology domain(s) - For `PodAntiAffinity`, only one Pod can be scheduled into a -single topology domain. + single topology domain. The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different topology domains - to achieve high availability or cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly. -See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details. +See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details. ## Known Limitations From 893aeecddd435a5927a462eed0fd7ebd9f4d8603 Mon Sep 17 00:00:00 2001 From: Tsahi Duek Date: Fri, 10 Apr 2020 14:43:45 +0300 Subject: [PATCH 103/105] gix missed identation caused by lint --- .../pods/pod-topology-spread-constraints.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index bb1db1907f..1c68ba4ebd 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -168,21 +168,21 @@ There are some implicit conventions worth noting here: - If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed. 
- Suppose you have a 5-node cluster ranging from zoneA to zoneC: + Suppose you have a 5-node cluster ranging from zoneA to zoneC: - ``` - +---------------+---------------+-------+ - | zoneA | zoneB | zoneC | - +-------+-------+-------+-------+-------+ - | node1 | node2 | node3 | node4 | node5 | - +-------+-------+-------+-------+-------+ - | P | P | P | | | - +-------+-------+-------+-------+-------+ - ``` + ``` + +---------------+---------------+-------+ + | zoneA | zoneB | zoneC | + +-------+-------+-------+-------+-------+ + | node1 | node2 | node3 | node4 | node5 | + +-------+-------+-------+-------+-------+ + | P | P | P | | | + +-------+-------+-------+-------+-------+ + ``` - and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. + and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. - {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} + {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} ### Cluster-level default constraints From af7ebbe2cff2749c47ce5f0062a87d6c1f4b6ae7 Mon Sep 17 00:00:00 2001 From: Arhell Date: Fri, 10 Apr 2020 15:06:24 +0300 Subject: [PATCH 104/105] Fix left menu button on mobile (docs home page) --- content/ja/docs/home/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/home/_index.md b/content/ja/docs/home/_index.md index c0f9abdb4c..1c46fab5b7 100644 --- a/content/ja/docs/home/_index.md +++ b/content/ja/docs/home/_index.md @@ -4,7 +4,7 @@ title: Kubernetesドキュメント noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "ホーム" main_menu: true weight: 10 From 1e99f4f71faf66ef1a52b9d4cc70cc25e3bfecfe Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 10 Dec 2019 02:30:48 +0000 Subject: [PATCH 105/105] Use API resource name for clarity --- content/en/docs/reference/glossary/service-account.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/en/docs/reference/glossary/service-account.md b/content/en/docs/reference/glossary/service-account.md index aee24891c8..7985f672fb 100755 --- a/content/en/docs/reference/glossary/service-account.md +++ b/content/en/docs/reference/glossary/service-account.md @@ -1,5 +1,5 @@ --- -title: Service Account +title: ServiceAccount id: service-account date: 2018-04-12 full_link: /docs/tasks/configure-pod-container/configure-service-account/ @@ -16,4 +16,3 @@ tags: When processes inside Pods access the cluster, they are authenticated by the API server as a particular service account, for example, `default`. When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. -
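
As a minimal sketch of what the ServiceAccount glossary entry above describes: a Pod can opt into a specific ServiceAccount through `spec.serviceAccountName`, and it falls back to the `default` account in its namespace when that field is omitted. The account name `build-robot` and the Pod name `sa-demo` below are hypothetical, invented only for illustration.

```shell
# Minimal sketch; the names "build-robot" and "sa-demo" are hypothetical.
kubectl create serviceaccount build-robot

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: build-robot   # omit to run as the "default" ServiceAccount
  containers:
  - name: app
    image: nginx:1.14.2
EOF

# Requests made with this Pod's mounted token are authenticated by the
# API server as system:serviceaccount:<namespace>:build-robot.
kubectl get pod sa-demo -o jsonpath='{.spec.serviceAccountName}'
```
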