Merge pull request #26488 from ChandaniM123/merged-master-dev-1.21
Merged master into dev 1.21 - 2/12/21
commit 12dd36ef3d
@@ -92,15 +92,21 @@ aliases:
- daminisatya
- mittalyashu
sig-docs-id-owners: # Admins for Indonesian content
- girikuncoro
- irvifa
sig-docs-id-reviews: # PR reviews for Indonesian content
- ariscahyadi
- danninov
- girikuncoro
- habibrosyad
- irvifa
- wahyuoi
- phanama
- wahyuoi
sig-docs-id-reviews: # PR reviews for Indonesian content
- ariscahyadi
- danninov
- girikuncoro
- habibrosyad
- irvifa
- phanama
- wahyuoi
sig-docs-it-owners: # Admins for Italian content
- fabriziopandini
- Fale
README-pt.md
@@ -2,11 +2,11 @@

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Bem vindos! Este repositório abriga todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!

# Utilizando este repositório

Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executa-ló em um container runtime. É altamente recomendável usar um container runtime, pois garante a consistência na implantação do website real.
Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executa-ló em um container runtime. É altamente recomendável utilizar um container runtime, pois garante a consistência na implantação do website real.

## Pré-requisitos
@@ -40,7 +40,7 @@ make container-image
make container-serve
```

Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, Hugo atualiza o website e força a atualização do navegador.
Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador.

## Executando o website localmente utilizando o Hugo
@@ -56,6 +56,53 @@ make serve

Isso iniciará localmente o Hugo na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força uma atualização no navegador.

## Construindo a página de referência da API

A página de referência da API localizada em `content/en/docs/reference/kubernetes-api` é construída a partir da especificação do Swagger utilizando https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.

Siga os passos abaixo para atualizar a página de referência para uma nova versão do Kubernetes:

OBS: modifique o "v1.20" no exemplo a seguir pela versão a ser atualizada

1. Obter o submódulo `kubernetes-resources-reference`:

```
git submodule update --init --recursive --depth 1
```

2. Criar a nova versão da API no submódulo e adicionar à especificação do Swagger:

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
```

3. Copiar o sumário e os campos de configuração para a nova versão a partir da versão anterior:

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
```

4. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.

5. Em seguida, gerar as páginas:

```
make api-reference
```

Você pode validar o resultado localmente gerando e disponibilizando o site a partir da imagem do container:

```
make container-image
make container-serve
```

Abra o seu navegador em http://localhost:1313/docs/reference/kubernetes-api/ para visualizar a página de referência da API.

6. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência de API.

## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
@@ -134,4 +181,4 @@ A participação na comunidade Kubernetes é regida pelo [Código de Conduta da

# Obrigado!

O Kubernetes conta com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
O Kubernetes prospera com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
@@ -100,6 +100,8 @@ make container-image
make container-serve
```

In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.

6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.

## Troubleshooting
@@ -66,6 +66,7 @@ Vagrant.configure("2") do |config|
end
end
end
end
```

### Step 2: Create an Ansible playbook for Kubernetes master.
@@ -19,6 +19,7 @@ cid: community

<div class="community__navbar">

<a href="#values">Community Values</a>
<a href="#conduct">Code of conduct </a>
<a href="#videos">Videos</a>
<a href="#discuss">Discussions</a>
@@ -41,10 +42,28 @@ cid: community
<img src="/images/community/kubernetes-community-final-05.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-right:0% important" class="desktop">
</div>
<img src="/images/community/kubernetes-community-04-mobile.jpg" alt="Kubernetes Conference Gallery" style="width:100%;margin-bottom:3%" class="mobile">

<a name="conduct"></a>
<a name="values"></a>
</div>

<div><a name="values"></a></div>
<div class="conduct">
<div class="conducttext">
<br class="mobile"><br class="mobile">
<br class="tablet"><br class="tablet">
<div class="conducttextnobutton" style="margin-bottom:2%"><h1>Community Values</h1>
The Kubernetes Community values are the keystone to the ongoing success of the project.<br>
These principles guide every aspect of the Kubernetes project.
<br>
<a href="/community/values/">
<br class="mobile"><br class="mobile">
<span class="fullbutton">
READ MORE
</span>
</a>
</div><a name="conduct"></a>
</div>
</div>


<div class="conduct">
@@ -0,0 +1,28 @@
<!-- Do not edit this file directly. Get the latest from
https://git.k8s.io/community/values.md -->

# Kubernetes Community Values

Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values which have evolved over the last many years in our community pushing our project and peers toward constant improvement.

## Distribution is better than centralization

The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstone of our world-wide community.

## Community over product or company

We are here as a community first, our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work, companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.

## Automation over process

Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.

## Inclusive is better than exclusive

Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path.

## Evolution is better than stagnation

Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty for leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.

**"Culture eats strategy for breakfast." --Peter Drucker**
@@ -0,0 +1,13 @@
---
title: Community
layout: basic
cid: community
css: /css/community.css
---

<div class="community_main">

<div class="cncf_coc_container">
{{< include "/static/community-values.md" >}}
</div>
</div>
@@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/

`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.

It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.

A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
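As an illustrative sketch of that URL-based workflow, a single manifest can be applied straight from a raw URL; the path below reuses the nginx example location referenced above and is only an assumption here:

```shell
# Apply one manifest directly from a URL instead of a local file
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
```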
@@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels

Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
For example, if you want to label all your nginx pods as frontend tier, simply run:
For example, if you want to label all your nginx pods as frontend tier, run:

```shell
kubectl label pods -l app=nginx tier=fe
@@ -411,7 +411,7 @@ and

## Disruptive updates

In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:

```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
@@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
deployment.apps/my-nginx scaled
```

To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.

```shell
kubectl edit deployment/my-nginx
@@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
@@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
A Secret can be either propagated by watch (default), ttl-based, or simply redirecting
A Secret can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
@@ -36,10 +36,13 @@ No parameters are passed to the handler.

`PreStop`

This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.
This hook is called immediately before a container is terminated due to an API request or management
event such as a liveness/startup probe failure, preemption, resource contention and others. A call
to the `PreStop` hook fails if the container is already in a terminated or completed state and the
hook must complete before the TERM signal to stop the container can be sent. The Pod's termination
grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of
the handler, the container will eventually terminate within the Pod's termination grace period. No
parameters are passed to the handler.

A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
@@ -65,19 +68,15 @@ the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.

`PreStop` hooks are not executed asynchronously from the signal
to stop the Container; the hook must complete its execution before
the signal can be sent.
If a `PreStop` hook hangs during execution,
the Pod's phase will be `Terminating` and remain there until the Pod is
killed after its `terminationGracePeriodSeconds` expires.
This grace period applies to the total time it takes for both
the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop
normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is
less than the total time (55+10) it takes for these two things to happen.
`PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must
complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during
execution, the Pod's phase will be `Terminating` and remain there until the Pod is killed after its
`terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for
both the `PreStop` hook to execute and for the Container to stop normally. If, for example,
`terminationGracePeriodSeconds` is 60, and the hook takes 55 seconds to complete, and the Container
takes 10 seconds to stop normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is less than the total time
(55+10) it takes for these two things to happen.

If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
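As a rough sketch of the interaction described above (the Pod name, image, and sleep duration are placeholders, not part of this change), a preStop hook and the grace period sit together in the Pod spec like this:

```shell
# Illustrative only: both the preStop hook and the normal shutdown must fit
# inside terminationGracePeriodSeconds, otherwise the container is killed.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo              # placeholder name
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx:1.14.2
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]  # must finish before TERM can be sent
EOF
```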
@@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi

## Custom controllers

On their own, custom resources simply let you store and retrieve structured data.
On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
@@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:

Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.

Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.

CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
@@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:

* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.

## Network Plugin Requirements
@@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
The application can simply use it as a service.
The application can access the message queue as a service.

## Architecture
@@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement

_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:

```
environment = production
@@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod

Define the example PodSecurityPolicy object in a file. This is a policy that
simply prevents the creation of privileged pods.
prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
@@ -610,17 +610,28 @@ plugins:
values: ["cluster-services"]
```

Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.
For example:
Then, create a resource quota object in the `kube-system` namespace:

```yaml
scopeSelector:
  matchExpressions:
  - scopeName: PriorityClass
    operator: In
    values: ["cluster-services"]
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}

```shell
$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```

```
resourcequota/pods-cluster-services created
```

In this case, a pod creation will be allowed if:

1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
   in the `kube-system` namespace, and it has passed the resource quota check.

A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
and it is to be created in a namespace other than `kube-system`.

## {{% heading "whatsnext" %}}

- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
@@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.

In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
@@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.

{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
checks all the nodes, simply because there are not enough feasible nodes to stop
checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.

In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
@@ -183,7 +183,7 @@ the three things:

{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
@@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation

## Code

@@ -152,3 +153,4 @@ Learn about related Kubernetes security topics:
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Runtime class](/docs/concepts/containers/runtime-class)
@@ -26,7 +26,7 @@ include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:

Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
in namespace `bar` can look up this service by simply doing a DNS query for
in namespace `bar` can look up this service by querying a DNS service for
`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
@@ -163,7 +163,7 @@ status:
loadBalancer: {}
```

1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`.
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.

{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
@@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find it by simply doing a name lookup for `my-service`
should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).

Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
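A quick way to see that resolution behavior is to run a lookup from inside a Pod in each namespace; the Pod names are placeholders and the container image is assumed to ship `nslookup`:

```shell
# From a Pod in the my-ns namespace, the short name resolves
kubectl exec -n my-ns dns-test-pod -- nslookup my-service
# From another namespace, the name must be qualified
kubectl exec -n default other-pod -- nslookup my-service.my-ns
```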
@@ -463,7 +463,7 @@ selectors defined:

For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return
records (addresses) that point directly to the `Pods` backing the `Service`.
A records (IP addresses) that point directly to the `Pods` backing the `Service`.

### Without selectors
@@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.

This means that Service owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware
collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.

#### iptables
@@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume
* iSCSI

Mount options are not validated, so mount will simply fail if one is invalid.
Mount options are not validated. If a mount option is invalid, the mount fails.

In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however,
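For illustration, mount options are listed on the PersistentVolume itself; the NFS server, path, and names below are placeholders, and an invalid entry would only surface when the mount is attempted:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-nfs             # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1                  # not validated up front; a typo here fails at mount time
  nfs:
    server: 192.0.2.10             # example address
    path: /exports/data
EOF
```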
@@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.

If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
the class or PV, so mount of the PV will simply fail if one is invalid.
the class or PV. If a mount option is invalid, the PV mount fails.

### Volume Binding Mode
@@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add

A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.

The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).

Users need to be aware of the following when using this feature:
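A minimal sketch of that `dataSource` usage (the PVC names and storage class are placeholders, and the underlying CSI driver must support cloning):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc                 # placeholder name
spec:
  storageClassName: csi-example    # assumed clone-capable CSI storage class
  dataSource:
    name: source-pvc               # existing PVC; must be bound and not in use
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```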
@@ -106,6 +106,8 @@ spec:
fsType: ext4
```

If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.

#### AWS EBS CSI migration

{{< feature-state for_k8s_version="v1.17" state="beta" >}}
@@ -90,6 +90,11 @@ If `startingDeadlineSeconds` is set to a large value or left unset (the default)
and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
at least once.

{{< caution >}}
If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks things every 10 seconds.
{{< /caution >}}

For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error

````

@@ -128,4 +133,3 @@ documents the format of CronJob `schedule` fields.
For instructions on creating and working with cron jobs, and for an example of CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).
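For context, the fields discussed above sit directly on the CronJob spec; the schedule, names, and image below are illustrative only:

```shell
kubectl apply -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron               # placeholder name
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 200     # keep this comfortably above 10 seconds
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            command: ["sh", "-c", "date"]
EOF
```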
@@ -47,7 +47,7 @@ In this example:

* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
@@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:

```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
```
or simply use the following command:

or use the following command:

```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
```

The output is similar to this:
The output is similar to:

```
deployment.apps/nginx-deployment image updated
```

@@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
kubectl edit deployment.v1.apps/nginx-deployment
```

The output is similar to this:
The output is similar to:

```
deployment.apps/nginx-deployment edited
```

@@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
```

The output is similar to this:

```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```

or

```
deployment "nginx-deployment" successfully rolled out
```

@@ -212,7 +218,8 @@ Get more details on your updated Deployment:

* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this:
```

```ini
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 36s
```
@@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.

When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).

### Deleting just a ReplicationController
### Deleting only a ReplicationController

You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).

When using the REST API or go client library, simply delete the ReplicationController object.
When using the REST API or Go client library, you can delete the ReplicationController object.

Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
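As a sketch (the controller name is a placeholder), the non-cascading delete described above looks like:

```shell
# Remove the ReplicationController but leave its pods running
kubectl delete replicationcontroller my-rc --cascade=false
```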
@@ -240,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic

## Responsibilities of the ReplicationController

The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
@@ -52,7 +52,7 @@ Members can:

{{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding
approval, simply commenting "LGTM" works too!
approval, commenting "LGTM" works too!
{{< /note >}}

- Use the `/hold` comment to block merging for a pull request
@@ -44,22 +44,25 @@ The English-language documentation uses U.S. English spelling and grammar.

### Use upper camel case for API objects

When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.

When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).

You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.

Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.

Refer to API objects without saying "object," unless omitting "object"
leads to an awkward construction.
The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects.

{{< table caption = "Do and Don't - API objects" >}}
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
:--| :-----
The pod has two containers. | The Pod has two containers.
The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
A PodList is a list of pods. | A Pod List is a list of pods.
The two ContainerPorts ... | The two ContainerPort objects ...
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
A PodList object is a list of pods. | A Pod List object is a list of pods.
The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}}

@@ -113,12 +116,12 @@ The copy is called a "fork". | The copy is called a "fork."

## Inline code formatting

### Use code style for inline code, commands, and API objects
### Use code style for inline code, commands, and API objects {#code-style-inline-code}

For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).

{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
@@ -116,7 +116,7 @@ Rejects all requests. AlwaysDeny is DEPRECATED as it has no real meaning.
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
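For reference, admission controllers like this one are switched on through the API server's `--enable-admission-plugins` flag; the plugin list below is only an example and should be merged with whatever the cluster already enables:

```shell
# Example only: append AlwaysPullImages to the cluster's existing plugin list
kube-apiserver --enable-admission-plugins=NodeRestriction,AlwaysPullImages
```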
@@ -206,7 +206,7 @@ spec:

Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long standing jobs that wish to talk to the
Kubernetes API. To manually create a service account, simply use the `kubectl
Kubernetes API. To manually create a service account, use the `kubectl
create serviceaccount (NAME)` command. This creates a service account in the
current namespace and an associated secret.
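For example (the account name is a placeholder):

```shell
kubectl create serviceaccount build-robot          # created in the current namespace
kubectl get serviceaccount build-robot -o yaml     # shows the account and its associated secret
```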
@@ -420,12 +420,12 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option
The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option:
The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
@@ -635,8 +635,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
@@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16

## Introduction

Server Side Apply helps users and controllers manage their resources via
declarative configurations. It allows them to create and/or modify their
Server Side Apply helps users and controllers manage their resources through
declarative configurations. Clients can create and modify their
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
declaratively, simply by sending their fully specified intent.
declaratively by sending their fully specified intent.

A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
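From kubectl, sending that fully specified intent to the server is a one-flag change; the manifest path is a placeholder:

```shell
# Let the API server perform the apply and track field ownership
kubectl apply --server-side -f my-manifest.yaml
```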
@@ -420,7 +420,7 @@ Start CRI-O:

```shell
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio --now
```

Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)
@@ -434,7 +434,7 @@ Now remove the node:
kubectl delete node <node name>
```

If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.

### Clean up the control plane
@@ -308,13 +308,6 @@ or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it a
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}

Restarting the kubelet is required:

```bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

The automatic detection of cgroup driver for other container runtimes
like CRI-O and containerd is work in progress.
@@ -547,7 +547,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star

1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"

There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:
There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch start.ps1 or relaunch it manually as follows:

```powershell
PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "<Windows_Worker_Hostname>")
@@ -23,7 +23,7 @@ Windows applications constitute a large portion of the services and applications
## Before you begin

* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers.

## Getting Started: Deploying a Windows container
@@ -280,7 +280,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l

#### Manually constructing apiserver proxy URLs

As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`

If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
@@ -215,7 +215,7 @@ for i in ret.items:

#### Java client

* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute :
To install the [Java Client](https://github.com/kubernetes-client/java), run:

```shell
# Clone java library
@@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac

#### Manually constructing apiserver proxy URLs

As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`

If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
@@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca
you can either change the default StorageClass or disable it completely to avoid
dynamic provisioning of storage.

Simply deleting the default StorageClass may not work, as it may be re-created
Deleting the default StorageClass may not work, as it may be re-created
automatically by the addon manager running in your cluster. Please consult the docs for your installation
for details about addon manager and how to disable individual addons.
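A gentler alternative sketch, assuming the usual `storageclass.kubernetes.io/is-default-class` annotation and a class named `standard`, is to mark the class as non-default instead of deleting it:

```shell
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```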
@@ -201,6 +201,9 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```

When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap.
If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' <name>`.

Use the option `--from-env-file` to create a ConfigMap from an env-file, for example:

```shell

@@ -687,4 +690,3 @@ data:

* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).
@@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is
usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (for example, `default`).

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## Use the Default Service Account to access the API server.
@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced
|
|||
|
||||
You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
|
||||
|
||||
To use a non-default service account, simply set the `spec.serviceAccountName`
|
||||
To use a non-default service account, set the `spec.serviceAccountName`
|
||||
field of a pod to the name of the service account you wish to use.
|
||||
|
||||
The service account has to exist at the time the pod is created, or it will be rejected.
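A minimal sketch of creating such an account ahead of time from the command line; the name `build-robot` is only an example:

```shell
# Create the ServiceAccount before any Pod references it in spec.serviceAccountName.
kubectl create serviceaccount build-robot

# Confirm that it exists and inspect it.
kubectl get serviceaccount build-robot -o yaml
```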
|
||||
|
|
|
@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com
|
|||
|
||||
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Install Kompose
|
||||
|
@ -35,13 +29,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re
|
|||
|
||||
```sh
|
||||
# Linux
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
|
||||
|
||||
# macOS
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-darwin-amd64 -o kompose
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
|
||||
|
||||
# Windows
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-windows-amd64.exe -o kompose.exe
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe
|
||||
|
||||
chmod +x kompose
|
||||
sudo mv ./kompose /usr/local/bin/kompose
|
||||
|
@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose
|
|||
|
||||
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
|
||||
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Build from source" %}}
|
||||
|
||||
|
@ -87,8 +80,8 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
|
|||
|
||||
```bash
|
||||
brew install kompose
|
||||
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
@ -97,8 +90,7 @@ brew install kompose
|
|||
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
|
||||
you need is an existing `docker-compose.yml` file.
|
||||
|
||||
1. Go to the directory containing your `docker-compose.yml` file. If you don't
|
||||
have one, test using this one.
|
||||
1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
|
||||
|
||||
```yaml
|
||||
version: "2"
|
||||
|
@ -127,37 +119,44 @@ you need is an existing `docker-compose.yml` file.
|
|||
kompose.service.type: LoadBalancer
|
||||
```
|
||||
|
||||
2. Run the `kompose up` command to deploy to Kubernetes directly, or skip to
|
||||
the next step instead to generate a file to use with `kubectl`.
|
||||
|
||||
```bash
|
||||
$ kompose up
|
||||
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
|
||||
|
||||
INFO Successfully created Service: redis
|
||||
INFO Successfully created Service: web
|
||||
INFO Successfully created Deployment: redis
|
||||
INFO Successfully created Deployment: web
|
||||
|
||||
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
|
||||
```
|
||||
|
||||
3. To convert the `docker-compose.yml` file to files that you can use with
|
||||
2. To convert the `docker-compose.yml` file to files that you can use with
|
||||
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
|
||||
|
||||
```bash
|
||||
$ kompose convert
|
||||
kompose convert
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
|
||||
```bash
|
||||
$ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
|
||||
kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
```

The output is similar to:

```none
|
||||
service/frontend created
|
||||
service/redis-master created
|
||||
service/redis-slave created
|
||||
|
@ -168,18 +167,21 @@ you need is an existing `docker-compose.yml` file.
|
|||
|
||||
Your deployments are running in Kubernetes.
|
||||
|
||||
4. Access your application.
|
||||
3. Access your application.
|
||||
|
||||
If you're already using `minikube` for your development process:
|
||||
|
||||
```bash
|
||||
$ minikube service frontend
|
||||
minikube service frontend
|
||||
```
|
||||
|
||||
Otherwise, let's look up what IP your service is using!
|
||||
|
||||
```sh
|
||||
$ kubectl describe svc frontend
|
||||
kubectl describe svc frontend
|
||||
```
|
||||
|
||||
```none
|
||||
Name: frontend
|
||||
Namespace: default
|
||||
Labels: service=frontend
|
||||
|
@ -192,17 +194,14 @@ you need is an existing `docker-compose.yml` file.
|
|||
Endpoints: 172.17.0.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
|
||||
```
|
||||
|
||||
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
|
||||
|
||||
```sh
|
||||
$ curl http://192.0.2.89
|
||||
curl http://192.0.2.89
|
||||
```
|
||||
|
||||
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
## User Guide
|
||||
|
@ -221,15 +220,17 @@ you need is an existing `docker-compose.yml` file.
|
|||
Kompose has support for two providers: OpenShift and Kubernetes.
|
||||
You can choose a targeted provider using the global option `--provider`. If no provider is specified, Kubernetes is used by default.
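A short hedged sketch of the two provider choices (both forms appear in the examples further down this page):

```shell
# Kubernetes is the default provider.
kompose convert

# Target OpenShift instead.
kompose --provider openshift convert
```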
|
||||
|
||||
|
||||
## `kompose convert`
|
||||
|
||||
Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.
|
||||
|
||||
### Kubernetes
|
||||
### Kubernetes `kompose convert` example
|
||||
|
||||
```sh
|
||||
$ kompose --file docker-voting.yml convert
|
||||
```shell
|
||||
kompose --file docker-voting.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN Unsupported key networks - ignoring
|
||||
WARN Unsupported key build - ignoring
|
||||
INFO Kubernetes file "worker-svc.yaml" created
|
||||
|
@ -242,16 +243,24 @@ INFO Kubernetes file "result-deployment.yaml" created
|
|||
INFO Kubernetes file "vote-deployment.yaml" created
|
||||
INFO Kubernetes file "worker-deployment.yaml" created
|
||||
INFO Kubernetes file "db-deployment.yaml" created
|
||||
```
|
||||
|
||||
$ ls
|
||||
```shell
|
||||
ls
|
||||
```
|
||||
|
||||
```none
|
||||
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
|
||||
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
|
||||
```
|
||||
|
||||
You can also provide multiple docker-compose files at the same time:
|
||||
|
||||
```sh
|
||||
$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
|
||||
```shell
|
||||
kompose -f docker-compose.yml -f docker-guestbook.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "mlbparks-service.yaml" created
|
||||
INFO Kubernetes file "mongodb-service.yaml" created
|
||||
|
@ -263,8 +272,13 @@ INFO Kubernetes file "mongodb-deployment.yaml" created
|
|||
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
|
||||
$ ls
|
||||
```shell
|
||||
ls
|
||||
```
|
||||
|
||||
```none
|
||||
mlbparks-deployment.yaml  mongodb-service.yaml  redis-slave-service.json  mlbparks-service.yaml
|
||||
frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml
|
||||
frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml
|
||||
|
@ -273,10 +287,13 @@ redis-master-deployment.yaml
|
|||
|
||||
When multiple docker-compose files are provided, the configuration is merged. Any configuration that is common will be overridden by the subsequent file.
|
||||
|
||||
### OpenShift
|
||||
### OpenShift `kompose convert` example
|
||||
|
||||
```sh
|
||||
$ kompose --provider openshift --file docker-voting.yml convert
|
||||
kompose --provider openshift --file docker-voting.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN [worker] Service cannot be created because of missing port.
|
||||
INFO OpenShift file "vote-service.yaml" created
|
||||
INFO OpenShift file "db-service.yaml" created
|
||||
|
@ -297,7 +314,10 @@ INFO OpenShift file "result-imagestream.yaml" created
|
|||
It also supports creating a BuildConfig for the build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using the ``--build-repo`` and ``--build-branch`` options, respectively.
|
||||
|
||||
```sh
|
||||
$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
|
||||
kompose --provider openshift --file buildconfig/docker-compose.yml convert
|
||||
```
|
||||
|
||||
```none
|
||||
WARN [foo] Service cannot be created because of missing port.
|
||||
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
|
||||
INFO OpenShift file "foo-deploymentconfig.yaml" created
|
||||
|
@ -313,10 +333,13 @@ If you are manually pushing the OpenShift artifacts using ``oc create -f``, you
|
|||
|
||||
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
|
||||
|
||||
### Kubernetes `kompose up` example
|
||||
|
||||
### Kubernetes
|
||||
```sh
|
||||
$ kompose --file ./examples/docker-guestbook.yml up
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml up
|
||||
```
|
||||
|
||||
```none
|
||||
We are going to create Kubernetes deployments and services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
|
||||
|
||||
|
@ -328,8 +351,13 @@ INFO Successfully created deployment: redis-slave
|
|||
INFO Successfully created deployment: frontend
|
||||
|
||||
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.
|
||||
```
|
||||
|
||||
$ kubectl get deployment,svc,pods
|
||||
```shell
|
||||
kubectl get deployment,svc,pods
|
||||
```
|
||||
|
||||
```none
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
deployment.extensions/frontend 1 1 1 1 4m
|
||||
deployment.extensions/redis-master 1 1 1 1 4m
|
||||
|
@ -347,14 +375,19 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m
|
|||
pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
**Note**:
|
||||
{{< note >}}
|
||||
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
- Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl apply -f` commands instead.
|
||||
{{< /note >}}
|
||||
|
||||
### OpenShift
|
||||
```sh
|
||||
$ kompose --file ./examples/docker-guestbook.yml --provider openshift up
|
||||
### OpenShift `kompose up` example
|
||||
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml --provider openshift up
|
||||
```
|
||||
|
||||
```none
|
||||
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
|
||||
|
||||
|
@ -369,8 +402,13 @@ INFO Successfully created deployment: redis-master
|
|||
INFO Successfully created ImageStream: redis-master
|
||||
|
||||
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details.
|
||||
```
|
||||
|
||||
$ oc get dc,svc,is
|
||||
```shell
|
||||
oc get dc,svc,is
|
||||
```
|
||||
|
||||
```none
|
||||
NAME REVISION DESIRED CURRENT TRIGGERED BY
|
||||
dc/frontend 0 1 0 config,image(frontend:v4)
|
||||
dc/redis-master 0 1 0 config,image(redis-master:e2e)
|
||||
|
@ -385,16 +423,16 @@ is/redis-master 172.30.12.200:5000/fff/redis-master
|
|||
is/redis-slave 172.30.12.200:5000/fff/redis-slave v1
|
||||
```
|
||||
|
||||
**Note**:
|
||||
|
||||
- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
|
||||
{{< note >}}
|
||||
You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`).
|
||||
{{< /note >}}
|
||||
|
||||
## `kompose down`
|
||||
|
||||
Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
|
||||
Once you have deployed "composed" application to Kubernetes, `kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
|
||||
|
||||
```sh
|
||||
$ kompose --file docker-guestbook.yml down
|
||||
```shell
|
||||
kompose --file docker-guestbook.yml down
|
||||
INFO Successfully deleted service: redis-master
|
||||
INFO Successfully deleted deployment: redis-master
|
||||
INFO Successfully deleted service: redis-slave
|
||||
|
@ -403,16 +441,16 @@ INFO Successfully deleted service: frontend
|
|||
INFO Successfully deleted deployment: frontend
|
||||
```
|
||||
|
||||
**Note**:
|
||||
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
{{< note >}}
|
||||
You must have a running Kubernetes cluster with a pre-configured `kubectl` context.
|
||||
{{< /note >}}
|
||||
|
||||
## Build and Push Docker Images
|
||||
|
||||
Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
|
||||
|
||||
- Automatically be built with Docker using the `image` key specified within your file
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
|
||||
|
||||
Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
|
||||
|
||||
|
@ -428,7 +466,7 @@ services:
|
|||
Using `kompose up` with a `build` key:
|
||||
|
||||
```none
|
||||
$ kompose up
|
||||
kompose up
|
||||
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
|
||||
INFO Building image 'docker.io/foo/bar' from directory 'build'
|
||||
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
|
||||
|
@ -448,10 +486,10 @@ In order to disable the functionality, or choose to use BuildConfig generation (
|
|||
|
||||
```sh
|
||||
# Disable building/pushing Docker images
|
||||
$ kompose up --build none
|
||||
kompose up --build none
|
||||
|
||||
# Generate Build Config artifacts for OpenShift
|
||||
$ kompose up --provider openshift --build build-config
|
||||
kompose up --provider openshift --build build-config
|
||||
```
|
||||
|
||||
## Alternative Conversions
|
||||
|
@ -459,45 +497,54 @@ $ kompose up --provider openshift --build build-config
|
|||
The default `kompose` transformation generates Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/) in YAML format. You also have the option to generate JSON with `-j`, or to generate [Replication Controller](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts instead.
|
||||
|
||||
```sh
|
||||
$ kompose convert -j
|
||||
kompose convert -j
|
||||
INFO Kubernetes file "redis-svc.json" created
|
||||
INFO Kubernetes file "web-svc.json" created
|
||||
INFO Kubernetes file "redis-deployment.json" created
|
||||
INFO Kubernetes file "web-deployment.json" created
|
||||
```
|
||||
|
||||
The `*-deployment.json` files contain the Deployment objects.
|
||||
|
||||
```sh
|
||||
$ kompose convert --replication-controller
|
||||
kompose convert --replication-controller
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-replicationcontroller.yaml" created
|
||||
INFO Kubernetes file "web-replicationcontroller.yaml" created
|
||||
```
|
||||
|
||||
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
|
||||
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use the `--replicas` flag: `kompose convert --replication-controller --replicas 3`
|
||||
|
||||
```sh
|
||||
$ kompose convert --daemon-set
|
||||
```shell
|
||||
kompose convert --daemon-set
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-daemonset.yaml" created
|
||||
INFO Kubernetes file "web-daemonset.yaml" created
|
||||
```
|
||||
|
||||
The `*-daemonset.yaml` files contain the Daemon Set objects
|
||||
The `*-daemonset.yaml` files contain the DaemonSet objects.
|
||||
|
||||
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
|
||||
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) run:
|
||||
|
||||
```sh
|
||||
$ kompose convert -c
|
||||
```shell
|
||||
kompose convert -c
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-deployment.yaml" created
|
||||
chart created in "./docker-compose/"
|
||||
```
|
||||
|
||||
$ tree docker-compose/
|
||||
```shell
|
||||
tree docker-compose/
|
||||
```
|
||||
|
||||
```none
|
||||
docker-compose
|
||||
├── Chart.yaml
|
||||
├── README.md
|
||||
|
@ -578,7 +625,7 @@ If you want to create normal pods without controllers you can use `restart` cons
|
|||
| `no` | Pod | `Never` |
|
||||
|
||||
{{< note >}}
|
||||
The controller object could be `deployment` or `replicationcontroller`, etc.
|
||||
The controller object could be `deployment` or `replicationcontroller`.
|
||||
{{< /note >}}
|
||||
|
||||
For example, the `pival` service below will become a Pod. This container calculates the value of `pi`.
|
||||
|
@ -593,7 +640,7 @@ services:
|
|||
restart: "on-failure"
|
||||
```
|
||||
|
||||
### Warning about Deployment Config's
|
||||
### Warning about Deployment Configurations
|
||||
|
||||
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to avoid multiple instances of a service from accessing a volume at the same time.
|
||||
|
||||
|
@ -606,5 +653,3 @@ Please note that changing service name might break some `docker-compose` files.
|
|||
Kompose supports Docker Compose versions 1, 2 and 3. Support for versions 2.1 and 3.2 is limited due to their experimental nature.
|
||||
|
||||
A full compatibility listing for all three versions, including all incompatible Docker Compose keys, is available in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md).
|
||||
|
||||
|
||||
|
|
|
@ -18,10 +18,10 @@ you to figure out what's going wrong.
|
|||
## Running commands in a Pod
|
||||
|
||||
For many steps here you will want to see what a Pod running in the cluster
|
||||
sees. The simplest way to do this is to run an interactive alpine Pod:
|
||||
sees. The simplest way to do this is to run an interactive busybox Pod:
|
||||
|
||||
```none
|
||||
kubectl run -it --rm --restart=Never alpine --image=alpine sh
|
||||
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
@ -111,7 +111,7 @@ kubectl get pods -l app=hostnames \
|
|||
10.244.0.7
|
||||
```
|
||||
|
||||
The example container used for this walk-through simply serves its own hostname
|
||||
The example container used for this walk-through serves its own hostname
|
||||
via HTTP on port 9376, but if you are debugging your own app, you'll want to
|
||||
use whatever port number your Pods are listening on.
|
||||
|
||||
|
|
|
@ -12,20 +12,15 @@ content_type: task
|
|||
This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think
|
||||
of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
You need to have a working `kubectl` binary installed.
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Installing kubectl plugins
|
||||
|
||||
A plugin is nothing more than a standalone executable file, whose name begins with `kubectl-`. To install a plugin, simply move its executable file to anywhere on your `PATH`.
|
||||
A plugin is a standalone executable file, whose name begins with `kubectl-`. To install a plugin, move its executable file to anywhere on your `PATH`.
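For instance, a hedged sketch assuming you already have an executable named `kubectl-foo` in the current directory:

```shell
# Copy the plugin onto your PATH, then let kubectl confirm that it is discovered.
sudo mv ./kubectl-foo /usr/local/bin/
kubectl plugin list
```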
|
||||
|
||||
You can also discover and install kubectl plugins available in the open source
|
||||
using [Krew](https://krew.dev/). Krew is a plugin manager maintained by
|
||||
|
@ -60,9 +55,9 @@ You can write a plugin in any programming language or script that allows you to
|
|||
|
||||
There is no plugin installation or pre-loading required. Plugin executables receive
|
||||
the inherited environment from the `kubectl` binary.
|
||||
A plugin determines which command path it wishes to implement based on its name. For
|
||||
example, a plugin wanting to provide a new command `kubectl foo`, would simply be named
|
||||
`kubectl-foo`, and live somewhere in your `PATH`.
|
||||
A plugin determines which command path it wishes to implement based on its name.
|
||||
For example, a plugin named `kubectl-foo` provides a command `kubectl foo`. You must
|
||||
install the plugin executable somewhere in your `PATH`.
|
||||
|
||||
### Example plugin
|
||||
|
||||
|
@ -88,32 +83,34 @@ echo "I am a plugin named kubectl-foo"
|
|||
|
||||
### Using a plugin
|
||||
|
||||
To use the above plugin, simply make it executable:
|
||||
To use a plugin, make the plugin executable:
|
||||
|
||||
```
|
||||
```shell
|
||||
sudo chmod +x ./kubectl-foo
|
||||
```
|
||||
|
||||
and place it anywhere in your `PATH`:
|
||||
|
||||
```
|
||||
```shell
|
||||
sudo mv ./kubectl-foo /usr/local/bin
|
||||
```
|
||||
|
||||
You may now invoke your plugin as a `kubectl` command:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubectl foo
|
||||
```
|
||||
|
||||
```
|
||||
I am a plugin named kubectl-foo
|
||||
```
|
||||
|
||||
All args and flags are passed as-is to the executable:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubectl foo version
|
||||
```
|
||||
|
||||
```
|
||||
1.0.0
|
||||
```
|
||||
|
@ -124,6 +121,7 @@ All environment variables are also passed as-is to the executable:
|
|||
export KUBECONFIG=~/.kube/config
|
||||
kubectl foo config
|
||||
```
|
||||
|
||||
```
|
||||
/home/<user>/.kube/config
|
||||
```
|
||||
|
@ -131,6 +129,7 @@ kubectl foo config
|
|||
```shell
|
||||
KUBECONFIG=/etc/kube/config kubectl foo config
|
||||
```
|
||||
|
||||
```
|
||||
/etc/kube/config
|
||||
```
|
||||
|
@ -376,16 +375,11 @@ set up a build environment (if it needs compiling), and deploy the plugin.
|
|||
If you also make compiled packages available, or use Krew, that will make
|
||||
installs easier.
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Check the Sample CLI Plugin repository for a
|
||||
[detailed example](https://github.com/kubernetes/sample-cli-plugin) of a
|
||||
plugin written in Go.
|
||||
In case of any questions, feel free to reach out to the
|
||||
[SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli).
|
||||
* Read about [Krew](https://krew.dev/), a package manager for kubectl plugins.
|
||||
|
||||
|
||||
|
|
|
@ -12,7 +12,7 @@ based on a common template. You can use this approach to process batches of work
|
|||
parallel.
|
||||
|
||||
For this example there are only three items: _apple_, _banana_, and _cherry_.
|
||||
The sample Jobs process each item simply by printing a string then pausing.
|
||||
The sample Jobs process each item by printing a string then pausing.
|
||||
|
||||
See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
|
||||
this pattern fits more realistic use cases.
|
||||
|
|
|
@ -66,7 +66,7 @@ Use caution when deleting a PVC, as it may lead to data loss.
|
|||
|
||||
### Complete deletion of a StatefulSet
|
||||
|
||||
To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
|
||||
To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
|
||||
|
||||
```shell
|
||||
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
|
||||
|
|
|
@ -383,7 +383,12 @@ behavior:
|
|||
periodSeconds: 60
|
||||
```
|
||||
|
||||
When the number of pods is more than 40, the second policy will be used for scaling down.
|
||||
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
|
||||
The first policy _(Pods)_ allows at most 4 replicas to be scaled down in one minute. The second policy
|
||||
_(Percent)_ allows at most 10% of the current replicas to be scaled down in one minute.
|
||||
|
||||
Since by default the policy which allows the highest amount of change is selected, the second policy will
|
||||
only be used when the number of pod replicas is more than 40. With 40 or fewer replicas, the first policy will be applied.
|
||||
For instance if there are 80 replicas and the target has to be scaled down to 10 replicas
|
||||
then during the first step 8 replicas will be reduced. In the next iteration when the number
|
||||
of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
|
||||
|
@ -391,10 +396,6 @@ the autoscaler controller the number of pods to be change is re-calculated based
|
|||
of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
|
||||
and 4 replicas will be reduced at a time.
|
||||
|
||||
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
|
||||
The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
|
||||
allows at most 10% of the current replicas to be scaled down in one minute.
|
||||
|
||||
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
|
||||
direction. Setting the value to `Min` selects the policy which allows the
|
||||
smallest change in the replica count. Setting the value to `Disabled` completely disables
|
||||
|
@ -441,7 +442,7 @@ behavior:
|
|||
periodSeconds: 15
|
||||
selectPolicy: Max
|
||||
```
|
||||
For scaling down the stabilization window is _300_ seconds(or the value of the
|
||||
For scaling down the stabilization window is _300_ seconds (or the value of the
|
||||
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
|
||||
for scaling down, which allows 100% of the currently running replicas to be removed, which
|
||||
means the scaling target can be scaled down to the minimum allowed replicas.
|
||||
|
|
|
@ -171,10 +171,10 @@ properties.
|
|||
The script in the `init-mysql` container also applies either `primary.cnf` or
|
||||
`replica.cnf` from the ConfigMap by copying the contents into `conf.d`.
|
||||
Because the example topology consists of a single primary MySQL server and any number of
|
||||
replicas, the script simply assigns ordinal `0` to be the primary server, and everyone
|
||||
replicas, the script assigns ordinal `0` to be the primary server, and everyone
|
||||
else to be replicas.
|
||||
Combined with the StatefulSet controller's
|
||||
[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/),
|
||||
[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees),
|
||||
this ensures the primary MySQL server is Ready before creating replicas, so they can begin
|
||||
replicating.
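The manifest carries the actual script, but the ordinal-based decision it makes is roughly the following sketch; the mount paths and variable handling here are assumptions for illustration:

```shell
# Sketch of the init-mysql container's choice between primary.cnf and replica.cnf.
# StatefulSet Pod hostnames end in their ordinal index: mysql-0, mysql-1, ...
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/primary.cnf /mnt/conf.d/   # ordinal 0 becomes the primary
else
  cp /mnt/config-map/replica.cnf /mnt/conf.d/   # everyone else becomes a replica
fi
```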
|
||||
|
||||
|
|
|
@ -65,6 +65,8 @@ for a secure solution.
|
|||
|
||||
kubectl describe deployment mysql
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
Name: mysql
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700
|
||||
|
@ -105,6 +107,8 @@ for a secure solution.
|
|||
|
||||
kubectl get pods -l app=mysql
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
mysql-63082529-2z3ki 1/1 Running 0 3m
|
||||
|
||||
|
@ -112,6 +116,8 @@ for a secure solution.
|
|||
|
||||
kubectl describe pvc mysql-pv-claim
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
Name: mysql-pv-claim
|
||||
Namespace: default
|
||||
StorageClass:
|
||||
|
|
|
@ -51,7 +51,6 @@ a Deployment that runs the nginx:1.14.2 Docker image:
|
|||
|
||||
The output is similar to this:
|
||||
|
||||
user@computer:~/website$ kubectl describe deployment nginx-deployment
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700
|
||||
|
|
|
@ -51,12 +51,12 @@ Configurations with a single API server will experience unavailability while the
|
|||
If any pods are started before new CA is used by API servers, they will get this update and trust both old and new CAs.
|
||||
|
||||
```shell
|
||||
base64_encoded_ca="$(base64 <path to file containing both old and new CAs>)"
|
||||
base64_encoded_ca="$(base64 -w0 <path to file containing both old and new CAs>)"
|
||||
|
||||
for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
|
||||
for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
|
||||
kubectl get $token --namespace "$namespace" -o yaml | \
|
||||
/bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}" | \
|
||||
/bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
|
||||
kubectl apply -f -
|
||||
done
|
||||
done
|
||||
|
@ -132,10 +132,10 @@ Configurations with a single API server will experience unavailability while the
|
|||
1. If your cluster is using bootstrap tokens to join nodes, update the ConfigMap `cluster-info` in the `kube-public` namespace with new CA.
|
||||
|
||||
```shell
|
||||
base64_encoded_ca="$(base64 /etc/kubernetes/pki/ca.crt)"
|
||||
base64_encoded_ca="$(base64 -w0 /etc/kubernetes/pki/ca.crt)"
|
||||
|
||||
kubectl get cm/cluster-info --namespace kube-public -o yaml | \
|
||||
/bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}" | \
|
||||
/bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \
|
||||
kubectl apply -f -
|
||||
```
|
||||
|
||||
|
|
|
@ -33,7 +33,7 @@ Before walking through each tutorial, you may want to bookmark the
|
|||
|
||||
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Stateful Applications
|
||||
|
||||
|
|
|
@ -168,8 +168,7 @@ k8s-apparmor-example-deny-write (enforce)
|
|||
|
||||
*This example assumes you have already set up a cluster with AppArmor support.*
|
||||
|
||||
First, we need to load the profile we want to use onto our nodes. The profile we'll use simply
|
||||
denies all file writes:
|
||||
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
|
||||
|
||||
```shell
|
||||
#include <tunables/global>
|
||||
|
|
|
@ -64,12 +64,6 @@ weight: 10
|
|||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_services.svg" width="150%" height="150%"></p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) is handled by Kubernetes Services.</p>
|
||||
|
|
(The diff of one large generated file is suppressed; an image also changed, from 57 KiB to 79 KiB.)
|
@ -934,10 +934,10 @@ web-2 0/1 Terminating 0 3m
|
|||
|
||||
When the `web` StatefulSet was recreated, it first relaunched `web-0`.
|
||||
Since `web-1` was already Running and Ready, when `web-0` transitioned to
|
||||
Running and Ready, it simply adopted this Pod. Since you recreated the StatefulSet
|
||||
with `replicas` equal to 2, once `web-0` had been recreated, and once
|
||||
`web-1` had been determined to already be Running and Ready, `web-2` was
|
||||
terminated.
|
||||
Running and Ready, it adopted this Pod. Since you recreated the StatefulSet
|
||||
with `replicas` equal to 2, once `web-0` had been recreated, and once
|
||||
`web-1` had been determined to already be Running and Ready, `web-2` was
|
||||
terminated.
|
||||
|
||||
Let's take another look at the contents of the `index.html` file served by the
|
||||
Pods' webservers:
|
||||
|
@ -945,6 +945,7 @@ Pods' webservers:
|
|||
```shell
|
||||
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
|
||||
```
|
||||
|
||||
```
|
||||
web-0
|
||||
web-1
|
||||
|
@ -970,15 +971,18 @@ In another terminal, delete the StatefulSet again. This time, omit the
|
|||
```shell
|
||||
kubectl delete statefulset web
|
||||
```
|
||||
|
||||
```
|
||||
statefulset.apps "web" deleted
|
||||
```
|
||||
|
||||
Examine the output of the `kubectl get` command running in the first terminal,
|
||||
and wait for all of the Pods to transition to Terminating.
|
||||
|
||||
```shell
|
||||
kubectl get pods -w -l app=nginx
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
web-0 1/1 Running 0 11m
|
||||
|
@ -1006,10 +1010,10 @@ the cascade does not delete the headless Service associated with the StatefulSet
|
|||
You must delete the `nginx` Service manually.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
```shell
|
||||
kubectl delete service nginx
|
||||
```
|
||||
|
||||
```
|
||||
service "nginx" deleted
|
||||
```
|
||||
|
@ -1019,6 +1023,7 @@ Recreate the StatefulSet and headless Service one more time:
|
|||
```shell
|
||||
kubectl apply -f web.yaml
|
||||
```
|
||||
|
||||
```
|
||||
service/nginx created
|
||||
statefulset.apps/web created
|
||||
|
@ -1030,6 +1035,7 @@ the contents of their `index.html` files:
|
|||
```shell
|
||||
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
|
||||
```
|
||||
|
||||
```
|
||||
web-0
|
||||
web-1
|
||||
|
@ -1044,13 +1050,17 @@ Finally, delete the `nginx` Service...
|
|||
```shell
|
||||
kubectl delete service nginx
|
||||
```
|
||||
|
||||
```
|
||||
service "nginx" deleted
|
||||
```
|
||||
|
||||
...and the `web` StatefulSet:
|
||||
|
||||
```shell
|
||||
kubectl delete statefulset web
|
||||
```
|
||||
|
||||
```
|
||||
statefulset "web" deleted
|
||||
```
|
||||
|
|
|
@ -1,460 +0,0 @@
|
|||
---
|
||||
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
|
||||
reviewers:
|
||||
- sftim
|
||||
content_type: tutorial
|
||||
weight: 21
|
||||
card:
|
||||
name: tutorials
|
||||
weight: 31
|
||||
title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:
|
||||
|
||||
* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
|
||||
* Elasticsearch and Kibana
|
||||
* Filebeat
|
||||
* Metricbeat
|
||||
* Packetbeat
|
||||
|
||||
## {{% heading "objectives" %}}
|
||||
|
||||
* Start up the PHP Guestbook with Redis.
|
||||
* Install kube-state-metrics.
|
||||
* Create a Kubernetes Secret.
|
||||
* Deploy the Beats.
|
||||
* View dashboards of your logs and metrics.
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}}
|
||||
{{< version-check >}}
|
||||
|
||||
Additionally you need:
|
||||
|
||||
* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.
|
||||
|
||||
* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co),
|
||||
run the [downloaded files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html)
|
||||
on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).
|
||||
|
||||
<!-- lessoncontent -->
|
||||
|
||||
## Start up the PHP Guestbook with Redis
|
||||
|
||||
This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.
|
||||
|
||||
## Add a Cluster role binding
|
||||
|
||||
Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
|
||||
|
||||
```shell
|
||||
kubectl create clusterrolebinding cluster-admin-binding \
|
||||
--clusterrole=cluster-admin --user=<your email associated with the k8s provider account>
|
||||
```
|
||||
|
||||
## Install kube-state-metrics
|
||||
|
||||
Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.
|
||||
|
||||
```shell
|
||||
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
|
||||
kubectl apply -f kube-state-metrics/examples/standard
|
||||
```
|
||||
|
||||
### Check to see if kube-state-metrics is running
|
||||
|
||||
```shell
|
||||
kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
kube-state-metrics-89d656bf8-vdthm 1/1 Running 0 21s
|
||||
```
|
||||
|
||||
## Clone the Elastic examples GitHub repo
|
||||
|
||||
```shell
|
||||
git clone https://github.com/elastic/examples.git
|
||||
```
|
||||
|
||||
The rest of the commands will reference files in the `examples/beats-k8s-send-anywhere` directory, so change dir there:
|
||||
|
||||
```shell
|
||||
cd examples/beats-k8s-send-anywhere
|
||||
```
|
||||
|
||||
## Create a Kubernetes Secret
|
||||
|
||||
A Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.
|
||||
|
||||
{{< note >}}
|
||||
There are two sets of steps here, one for *self managed* Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the *managed service* Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
|
||||
{{< /note >}}
|
||||
|
||||
{{< tabs name="tab_with_md" >}}
|
||||
{{% tab name="Self Managed" %}}
|
||||
|
||||
### Self managed
|
||||
|
||||
Switch to the **Managed service** tab if you are connecting to Elasticsearch Service in Elastic Cloud.
|
||||
|
||||
### Set the credentials
|
||||
|
||||
There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:
|
||||
|
||||
1. `ELASTICSEARCH_HOSTS`
|
||||
1. `ELASTICSEARCH_PASSWORD`
|
||||
1. `ELASTICSEARCH_USERNAME`
|
||||
1. `KIBANA_HOST`
|
||||
|
||||
Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897))
|
||||
|
||||
#### `ELASTICSEARCH_HOSTS`
|
||||
|
||||
1. A nodeGroup from the Elastic Elasticsearch Helm Chart:
|
||||
|
||||
```
|
||||
["http://elasticsearch-master.default.svc.cluster.local:9200"]
|
||||
```
|
||||
|
||||
1. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:
|
||||
|
||||
```
|
||||
["http://host.docker.internal:9200"]
|
||||
```
|
||||
|
||||
1. Two Elasticsearch nodes running in VMs or on physical hardware:
|
||||
|
||||
```
|
||||
["http://host1.example.com:9200", "http://host2.example.com:9200"]
|
||||
```
|
||||
|
||||
Edit `ELASTICSEARCH_HOSTS`:
|
||||
|
||||
```shell
|
||||
vi ELASTICSEARCH_HOSTS
|
||||
```
|
||||
|
||||
#### `ELASTICSEARCH_PASSWORD`
|
||||
|
||||
Just the password; no whitespace, quotes, `<` or `>`:
|
||||
|
||||
```
|
||||
<yoursecretpassword>
|
||||
```
|
||||
|
||||
Edit `ELASTICSEARCH_PASSWORD`:
|
||||
|
||||
```shell
|
||||
vi ELASTICSEARCH_PASSWORD
|
||||
```
|
||||
|
||||
#### `ELASTICSEARCH_USERNAME`
|
||||
|
||||
Just the username; no whitespace, quotes, `<` or `>`:
|
||||
|
||||
```
|
||||
<your ingest username for Elasticsearch>
|
||||
```
|
||||
|
||||
Edit `ELASTICSEARCH_USERNAME`:
|
||||
|
||||
```shell
|
||||
vi ELASTICSEARCH_USERNAME
|
||||
```
|
||||
|
||||
#### `KIBANA_HOST`
|
||||
|
||||
1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain `default` refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:
|
||||
|
||||
```
|
||||
"kibana-kibana.default.svc.cluster.local:5601"
|
||||
```
|
||||
|
||||
1. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:
|
||||
|
||||
```
|
||||
"host.docker.internal:5601"
|
||||
```
|
||||
1. A Kibana instance running in a VM or on physical hardware:
|
||||
|
||||
```
|
||||
"host1.example.com:5601"
|
||||
```
|
||||
|
||||
Edit `KIBANA_HOST`:
|
||||
|
||||
```shell
|
||||
vi KIBANA_HOST
|
||||
```
|
||||
|
||||
### Create a Kubernetes Secret
|
||||
|
||||
This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:
|
||||
|
||||
```shell
|
||||
kubectl create secret generic dynamic-logging \
|
||||
--from-file=./ELASTICSEARCH_HOSTS \
|
||||
--from-file=./ELASTICSEARCH_PASSWORD \
|
||||
--from-file=./ELASTICSEARCH_USERNAME \
|
||||
--from-file=./KIBANA_HOST \
|
||||
--namespace=kube-system
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Managed service" %}}
|
||||
|
||||
## Managed service
|
||||
|
||||
This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with [Deploy the Beats](#deploy-the-beats).
|
||||
|
||||
### Set the credentials
|
||||
|
||||
There are two files to edit to create a Kubernetes Secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:
|
||||
|
||||
1. `ELASTIC_CLOUD_AUTH`
|
||||
1. `ELASTIC_CLOUD_ID`
|
||||
|
||||
Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:
|
||||
|
||||
#### `ELASTIC_CLOUD_ID`
|
||||
|
||||
```
|
||||
devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
|
||||
```
|
||||
|
||||
#### `ELASTIC_CLOUD_AUTH`
|
||||
|
||||
Just the username, a colon (`:`), and the password, no whitespace or quotes:
|
||||
|
||||
```
|
||||
elastic:VFxJJf9Tjwer90wnfTghsn8w
|
||||
```
|
||||
|
||||
### Edit the required files:
|
||||
|
||||
```shell
|
||||
vi ELASTIC_CLOUD_ID
|
||||
vi ELASTIC_CLOUD_AUTH
|
||||
```
|
||||
|
||||
### Create a Kubernetes Secret
|
||||
|
||||
This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:
|
||||
|
||||
```shell
|
||||
kubectl create secret generic dynamic-logging \
|
||||
--from-file=./ELASTIC_CLOUD_ID \
|
||||
--from-file=./ELASTIC_CLOUD_AUTH \
|
||||
--namespace=kube-system
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Deploy the Beats
|
||||
|
||||
Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.
|
||||
|
||||
### About Filebeat
|
||||
|
||||
Filebeat will collect logs from the Kubernetes nodes and the containers running in each pod running on those nodes. Filebeat is deployed as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it will watch for new start/stop events.
|
||||
|
||||
Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file `filebeat-kubernetes.yaml`:
|
||||
|
||||
```yaml
|
||||
- condition.contains:
|
||||
kubernetes.labels.app: redis
|
||||
config:
|
||||
- module: redis
|
||||
log:
|
||||
input:
|
||||
type: docker
|
||||
containers.ids:
|
||||
- ${data.kubernetes.container.id}
|
||||
slowlog:
|
||||
enabled: true
|
||||
var.hosts: ["${data.host}:${data.port}"]
|
||||
```
|
||||
|
||||
This configures Filebeat to apply the Filebeat module `redis` when a container is detected with a label `app` containing the string `redis`. The redis module has the ability to collect the `log` stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module has the ability to collect Redis `slowlog` entries by connecting to the proper pod host and port, which is provided in the container metadata.
|
||||
|
||||
### Deploy Filebeat:
|
||||
|
||||
```shell
|
||||
kubectl create -f filebeat-kubernetes.yaml
|
||||
```
|
||||
|
||||
#### Verify
|
||||
|
||||
```shell
|
||||
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
|
||||
```
|
||||
|
||||
### About Metricbeat
|
||||
|
||||
Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file `metricbeat-kubernetes.yaml`:
|
||||
|
||||
```yaml
|
||||
- condition.equals:
|
||||
kubernetes.labels.tier: backend
|
||||
config:
|
||||
- module: redis
|
||||
metricsets: ["info", "keyspace"]
|
||||
period: 10s
|
||||
|
||||
# Redis hosts
|
||||
hosts: ["${data.host}:${data.port}"]
|
||||
```
|
||||
|
||||
This configures Metricbeat to apply the Metricbeat module `redis` when a container is detected with a label `tier` equal to the string `backend`. The `redis` module has the ability to collect the `info` and `keyspace` metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.
|
||||
|
||||
### Deploy Metricbeat
|
||||
|
||||
```shell
|
||||
kubectl create -f metricbeat-kubernetes.yaml
|
||||
```
|
||||
|
||||
#### Verify
|
||||
|
||||
```shell
|
||||
kubectl get pods -n kube-system -l k8s-app=metricbeat
|
||||
```
|
||||
|
||||
### About Packetbeat
|
||||
|
||||
Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.
|
||||
|
||||
{{< note >}}
|
||||
If you are running a service on a non-standard port, add that port number to the appropriate type in `filebeat.yaml` and delete/re-create the Packetbeat DaemonSet.
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
|
||||
packetbeat.interfaces.device: any
|
||||
|
||||
packetbeat.protocols:
|
||||
- type: dns
|
||||
ports: [53]
|
||||
include_authorities: true
|
||||
include_additionals: true
|
||||
|
||||
- type: http
|
||||
ports: [80, 8000, 8080, 9200]
|
||||
|
||||
- type: mysql
|
||||
ports: [3306]
|
||||
|
||||
- type: redis
|
||||
ports: [6379]
|
||||
|
||||
packetbeat.flows:
|
||||
timeout: 30s
|
||||
period: 10s
|
||||
```
|
||||
|
||||
#### Deploy Packetbeat
|
||||
|
||||
```shell
|
||||
kubectl create -f packetbeat-kubernetes.yaml
|
||||
```
|
||||
|
||||
#### Verify
|
||||
|
||||
```shell
|
||||
kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
|
||||
```
|
||||
|
||||
## View in Kibana
|
||||
|
||||
Open Kibana in your browser and then open the **Dashboard** application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, deployments, etc.
|
||||
|
||||
Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.
|
||||
|
||||
Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.
|
||||
|
||||
To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
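A hedged sketch of that idea; the file name, ConfigMap name, and mount details are assumptions, and you would still need to mount the file into the frontend's Apache configuration directory and re-deploy:

```shell
# Minimal mod_status configuration exposing /server-status.
cat > status.conf <<'EOF'
<Location "/server-status">
    SetHandler server-status
</Location>
EOF

# Package it as a ConfigMap that the guestbook frontend Deployment could mount.
kubectl create configmap apache-mod-status --from-file=status.conf
```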
|
||||
|
||||
## Scale your Deployments and see new pods being monitored
|
||||
|
||||
List the existing Deployments:
|
||||
|
||||
```shell
|
||||
kubectl get deployments
|
||||
```
|
||||
|
||||
The output:
|
||||
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
frontend 3/3 3 3 3h27m
|
||||
redis-master 1/1 1 1 3h27m
|
||||
redis-slave 2/2 2 2 3h27m
|
||||
```
|
||||
|
||||
Scale the frontend down to two pods:
|
||||
|
||||
```shell
|
||||
kubectl scale --replicas=2 deployment/frontend
|
||||
```
|
||||
|
||||
The output:
|
||||
|
||||
```
|
||||
deployment.extensions/frontend scaled
|
||||
```
|
||||
|
||||
Scale the frontend back up to three pods:
|
||||
|
||||
```shell
|
||||
kubectl scale --replicas=3 deployment/frontend
|
||||
```
|
||||
|
||||
## View the changes in Kibana
|
||||
|
||||
See the screenshot below: add the indicated filters and then add the columns to the view. You can see the ScalingReplicaSet entry that is marked; following from there to the top of the list of events shows the image being pulled, the volumes mounted, the Pod starting, and so on.
|
||||

|
||||
|
||||
## {{% heading "cleanup" %}}
|
||||
|
||||
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
|
||||
|
||||
1. Run the following commands to delete all Pods, Deployments, and Services.
|
||||
|
||||
```shell
|
||||
kubectl delete deployment -l app=redis
|
||||
kubectl delete service -l app=redis
|
||||
kubectl delete deployment -l app=guestbook
|
||||
kubectl delete service -l app=guestbook
|
||||
kubectl delete -f filebeat-kubernetes.yaml
|
||||
kubectl delete -f metricbeat-kubernetes.yaml
|
||||
kubectl delete -f packetbeat-kubernetes.yaml
|
||||
kubectl delete secret dynamic-logging -n kube-system
|
||||
```
|
||||
|
||||
1. Query the list of Pods to verify that no Pods are running:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
The response should be this:
|
||||
|
||||
```
|
||||
No resources found.
|
||||
```
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Learn about [tools for monitoring resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
|
||||
* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
|
||||
* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
|
||||
* Read more about [troubleshooting applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
|
||||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: "Example: Deploying PHP Guestbook application with Redis"
|
||||
title: "Example: Deploying PHP Guestbook application with MongoDB"
|
||||
reviewers:
|
||||
- ahmetb
|
||||
content_type: tutorial
|
||||
|
@ -7,22 +7,19 @@ weight: 20
|
|||
card:
|
||||
name: tutorials
|
||||
weight: 30
|
||||
title: "Stateless Example: PHP Guestbook with Redis"
|
||||
title: "Stateless Example: PHP Guestbook with MongoDB"
|
||||
min-kubernetes-server-version: v1.14
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
This tutorial shows you how to build and deploy a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
|
||||
This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
|
||||
|
||||
* A single-instance [Redis](https://redis.io/) master to store guestbook entries
|
||||
* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
|
||||
* A single-instance [MongoDB](https://www.mongodb.com/) to store guestbook entries
|
||||
* Multiple web frontend instances
|
||||
|
||||
|
||||
|
||||
## {{% heading "objectives" %}}
|
||||
|
||||
* Start up a Redis master.
|
||||
* Start up Redis slaves.
|
||||
* Start up a Mongo database.
|
||||
* Start up the guestbook frontend.
|
||||
* Expose and view the Frontend Service.
|
||||
* Clean up.
|
||||
|
@ -39,24 +36,28 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica
|
|||
|
||||
<!-- lessoncontent -->
|
||||
|
||||
## Start up the Redis Master
|
||||
## Start up the Mongo Database
|
||||
|
||||
The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
|
||||
The guestbook application uses MongoDB to store its data.
|
||||
|
||||
### Creating the Redis Master Deployment
|
||||
### Creating the Mongo Deployment
|
||||
|
||||
The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
|
||||
The manifest file, included below, specifies a Deployment controller that runs a single replica MongoDB Pod.
|
||||
|
||||
{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
|
||||
{{< codenew file="application/guestbook/mongo-deployment.yaml" >}}
|
||||
|
||||
1. Launch a terminal window in the directory where you downloaded the manifest files.
|
||||
1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:
|
||||
1. Apply the MongoDB Deployment from the `mongo-deployment.yaml` file:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
|
||||
```
|
||||
<!---
|
||||
for local testing of the content via relative file path
|
||||
kubectl apply -f ./content/en/examples/application/guestbook/mongo-deployment.yaml
|
||||
-->
|
||||
|
||||
1. Query the list of Pods to verify that the Redis Master Pod is running:
|
||||
1. Query the list of Pods to verify that the MongoDB Pod is running:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
|
@ -66,32 +67,33 @@ The manifest file, included below, specifies a Deployment controller that runs a
|
|||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-master-1068406935-3lswp 1/1 Running 0 28s
|
||||
mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s
|
||||
```
|
||||
|
||||
1. Run the following command to view the logs from the Redis Master Pod:
|
||||
1. Run the following command to view the logs from the MongoDB Deployment:
|
||||
|
||||
```shell
|
||||
kubectl logs -f POD-NAME
|
||||
kubectl logs -f deployment/mongo
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Replace POD-NAME with the name of your Pod.
|
||||
{{< /note >}}
|
||||
### Creating the MongoDB Service
|
||||
|
||||
### Creating the Redis Master Service
|
||||
The guestbook application needs to communicate with MongoDB to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the MongoDB Pod. A Service defines a policy for accessing the Pods.
|
||||
|
||||
The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
|
||||
{{< codenew file="application/guestbook/mongo-service.yaml" >}}
|
||||
|
||||
{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
|
||||
|
||||
1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
|
||||
1. Apply the MongoDB Service from the following `mongo-service.yaml` file:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
|
||||
```
|
||||
|
||||
1. Query the list of Services to verify that the Redis Master Service is running:
|
||||
<!---
|
||||
for local testing of the content via relative file path
|
||||
kubectl apply -f ./content/en/examples/application/guestbook/mongo-service.yaml
|
||||
-->
|
||||
|
||||
1. Query the list of Services to verify that the MongoDB Service is running:
|
||||
|
||||
```shell
|
||||
kubectl get service
|
||||
|
@ -102,77 +104,17 @@ The guestbook application needs to communicate to the Redis master to write its
|
|||
```shell
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
|
||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 8s
|
||||
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 8s
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
|
||||
This manifest file creates a Service named `mongo` with a set of labels that match the labels previously defined, so the Service routes network traffic to the MongoDB Pod.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
## Start up the Redis Slaves
|
||||
|
||||
Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.
|
||||
|
||||
### Creating the Redis Slave Deployment
|
||||
|
||||
Deployments scale based off of the configurations set in the manifest file. In this case, the Deployment object specifies two replicas.
|
||||
|
||||
If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
|
||||
|
||||
{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}
|
||||
|
||||
1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
|
||||
```
|
||||
|
||||
1. Query the list of Pods to verify that the Redis Slave Pods are running:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
The response should be similar to this:
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-master-1068406935-3lswp 1/1 Running 0 1m
|
||||
redis-slave-2005841000-fpvqc 0/1 ContainerCreating 0 6s
|
||||
redis-slave-2005841000-phfv9 0/1 ContainerCreating 0 6s
|
||||
```
|
||||
|
||||
### Creating the Redis Slave Service
|
||||
|
||||
The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
|
||||
|
||||
{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}
|
||||
|
||||
1. Apply the Redis Slave Service from the following `redis-slave-service.yaml` file:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
|
||||
```
|
||||
|
||||
1. Query the list of Services to verify that the Redis slave service is running:
|
||||
|
||||
```shell
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
The response should be similar to this:
|
||||
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2m
|
||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 1m
|
||||
redis-slave ClusterIP 10.0.0.223 <none> 6379/TCP 6s
|
||||
```
|
||||
|
||||
## Set up and Expose the Guestbook Frontend
|
||||
|
||||
The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `redis-master` Service for write requests and the `redis-slave` service for Read requests.
|
||||
The guestbook application has a web frontend, written in PHP, that serves HTTP requests. It is configured to connect to the `mongo` Service to store guestbook entries.
|
||||
|
||||
### Creating the Guestbook Frontend Deployment
|
||||
|
||||
|
@ -184,6 +126,11 @@ The guestbook application has a web frontend serving the HTTP requests written i
|
|||
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
|
||||
```
|
||||
|
||||
<!---
|
||||
for local testing of the content via relative file path
|
||||
kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment.yaml
|
||||
-->
|
||||
|
||||
1. Query the list of Pods to verify that the three frontend replicas are running:
|
||||
|
||||
```shell
|
||||
|
@ -201,12 +148,12 @@ The guestbook application has a web frontend serving the HTTP requests written i
|
|||
|
||||
### Creating the Frontend Service
|
||||
|
||||
The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
|
||||
The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
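
For example, you can confirm the type and cluster-internal IP assigned to the `mongo` Service with a `kubectl` query (shown only as an illustration; your IP will differ):

```shell
# Print the Service type and its ClusterIP
kubectl get service mongo -o jsonpath='{.spec.type} {.spec.clusterIP}{"\n"}'
```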
|
||||
|
||||
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
|
||||
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user, you can use `kubectl port-forward` to access the Service even though it uses a `ClusterIP`.
|
||||
|
||||
{{< note >}}
|
||||
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
|
||||
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use one, uncomment `type: LoadBalancer`.
|
||||
{{< /note >}}
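
For reference, with that line uncommented the relevant part of the frontend Service spec would look roughly like this (a sketch based on the `frontend-service.yaml` used in this tutorial):

```yaml
spec:
  # if your cluster supports it, this requests an external load-balanced IP
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
```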
|
||||
|
||||
{{< codenew file="application/guestbook/frontend-service.yaml" >}}
|
||||
|
@ -217,6 +164,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
|
|||
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
|
||||
```
|
||||
|
||||
<!---
|
||||
for local testing of the content via relative file path
|
||||
kubectl apply -f ./content/en/examples/application/guestbook/frontend-service.yaml
|
||||
-->
|
||||
|
||||
1. Query the list of Services to verify that the frontend Service is running:
|
||||
|
||||
```shell
|
||||
|
@ -227,29 +179,27 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
|
|||
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
frontend NodePort 10.0.0.112 <none> 80:31323/TCP 6s
|
||||
frontend ClusterIP 10.0.0.112 <none> 80/TCP 6s
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
|
||||
redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 2m
|
||||
redis-slave ClusterIP 10.0.0.223 <none> 6379/TCP 1m
|
||||
mongo ClusterIP 10.0.0.151 <none> 6379/TCP 2m
|
||||
```
|
||||
|
||||
### Viewing the Frontend Service via `NodePort`
|
||||
### Viewing the Frontend Service via `kubectl port-forward`
|
||||
|
||||
If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your Guestbook.
|
||||
|
||||
1. Run the following command to get the IP address for the frontend Service.
|
||||
1. Run the following command to forward port `8080` on your local machine to port `80` on the service.
|
||||
|
||||
```shell
|
||||
minikube service frontend --url
|
||||
kubectl port-forward svc/frontend 8080:80
|
||||
```
|
||||
|
||||
The response should be similar to this:
|
||||
|
||||
```
|
||||
http://192.168.99.100:31323
|
||||
Forwarding from 127.0.0.1:8080 -> 80
|
||||
Forwarding from [::1]:8080 -> 80
|
||||
```
|
||||
|
||||
1. Copy the IP address, and load the page in your browser to view your guestbook.
|
||||
1. Load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.
|
||||
|
||||
### Viewing the Frontend Service via `LoadBalancer`
|
||||
|
||||
|
@ -295,9 +245,7 @@ You can scale up or down as needed because your servers are defined as a Service
|
|||
frontend-3823415956-k22zn 1/1 Running 0 54m
|
||||
frontend-3823415956-w9gbt 1/1 Running 0 54m
|
||||
frontend-3823415956-x2pld 1/1 Running 0 5s
|
||||
redis-master-1068406935-3lswp 1/1 Running 0 56m
|
||||
redis-slave-2005841000-fpvqc 1/1 Running 0 55m
|
||||
redis-slave-2005841000-phfv9 1/1 Running 0 55m
|
||||
mongo-1068406935-3lswp 1/1 Running 0 56m
|
||||
```
|
||||
|
||||
1. Run the following command to scale down the number of frontend Pods:
|
||||
|
@ -318,9 +266,7 @@ You can scale up or down as needed because your servers are defined as a Service
|
|||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-3823415956-k22zn 1/1 Running 0 1h
|
||||
frontend-3823415956-w9gbt 1/1 Running 0 1h
|
||||
redis-master-1068406935-3lswp 1/1 Running 0 1h
|
||||
redis-slave-2005841000-fpvqc 1/1 Running 0 1h
|
||||
redis-slave-2005841000-phfv9 1/1 Running 0 1h
|
||||
mongo-1068406935-3lswp 1/1 Running 0 1h
|
||||
```
|
||||
|
||||
|
||||
|
@ -332,19 +278,17 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
|
|||
1. Run the following commands to delete all Pods, Deployments, and Services.
|
||||
|
||||
```shell
|
||||
kubectl delete deployment -l app=redis
|
||||
kubectl delete service -l app=redis
|
||||
kubectl delete deployment -l app=guestbook
|
||||
kubectl delete service -l app=guestbook
|
||||
kubectl delete deployment -l app.kubernetes.io/name=mongo
|
||||
kubectl delete service -l app.kubernetes.io/name=mongo
|
||||
kubectl delete deployment -l app.kubernetes.io/name=guestbook
|
||||
kubectl delete service -l app.kubernetes.io/name=guestbook
|
||||
```
|
||||
|
||||
The responses should be:
|
||||
|
||||
```
|
||||
deployment.apps "redis-master" deleted
|
||||
deployment.apps "redis-slave" deleted
|
||||
service "redis-master" deleted
|
||||
service "redis-slave" deleted
|
||||
deployment.apps "mongo" deleted
|
||||
service "mongo" deleted
|
||||
deployment.apps "frontend" deleted
|
||||
service "frontend" deleted
|
||||
```
|
||||
|
@ -365,7 +309,6 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Add [ELK logging and monitoring](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
|
||||
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
|
||||
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
|
||||
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
|
||||
|
|
|
@ -3,22 +3,24 @@ kind: Deployment
|
|||
metadata:
|
||||
name: frontend
|
||||
labels:
|
||||
app: guestbook
|
||||
app.kubernetes.io/name: guestbook
|
||||
app.kubernetes.io/component: frontend
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
app.kubernetes.io/name: guestbook
|
||||
app.kubernetes.io/component: frontend
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
app.kubernetes.io/name: guestbook
|
||||
app.kubernetes.io/component: frontend
|
||||
spec:
|
||||
containers:
|
||||
- name: php-redis
|
||||
image: gcr.io/google-samples/gb-frontend:v4
|
||||
- name: guestbook
|
||||
image: paulczar/gb-frontend:v5
|
||||
# image: gcr.io/google-samples/gb-frontend:v4
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
|
@ -26,13 +28,5 @@ spec:
|
|||
env:
|
||||
- name: GET_HOSTS_FROM
|
||||
value: dns
|
||||
# Using `GET_HOSTS_FROM=dns` requires your cluster to
|
||||
# provide a dns service. As of Kubernetes 1.3, DNS is a built-in
|
||||
# service launched automatically. However, if the cluster you are using
|
||||
# does not have a built-in DNS service, you can instead
|
||||
# access an environment variable to find the master
|
||||
# service's host. To do so, comment out the 'value: dns' line above, and
|
||||
# uncomment the line below:
|
||||
# value: env
|
||||
ports:
|
||||
- containerPort: 80
|
||||
|
|
|
@ -3,16 +3,14 @@ kind: Service
|
|||
metadata:
|
||||
name: frontend
|
||||
labels:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
app.kubernetes.io/name: guestbook
|
||||
app.kubernetes.io/component: frontend
|
||||
spec:
|
||||
# comment or delete the following line if you want to use a LoadBalancer
|
||||
type: NodePort
|
||||
# if your cluster supports it, uncomment the following to automatically create
|
||||
# an external load-balanced IP for the frontend service.
|
||||
# type: LoadBalancer
|
||||
ports:
|
||||
- port: 80
|
||||
selector:
|
||||
app: guestbook
|
||||
tier: frontend
|
||||
app.kubernetes.io/name: guestbook
|
||||
app.kubernetes.io/component: frontend
|
||||
|
|
|
@ -0,0 +1,31 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: mongo
|
||||
labels:
|
||||
app.kubernetes.io/name: mongo
|
||||
app.kubernetes.io/component: backend
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: mongo
|
||||
app.kubernetes.io/component: backend
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: mongo
|
||||
app.kubernetes.io/component: backend
|
||||
spec:
|
||||
containers:
|
||||
- name: mongo
|
||||
image: mongo:4.2
|
||||
args:
|
||||
- --bind_ip
|
||||
- 0.0.0.0
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
ports:
|
||||
- containerPort: 27017
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: mongo
|
||||
labels:
|
||||
app.kubernetes.io/name: mongo
|
||||
app.kubernetes.io/component: backend
|
||||
spec:
|
||||
ports:
|
||||
- port: 27017
|
||||
targetPort: 27017
|
||||
selector:
|
||||
app.kubernetes.io/name: mongo
|
||||
app.kubernetes.io/component: backend
|
|
@ -1,29 +0,0 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: redis-master
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
role: master
|
||||
tier: backend
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
role: master
|
||||
tier: backend
|
||||
spec:
|
||||
containers:
|
||||
- name: master
|
||||
image: k8s.gcr.io/redis:e2e # or just image: redis
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
ports:
|
||||
- containerPort: 6379
|
|
@ -1,17 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: redis-master
|
||||
labels:
|
||||
app: redis
|
||||
role: master
|
||||
tier: backend
|
||||
spec:
|
||||
ports:
|
||||
- name: redis
|
||||
port: 6379
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
role: master
|
||||
tier: backend
|
|
@ -1,40 +0,0 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: redis-slave
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
||||
spec:
|
||||
containers:
|
||||
- name: slave
|
||||
image: gcr.io/google_samples/gb-redisslave:v3
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
env:
|
||||
- name: GET_HOSTS_FROM
|
||||
value: dns
|
||||
# Using `GET_HOSTS_FROM=dns` requires your cluster to
|
||||
# provide a dns service. As of Kubernetes 1.3, DNS is a built-in
|
||||
# service launched automatically. However, if the cluster you are using
|
||||
# does not have a built-in DNS service, you can instead
|
||||
# access an environment variable to find the master
|
||||
# service's host. To do so, comment out the 'value: dns' line above, and
|
||||
# uncomment the line below:
|
||||
# value: env
|
||||
ports:
|
||||
- containerPort: 6379
|
|
@ -1,15 +0,0 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: redis-slave
|
||||
labels:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
selector:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
|
@ -148,6 +148,11 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {
|
|||
}
|
||||
|
||||
func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
||||
podValidationOptions := validation.PodValidationOptions{
|
||||
AllowMultipleHugePageResources: true,
|
||||
AllowDownwardAPIHugePages: true,
|
||||
}
|
||||
|
||||
// Enable CustomPodDNS for testing
|
||||
// feature.DefaultFeatureGate.Set("CustomPodDNS=true")
|
||||
switch t := obj.(type) {
|
||||
|
@ -182,7 +187,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
opts := validation.PodValidationOptions{
|
||||
AllowMultipleHugePageResources: true,
|
||||
}
|
||||
errors = validation.ValidatePod(t, opts)
|
||||
errors = validation.ValidatePodCreate(t, opts)
|
||||
case *api.PodList:
|
||||
for i := range t.Items {
|
||||
errors = append(errors, validateObject(&t.Items[i])...)
|
||||
|
@ -191,12 +196,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = validation.ValidatePodTemplate(t)
|
||||
errors = validation.ValidatePodTemplate(t, podValidationOptions)
|
||||
case *api.ReplicationController:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = validation.ValidateReplicationController(t)
|
||||
errors = validation.ValidateReplicationController(t, podValidationOptions)
|
||||
case *api.ReplicationControllerList:
|
||||
for i := range t.Items {
|
||||
errors = append(errors, validateObject(&t.Items[i])...)
|
||||
|
@ -215,7 +220,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = validation.ValidateService(t, true)
|
||||
// handle clusterIPs, logic copied from service strategy
|
||||
if len(t.Spec.ClusterIP) > 0 && len(t.Spec.ClusterIPs) == 0 {
|
||||
t.Spec.ClusterIPs = []string{t.Spec.ClusterIP}
|
||||
}
|
||||
errors = validation.ValidateService(t)
|
||||
case *api.ServiceAccount:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
|
@ -250,12 +259,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = apps_validation.ValidateDaemonSet(t)
|
||||
errors = apps_validation.ValidateDaemonSet(t, podValidationOptions)
|
||||
case *apps.Deployment:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = apps_validation.ValidateDeployment(t)
|
||||
errors = apps_validation.ValidateDeployment(t, podValidationOptions)
|
||||
case *networking.Ingress:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
|
@ -265,18 +274,30 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
|
||||
}
|
||||
errors = networking_validation.ValidateIngressCreate(t, gv)
|
||||
case *networking.IngressClass:
|
||||
/*
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
gv := schema.GroupVersion{
|
||||
Group: networking.GroupName,
|
||||
Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
|
||||
}
|
||||
*/
|
||||
errors = networking_validation.ValidateIngressClass(t)
|
||||
|
||||
case *policy.PodSecurityPolicy:
|
||||
errors = policy_validation.ValidatePodSecurityPolicy(t)
|
||||
case *apps.ReplicaSet:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = apps_validation.ValidateReplicaSet(t)
|
||||
errors = apps_validation.ValidateReplicaSet(t, podValidationOptions)
|
||||
case *batch.CronJob:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = batch_validation.ValidateCronJob(t)
|
||||
errors = batch_validation.ValidateCronJob(t, podValidationOptions)
|
||||
case *networking.NetworkPolicy:
|
||||
if t.Namespace == "" {
|
||||
t.Namespace = api.NamespaceDefault
|
||||
|
@ -287,6 +308,9 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
|
|||
t.Namespace = api.NamespaceDefault
|
||||
}
|
||||
errors = policy_validation.ValidatePodDisruptionBudget(t)
|
||||
case *rbac.ClusterRole:
|
||||
// clusterrole does not accept namespace
|
||||
errors = rbac_validation.ValidateClusterRole(t)
|
||||
case *rbac.ClusterRoleBinding:
|
||||
// clusterrolebinding does not accept namespace
|
||||
errors = rbac_validation.ValidateClusterRoleBinding(t)
|
||||
|
@ -414,6 +438,7 @@ func TestExampleObjectSchemas(t *testing.T) {
|
|||
"storagelimits": {&api.LimitRange{}},
|
||||
},
|
||||
"admin/sched": {
|
||||
"clusterrole": {&rbac.ClusterRole{}},
|
||||
"my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
|
||||
"pod1": {&api.Pod{}},
|
||||
"pod2": {&api.Pod{}},
|
||||
|
@ -539,6 +564,7 @@ func TestExampleObjectSchemas(t *testing.T) {
|
|||
"dapi-envars-pod": {&api.Pod{}},
|
||||
"dapi-volume": {&api.Pod{}},
|
||||
"dapi-volume-resources": {&api.Pod{}},
|
||||
"dependent-envars": {&api.Pod{}},
|
||||
"envars": {&api.Pod{}},
|
||||
"pod-multiple-secret-env-variable": {&api.Pod{}},
|
||||
"pod-secret-envFrom": {&api.Pod{}},
|
||||
|
@ -596,20 +622,29 @@ func TestExampleObjectSchemas(t *testing.T) {
|
|||
"load-balancer-example": {&apps.Deployment{}},
|
||||
},
|
||||
"service/access": {
|
||||
"frontend": {&api.Service{}, &apps.Deployment{}},
|
||||
"backend-deployment": {&apps.Deployment{}},
|
||||
"backend-service": {&api.Service{}},
|
||||
"frontend-deployment": {&apps.Deployment{}},
|
||||
"frontend-service": {&api.Service{}},
|
||||
"hello-application": {&apps.Deployment{}},
|
||||
"hello-service": {&api.Service{}},
|
||||
"hello": {&apps.Deployment{}},
|
||||
},
|
||||
"service/networking": {
|
||||
"curlpod": {&apps.Deployment{}},
|
||||
"custom-dns": {&api.Pod{}},
|
||||
"dual-stack-default-svc": {&api.Service{}},
|
||||
"dual-stack-ipv4-svc": {&api.Service{}},
|
||||
"dual-stack-ipv6-lb-svc": {&api.Service{}},
|
||||
"dual-stack-ipfamilies-ipv6": {&api.Service{}},
|
||||
"dual-stack-ipv6-svc": {&api.Service{}},
|
||||
"dual-stack-prefer-ipv6-lb-svc": {&api.Service{}},
|
||||
"dual-stack-preferred-ipfamilies-svc": {&api.Service{}},
|
||||
"dual-stack-preferred-svc": {&api.Service{}},
|
||||
"external-lb": {&networking.IngressClass{}},
|
||||
"example-ingress": {&networking.Ingress{}},
|
||||
"hostaliases-pod": {&api.Pod{}},
|
||||
"ingress": {&networking.Ingress{}},
|
||||
"ingress-resource-backend": {&networking.Ingress{}},
|
||||
"ingress-wildcard-host": {&networking.Ingress{}},
|
||||
"minimal-ingress": {&networking.Ingress{}},
|
||||
"name-virtual-host-ingress": {&networking.Ingress{}},
|
||||
"name-virtual-host-ingress-no-third-host": {&networking.Ingress{}},
|
||||
"network-policy-allow-all-egress": {&networking.NetworkPolicy{}},
|
||||
"network-policy-allow-all-ingress": {&networking.NetworkPolicy{}},
|
||||
"network-policy-default-deny-egress": {&networking.NetworkPolicy{}},
|
||||
|
@ -619,6 +654,9 @@ func TestExampleObjectSchemas(t *testing.T) {
|
|||
"nginx-secure-app": {&api.Service{}, &apps.Deployment{}},
|
||||
"nginx-svc": {&api.Service{}},
|
||||
"run-my-nginx": {&apps.Deployment{}},
|
||||
"simple-fanout-example": {&networking.Ingress{}},
|
||||
"test-ingress": {&networking.Ingress{}},
|
||||
"tls-example-ingress": {&networking.Ingress{}},
|
||||
},
|
||||
"windows": {
|
||||
"configmap-pod": {&api.ConfigMap{}, &api.Pod{}},
|
||||
|
|
|
@ -0,0 +1,10 @@
|
|||
apiVersion: v1
|
||||
kind: ResourceQuota
|
||||
metadata:
|
||||
name: pods-cluster-services
|
||||
spec:
|
||||
scopeSelector:
|
||||
matchExpressions:
|
||||
- operator: In
|
||||
scopeName: PriorityClass
|
||||
values: ["cluster-services"]
|
|
@ -24,7 +24,7 @@ Ce Code de conduite s’applique à la fois dans le cadre du projet et dans le c
|
|||
|
||||
Des cas de conduite abusive, de harcèlement ou autre pratique inacceptable ayant cours sur Kubernetes peuvent être signalés en contactant le [comité pour le code de conduite de Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) via l'adresse <conduct@kubernetes.io>. Pour d'autres projets, bien vouloir contacter un responsable de projet CNCF ou notre médiateur, Mishi Choudhary à l'adresse <mishi@linux.com>.
|
||||
|
||||
Ce Code de conduite est inspiré du « Contributor Covenant » (http://contributor-covenant.org) version 1.2.0, disponible à l’adresse http://contributor-covenant.org/version/1/2/0/.
|
||||
Ce Code de conduite est inspiré du « Contributor Covenant » (https://contributor-covenant.org) version 1.2.0, disponible à l’adresse https://contributor-covenant.org/version/1/2/0/.
|
||||
|
||||
### Code de conduite pour les événements de la CNCF
|
||||
|
||||
|
|
|
@ -48,10 +48,10 @@ Suivez les étapes ci-dessous pour commencer et explorer Minikube.
|
|||
Starting local Kubernetes cluster...
|
||||
```
|
||||
|
||||
Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster].(#starting-a-cluster).
|
||||
Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster](#starting-a-cluster).
|
||||
|
||||
2. Vous pouvez maintenant interagir avec votre cluster à l'aide de kubectl.
|
||||
Pour plus d'informations, voir [Interagir avec votre cluster.](#interacting-with-your-cluster).
|
||||
Pour plus d'informations, voir [Interagir avec votre cluster](#interacting-with-your-cluster).
|
||||
|
||||
Créons un déploiement Kubernetes en utilisant une image existante nommée `echoserver`, qui est un serveur HTTP, et exposez-la sur le port 8080 à l’aide de `--port`.
|
||||
|
||||
|
@ -529,5 +529,3 @@ Les contributions, questions et commentaires sont les bienvenus et sont encourag
|
|||
Les développeurs de minikube sont dans le canal #minikube du [Slack](https://kubernetes.slack.com) de Kubernetes (recevoir une invitation [ici](http://slack.kubernetes.io/)).
|
||||
Nous avons également la liste de diffusion [kubernetes-dev Google Groupes](https://groups.google.com/forum/#!forum/kubernetes-dev).
|
||||
Si vous publiez sur la liste, veuillez préfixer votre sujet avec "minikube:".
|
||||
|
||||
|
||||
|
|
|
@ -84,7 +84,7 @@ Vous pouvez télécharger les packages `.deb` depuis [Docker](https://www.docker
|
|||
|
||||
{{< caution >}}
|
||||
Le pilote VM `none` peut entraîner des problèmes de sécurité et de perte de données.
|
||||
Avant d'utiliser `--driver=none`, consultez [cette documentation] (https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
|
||||
Avant d'utiliser `--driver=none`, consultez [cette documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
|
||||
{{</ caution >}}
|
||||
|
||||
Minikube prend également en charge un `vm-driver=podman` similaire au pilote Docker. Podman est exécuté en tant que superutilisateur (utilisateur root), c'est le meilleur moyen de garantir que vos conteneurs ont un accès complet à toutes les fonctionnalités disponibles sur votre système.
|
||||
|
|
|
@ -75,5 +75,5 @@ terhadap dokumentasi Kubernetes, tetapi daftar ini dapat membantumu memulainya.
|
|||
|
||||
- Untuk berkontribusi ke komunitas Kubernetes melalui forum-forum daring seperti Twitter atau Stack Overflow, atau mengetahui tentang pertemuan komunitas (_meetup_) lokal dan acara-acara Kubernetes, kunjungi [situs komunitas Kubernetes](/community/).
|
||||
- Untuk mulai berkontribusi ke pengembangan fitur, baca [_cheatseet_ kontributor](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet).
|
||||
|
||||
- Untuk kontribusi khusus ke halaman Bahasa Indonesia, baca [Dokumentasi Khusus Untuk Translasi Bahasa Indonesia](/docs/contribute/localization_id.md)
|
||||
|
||||
|
|
|
@ -0,0 +1,178 @@
|
|||
---
|
||||
title: Dokumentasi Khusus Untuk Translasi Bahasa Indonesia
|
||||
content_type: concept
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Panduan khusus untuk bergabung ke komunitas SIG DOC Indonesia dan melakukan
|
||||
kontribusi untuk mentranslasikan dokumentasi Kubernetes ke dalam Bahasa
|
||||
Indonesia.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Manajemen _Milestone_ Tim {#manajemen-milestone-tim}
|
||||
|
||||
Secara umum siklus translasi dokumentasi ke Bahasa Indonesia akan dilakukan
|
||||
3 kali dalam setahun (sekitar setiap 4 bulan). Untuk menentukan dan mengevaluasi
|
||||
pencapaian atau _milestone_ dalam kurun waktu tersebut [jadwal rapat daring
|
||||
reguler tim Bahasa Indonesia](https://zoom.us/j/6072809193) dilakukan secara
|
||||
konsisten setiap dua minggu sekali. Dalam [agenda rapat ini](https://docs.google.com/document/d/1Qrj-WUAMA11V6KmcfxJsXcPeWwMbFsyBGV4RGbrSRXY)
|
||||
juga dilakukan pemilihan PR _Wrangler_ untuk dua minggu ke depan. Tugas PR
|
||||
_Wrangler_ tim Bahasa Indonesia serupa dengan PR _Wrangler_ dari proyek
|
||||
_upstream_.
|
||||
|
||||
Target pencapaian atau _milestone_ tim akan dirilis sebagai
|
||||
[_issue tracking_ seperti ini](https://github.com/kubernetes/website/issues/22296)
|
||||
pada Kubernetes GitHub Website setiap 4 bulan. Dan bersama dengan informasi
|
||||
PR _Wrangler_ yang dipilih setiap dua minggu, keduanya akan diumumkan di Slack
|
||||
_channel_ [#kubernetes-docs-id](https://kubernetes.slack.com/archives/CJ1LUCUHM)
|
||||
dari Komunitas Kubernetes.
|
||||
|
||||
## Cara Memulai Translasi
|
||||
|
||||
Untuk menerjemahkan satu halaman Bahasa Inggris ke Bahasa Indonesia, lakukan
|
||||
langkah-langkah berikut ini:
|
||||
|
||||
* Check halaman _issue_ di GitHub dan pastikan tidak ada orang lain yang sudah
|
||||
mengklaim halaman kamu dalam daftar periksa atau komentar-komentar sebelumnya.
|
||||
* Klaim halaman kamu pada _issue_ di GitHub dengan memberikan komentar di bawah
|
||||
dengan nama halaman yang ingin kamu terjemahkan dan ambillah hanya satu halaman
|
||||
dalam satu waktu.
|
||||
* _Fork_ [repo ini](https://github.com/kubernetes/website), buat terjemahan
|
||||
kamu, dan kirimkan PR (_pull request_) dengan label `language/id` (lihat contoh alur kerja Git setelah daftar ini)
|
||||
* Setelah dikirim, pengulas akan memberikan komentar dalam beberapa hari, dan
|
||||
tolong untuk menjawab semua komentar. Direkomendasikan juga untuk melakukan
|
||||
[_squash_](https://github.com/wprig/wprig/wiki/How-to-squash-commits) _commit_
|
||||
kamu dengan pesan _commit_ yang baik.
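
Sebagai gambaran, alur kerja Git untuk langkah-langkah di atas kurang lebih seperti berikut. Nama _branch_, berkas, dan pesan _commit_ di bawah ini hanyalah contoh; silakan sesuaikan dengan halaman yang kamu kerjakan:

```shell
# klon fork kamu dan buat branch baru untuk translasi
git clone git@github.com:<nama-pengguna-github>/website.git
cd website
git checkout -b translasi-halaman-contoh

# lakukan translasi, lalu commit perubahannya
git add content/id/docs/contoh-halaman.md
git commit -m "Translasi halaman contoh ke Bahasa Indonesia"

# kirim branch ke fork kamu, kemudian buat PR dengan label language/id
git push origin translasi-halaman-contoh
```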
|
||||
|
||||
|
||||
## Informasi Acuan Untuk Translasi
|
||||
|
||||
Tidak ada panduan gaya khusus untuk menulis translasi ke bahasa Indonesia.
|
||||
Namun, secara umum kita dapat mengikuti panduan gaya bahasa Inggris dengan
|
||||
beberapa tambahan untuk kata-kata impor yang dicetak miring.
|
||||
|
||||
Harap berkomitmen dengan terjemahan kamu dan pada saat kamu mendapatkan komentar
|
||||
dari pengulas, silahkan atasi sebaik-baiknya. Kami berharap halaman yang
|
||||
diklaim akan diterjemahkan dalam waktu kurang lebih dua minggu. Jika ternyata
|
||||
kamu tidak dapat berkomitmen lagi, beri tahu para pengulas agar mereka dapat
|
||||
memberikan halaman tersebut ke orang lain.
|
||||
|
||||
Beberapa acuan tambahan dalam melakukan translasi silahkan lihat informasi
|
||||
berikut ini:
|
||||
|
||||
### Daftar Glosarium Translasi dari tim SIG DOC Indonesia
|
||||
Untuk kata-kata selengkapnya silahkan baca glosariumnya
|
||||
[disini](#glosarium-indonesia)
|
||||
|
||||
### KBBI
|
||||
Konsultasikan dengan KBBI (Kamus Besar Bahasa Indonesia)
|
||||
[disini](https://kbbi.web.id/) dari
|
||||
[Kemendikbud](https://kbbi.kemdikbud.go.id/).
|
||||
|
||||
### RSNI Glosarium dari Ivan Lanin
|
||||
[RSNI Glosarium](https://github.com/jk8s/sig-docs-id-localization-how-tos/blob/master/resources/RSNI-glossarium.pdf)
|
||||
dapat digunakan untuk memahami bagaimana menerjemahkan berbagai istilah teknis
|
||||
dan khusus Kubernetes.
|
||||
|
||||
|
||||
## Panduan Penulisan _Source Code_
|
||||
|
||||
### Mengikuti kode asli dari dokumentasi bahasa Inggris
|
||||
|
||||
Untuk kenyamanan pemeliharaan, ikuti lebar teks asli dalam kode bahasa Inggris.
|
||||
Dengan kata lain, jika teks asli ditulis dalam baris yang panjang tanpa putus
|
||||
atu baris, maka teks tersebut ditulis panjang dalam satu baris meskipun dalam
|
||||
bahasa Indonesia. Jagalah agar tetap serupa.
|
||||
|
||||
### Hapus nama reviewer di kode asli bahasa Inggris
|
||||
|
||||
Terkadang _reviewer_ ditentukan di bagian atas kode di teks asli Bahasa Inggris.
|
||||
Secara umum, _reviewer-reviewer_ halaman aslinya akan kesulitan untuk meninjau
|
||||
halaman dalam bahasa Indonesia, jadi hapus kode yang terkait dengan informasi
|
||||
_reviewer_ dari metadata kode tersebut.
|
||||
|
||||
|
||||
## Panduan Penulisan Kata-kata Translasi
|
||||
|
||||
### Panduan umum
|
||||
|
||||
* Gunakan "kamu" daripada "Anda" sebagai subyek agar lebih bersahabat dengan
|
||||
para pembaca dokumentasi.
|
||||
* Tulislah miring untuk kata-kata bahasa Inggris yang diimpor jika kamu tidak
|
||||
dapat menemukan kata-kata tersebut dalam bahasa Indonesia.
|
||||
*Benar*: _controller_. *Salah*: controller, `controller`
|
||||
|
||||
### Panduan untuk kata-kata API Objek Kubernetes
|
||||
|
||||
Gunakan gaya "CamelCase" untuk menulis objek API Kubernetes, lihat daftar
|
||||
lengkapnya [di sini](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/).
|
||||
Sebagai contoh:
|
||||
|
||||
* *Benar*: PersistentVolume. *Salah*: volume persisten, `PersistentVolume`,
|
||||
persistentVolume
|
||||
* *Benar*: Pod. *Salah*: pod, `pod`, "pod"
|
||||
|
||||
*Tips* : Biasanya API objek sudah ditulis dalam huruf kapital pada halaman asli
|
||||
bahasa Inggris.
|
||||
|
||||
### Panduan untuk kata-kata yang sama dengan API Objek Kubernetes
|
||||
|
||||
Ada beberapa kata-kata yang serupa dengan nama API objek dari Kubernetes dan
|
||||
dapat mengacu ke arti yang lebih umum (tidak selalu dalam konteks Kubernetes).
|
||||
Sebagai contoh: _service_, _container_, _node_ , dan lain sebagainya. Kata-kata
|
||||
sebaiknya ditranslasikan ke Bahasa Indonesia sebagai contoh _service_ menjadi
|
||||
layanan, _container_ menjadi kontainer.
|
||||
|
||||
*Tips* : Biasanya kata-kata yang mengacu ke arti yang lebih umum sudah *tidak*
|
||||
ditulis dalam huruf kapital pada halaman asli bahasa Inggris.
|
||||
|
||||
### Panduan untuk "Feature Gate" Kubernetes
|
||||
|
||||
Istilah [_feature gate_](https://kubernetes.io/ko/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
Kubernetes tidak perlu diterjemahkan ke dalam bahasa Indonesia dan tetap
|
||||
dipertahankan dalam bentuk aslinya.
|
||||
|
||||
Contoh dari _feature gate_ adalah sebagai berikut:
|
||||
|
||||
- Accelerators
|
||||
- AdvancedAuditing
|
||||
- AffinityInAnnotations
|
||||
- AllowExtTrafficLocalEndpoints
|
||||
- ...
|
||||
|
||||
### Glosarium Indonesia {#glosarium-indonesia}
|
||||
|
||||
Inggris | Tipe Kata | Indonesia | Sumber | Contoh Kalimat
|
||||
---|---|---|---|---
|
||||
cluster | | klaster | |
|
||||
container | | kontainer | |
|
||||
node | kata benda | node | |
|
||||
file | | berkas | |
|
||||
service | kata benda | layanan | |
|
||||
set | | sekumpulan | |
|
||||
resource | | sumber daya | |
|
||||
default | | bawaan atau standar (tergantung context) | | Secara bawaan, ...; Pada konfigurasi dan instalasi standar, ...
|
||||
deploy | | menggelar | |
|
||||
image | | _image_ | |
|
||||
request | | permintaan | |
|
||||
object | kata benda | objek | https://kbbi.web.id/objek |
|
||||
command | | perintah | https://kbbi.web.id/perintah |
|
||||
view | | tampilan | |
|
||||
support | | tersedia atau dukungan (tergantung konteks) | "This feature is supported on version X; Fitur ini tersedia pada versi X; Supported by community; Didukung oleh komunitas"
|
||||
release | kata benda | rilis | https://kbbi.web.id/rilis |
|
||||
tool | | perangkat | |
|
||||
deployment | | penggelaran | |
|
||||
client | | klien | |
|
||||
reference | | rujukan | |
|
||||
update | | pembaruan | | The latest update... ; Pembaruan terkini...
|
||||
state | | _state_ | |
|
||||
task | | _task_ | |
|
||||
certificate | | sertifikat | |
|
||||
install | | instalasi | https://kbbi.web.id/instalasi |
|
||||
scale | | skala | |
|
||||
process | kata kerja | memproses | https://kbbi.web.id/proses |
|
||||
replica | kata benda | replika | https://kbbi.web.id/replika |
|
||||
flag | | tanda, parameter, argumen | |
|
||||
event | | _event_ | |
|
|
@ -0,0 +1,258 @@
|
|||
---
|
||||
title: Olá, Minikube!
|
||||
content_type: tutorial
|
||||
weight: 5
|
||||
menu:
|
||||
main:
|
||||
title: "Iniciar"
|
||||
weight: 10
|
||||
post: >
|
||||
<p>Pronto para meter a mão na massa? Vamos criar um cluster Kubernetes simples e executar uma aplicação exemplo.</p>
|
||||
card:
|
||||
name: tutorials
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Este tutorial mostra como executar uma aplicação exemplo no Kubernetes utilizando o [Minikube](https://minikube.sigs.k8s.io) e o [Katacoda](https://www.katacoda.com). O Katacoda disponibiliza um ambiente Kubernetes gratuito e acessível via navegador.
|
||||
|
||||
{{< note >}}
|
||||
Você também consegue seguir os passos desse tutorial instalando o Minikube localmente. Para instruções de instalação, acesse: [iniciando com minikube](https://minikube.sigs.k8s.io/docs/start/).
|
||||
{{< /note >}}
|
||||
|
||||
## Objetivos
|
||||
|
||||
* Instalar uma aplicação exemplo no minikube.
|
||||
* Executar a aplicação.
|
||||
* Visualizar os logs da aplicação.
|
||||
|
||||
## Antes de você iniciar
|
||||
|
||||
Este tutorial disponibiliza uma imagem de contêiner que utiliza o NGINX para retornar todas as requisições.
|
||||
|
||||
<!-- lessoncontent -->
|
||||
|
||||
## Criando um cluster do Minikube
|
||||
|
||||
1. Clique no botão abaixo **para iniciar o terminal do Katacoda**.
|
||||
|
||||
{{< kat-button >}}
|
||||
|
||||
{{< note >}}
|
||||
Se você instalou o Minikube localmente, execute: `minikube start`.
|
||||
{{< /note >}}
|
||||
|
||||
2. Abra o painel do Kubernetes em um navegador:
|
||||
|
||||
```shell
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
3. Apenas no ambiente do Katacoda: Na parte superior do terminal, clique em **Preview Port 30000**.
|
||||
|
||||
## Criando um Deployment
|
||||
|
||||
Um [*Pod*](/docs/concepts/workloads/pods/) Kubernetes consiste em um ou mais contêineres agrupados para fins de administração e gerenciamento de rede. O Pod desse tutorial possui apenas um contêiner. Um [*Deployment*](/docs/concepts/workloads/controllers/deployment/) Kubernetes verifica a saúde do seu Pod e reinicia o contêiner do Pod caso o mesmo seja finalizado. Deployments são a maneira recomendada de gerenciar a criação e escalonamento dos Pods.
|
||||
|
||||
1. Use o comando `kubectl create` para criar um Deployment que gerencia um Pod. O Pod executa um contêiner baseado na imagem Docker disponibilizada.
|
||||
|
||||
```shell
|
||||
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
|
||||
```
|
||||
|
||||
2. Visualizando o Deployment:
|
||||
|
||||
```shell
|
||||
kubectl get deployments
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
hello-node 1/1 1 1 1m
|
||||
```
|
||||
|
||||
3. Visualizando o Pod:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
4. Visualizando os eventos do cluster:
|
||||
|
||||
```shell
|
||||
kubectl get events
|
||||
```
|
||||
|
||||
5. Visualizando a configuração do `kubectl`:
|
||||
|
||||
```shell
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Para mais informações sobre o comando `kubectl`, veja o [kubectl overview](/docs/reference/kubectl/overview/).
|
||||
{{< /note >}}
|
||||
|
||||
## Criando um serviço
|
||||
|
||||
Por padrão, um Pod só é acessível utilizando o seu endereço IP interno no cluster Kubernetes. Para disponibilizar o contêiner `hello-node` fora da rede virtual do Kubernetes, você deve expor o Pod como um [*serviço*](/docs/concepts/services-networking/service/) Kubernetes.
|
||||
|
||||
1. Expondo o Pod usando o comando `kubectl expose`:
|
||||
|
||||
```shell
|
||||
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
|
||||
```
|
||||
|
||||
O parâmetro `--type=LoadBalancer` indica que você deseja expor o seu serviço fora do cluster Kubernetes.
|
||||
|
||||
A aplicação dentro da imagem `k8s.gcr.io/echoserver` "escuta" apenas na porta TCP 8080. Se você usou
|
||||
`kubectl expose` para expor uma porta diferente, os clientes não conseguirão se conectar a essa outra porta.
|
||||
|
||||
2. Visualizando o serviço que você acabou de criar:
|
||||
|
||||
```shell
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
|
||||
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m
|
||||
```
|
||||
|
||||
Em provedores de Cloud que fornecem serviços de balanceamento de carga para o Kubernetes, um IP externo seria provisionado para acessar o serviço. No Minikube, o tipo `LoadBalancer` torna o serviço acessível por meio do comando `minikube service`.
|
||||
|
||||
3. Executar o comando a seguir:
|
||||
|
||||
```shell
|
||||
minikube service hello-node
|
||||
```
|
||||
|
||||
4. (**Apenas no ambiente do Katacoda**) Clicar no sinal de mais e então clicar em **Select port to view on Host 1**.
|
||||
|
||||
5. (**Apenas no ambiente do Katacoda**) Observe o número da porta com 5 dígitos exibido ao lado de `8080` na saída do serviço. Este número de porta é gerado aleatoriamente e pode ser diferente para você. Digite seu número na caixa de texto do número da porta e clique em **Display Port**. Usando o exemplo anterior, você digitaria `30369`.
|
||||
|
||||
Isso abre uma janela do navegador, acessa o seu aplicativo e mostra o retorno da requisição.
|
||||
|
||||
## Habilitando Complementos (addons)
|
||||
|
||||
O Minikube inclui um conjunto integrado de {{< glossary_tooltip text="complementos" term_id="addons" >}} que podem ser habilitados, desabilitados e executados no ambiente Kubernetes local.
|
||||
|
||||
1. Listando os complementos suportados atualmente:
|
||||
|
||||
```shell
|
||||
minikube addons list
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
addon-manager: enabled
|
||||
dashboard: enabled
|
||||
default-storageclass: enabled
|
||||
efk: disabled
|
||||
freshpod: disabled
|
||||
gvisor: disabled
|
||||
helm-tiller: disabled
|
||||
ingress: disabled
|
||||
ingress-dns: disabled
|
||||
logviewer: disabled
|
||||
metrics-server: disabled
|
||||
nvidia-driver-installer: disabled
|
||||
nvidia-gpu-device-plugin: disabled
|
||||
registry: disabled
|
||||
registry-creds: disabled
|
||||
storage-provisioner: enabled
|
||||
storage-provisioner-gluster: disabled
|
||||
```
|
||||
|
||||
2. Habilitando um complemento, por exemplo, `metrics-server`:
|
||||
|
||||
```shell
|
||||
minikube addons enable metrics-server
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
metrics-server was successfully enabled
|
||||
```
|
||||
|
||||
3. Visualizando os Pods e os Serviços que você acabou de criar:
|
||||
|
||||
```shell
|
||||
kubectl get pod,svc -n kube-system
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m
|
||||
pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m
|
||||
pod/metrics-server-67fb648c5 1/1 Running 0 26s
|
||||
pod/etcd-minikube 1/1 Running 0 34m
|
||||
pod/influxdb-grafana-b29w8 2/2 Running 0 26s
|
||||
pod/kube-addon-manager-minikube 1/1 Running 0 34m
|
||||
pod/kube-apiserver-minikube 1/1 Running 0 34m
|
||||
pod/kube-controller-manager-minikube 1/1 Running 0 34m
|
||||
pod/kube-proxy-rnlps 1/1 Running 0 34m
|
||||
pod/kube-scheduler-minikube 1/1 Running 0 34m
|
||||
pod/storage-provisioner 1/1 Running 0 34m
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s
|
||||
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m
|
||||
service/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s
|
||||
service/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s
|
||||
```
|
||||
|
||||
4. Desabilitando o complemento `metrics-server`:
|
||||
|
||||
```shell
|
||||
minikube addons disable metrics-server
|
||||
```
|
||||
|
||||
A saída será semelhante a:
|
||||
|
||||
```
|
||||
metrics-server was successfully disabled
|
||||
```
|
||||
|
||||
## Removendo os recursos do Minikube
|
||||
|
||||
Agora você pode remover todos os recursos criados no seu cluster:
|
||||
|
||||
```shell
|
||||
kubectl delete service hello-node
|
||||
kubectl delete deployment hello-node
|
||||
```
|
||||
(**Opcional**) Pare a máquina virtual (VM) do Minikube:
|
||||
|
||||
```shell
|
||||
minikube stop
|
||||
```
|
||||
(**Opcional**) Remova a VM do Minikube:
|
||||
|
||||
```shell
|
||||
minikube delete
|
||||
```
|
||||
|
||||
## Próximos passos
|
||||
|
||||
* Aprender mais sobre [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
|
||||
* Aprender mais sobre [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
|
||||
* Aprender mais sobre [Service objects](/docs/concepts/services-networking/service/).
|
||||
|
|
@ -23,8 +23,8 @@ Quy tắc ứng xử này áp dụng cả trong không gian dự án và trong k
|
|||
|
||||
Các trường hợp lạm dụng, quấy rối hoặc hành vi không thể chấp nhận được trong Kubernetes có thể được báo cáo bằng cách liên hệ với [Ủy ban Quy tắc ứng xử Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) thông qua <conduct@kubernetes.io>. Đối với các dự án khác, vui lòng liên hệ với người bảo trì dự án CNCF hoặc hòa giải viên của chúng tôi, Mishi Choudhary <mishi@linux.com>.
|
||||
|
||||
Quy tắc ứng xử này được điều chỉnh từ Giao ước cộng tác viên (http://contributor-covenant.org), phiên bản 1.2.0, có sẵn tại
|
||||
http://contributor-covenant.org/version/1/2/0/
|
||||
Quy tắc ứng xử này được điều chỉnh từ Giao ước cộng tác viên (https://contributor-covenant.org), phiên bản 1.2.0, có sẵn tại
|
||||
https://contributor-covenant.org/version/1/2/0/
|
||||
|
||||
### Quy tắc ứng xử sự kiện CNCF
|
||||
|
||||
|
|
|
@ -0,0 +1,211 @@
|
|||
---
|
||||
title: "为开发指南做贡献"
|
||||
linkTitle: "为开发指南做贡献"
|
||||
Author: Erik L. Arneson
|
||||
Description: "一位新的贡献者描述了编写和提交对 Kubernetes 开发指南的修改的经验。"
|
||||
date: 2020-10-01
|
||||
canonicalUrl: https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/
|
||||
resources:
|
||||
- src: "jorge-castro-code-of-conduct.jpg"
|
||||
title: "Jorge Castro 正在 SIG ContribEx 的周例会上宣布 Kubernetes 的行为准则。"
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: "Contributing to the Development Guide"
|
||||
linkTitle: "Contributing to the Development Guide"
|
||||
Author: Erik L. Arneson
|
||||
Description: "A new contributor describes the experience of writing and submitting changes to the Kubernetes Development Guide."
|
||||
date: 2020-10-01
|
||||
canonicalUrl: https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/
|
||||
resources:
|
||||
- src: "jorge-castro-code-of-conduct.jpg"
|
||||
title: "Jorge Castro announcing the Kubernetes Code of Conduct during a weekly SIG ContribEx meeting."
|
||||
---
|
||||
-->
|
||||
|
||||
|
||||
<!--
|
||||
When most people think of contributing to an open source project, I suspect they probably think of
|
||||
contributing code changes, new features, and bug fixes. As a software engineer and a long-time open
|
||||
source user and contributor, that's certainly what I thought. Although I have written a good quantity
|
||||
of documentation in different workflows, the massive size of the Kubernetes community was a new kind
|
||||
of "client." I just didn't know what to expect when Google asked my compatriots and me at
|
||||
[Lion's Way](https://lionswaycontent.com/) to make much-needed updates to the Kubernetes Development Guide.
|
||||
|
||||
*This article originally appeared on the [Kubernetes Contributor Community blog](https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/).*
|
||||
-->
|
||||
|
||||
当大多数人想到为一个开源项目做贡献时,我猜想他们可能想到的是贡献代码修改、新功能和错误修复。作为一个软件工程师和一个长期的开源用户和贡献者,这也正是我的想法。
|
||||
虽然我已经在不同的工作流中写了不少文档,但规模庞大的 Kubernetes 社区是一种新型 "客户"。我只是不知道当 Google 要求我和 [Lion's Way](https://lionswaycontent.com/) 的同胞们对 Kubernetes 开发指南进行必要更新时会发生什么。
|
||||
|
||||
*本文最初出现在 [Kubernetes Contributor Community blog](https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/)。*
|
||||
|
||||
<!--
|
||||
## The Delights of Working With a Community
|
||||
|
||||
As professional writers, we are used to being hired to write very specific pieces. We specialize in
|
||||
marketing, training, and documentation for technical services and products, which can range anywhere from relatively fluffy marketing emails to deeply technical white papers targeted at IT and developers. With
|
||||
this kind of professional service, every deliverable tends to have a measurable return on investment.
|
||||
I knew this metric wouldn't be present when working on open source documentation, but I couldn't
|
||||
predict how it would change my relationship with the project.
|
||||
-->
|
||||
|
||||
## 与社区合作的乐趣
|
||||
|
||||
作为专业的写手,我们习惯了受雇于他人去书写非常具体的项目。我们专注于技术服务,产品营销,技术培训以及文档编制,范围从相对宽松的营销邮件到针对 IT 和开发人员的深层技术白皮书。
|
||||
在这种专业服务下,每一个可交付的项目往往都有可衡量的投资回报。我知道在从事开源文档工作时不会出现这个指标,但我不确定它将如何改变我与项目的关系。
|
||||
|
||||
<!--
|
||||
One of the primary traits of the relationship between our writing and our traditional clients is that we
|
||||
always have one or two primary points of contact inside a company. These contacts are responsible
|
||||
for reviewing our writing and making sure it matches the voice of the company and targets the
|
||||
audience they're looking for. It can be stressful -- which is why I'm so glad that my writing
|
||||
partner, eagle-eyed reviewer, and bloodthirsty editor [Joel](https://twitter.com/JoelByronBarker)
|
||||
handles most of the client contact.
|
||||
-->
|
||||
|
||||
我们的写作和传统客户之间的关系有一个主要的特点,就是我们在一个公司里面总是有一两个主要的对接人。他们负责审查我们的文稿,确保内容符合公司的调性,并面向他们想要触达的受众。
|
||||
这可能会带来不小的压力,这也正是为什么我很高兴我的写作伙伴、鹰眼审稿人兼"嗜血"编辑 [Joel](https://twitter.com/JoelByronBarker) 处理了大部分的客户联系。
|
||||
|
||||
|
||||
<!--
|
||||
I was surprised and delighted that all of the stress of client contact went out the window when
|
||||
working with the Kubernetes community.
|
||||
-->
|
||||
|
||||
在与 Kubernetes 社区合作时,所有与客户接触的压力都消失了,这让我感到惊讶和高兴。
|
||||
|
||||
<!--
|
||||
"How delicate do I have to be? What if I screw up? What if I make a developer angry? What if I make
|
||||
enemies?" These were all questions that raced through my mind and made me feel like I was
|
||||
approaching a field of eggshells when I first joined the `#sig-contribex` channel on the Kubernetes
|
||||
Slack and announced that I would be working on the
|
||||
[Development Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/development.md).
|
||||
-->
|
||||
|
||||
"我必须得多仔细?如果我搞砸了怎么办?如果我让开发商生气了怎么办?如果我树敌了怎么办?"。
|
||||
当我第一次加入 Kubernetes Slack 上的 `#sig-contribex` 频道并宣布我将编写[开发指南](https://github.com/kubernetes/community/blob/master/contributors/devel/development.md)时,这些问题都在我脑海中奔腾,让我感觉如履薄冰。
|
||||
|
||||
<!--
|
||||
{{< imgproc jorge-castro-code-of-conduct Fit "800x450" >}}
|
||||
"The Kubernetes Code of Conduct is in effect, so please be excellent to each other." — Jorge
|
||||
Castro, SIG ContribEx co-chair
|
||||
{{< /imgproc >}}
|
||||
-->
|
||||
|
||||
{{< imgproc jorge-castro-code-of-conduct Fit "800x450" >}}
|
||||
"Kubernetes 编码准则已经生效,让我们共同勉励。" — Jorge
|
||||
Castro, SIG ContribEx co-chair
|
||||
{{< /imgproc >}}
|
||||
|
||||
<!--
|
||||
My fears were unfounded. Immediately, I felt welcome. I like to think this isn't just because I was
|
||||
working on a much needed task, but rather because the Kubernetes community is filled
|
||||
with friendly, welcoming people. During the weekly SIG ContribEx meetings, our reports on progress
|
||||
with the Development Guide were included immediately. In addition, the leader of the meeting would
|
||||
always stress that the [Kubernetes Code of Conduct](https://www.kubernetes.dev/resources/code-of-conduct/) was in
|
||||
effect, and that we should, like Bill and Ted, be excellent to each other.
|
||||
-->
|
||||
|
||||
事实上我的担心是多虑的。很快,我就感觉到自己是被欢迎的。我倾向于认为这不仅仅是因为我正在从事一项急需的任务,而是因为 Kubernetes 社区充满了友好、热情的人们。
|
||||
在每周的 SIG ContribEx 会议上,我们关于开发指南进展情况的报告会被立即纳入议程。此外,会议的主持人总会强调 [Kubernetes 行为准则](https://www.kubernetes.dev/resources/code-of-conduct/)正在生效,我们应该像 Bill 和 Ted 一样,彼此友善相待。
|
||||
|
||||
|
||||
<!--
|
||||
## This Doesn't Mean It's All Easy
|
||||
|
||||
The Development Guide needed a pretty serious overhaul. When we got our hands on it, it was already
|
||||
packed with information and lots of steps for new developers to go through, but it was getting dusty
|
||||
with age and neglect. Documentation can really require a global look, not just point fixes.
|
||||
As a result, I ended up submitting a gargantuan pull request to the
|
||||
[Community repo](https://github.com/kubernetes/community): 267 additions and 88 deletions.
|
||||
-->
|
||||
|
||||
## 这并不意味着这一切都很简单
|
||||
|
||||
开发指南需要一次相当彻底的全面检查。当我们拿到它的时候,它已经包含了大量的信息和许多新开发者需要经历的步骤,但由于长期无人打理,它已经变得相当陈旧。
|
||||
文档的确需要全局审视,而不仅仅是零星修补。最终,我向[社区仓库](https://github.com/kubernetes/community)提交了一个巨大的 pull request:新增 267 行,删除 88 行。
|
||||
|
||||
<!--
|
||||
The life cycle of a pull request requires a certain number of Kubernetes organization members to review and approve changes
|
||||
before they can be merged. This is a great practice, as it keeps both documentation and code in
|
||||
pretty good shape, but it can be tough to cajole the right people into taking the time for such a hefty
|
||||
review. As a result, that massive PR took 26 days from my first submission to final merge. But in
|
||||
the end, [it was successful](https://github.com/kubernetes/community/pull/5003).
|
||||
-->
|
||||
|
||||
一个 pull request 的生命周期要求一定数量的 Kubernetes 组织成员先审查并批准更改,然后才能合并。这是一个很好的做法,因为它能让文档和代码都保持在相当不错的状态,
|
||||
但要劝说合适的人抽出时间来完成这样一次繁重的审查并不容易。
|
||||
因此,那次大规模的 PR 从我第一次提交到最后合并,用了 26 天。但最终,[它成功了](https://github.com/kubernetes/community/pull/5003)。
|
||||
|
||||
<!--
|
||||
Since Kubernetes is a pretty fast-moving project, and since developers typically aren't really
|
||||
excited about writing documentation, I also ran into the problem that sometimes, the secret jewels
|
||||
that describe the workings of a Kubernetes subsystem are buried deep within the [labyrinthine mind of
|
||||
a brilliant engineer](https://github.com/amwat), and not in plain English in a Markdown file. I ran headlong into this issue
|
||||
when it came time to update the getting started documentation for end-to-end (e2e) testing.
|
||||
-->
|
||||
|
||||
由于 Kubernetes 是一个发展相当迅速的项目,而且开发人员通常对编写文档并不十分感兴趣,所以我也遇到了一个问题,那就是有时候,
|
||||
描述 Kubernetes 子系统工作原理的秘密珍宝被深埋在 [天才工程师的迷宫式思维](https://github.com/amwat) 中,而不是用单纯的英文写在 Markdown 文件中。
|
||||
当我要更新端到端(e2e)测试的入门文档时,就一头撞上了这个问题。
|
||||
|
||||
<!--
|
||||
This portion of my journey took me out of documentation-writing territory and into the role of a
|
||||
brand new user of some unfinished software. I ended up working with one of the developers of the new
|
||||
[`kubetest2` framework](https://github.com/kubernetes-sigs/kubetest2) to document the latest process of
|
||||
getting up-and-running for e2e testing, but it required a lot of head scratching on my part. You can
|
||||
judge the results for yourself by checking out my
|
||||
[completed pull request](https://github.com/kubernetes/community/pull/5045).
|
||||
-->
|
||||
|
||||
这段旅程将我带出了编写文档的领域,让我成为一些未完成软件的全新用户。最终,我绞尽脑汁,与新的 [`kubetest2` 框架](https://github.com/kubernetes-sigs/kubetest2) 的开发者之一合作,
|
||||
记录了最新 e2e 测试的启动和运行过程。
|
||||
你可以通过查看我的 [已完成的 pull request](https://github.com/kubernetes/community/pull/5045) 来自己判断结果。
|
||||
|
||||
<!--
|
||||
## Nobody Is the Boss, and Everybody Gives Feedback
|
||||
|
||||
But while I secretly expected chaos, the process of contributing to the Kubernetes Development Guide
|
||||
and interacting with the amazing Kubernetes community went incredibly smoothly. There was no
|
||||
contention. I made no enemies. Everybody was incredibly friendly and welcoming. It was *enjoyable*.
|
||||
-->
|
||||
|
||||
## 没有人是老板,每个人都给出反馈
|
||||
|
||||
但当我暗自期待混乱的时候,为 Kubernetes 开发指南做贡献以及与神奇的 Kubernetes 社区互动的过程却非常顺利。
|
||||
没有争执,我也没有树敌。每个人都非常友好和热情。这是令人*愉快的*。
|
||||
|
||||
<!--
|
||||
With an open source project, there is no one boss. The Kubernetes project, which approaches being
|
||||
gargantuan, is split into many different special interest groups (SIGs), working groups, and
|
||||
communities. Each has its own regularly scheduled meetings, assigned duties, and elected
|
||||
chairpersons. My work intersected with the efforts of both SIG ContribEx (who watch over and seek to
|
||||
improve the contributor experience) and SIG Testing (who are in charge of testing). Both of these
|
||||
SIGs proved easy to work with, eager for contributions, and populated with incredibly friendly and
|
||||
welcoming people.
|
||||
-->
|
||||
|
||||
对于一个开源项目而言,没有人是老板。规模近乎庞大的 Kubernetes 项目被划分成许多不同的特别兴趣小组(SIG)、工作组和社区。
|
||||
每个小组都有自己的定期会议、职责分配和主席推选。我的工作与 SIG ContribEx(负责监督并寻求改善贡献者体验)和 SIG Testing(负责测试)的工作有交集。
|
||||
事实证明,这两个 SIG 都很容易合作,他们渴望贡献,而且都是非常友好和热情的人。
|
||||
|
||||
<!--
|
||||
In an active, living project like Kubernetes, documentation continues to need maintenance, revision,
|
||||
and testing alongside the code base. The Development Guide will continue to be crucial to onboarding
|
||||
new contributors to the Kubernetes code base, and as our efforts have shown, it is important that
|
||||
this guide keeps pace with the evolution of the Kubernetes project.
|
||||
-->
|
||||
|
||||
在 Kubernetes 这样一个活跃的、有生命力的项目中,文档仍然需要与代码库一起进行维护、修订和测试。
|
||||
开发指南将继续对 Kubernetes 代码库的新贡献者起到至关重要的作用,正如我们的努力所显示的那样,该指南必须与 Kubernetes 项目的发展保持同步。
|
||||
|
||||
<!--
|
||||
Joel and I really enjoy interacting with the Kubernetes community and contributing to
|
||||
the Development Guide. I really look forward to continuing to not only contributing more, but to
|
||||
continuing to build the new friendships I've made in this vast open source community over the past
|
||||
few months.
|
||||
-->
|
||||
|
||||
Joel 和我非常喜欢与 Kubernetes 社区互动并为开发指南做出贡献。我真的很期待,不仅能继续做出更多贡献,还能继续与过去几个月在这个庞大的开源社区中结识的新朋友进行合作。
|
|
@ -162,7 +162,7 @@ Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _eq
|
|||
|
||||
_基于等值_ 或 _基于不等值_ 的需求允许按标签键和值进行过滤。
|
||||
匹配对象必须满足所有指定的标签约束,尽管它们也可能具有其他标签。
|
||||
可接受的运算符有`=`、`==` 和 `!=` 三种。
|
||||
可接受的运算符有`=`、`==` 和 `!=` 三种。
|
||||
前两个表示 _相等_(并且只是同义词),而后者表示 _不相等_。例如:
|
||||
|
||||
```
|
||||
|
|
|
@ -169,7 +169,7 @@ scheduled onto the node (if it is not yet running on the node).
|
|||
则 Kubernetes 不会将 Pod 分配到该节点。
|
||||
* 如果未被过滤的污点中不存在 effect 值为 `NoSchedule` 的污点,
|
||||
但是存在 effect 值为 `PreferNoSchedule` 的污点,
|
||||
则 Kubernetes 会 *尝试* 将 Pod 分配到该节点。
|
||||
则 Kubernetes 会 *尝试* 不将 Pod 分配到该节点。
|
||||
* 如果未被过滤的污点中存在至少一个 effect 值为 `NoExecute` 的污点,
|
||||
则 Kubernetes 不会将 Pod 分配到该节点(如果 Pod 还未在节点上运行),
|
||||
或者将 Pod 从该节点驱逐(如果 Pod 已经在节点上运行)。
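
下面给出一个容忍度(toleration)的简单示意,Pod 借助它可以容忍上述效果之一的污点(键、值均为假设的示例):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx            # 示例名称
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  # 容忍键为 key1、值为 value1、效果为 NoSchedule 的污点
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```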
|
||||
|
|
|
@ -186,7 +186,7 @@ Follow the steps given below to create the above Deployment:
|
|||
-->
|
||||
* `NAME` 列出了集群中 Deployment 的名称。
|
||||
* `READY` 显示应用程序可用的 _副本_ 数。显示的模式是“就绪个数/期望个数”。
|
||||
* `UP-TO-DATE` 显示为了打到期望状态已经更新的副本数。
|
||||
* `UP-TO-DATE` 显示为了达到期望状态已经更新的副本数。
|
||||
* `AVAILABLE` 显示应用可供用户使用的副本数。
|
||||
* `AGE` 显示应用程序运行的时间。
|
||||
|
||||
|
|
|
@ -85,42 +85,42 @@ See the Kubernetes v1.17 documentation for a [list](https://v1-17.docs.kubernete
|
|||
#### 生成器
|
||||
<!--
|
||||
You can generate the following resources with a kubectl command, `kubectl create --dry-run=client -o yaml`:
|
||||
```
|
||||
clusterrole Create a ClusterRole.
|
||||
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole.
|
||||
configmap Create a configmap from a local file, directory or literal value.
|
||||
cronjob Create a cronjob with the specified name.
|
||||
deployment Create a deployment with the specified name.
|
||||
job Create a job with the specified name.
|
||||
namespace Create a namespace with the specified name.
|
||||
poddisruptionbudget Create a pod disruption budget with the specified name.
|
||||
priorityclass Create a priorityclass with the specified name.
|
||||
quota Create a quota with the specified name.
|
||||
role Create a role with single rule.
|
||||
rolebinding Create a RoleBinding for a particular Role or ClusterRole.
|
||||
secret Create a secret using specified subcommand.
|
||||
service Create a service using specified subcommand.
|
||||
serviceaccount Create a service account with the specified name.
|
||||
```
|
||||
|
||||
* `clusterrole`: Create a ClusterRole.
|
||||
* `clusterrolebinding`: Create a ClusterRoleBinding for a particular ClusterRole.
|
||||
* `configmap`: Create a ConfigMap from a local file, directory or literal value.
|
||||
* `cronjob`: Create a CronJob with the specified name.
|
||||
* `deployment`: Create a Deployment with the specified name.
|
||||
* `job`: Create a Job with the specified name.
|
||||
* `namespace`: Create a Namespace with the specified name.
|
||||
* `poddisruptionbudget`: Create a PodDisruptionBudget with the specified name.
|
||||
* `priorityclass`: Create a PriorityClass with the specified name.
|
||||
* `quota`: Create a Quota with the specified name.
|
||||
* `role`: Create a Role with single rule.
|
||||
* `rolebinding`: Create a RoleBinding for a particular Role or ClusterRole.
|
||||
* `secret`: Create a Secret using specified subcommand.
|
||||
* `service`: Create a Service using specified subcommand.
|
||||
* `serviceaccount`: Create a ServiceAccount with the specified name.
|
||||
|
||||
-->
|
||||
你可以使用 kubectl 命令 `kubectl create --dry-run=client -o yaml` 生成以下资源:
|
||||
```
|
||||
clusterrole 创建 ClusterRole。
|
||||
clusterrolebinding 为特定的 ClusterRole 创建 ClusterRoleBinding。
|
||||
configmap 使用本地文件、目录或文本值创建 Configmap。
|
||||
cronjob 使用指定的名称创建 Cronjob。
|
||||
deployment 使用指定的名称创建 Deployment。
|
||||
job 使用指定的名称创建 Job。
|
||||
namespace 使用指定的名称创建名称空间。
|
||||
poddisruptionbudget 使用指定名称创建 Pod 干扰预算。
|
||||
priorityclass 使用指定的名称创建 Priorityclass。
|
||||
quota 使用指定的名称创建配额。
|
||||
role 使用单一规则创建角色。
|
||||
rolebinding 为特定角色或 ClusterRole 创建 RoleBinding。
|
||||
secret 使用指定的子命令创建 Secret。
|
||||
service 使用指定的子命令创建服务。
|
||||
serviceaccount 使用指定的名称创建服务帐户。
|
||||
```
|
||||
|
||||
* `clusterrole`: 创建 ClusterRole。
|
||||
* `clusterrolebinding`: 为特定的 ClusterRole 创建 ClusterRoleBinding。
|
||||
* `configmap`: 使用本地文件、目录或文本值创建 Configmap。
|
||||
* `cronjob`: 使用指定的名称创建 Cronjob。
|
||||
* `deployment`: 使用指定的名称创建 Deployment。
|
||||
* `job`: 使用指定的名称创建 Job。
|
||||
* `namespace`: 使用指定的名称创建名称空间。
|
||||
* `poddisruptionbudget`: 使用指定名称创建 Pod 干扰预算。
|
||||
* `priorityclass`: 使用指定的名称创建 Priorityclass。
|
||||
* `quota`: 使用指定的名称创建配额。
|
||||
* `role`: 使用单一规则创建角色。
|
||||
* `rolebinding`: 为特定角色或 ClusterRole 创建 RoleBinding。
|
||||
* `secret`: 使用指定的子命令创建 Secret。
|
||||
* `service`: 使用指定的子命令创建服务。
|
||||
* `serviceaccount`: 使用指定的名称创建服务帐户。
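
例如,运行 `kubectl create deployment nginx --image=nginx --dry-run=client -o yaml` 会输出与下面类似的清单(具体字段可能因 kubectl 版本而略有差异):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```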
|
||||
|
||||
|
||||
### `kubectl apply`
|
||||
|
||||
|
|
|
@ -62,7 +62,7 @@ kubectl create deployment --image=nginx nginx-app
|
|||
deployment.apps/nginx-app created
|
||||
```
|
||||
|
||||
```
|
||||
```shell
|
||||
# add env to nginx-app
|
||||
kubectl set env deployment/nginx-app DOMAIN=cluster
|
||||
```
|
||||
|
|
|
@ -3,31 +3,40 @@ title: 使用 Service 把前端连接到后端
|
|||
content_type: tutorial
|
||||
weight: 70
|
||||
---
|
||||
<!--
|
||||
title: Connect a Frontend to a Backend Using Services
|
||||
content_type: tutorial
|
||||
weight: 70
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
This task shows how to create a frontend and a backend
|
||||
microservice. The backend microservice is a hello greeter. The
|
||||
frontend and backend are connected using a Kubernetes
|
||||
{{< glossary_tooltip term_id="service" >}} object.
|
||||
This task shows how to create a _frontend_ and a _backend_ microservice. The backend
|
||||
microservice is a hello greeter. The frontend exposes the backend using nginx and a
|
||||
Kubernetes {{< glossary_tooltip term_id="service" >}} object.
|
||||
-->
|
||||
|
||||
本任务会描述如何创建前端微服务和后端微服务。后端微服务是一个 hello 欢迎程序。
|
||||
前端和后端的连接是通过 Kubernetes {{< glossary_tooltip term_id="service" text="服务" >}}
|
||||
完成的。
|
||||
本任务会描述如何创建前端(Frontend)微服务和后端(Backend)微服务。后端微服务是一个 hello 欢迎程序。
|
||||
前端通过 nginx 和一个 Kubernetes {{< glossary_tooltip term_id="service" text="服务" >}}
|
||||
暴露后端所提供的服务。
|
||||
|
||||
## {{% heading "objectives" %}}
|
||||
|
||||
<!--
|
||||
* Create and run a microservice using a {{< glossary_tooltip term_id="deployment" >}} object.
|
||||
* Route traffic to the backend using a frontend.
|
||||
* Use a Service object to connect the frontend application to the
|
||||
backend application.
|
||||
* Create and run a sample `hello` backend microservice using a
|
||||
{{< glossary_tooltip term_id="deployment" >}} object.
|
||||
* Use a Service object to send traffic to the backend microservice's multiple replicas.
|
||||
* Create and run a `nginx` frontend microservice, also using a Deployment object.
|
||||
* Configure the frontend microservice to send traffic to the backend microservice.
|
||||
* Use a Service object of `type=LoadBalancer` to expose the frontend microservice
|
||||
outside the cluster.
|
||||
-->
|
||||
* 使用部署对象(Deployment object)创建并运行一个微服务
|
||||
* 从后端将流量路由到前端
|
||||
* 使用服务对象把前端应用连接到后端应用
|
||||
* 使用部署对象(Deployment object)创建并运行一个 `hello` 后端微服务
|
||||
* 使用一个 Service 对象将请求流量发送到后端微服务的多个副本
|
||||
* 同样使用一个 Deployment 对象创建并运行一个 `nginx` 前端微服务
|
||||
* 配置前端微服务将请求流量发送到后端微服务
|
||||
* 使用 `type=LoadBalancer` 的 Service 对象将前端微服务暴露到集群外部
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -39,8 +48,7 @@ This task uses
|
|||
require a supported environment. If your environment does not support this, you can use a Service of type
|
||||
[NodePort](/docs/concepts/services-networking/service/#nodeport) instead.
|
||||
-->
|
||||
|
||||
本任务使用 [外部负载均衡服务](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/),
|
||||
本任务使用[外部负载均衡服务](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/),
|
||||
所以需要对应的可支持此功能的环境。如果你的环境不能支持,你可以使用
|
||||
[NodePort](/zh/docs/concepts/services-networking/service/#nodeport)
|
||||
类型的服务代替。
|
||||
|
@ -57,7 +65,7 @@ file for the backend Deployment:
|
|||
|
||||
后端是一个简单的 hello 欢迎微服务应用。这是后端应用的 Deployment 配置文件:
|
||||
|
||||
{{< codenew file="service/access/hello.yaml" >}}
|
||||
{{< codenew file="service/access/backend-deployment.yaml" >}}
|
||||
|
||||
<!--
|
||||
Create the backend Deployment:
|
||||
|
@ -65,7 +73,7 @@ Create the backend Deployment:
|
|||
创建后端 Deployment:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/service/access/hello.yaml
|
||||
kubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -73,7 +81,7 @@ View information about the backend Deployment:
|
|||
-->
|
||||
查看后端的 Deployment 信息:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubectl describe deployment hello
|
||||
```
|
||||
|
||||
|
@ -83,7 +91,7 @@ The output is similar to this:
|
|||
输出类似于:
|
||||
|
||||
```
|
||||
Name: hello
|
||||
Name: backend
|
||||
Namespace: default
|
||||
CreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700
|
||||
Labels: app=hello
|
||||
|
@ -91,7 +99,7 @@ Labels: app=hello
|
|||
track=stable
|
||||
Annotations: deployment.kubernetes.io/revision=1
|
||||
Selector: app=hello,tier=backend,track=stable
|
||||
Replicas: 7 desired | 7 updated | 7 total | 7 available | 0 unavailable
|
||||
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 1 max unavailable, 1 max surge
|
||||
|
@ -112,15 +120,15 @@ Conditions:
|
|||
Available True MinimumReplicasAvailable
|
||||
Progressing True NewReplicaSetAvailable
|
||||
OldReplicaSets: <none>
|
||||
NewReplicaSet: hello-3621623197 (7/7 replicas created)
|
||||
NewReplicaSet: hello-3621623197 (3/3 replicas created)
|
||||
Events:
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
## Creating the backend Service object
|
||||
## Creating the `hello` Service object
|
||||
|
||||
The key to connecting a frontend to a backend is the backend
|
||||
The key to sending requests from a frontend to a backend is the backend
|
||||
Service. A Service creates a persistent IP address and DNS name entry
|
||||
so that the backend microservice can always be reached. A Service uses
|
||||
{{< glossary_tooltip text="selectors" term_id="selector" >}} to find
|
||||
|
@ -128,68 +136,84 @@ the Pods that it routes traffic to.
|
|||
|
||||
First, explore the Service configuration file:
|
||||
-->
|
||||
### 创建后端服务对象
|
||||
### 创建 `hello` Service 对象
|
||||
|
||||
前端连接到后端的关键是 Service(服务)。Service 创建一个固定 IP 和 DNS 解析名入口,
|
||||
使得后端微服务可达。Service 使用
|
||||
将请求从前端发送到到后端的关键是后端 Service。Service 创建一个固定 IP 和 DNS 解析名入口,
|
||||
使得后端微服务总是可达。Service 使用
|
||||
{{< glossary_tooltip text="选择算符" term_id="selector" >}}
|
||||
来寻找目标 Pod。
|
||||
|
||||
首先,浏览 Service 的配置文件:
|
||||
|
||||
{{< codenew file="service/access/hello-service.yaml" >}}
|
||||
{{< codenew file="service/access/backend-service.yaml" >}}
|
||||
|
||||
<!--
|
||||
In the configuration file, you can see that the Service routes traffic to Pods
|
||||
that have the labels `app: hello` and `tier: backend`.
|
||||
In the configuration file, you can see that the Service named `hello` routes
|
||||
traffic to Pods that have the labels `app: hello` and `tier: backend`.
|
||||
-->
|
||||
配置文件中,你可以看到 Service 将流量路由到包含 `app: hello` 和 `tier: backend` 标签的 Pod。
|
||||
配置文件中,你可以看到名为 `hello` 的 Service 将流量路由到包含 `app: hello`
|
||||
和 `tier: backend` 标签的 Pod。
|
||||
|
||||
<!--
|
||||
Create the `hello` Service:
|
||||
Create the backend Service:
|
||||
-->
|
||||
创建 `hello` Service:
|
||||
创建后端 Service:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
|
||||
kubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
At this point, you have a backend Deployment running, and you have a
|
||||
Service that can route traffic to it.
|
||||
At this point, you have a `backend` Deployment running three replicas of your `hello`
|
||||
application, and you have a Service that can route traffic to them. However, this
|
||||
service is neither available nor resolvable outside the cluster.
|
||||
-->
|
||||
此时,你已经有了一个在运行的后端 Deployment,你也有了一个 Service 用于路由网络流量。
|
||||
此时,你已经有了一个运行着 `hello` 应用的三个副本的 `backend` Deployment,你也有了
|
||||
一个 Service 用于路由网络流量。不过,这个服务在集群外部无法访问也无法解析。
|
||||
|
||||
<!--
|
||||
## Creating the frontend
|
||||
|
||||
Now that you have your backend, you can create a frontend that connects to the backend.
|
||||
The frontend connects to the backend worker Pods by using the DNS name
|
||||
given to the backend Service. The DNS name is "hello", which is the value
|
||||
of the `name` field in the preceding Service configuration file.
|
||||
Now that you have your backend running, you can create a frontend that is accessible
|
||||
outside the cluster, and connects to the backend by proxying requests to it.
|
||||
|
||||
The frontend sends requests to the backend worker Pods by using the DNS name
|
||||
given to the backend Service. The DNS name is `hello`, which is the value
|
||||
of the `name` field in the `examples/service/access/backend-service.yaml`
|
||||
configuration file.
|
||||
|
||||
The Pods in the frontend Deployment run an nginx image that is configured
|
||||
to find the hello backend Service. Here is the nginx configuration file:
|
||||
to proxy requests to the hello backend Service. Here is the nginx configuration file:
|
||||
-->
|
||||
### 创建前端应用
|
||||
|
||||
既然你已经有了后端应用,你可以创建一个前端应用连接到后端。前端应用通过 DNS 名连接到后端的工作 Pods。
|
||||
DNS 名是 "hello",也就是 Service 配置文件中 `name` 字段的值。
|
||||
现在你已经有了运行中的后端应用,你可以创建一个可在集群外部访问的前端,并通过代理
|
||||
前端的请求连接到后端。
|
||||
|
||||
前端 Deployment 中的 Pods 运行一个 nginx 镜像,这个已经配置好镜像去寻找后端的 hello Service。
|
||||
只是 nginx 的配置文件:
|
||||
前端使用被赋予后端 Service 的 DNS 名称将请求发送到后端工作 Pods。这一 DNS
|
||||
名称为 `hello`,也就是 `examples/service/access/backend-service.yaml` 配置
|
||||
文件中 `name` 字段的取值。
|
||||
|
||||
{{< codenew file="service/access/frontend.conf" >}}
|
||||
前端 Deployment 中的 Pods 运行一个 nginx 镜像,这个已经配置好的镜像会将请求转发
|
||||
给后端的 hello Service。下面是 nginx 的配置文件:
|
||||
|
||||
{{< codenew file="service/access/frontend-nginx.conf" >}}
|
||||
|
||||
<!--
|
||||
Similar to the backend, the frontend has a Deployment and a Service. The
|
||||
configuration for the Service has `type: LoadBalancer`, which means that
|
||||
the Service uses the default load balancer of your cloud provider.
|
||||
Similar to the backend, the frontend has a Deployment and a Service. An important
|
||||
difference to notice between the backend and frontend services, is that the
|
||||
configuration for the frontend Service has `type: LoadBalancer`, which means that
|
||||
the Service uses a load balancer provisioned by your cloud provider and will be
|
||||
accessible from outside the cluster.
|
||||
-->
|
||||
与后端类似,前端用包含一个 Deployment 和一个 Service。Service 的配置文件包含了 `type: LoadBalancer`,
|
||||
也就是说,Service 会使用你的云服务商的默认负载均衡设备。
|
||||
与后端类似,前端用包含一个 Deployment 和一个 Service。后端与前端服务之间的一个
|
||||
重要区别是前端 Service 的配置文件包含了 `type: LoadBalancer`,也就是说,Service
|
||||
会使用你的云服务商的默认负载均衡设备,从而实现从集群外访问的目的。
|
||||
|
||||
{{< codenew file="service/access/frontend-service.yaml" >}}
|
||||
|
||||
{{< codenew file="service/access/frontend-deployment.yaml" >}}
|
||||
|
||||
{{< codenew file="service/access/frontend.yaml" >}}
|
||||
|
||||
<!--
|
||||
Create the frontend Deployment and Service:
|
||||
|
@ -197,7 +221,8 @@ Create the frontend Deployment and Service:
|
|||
创建前端 Deployment 和 Service:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
|
||||
kubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml
|
||||
kubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -271,21 +296,22 @@ cluster.
|
|||
<!--
|
||||
## Send traffic through the frontend
|
||||
|
||||
The frontend and backends are now connected. You can hit the endpoint
|
||||
The frontend and backend are now connected. You can hit the endpoint
|
||||
by using the curl command on the external IP of your frontend Service.
|
||||
-->
|
||||
### 通过前端发送流量
|
||||
|
||||
前端和后端已经完成连接了。你可以使用 curl 命令通过你的前端 Service 的外部 IP 访问服务端点。
|
||||
前端和后端已经完成连接了。你可以使用 curl 命令通过你的前端 Service 的外部
|
||||
IP 访问服务端点。
|
||||
|
||||
```shell
|
||||
curl http://<EXTERNAL-IP>
|
||||
curl http://${EXTERNAL_IP} # 将 EXTERNAL_P 替换为你之前看到的外部 IP
|
||||
```
|
||||
|
||||
<!--
|
||||
The output shows the message generated by the backend:
|
||||
-->
|
||||
后端生成的消息输出如下:
|
||||
输出显示后端生成的消息:
|
||||
|
||||
```json
|
||||
{"message":"Hello"}
|
||||
|
@ -299,7 +325,7 @@ To delete the Services, enter this command:
|
|||
要删除服务,输入下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete services frontend hello
|
||||
kubectl delete services frontend backend
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -308,14 +334,16 @@ To delete the Deployments, the ReplicaSets and the Pods that are running the bac
|
|||
要删除在前端和后端应用中运行的 Deployment、ReplicaSet 和 Pod,输入下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment frontend hello
|
||||
kubectl delete deployment frontend backend
|
||||
```
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
* Learn more about [Services](/docs/concepts/services-networking/service/)
|
||||
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/)
|
||||
* Learn more about [DNS for Service and Pods](/docs/concepts/services-networking/dns-pod-service/)
|
||||
-->
|
||||
* 进一步了解[Service](/zh/docs/concepts/services-networking/service/)
|
||||
* 进一步了解[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
|
||||
* 进一步了解 [Service](/zh/docs/concepts/services-networking/service/)
|
||||
* 进一步了解 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
|
||||
* 进一步了解 [Service 和 Pods 的 DNS](/docs/concepts/services-networking/dns-pod-service/)
|
||||
|
||||
|
|
|
@ -0,0 +1,164 @@
|
|||
---
|
||||
title: 检查弃用 Dockershim 对你的影响
|
||||
content_type: task
|
||||
weight: 20
|
||||
---
|
||||
<!--
|
||||
title: Check whether Dockershim deprecation affects you
|
||||
content_type: task
|
||||
reviewers:
|
||||
- SergeyKanzhelev
|
||||
weight: 20
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
The `dockershim` component of Kubernetes allows to use Docker as a Kubernetes's
|
||||
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
|
||||
Kubernetes' built-in `dockershim` component was deprecated in release v1.20.
|
||||
-->
|
||||
Kubernetes 的 `dockershim` 组件使得你可以把 Docker 用作 Kubernetes 的
|
||||
{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}。
|
||||
在 Kubernetes v1.20 版本中,内建组件 `dockershim` 被弃用。
|
||||
|
||||
<!--
|
||||
This page explains how your cluster could be using Docker as a container runtime,
|
||||
provides details on the role that `dockershim` plays when in use, and shows steps
|
||||
you can take to check whether any workloads could be affected by `dockershim` deprecation.
|
||||
-->
|
||||
本页讲解你的集群把 Docker 用作容器运行时的运作机制,
|
||||
并提供使用 `dockershim` 时,它所扮演角色的详细信息,
|
||||
继而展示了一组验证步骤,可用来检查弃用 `dockershim` 对你的工作负载的影响。
|
||||
|
||||
<!--
|
||||
## Finding if your app has a dependency on Docker {#find-docker-dependencies}
|
||||
-->
|
||||
## 检查你的应用是否依赖于 Docker {#find-docker-dependencies}
|
||||
|
||||
<!--
|
||||
If you are using Docker for building your application containers, you can still
|
||||
run these containers on any container runtime. This use of Docker does not count
|
||||
as a dependency on Docker as a container runtime.
|
||||
-->
|
||||
如果你使用 Docker 来构建应用容器,这些容器仍然可以在任何容器运行时上运行。
|
||||
这种对 Docker 的使用方式,并不构成对 Docker 这一容器运行时的依赖。
|
||||
|
||||
<!--
|
||||
When an alternative container runtime is used, executing Docker commands may either
|
||||
not work or yield unexpected output. This is how you can find whether you have a
|
||||
dependency on Docker:
|
||||
-->
|
||||
改用其他容器运行时之后,Docker 命令可能无法工作,甚至会产生意外的输出。
|
||||
你可以通过以下方式检查自己是否依赖于 Docker:
|
||||
|
||||
<!--
|
||||
1. Make sure no privileged Pods execute Docker commands.
|
||||
2. Check that scripts and apps running on nodes outside of Kubernetes
|
||||
infrastructure do not execute Docker commands. It might be:
|
||||
- SSH to nodes to troubleshoot;
|
||||
- Node startup scripts;
|
||||
- Monitoring and security agents installed on nodes directly.
|
||||
3. Third-party tools that perform above mentioned privileged operations. See
|
||||
[Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
|
||||
for more information.
|
||||
4. Make sure there is no indirect dependencies on dockershim behavior.
|
||||
This is an edge case and unlikely to affect your application. Some tooling may be configured
|
||||
to react to Docker-specific behaviors, for example, raise alert on specific metrics or search for
|
||||
a specific log message as part of troubleshooting instructions.
|
||||
If you have such tooling configured, test the behavior on test
|
||||
cluster before migration.
|
||||
-->
|
||||
1. 确认没有特权 Pod 执行 docker 命令。
|
||||
2. 检查运行在节点上、Kubernetes 基础设施之外的脚本和应用,确认它们没有执行 Docker 命令。这可能包括:
|
||||
- SSH 到节点排查故障;
|
||||
- 节点启动脚本;
|
||||
- 直接安装在节点上的监视和安全代理。
|
||||
3. 检查执行了上述特权操作的第三方工具。详细操作请参考:
|
||||
[从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
|
||||
4. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
|
||||
一些工具很可能被配置为使用了 Docker 特性,比如,基于特定指标发警报,或者在故障排查指令的一个环节中搜索特定的日志信息。
|
||||
如果你有此类配置的工具,需要在迁移之前,在测试集群上完成功能验证。
|
||||
|
||||
|
||||
<!--
|
||||
## Dependency on Docker explained {#role-of-dockershim}
|
||||
-->
|
||||
## Docker 依赖详解 {#role-of-dockershim}
|
||||
|
||||
<!--
|
||||
A [container runtime](/docs/concepts/containers/#container-runtimes) is software that can
|
||||
execute the containers that make up a Kubernetes pod. Kubernetes is responsible for orchestration
|
||||
and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
|
||||
uses the container runtime interface as an abstraction so that you can use any compatible
|
||||
container runtime.
|
||||
-->
|
||||
[容器运行时](/zh/docs/concepts/containers/#container-runtimes)是一个软件,用来运行组成 Kubernetes Pod 的容器。
|
||||
Kubernetes 负责编排和调度 Pod;在每一个节点上,
|
||||
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
|
||||
使用抽象的容器运行时接口,所以你可以任意选用兼容的容器运行时。
|
||||
|
||||
<!--
|
||||
In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker.
|
||||
Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes.
|
||||
The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However,
|
||||
because Docker existed before the CRI specification was invented, the Kubernetes project created an
|
||||
adapter component, `dockershim`. The dockershim adapter allows the kubelet to interact with Docker as
|
||||
if Docker were a CRI compatible runtime.
|
||||
-->
|
||||
在早期版本中,Kubernetes 提供的兼容性只支持一个容器运行时:Docker。
|
||||
在 Kubernetes 发展历史中,集群运营人员希望采用更多的容器运行时。
|
||||
于是 CRI 被设计出来满足这类灵活性需要 - 而 kubelet 亦开始支持 CRI。
|
||||
然而,因为 Docker 在 CRI 规范创建之前就已经存在,Kubernetes 就创建了一个适配器组件:`dockershim`。
|
||||
dockershim 适配器允许 kubelet 与 Docker 交互,就好像 Docker 是一个与 CRI 兼容的运行时一样。
|
||||
|
||||
<!--
|
||||
You can read about it in [Kubernetes Containerd integration goes GA](/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/) blog post.
|
||||
-->
|
||||
你可以阅读博文
|
||||
[Kubernetes Containerd 集成正式发布(GA)](/zh/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/),了解更多信息。
|
||||
|
||||
<!-- Dockershim vs. CRI with Containerd -->
|
||||

|
||||
|
||||
<!--
|
||||
Switching to Containerd as a container runtime eliminates the middleman. All the
|
||||
same containers can be run by container runtimes like Containerd as before. But
|
||||
now, since containers schedule directly with the container runtime, they are not visible to Docker.
|
||||
So any Docker tooling or fancy UI you might have used
|
||||
before to check on these containers is no longer available.
|
||||
-->
|
||||
切换到容器运行时 Containerd 可以消除掉中间环节。
|
||||
同样的那些容器仍然可以由 Containerd 这类容器运行时来运行和管理,体验与以前一致。
|
||||
但是现在,由于直接用容器运行时调度容器,所以它们对 Docker 来说是不可见的。
|
||||
因此,你以前用来检查这些容器的 Docker 工具或漂亮的 UI 都不再可用。
|
||||
|
||||
<!--
|
||||
You cannot get container information using `docker ps` or `docker inspect`
|
||||
commands. As you cannot list containers, you cannot get logs, stop containers,
|
||||
or execute something inside container using `docker exec`.
|
||||
-->
|
||||
你不能再使用 `docker ps` 或 `docker inspect` 命令来获取容器信息。
|
||||
由于你不能列出容器,因此你不能获取日志、停止容器,甚至不能通过 `docker exec` 在容器中执行命令。
|
||||
|
||||
<!--
|
||||
If you're running workloads via Kubernetes, the best way to stop a container is through
|
||||
the Kubernetes API rather than directly through the container runtime (this advice applies
|
||||
for all container runtimes, not just Docker).
|
||||
-->
|
||||
{{< note >}}
|
||||
|
||||
如果你用 Kubernetes 运行工作负载,最好通过 Kubernetes API 停止容器,而不是直接通过容器运行时
|
||||
(此建议适用于所有容器运行时,不仅仅是针对 Docker)。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
You can still pull images or build them using `docker build` command. But images
|
||||
built or pulled by Docker would not be visible to container runtime and
|
||||
Kubernetes. They needed to be pushed to some registry to allow them to be used
|
||||
by Kubernetes.
|
||||
-->
|
||||
你仍然可以下载镜像,或者用 `docker build` 命令创建它们。
|
||||
但用 Docker 创建、下载的镜像,对于容器运行时和 Kubernetes,均不可见。
|
||||
为了能在 Kubernetes 中使用,这些镜像需要被推送(push)到某个镜像仓库(registry)。
|
|
@ -0,0 +1,157 @@
|
|||
---
|
||||
title: 从 dockershim 迁移遥测和安全代理
|
||||
content_type: task
|
||||
weight: 70
|
||||
---
|
||||
<!--
|
||||
title: Migrating telemetry and security agents from dockershim
|
||||
content_type: task
|
||||
reviewers:
|
||||
- SergeyKanzhelev
|
||||
weight: 70
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
With Kubernetes 1.20 dockershim was deprecated. From the
|
||||
[Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
|
||||
you might already know that most apps do not have a direct dependency on runtime hosting
|
||||
containers. However, there are still a lot of telemetry and security agents
|
||||
that have a dependency on Docker to collect container metadata, logs and
|
||||
metrics. This document aggregates information on how to detect these
|
||||
dependencies and links on how to migrate these agents to use generic tools or
|
||||
alternative runtimes.
|
||||
-->
|
||||
在 Kubernetes 1.20 版本中,dockershim 被弃用。
|
||||
在博文[弃用 Dockershim 常见问题](/zh/blog/2020/12/02/dockershim-faq/)中,
|
||||
你大概已经了解到,大多数应用并没有直接通过运行时来托管容器。
|
||||
但是,仍然有大量的遥测和安全代理依赖 docker 来收集容器元数据、日志和指标。
|
||||
本文汇总了一些信息和链接:信息用于阐述如何探查这些依赖,链接用于解释如何迁移这些代理去使用通用的工具或其他容器运行时。
|
||||
|
||||
<!--
|
||||
## Telemetry and security agents
|
||||
-->
|
||||
## 遥测和安全代理 {#telemetry-and-security-agents}
|
||||
|
||||
<!--
|
||||
There are a few ways agents may run on Kubernetes cluster. Agents may run on
|
||||
nodes directly or as DaemonSets.
|
||||
-->
|
||||
为了让代理运行在 Kubernetes 集群中,我们有几种办法。
|
||||
代理既可以直接在节点上运行,也可以作为 DaemonSet 运行。
|
||||
|
||||
<!--
|
||||
### Why do telemetry agents rely on Docker?
|
||||
-->
|
||||
### 为什么遥测代理依赖于 Docker? {#why-do-telemetry-agents-relyon-docker}
|
||||
|
||||
<!--
|
||||
Historically, Kubernetes was built on top of Docker. Kubernetes is managing
|
||||
networking and scheduling, Docker was placing and operating containers on a
|
||||
node. So you can get scheduling-related metadata like a pod name from Kubernetes
|
||||
and containers state information from Docker. Over time more runtimes were
|
||||
created to manage containers. Also there are projects and Kubernetes features
|
||||
that generalize container status information extraction across many runtimes.
|
||||
-->
|
||||
因为历史原因,Kubernetes 建立在 Docker 之上。
|
||||
Kubernetes 负责管理网络和调度,Docker 则负责在具体节点上放置并运行容器。
|
||||
所以,你可以从 Kubernetes 取得调度相关的元数据,比如 Pod 名称;从 Docker 取得容器状态信息。
|
||||
后来,人们开发了更多的运行时来管理容器。
|
||||
同时一些项目和 Kubernetes 特性也不断涌现,支持跨多个运行时收集容器状态信息。
|
||||
|
||||
<!--
|
||||
Some agents are tied specifically to the Docker tool. The agents may run
|
||||
commands like [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
|
||||
or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list
|
||||
containers and processes or [docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
|
||||
to subscribe on docker logs. With the deprecating of Docker as a container runtime,
|
||||
these commands will not work any longer.
|
||||
-->
|
||||
一些代理和 Docker 工具紧密绑定。此类代理可能会运行类似下面的命令,比如用
|
||||
[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
|
||||
或 [`docker top`](https://docs.docker.com/engine/reference/commandline/top/)
|
||||
这类命令来列出容器和进程,用
|
||||
[docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
|
||||
订阅 Docker 的日志。
|
||||
但随着 Docker 作为容器运行时被弃用,这些命令将不再工作。
|
||||
|
||||
<!--
|
||||
### Identify DaemonSets that depend on Docker {#identify-docker-dependency }
|
||||
-->
|
||||
### 识别依赖于 Docker 的 DaemonSet {#identify-docker-dependency}
|
||||
|
||||
<!--
|
||||
If a pod wants to make calls to the `dockerd` running on the node, the pod must either:
|
||||
|
||||
- mount the filesystem containing the Docker daemon's privileged socket, as a
|
||||
{{< glossary_tooltip text="volume" term_id="volume" >}}; or
|
||||
- mount the specific path of the Docker daemon's privileged socket directly, also as a volume.
|
||||
-->
|
||||
如果某 Pod 想调用运行在节点上的 `dockerd`,该 Pod 必须满足以下两个条件之一:
|
||||
|
||||
- 将包含 Docker 守护进程特权套接字的文件系统挂载为一个{{< glossary_tooltip text="卷" term_id="volume" >}};或
|
||||
- 直接以卷的形式挂载 Docker 守护进程特权套接字的特定路径。
|
||||
|
||||
<!--
|
||||
For example: on COS images, Docker exposes its Unix domain socket at
|
||||
`/var/run/docker.sock` This means that the pod spec will include a
|
||||
`hostPath` volume mount of `/var/run/docker.sock`.
|
||||
-->
|
||||
举例来说:在 COS 镜像中,Docker 通过 `/var/run/docker.sock` 开放其 Unix 域套接字。
|
||||
这意味着 Pod 的规约中需要包含 `hostPath` 卷以挂载 `/var/run/docker.sock`。
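
作为示意,下面是一个假设性的 Pod 规约,它以 `hostPath` 卷的形式直接挂载了 Docker 套接字,因而属于这里所讨论的依赖情形(名称与镜像均为虚构):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-agent           # 虚构的名称,仅作示例
spec:
  containers:
  - name: agent
    image: example.com/agent:1.0   # 虚构的镜像
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
```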
|
||||
|
||||
<!--
|
||||
Here's a sample shell script to find Pods that have a mount directly mapping the
|
||||
Docker socket. This script outputs the namespace and name of the pod. You can
|
||||
remove the grep `/var/run/docker.sock` to review other mounts.
|
||||
-->
|
||||
下面是一个 shell 示例脚本,用于查找包含直接映射 Docker 套接字的挂载点的 Pod。该脚本会输出这些 Pod 的命名空间和名称。
|
||||
你也可以删掉 grep `/var/run/docker.sock` 这一代码片段以查看其它挂载信息。
|
||||
|
||||
```bash
|
||||
kubectl get pods --all-namespaces \
|
||||
-o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \
|
||||
| sort \
|
||||
| grep '/var/run/docker.sock'
|
||||
```
|
||||
|
||||
<!--
|
||||
There are alternative ways for a pod to access Docker on the host. For instance, the parent
|
||||
directory `/var/run` may be mounted instead of the full path (like in [this
|
||||
example](https://gist.github.com/itaysk/7bc3e56d69c4d72a549286d98fd557dd)).
|
||||
The script above only detects the most common uses.
|
||||
-->
|
||||
{{< note >}}
|
||||
对于 Pod 来说,访问宿主机上的 Docker 还有其他方式。
|
||||
例如,可以挂载 `/var/run` 的父目录而非其完整路径
|
||||
(就像[这个例子](https://gist.github.com/itaysk/7bc3e56d69c4d72a549286d98fd557dd))。
|
||||
上述脚本只检测最常见的使用方式。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Detecting Docker dependency from node agents
|
||||
-->
|
||||
### 检测节点代理对 Docker 的依赖性 {#detecting-docker-dependency-from-node-agents}
|
||||
|
||||
<!--
|
||||
In case your cluster nodes are customized and install additional security and
|
||||
telemetry agents on the node, make sure to check with the vendor of the agent whether it has dependency on Docker.
|
||||
-->
|
||||
在你的集群节点被定制、且在各个节点上均安装了额外的安全和遥测代理的场景下,
|
||||
一定要和代理的供应商确认:该代理是否依赖于 Docker。
|
||||
|
||||
<!--
|
||||
### Telemetry and security agent vendors
|
||||
-->
|
||||
### 遥测和安全代理的供应商 {#telemetry-and-security-agent-vendors}
|
||||
|
||||
<!--
|
||||
We keep the work in progress version of migration instructions for various telemetry and security agent vendors
|
||||
in [Google doc](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#).
|
||||
Please contact the vendor to get up to date instructions for migrating from dockershim.
|
||||
-->
|
||||
我们通过
|
||||
[谷歌文档](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#)
|
||||
提供了为各类遥测和安全代理供应商准备的持续更新的迁移指导。
|
||||
请与供应商联系,获取从 dockershim 迁移的最新说明。
|
|
@ -92,25 +92,37 @@ Support for the Topology Manager requires `TopologyManager` [feature gate](/docs
|
|||
从 Kubernetes 1.18 版本开始,这一特性默认是启用的。
|
||||
|
||||
<!--
|
||||
### Topology Manager Policies
|
||||
### Topology Manager Scopes and Policies
|
||||
|
||||
The Topology Manager currently:
|
||||
|
||||
- Aligns Pods of all QoS classes.
|
||||
- Aligns the requested resources that Hint Provider provides topology hints for.
|
||||
-->
|
||||
### 拓扑管理器策略
|
||||
### 拓扑管理器作用域和策略
|
||||
|
||||
拓扑管理器目前:
|
||||
|
||||
- 对所有 QoS 类的 Pod 执行对齐操作
|
||||
- 针对建议提供者所提供的拓扑建议,对请求的资源进行对齐
|
||||
|
||||
<!--
|
||||
If these conditions are met, Topology Manager will align the requested resources.
|
||||
If these conditions are met, the Topology Manager will align the requested resources.
|
||||
|
||||
In order to customise how this alignment is carried out, the Topology Manager provides two distinct knobs: `scope` and `policy`.
|
||||
-->
|
||||
如果满足这些条件,则拓扑管理器将对齐请求的资源。
|
||||
|
||||
为了定制如何进行对齐,拓扑管理器提供了两种不同的方式:`scope` 和 `policy`。
|
||||
|
||||
<!--
|
||||
The `scope` defines the granularity at which you would like resource alignment to be performed (e.g. at the `pod` or `container` level). And the `policy` defines the actual strategy used to carry out the alignment (e.g. `best-effort`, `restricted`, `single-numa-node`, etc.).
|
||||
|
||||
Details on the various `scopes` and `policies` available today can be found below.
|
||||
-->
|
||||
`scope` 定义了资源对齐时你所希望使用的粒度(例如,是在 `pod` 还是 `container` 级别)。
|
||||
`policy` 定义了对齐时实际使用的策略(例如,`best-effort`、`restricted`、`single-numa-node` 等等)。
|
||||
|
||||
可以在下文找到现今可用的各种 `scopes` 和 `policies` 的具体信息。
|
||||
|
||||
<!--
|
||||
To align CPU resources with other requested resources in a Pod Spec, the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
|
||||
-->
|
||||
|
@ -120,6 +132,117 @@ To align CPU resources with other requested resources in a Pod Spec, the CPU Man
|
|||
参看[控制 CPU 管理策略](/zh/docs/tasks/administer-cluster/cpu-management-policies/).
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Topology Manager Scopes
|
||||
|
||||
The Topology Manager can deal with the alignment of resources in a couple of distinct scopes:
|
||||
|
||||
* `container` (default)
|
||||
* `pod`
|
||||
|
||||
Either option can be selected at a time of the kubelet startup, with `--topology-manager-scope` flag.
|
||||
-->
|
||||
### 拓扑管理器作用域
|
||||
|
||||
拓扑管理器可以在以下不同的作用域内进行资源对齐:
|
||||
|
||||
* `container` (默认)
|
||||
* `pod`
|
||||
|
||||
在 kubelet 启动时,可以使用 `--topology-manager-scope` 标志来选择其中任一选项。
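
除了命令行标志之外,也可以在 kubelet 的配置文件中设置作用域与策略。下面是一个示意性的 KubeletConfiguration 片段(取值仅为示例,策略将在下文介绍):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 等价于 --topology-manager-scope=pod(默认为 container)
topologyManagerScope: pod
# 等价于 --topology-manager-policy=single-numa-node
topologyManagerPolicy: single-numa-node
```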
|
||||
|
||||
<!--
|
||||
### container scope
|
||||
|
||||
The `container` scope is used by default.
|
||||
-->
|
||||
### 容器作用域
|
||||
|
||||
默认使用的是 `container` 作用域。
|
||||
|
||||
<!--
|
||||
Within this scope, the Topology Manager performs a number of sequential resource alignments, i.e., for each container (in a pod) a separate alignment is computed. In other words, there is no notion of grouping the containers to a specific set of NUMA nodes, for this particular scope. In effect, the Topology Manager performs an arbitrary alignment of individual containers to NUMA nodes.
|
||||
-->
|
||||
在该作用域内,拓扑管理器依次进行一系列的资源对齐,
|
||||
也就是,对每一个容器(包含在一个 Pod 里)计算单独的对齐。
|
||||
换句话说,在该特定的作用域内,没有根据特定的 NUMA 节点集来把容器分组的概念。
|
||||
实际上,拓扑管理器会把单个容器任意地对齐到 NUMA 节点上。
|
||||
|
||||
<!--
|
||||
The notion of grouping the containers was endorsed and implemented on purpose in the following scope, for example the `pod` scope.
|
||||
-->
|
||||
把容器按组处理的概念是在下文所述的 `pod` 作用域中特意实现的。
|
||||
|
||||
<!--
|
||||
### pod scope
|
||||
|
||||
To select the `pod` scope, start the kubelet with the command line option `--topology-manager-scope=pod`.
|
||||
-->
|
||||
### Pod 作用域
|
||||
|
||||
使用命令行选项 `--topology-manager-scope=pod` 来启动 kubelet,就可以选择 `pod` 作用域。
|
||||
|
||||
<!--
|
||||
This scope allows for grouping all containers in a pod to a common set of NUMA nodes. That is, the Topology Manager treats a pod as a whole and attempts to allocate the entire pod (all containers) to either a single NUMA node or a common set of NUMA nodes. The following examples illustrate the alignments produced by the Topology Manager on different occasions:
|
||||
-->
|
||||
该作用域允许把一个 Pod 里的所有容器作为一个分组,分配到一个共同的 NUMA 节点集。
|
||||
也就是,拓扑管理器会把一个 Pod 当成一个整体,
|
||||
并且试图把整个 Pod(所有容器)分配到一个单个的 NUMA 节点或者一个共同的 NUMA 节点集。
|
||||
以下的例子说明了拓扑管理器在不同的场景下使用的对齐方式:
|
||||
|
||||
<!--
|
||||
* all containers can be and are allocated to a single NUMA node;
|
||||
* all containers can be and are allocated to a shared set of NUMA nodes.
|
||||
-->
|
||||
* 所有容器可以被分配到一个单一的 NUMA 节点;
|
||||
* 所有容器可以被分配到一个共享的 NUMA 节点集。
|
||||
|
||||
<!--
|
||||
The total amount of particular resource demanded for the entire pod is calculated according to [effective requests/limits](/docs/concepts/workloads/pods/init-containers/#resources) formula, and thus, this total value is equal to the maximum of:
|
||||
* the sum of all app container requests,
|
||||
* the maximum of init container requests,
|
||||
for a resource.
|
||||
-->
|
||||
整个 Pod 所请求的某种资源总量是根据
|
||||
[有效 request/limit](/zh/docs/concepts/workloads/pods/init-containers/#resources)
|
||||
公式来计算的,
|
||||
因此,对某一种资源而言,该总量等于以下数值中的最大值:
|
||||
* 所有应用容器请求之和;
|
||||
* 初始容器请求的最大值。
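
举例来说(数值仅为假设):若初始容器请求 2 个 CPU,两个应用容器各请求 1 个 CPU,则应用容器请求之和为 2,初始容器请求的最大值也为 2,因此整个 Pod 的有效 CPU 请求为 max(2, 2) = 2。下面给出与之对应的示意性 Pod 规约:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-demo                 # 假设的名称
spec:
  initContainers:
  - name: init
    image: busybox                # 假设的镜像
    command: ["sh", "-c", "true"]
    resources:
      requests:
        cpu: "2"                  # 初始容器请求的最大值 = 2
  containers:
  - name: app-1
    image: nginx
    resources:
      requests:
        cpu: "1"
  - name: app-2
    image: nginx
    resources:
      requests:
        cpu: "1"                  # 应用容器请求之和 = 1 + 1 = 2
  # 整个 Pod 的有效 CPU 请求 = max(2, 2) = 2
```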
|
||||
|
||||
<!--
|
||||
Using the `pod` scope in tandem with `single-numa-node` Topology Manager policy is specifically valuable for workloads that are latency sensitive or for high-throughput applications that perform IPC. By combining both options, you are able to place all containers in a pod onto a single NUMA node; hence, the inter-NUMA communication overhead can be eliminated for that pod.
|
||||
-->
|
||||
`pod` 作用域与 `single-numa-node` 拓扑管理器策略一起使用,
|
||||
对于延时敏感的工作负载,或者对于进行 IPC 的高吞吐量应用程序,都是特别有价值的。
|
||||
把这两个选项组合起来,你可以把一个 Pod 里的所有容器都放到一个单个的 NUMA 节点,
|
||||
使得该 Pod 消除了 NUMA 之间的通信开销。
|
||||
|
||||
<!--
|
||||
In the case of `single-numa-node` policy, a pod is accepted only if a suitable set of NUMA nodes is present among possible allocations. Reconsider the example above:
|
||||
-->
|
||||
在 `single-numa-node` 策略下,只有当可能的分配方案中存在合适的 NUMA 节点集时,Pod 才会被接受。
|
||||
重新考虑上述的例子:
|
||||
|
||||
<!--
|
||||
* a set containing only a single NUMA node - it leads to pod being admitted,
|
||||
* whereas a set containing more NUMA nodes - it results in pod rejection (because instead of one NUMA node, two or more NUMA nodes are required to satisfy the allocation).
|
||||
-->
|
||||
* 节点集只包含单个 NUMA 节点时,Pod 就会被接受,
|
||||
* 然而,节点集包含多个 NUMA 节点时,Pod 就会被拒绝
|
||||
(因为满足该分配方案需要两个或以上的 NUMA 节点,而不是单个 NUMA 节点)。
|
||||
|
||||
<!--
|
||||
To recap, Topology Manager first computes a set of NUMA nodes and then tests it against Topology Manager policy, which either leads to the rejection or admission of the pod.
|
||||
-->
|
||||
简要地说,拓扑管理器首先计算出 NUMA 节点集,然后使用拓扑管理器策略来测试该集合,
|
||||
从而决定拒绝或者接受 Pod。
|
||||
|
||||
<!--
|
||||
### Topology Manager Policies
|
||||
-->
|
||||
### 拓扑管理器策略
|
||||
|
||||
<!--
|
||||
Topology Manager supports four allocation policies. You can set a policy via a Kubelet flag, `--topology-manager-policy`.
|
||||
There are four supported policies:
|
||||
|
@ -138,6 +261,17 @@ There are four supported policies:
|
|||
* `restricted`
|
||||
* `single-numa-node`
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
If Topology Manager is configured with the **pod** scope, the container, which is considered by the policy, is reflecting requirements of the entire pod, and thus each container from the pod will result with **the same** topology alignment decision.
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
如果拓扑管理器配置使用 **Pod** 作用域,
|
||||
那么在策略考量一个容器时,该容器反映的是整个 Pod 的要求,
|
||||
于是该 Pod 里的每个容器都会得到 **相同的** 拓扑对齐决定。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### none policy {#policy-none}
|
||||
|
||||
|
|
|
@ -512,23 +512,71 @@ Defaults to 3. Minimum value is 1.
|
|||
就绪探测情况下的放弃 Pod 会被打上未就绪的标签。默认值是 3。最小值是 1。
|
||||
|
||||
<!--
|
||||
Before Kubernetes 1.20, the field `timeoutSeconds` was not respected for exec probes:
|
||||
probes continued running indefinitely, even past their configured deadline,
|
||||
until a result was returned.
|
||||
-->
|
||||
在 Kubernetes 1.20 版本之前,exec 探针会忽略 `timeoutSeconds`:探针会无限期地
|
||||
持续运行,甚至可能超过所配置的限期,直到返回结果为止。
|
||||
|
||||
<!--
|
||||
This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior,
|
||||
even without realizing it, as the default timeout is 1 second.
|
||||
As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`)
|
||||
on each kubelet to restore the behavior from older versions, then remove that override
|
||||
once all the exec probes in the cluster have a `timeoutSeconds` value set.
|
||||
If you have pods that are impacted from the default 1 second timeout,
|
||||
you should update their probe timeout so that you're ready for the
|
||||
eventual removal of that feature gate.
|
||||
-->
|
||||
这一缺陷在 Kubernetes v1.20 版本中得到修复。你可能一直依赖于之前错误的探测行为,
|
||||
甚至你都没有觉察到这一问题的存在,因为默认的超时值是 1 秒钟。
|
||||
作为集群管理员,你可以在所有的 kubelet 上禁用 `ExecProbeTimeout`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
(将其设置为 `false`),从而恢复之前版本中的运行行为,之后当集群中所有的
|
||||
exec 探针都设置了 `timeoutSeconds` 参数后,再移除这一重载配置。
|
||||
如果你有 Pods 受到此默认 1 秒钟超时值的影响,你应该更新 Pod 对应的探针的
|
||||
超时值,这样才能为最终去除该特性门控做好准备。
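
下面是一个为 exec 探针显式设置 `timeoutSeconds` 的简单示例(命令与取值仅为示意):

```yaml
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 10
  # 显式设置超时时间,而不是依赖默认的 1 秒
  timeoutSeconds: 5
```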
|
||||
|
||||
<!--
|
||||
With the fix of the defect, for exec probes, on Kubernetes `1.20+` with the `dockershim` container runtime,
|
||||
the process inside the container may keep running even after probe returned failure because of the timeout.
|
||||
-->
|
||||
当此缺陷被修复之后,在使用 `dockershim` 容器运行时的 Kubernetes `1.20+`
|
||||
版本中,对于 exec 探针而言,即使探针因超时而返回失败,
|
||||
容器中的进程也可能仍在继续运行。
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Incorrect implementation of readiness probes may result in an ever growing number
|
||||
of processes in the container, and resource starvation if this is left unchecked.
|
||||
-->
|
||||
如果就绪态探针的实现不正确,可能会导致容器中进程的数量不断上升。
|
||||
如果不对其采取措施,很可能导致资源枯竭的状况。
|
||||
{{< /caution >}}
|
||||
|
||||
<!--
|
||||
### HTTP probes
|
||||
|
||||
[HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
|
||||
have additional fields that can be set on `httpGet`:
|
||||
|
||||
* `host`: Host name to connect to, defaults to the pod IP. You probably want to
|
||||
set "Host" in httpHeaders instead.
|
||||
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
|
||||
* `path`: Path to access on the HTTP server.
|
||||
* `path`: Path to access on the HTTP server. Defaults to /.
|
||||
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
|
||||
* `port`: Name or number of the port to access on the container. Number must be
|
||||
in the range 1 to 65535.
|
||||
-->
|
||||
### HTTP 探测 {#http-probes}
|
||||
|
||||
[HTTP Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
|
||||
可以在 `httpGet` 上配置额外的字段:
|
||||
|
||||
* `host`:连接使用的主机名,默认是 Pod 的 IP。也可以在 HTTP 头中设置 “Host” 来代替。
|
||||
* `scheme`:用于设置连接主机的方式(HTTP 还是 HTTPS)。默认是 HTTP。
|
||||
* `path`:访问 HTTP 服务的路径。
|
||||
* `path`:访问 HTTP 服务的路径。默认值为 "/"。
|
||||
* `httpHeaders`:请求中自定义的 HTTP 头。HTTP 头字段允许重复。
|
||||
* `port`:访问容器的端口号或者端口名。如果是端口号,则必须在 1 到 65535 之间。
|
||||
|
||||
|
@ -542,10 +590,6 @@ Here's one scenario where you would set it. Suppose the Container listens on 127
|
|||
and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should be set
|
||||
to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common
|
||||
case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.
|
||||
|
||||
For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which
|
||||
means that you can not use a service name in the `host` parameter since the kubelet is unable
|
||||
to resolve it.
|
||||
-->
|
||||
对于 HTTP 探测,kubelet 发送一个 HTTP 请求到指定的路径和端口来执行检测。
|
||||
除非 `httpGet` 中的 `host` 字段设置了,否则 kubelet 默认是给 Pod 的 IP 地址发送探测。
|
||||
|
@ -556,6 +600,61 @@ to resolve it.
|
|||
可能更常见的情况是如果 Pod 依赖虚拟主机,你不应该设置 `host` 字段,而是应该在
|
||||
`httpHeaders` 中设置 `Host`。
|
||||
|
||||
<!--
|
||||
For an HTTP probe, the kubelet sends two request headers in addition to the mandatory `Host` header:
|
||||
`User-Agent`, and `Accept`. The default values for these headers are `kube-probe/{{< skew latestVersion >}}`
|
||||
(where `{{< skew latestVersion >}}` is the version of the kubelet ), and `*/*` respectively.
|
||||
|
||||
You can override the default headers by defining `.httpHeaders` for the probe; for example
|
||||
-->
|
||||
针对 HTTP 探针,kubelet 除了必需的 `Host` 头部之外还发送两个请求头部字段:
|
||||
`User-Agent` 和 `Accept`。这些头部的默认值分别是 `kube-probe/{{< skew latestVersion >}}`
|
||||
(其中 `{{< skew latestVersion >}}` 是 kubelet 的版本号)和 `*/*`。
|
||||
|
||||
你可以通过为探测设置 `.httpHeaders` 来重载默认的头部字段值;例如:
|
||||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: Accept
|
||||
value: application/json
|
||||
|
||||
startupProbe:
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: User-Agent
|
||||
value: MyUserAgent
|
||||
```
|
||||
|
||||
<!--
|
||||
You can also remove these two headers by defining them with an empty value.
|
||||
-->
|
||||
你也可以通过将这些头部字段定义为空值,从请求中去掉这些头部字段。
|
||||
|
||||
```yaml
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: Accept
|
||||
value: ""
|
||||
|
||||
startupProbe:
|
||||
httpGet:
|
||||
httpHeaders:
|
||||
- name: User-Agent
|
||||
value: ""
|
||||
```
|
||||
|
||||
<!--
|
||||
### TCP probes
|
||||
|
||||
For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which
|
||||
means that you can not use a service name in the `host` parameter since the kubelet is unable
|
||||
to resolve it.
|
||||
-->
|
||||
### TCP 探测 {#tcp-probes}
|
||||
|
||||
对于一次 TCP 探测,kubelet 在节点上(不是在 Pod 里面)建立探测连接,
|
||||
这意味着你不能在 `host` 参数上配置服务名称,因为 kubelet 不能解析服务名称。
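
下面是一个 TCP 探针的简单示例(端口取值仅为示意):

```yaml
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```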
|
||||
|
||||
|
|
|
@ -1,14 +1,15 @@
|
|||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: hello
|
||||
name: backend
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hello
|
||||
tier: backend
|
||||
track: stable
|
||||
replicas: 7
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
|
@ -22,3 +23,4 @@ spec:
|
|||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
||||
...
|
|
@ -1,3 +1,4 @@
|
|||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
|
@ -10,3 +11,4 @@ spec:
|
|||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: http
|
||||
...
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hello
|
||||
tier: frontend
|
||||
track: stable
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hello
|
||||
tier: frontend
|
||||
track: stable
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: "gcr.io/google-samples/hello-frontend:1.0"
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command: ["/usr/sbin/nginx","-s","quit"]
|
||||
...
|
|
@ -0,0 +1,14 @@
|
|||
# The identifier Backend is internal to nginx, and used to name this specific upstream
|
||||
upstream Backend {
|
||||
# hello is the internal DNS name used by the backend Service inside Kubernetes
|
||||
server hello;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
|
||||
location / {
|
||||
# The following statement will proxy traffic to the upstream named Backend
|
||||
proxy_pass http://Backend;
|
||||
}
|
||||
}
|
|
@ -0,0 +1,15 @@
|
|||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: frontend
|
||||
spec:
|
||||
selector:
|
||||
app: hello
|
||||
tier: frontend
|
||||
ports:
|
||||
- protocol: "TCP"
|
||||
port: 80
|
||||
targetPort: 80
|
||||
type: LoadBalancer
|
||||
...
|