Merge pull request #26915 from PI-Victor/merged-master-dev-1.21

Merge master into dev-1.21

This commit is contained in: commit a65f9ac6f4
@@ -114,7 +114,7 @@ will have strictly better performance and less overhead. However, we encourage y
 to explore all the options from the [CNCF landscape] in case another would be an
 even better fit for your environment.
 
-[CNCF landscape]: https://landscape.cncf.io/category=container-runtime&format=card-mode&grouping=category
+[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category
 
 ### What should I look out for when changing CRI implementations?
 
@@ -45,7 +45,7 @@ Before choosing a guide, here are some considerations:
 
 ## Securing a cluster
 
-* [Certificates](/docs/concepts/cluster-administration/certificates/) describes the steps to generate certificates using different tool chains.
+* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains.
 
 * [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.
 
@@ -4,249 +4,6 @@ content_type: concept
 weight: 20
 ---
 
 <!-- overview -->
 
-When using client certificate authentication, you can generate certificates
-manually through `easyrsa`, `openssl` or `cfssl`.
-
-<!-- body -->
-
-### easyrsa
-
-**easyrsa** can manually generate certificates for your cluster.
-
-1. Download, unpack, and initialize the patched version of easyrsa3.
-
-        curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
-        tar xzf easy-rsa.tar.gz
-        cd easy-rsa-master/easyrsa3
-        ./easyrsa init-pki
-
-1. Generate a new certificate authority (CA). `--batch` sets automatic mode;
-   `--req-cn` specifies the Common Name (CN) for the CA's new root certificate.
-
-        ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
-
-1. Generate server certificate and key.
-
-   The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
-   be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
-   that is specified as the `--service-cluster-ip-range` argument for both the API server and
-   the controller manager component. The argument `--days` is used to set the number of days
-   after which the certificate expires.
-   The sample below also assumes that you are using `cluster.local` as the default
-   DNS domain name.
-
-        ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
-        "IP:${MASTER_CLUSTER_IP},"\
-        "DNS:kubernetes,"\
-        "DNS:kubernetes.default,"\
-        "DNS:kubernetes.default.svc,"\
-        "DNS:kubernetes.default.svc.cluster,"\
-        "DNS:kubernetes.default.svc.cluster.local" \
-        --days=10000 \
-        build-server-full server nopass
-
-1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.
-
-1. Fill in and add the following parameters into the API server start parameters:
-
-        --client-ca-file=/yourdirectory/ca.crt
-        --tls-cert-file=/yourdirectory/server.crt
-        --tls-private-key-file=/yourdirectory/server.key
-
-### openssl
-
-**openssl** can manually generate certificates for your cluster.
-
-1. Generate a ca.key with 2048bit:
-
-        openssl genrsa -out ca.key 2048
-
-1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time):
-
-        openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
-
-1. Generate a server.key with 2048bit:
-
-        openssl genrsa -out server.key 2048
-
-1. Create a config file for generating a Certificate Signing Request (CSR).
-   Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
-   with real values before saving this to a file (e.g. `csr.conf`).
-   Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
-   API server as described in previous subsection.
-   The sample below also assumes that you are using `cluster.local` as the default
-   DNS domain name.
-
-        [ req ]
-        default_bits = 2048
-        prompt = no
-        default_md = sha256
-        req_extensions = req_ext
-        distinguished_name = dn
-
-        [ dn ]
-        C = <country>
-        ST = <state>
-        L = <city>
-        O = <organization>
-        OU = <organization unit>
-        CN = <MASTER_IP>
-
-        [ req_ext ]
-        subjectAltName = @alt_names
-
-        [ alt_names ]
-        DNS.1 = kubernetes
-        DNS.2 = kubernetes.default
-        DNS.3 = kubernetes.default.svc
-        DNS.4 = kubernetes.default.svc.cluster
-        DNS.5 = kubernetes.default.svc.cluster.local
-        IP.1 = <MASTER_IP>
-        IP.2 = <MASTER_CLUSTER_IP>
-
-        [ v3_ext ]
-        authorityKeyIdentifier=keyid,issuer:always
-        basicConstraints=CA:FALSE
-        keyUsage=keyEncipherment,dataEncipherment
-        extendedKeyUsage=serverAuth,clientAuth
-        subjectAltName=@alt_names
-
-1. Generate the certificate signing request based on the config file:
-
-        openssl req -new -key server.key -out server.csr -config csr.conf
-
-1. Generate the server certificate using the ca.key, ca.crt and server.csr:
-
-        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-        -CAcreateserial -out server.crt -days 10000 \
-        -extensions v3_ext -extfile csr.conf
-
-1. View the certificate:
-
-        openssl x509 -noout -text -in ./server.crt
-
-Finally, add the same parameters into the API server start parameters.
-
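The openssl CA steps above can be exercised end to end on any machine with OpenSSL installed. The sketch below is illustrative only: `203.0.113.10` is a documentation-range placeholder standing in for `${MASTER_IP}`, and nothing here configures a real cluster.

```shell
# Minimal sketch of the first two openssl steps above, with a
# placeholder address in place of ${MASTER_IP} (assumption, not a real IP).
MASTER_IP=203.0.113.10
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
# Inspect the subject of the resulting CA certificate.
openssl x509 -noout -subject -in ca.crt
```

The same `-subj` and `-days` flags carry over unchanged once you substitute a real API server address.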
-### cfssl
-
-**cfssl** is another tool for certificate generation.
-
-1. Download, unpack and prepare the command line tools as shown below.
-   Note that you may need to adapt the sample commands based on the hardware
-   architecture and cfssl version you are using.
-
-        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
-        chmod +x cfssl
-        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
-        chmod +x cfssljson
-        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
-        chmod +x cfssl-certinfo
-
-1. Create a directory to hold the artifacts and initialize cfssl:
-
-        mkdir cert
-        cd cert
-        ../cfssl print-defaults config > config.json
-        ../cfssl print-defaults csr > csr.json
-
-1. Create a JSON config file for generating the CA file, for example, `ca-config.json`:
-
-        {
-          "signing": {
-            "default": {
-              "expiry": "8760h"
-            },
-            "profiles": {
-              "kubernetes": {
-                "usages": [
-                  "signing",
-                  "key encipherment",
-                  "server auth",
-                  "client auth"
-                ],
-                "expiry": "8760h"
-              }
-            }
-          }
-        }
-
-1. Create a JSON config file for CA certificate signing request (CSR), for example,
-   `ca-csr.json`. Be sure to replace the values marked with angle brackets with
-   real values you want to use.
-
-        {
-          "CN": "kubernetes",
-          "key": {
-            "algo": "rsa",
-            "size": 2048
-          },
-          "names":[{
-            "C": "<country>",
-            "ST": "<state>",
-            "L": "<city>",
-            "O": "<organization>",
-            "OU": "<organization unit>"
-          }]
-        }
-
-1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):
-
-        ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
-
-1. Create a JSON config file for generating keys and certificates for the API
-   server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
-   real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster
-   IP for the API server as described in previous subsection.
-   The sample below also assumes that you are using `cluster.local` as the default
-   DNS domain name.
-
-        {
-          "CN": "kubernetes",
-          "hosts": [
-            "127.0.0.1",
-            "<MASTER_IP>",
-            "<MASTER_CLUSTER_IP>",
-            "kubernetes",
-            "kubernetes.default",
-            "kubernetes.default.svc",
-            "kubernetes.default.svc.cluster",
-            "kubernetes.default.svc.cluster.local"
-          ],
-          "key": {
-            "algo": "rsa",
-            "size": 2048
-          },
-          "names": [{
-            "C": "<country>",
-            "ST": "<state>",
-            "L": "<city>",
-            "O": "<organization>",
-            "OU": "<organization unit>"
-          }]
-        }
-
-1. Generate the key and certificate for the API server, which are by default
-   saved into file `server-key.pem` and `server.pem` respectively:
-
-        ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
-        --config=ca-config.json -profile=kubernetes \
-        server-csr.json | ../cfssljson -bare server
-
-## Distributing Self-Signed CA Certificate
-
-A client node may refuse to recognize a self-signed CA certificate as valid.
-For a non-production deployment, or for a deployment that runs behind a company
-firewall, you can distribute a self-signed CA certificate to all clients and
-refresh the local list for valid certificates.
-
-On each client, perform the following operations:
-
-```bash
-sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
-sudo update-ca-certificates
-```
-
-```
-Updating certificates in /etc/ssl/certs...
-1 added, 0 removed; done.
-Running hooks in /etc/ca-certificates/update.d....
-done.
-```
-
-## Certificates API
-
-You can use the `certificates.k8s.io` API to provision
-x509 certificates to use for authentication as documented
-[here](/docs/tasks/tls/managing-tls-in-a-cluster).
-
+To learn how to generate certificates for your cluster, see [Certificates](/docs/tasks/administer-cluster/certificates/).
@@ -47,7 +47,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
 
 It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
 
-A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
+A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
 
 ```shell
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
@@ -718,7 +718,7 @@ spec:
 
 #### Consuming Secret Values from environment variables
 
-Inside a container that consumes a secret in an environment variables, the secret keys appear as
+Inside a container that consumes a secret in the environment variables, the secret keys appear as
 normal environment variables containing the base64 decoded values of the secret data.
 This is the result of commands executed inside the container from the example above:
 
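The encode/decode round trip behind this hunk can be seen with plain shell tools: Secret `data` values are stored base64-encoded, while the container sees the decoded value in its environment. The value `admin` below is a hypothetical stand-in, not taken from the example manifest.

```shell
# "admin" is a hypothetical secret value; Secret data stores it base64-encoded.
encoded=$(printf '%s' 'admin' | base64)
echo "$encoded"                      # YWRtaW4=
# The container's environment variable holds the decoded form.
printf '%s' "$encoded" | base64 --decode
```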
@@ -40,6 +40,7 @@ as are any environment variables specified statically in the Docker image.
 ### Cluster information
+
 A list of all services that were running when a Container was created is available to that Container as environment variables.
 This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services.
 Those environment variables match the syntax of Docker links.
 
 For a service named *foo* that maps to a Container named *bar*,
@@ -28,9 +28,7 @@ The most common way to implement the APIService is to run an *extension API serv
 Extension API servers should have low latency networking to and from the kube-apiserver.
 Discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
 
-If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it. You can also set the
-`EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver
-to disable the timeout restriction. This deprecated feature gate will be removed in a future release.
+If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it.
 
 ## {{% heading "whatsnext" %}}
 
@@ -51,11 +51,11 @@ the same machine, and do not run user containers on this machine. See
 
 {{< glossary_definition term_id="kube-controller-manager" length="all" >}}
 
-These controllers include:
+Some types of these controllers are:
 
 * Node controller: Responsible for noticing and responding when nodes go down.
 * Replication controller: Responsible for maintaining the correct number of pods for every replication
   controller object in the system.
 * Job controller: Watches for Job objects that represent one-off tasks, then creates
   Pods to run those tasks to completion.
 * Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
 * Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
 
@@ -7,8 +7,8 @@ content_type: concept
 weight: 20
 ---
 <!-- overview -->
-This page provides an overview of DNS support by Kubernetes.
+Kubernetes creates DNS records for services and pods. You can contact
+services with consistent DNS names instead of IP addresses.
 
 <!-- body -->
 
@@ -18,19 +18,47 @@ Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures
 the kubelets to tell individual containers to use the DNS Service's IP to
 resolve DNS names.
 
 ### What things get DNS names?
 
 Every Service defined in the cluster (including the DNS server itself) is
-assigned a DNS name. By default, a client Pod's DNS search list will
-include the Pod's own namespace and the cluster's default domain. This is best
-illustrated by example:
+assigned a DNS name. By default, a client Pod's DNS search list includes the
+Pod's own namespace and the cluster's default domain.
 
 Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
 in namespace `bar` can look up this service by querying a DNS service for
 `foo`. A Pod running in namespace `quux` can look up this service by doing a
 DNS query for `foo.bar`.
 
+### Namespaces of Services
+
-The following sections detail the supported record types and layout that is
+A DNS query may return different results based on the namespace of the pod making
+it. DNS queries that don't specify a namespace are limited to the pod's
+namespace. Access services in other namespaces by specifying it in the DNS query.
+
+For example, consider a pod in a `test` namespace. A `data` service is in
+the `prod` namespace.
+
+A query for `data` returns no results, because it uses the pod's `test` namespace.
+
+A query for `data.prod` returns the intended result, because it specifies the
+namespace.
+
+DNS queries may be expanded using the pod's `/etc/resolv.conf`. Kubelet
+sets this file for each pod. For example, a query for just `data` may be
+expanded to `data.test.cluster.local`. The values of the `search` option
+are used to expand queries. To learn more about DNS queries, see
+[the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)
+
+```
+nameserver 10.32.0.10
+search <namespace>.svc.cluster.local svc.cluster.local cluster.local
+options ndots:5
+```
+
+In summary, a pod in the _test_ namespace can successfully resolve either
+`data.prod` or `data.prod.cluster.local`.
+
+### DNS Records
+
+What objects get DNS records?
+
+1. Services
+2. Pods
+
+The following sections detail the supported DNS record types and layout that is
 supported. Any other layout or names or queries that happen to work are
 considered implementation details and are subject to change without warning.
 For more up-to-date specification, see
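The search-list expansion this hunk describes can be sketched with a tiny shell illustration. This is illustrative only: real expansion happens inside the resolver library, and `test` is the hypothetical pod namespace from the example.

```shell
# Illustrative only: mimic how the resolver tries each `search` domain
# from the resolv.conf example for a short name like "data".
expand_query() {
  for domain in test.svc.cluster.local svc.cluster.local cluster.local; do
    printf '%s.%s\n' "$1" "$domain"
  done
}
expand_query data   # first candidate tried: data.test.svc.cluster.local
```

With `ndots:5`, a name with fewer than five dots is tried against each search domain before being queried as-is.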
@@ -74,8 +74,8 @@ a new instance.
 The name of a Service object must be a valid
 [DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names).
 
-For example, suppose you have a set of Pods that each listen on TCP port 9376
-and carry a label `app=MyApp`:
+For example, suppose you have a set of Pods where each listens on TCP port 9376
+and contains a label `app=MyApp`:
 
 ```yaml
 apiVersion: v1
@@ -75,7 +75,7 @@ Here are some ways to mitigate involuntary disruptions:
   and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
 - For even higher availability when running replicated applications,
   spread applications across racks (using
-  [anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature))
+  [anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity))
   or across zones (if using a
   [multi-zone cluster](/docs/setup/multiple-zones).)
 
@@ -104,7 +104,7 @@ ensure that the number of replicas serving load never falls below a certain
 percentage of the total.
 
 Cluster managers and hosting providers should use tools which
-respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)
+respect PodDisruptionBudgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#eviction-api)
 instead of directly deleting pods or deployments.
 
 For example, the `kubectl drain` subcommand lets you mark a node as going out of
@@ -61,7 +61,7 @@ Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content
 
 For each localization, the `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs.
 
-Members of `@kubernetes/website-maintainers` can create new development branches to coordinate translation efforts.
+Members of `@kubernetes/website-maintainers` can create new localization branches to coordinate translation efforts.
 
 Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs.
 
@@ -205,14 +205,20 @@ To ensure accuracy in grammar and meaning, members of your localization team sho
 
 ### Source files
 
-Localizations must be based on the English files from the most recent release, {{< latest-version >}}.
+Localizations must be based on the English files from a specific release targeted by the localization team.
+Each localization team can decide which release to target, which is referred to as the _target version_ below.
 
-To find source files for the most recent release:
+To find source files for your target version:
 
 1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
-2. Select the `release-1.X` branch for the most recent version.
+2. Select a branch for your target version from the following table:
+
+   Target version | Branch
+   -----|-----
+   Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})
+   Latest version | [`master`](https://github.com/kubernetes/website/tree/master)
+   Previous version | `release-*.**`
 
-The latest version is {{< latest-version >}}, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}).
+The `master` branch holds content for the current release `{{< latest-version >}}`. The release team will create the `{{< release-branch >}}` branch shortly before the next release: v{{< skew nextMinorVersion >}}.
 
 ### Site strings in i18n
 
@@ -239,11 +245,11 @@ Some language teams have their own language-specific style guide and glossary. F
 
 ## Branching strategy
 
-Because localization projects are highly collaborative efforts, we encourage teams to work in shared development branches.
+Because localization projects are highly collaborative efforts, we encourage teams to work in shared localization branches.
 
-To collaborate on a development branch:
+To collaborate on a localization branch:
 
-1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a development branch from a source branch on https://github.com/kubernetes/website.
+1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a localization branch from a source branch on https://github.com/kubernetes/website.
 
    Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository.
 
@@ -251,25 +257,31 @@ To collaborate on a development branch:
 
    `dev-<source version>-<language code>.<team milestone>`
 
-   For example, an approver on a German localization team opens the development branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12.
+   For example, an approver on a German localization team opens the localization branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12.
 
-2. Individual contributors open feature branches based on the development branch.
+2. Individual contributors open feature branches based on the localization branch.
 
    For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`.
 
-3. Approvers review and merge feature branches into the development branch.
+3. Approvers review and merge feature branches into the localization branch.
 
-4. Periodically, an approver merges the development branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request.
+4. Periodically, an approver merges the localization branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request.
 
-Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German development branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
+Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
 
-Teams must merge localized content into the same release branch from which the content was sourced. For example, a development branch sourced from {{< release-branch >}} must be based on {{< release-branch >}}.
+Teams must merge localized content into the same branch from which the content was sourced.
+
+For example:
+- a localization branch sourced from `master` must be merged into `master`.
+- a localization branch sourced from `release-1.19` must be merged into `release-1.19`.
 
-At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous development branch and the current development branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
+{{< note >}}
+If your localization branch was created from the `master` branch but was not merged into `master` before the new release branch `{{< release-branch >}}` was created, merge it into both `master` and the new release branch `{{< release-branch >}}`. To merge your localization branch into the new release branch `{{< release-branch >}}`, you need to switch the upstream branch of your localization branch to `{{< release-branch >}}`.
+{{< /note >}}
 
-While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required.
+At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
+
+While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required.
 
 For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo).
 
@@ -290,5 +302,3 @@ Once a localization meets requirements for workflow and minimum output, SIG docs
 
 - Enable language selection on the website
 - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).
-
-
@@ -6,8 +6,10 @@ linkTitle: "Reference"
 main_menu: true
 weight: 70
 content_type: concept
+no_list: true
 ---
 
 <!-- overview -->
 
 This section of the Kubernetes documentation contains references.
 
@@ -18,11 +20,17 @@ This section of the Kubernetes documentation contains references.
 
 ## API Reference
 
+* [Glossary](/docs/reference/glossary/) - a comprehensive, standardized list of Kubernetes terminology
+
 * [Kubernetes API Reference](/docs/reference/kubernetes-api/)
+* [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
 * [Using The Kubernetes API](/docs/reference/using-api/) - overview of the API for Kubernetes.
+* [API access control](/docs/reference/access-authn-authz/) - details on how Kubernetes controls API access
+* [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/)
 
-## API Client Libraries
+## Officially supported client libraries
 
 To call the Kubernetes API from a programming language, you can use
 [client libraries](/docs/reference/using-api/client-libraries/). Officially supported
@@ -32,22 +40,28 @@ client libraries:
 - [Kubernetes Python client library](https://github.com/kubernetes-client/python)
 - [Kubernetes Java client library](https://github.com/kubernetes-client/java)
 - [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript)
+- [Kubernetes Dotnet client library](https://github.com/kubernetes-client/csharp)
+- [Kubernetes Haskell Client library](https://github.com/kubernetes-client/haskell)
 
-## CLI Reference
+## CLI
 
 * [kubectl](/docs/reference/kubectl/overview/) - Main CLI tool for running commands and managing Kubernetes clusters.
 * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](https://goessner.net/articles/JsonPath/) with kubectl.
 * [kubeadm](/docs/reference/setup-tools/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.
 
-## Components Reference
+## Components
 
 * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
 * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
 * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
 * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.
-* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
-* [kube-scheduler Policies](/docs/reference/scheduling/policies)
-* [kube-scheduler Profiles](/docs/reference/scheduling/config#profiles)
+* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
+
+## Scheduling
+
+* [Scheduler Policies](/docs/reference/scheduling/policies)
+* [Scheduler Profiles](/docs/reference/scheduling/config#profiles)
 
 
 ## Design Docs
 
@@ -1,6 +1,6 @@
 ---
 title: API Access Control
-weight: 20
+weight: 15
 no_list: true
 ---
 

@@ -625,6 +625,8 @@ Starting from 1.11, this admission controller is disabled by default.

### PodNodeSelector {#podnodeselector}

{{< feature-state for_k8s_version="v1.5" state="alpha" >}}

This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.

#### Configuration File Format
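For context on that configuration file format: the PodNodeSelector plugin reads a file handed to the API server via `--admission-control-config-file`; a minimal sketch, with illustrative selector values:

```yaml
# Illustrative values only; namespace keys map to node selectors.
podNodeSelectorPluginConfig:
  clusterDefaultNodeSelector: name-of-node-selector
  namespace1: name-of-node-selector
  namespace2: name-of-node-selector
```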

@@ -704,6 +706,8 @@ for more information.

### PodTolerationRestriction {#podtolerationrestriction}

{{< feature-state for_k8s_version="v1.7" state="alpha" >}}

The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace.
It rejects the pod request if there is a conflict.
It then merges the tolerations annotated on the namespace into the tolerations of the pod.
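As a sketch of the namespace-level tolerations this controller merges in (the namespace name and toleration key are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps-that-need-nodes-exclusively   # illustrative
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
```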

@@ -99,7 +99,7 @@ openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app2"

This would create a CSR for the username "jbeda", belonging to two groups, "app1" and "app2".

See [Managing Certificates](/docs/concepts/cluster-administration/certificates/) for how to generate a client cert.
See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for how to generate a client cert.

### Static Token File

@@ -328,7 +328,7 @@ Since all of the data needed to validate who you are is in the `id_token`, Kubernetes

1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
3. To authenticate to the Kubernetes dashboard, you must the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.

#### Configuring the API Server

@@ -1,4 +1,4 @@
---
title: Command line tools reference
title: Component tools
weight: 60
---

@@ -239,6 +239,7 @@ different Kubernetes components.
| `DynamicProvisioningScheduling` | - | Deprecated | 1.12 | - |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - |
| `EnableAggregatedDiscoveryTimeout` | `true` | Deprecated | 1.16 | - |
| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 |
| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |

@@ -302,7 +302,7 @@ kubelet [flags]
<td colspan="2">--enable-cadvisor-json-endpoints Default: `false`</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enable cAdvisor json `/spec` and `/stats/*` endpoints. (DEPRECATED: will be removed in a future version)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Enable cAdvisor json `/spec` and `/stats/*` endpoints. This flag has no effect on the /stats/summary endpoint. (DEPRECATED: will be removed in a future version)</td>
</tr>

<tr>

@@ -2,7 +2,7 @@
approvers:
- chenopis
- abiogenesis-now
title: Standardized Glossary
title: Glossary
layout: glossary
noedit: true
default_active_tag: fundamental

@@ -1,4 +1,4 @@
---
title: Kubernetes Issues and Security
weight: 10
weight: 40
---

@@ -1,5 +1,5 @@
---
title: "kubectl CLI"
title: "kubectl"
weight: 60
---

@@ -320,6 +320,18 @@ kubectl top pod POD_NAME --containers       # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu      # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```

## Interacting with Deployments and Services
```bash
kubectl logs deploy/my-deployment                         # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container         # dump Pod logs for a Deployment (multi-container case)

kubectl port-forward svc/my-service 5000                  # listen on local port 5000 and forward to port 5000 on Service backend
kubectl port-forward svc/my-service 5000:my-service-port  # listen on local port 5000 and forward to Service target port with name <my-service-port>

kubectl port-forward deploy/my-deployment 5000:6000       # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls                   # run command in first Pod and first container in Deployment (single- or multi-container cases)
```

## Interacting with Nodes and cluster

```bash

@@ -7,7 +7,7 @@ reviewers:
---

<!-- overview -->
You can use the Kubernetes command line tool kubectl to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the docker commands and the kubectl commands. The following sections show a docker sub-command and describe the equivalent kubectl command.
You can use the Kubernetes command line tool `kubectl` to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent `kubectl` command.


<!-- body -->

@@ -1,4 +1,4 @@
---
title: Setup tools reference
title: Setup tools
weight: 50
---

@@ -1,8 +1,10 @@
---
title: Other Tools
reviewers:
- janetkuo
title: Tools
content_type: concept
weight: 80
no_list: true
---

<!-- overview -->

@@ -10,13 +12,6 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes


<!-- body -->
## Kubectl

[`kubectl`](/docs/tasks/tools/install-kubectl/) is the command line tool for Kubernetes. It controls the Kubernetes cluster manager.

## Kubeadm

[`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha).

## Minikube

@@ -1,11 +1,12 @@
---
title: Kubernetes API Overview
title: API Overview
reviewers:
- erictune
- lavalamp
- jbeda
content_type: concept
weight: 10
no_list: true
card:
  name: reference
  weight: 50

@@ -67,12 +67,13 @@ their authors, not the Kubernetes team.
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) |
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
| Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) |
| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) |
| Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) |
| Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) |
| Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) |
| Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) |
| Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) |
| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) |
| Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) |
| Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) |
| Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) |
| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) |

@@ -219,30 +219,39 @@ sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Windows (PowerShell)" %}}

<br />
Start a Powershell session, set `$Version` to the desired version (ex: `$Version=1.4.3`), and then run the following commands:
<br />

```powershell
# (Install containerd)
# download containerd
cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.1/containerd-1.4.1-windows-amd64.tar.gz
cmd /c tar xvf .\containerd-1.4.1-windows-amd64.tar.gz
# Download containerd
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
```

```powershell
# extract and configure
# Extract and configure
Copy-Item -Path ".\bin\" -Destination "$Env:ProgramFiles\containerd" -Recurse -Force
cd $Env:ProgramFiles\containerd\
.\containerd.exe config default | Out-File config.toml -Encoding ascii

# review the configuration. depending on setup you may want to adjust:
# - the sandbox_image (kubernetes pause image)
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content config.toml

# (Optional - but highly recommended) Exclude containerd form Windows Defender Scans
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
```

```powershell
# start containerd
# Start containerd
.\containerd.exe --register-service
Start-Service containerd
```

{{% /tab %}}
{{< /tabs >}}

@@ -18,14 +18,7 @@ For information how to create a cluster with kubeadm once you have performed this
## {{% heading "prerequisites" %}}


* One or more machines running one of:
  - Ubuntu 16.04+
  - Debian 9+
  - CentOS 7+
  - Red Hat Enterprise Linux (RHEL) 7+
  - Fedora 25+
  - HypriotOS v1.0.1+
  - Flatcar Container Linux (tested with 2512.3.0)
* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
* 2 CPUs or more.
* Full network connectivity between all machines in the cluster (public or private network is fine).

@@ -122,7 +115,7 @@ The following table lists container runtimes and their associated socket paths:
{{< table caption = "Container runtimes and their socket paths" >}}
| Runtime | Path to Unix domain socket |
|------------|-----------------------------------|
| Docker | `/var/run/docker.sock` |
| Docker | `/var/run/dockershim.sock` |
| containerd | `/run/containerd/containerd.sock` |
| CRI-O | `/var/run/crio/crio.sock` |
{{< /table >}}
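For context, the socket path from the table above is what you would hand to kubeadm as the node's CRI socket; a minimal sketch of a kubeadm configuration (the API version shown is an assumption for this release line):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock   # containerd, per the table above
```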

@@ -181,7 +174,7 @@ For more information on version skews, see:
* Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)

{{< tabs name="k8s_install" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
{{% tab name="Debian-based distributions" %}}
```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

@@ -193,7 +186,7 @@ sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
{{% tab name="Red Hat-based distributions" %}}
```bash
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]

@@ -224,7 +217,7 @@ sudo systemctl enable --now kubelet
- You can leave SELinux enabled if you know how to configure it but it may require settings that are not supported by kubeadm.

{{% /tab %}}
{{% tab name="Fedora CoreOS or Flatcar Container Linux" %}}
{{% tab name="Without a package manager" %}}
Install CNI plugins (required for most pod network):

```bash

@@ -138,10 +138,10 @@ Right after `kubeadm init` there should not be any pods in these states.

- If there are pods in one of these states _right after_ `kubeadm init`, please open an
  issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
  until you have deployed the network solution.
  until you have deployed the network add-on.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
  after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
  it's very likely that the Pod Network solution that you installed is somehow broken.
  after deploying the network add-on and nothing happens to `coredns` (or `kube-dns`),
  it's very likely that the Pod Network add-on that you installed is somehow broken.
  You might have to grant it more RBAC privileges or use a newer version. Please file
  an issue in the Pod Network providers' issue tracker and get the issue triaged there.
- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option

@@ -152,14 +152,14 @@ Right after `kubeadm init` there should not be any pods in these states.
## `coredns` (or `kube-dns`) is stuck in the `Pending` state

This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
should [install the pod network add-on](/docs/concepts/cluster-administration/addons/)
of choice. You have to install a Pod Network
before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.

## `HostPort` services do not work

The `HostPort` and `HostIP` functionality is available depending on your Pod Network
provider. Please contact the author of the Pod Network solution to find out whether
provider. Please contact the author of the Pod Network add-on to find out whether
`HostPort` and `HostIP` functionality are available.

Calico, Canal, and Flannel CNI providers are verified to support HostPort.

@@ -352,102 +352,6 @@ exampleWithKubeConfig = do
    >>= print
```

## {{% heading "whatsnext" %}}

### Accessing the API from within a Pod

When accessing the API from within a Pod, locating and authenticating
to the API server are slightly different to the external client case described above.

The easiest way to use the Kubernetes API from a Pod is to use
one of the official [client libraries](/docs/reference/using-api/client-libraries/). These
libraries can automatically discover the API server and authenticate.

#### Using Official Client Libraries

From within a Pod, the recommended ways to connect to the Kubernetes API are:

- For a Go client, use the official [Go client library](https://github.com/kubernetes/client-go/).
  The `rest.InClusterConfig()` function handles API host discovery and authentication automatically.
  See [an example here](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go).

- For a Python client, use the official [Python client library](https://github.com/kubernetes-client/python/).
  The `config.load_incluster_config()` function handles API host discovery and authentication automatically.
  See [an example here](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py).

- There are a number of other libraries available, please refer to the [Client Libraries](/docs/reference/using-api/client-libraries/) page.

In each case, the service account credentials of the Pod are used to communicate
securely with the API server.

#### Directly accessing the REST API

While running in a Pod, the Kubernetes apiserver is accessible via a Service named
`kubernetes` in the `default` namespace. Therefore, Pods can use the
`kubernetes.default.svc` hostname to query the API server. Official client libraries
do this automatically.

The recommended way to authenticate to the API server is with a
[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that Pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the API server.

Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.

#### Using kubectl proxy

If you would like to query the API without an official client library, you can run `kubectl proxy`
as the [command](/docs/tasks/inject-data-application/define-command-argument-container/)
of a new sidecar container in the Pod. This way, `kubectl proxy` will authenticate
to the API and expose it on the `localhost` interface of the Pod, so that other containers
in the Pod can use it directly.

#### Without using a proxy

It is possible to avoid using the kubectl proxy by passing the authentication token
directly to the API server. The internal certificate secures the connection.

```shell
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc

# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount

# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)

# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)

# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt

# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
```

The output will be similar to this:

```json
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
```


* [Accessing the Kubernetes API from a Pod](/docs/tasks/run-application/access-api-from-pod/)

@@ -0,0 +1,252 @@
---
title: Certificates
content_type: task
weight: 20
---


<!-- overview -->

When using client certificate authentication, you can generate certificates
manually through `easyrsa`, `openssl` or `cfssl`.



<!-- body -->

### easyrsa

**easyrsa** can manually generate certificates for your cluster.

1.  Download, unpack, and initialize the patched version of easyrsa3.

        curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
        tar xzf easy-rsa.tar.gz
        cd easy-rsa-master/easyrsa3
        ./easyrsa init-pki

1.  Generate a new certificate authority (CA). `--batch` sets automatic mode;
    `--req-cn` specifies the Common Name (CN) for the CA's new root certificate.

        ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass

1.  Generate server certificate and key.
    The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
    be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
    that is specified as the `--service-cluster-ip-range` argument for both the API server and
    the controller manager component. The argument `--days` is used to set the number of days
    after which the certificate expires.
    The sample below also assumes that you are using `cluster.local` as the default
    DNS domain name.

        ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
        "IP:${MASTER_CLUSTER_IP},"\
        "DNS:kubernetes,"\
        "DNS:kubernetes.default,"\
        "DNS:kubernetes.default.svc,"\
        "DNS:kubernetes.default.svc.cluster,"\
        "DNS:kubernetes.default.svc.cluster.local" \
        --days=10000 \
        build-server-full server nopass

1.  Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.

1.  Fill in and add the following parameters into the API server start parameters:

        --client-ca-file=/yourdirectory/ca.crt
        --tls-cert-file=/yourdirectory/server.crt
        --tls-private-key-file=/yourdirectory/server.key

### openssl

**openssl** can manually generate certificates for your cluster.

1.  Generate a ca.key with 2048bit:

        openssl genrsa -out ca.key 2048

1.  According to the ca.key generate a ca.crt (use -days to set the certificate effective time):

        openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt

1.  Generate a server.key with 2048bit:

        openssl genrsa -out server.key 2048

1.  Create a config file for generating a Certificate Signing Request (CSR).
    Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
    with real values before saving this to a file (e.g. `csr.conf`).
    Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
    API server as described in previous subsection.
    The sample below also assumes that you are using `cluster.local` as the default
    DNS domain name.

        [ req ]
        default_bits = 2048
        prompt = no
        default_md = sha256
        req_extensions = req_ext
        distinguished_name = dn

        [ dn ]
        C = <country>
        ST = <state>
        L = <city>
        O = <organization>
        OU = <organization unit>
        CN = <MASTER_IP>

        [ req_ext ]
        subjectAltName = @alt_names

        [ alt_names ]
        DNS.1 = kubernetes
        DNS.2 = kubernetes.default
        DNS.3 = kubernetes.default.svc
        DNS.4 = kubernetes.default.svc.cluster
        DNS.5 = kubernetes.default.svc.cluster.local
        IP.1 = <MASTER_IP>
        IP.2 = <MASTER_CLUSTER_IP>

        [ v3_ext ]
        authorityKeyIdentifier=keyid,issuer:always
        basicConstraints=CA:FALSE
        keyUsage=keyEncipherment,dataEncipherment
        extendedKeyUsage=serverAuth,clientAuth
        subjectAltName=@alt_names

1.  Generate the certificate signing request based on the config file:

        openssl req -new -key server.key -out server.csr -config csr.conf

1.  Generate the server certificate using the ca.key, ca.crt and server.csr:

        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
            -CAcreateserial -out server.crt -days 10000 \
            -extensions v3_ext -extfile csr.conf

1.  View the certificate:

        openssl x509 -noout -text -in ./server.crt

Finally, add the same parameters into the API server start parameters.
### cfssl

**cfssl** is another tool for certificate generation.

1.  Download, unpack and prepare the command line tools as shown below.
    Note that you may need to adapt the sample commands based on the hardware
    architecture and cfssl version you are using.

        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
        chmod +x cfssl
        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
        chmod +x cfssljson
        curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
        chmod +x cfssl-certinfo

1.  Create a directory to hold the artifacts and initialize cfssl:

        mkdir cert
        cd cert
        ../cfssl print-defaults config > config.json
        ../cfssl print-defaults csr > csr.json

1.  Create a JSON config file for generating the CA file, for example, `ca-config.json`:

        {
          "signing": {
            "default": {
              "expiry": "8760h"
            },
            "profiles": {
              "kubernetes": {
                "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
                ],
                "expiry": "8760h"
              }
            }
          }
        }

1.  Create a JSON config file for CA certificate signing request (CSR), for example,
    `ca-csr.json`. Be sure to replace the values marked with angle brackets with
    real values you want to use.

        {
          "CN": "kubernetes",
          "key": {
            "algo": "rsa",
            "size": 2048
          },
          "names":[{
            "C": "<country>",
            "ST": "<state>",
            "L": "<city>",
            "O": "<organization>",
            "OU": "<organization unit>"
          }]
        }

1.  Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):

        ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca

1.  Create a JSON config file for generating keys and certificates for the API
    server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
    real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster
    IP for the API server as described in previous subsection.
    The sample below also assumes that you are using `cluster.local` as the default
    DNS domain name.

        {
          "CN": "kubernetes",
          "hosts": [
            "127.0.0.1",
            "<MASTER_IP>",
            "<MASTER_CLUSTER_IP>",
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local"
          ],
          "key": {
            "algo": "rsa",
            "size": 2048
          },
          "names": [{
            "C": "<country>",
            "ST": "<state>",
            "L": "<city>",
            "O": "<organization>",
            "OU": "<organization unit>"
          }]
        }

1.  Generate the key and certificate for the API server, which are by default
    saved into file `server-key.pem` and `server.pem` respectively:

        ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
            --config=ca-config.json -profile=kubernetes \
            server-csr.json | ../cfssljson -bare server

## Distributing Self-Signed CA Certificate

A client node may refuse to recognize a self-signed CA certificate as valid.
For a non-production deployment, or for a deployment that runs behind a company
firewall, you can distribute a self-signed CA certificate to all clients and
refresh the local list for valid certificates.

On each client, perform the following operations:

```bash
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
```

```
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
```

## Certificates API

You can use the `certificates.k8s.io` API to provision
x509 certificates to use for authentication as documented
[here](/docs/tasks/tls/managing-tls-in-a-cluster).
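As a sketch of what a `certificates.k8s.io` request object looks like (the name is illustrative and the `request` field is a placeholder for a base64-encoded PEM CSR):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace            # illustrative
spec:
  request: <base64-encoded PEM CSR>    # placeholder
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```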

@@ -25,6 +25,12 @@ kube-dns.

{{< codenew file="admin/dns/dnsutils.yaml" >}}

{{< note >}}
This example creates a pod in the `default` namespace. DNS name resolution for
services depends on the namespace of the pod. For more information, review
[DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names).
{{< /note >}}

Use that manifest to create a Pod:

```shell

@@ -247,6 +253,27 @@ linux/amd64, go1.10.3, 2e322f6
172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 106 0.000066649s
```

### Are you in the right namespace for the service?

DNS queries that don't specify a namespace are limited to the pod's
namespace.

If the namespace of the pod and service differ, the DNS query must include
the namespace of the service.

This query is limited to the pod's namespace:
```shell
kubectl exec -i -t dnsutils -- nslookup <service-name>
```

This query specifies the namespace:
```shell
kubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>
```

To learn more about name resolution, see
[DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names).

## Known issues

Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved).

@@ -92,7 +92,7 @@ kubectl describe secrets/db-user-pass-96mffmfh4k
The output is similar to:

```
Name:         db-user-pass
Name:         db-user-pass-96mffmfh4k
Namespace:    default
Labels:       <none>
Annotations:  <none>

@@ -293,6 +293,10 @@ Services.
Readiness probes runs on the container during its whole lifecycle.
{{< /note >}}

{{< caution >}}
Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
{{< /caution >}}

Readiness probes are configured similarly to liveness probes. The only difference
is that you use the `readinessProbe` field instead of the `livenessProbe` field.
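A minimal sketch of the point in that caution — delay the liveness probe explicitly rather than expecting it to wait for readiness (port and path are illustrative):

```yaml
readinessProbe:              # same shape as livenessProbe, different field name
  httpGet:
    path: /healthz
    port: 8080
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15    # explicit delay; liveness does not wait for readiness
```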

@@ -99,7 +99,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
```

The examples in this section use the `pause` container image because it does not
contain userland debugging utilities, but this method works with all container
contain debugging utilities, but this method works with all container
images.

If you attempt to use `kubectl exec` to create a shell you will see an error
@ -70,8 +70,9 @@ override any environment variables specified in the container image.

{{< /note >}}

{{< note >}}
The environment variables can reference each other, and cycles are possible,
pay attention to the order before using
Environment variables may reference each other, however ordering is important.
Variables making use of others defined in the same context must come later in
the list. Similarly, avoid circular references.
{{< /note >}}
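The ordering rule in the note above can be illustrated with a plain shell analogy (this is an illustration only, not a Pod spec): a value can only make use of a variable that is already defined at that point in the list.

```shell
# Shell analogy for the "define before use" rule: FIRST must exist
# before SECOND references it, just as an entry in a Pod's `env` list
# can only reference variables listed earlier in the same list.
FIRST="hello"
SECOND="${FIRST} world"   # works because FIRST is already defined
echo "$SECOND"            # prints: hello world
```

Reversing the two assignments would leave `SECOND` as just ` world` in the shell; Kubernetes handles an out-of-order reference differently (it is left unresolved), but the ordering requirement is the same.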

## Using environment variables inside of your config
@ -8,7 +8,7 @@ weight: 20

[Kustomize](https://github.com/kubernetes-sigs/kustomize) is a standalone tool
to customize Kubernetes objects
through a [kustomization file](https://kubernetes-sigs.github.io/kustomize/api-reference/glossary/#kustomization).
through a [kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization).

Since 1.14, Kubectl also
supports the management of Kubernetes objects using a kustomization file.
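As a sketch of what managing objects through a kustomization file looks like in practice (the `my-app/` directory, the `namePrefix`, and the `deployment.yaml` resource name are illustrative, not taken from this page):

```shell
# Create a minimal kustomization directory (contents are illustrative)
mkdir -p my-app
cat > my-app/kustomization.yaml <<'EOF'
namePrefix: dev-
resources:
- deployment.yaml
EOF
cat my-app/kustomization.yaml

# With kubectl 1.14 or later you could then render or apply it:
#   kubectl kustomize ./my-app/
#   kubectl apply -k ./my-app/
```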
@ -0,0 +1,111 @@

---
title: Accessing the Kubernetes API from a Pod
content_type: task
weight: 120
---

<!-- overview -->

This guide demonstrates how to access the Kubernetes API from within a pod.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Accessing the API from within a Pod

When accessing the API from within a Pod, locating and authenticating
to the API server are slightly different to the external client case.

The easiest way to use the Kubernetes API from a Pod is to use
one of the official [client libraries](/docs/reference/using-api/client-libraries/). These
libraries can automatically discover the API server and authenticate.

### Using Official Client Libraries

From within a Pod, the recommended ways to connect to the Kubernetes API are:

- For a Go client, use the official [Go client library](https://github.com/kubernetes/client-go/).
  The `rest.InClusterConfig()` function handles API host discovery and authentication automatically.
  See [an example here](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go).

- For a Python client, use the official [Python client library](https://github.com/kubernetes-client/python/).
  The `config.load_incluster_config()` function handles API host discovery and authentication automatically.
  See [an example here](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py).

- There are a number of other libraries available; please refer to the [Client Libraries](/docs/reference/using-api/client-libraries/) page.

In each case, the service account credentials of the Pod are used to communicate
securely with the API server.

### Directly accessing the REST API

While running in a Pod, the Kubernetes apiserver is accessible via a Service named
`kubernetes` in the `default` namespace. Therefore, Pods can use the
`kubernetes.default.svc` hostname to query the API server. Official client libraries
do this automatically.

The recommended way to authenticate to the API server is with a
[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that Pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

If available, a certificate bundle is placed into the filesystem tree of each
container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
used to verify the serving certificate of the API server.

Finally, the default namespace to be used for namespaced API operations is placed in a file
at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.

### Using kubectl proxy

If you would like to query the API without an official client library, you can run `kubectl proxy`
as the [command](/docs/tasks/inject-data-application/define-command-argument-container/)
of a new sidecar container in the Pod. This way, `kubectl proxy` will authenticate
to the API and expose it on the `localhost` interface of the Pod, so that other containers
in the Pod can use it directly.

### Without using a proxy

It is possible to avoid using the kubectl proxy by passing the authentication token
directly to the API server. The internal certificate secures the connection.

```shell
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc

# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount

# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)

# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)

# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt

# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
```

The output will be similar to this:

```json
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
```
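That response can be post-processed with ordinary tools. As a self-contained sketch, this feeds a saved copy of the response to `python3` instead of contacting a live API server:

```shell
# Extract the advertised API versions from a saved APIVersions response
cat <<'EOF' | python3 -c 'import json, sys; print(",".join(json.load(sys.stdin)["versions"]))'
{"kind": "APIVersions", "versions": ["v1"], "serverAddressByClientCIDRs": [{"clientCIDR": "0.0.0.0/0", "serverAddress": "10.0.1.149:443"}]}
EOF
# prints: v1
```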
@ -69,8 +69,9 @@ write that to disk, in the location specified by `--cert-dir`. Then the kubelet

will use the new certificate to connect to the Kubernetes API.

As the expiration of the signed certificate approaches, the kubelet will
automatically issue a new certificate signing request, using the Kubernetes
API. Again, the controller manager will automatically approve the certificate
automatically issue a new certificate signing request, using the Kubernetes API.
This can happen at any point between 30% and 10% of the time remaining on the
certificate. Again, the controller manager will automatically approve the certificate
request and attach a signed certificate to the certificate signing request. The
kubelet will retrieve the new signed certificate from the Kubernetes API and
write that to disk. Then it will update the connections it has to the
@ -105,8 +105,8 @@ Configurations with a single API server will experience unavailability while the

* Make sure control plane components log no TLS errors.

{{< note >}}
To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/concepts/cluster-administration/certificates/#openssl).
You can also use [`cfssl`](/docs/concepts/cluster-administration/certificates/#cfssl).
To generate certificates and private keys for your cluster using the `openssl` command line tool, see [Certificates (`openssl`)](/docs/tasks/administer-cluster/certificates/#openssl).
You can also use [`cfssl`](/docs/tasks/administer-cluster/certificates/#cfssl).
{{< /note >}}

1. Annotate any Daemonsets and Deployments to trigger pod replacement in a safer rolling fashion.
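The certificate note above links to the full generation instructions; as a tiny self-contained sketch of the `openssl` flow it refers to (the file names and `demo-ca` subject are illustrative, and a real cluster CA would live far longer than one day):

```shell
# Generate a CA private key and a short-lived self-signed certificate
openssl genrsa -out demo-ca.key 2048
openssl req -x509 -new -nodes -key demo-ca.key -subj "/CN=demo-ca" -days 1 -out demo-ca.crt

# Inspect the subject and expiry of the resulting certificate
openssl x509 -noout -subject -enddate -in demo-ca.crt
```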
@ -7,19 +7,20 @@ no_list: true

## kubectl

The Kubernetes command-line tool, `kubectl`, allows you to run commands against
Kubernetes clusters. You can use `kubectl` to deploy applications, inspect and
manage cluster resources, and view logs.

See [Install and Set Up `kubectl`](/docs/tasks/tools/install-kubectl/) for
information about how to download and install `kubectl` and set it up for
accessing your cluster.

<a class="btn btn-primary" href="/docs/tasks/tools/install-kubectl/" role="button" aria-label="View kubectl Install and Set Up Guide">View kubectl Install and Set Up Guide</a>

You can also read the
<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources,
and view logs. For more information including a complete list of kubectl operations, see the
[`kubectl` reference documentation](/docs/reference/kubectl/).

kubectl is installable on a variety of Linux platforms, macOS and Windows.
Find your preferred operating system below.

- [Install kubectl on Linux](install-kubectl-linux)
- [Install kubectl on macOS](install-kubectl-macos)
- [Install kubectl on Windows](install-kubectl-windows)

## kind

[`kind`](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
@ -0,0 +1,6 @@

---
title: "Tools Included"
description: "Snippets to be included in the main kubectl-installs-*.md pages."
headless: true
toc_hide: true
---
@ -0,0 +1,21 @@

---
title: "gcloud kubectl install"
description: "How to install kubectl with gcloud snippet for inclusion in each OS-specific tab."
headless: true
---

You can install kubectl as part of the Google Cloud SDK.

1. Install the [Google Cloud SDK](https://cloud.google.com/sdk/).

1. Run the `kubectl` installation command:

   ```shell
   gcloud components install kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```shell
   kubectl version --client
   ```
@ -0,0 +1,12 @@

---
title: "What's next?"
description: "What's next after installing kubectl."
headless: true
---

* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/)
* See the [getting started guides](/docs/setup/) for more about creating clusters.
* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* If you need access to a cluster you didn't create, see the
  [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
* Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/)
@ -0,0 +1,54 @@

---
title: "bash auto-completion on Linux"
description: "Some optional configuration for bash auto-completion on Linux."
headless: true
---

### Introduction

The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion.

However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).

### Install bash-completion

bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.

The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you may have to manually source this file in your `~/.bashrc` file.

To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set; otherwise, add the following to your `~/.bashrc` file:

```bash
source /usr/share/bash-completion/bash_completion
```

Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.

### Enable kubectl autocompletion

You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this:

- Source the completion script in your `~/.bashrc` file:

  ```bash
  echo 'source <(kubectl completion bash)' >>~/.bashrc
  ```

- Add the completion script to the `/etc/bash_completion.d` directory:

  ```bash
  kubectl completion bash >/etc/bash_completion.d/kubectl
  ```

If you have an alias for kubectl, you can extend shell completion to work with that alias:

```bash
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
```

{{< note >}}
bash-completion sources all completion scripts in `/etc/bash_completion.d`.
{{< /note >}}

Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working.
@ -0,0 +1,89 @@

---
title: "bash auto-completion on macOS"
description: "Some optional configuration for bash auto-completion on macOS."
headless: true
---

### Introduction

The kubectl completion script for Bash can be generated with `kubectl completion bash`. Sourcing this script in your shell enables kubectl completion.

However, the kubectl completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which you therefore have to install first.

{{< warning >}}
There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
{{< /warning >}}

### Upgrade Bash

The instructions here assume you use Bash 4.1+. You can check your Bash version by running:

```bash
echo $BASH_VERSION
```

If it is too old, you can install/upgrade it using Homebrew:

```bash
brew install bash
```

Reload your shell and verify that the desired version is being used:

```bash
echo $BASH_VERSION $SHELL
```

Homebrew usually installs it at `/usr/local/bin/bash`.

### Install bash-completion

{{< note >}}
As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work).
{{< /note >}}

You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:

```bash
brew install bash-completion@2
```

As stated in the output of this command, add the following to your `~/.bash_profile` file:

```bash
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```

Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.

### Enable kubectl autocompletion

You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:

- Source the completion script in your `~/.bash_profile` file:

  ```bash
  echo 'source <(kubectl completion bash)' >>~/.bash_profile
  ```

- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:

  ```bash
  kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl
  ```

- If you have an alias for kubectl, you can extend shell completion to work with that alias:

  ```bash
  echo 'alias k=kubectl' >>~/.bash_profile
  echo 'complete -F __start_kubectl k' >>~/.bash_profile
  ```

- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.

{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, which is why the latter two methods work.
{{< /note >}}

In any case, after reloading your shell, kubectl completion should be working.
@ -0,0 +1,29 @@

---
title: "zsh auto-completion"
description: "Some optional configuration for zsh auto-completion."
headless: true
---

The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion.

To do so in all your shell sessions, add the following to your `~/.zshrc` file:

```zsh
source <(kubectl completion zsh)
```

If you have an alias for kubectl, you can extend shell completion to work with that alias:

```zsh
echo 'alias k=kubectl' >>~/.zshrc
echo 'complete -F __start_kubectl k' >>~/.zshrc
```

After reloading your shell, kubectl autocompletion should be working.

If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:

```zsh
autoload -Uz compinit
compinit
```
@ -0,0 +1,34 @@

---
title: "verify kubectl install"
description: "How to verify kubectl."
headless: true
---

In order for kubectl to find and access a Kubernetes cluster, it needs a
[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
which is created automatically when you create a cluster using
[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)
or successfully deploy a Minikube cluster.
By default, kubectl configuration is located at `~/.kube/config`.

Check that kubectl is properly configured by getting the cluster state:

```shell
kubectl cluster-info
```

If you see a URL response, kubectl is correctly configured to access your cluster.

If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.

```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```

For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.

If `kubectl cluster-info` returns the URL response but you can't access your cluster, check whether it is configured properly by using:

```shell
kubectl cluster-info dump
```
@ -0,0 +1,172 @@

---
reviewers:
- mikedanese
title: Install and Set Up kubectl on Linux
content_type: task
weight: 10
card:
  name: tasks
  weight: 20
  title: Install kubectl on Linux
---

## {{% heading "prerequisites" %}}

You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.

## Install kubectl on Linux

The following methods exist for installing kubectl on Linux:

- [Install kubectl binary with curl on Linux](#install-kubectl-binary-with-curl-on-linux)
- [Install using native package management](#install-using-native-package-management)
- [Install using other package management](#install-using-other-package-management)
- [Install on Linux as part of the Google Cloud SDK](#install-on-linux-as-part-of-the-google-cloud-sdk)

### Install kubectl binary with curl on Linux

1. Download the latest release with the command:

   ```bash
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
   ```

   {{< note >}}
   To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on Linux, type:

   ```bash
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
   ```
   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```bash
   curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
   ```

   Validate the kubectl binary against the checksum file:

   ```bash
   echo "$(<kubectl.sha256) kubectl" | sha256sum --check
   ```

   If valid, the output is:

   ```console
   kubectl: OK
   ```

   If the check fails, `sha256sum` exits with nonzero status and prints output similar to:

   ```console
   kubectl: FAILED
   sha256sum: WARNING: 1 computed checksum did NOT match
   ```

   {{< note >}}
   Download the same version of the binary and checksum.
   {{< /note >}}

1. Install kubectl

   ```bash
   sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
   ```

   {{< note >}}
   If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory:

   ```bash
   chmod +x ./kubectl
   mkdir -p ~/.local/bin
   mv ./kubectl ~/.local/bin/kubectl
   # and then add ~/.local/bin to $PATH
   ```

   {{< /note >}}

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```
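The optional checksum validation step above can be rehearsed on any throwaway file before trying it on the real binary; this sketch mirrors the `sha256sum --check` pattern exactly:

```shell
# Create a file, record its checksum, then verify it the same way as above
echo "demo" > demo.bin
sha256sum demo.bin > demo.bin.sha256
sha256sum --check demo.bin.sha256   # prints: demo.bin: OK
```

Corrupting `demo.bin` after recording the checksum would instead produce the `FAILED` output shown earlier.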

### Install using native package management

{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}

{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}
{{< /tabs >}}

### Install using other package management

{{< tabs name="other_kubectl_install" >}}
{{% tab name="Snap" %}}
If you are on Ubuntu or another Linux distribution that supports the [snap](https://snapcraft.io/docs/core/install) package manager, kubectl is available as a [snap](https://snapcraft.io/) application.

```shell
snap install kubectl --classic
kubectl version --client
```

{{% /tab %}}

{{% tab name="Homebrew" %}}
If you are on Linux and using the [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) package manager, kubectl is available for [installation](https://docs.brew.sh/Homebrew-on-Linux#install).

```shell
brew install kubectl
kubectl version --client
```

{{% /tab %}}

{{< /tabs >}}

### Install on Linux as part of the Google Cloud SDK

{{< include "included/install-kubectl-gcloud.md" >}}

## Verify kubectl configuration

{{< include "included/verify-kubectl.md" >}}

## Optional kubectl configurations

### Enable shell autocompletion

kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.

Below are the procedures to set up autocompletion for Bash and Zsh.

{{< tabs name="kubectl_autocompletion" >}}
{{< tab name="Bash" include="included/optional-kubectl-configs-bash-linux.md" />}}
{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}

## {{% heading "whatsnext" %}}

{{< include "included/kubectl-whats-next.md" >}}
@ -0,0 +1,160 @@

---
reviewers:
- mikedanese
title: Install and Set Up kubectl on macOS
content_type: task
weight: 10
card:
  name: tasks
  weight: 20
  title: Install kubectl on macOS
---

## {{% heading "prerequisites" %}}

You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.

## Install kubectl on macOS

The following methods exist for installing kubectl on macOS:

- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
- [Install on macOS as part of the Google Cloud SDK](#install-on-macos-as-part-of-the-google-cloud-sdk)

### Install kubectl binary with curl on macOS

1. Download the latest release:

   ```bash
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
   ```

   {{< note >}}
   To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on macOS, type:

   ```bash
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
   ```

   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```bash
   curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
   ```

   Validate the kubectl binary against the checksum file:

   ```bash
   echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
   ```

   If valid, the output is:

   ```console
   kubectl: OK
   ```

   If the check fails, `shasum` exits with nonzero status and prints output similar to:

   ```console
   kubectl: FAILED
   shasum: WARNING: 1 computed checksum did NOT match
   ```

   {{< note >}}
   Download the same version of the binary and checksum.
   {{< /note >}}

1. Make the kubectl binary executable.

   ```bash
   chmod +x ./kubectl
   ```

1. Move the kubectl binary to a file location on your system `PATH`.

   ```bash
   sudo mv ./kubectl /usr/local/bin/kubectl
   sudo chown root: /usr/local/bin/kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install with Homebrew on macOS

If you are on macOS and using the [Homebrew](https://brew.sh/) package manager, you can install kubectl with Homebrew.

1. Run the installation command:

   ```bash
   brew install kubectl
   ```

   or

   ```bash
   brew install kubernetes-cli
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install with Macports on macOS

If you are on macOS and using the [Macports](https://macports.org/) package manager, you can install kubectl with Macports.

1. Run the installation command:

   ```bash
   sudo port selfupdate
   sudo port install kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install on macOS as part of the Google Cloud SDK

{{< include "included/install-kubectl-gcloud.md" >}}

## Verify kubectl configuration

{{< include "included/verify-kubectl.md" >}}

## Optional kubectl configurations

### Enable shell autocompletion

kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.

Below are the procedures to set up autocompletion for Bash and Zsh.

{{< tabs name="kubectl_autocompletion" >}}
{{< tab name="Bash" include="included/optional-kubectl-configs-bash-mac.md" />}}
{{< tab name="Zsh" include="included/optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}

## {{% heading "whatsnext" %}}

{{< include "included/kubectl-whats-next.md" >}}
@ -0,0 +1,179 @@
|
|||
---
reviewers:
- mikedanese
title: Install and Set Up kubectl on Windows
content_type: task
weight: 10
card:
  name: tasks
  weight: 20
  title: Install kubectl on Windows
---

## {{% heading "prerequisites" %}}

You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.
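
The skew rule can be checked mechanically. The sketch below compares two illustrative version strings; they are placeholders, not values read from a real cluster (substitute the output of `kubectl version` for a real check):

```bash
# Hypothetical versions; replace with the values reported by `kubectl version`.
client="1.21.0"
server="1.20.4"

client_minor=$(echo "$client" | cut -d. -f2)
server_minor=$(echo "$server" | cut -d. -f2)

skew=$(( client_minor - server_minor ))
skew=${skew#-}  # absolute value

if [ "$skew" -le 1 ]; then
  echo "client/server skew is within one minor version"
else
  echo "client/server skew exceeds one minor version"
fi
```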

## Install kubectl on Windows

The following methods exist for installing kubectl on Windows:

- [Install kubectl binary with curl on Windows](#install-kubectl-binary-with-curl-on-windows)
- [Install with PowerShell from PSGallery](#install-with-powershell-from-psgallery)
- [Install on Windows using Chocolatey or Scoop](#install-on-windows-using-chocolatey-or-scoop)
- [Install on Windows as part of the Google Cloud SDK](#install-on-windows-as-part-of-the-google-cloud-sdk)

### Install kubectl binary with curl on Windows

1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

   Or if you have `curl` installed, use this command:

   ```powershell
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
   ```

   {{< note >}}
   To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```powershell
   curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
   ```

   Validate the kubectl binary against the checksum file:

   - Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded:

     ```cmd
     CertUtil -hashfile kubectl.exe SHA256
     type kubectl.exe.sha256
     ```

   - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result:

     ```powershell
     $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
     ```

1. Add the binary to your `PATH`.
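
   One way to do this for the current session only is sketched below; the folder name is a placeholder for wherever you saved `kubectl.exe` (use the System Properties dialog or `setx` for a persistent change):

   ```powershell
   # Prepend the folder containing kubectl.exe (placeholder path) to PATH
   # for this PowerShell session only.
   $env:Path = "C:\tools\kubectl;" + $env:Path
   ```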

1. Test to ensure the version of `kubectl` is the same as downloaded:

   ```cmd
   kubectl version --client
   ```

{{< note >}}
[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to `PATH`.
If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer or remove Docker Desktop's `kubectl`.
{{< /note >}}

### Install with PowerShell from PSGallery

If you are on Windows and using the [PowerShell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with PowerShell.

1. Run the installation commands (making sure to specify a `DownloadLocation`):

   ```powershell
   Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
   install-kubectl.ps1 [-DownloadLocation <path>]
   ```

   {{< note >}}
   If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's `temp` directory.
   {{< /note >}}

   The installer creates the `$HOME/.kube` directory and a config file within it.

1. Test to ensure the version you installed is up-to-date:

   ```powershell
   kubectl version --client
   ```

{{< note >}}
To update the installation, rerun the two commands listed in step 1.
{{< /note >}}

### Install on Windows using Chocolatey or Scoop

1. To install kubectl on Windows, you can use either the [Chocolatey](https://chocolatey.org) package manager or the [Scoop](https://scoop.sh) command-line installer.

   {{< tabs name="kubectl_win_install" >}}
   {{% tab name="choco" %}}
   ```powershell
   choco install kubernetes-cli
   ```
   {{% /tab %}}
   {{% tab name="scoop" %}}
   ```powershell
   scoop install kubectl
   ```
   {{% /tab %}}
   {{< /tabs >}}

1. Test to ensure the version you installed is up-to-date:

   ```powershell
   kubectl version --client
   ```

1. Navigate to your home directory:

   ```powershell
   # If you're using cmd.exe, run: cd %USERPROFILE%
   cd ~
   ```

1. Create the `.kube` directory:

   ```powershell
   mkdir .kube
   ```

1. Change to the `.kube` directory you just created:

   ```powershell
   cd .kube
   ```

1. Configure kubectl to use a remote Kubernetes cluster:

   ```powershell
   New-Item config -type file
   ```

   {{< note >}}
   Edit the config file with a text editor of your choice, such as Notepad.
   {{< /note >}}
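
   The file created above starts empty. A minimal kubeconfig for a remote cluster looks roughly like the sketch below; every name, address, and credential is a placeholder to be replaced with your cluster's details:

   ```yaml
   apiVersion: v1
   kind: Config
   clusters:
   - name: example-cluster                       # placeholder name
     cluster:
       server: https://cluster.example.com:6443  # placeholder endpoint
   users:
   - name: example-user                          # placeholder credentials
     user:
       token: REPLACE_WITH_TOKEN
   contexts:
   - name: example-context
     context:
       cluster: example-cluster
       user: example-user
   current-context: example-context
   ```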

### Install on Windows as part of the Google Cloud SDK

{{< include "included/install-kubectl-gcloud.md" >}}

## Verify kubectl configuration

{{< include "included/verify-kubectl.md" >}}

## Optional kubectl configurations

### Enable shell autocompletion

kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.

Below are the procedures to set up autocompletion for Zsh, if you are running that on Windows.

{{< include "included/optional-kubectl-configs-zsh.md" >}}

## {{% heading "whatsnext" %}}

{{< include "included/kubectl-whats-next.md" >}}

@@ -1,634 +0,0 @@
---
reviewers:
- mikedanese
title: Install and Set Up kubectl
content_type: task
weight: 10
card:
  name: tasks
  weight: 20
  title: Install kubectl
---

<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources,
and view logs. For a complete list of kubectl operations, see
[Overview of kubectl](/docs/reference/kubectl/overview/).

## {{% heading "prerequisites" %}}

You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.

<!-- steps -->

## Install kubectl on Linux

### Install kubectl binary with curl on Linux

1. Download the latest release with the command:

   ```bash
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
   ```

   {{< note >}}
   To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on Linux, type:

   ```bash
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
   ```
   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```bash
   curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
   ```

   Validate the kubectl binary against the checksum file:

   ```bash
   echo "$(<kubectl.sha256) kubectl" | sha256sum --check
   ```

   If valid, the output is:

   ```bash
   kubectl: OK
   ```

   If the check fails, `sha256sum` exits with nonzero status and prints output similar to:

   ```bash
   kubectl: FAILED
   sha256sum: WARNING: 1 computed checksum did NOT match
   ```

   {{< note >}}
   Download the same version of the binary and checksum.
   {{< /note >}}

1. Install kubectl

   ```bash
   sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
   ```

   {{< note >}}
   If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory:

   ```bash
   chmod +x kubectl
   mkdir -p ~/.local/bin
   mv ./kubectl ~/.local/bin/kubectl
   # and then add ~/.local/bin to $PATH
   ```

   {{< /note >}}
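
   One minimal way to make such a user-local install visible to the current shell is shown below (append the same `export` line to your `~/.bashrc` to make it persistent):

   ```bash
   # Put ~/.local/bin at the front of PATH for this shell session.
   export PATH="$HOME/.local/bin:$PATH"
   # Show the first PATH entry to confirm the change took effect.
   echo "$PATH" | tr ':' '\n' | head -n 1
   ```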

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install using native package management

{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}

{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}
{{< /tabs >}}

### Install using other package management

{{< tabs name="other_kubectl_install" >}}
{{% tab name="Snap" %}}
If you are on Ubuntu or another Linux distribution that supports the [snap](https://snapcraft.io/docs/core/install) package manager, kubectl is available as a [snap](https://snapcraft.io/) application.

```shell
snap install kubectl --classic

kubectl version --client
```

{{% /tab %}}

{{% tab name="Homebrew" %}}
If you are on Linux and using the [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) package manager, kubectl is available for [installation](https://docs.brew.sh/Homebrew-on-Linux#install).

```shell
brew install kubectl

kubectl version --client
```

{{% /tab %}}

{{< /tabs >}}

## Install kubectl on macOS

### Install kubectl binary with curl on macOS

1. Download the latest release:

   ```bash
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
   ```

   {{< note >}}
   To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on macOS, type:

   ```bash
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
   ```

   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```bash
   curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
   ```

   Validate the kubectl binary against the checksum file:

   ```bash
   echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
   ```

   If valid, the output is:

   ```bash
   kubectl: OK
   ```

   If the check fails, `shasum` exits with nonzero status and prints output similar to:

   ```bash
   kubectl: FAILED
   shasum: WARNING: 1 computed checksum did NOT match
   ```

   {{< note >}}
   Download the same version of the binary and checksum.
   {{< /note >}}

1. Make the kubectl binary executable.

   ```bash
   chmod +x ./kubectl
   ```

1. Move the kubectl binary to a file location on your system `PATH`.

   ```bash
   sudo mv ./kubectl /usr/local/bin/kubectl && \
   sudo chown root: /usr/local/bin/kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install with Homebrew on macOS

If you are on macOS and using [Homebrew](https://brew.sh/) package manager, you can install kubectl with Homebrew.

1. Run the installation command:

   ```bash
   brew install kubectl
   ```

   or

   ```bash
   brew install kubernetes-cli
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```

### Install with Macports on macOS

If you are on macOS and using [Macports](https://macports.org/) package manager, you can install kubectl with Macports.

1. Run the installation command:

   ```bash
   sudo port selfupdate
   sudo port install kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```bash
   kubectl version --client
   ```
## Install kubectl on Windows

### Install kubectl binary with curl on Windows

1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

   Or if you have `curl` installed, use this command:

   ```powershell
   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
   ```

   {{< note >}}
   To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
   {{< /note >}}

1. Validate the binary (optional)

   Download the kubectl checksum file:

   ```powershell
   curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
   ```

   Validate the kubectl binary against the checksum file:

   - Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded:

     ```cmd
     CertUtil -hashfile kubectl.exe SHA256
     type kubectl.exe.sha256
     ```

   - Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result:

     ```powershell
     $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
     ```

1. Add the binary to your `PATH`.

1. Test to ensure the version of `kubectl` is the same as downloaded:

   ```cmd
   kubectl version --client
   ```

{{< note >}}
[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to `PATH`.
If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer or remove Docker Desktop's `kubectl`.
{{< /note >}}

### Install with PowerShell from PSGallery

If you are on Windows and using the [PowerShell Gallery](https://www.powershellgallery.com/) package manager, you can install and update kubectl with PowerShell.

1. Run the installation commands (making sure to specify a `DownloadLocation`):

   ```powershell
   Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
   install-kubectl.ps1 [-DownloadLocation <path>]
   ```

   {{< note >}}
   If you do not specify a `DownloadLocation`, `kubectl` will be installed in the user's `temp` directory.
   {{< /note >}}

   The installer creates the `$HOME/.kube` directory and a config file within it.

1. Test to ensure the version you installed is up-to-date:

   ```powershell
   kubectl version --client
   ```

{{< note >}}
To update the installation, rerun the two commands listed in step 1.
{{< /note >}}

### Install on Windows using Chocolatey or Scoop

1. To install kubectl on Windows, you can use either the [Chocolatey](https://chocolatey.org) package manager or the [Scoop](https://scoop.sh) command-line installer.

   {{< tabs name="kubectl_win_install" >}}
   {{% tab name="choco" %}}
   ```powershell
   choco install kubernetes-cli
   ```
   {{% /tab %}}
   {{% tab name="scoop" %}}
   ```powershell
   scoop install kubectl
   ```
   {{% /tab %}}
   {{< /tabs >}}

1. Test to ensure the version you installed is up-to-date:

   ```powershell
   kubectl version --client
   ```

1. Navigate to your home directory:

   ```powershell
   # If you're using cmd.exe, run: cd %USERPROFILE%
   cd ~
   ```

1. Create the `.kube` directory:

   ```powershell
   mkdir .kube
   ```

1. Change to the `.kube` directory you just created:

   ```powershell
   cd .kube
   ```

1. Configure kubectl to use a remote Kubernetes cluster:

   ```powershell
   New-Item config -type file
   ```

   {{< note >}}
   Edit the config file with a text editor of your choice, such as Notepad.
   {{< /note >}}

## Download as part of the Google Cloud SDK

You can install kubectl as part of the Google Cloud SDK.

1. Install the [Google Cloud SDK](https://cloud.google.com/sdk/).

1. Run the `kubectl` installation command:

   ```shell
   gcloud components install kubectl
   ```

1. Test to ensure the version you installed is up-to-date:

   ```shell
   kubectl version --client
   ```

## Verifying kubectl configuration

In order for kubectl to find and access a Kubernetes cluster, it needs a
[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
which is created automatically when you create a cluster using
[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)
or successfully deploy a Minikube cluster.
By default, kubectl configuration is located at `~/.kube/config`.
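
A quick way to see whether a file exists at that default location before invoking kubectl (a trivial sketch, not part of the original flow):

```bash
# Report whether a kubeconfig is present at the default path.
if [ -f "$HOME/.kube/config" ]; then
  echo "kubeconfig found at $HOME/.kube/config"
else
  echo "no kubeconfig at the default location"
fi
```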

Check that kubectl is properly configured by getting the cluster state:

```shell
kubectl cluster-info
```

If you see a URL response, kubectl is correctly configured to access your cluster.

If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.

```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```

For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.

If `kubectl cluster-info` returns the URL response but you can't access your cluster, check whether it is configured properly by using:

```shell
kubectl cluster-info dump
```

## Optional kubectl configurations

### Enabling shell autocompletion

kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.

Below are the procedures to set up autocompletion for Bash (including the difference between Linux and macOS) and Zsh.

{{< tabs name="kubectl_autocompletion" >}}

{{% tab name="Bash on Linux" %}}

### Introduction

The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion.

However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).

### Install bash-completion

bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.

The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.

To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set; otherwise, add the following to your `~/.bashrc` file:

```bash
source /usr/share/bash-completion/bash_completion
```

Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.

### Enable kubectl autocompletion

You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this:

- Source the completion script in your `~/.bashrc` file:

  ```bash
  echo 'source <(kubectl completion bash)' >>~/.bashrc
  ```

- Add the completion script to the `/etc/bash_completion.d` directory:

  ```bash
  kubectl completion bash >/etc/bash_completion.d/kubectl
  ```

If you have an alias for kubectl, you can extend shell completion to work with that alias:

```bash
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
```

{{< note >}}
bash-completion sources all completion scripts in `/etc/bash_completion.d`.
{{< /note >}}

Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working.

{{% /tab %}}

{{% tab name="Bash on macOS" %}}

### Introduction

The kubectl completion script for Bash can be generated with `kubectl completion bash`. Sourcing this script in your shell enables kubectl completion.

However, the kubectl completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which you thus have to install first.

{{< warning >}}
There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
{{< /warning >}}

### Upgrade Bash

The instructions here assume you use Bash 4.1+. You can check your Bash's version by running:

```bash
echo $BASH_VERSION
```

If it is too old, you can install/upgrade it using Homebrew:

```bash
brew install bash
```

Reload your shell and verify that the desired version is being used:

```bash
echo $BASH_VERSION $SHELL
```

Homebrew usually installs it at `/usr/local/bin/bash`.

### Install bash-completion

{{< note >}}
As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work).
{{< /note >}}

You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:

```bash
brew install bash-completion@2
```

As stated in the output of this command, add the following to your `~/.bash_profile` file:

```bash
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```

Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.

### Enable kubectl autocompletion

You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:

- Source the completion script in your `~/.bash_profile` file:

  ```bash
  echo 'source <(kubectl completion bash)' >>~/.bash_profile
  ```

- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:

  ```bash
  kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl
  ```

- If you have an alias for kubectl, you can extend shell completion to work with that alias:

  ```bash
  echo 'alias k=kubectl' >>~/.bash_profile
  echo 'complete -F __start_kubectl k' >>~/.bash_profile
  ```

- If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.

{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, which is why the latter two methods work.
{{< /note >}}

In any case, after reloading your shell, kubectl completion should be working.
{{% /tab %}}

{{% tab name="Zsh" %}}

The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion.

To do so in all your shell sessions, add the following to your `~/.zshrc` file:

```zsh
source <(kubectl completion zsh)
```

If you have an alias for kubectl, you can extend shell completion to work with that alias:

```zsh
echo 'alias k=kubectl' >>~/.zshrc
echo 'complete -F __start_kubectl k' >>~/.zshrc
```

After reloading your shell, kubectl autocompletion should be working.

If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:

```zsh
autoload -Uz compinit
compinit
```
{{% /tab %}}
{{< /tabs >}}

## {{% heading "whatsnext" %}}

* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/)
* See the [getting started guides](/docs/setup/) for more about creating clusters.
* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* If you need access to a cluster you didn't create, see the
  [Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
* Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/)
@@ -59,6 +59,22 @@ If you installed minikube locally, run `minikube start`.

4. Katacoda environment only: Type `30000`, and then click **Display Port**.

{{< note >}}
The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service.

If you are running in an environment as root, see [Open Dashboard with URL](/docs/tutorials/hello-minikube#open-dashboard-with-url).

To stop the proxy, press `Ctrl+C` to exit the process. The dashboard remains running.
{{< /note >}}

## Open Dashboard with URL

If you don't want to open a web browser, run the dashboard command with the `--url` flag to emit a URL:

```shell
minikube dashboard --url
```

## Create a Deployment

A Kubernetes [*Pod*](/docs/concepts/workloads/pods/) is a group of one or more Containers,

@@ -31,7 +31,7 @@ weight: 10
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it.
To do so, you create a Kubernetes <b>Deployment</b> configuration. The Deployment instructs Kubernetes
how to create and update instances of your application. Once you've created a Deployment, the Kubernetes
-master schedules the application instances included in that Deployment to run on individual Nodes in the
+control plane schedules the application instances included in that Deployment to run on individual Nodes in the
cluster.
</p>
@@ -1,5 +1,32 @@
<svg width="476.1" height="385.3" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<style type="text/css">.st0{fill:#FFFFFF;stroke:#006DE9;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:cc="http://creativecommons.org/ns#"
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:svg="http://www.w3.org/2000/svg"
   xmlns="http://www.w3.org/2000/svg"
   xmlns:xlink="http://www.w3.org/1999/xlink"
   width="476.1"
   height="385.3"
   version="1.1"
   id="svg1987">
  <metadata
     id="metadata1993">
    <rdf:RDF>
      <cc:Work
         rdf:about="">
        <dc:format>image/svg+xml</dc:format>
        <dc:type
           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
        <dc:title></dc:title>
      </cc:Work>
    </rdf:RDF>
  </metadata>
  <defs
     id="defs1991" />
  <style
     type="text/css"
     id="style1792">.st0{fill:#FFFFFF;stroke:#006DE9;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
.st1{fill:#FFFFFF;stroke:#006DE9;stroke-width:6;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
.st2{fill:#FFFFFF;stroke:#326DE6;stroke-width:2;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
.st3{opacity:0.71;fill:#326CE6;}

@@ -63,231 +90,535 @@
.st61{fill:#011F38;stroke:#414042;stroke-width:0.3;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
.st62{fill:none;stroke:#011F38;stroke-width:0.3;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}
.st63{fill:none;stroke:#011F38;stroke-width:0.2813;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;}</style>
<symbol id="master_x5F_level1_1_" viewBox="-68.6 -66.9 137.2 133.9">
  <g id="svg_1">
    <g id="svg_2">
      <line class="st0" x1="0" y1="-11.1" x2="0" y2="0.7" id="svg_3"/>
      <line class="st0" x1="5.9" y1="-5.2" x2="-5.9" y2="-5.2" id="svg_4"/>
    </g>
    <polygon class="st1" points="-29.2,-63.9 -65.6,-18.3 -52.6,38.6 0,63.9 52.6,38.6 65.6,-18.3 29.2,-63.9 " id="svg_5"/>
  </g>
</symbol>
<symbol id="node_high_level" viewBox="-81 -93 162 186.1">
  <polygon class="st2" points="-80,-46 -80,46 0,92 80,46 80,-46 0,-92 " id="svg_6"/>
  <g id="Isolation_Mode_3_"/>
</symbol>
<symbol id="node_x5F_empty" viewBox="-87.5 -100.6 175.1 201.1">
  <use xlink:href="#node_high_level" width="162" height="186.1" id="XMLID_201_" x="-81" y="-93" transform="matrix(1.0808,0,0,1.0808,-0.00003292006,-0.00003749943) "/>
  <g id="svg_7">
    <polygon class="st3" points="76.8,-28.1 -14,-80.3 0,-88.3 76.7,-44.4 " id="svg_8"/>
    <polygon class="st4" points="76.8,-28.1 32.1,-53.8 38.8,-66.1 76.7,-44.4 " id="svg_9"/>
  </g>
</symbol>
<symbol id="node_x5F_new" viewBox="-87.6 -101 175.2 202">
  <polygon class="st5" points="0,-100 -86.6,-50 -86.6,50 0,100 86.6,50 86.6,-50 " id="svg_10"/>
  <polygon class="st6" points="-86.6,-20.2 -86.6,-50 0,-100 25.8,-85.1 " id="svg_11"/>
  <polygon class="st7" points="-40.8,-70.7 -32.9,-57 15.7,-85.1 0,-94.3 " id="svg_12"/>
|
||||
<text transform="matrix(0.866,-0.5,-0.5,-0.866,-33.9256,-70.7388) " class="st8" font-size="11.3632px" font-family="'RobotoSlab-Regular'" id="svg_13">Docker</text>
|
||||
<text transform="matrix(0.866,-0.5,-0.5,-0.866,-76.0668,-46.4087) " class="st8" font-size="11.3632px" font-family="'RobotoSlab-Regular'" id="svg_14">Kubelet</text>
|
||||
</symbol>
|
||||
<g>
|
||||
<title>Layer 1</title>
|
||||
<g id="CLUSTER">
|
||||
<g id="XMLID_8_" class="st9">
|
||||
<g id="svg_15">
|
||||
<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="27.9955" y1="185.5114" x2="342.4509" y2="185.5114">
|
||||
<stop offset="0" stop-color="#326DE6"/>
|
||||
<stop offset="1" stop-color="#10FFC6"/>
|
||||
</linearGradient>
|
||||
<polygon class="st10" points="311.3,92.9 342.5,229.4 255.2,338.8 115.3,338.8 28,229.4 59.1,92.9 185.2,32.2 " id="svg_16"/>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
<g id="master">
|
||||
<g id="master_x5F_level1"/>
|
||||
<use xlink:href="#master_x5F_level1_1_" width="137.2" height="133.9" x="-68.6" y="-66.9" transform="matrix(0.4,0,0,-0.4,185.2213,187.4709) " id="svg_17"/>
|
||||
</g>
|
||||
<g id="Node">
|
||||
<g id="Node_x5F_level3_x5F_1">
|
||||
<g id="Isolation_Mode"/>
|
||||
</g>
|
||||
<polygon class="st16" points="224.6,160.7 189,140.1 189,98.9 224.6,78.4 260.3,98.9 260.3,140.1 " id="svg_18"/>
|
||||
<polygon class="st13" points="189,127.8 189,140.1 224.6,160.7 235.2,154.5 " id="svg_19"/>
|
||||
<polygon class="st7" points="207.8,148.6 211.1,143 231.1,154.5 224.6,158.3 " id="svg_20"/>
|
||||
<g id="svg_21">
|
||||
<path class="st8" d="m213.7,146.5c0.4,0.2 0.6,0.5 0.7,0.9c0.1,0.4 0,0.7 -0.2,1.1l-0.2,0.4c-0.2,0.4 -0.5,0.6 -0.9,0.7c-0.4,0.1 -0.7,0 -1.1,-0.2l-1.2,-0.7l0.1,-0.2l0.4,0.1l1.3,-2.3l-0.3,-0.2l0.1,-0.3l0.3,0.2l1,0.5zm-0.6,0l-1.3,2.3l0.5,0.3c0.3,0.2 0.5,0.2 0.8,0.1c0.3,-0.1 0.5,-0.3 0.6,-0.5l0.2,-0.4c0.2,-0.3 0.2,-0.5 0.2,-0.8c-0.1,-0.3 -0.2,-0.5 -0.5,-0.6l-0.5,-0.4z" id="svg_22"/>
|
||||
<path class="st8" d="m214.3,149.2c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.9,0.1c0.3,0.2 0.5,0.4 0.5,0.7c0.1,0.3 0,0.6 -0.2,0.9l0,0c-0.2,0.3 -0.4,0.5 -0.7,0.6c-0.3,0.1 -0.6,0.1 -0.9,-0.1c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c0,-0.2 0,-0.5 0.2,-0.9l0,0zm0.4,0.3c-0.1,0.2 -0.2,0.4 -0.2,0.6c0,0.2 0.1,0.4 0.3,0.5c0.2,0.1 0.4,0.1 0.5,0c0.2,-0.1 0.3,-0.3 0.5,-0.5l0,0c0.1,-0.2 0.2,-0.4 0.2,-0.6c0,-0.2 -0.1,-0.4 -0.3,-0.5c-0.2,-0.1 -0.4,-0.1 -0.6,0s-0.2,0.2 -0.4,0.5l0,0z" id="svg_23"/>
|
||||
<path class="st8" d="m217.1,151.9c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,-0.1 0.3,-0.2l0.4,0.2l0,0c-0.1,0.2 -0.3,0.3 -0.5,0.3c-0.3,0 -0.5,0 -0.7,-0.1c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c0,-0.3 0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.9,0.1c0.2,0.1 0.3,0.2 0.4,0.4c0.1,0.1 0.2,0.3 0.2,0.4l-0.3,0.5l-0.3,-0.2l0.1,-0.4c0,-0.1 -0.1,-0.1 -0.1,-0.2c-0.1,-0.1 -0.1,-0.1 -0.2,-0.2c-0.2,-0.1 -0.4,-0.1 -0.6,0c-0.2,0.1 -0.3,0.3 -0.4,0.5l0,0.1c-0.1,0.2 -0.2,0.4 -0.2,0.6c-0.1,0.2 0,0.3 0.2,0.4z" id="svg_24"/>
|
||||
<path class="st8" d="m219.7,150l0.1,-0.3l0.7,0.4l-1,1.8l0.2,0.1l0.8,-0.3l-0.2,-0.1l0.1,-0.3l0.9,0.5l-0.1,0.3l-0.3,-0.1l-1,0.3l0.2,1.3l0.2,0.2l-0.1,0.2l-0.9,-0.5l0.1,-0.2l0.2,0.1l-0.2,-1l-0.3,-0.1l-0.1,0.7l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.4,-0.2z" id="svg_25"/>
|
||||
<path class="st8" d="m221.5,154.9c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7s0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.1,0.2 -0.3,0.1 -0.5,0zm1.1,-1.9c-0.1,-0.1 -0.3,-0.1 -0.4,0c-0.2,0.1 -0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5c-0.1,-0.1 -0.2,-0.2 -0.3,-0.3z" id="svg_26"/>
|
||||
<path class="st8" d="m223.9,153.7l0.1,-0.3l0.7,0.4l-0.1,0.3c0.1,-0.1 0.2,-0.1 0.4,-0.1c0.1,0 0.2,0 0.4,0.1c0,0 0.1,0 0.1,0.1c0,0 0.1,0 0.1,0.1l-0.3,0.3l-0.2,-0.1c-0.1,-0.1 -0.2,-0.1 -0.3,-0.1c-0.1,0 -0.2,0 -0.3,0.1l-0.7,1.2l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l0.9,-1.5l-0.5,-0.2z" id="svg_27"/>
|
||||
</g>
|
||||
<g id="svg_28">
|
||||
<path class="st8" d="m194.3,135.3l0.1,-0.3l0.7,0.4l-1,1.8l0.2,0.1l0.8,-0.3l-0.2,-0.1l0.1,-0.3l0.9,0.5l-0.1,0.3l-0.3,-0.1l-1,0.3l0.2,1.3l0.2,0.2l-0.1,0.2l-0.9,-0.5l0.1,-0.2l0.2,0.1l-0.2,-1l-0.3,-0.1l-0.4,0.7l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.1,-0.2z" id="svg_29"/>
|
||||
<path class="st8" d="m196.8,140.2c-0.1,0.1 -0.3,0.1 -0.4,0.1c-0.1,0 -0.3,0 -0.4,-0.1c-0.2,-0.1 -0.4,-0.3 -0.4,-0.5c0,-0.2 0,-0.5 0.2,-0.8l0.6,-1l-0.2,-0.2l0.1,-0.3l0.2,0.1l0.4,0.2l-0.7,1.3c-0.1,0.2 -0.2,0.4 -0.2,0.5c0,0.1 0.1,0.2 0.2,0.3c0.1,0.1 0.3,0.1 0.4,0.1s0.2,0 0.3,-0.1l0.7,-1.2l-0.3,-0.2l0.1,-0.3l0.3,0.2l0.4,0.2l-1.1,1.8l0.2,0.2l-0.1,0.2l-0.6,-0.3l0.3,-0.2z" id="svg_30"/>
|
||||
<path class="st8" d="m200.1,141.2c-0.2,0.3 -0.4,0.5 -0.7,0.6c-0.3,0.1 -0.5,0.1 -0.8,-0.1c-0.1,-0.1 -0.2,-0.2 -0.3,-0.3c-0.1,-0.1 -0.1,-0.2 -0.1,-0.4l-0.2,0.3l-0.3,-0.2l1.6,-2.8l-0.3,-0.2l0.1,-0.3l0.7,0.4l-0.7,1.2c0.1,-0.1 0.2,-0.1 0.4,-0.1c0.1,0 0.3,0 0.4,0.1c0.3,0.2 0.4,0.4 0.4,0.7c0.1,0.4 0,0.7 -0.2,1.1l0,0zm-0.4,-0.3c0.1,-0.2 0.2,-0.5 0.2,-0.7c0,-0.2 -0.1,-0.4 -0.3,-0.5c-0.1,-0.1 -0.2,-0.1 -0.4,-0.1c-0.1,0 -0.2,0.1 -0.3,0.1l-0.5,0.9c0,0.1 0,0.2 0.1,0.3c0.1,0.1 0.1,0.2 0.3,0.3c0.2,0.1 0.4,0.1 0.5,0c0.2,0.1 0.3,0 0.4,-0.3l0,0z" id="svg_31"/>
|
||||
<path class="st8" d="m200.9,143c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7s0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.2,0.2 -0.3,0.1 -0.5,0zm1,-2c-0.1,-0.1 -0.3,-0.1 -0.4,0c-0.2,0.1 -0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5c0,0 -0.1,-0.2 -0.3,-0.3z" id="svg_32"/>
|
||||
<path class="st8" d="m203.7,140.8l0.1,-0.3l0.7,0.4l-1.6,2.8l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.3,-0.1z" id="svg_33"/>
|
||||
<path class="st8" d="m204.3,145c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c-0.1,-0.3 0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1s0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.1,0.1 -0.3,0.1 -0.5,0zm1.1,-2c-0.1,-0.1 -0.3,-0.1 -0.4,0c-0.2,0.1 -0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5c-0.1,-0.1 -0.2,-0.2 -0.3,-0.3z" id="svg_34"/>
|
||||
<path class="st8" d="m207.8,143.4l-0.3,0.5l0.4,0.2l-0.2,0.3l-0.4,-0.2l-0.8,1.3c-0.1,0.1 -0.1,0.2 -0.1,0.2c0,0.1 0.1,0.1 0.1,0.2c0,0 0.1,0 0.1,0.1c0,0 0.1,0 0.1,0l-0.1,0.3c-0.1,0 -0.1,0 -0.2,0c-0.1,0 -0.2,-0.1 -0.2,-0.1c-0.2,-0.1 -0.3,-0.2 -0.3,-0.4s0,-0.3 0.1,-0.5l0.8,-1.3l-0.3,-0.2l0.2,-0.3l0.3,0.2l0.3,-0.5l0.5,0.2z" id="svg_35"/>
|
||||
</g>
|
||||
<polygon class="st16" points="182.3,140.1 146.6,160.7 111,140.1 111,99 146.6,78.4 182.3,99 " id="svg_36"/>
|
||||
<polygon class="st13" points="136,154.5 146.6,160.7 182.3,140.1 182.2,127.9 " id="svg_37"/>
|
||||
<polygon class="st7" points="163.4,148.6 160.2,143 180.2,131.4 180.2,138.9 " id="svg_38"/>
|
||||
<g id="svg_39">
|
||||
<path class="st8" d="m164.6,142.5c0.4,-0.2 0.7,-0.3 1.1,-0.2c0.4,0.1 0.6,0.3 0.9,0.7l0.2,0.4c0.2,0.4 0.3,0.7 0.2,1.1c-0.1,0.4 -0.3,0.7 -0.7,0.9l-1.2,0.7l-0.1,-0.2l0.3,-0.2l-1.3,-2.3l-0.4,0.1l-0.1,-0.3l0.3,-0.2l0.8,-0.5zm-0.3,0.6l1.3,2.3l0.5,-0.3c0.3,-0.2 0.4,-0.4 0.5,-0.6c0.1,-0.3 0,-0.5 -0.2,-0.8l-0.2,-0.4c-0.2,-0.3 -0.4,-0.5 -0.6,-0.5c-0.3,-0.1 -0.5,-0.1 -0.8,0.1l-0.5,0.2z" id="svg_40"/>
|
||||
<path class="st8" d="m167.3,143.4c-0.2,-0.3 -0.2,-0.6 -0.2,-0.9c0.1,-0.3 0.2,-0.5 0.5,-0.7c0.3,-0.2 0.6,-0.2 0.9,-0.1c0.3,0.1 0.5,0.3 0.7,0.6l0,0c0.2,0.3 0.2,0.6 0.2,0.9c-0.1,0.3 -0.2,0.5 -0.5,0.7c-0.3,0.2 -0.6,0.2 -0.9,0.1c-0.3,0 -0.5,-0.3 -0.7,-0.6l0,0zm0.4,-0.2c0.1,0.2 0.3,0.4 0.5,0.5s0.4,0.1 0.6,0c0.2,-0.1 0.3,-0.3 0.3,-0.5c0,-0.2 0,-0.4 -0.2,-0.6l0,0c-0.1,-0.2 -0.3,-0.4 -0.5,-0.5c-0.2,-0.1 -0.4,-0.1 -0.6,0c-0.2,0.1 -0.3,0.3 -0.3,0.5s0,0.3 0.2,0.6l0,0z" id="svg_41"/>
|
||||
<path class="st8" d="m171,142.3c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.1,-0.1 0,-0.3 0,-0.4l0.3,-0.2l0,0c0.1,0.2 0.1,0.4 0,0.6c-0.1,0.2 -0.2,0.4 -0.5,0.6c-0.3,0.2 -0.6,0.2 -0.9,0.1c-0.3,-0.1 -0.5,-0.3 -0.7,-0.6l0,-0.1c-0.2,-0.3 -0.2,-0.6 -0.2,-0.9c0,-0.3 0.2,-0.5 0.5,-0.7c0.2,-0.1 0.3,-0.2 0.5,-0.2s0.3,0 0.5,0l0.3,0.5l-0.3,0.2l-0.3,-0.3c-0.1,0 -0.2,0 -0.2,0c-0.1,0 -0.2,0 -0.3,0.1c-0.2,0.1 -0.3,0.3 -0.3,0.5c0,0.2 0.1,0.4 0.2,0.6l0,0.1c0.1,0.2 0.3,0.4 0.4,0.5c0.3,0.1 0.5,0.1 0.7,-0.1z" id="svg_42"/>
|
||||
<path class="st8" d="m170.6,139.1l-0.1,-0.3l0.7,-0.4l1,1.8l0.2,-0.1l0.1,-0.8l-0.2,0.1l-0.1,-0.3l0.9,-0.5l0.1,0.3l-0.2,0.2l-0.2,1l1.2,0.5l0.3,-0.1l0.1,0.2l-0.9,0.5l-0.1,-0.2l0.2,-0.1l-1,-0.4l-0.3,0.1l0.4,0.7l0.4,-0.1l0.1,0.2l-1,0.6l-0.1,-0.2l0.3,-0.2l-1.4,-2.6l-0.4,0.1z" id="svg_43"/>
|
||||
<path class="st8" d="m175.8,140c-0.3,0.2 -0.6,0.2 -0.9,0.1c-0.3,-0.1 -0.5,-0.3 -0.7,-0.6l-0.1,-0.1c-0.2,-0.3 -0.2,-0.6 -0.2,-0.9c0.1,-0.3 0.2,-0.5 0.5,-0.7c0.3,-0.2 0.6,-0.2 0.8,-0.1c0.2,0.1 0.5,0.3 0.6,0.6l0.1,0.2l-1.4,0.8l0,0c0.1,0.2 0.3,0.3 0.4,0.4c0.2,0.1 0.4,0.1 0.5,0c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.1,-0.1 0.1,-0.2 0.2,-0.3l0.3,0.2c0,0.1 -0.1,0.2 -0.2,0.4c0.1,0.1 -0.1,0.2 -0.2,0.3zm-1.2,-1.9c-0.1,0.1 -0.2,0.2 -0.2,0.4c0,0.2 0,0.3 0.1,0.5l0,0l1,-0.6l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.3,-0.3c-0.3,0 -0.4,0 -0.6,0.1z" id="svg_44"/>
|
||||
<path class="st8" d="m175.8,137.4l-0.1,-0.3l0.7,-0.4l0.2,0.3c0,-0.1 0,-0.3 0.1,-0.4c0.1,-0.1 0.1,-0.2 0.3,-0.3c0,0 0.1,0 0.1,0c0,0 0.1,0 0.1,0l0.2,0.4l-0.2,0.1c-0.1,0.1 -0.2,0.1 -0.2,0.2s-0.1,0.2 0,0.3l0.7,1.2l0.4,-0.1l0.1,0.2l-1,0.6l-0.1,-0.2l0.3,-0.2l-0.9,-1.5l-0.7,0.1z" id="svg_45"/>
|
||||
</g>
|
||||
<g id="svg_46">
|
||||
<path class="st8" d="m145.1,153.8l-0.1,-0.3l0.7,-0.4l1,1.8l0.2,-0.1l0.1,-0.8l-0.2,0.1l-0.1,-0.3l0.9,-0.5l0.1,0.3l-0.2,0.2l-0.2,1l1.2,0.5l0.3,-0.1l0.1,0.2l-0.9,0.5l-0.1,-0.2l0.2,-0.1l-1,-0.4l-0.3,0.1l0.4,0.7l0.4,-0.1l0.1,0.2l-1,0.6l-0.1,-0.2l0.3,-0.2l-1.4,-2.5l-0.4,0z" id="svg_47"/>
|
||||
<path class="st8" d="m150.6,154c0,0.2 0,0.3 -0.1,0.4c-0.1,0.1 -0.2,0.2 -0.3,0.3c-0.2,0.1 -0.5,0.2 -0.7,0.1c-0.2,-0.1 -0.4,-0.3 -0.6,-0.6l-0.6,-1l-0.3,0.1l-0.1,-0.3l0.2,-0.1l0.4,-0.2l0.7,1.3c0.1,0.2 0.3,0.4 0.4,0.4c0.1,0 0.2,0 0.4,-0.1c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.1,-0.1 0.1,-0.2 0.1,-0.4l-0.7,-1.2l-0.3,0.1l-0.1,-0.3l0.3,-0.2l0.4,-0.2l1.1,1.8l0.3,-0.1l0.1,0.2l-0.6,0.3l-0.3,0z" id="svg_48"/>
|
||||
<path class="st8" d="m153.1,151.7c0.2,0.3 0.2,0.6 0.2,0.9c0,0.3 -0.2,0.5 -0.4,0.6c-0.1,0.1 -0.3,0.1 -0.4,0.1c-0.1,0 -0.3,0 -0.4,-0.1l0.1,0.3l-0.3,0.2l-1.6,-2.8l-0.4,0.1l-0.1,-0.3l0.7,-0.4l0.7,1.2c0,-0.1 0.1,-0.3 0.1,-0.4c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.3,-0.2 0.5,-0.2 0.8,0c0.3,0.2 0.5,0.5 0.7,0.9l0,0zm-0.4,0.1c-0.1,-0.2 -0.3,-0.4 -0.5,-0.5s-0.4,-0.1 -0.5,0c-0.1,0.1 -0.2,0.2 -0.3,0.3c0,0.1 -0.1,0.2 -0.1,0.3l0.5,0.9c0.1,0.1 0.2,0.1 0.3,0.1c0.1,0 0.2,0 0.4,-0.1c0.2,-0.1 0.3,-0.2 0.3,-0.4c0.1,-0.1 0.1,-0.3 -0.1,-0.6l0,0z" id="svg_49"/>
|
||||
<path class="st8" d="m155.1,151.9c-0.3,0.2 -0.6,0.2 -0.9,0.1s-0.5,-0.3 -0.7,-0.6l-0.1,-0.1c-0.2,-0.3 -0.2,-0.6 -0.2,-0.9c0.1,-0.3 0.2,-0.5 0.5,-0.7c0.3,-0.2 0.6,-0.2 0.8,-0.1c0.2,0.1 0.5,0.3 0.6,0.6l0.1,0.2l-1.4,0.8l0,0c0.1,0.2 0.3,0.3 0.4,0.4c0.2,0.1 0.4,0.1 0.5,0c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.1,-0.1 0.1,-0.2 0.2,-0.3l0.3,0.2c0,0.1 -0.1,0.2 -0.2,0.4c0.1,0.1 0,0.2 -0.2,0.3zm-1.2,-1.9c-0.1,0.1 -0.2,0.2 -0.2,0.4c0,0.2 0,0.3 0.1,0.5l0,0l1,-0.6l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.3,-0.3c-0.3,0 -0.4,0 -0.6,0.1z" id="svg_50"/>
|
||||
<path class="st8" d="m154.6,148.4l-0.1,-0.3l0.7,-0.4l1.6,2.8l0.4,-0.1l0.1,0.2l-1,0.6l-0.1,-0.2l0.3,-0.2l-1.4,-2.5l-0.5,0.1z" id="svg_51"/>
|
||||
<path class="st8" d="m158.6,149.9c-0.3,0.2 -0.6,0.2 -0.9,0.1c-0.3,-0.1 -0.5,-0.3 -0.7,-0.6l-0.1,-0.1c-0.2,-0.3 -0.2,-0.6 -0.2,-0.9c0.1,-0.3 0.2,-0.5 0.5,-0.7c0.3,-0.2 0.6,-0.2 0.8,-0.1c0.2,0.1 0.5,0.3 0.6,0.6l0.1,0.2l-1.4,0.8l0,0c0.1,0.2 0.3,0.3 0.4,0.4c0.2,0.1 0.4,0.1 0.5,0c0.1,-0.1 0.2,-0.2 0.3,-0.3c0.1,-0.1 0.1,-0.2 0.2,-0.3l0.3,0.2c0,0.1 -0.1,0.2 -0.2,0.4c0.1,0.1 -0.1,0.2 -0.2,0.3zm-1.2,-1.9c-0.1,0.1 -0.2,0.2 -0.2,0.4c0,0.2 0,0.3 0.1,0.5l0,0l1,-0.6l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.3,-0.3c-0.3,0 -0.5,0 -0.6,0.1z" id="svg_52"/>
|
||||
<path class="st8" d="m158.9,146.1l0.3,0.5l0.4,-0.2l0.2,0.3l-0.4,0.2l0.8,1.3c0.1,0.1 0.1,0.2 0.2,0.2c0.1,0 0.1,0 0.2,0c0,0 0.1,0 0.1,-0.1c0,0 0.1,-0.1 0.1,-0.1l0.2,0.2c0,0 -0.1,0.1 -0.1,0.2c-0.1,0.1 -0.1,0.1 -0.2,0.1c-0.2,0.1 -0.3,0.1 -0.5,0.1c-0.1,0 -0.3,-0.2 -0.4,-0.4l-0.7,-1.3l-0.3,0.2l-0.2,-0.3l0.3,-0.2l-0.3,-0.5l0.3,-0.2z" id="svg_53"/>
|
||||
</g>
|
||||
<polygon class="st16" points="146.2,296.6 110.6,276.1 110.6,234.9 146.2,214.4 181.9,234.9 181.9,276.1 " id="svg_54"/>
|
||||
<polygon class="st13" points="135.6,220.5 146.2,214.4 181.9,234.9 181.8,247.2 " id="svg_55"/>
|
||||
<polygon class="st7" points="163,226.4 159.8,232.1 179.8,243.6 179.8,236.1 " id="svg_56"/>
|
||||
<g id="svg_57">
|
||||
<path class="st8" d="m145.8,218.8l0.1,-0.3l0.7,0.4l-1,1.8l0.2,0.1l0.8,-0.3l-0.2,-0.1l0.1,-0.3l0.9,0.5l-0.1,0.3l-0.3,-0.1l-1,0.3l0.2,1.3l0.2,0.2l-0.1,0.2l-0.9,-0.5l0.1,-0.2l0.2,0.1l-0.2,-1l-0.3,-0.1l-0.4,0.7l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.1,-0.2z" id="svg_58"/>
|
||||
<path class="st8" d="m148.4,223.7c-0.1,0.1 -0.3,0.1 -0.4,0.1c-0.1,0 -0.3,0 -0.4,-0.1c-0.2,-0.1 -0.4,-0.3 -0.4,-0.5c0,-0.2 0,-0.5 0.2,-0.8l0.6,-1l-0.2,-0.2l0.1,-0.3l0.2,0.1l0.4,0.2l-0.7,1.3c-0.1,0.2 -0.2,0.4 -0.2,0.5c0,0.1 0.1,0.2 0.2,0.3c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,-0.1l0.7,-1.2l-0.3,-0.2l0.1,-0.3l0.3,0.2l0.4,0.2l-1.1,1.8l0.2,0.2l-0.1,0.2l-0.6,-0.3l0.3,-0.2z" id="svg_59"/>
|
||||
<path class="st8" d="m151.7,224.7c-0.2,0.3 -0.4,0.5 -0.7,0.6c-0.3,0.1 -0.5,0.1 -0.8,-0.1c-0.1,-0.1 -0.2,-0.2 -0.3,-0.3c-0.1,-0.1 -0.1,-0.2 -0.1,-0.4l-0.2,0.3l-0.3,-0.2l1.6,-2.8l-0.3,-0.2l0.1,-0.3l0.7,0.4l-0.7,1.2c0.1,-0.1 0.2,-0.1 0.4,-0.1c0.1,0 0.3,0 0.4,0.1c0.3,0.2 0.4,0.4 0.4,0.7c0.1,0.4 0,0.7 -0.2,1.1l0,0zm-0.4,-0.3c0.1,-0.2 0.2,-0.5 0.2,-0.7c0,-0.2 -0.1,-0.4 -0.3,-0.5c-0.1,-0.1 -0.2,-0.1 -0.4,-0.1c-0.1,0 -0.2,0.1 -0.3,0.1l-0.5,0.9c0,0.1 0,0.2 0.1,0.3c0.1,0.1 0.1,0.2 0.3,0.3c0.2,0.1 0.4,0.1 0.5,0c0.1,0.1 0.3,-0.1 0.4,-0.3l0,0z" id="svg_60"/>
|
||||
<path class="st8" d="m152.5,226.5c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7s0,-0.6 0.2,-0.9l0.1,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.3,0.1 -0.5,0.1 -0.6,0zm1,-2c-0.1,-0.1 -0.3,-0.1 -0.4,0c-0.2,0.1 -0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5s-0.1,-0.2 -0.3,-0.3z" id="svg_61"/>
|
||||
<path class="st8" d="m155.3,224.3l0.1,-0.3l0.7,0.4l-1.6,2.8l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.3,-0.1z" id="svg_62"/>
|
||||
<path class="st8" d="m155.9,228.5c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7s0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.1,0.1 -0.3,0.1 -0.5,0zm1.1,-2c-0.1,-0.1 -0.3,-0.1 -0.4,0s-0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5c-0.1,-0.1 -0.2,-0.2 -0.3,-0.3z" id="svg_63"/>
|
||||
<path class="st8" d="m159.4,226.9l-0.3,0.5l0.4,0.2l-0.2,0.3l-0.4,-0.2l-0.8,1.3c-0.1,0.1 -0.1,0.2 -0.1,0.2c0,0.1 0.1,0.1 0.1,0.2c0,0 0.1,0 0.1,0.1c0,0 0.1,0 0.1,0l-0.1,0.3c-0.1,0 -0.1,0 -0.2,0c-0.1,0 -0.2,-0.1 -0.2,-0.1c-0.2,-0.1 -0.3,-0.2 -0.3,-0.4s0,-0.3 0.1,-0.5l0.8,-1.3l-0.3,-0.2l0.2,-0.3l0.3,0.2l0.3,-0.5l0.5,0.2z" id="svg_64"/>
|
||||
</g>
|
||||
<g id="svg_65">
|
||||
<path class="st8" d="m166.1,230.5c0.4,0.2 0.6,0.5 0.7,0.9c0.1,0.4 0,0.7 -0.2,1.1l-0.2,0.4c-0.2,0.4 -0.5,0.6 -0.9,0.7s-0.7,0 -1.1,-0.2l-1.2,-0.7l0.1,-0.2l0.4,0.1l1.3,-2.3l-0.3,-0.2l0.1,-0.3l0.3,0.2l1,0.5zm-0.7,0l-1.3,2.3l0.5,0.3c0.3,0.2 0.5,0.2 0.8,0.1c0.3,-0.1 0.5,-0.3 0.6,-0.5l0.2,-0.4c0.2,-0.3 0.2,-0.5 0.2,-0.8c-0.1,-0.3 -0.2,-0.5 -0.5,-0.6l-0.5,-0.4z" id="svg_66"/>
|
||||
<path class="st8" d="m166.7,233.2c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.9,0.1c0.3,0.2 0.5,0.4 0.5,0.7c0.1,0.3 0,0.6 -0.2,0.9l0,0c-0.2,0.3 -0.4,0.5 -0.7,0.6c-0.3,0.1 -0.6,0.1 -0.9,-0.1c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c-0.1,-0.2 0,-0.5 0.2,-0.9l0,0zm0.3,0.3c-0.1,0.2 -0.2,0.4 -0.2,0.6c0,0.2 0.1,0.4 0.3,0.5c0.2,0.1 0.4,0.1 0.5,0c0.2,-0.1 0.3,-0.3 0.5,-0.5l0,0c0.1,-0.2 0.2,-0.4 0.2,-0.6c0,-0.2 -0.1,-0.4 -0.3,-0.5c-0.2,-0.1 -0.4,-0.1 -0.6,0c-0.1,0.1 -0.2,0.2 -0.4,0.5l0,0z" id="svg_67"/>
|
||||
<path class="st8" d="m169.4,235.9c0.1,0.1 0.3,0.1 0.4,0.1s0.2,-0.1 0.3,-0.2l0.3,0.2l0,0c-0.1,0.2 -0.3,0.3 -0.5,0.3c-0.3,0 -0.5,0 -0.7,-0.1c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c0,-0.3 0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.9,0.1c0.2,0.1 0.3,0.2 0.4,0.4c0.1,0.1 0.2,0.3 0.2,0.4l-0.3,0.5l-0.3,-0.2l0.1,-0.4c0,-0.1 -0.1,-0.1 -0.1,-0.2c-0.1,-0.1 -0.1,-0.1 -0.2,-0.2c-0.2,-0.1 -0.4,-0.1 -0.6,0c-0.2,0.1 -0.3,0.3 -0.4,0.5l0,0.1c-0.1,0.2 -0.2,0.4 -0.2,0.6c0,0.1 0.1,0.3 0.3,0.4z" id="svg_68"/>
|
||||
<path class="st8" d="m172.1,234l0.1,-0.3l0.7,0.4l-1,1.8l0.2,0.1l0.8,-0.3l-0.2,-0.1l0.1,-0.3l0.9,0.5l-0.1,0.3l-0.3,-0.1l-1,0.3l0.2,1.3l0.2,0.2l-0.1,0.2l-0.9,-0.5l0.1,-0.2l0.2,0.1l-0.2,-1l-0.3,-0.1l-0.4,0.7l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l1.4,-2.5l-0.1,-0.2z" id="svg_69"/>
|
||||
<path class="st8" d="m173.9,238.9c-0.3,-0.2 -0.5,-0.4 -0.5,-0.7c-0.1,-0.3 0,-0.6 0.2,-0.9l0,-0.1c0.2,-0.3 0.4,-0.5 0.7,-0.6c0.3,-0.1 0.6,-0.1 0.8,0.1c0.3,0.2 0.5,0.4 0.5,0.6c0,0.3 0,0.5 -0.2,0.8l-0.1,0.2l-1.4,-0.8l0,0c-0.1,0.2 -0.2,0.4 -0.1,0.6c0,0.2 0.1,0.3 0.3,0.4c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.2,0 0.3,0l0,0.3c-0.1,0 -0.3,0 -0.4,0c-0.2,0.2 -0.4,0.1 -0.5,0zm1,-2c-0.1,-0.1 -0.3,-0.1 -0.4,0c-0.2,0.1 -0.3,0.2 -0.4,0.3l0,0l1,0.6l0,-0.1c0.1,-0.2 0.1,-0.3 0.1,-0.5c0,0 -0.1,-0.2 -0.3,-0.3z" id="svg_70"/>
|
||||
<path class="st8" d="m176.2,237.7l0.1,-0.3l0.7,0.4l-0.1,0.3c0.1,-0.1 0.2,-0.1 0.4,-0.1c0.1,0 0.2,0 0.4,0.1c0,0 0.1,0 0.1,0.1c0,0 0.1,0 0.1,0.1l-0.3,0.3l-0.2,-0.1c-0.1,-0.1 -0.2,-0.1 -0.3,-0.1c-0.1,0 -0.2,0 -0.3,0.1l-0.7,1.2l0.3,0.2l-0.1,0.2l-1,-0.6l0.1,-0.2l0.4,0.1l0.9,-1.5l-0.5,-0.2z" id="svg_71"/>
|
||||
</g>
|
||||
</g>
|
||||
<g id="service"/>
|
||||
<g id="pods">
|
||||
<circle class="st29" cx="146.2" cy="119.5" r="20.9" id="svg_72"/>
|
||||
</g>
|
||||
<g id="IP"/>
|
||||
<g id="deployments">
|
||||
<g id="svg_73">
|
||||
<g id="svg_74">
|
||||
<circle class="st47" cx="177.4" cy="181.7" r="10.1" id="svg_75"/>
|
||||
<g id="svg_76">
|
||||
<g id="svg_77">
|
||||
<path class="st48" d="m180.9,176c1.9,1.2 3.2,3.3 3.2,5.8c0,3.7 -3,6.7 -6.7,6.7c-3.7,0 -6.7,-3 -6.7,-6.7c0,-2.9 1.9,-5.4 4.5,-6.4" id="svg_78"/>
|
||||
<g id="svg_79">
|
||||
<polygon class="st42" points="182.1,178.9 181.2,176.2 183.9,175.3 182.4,174.5 179.6,175.4 180.5,178.1 " id="svg_80"/>
|
||||
</g>
|
||||
<symbol
|
||||
id="master_x5F_level1_1_"
|
||||
viewBox="-68.6 -66.9 137.2 133.9">
|
||||
<g
|
||||
id="svg_1">
|
||||
<g
|
||||
id="svg_2">
|
||||
<line
|
||||
class="st0"
|
||||
x1="0"
|
||||
y1="-11.1"
|
||||
x2="0"
|
||||
y2="0.7"
|
||||
id="svg_3" />
|
||||
<line
|
||||
class="st0"
|
||||
x1="5.9"
|
||||
y1="-5.2"
|
||||
x2="-5.9"
|
||||
y2="-5.2"
|
||||
id="svg_4" />
|
||||
</g>
|
||||
</g>
|
||||
<g id="svg_81">
|
||||
<path class="st42" d="m174.8,185.1l0,-6.7l2.3,0c0.8,0 1.5,0.3 2,0.8c0.5,0.5 0.8,1.2 0.8,2l0,1.1c0,0.8 -0.3,1.5 -0.8,2c-0.5,0.5 -1.2,0.8 -2,0.8l-2.3,0zm1.3,-5.7l0,4.7l0.9,0c0.5,0 0.9,-0.2 1.1,-0.5c0.3,-0.3 0.4,-0.8 0.4,-1.3l0,-1.1c0,-0.5 -0.1,-0.9 -0.4,-1.3c-0.3,-0.3 -0.7,-0.5 -1.1,-0.5l-0.9,0z" id="svg_82"/>
|
||||
</g>
|
||||
<polygon
|
||||
class="st1"
|
||||
points="-29.2,-63.9 -65.6,-18.3 -52.6,38.6 0,63.9 52.6,38.6 65.6,-18.3 29.2,-63.9 "
|
||||
id="svg_5" />
|
||||
</g>
|
||||
</g>
|
||||
</symbol>
|
||||
<symbol
|
||||
id="node_high_level"
|
||||
viewBox="-81 -93 162 186.1">
|
||||
<polygon
|
||||
class="st2"
|
||||
points="-80,-46 -80,46 0,92 80,46 80,-46 0,-92 "
|
||||
id="svg_6" />
|
||||
<g
|
||||
id="Isolation_Mode_3_" />
|
||||
</symbol>
|
||||
<symbol
|
||||
id="node_x5F_empty"
|
||||
viewBox="-87.5 -100.6 175.1 201.1">
|
||||
<use
|
||||
xlink:href="#node_high_level"
|
||||
width="162"
|
||||
height="186.1"
|
||||
id="XMLID_201_"
|
||||
x="-81"
|
||||
y="-93"
|
||||
transform="matrix(1.0808,0,0,1.0808,-0.00003292006,-0.00003749943) " />
|
||||
<g
|
||||
id="svg_7">
|
||||
<polygon
|
||||
class="st3"
|
||||
points="76.8,-28.1 -14,-80.3 0,-88.3 76.7,-44.4 "
|
||||
id="svg_8" />
|
||||
<polygon
|
||||
class="st4"
|
||||
points="76.8,-28.1 32.1,-53.8 38.8,-66.1 76.7,-44.4 "
|
||||
id="svg_9" />
|
||||
</g>
|
||||
</symbol>
|
||||
<symbol
|
||||
id="node_x5F_new"
|
||||
viewBox="-87.6 -101 175.2 202">
|
||||
<polygon
|
||||
class="st5"
|
||||
points="0,-100 -86.6,-50 -86.6,50 0,100 86.6,50 86.6,-50 "
|
||||
id="svg_10" />
|
||||
<polygon
|
||||
class="st6"
|
||||
points="-86.6,-20.2 -86.6,-50 0,-100 25.8,-85.1 "
|
||||
id="svg_11" />
|
||||
<polygon
|
||||
class="st7"
|
||||
points="-40.8,-70.7 -32.9,-57 15.7,-85.1 0,-94.3 "
|
||||
id="svg_12" />
|
||||
<text
|
||||
transform="matrix(0.866,-0.5,-0.5,-0.866,-33.9256,-70.7388) "
|
||||
class="st8"
|
||||
font-size="11.3632px"
|
||||
font-family="'RobotoSlab-Regular'"
|
||||
id="svg_13">Docker</text>
|
||||
<text
|
||||
transform="matrix(0.866,-0.5,-0.5,-0.866,-76.0668,-46.4087) "
|
||||
class="st8"
|
||||
font-size="11.3632px"
|
||||
font-family="'RobotoSlab-Regular'"
|
||||
id="svg_14">Kubelet</text>
|
||||
</symbol>
|
||||
<g
|
||||
id="g1985">
|
||||
<title
|
||||
id="title1814">Layer 1</title>
|
||||
<g
|
||||
id="CLUSTER"
|
||||
transform="translate(-22)">
|
||||
<g
|
||||
id="XMLID_8_"
|
||||
class="st9">
|
||||
<g
|
||||
id="svg_15">
|
||||
<linearGradient
|
||||
id="SVGID_1_"
|
||||
gradientUnits="userSpaceOnUse"
|
||||
x1="27.995501"
|
||||
y1="185.5114"
|
||||
x2="342.4509"
|
||||
y2="185.5114">
|
||||
<stop
|
||||
offset="0"
|
||||
stop-color="#326DE6"
|
||||
id="stop1816" />
|
||||
<stop
|
||||
offset="1"
|
||||
stop-color="#10FFC6"
|
||||
id="stop1818" />
|
||||
</linearGradient>
|
||||
<polygon
|
||||
class="st10"
|
||||
points="28,229.4 59.1,92.9 185.2,32.2 311.3,92.9 342.5,229.4 255.2,338.8 115.3,338.8 "
|
||||
id="svg_16"
|
||||
style="fill:url(#SVGID_1_)" />
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="master"
|
||||
transform="translate(-24)">
|
||||
<g
|
||||
id="master_x5F_level1" />
|
||||
<use
|
||||
xlink:href="#master_x5F_level1_1_"
|
||||
width="137.2"
|
||||
height="133.89999"
|
||||
x="-68.599998"
|
||||
y="-66.900002"
|
||||
transform="matrix(0.4,0,0,-0.4,185.2213,187.4709)"
|
||||
id="svg_17" />
|
||||
</g>
|
||||
<g
|
||||
id="Node"
|
||||
transform="translate(-24)">
|
||||
<g
|
||||
id="Node_x5F_level3_x5F_1">
|
||||
<g
|
||||
id="Isolation_Mode" />
|
||||
</g>
|
||||
<polygon
|
||||
class="st16"
|
||||
points="224.6,160.7 189,140.1 189,98.9 224.6,78.4 260.3,98.9 260.3,140.1 "
|
||||
id="svg_18" />
|
||||
<polygon
|
||||
class="st13"
|
||||
points="189,127.8 189,140.1 224.6,160.7 235.2,154.5 "
|
||||
id="svg_19" />
|
||||
<polygon
|
||||
class="st7"
|
||||
points="207.8,148.6 211.1,143 231.1,154.5 224.6,158.3 "
|
||||
id="svg_20" />
|
||||
<g
|
||||
id="svg_21">
|
||||
<path
|
||||
class="st8"
|
||||
d="m 213.7,146.5 c 0.4,0.2 0.6,0.5 0.7,0.9 0.1,0.4 0,0.7 -0.2,1.1 l -0.2,0.4 c -0.2,0.4 -0.5,0.6 -0.9,0.7 -0.4,0.1 -0.7,0 -1.1,-0.2 l -1.2,-0.7 0.1,-0.2 0.4,0.1 1.3,-2.3 -0.3,-0.2 0.1,-0.3 0.3,0.2 z m -0.6,0 -1.3,2.3 0.5,0.3 c 0.3,0.2 0.5,0.2 0.8,0.1 0.3,-0.1 0.5,-0.3 0.6,-0.5 l 0.2,-0.4 c 0.2,-0.3 0.2,-0.5 0.2,-0.8 -0.1,-0.3 -0.2,-0.5 -0.5,-0.6 z"
|
||||
id="svg_22" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 214.3,149.2 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.9,0.1 0.3,0.2 0.5,0.4 0.5,0.7 0.1,0.3 0,0.6 -0.2,0.9 v 0 c -0.2,0.3 -0.4,0.5 -0.7,0.6 -0.3,0.1 -0.6,0.1 -0.9,-0.1 -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.2 0,-0.5 0.2,-0.9 z m 0.4,0.3 c -0.1,0.2 -0.2,0.4 -0.2,0.6 0,0.2 0.1,0.4 0.3,0.5 0.2,0.1 0.4,0.1 0.5,0 0.2,-0.1 0.3,-0.3 0.5,-0.5 v 0 c 0.1,-0.2 0.2,-0.4 0.2,-0.6 0,-0.2 -0.1,-0.4 -0.3,-0.5 -0.2,-0.1 -0.4,-0.1 -0.6,0 -0.2,0.1 -0.2,0.2 -0.4,0.5 z"
|
||||
id="svg_23" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 217.1,151.9 c 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,-0.1 0.3,-0.2 l 0.4,0.2 v 0 c -0.1,0.2 -0.3,0.3 -0.5,0.3 -0.3,0 -0.5,0 -0.7,-0.1 -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.9,0.1 0.2,0.1 0.3,0.2 0.4,0.4 0.1,0.1 0.2,0.3 0.2,0.4 l -0.3,0.5 -0.3,-0.2 0.1,-0.4 c 0,-0.1 -0.1,-0.1 -0.1,-0.2 -0.1,-0.1 -0.1,-0.1 -0.2,-0.2 -0.2,-0.1 -0.4,-0.1 -0.6,0 -0.2,0.1 -0.3,0.3 -0.4,0.5 v 0.1 c -0.1,0.2 -0.2,0.4 -0.2,0.6 -0.1,0.2 0,0.3 0.2,0.4 z"
|
||||
id="svg_24" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 219.7,150 0.1,-0.3 0.7,0.4 -1,1.8 0.2,0.1 0.8,-0.3 -0.2,-0.1 0.1,-0.3 0.9,0.5 -0.1,0.3 -0.3,-0.1 -1,0.3 0.2,1.3 0.2,0.2 -0.1,0.2 -0.9,-0.5 0.1,-0.2 0.2,0.1 -0.2,-1 -0.3,-0.1 -0.1,0.7 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
|
||||
id="svg_25" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 221.5,154.9 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.1,0.2 -0.3,0.1 -0.5,0 z m 1.1,-1.9 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.2,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 -0.1,-0.1 -0.2,-0.2 -0.3,-0.3 z"
|
||||
id="svg_26" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 223.9,153.7 0.1,-0.3 0.7,0.4 -0.1,0.3 c 0.1,-0.1 0.2,-0.1 0.4,-0.1 0.1,0 0.2,0 0.4,0.1 0,0 0.1,0 0.1,0.1 0,0 0.1,0 0.1,0.1 l -0.3,0.3 -0.2,-0.1 c -0.1,-0.1 -0.2,-0.1 -0.3,-0.1 -0.1,0 -0.2,0 -0.3,0.1 l -0.7,1.2 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 0.9,-1.5 z"
|
||||
id="svg_27" />
|
||||
</g>
|
||||
<g
|
||||
id="svg_28">
|
||||
<path
|
||||
class="st8"
|
||||
d="m 194.3,135.3 0.1,-0.3 0.7,0.4 -1,1.8 0.2,0.1 0.8,-0.3 -0.2,-0.1 0.1,-0.3 0.9,0.5 -0.1,0.3 -0.3,-0.1 -1,0.3 0.2,1.3 0.2,0.2 -0.1,0.2 -0.9,-0.5 0.1,-0.2 0.2,0.1 -0.2,-1 -0.3,-0.1 -0.4,0.7 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
|
||||
id="svg_29" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 196.8,140.2 c -0.1,0.1 -0.3,0.1 -0.4,0.1 -0.1,0 -0.3,0 -0.4,-0.1 -0.2,-0.1 -0.4,-0.3 -0.4,-0.5 0,-0.2 0,-0.5 0.2,-0.8 l 0.6,-1 -0.2,-0.2 0.1,-0.3 0.2,0.1 0.4,0.2 -0.7,1.3 c -0.1,0.2 -0.2,0.4 -0.2,0.5 0,0.1 0.1,0.2 0.2,0.3 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,-0.1 l 0.7,-1.2 -0.3,-0.2 0.1,-0.3 0.3,0.2 0.4,0.2 -1.1,1.8 0.2,0.2 -0.1,0.2 -0.6,-0.3 z"
|
||||
id="svg_30" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 200.1,141.2 c -0.2,0.3 -0.4,0.5 -0.7,0.6 -0.3,0.1 -0.5,0.1 -0.8,-0.1 -0.1,-0.1 -0.2,-0.2 -0.3,-0.3 -0.1,-0.1 -0.1,-0.2 -0.1,-0.4 l -0.2,0.3 -0.3,-0.2 1.6,-2.8 -0.3,-0.2 0.1,-0.3 0.7,0.4 -0.7,1.2 c 0.1,-0.1 0.2,-0.1 0.4,-0.1 0.1,0 0.3,0 0.4,0.1 0.3,0.2 0.4,0.4 0.4,0.7 0.1,0.4 0,0.7 -0.2,1.1 z m -0.4,-0.3 c 0.1,-0.2 0.2,-0.5 0.2,-0.7 0,-0.2 -0.1,-0.4 -0.3,-0.5 -0.1,-0.1 -0.2,-0.1 -0.4,-0.1 -0.1,0 -0.2,0.1 -0.3,0.1 l -0.5,0.9 c 0,0.1 0,0.2 0.1,0.3 0.1,0.1 0.1,0.2 0.3,0.3 0.2,0.1 0.4,0.1 0.5,0 0.2,0.1 0.3,0 0.4,-0.3 z"
|
||||
id="svg_31" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 200.9,143 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.2,0.2 -0.3,0.1 -0.5,0 z m 1,-2 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.2,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 0,0 -0.1,-0.2 -0.3,-0.3 z"
|
||||
id="svg_32" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 203.7,140.8 0.1,-0.3 0.7,0.4 -1.6,2.8 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
|
||||
id="svg_33" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 204.3,145 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 -0.1,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.1,0.1 -0.3,0.1 -0.5,0 z m 1.1,-2 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.2,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 -0.1,-0.1 -0.2,-0.2 -0.3,-0.3 z"
|
||||
id="svg_34" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 207.8,143.4 -0.3,0.5 0.4,0.2 -0.2,0.3 -0.4,-0.2 -0.8,1.3 c -0.1,0.1 -0.1,0.2 -0.1,0.2 0,0.1 0.1,0.1 0.1,0.2 0,0 0.1,0 0.1,0.1 0,0 0.1,0 0.1,0 l -0.1,0.3 c -0.1,0 -0.1,0 -0.2,0 -0.1,0 -0.2,-0.1 -0.2,-0.1 -0.2,-0.1 -0.3,-0.2 -0.3,-0.4 0,-0.2 0,-0.3 0.1,-0.5 l 0.8,-1.3 -0.3,-0.2 0.2,-0.3 0.3,0.2 0.3,-0.5 z"
|
||||
id="svg_35" />
|
||||
</g>
|
||||
<polygon
|
||||
class="st16"
|
||||
points="182.3,140.1 146.6,160.7 111,140.1 111,99 146.6,78.4 182.3,99 "
|
||||
id="svg_36" />
|
||||
<polygon
|
||||
class="st13"
|
||||
points="136,154.5 146.6,160.7 182.3,140.1 182.2,127.9 "
|
||||
id="svg_37" />
|
||||
<polygon
|
||||
class="st7"
|
||||
points="163.4,148.6 160.2,143 180.2,131.4 180.2,138.9 "
|
||||
id="svg_38" />
|
||||
<g
|
||||
id="svg_39">
|
||||
<path
|
||||
class="st8"
|
||||
d="m 164.6,142.5 c 0.4,-0.2 0.7,-0.3 1.1,-0.2 0.4,0.1 0.6,0.3 0.9,0.7 l 0.2,0.4 c 0.2,0.4 0.3,0.7 0.2,1.1 -0.1,0.4 -0.3,0.7 -0.7,0.9 l -1.2,0.7 -0.1,-0.2 0.3,-0.2 -1.3,-2.3 -0.4,0.1 -0.1,-0.3 0.3,-0.2 z m -0.3,0.6 1.3,2.3 0.5,-0.3 c 0.3,-0.2 0.4,-0.4 0.5,-0.6 0.1,-0.3 0,-0.5 -0.2,-0.8 l -0.2,-0.4 c -0.2,-0.3 -0.4,-0.5 -0.6,-0.5 -0.3,-0.1 -0.5,-0.1 -0.8,0.1 z"
|
||||
id="svg_40" />
|
||||
<path
|
||||
class="st8"
|
||||
d="m 167.3,143.4 c -0.2,-0.3 -0.2,-0.6 -0.2,-0.9 0.1,-0.3 0.2,-0.5 0.5,-0.7 0.3,-0.2 0.6,-0.2 0.9,-0.1 0.3,0.1 0.5,0.3 0.7,0.6 v 0 c 0.2,0.3 0.2,0.6 0.2,0.9 -0.1,0.3 -0.2,0.5 -0.5,0.7 -0.3,0.2 -0.6,0.2 -0.9,0.1 -0.3,0 -0.5,-0.3 -0.7,-0.6 z m 0.4,-0.2 c 0.1,0.2 0.3,0.4 0.5,0.5 0.2,0.1 0.4,0.1 0.6,0 0.2,-0.1 0.3,-0.3 0.3,-0.5 0,-0.2 0,-0.4 -0.2,-0.6 v 0 c -0.1,-0.2 -0.3,-0.4 -0.5,-0.5 -0.2,-0.1 -0.4,-0.1 -0.6,0 -0.2,0.1 -0.3,0.3 -0.3,0.5 0,0.2 0,0.3 0.2,0.6 z"
id="svg_41" />
<path
class="st8"
d="m 171,142.3 c 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.1,-0.1 0,-0.3 0,-0.4 l 0.3,-0.2 v 0 c 0.1,0.2 0.1,0.4 0,0.6 -0.1,0.2 -0.2,0.4 -0.5,0.6 -0.3,0.2 -0.6,0.2 -0.9,0.1 -0.3,-0.1 -0.5,-0.3 -0.7,-0.6 V 142 c -0.2,-0.3 -0.2,-0.6 -0.2,-0.9 0,-0.3 0.2,-0.5 0.5,-0.7 0.2,-0.1 0.3,-0.2 0.5,-0.2 0.2,0 0.3,0 0.5,0 l 0.3,0.5 -0.3,0.2 -0.3,-0.3 c -0.1,0 -0.2,0 -0.2,0 -0.1,0 -0.2,0 -0.3,0.1 -0.2,0.1 -0.3,0.3 -0.3,0.5 0,0.2 0.1,0.4 0.2,0.6 v 0.1 c 0.1,0.2 0.3,0.4 0.4,0.5 0.3,0.1 0.5,0.1 0.7,-0.1 z"
id="svg_42" />
<path
class="st8"
d="m 170.6,139.1 -0.1,-0.3 0.7,-0.4 1,1.8 0.2,-0.1 0.1,-0.8 -0.2,0.1 -0.1,-0.3 0.9,-0.5 0.1,0.3 -0.2,0.2 -0.2,1 1.2,0.5 0.3,-0.1 0.1,0.2 -0.9,0.5 -0.1,-0.2 0.2,-0.1 -1,-0.4 -0.3,0.1 0.4,0.7 0.4,-0.1 0.1,0.2 -1,0.6 -0.1,-0.2 0.3,-0.2 -1.4,-2.6 z"
id="svg_43" />
<path
class="st8"
d="m 175.8,140 c -0.3,0.2 -0.6,0.2 -0.9,0.1 -0.3,-0.1 -0.5,-0.3 -0.7,-0.6 l -0.1,-0.1 c -0.2,-0.3 -0.2,-0.6 -0.2,-0.9 0.1,-0.3 0.2,-0.5 0.5,-0.7 0.3,-0.2 0.6,-0.2 0.8,-0.1 0.2,0.1 0.5,0.3 0.6,0.6 l 0.1,0.2 -1.4,0.8 v 0 c 0.1,0.2 0.3,0.3 0.4,0.4 0.2,0.1 0.4,0.1 0.5,0 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.1,-0.1 0.1,-0.2 0.2,-0.3 l 0.3,0.2 c 0,0.1 -0.1,0.2 -0.2,0.4 0.1,0.1 -0.1,0.2 -0.2,0.3 z m -1.2,-1.9 c -0.1,0.1 -0.2,0.2 -0.2,0.4 0,0.2 0,0.3 0.1,0.5 v 0 l 1,-0.6 v -0.1 c -0.1,-0.2 -0.2,-0.3 -0.3,-0.3 -0.3,0 -0.4,0 -0.6,0.1 z"
id="svg_44" />
<path
class="st8"
d="m 175.8,137.4 -0.1,-0.3 0.7,-0.4 0.2,0.3 c 0,-0.1 0,-0.3 0.1,-0.4 0.1,-0.1 0.1,-0.2 0.3,-0.3 0,0 0.1,0 0.1,0 0,0 0.1,0 0.1,0 l 0.2,0.4 -0.2,0.1 c -0.1,0.1 -0.2,0.1 -0.2,0.2 0,0.1 -0.1,0.2 0,0.3 l 0.7,1.2 0.4,-0.1 0.1,0.2 -1,0.6 -0.1,-0.2 0.3,-0.2 -0.9,-1.5 z"
id="svg_45" />
</g>
<g
id="svg_46">
<path
class="st8"
d="m 145.1,153.8 -0.1,-0.3 0.7,-0.4 1,1.8 0.2,-0.1 0.1,-0.8 -0.2,0.1 -0.1,-0.3 0.9,-0.5 0.1,0.3 -0.2,0.2 -0.2,1 1.2,0.5 0.3,-0.1 0.1,0.2 -0.9,0.5 -0.1,-0.2 0.2,-0.1 -1,-0.4 -0.3,0.1 0.4,0.7 0.4,-0.1 0.1,0.2 -1,0.6 -0.1,-0.2 0.3,-0.2 -1.4,-2.5 z"
id="svg_47" />
<path
class="st8"
d="m 150.6,154 c 0,0.2 0,0.3 -0.1,0.4 -0.1,0.1 -0.2,0.2 -0.3,0.3 -0.2,0.1 -0.5,0.2 -0.7,0.1 -0.2,-0.1 -0.4,-0.3 -0.6,-0.6 l -0.6,-1 -0.3,0.1 -0.1,-0.3 0.2,-0.1 0.4,-0.2 0.7,1.3 c 0.1,0.2 0.3,0.4 0.4,0.4 0.1,0 0.2,0 0.4,-0.1 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.1,-0.1 0.1,-0.2 0.1,-0.4 l -0.7,-1.2 -0.3,0.1 -0.1,-0.3 0.3,-0.2 0.4,-0.2 1.1,1.8 0.3,-0.1 0.1,0.2 -0.6,0.3 z"
id="svg_48" />
<path
class="st8"
d="m 153.1,151.7 c 0.2,0.3 0.2,0.6 0.2,0.9 0,0.3 -0.2,0.5 -0.4,0.6 -0.1,0.1 -0.3,0.1 -0.4,0.1 -0.1,0 -0.3,0 -0.4,-0.1 l 0.1,0.3 -0.3,0.2 -1.6,-2.8 -0.4,0.1 -0.1,-0.3 0.7,-0.4 0.7,1.2 c 0,-0.1 0.1,-0.3 0.1,-0.4 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.3,-0.2 0.5,-0.2 0.8,0 0.3,0.2 0.5,0.5 0.7,0.9 z m -0.4,0.1 c -0.1,-0.2 -0.3,-0.4 -0.5,-0.5 -0.2,-0.1 -0.4,-0.1 -0.5,0 -0.1,0.1 -0.2,0.2 -0.3,0.3 0,0.1 -0.1,0.2 -0.1,0.3 l 0.5,0.9 c 0.1,0.1 0.2,0.1 0.3,0.1 0.1,0 0.2,0 0.4,-0.1 0.2,-0.1 0.3,-0.2 0.3,-0.4 0.1,-0.1 0.1,-0.3 -0.1,-0.6 z"
id="svg_49" />
<path
class="st8"
d="m 155.1,151.9 c -0.3,0.2 -0.6,0.2 -0.9,0.1 -0.3,-0.1 -0.5,-0.3 -0.7,-0.6 l -0.1,-0.1 c -0.2,-0.3 -0.2,-0.6 -0.2,-0.9 0.1,-0.3 0.2,-0.5 0.5,-0.7 0.3,-0.2 0.6,-0.2 0.8,-0.1 0.2,0.1 0.5,0.3 0.6,0.6 l 0.1,0.2 -1.4,0.8 v 0 c 0.1,0.2 0.3,0.3 0.4,0.4 0.2,0.1 0.4,0.1 0.5,0 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.1,-0.1 0.1,-0.2 0.2,-0.3 l 0.3,0.2 c 0,0.1 -0.1,0.2 -0.2,0.4 0.1,0.1 0,0.2 -0.2,0.3 z m -1.2,-1.9 c -0.1,0.1 -0.2,0.2 -0.2,0.4 0,0.2 0,0.3 0.1,0.5 v 0 l 1,-0.6 v -0.1 c -0.1,-0.2 -0.2,-0.3 -0.3,-0.3 -0.3,0 -0.4,0 -0.6,0.1 z"
id="svg_50" />
<path
class="st8"
d="m 154.6,148.4 -0.1,-0.3 0.7,-0.4 1.6,2.8 0.4,-0.1 0.1,0.2 -1,0.6 -0.1,-0.2 0.3,-0.2 -1.4,-2.5 z"
id="svg_51" />
<path
class="st8"
d="m 158.6,149.9 c -0.3,0.2 -0.6,0.2 -0.9,0.1 -0.3,-0.1 -0.5,-0.3 -0.7,-0.6 l -0.1,-0.1 c -0.2,-0.3 -0.2,-0.6 -0.2,-0.9 0.1,-0.3 0.2,-0.5 0.5,-0.7 0.3,-0.2 0.6,-0.2 0.8,-0.1 0.2,0.1 0.5,0.3 0.6,0.6 l 0.1,0.2 -1.4,0.8 v 0 c 0.1,0.2 0.3,0.3 0.4,0.4 0.2,0.1 0.4,0.1 0.5,0 0.1,-0.1 0.2,-0.2 0.3,-0.3 0.1,-0.1 0.1,-0.2 0.2,-0.3 l 0.3,0.2 c 0,0.1 -0.1,0.2 -0.2,0.4 0.1,0.1 -0.1,0.2 -0.2,0.3 z m -1.2,-1.9 c -0.1,0.1 -0.2,0.2 -0.2,0.4 0,0.2 0,0.3 0.1,0.5 v 0 l 1,-0.6 v -0.1 c -0.1,-0.2 -0.2,-0.3 -0.3,-0.3 -0.3,0 -0.5,0 -0.6,0.1 z"
id="svg_52" />
<path
class="st8"
d="m 158.9,146.1 0.3,0.5 0.4,-0.2 0.2,0.3 -0.4,0.2 0.8,1.3 c 0.1,0.1 0.1,0.2 0.2,0.2 0.1,0 0.1,0 0.2,0 0,0 0.1,0 0.1,-0.1 0,0 0.1,-0.1 0.1,-0.1 l 0.2,0.2 c 0,0 -0.1,0.1 -0.1,0.2 -0.1,0.1 -0.1,0.1 -0.2,0.1 -0.2,0.1 -0.3,0.1 -0.5,0.1 -0.1,0 -0.3,-0.2 -0.4,-0.4 l -0.7,-1.3 -0.3,0.2 -0.2,-0.3 0.3,-0.2 -0.3,-0.5 z"
id="svg_53" />
</g>
<polygon
class="st16"
points="146.2,296.6 110.6,276.1 110.6,234.9 146.2,214.4 181.9,234.9 181.9,276.1 "
id="svg_54" />
<polygon
class="st13"
points="135.6,220.5 146.2,214.4 181.9,234.9 181.8,247.2 "
id="svg_55" />
<polygon
class="st7"
points="163,226.4 159.8,232.1 179.8,243.6 179.8,236.1 "
id="svg_56" />
<g
id="svg_57">
<path
class="st8"
d="m 145.8,218.8 0.1,-0.3 0.7,0.4 -1,1.8 0.2,0.1 0.8,-0.3 -0.2,-0.1 0.1,-0.3 0.9,0.5 -0.1,0.3 -0.3,-0.1 -1,0.3 0.2,1.3 0.2,0.2 -0.1,0.2 -0.9,-0.5 0.1,-0.2 0.2,0.1 -0.2,-1 -0.3,-0.1 -0.4,0.7 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
id="svg_58" />
<path
class="st8"
d="m 148.4,223.7 c -0.1,0.1 -0.3,0.1 -0.4,0.1 -0.1,0 -0.3,0 -0.4,-0.1 -0.2,-0.1 -0.4,-0.3 -0.4,-0.5 0,-0.2 0,-0.5 0.2,-0.8 l 0.6,-1 -0.2,-0.2 0.1,-0.3 0.2,0.1 0.4,0.2 -0.7,1.3 c -0.1,0.2 -0.2,0.4 -0.2,0.5 0,0.1 0.1,0.2 0.2,0.3 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,-0.1 l 0.7,-1.2 -0.3,-0.2 0.1,-0.3 0.3,0.2 0.4,0.2 -1.1,1.8 0.2,0.2 -0.1,0.2 -0.6,-0.3 z"
id="svg_59" />
<path
class="st8"
d="m 151.7,224.7 c -0.2,0.3 -0.4,0.5 -0.7,0.6 -0.3,0.1 -0.5,0.1 -0.8,-0.1 -0.1,-0.1 -0.2,-0.2 -0.3,-0.3 -0.1,-0.1 -0.1,-0.2 -0.1,-0.4 l -0.2,0.3 -0.3,-0.2 1.6,-2.8 -0.3,-0.2 0.1,-0.3 0.7,0.4 -0.7,1.2 c 0.1,-0.1 0.2,-0.1 0.4,-0.1 0.1,0 0.3,0 0.4,0.1 0.3,0.2 0.4,0.4 0.4,0.7 0.1,0.4 0,0.7 -0.2,1.1 z m -0.4,-0.3 c 0.1,-0.2 0.2,-0.5 0.2,-0.7 0,-0.2 -0.1,-0.4 -0.3,-0.5 -0.1,-0.1 -0.2,-0.1 -0.4,-0.1 -0.1,0 -0.2,0.1 -0.3,0.1 l -0.5,0.9 c 0,0.1 0,0.2 0.1,0.3 0.1,0.1 0.1,0.2 0.3,0.3 0.2,0.1 0.4,0.1 0.5,0 0.1,0.1 0.3,-0.1 0.4,-0.3 z"
id="svg_60" />
<path
class="st8"
d="m 152.5,226.5 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 l 0.1,-0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.3,0.1 -0.5,0.1 -0.6,0 z m 1,-2 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.2,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 0,-0.2 -0.1,-0.2 -0.3,-0.3 z"
id="svg_61" />
<path
class="st8"
d="m 155.3,224.3 0.1,-0.3 0.7,0.4 -1.6,2.8 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
id="svg_62" />
<path
class="st8"
d="m 155.9,228.5 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.1,0.1 -0.3,0.1 -0.5,0 z m 1.1,-2 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.1,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 -0.1,-0.1 -0.2,-0.2 -0.3,-0.3 z"
id="svg_63" />
<path
class="st8"
d="m 159.4,226.9 -0.3,0.5 0.4,0.2 -0.2,0.3 -0.4,-0.2 -0.8,1.3 c -0.1,0.1 -0.1,0.2 -0.1,0.2 0,0.1 0.1,0.1 0.1,0.2 0,0 0.1,0 0.1,0.1 0,0 0.1,0 0.1,0 l -0.1,0.3 c -0.1,0 -0.1,0 -0.2,0 -0.1,0 -0.2,-0.1 -0.2,-0.1 -0.2,-0.1 -0.3,-0.2 -0.3,-0.4 0,-0.2 0,-0.3 0.1,-0.5 l 0.8,-1.3 -0.3,-0.2 0.2,-0.3 0.3,0.2 0.3,-0.5 z"
id="svg_64" />
</g>
<g
id="svg_65">
<path
class="st8"
d="m 166.1,230.5 c 0.4,0.2 0.6,0.5 0.7,0.9 0.1,0.4 0,0.7 -0.2,1.1 l -0.2,0.4 c -0.2,0.4 -0.5,0.6 -0.9,0.7 -0.4,0.1 -0.7,0 -1.1,-0.2 l -1.2,-0.7 0.1,-0.2 0.4,0.1 1.3,-2.3 -0.3,-0.2 0.1,-0.3 0.3,0.2 z m -0.7,0 -1.3,2.3 0.5,0.3 c 0.3,0.2 0.5,0.2 0.8,0.1 0.3,-0.1 0.5,-0.3 0.6,-0.5 l 0.2,-0.4 c 0.2,-0.3 0.2,-0.5 0.2,-0.8 -0.1,-0.3 -0.2,-0.5 -0.5,-0.6 z"
id="svg_66" />
<path
class="st8"
d="m 166.7,233.2 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.9,0.1 0.3,0.2 0.5,0.4 0.5,0.7 0.1,0.3 0,0.6 -0.2,0.9 v 0 c -0.2,0.3 -0.4,0.5 -0.7,0.6 -0.3,0.1 -0.6,0.1 -0.9,-0.1 -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 -0.1,-0.2 0,-0.5 0.2,-0.9 z m 0.3,0.3 c -0.1,0.2 -0.2,0.4 -0.2,0.6 0,0.2 0.1,0.4 0.3,0.5 0.2,0.1 0.4,0.1 0.5,0 0.2,-0.1 0.3,-0.3 0.5,-0.5 v 0 c 0.1,-0.2 0.2,-0.4 0.2,-0.6 0,-0.2 -0.1,-0.4 -0.3,-0.5 -0.2,-0.1 -0.4,-0.1 -0.6,0 -0.1,0.1 -0.2,0.2 -0.4,0.5 z"
id="svg_67" />
<path
class="st8"
d="m 169.4,235.9 c 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,-0.1 0.3,-0.2 l 0.3,0.2 v 0 c -0.1,0.2 -0.3,0.3 -0.5,0.3 -0.3,0 -0.5,0 -0.7,-0.1 -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 0,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.9,0.1 0.2,0.1 0.3,0.2 0.4,0.4 0.1,0.1 0.2,0.3 0.2,0.4 l -0.3,0.5 -0.3,-0.2 0.1,-0.4 c 0,-0.1 -0.1,-0.1 -0.1,-0.2 -0.1,-0.1 -0.1,-0.1 -0.2,-0.2 -0.2,-0.1 -0.4,-0.1 -0.6,0 -0.2,0.1 -0.3,0.3 -0.4,0.5 v 0.1 c -0.1,0.2 -0.2,0.4 -0.2,0.6 0,0.1 0.1,0.3 0.3,0.4 z"
id="svg_68" />
<path
class="st8"
d="m 172.1,234 0.1,-0.3 0.7,0.4 -1,1.8 0.2,0.1 0.8,-0.3 -0.2,-0.1 0.1,-0.3 0.9,0.5 -0.1,0.3 -0.3,-0.1 -1,0.3 0.2,1.3 0.2,0.2 -0.1,0.2 -0.9,-0.5 0.1,-0.2 0.2,0.1 -0.2,-1 -0.3,-0.1 -0.4,0.7 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 1.4,-2.5 z"
id="svg_69" />
<path
class="st8"
d="m 173.9,238.9 c -0.3,-0.2 -0.5,-0.4 -0.5,-0.7 -0.1,-0.3 0,-0.6 0.2,-0.9 v -0.1 c 0.2,-0.3 0.4,-0.5 0.7,-0.6 0.3,-0.1 0.6,-0.1 0.8,0.1 0.3,0.2 0.5,0.4 0.5,0.6 0,0.3 0,0.5 -0.2,0.8 l -0.1,0.2 -1.4,-0.8 v 0 c -0.1,0.2 -0.2,0.4 -0.1,0.6 0,0.2 0.1,0.3 0.3,0.4 0.1,0.1 0.3,0.1 0.4,0.1 0.1,0 0.2,0 0.3,0 v 0.3 c -0.1,0 -0.3,0 -0.4,0 -0.2,0.2 -0.4,0.1 -0.5,0 z m 1,-2 c -0.1,-0.1 -0.3,-0.1 -0.4,0 -0.2,0.1 -0.3,0.2 -0.4,0.3 v 0 l 1,0.6 v -0.1 c 0.1,-0.2 0.1,-0.3 0.1,-0.5 0,0 -0.1,-0.2 -0.3,-0.3 z"
id="svg_70" />
<path
class="st8"
d="m 176.2,237.7 0.1,-0.3 0.7,0.4 -0.1,0.3 c 0.1,-0.1 0.2,-0.1 0.4,-0.1 0.1,0 0.2,0 0.4,0.1 0,0 0.1,0 0.1,0.1 0,0 0.1,0 0.1,0.1 l -0.3,0.3 -0.2,-0.1 c -0.1,-0.1 -0.2,-0.1 -0.3,-0.1 -0.1,0 -0.2,0 -0.3,0.1 l -0.7,1.2 0.3,0.2 -0.1,0.2 -1,-0.6 0.1,-0.2 0.4,0.1 0.9,-1.5 z"
id="svg_71" />
</g>
</g>
<g
id="service" />
<g
id="pods"
transform="translate(-24)">
<circle
class="st29"
cx="146.2"
cy="119.5"
r="20.9"
id="svg_72" />
</g>
<g
id="IP" />
<g
id="deployments"
transform="translate(-22)">
<g
id="svg_73">
<g
id="svg_74">
<circle
class="st47"
cx="177.39999"
cy="181.7"
r="10.1"
id="svg_75" />
<g
id="svg_76">
<g
id="svg_77">
<path
class="st48"
d="m 180.9,176 c 1.9,1.2 3.2,3.3 3.2,5.8 0,3.7 -3,6.7 -6.7,6.7 -3.7,0 -6.7,-3 -6.7,-6.7 0,-2.9 1.9,-5.4 4.5,-6.4"
id="svg_78" />
<g
id="svg_79">
<polygon
class="st42"
points="182.4,174.5 179.6,175.4 180.5,178.1 182.1,178.9 181.2,176.2 183.9,175.3 "
id="svg_80" />
</g>
</g>
</g>
<g
id="svg_81">
<path
class="st42"
d="m 174.8,185.1 v -6.7 h 2.3 c 0.8,0 1.5,0.3 2,0.8 0.5,0.5 0.8,1.2 0.8,2 v 1.1 c 0,0.8 -0.3,1.5 -0.8,2 -0.5,0.5 -1.2,0.8 -2,0.8 z m 1.3,-5.7 v 4.7 h 0.9 c 0.5,0 0.9,-0.2 1.1,-0.5 0.3,-0.3 0.4,-0.8 0.4,-1.3 v -1.1 c 0,-0.5 -0.1,-0.9 -0.4,-1.3 -0.3,-0.3 -0.7,-0.5 -1.1,-0.5 z"
id="svg_82" />
</g>
</g>
</g>
</g>
<g
id="containers_x2F_volumes"
transform="translate(-24)">
<g
id="svg_83">
<polygon
class="st50"
points="155.9,113.9 146.2,119.5 136.5,113.9 146.2,108.3 "
id="svg_84" />
<polygon
class="st50"
points="146.2,119.5 146.2,130.7 136.5,125.1 136.5,113.9 "
id="svg_85" />
<polygon
class="st50"
points="155.9,113.9 155.9,125.1 146.2,130.7 146.2,119.5 "
id="svg_86" />
</g>
</g>
<g
id="labels_x2F_selectors" />
<g
id="description"
transform="translate(-22)">
<line
class="st62"
x1="360"
y1="203.89999"
x2="188.60001"
y2="203.89999"
id="svg_113" />
<line
class="st62"
x1="360"
y1="181.7"
x2="181.5"
y2="181.7"
id="svg_125" />
<line
class="st62"
x1="360"
y1="229"
x2="159.39999"
y2="229"
id="svg_126" />
<line
class="st62"
x1="360"
y1="101.5"
x2="242.8"
y2="101.5"
id="svg_127" />
<line
class="st62"
x1="360"
y1="123.7"
x2="150.2"
y2="123.7"
id="svg_145" />
<g
id="svg_146" />
<text
xml:space="preserve"
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
x="361.82697"
y="104.54287"
id="text1997"><tspan
id="tspan1995"
x="361.82697"
y="104.54287">Node</tspan></text>
<text
xml:space="preserve"
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
x="361.24179"
y="126.47728"
id="text2001"><tspan
id="tspan1999"
x="361.24179"
y="126.47728">containerized app</tspan></text>
<text
xml:space="preserve"
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
x="361.82697"
y="185.326"
id="text2005"><tspan
id="tspan2003"
x="361.82697"
y="185.326">Deployment</tspan></text>
<text
xml:space="preserve"
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
x="361.60068"
y="207.72804"
id="text2009"><tspan
id="tspan2007"
x="361.60068"
y="207.72804">Control Plane</tspan></text>
<text
xml:space="preserve"
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
x="360.92181"
y="232.16661"
id="text2013"><tspan
id="tspan2011"
x="360.92181"
y="232.16661">node processes</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Bold';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal"
x="113.36793"
y="350.73907"
id="text2017"><tspan
id="tspan2015"
x="113.36793"
y="350.73907">Kubernetes Cluster</tspan></text>
</g>
<g
id="Layer_14" />
</g>
<g id="containers_x2F_volumes">
<g id="svg_83">
<polygon class="st50" points="136.5,113.9 146.2,108.3 155.9,113.9 146.2,119.5 " id="svg_84"/>
<polygon class="st50" points="136.5,125.1 136.5,113.9 146.2,119.5 146.2,130.7 " id="svg_85"/>
<polygon class="st50" points="146.2,130.7 146.2,119.5 155.9,113.9 155.9,125.1 " id="svg_86"/>
</g>
</g>
<g id="labels_x2F_selectors"/>
<g id="description">
<g id="svg_87">
<path class="st42" d="m373.8,200.8l0,0l-2.8,6.8l-0.8,0l-2.8,-6.8l0,0l0.1,3.5l0,2.5l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-6.8l-1,-0.2l0,-0.7l1,0l1.5,0l2.7,6.9l0,0l2.7,-6.9l2.4,0l0,0.7l-1,0.2l0,6.7l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-2.5l0.2,-3.4z" id="svg_88"/>
<path class="st42" d="m380.8,207.6c0,-0.2 -0.1,-0.3 -0.1,-0.5s0,-0.3 0,-0.4c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.7,0.3 -1.1,0.3c-0.7,0 -1.2,-0.2 -1.5,-0.5s-0.5,-0.8 -0.5,-1.4c0,-0.6 0.2,-1.1 0.7,-1.4s1.2,-0.5 2,-0.5l1.2,0l0,-0.7c0,-0.4 -0.1,-0.7 -0.4,-0.9s-0.6,-0.3 -1,-0.3c-0.3,0 -0.5,0 -0.8,0.1s-0.4,0.2 -0.5,0.3l-0.1,0.7l-0.9,0l0,-1.2c0.3,-0.2 0.6,-0.4 1,-0.6s0.9,-0.2 1.3,-0.2c0.7,0 1.3,0.2 1.7,0.6s0.7,0.9 0.7,1.6l0,3.1c0,0.1 0,0.2 0,0.2s0,0.2 0,0.2l0.5,0.1l0,0.7l-1.4,0zm-1.8,-0.8c0.4,0 0.7,-0.1 1,-0.3s0.5,-0.4 0.7,-0.7l0,-1l-1.2,0c-0.5,0 -0.8,0.1 -1.1,0.3s-0.4,0.5 -0.4,0.8c0,0.3 0.1,0.5 0.3,0.6s0.3,0.3 0.7,0.3z" id="svg_89"/>
<path class="st42" d="m388.2,203.3l-0.9,0l-0.2,-0.8c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.4,0 -0.7,0.1 -0.9,0.3s-0.3,0.4 -0.3,0.7c0,0.2 0.1,0.4 0.3,0.6s0.5,0.3 1.1,0.4c0.8,0.2 1.4,0.4 1.8,0.7s0.6,0.7 0.6,1.2c0,0.6 -0.2,1 -0.7,1.4s-1,0.5 -1.8,0.5c-0.5,0 -0.9,-0.1 -1.3,-0.2s-0.7,-0.3 -1,-0.5l0,-1.4l0.9,0l0.2,0.8c0.1,0.1 0.3,0.2 0.5,0.3s0.5,0.1 0.7,0.1c0.4,0 0.7,-0.1 1,-0.2s0.3,-0.4 0.3,-0.7c0,-0.3 -0.1,-0.5 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.3,-0.4 -1.7,-0.7s-0.6,-0.7 -0.6,-1.2c0,-0.5 0.2,-1 0.7,-1.3s1,-0.5 1.7,-0.5c0.5,0 0.9,0.1 1.3,0.2s0.7,0.3 1,0.5l-0.1,1.2z" id="svg_90"/>
<path class="st42" d="m391.5,199.8l0,1.5l1.2,0l0,0.9l-1.2,0l0,3.8c0,0.3 0.1,0.5 0.2,0.6s0.3,0.2 0.5,0.2c0.1,0 0.2,0 0.3,0s0.2,0 0.3,-0.1l0.2,0.8c-0.1,0.1 -0.3,0.1 -0.5,0.2s-0.4,0.1 -0.6,0.1c-0.5,0 -0.8,-0.1 -1.1,-0.4s-0.4,-0.7 -0.4,-1.3l0,-3.8l-1,0l0,-0.9l1,0l0,-1.5l1.1,0l0,-0.1z" id="svg_91"/>
<path class="st42" d="m396.7,207.8c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.8,0.3 -1.4,0.3zm-0.1,-5.7c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.6,-0.4 -1,-0.4z" id="svg_92"/>
<path class="st42" d="m400.3,202l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.4,-0.6 0.7,-0.8s0.6,-0.3 0.9,-0.3c0.1,0 0.2,0 0.3,0s0.2,0 0.2,0l-0.2,1.1l-0.7,0c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.3 -0.5,0.6l0,3.6l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-4.5l-0.9,-0.1z" id="svg_93"/>
</g>
<g id="svg_94">
<path class="st42" d="m365.4,230.3l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.5,-0.6 0.8,-0.8s0.7,-0.3 1.1,-0.3c0.7,0 1.2,0.2 1.6,0.6s0.6,1 0.6,1.9l0,3.1l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-3.1c0,-0.6 -0.1,-1 -0.3,-1.2s-0.6,-0.4 -1,-0.4c-0.3,0 -0.6,0.1 -0.9,0.2s-0.5,0.4 -0.6,0.7l0,3.7l1,0.2l0,0.7l-3.1,0l0,-0.6l-0.2,0z" id="svg_95"/>
<path class="st42" d="m373.2,227.8c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 2.1,-0.9c0.9,0 1.6,0.3 2.1,0.9s0.8,1.4 0.8,2.3l0,0.1c0,0.9 -0.3,1.7 -0.8,2.3s-1.2,0.9 -2.1,0.9c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.1zm1.2,0.1c0,0.7 0.1,1.2 0.4,1.7s0.7,0.7 1.3,0.7c0.5,0 1,-0.2 1.2,-0.7s0.4,-1 0.4,-1.7l0,-0.1c0,-0.7 -0.1,-1.2 -0.4,-1.7s-0.7,-0.7 -1.3,-0.7s-1,0.2 -1.3,0.7s-0.4,1 -0.4,1.7l0,0.1l0.1,0z" id="svg_96"/>
<path class="st42" d="m384.2,230.3c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.6,0.2 -1,0.2c-0.8,0 -1.4,-0.3 -1.8,-0.8s-0.7,-1.3 -0.7,-2.2l0,-0.2c0,-1 0.2,-1.8 0.7,-2.5s1,-0.9 1.8,-0.9c0.4,0 0.7,0.1 1,0.2s0.5,0.3 0.7,0.6l0,-2.6l-1,-0.2l0,-0.7l1,0l1.2,0l0,8.2l1,0.2l0,0.7l-2,0l-0.1,-0.7zm-3.1,-2.2c0,0.6 0.1,1.1 0.4,1.5s0.7,0.6 1.2,0.6c0.3,0 0.6,-0.1 0.9,-0.2s0.4,-0.4 0.6,-0.7l0,-2.9c-0.1,-0.3 -0.3,-0.5 -0.6,-0.6s-0.5,-0.2 -0.9,-0.2c-0.6,0 -1,0.2 -1.2,0.7s-0.4,1.1 -0.4,1.8l0,0z" id="svg_97"/>
<path class="st42" d="m390,231.1c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.8,0.3 -1.4,0.3zm-0.1,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.6,-0.4 -1,-0.4z" id="svg_98"/>
<path class="st42" d="m396.4,232.7l1,-0.2l0,-7l-1,-0.2l0,-0.7l1.9,0l0.1,0.8c0.2,-0.3 0.5,-0.5 0.8,-0.7s0.7,-0.2 1.1,-0.2c0.8,0 1.4,0.3 1.8,0.9s0.7,1.4 0.7,2.5l0,0.1c0,0.9 -0.2,1.7 -0.7,2.2s-1,0.8 -1.8,0.8c-0.4,0 -0.7,-0.1 -1,-0.2s-0.5,-0.3 -0.8,-0.6l0,2.2l1,0.2l0,0.7l-3.1,0l0,-0.6zm5.2,-4.7c0,-0.7 -0.1,-1.3 -0.4,-1.8s-0.7,-0.7 -1.3,-0.7c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.4 -0.6,0.6l0,3.1c0.1,0.3 0.3,0.5 0.6,0.6s0.5,0.2 0.9,0.2c0.5,0 1,-0.2 1.2,-0.6s0.4,-0.9 0.4,-1.6l0,0z" id="svg_99"/>
<path class="st42" d="m403.8,225.4l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.4,-0.6 0.7,-0.8s0.6,-0.3 0.9,-0.3c0.1,0 0.2,0 0.3,0s0.2,0 0.2,0l-0.2,1.1l-0.7,0c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.3 -0.5,0.6l0,3.6l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-4.5l-0.9,-0.1z" id="svg_100"/>
<path class="st42" d="m408.7,227.8c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 2.1,-0.9c0.9,0 1.6,0.3 2.1,0.9s0.8,1.4 0.8,2.3l0,0.1c0,0.9 -0.3,1.7 -0.8,2.3s-1.2,0.9 -2.1,0.9c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.1zm1.2,0.1c0,0.7 0.1,1.2 0.4,1.7s0.7,0.7 1.3,0.7c0.5,0 1,-0.2 1.2,-0.7s0.4,-1 0.4,-1.7l0,-0.1c0,-0.7 -0.1,-1.2 -0.4,-1.7s-0.7,-0.7 -1.3,-0.7s-1,0.2 -1.3,0.7s-0.4,1 -0.4,1.7l0,0.1l0.1,0z" id="svg_101"/>
<path class="st42" d="m418.3,230.2c0.4,0 0.7,-0.1 1,-0.4s0.4,-0.5 0.4,-0.9l1,0l0,0c0,0.5 -0.2,1 -0.7,1.5s-1.1,0.6 -1.8,0.6c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.7,-1.4 -0.7,-2.3l0,-0.2c0,-0.9 0.2,-1.7 0.7,-2.3s1.2,-0.9 2.1,-0.9c0.5,0 1,0.1 1.4,0.3s0.7,0.4 1,0.7l0.1,1.4l-0.9,0l-0.3,-1c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.6,0 -1,0.2 -1.3,0.7s-0.4,1 -0.4,1.6l0,0.2c0,0.6 0.1,1.2 0.4,1.6s0.7,0.7 1.3,0.7z" id="svg_102"/>
<path class="st42" d="m424.8,231.1c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_103"/>
<path class="st42" d="m433.2,226.7l-0.9,0l-0.2,-0.8c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.4,0 -0.7,0.1 -0.9,0.3s-0.3,0.4 -0.3,0.7c0,0.2 0.1,0.4 0.3,0.6s0.5,0.3 1.1,0.4c0.8,0.2 1.4,0.4 1.8,0.7s0.6,0.7 0.6,1.2c0,0.6 -0.2,1 -0.7,1.4s-1,0.5 -1.8,0.5c-0.5,0 -0.9,-0.1 -1.3,-0.2s-0.7,-0.3 -1,-0.5l0,-1.4l0.9,0l0.2,0.8c0.1,0.1 0.3,0.2 0.5,0.3s0.5,0.1 0.7,0.1c0.4,0 0.7,-0.1 1,-0.2s0.3,-0.4 0.3,-0.7c0,-0.3 -0.1,-0.5 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.3,-0.4 -1.7,-0.7s-0.6,-0.7 -0.6,-1.2c0,-0.5 0.2,-1 0.7,-1.3s1,-0.5 1.7,-0.5c0.5,0 0.9,0.1 1.3,0.2s0.7,0.3 1,0.5l-0.1,1.2z" id="svg_104"/>
<path class="st42" d="m439.3,226.7l-0.9,0l-0.2,-0.8c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.4,0 -0.7,0.1 -0.9,0.3s-0.3,0.4 -0.3,0.7c0,0.2 0.1,0.4 0.3,0.6s0.5,0.3 1.1,0.4c0.8,0.2 1.4,0.4 1.8,0.7s0.6,0.7 0.6,1.2c0,0.6 -0.2,1 -0.7,1.4s-1,0.5 -1.8,0.5c-0.5,0 -0.9,-0.1 -1.3,-0.2s-0.7,-0.3 -1,-0.5l0,-1.4l0.9,0l0.2,0.8c0.1,0.1 0.3,0.2 0.5,0.3s0.5,0.1 0.7,0.1c0.4,0 0.7,-0.1 1,-0.2s0.3,-0.4 0.3,-0.7c0,-0.3 -0.1,-0.5 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.3,-0.4 -1.7,-0.7s-0.6,-0.7 -0.6,-1.2c0,-0.5 0.2,-1 0.7,-1.3s1,-0.5 1.7,-0.5c0.5,0 0.9,0.1 1.3,0.2s0.7,0.3 1,0.5l-0.1,1.2z" id="svg_105"/>
<path class="st42" d="m443.5,231.1c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_106"/>
<path class="st42" d="m451.9,226.7l-0.9,0l-0.2,-0.8c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.4,0 -0.7,0.1 -0.9,0.3s-0.3,0.4 -0.3,0.7c0,0.2 0.1,0.4 0.3,0.6s0.5,0.3 1.1,0.4c0.8,0.2 1.4,0.4 1.8,0.7s0.6,0.7 0.6,1.2c0,0.6 -0.2,1 -0.7,1.4s-1,0.5 -1.8,0.5c-0.5,0 -0.9,-0.1 -1.3,-0.2s-0.7,-0.3 -1,-0.5l0,-1.4l0.9,0l0.2,0.8c0.1,0.1 0.3,0.2 0.5,0.3s0.5,0.1 0.7,0.1c0.4,0 0.7,-0.1 1,-0.2s0.3,-0.4 0.3,-0.7c0,-0.3 -0.1,-0.5 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.3,-0.4 -1.7,-0.7s-0.6,-0.7 -0.6,-1.2c0,-0.5 0.2,-1 0.7,-1.3s1,-0.5 1.7,-0.5c0.5,0 0.9,0.1 1.3,0.2s0.7,0.3 1,0.5l-0.1,1.2z" id="svg_107"/>
</g>
<g id="svg_108">
<path class="st42" d="m373.8,95l0,0.7l-1,0.2l0,7.6l-1.2,0l-4.1,-6.6l0,0l0,5.7l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-6.7l-1,-0.2l0,-0.7l1,0l1.2,0l4.1,6.6l0,0l0,-5.7l-1,-0.2l0,-0.7l2.1,0l1,0z" id="svg_109"/>
<path class="st42" d="m374.7,100.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 2.1,-0.9c0.9,0 1.6,0.3 2.1,0.9s0.8,1.4 0.8,2.3l0,0.1c0,0.9 -0.3,1.7 -0.8,2.3s-1.2,0.9 -2.1,0.9c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.1zm1.2,0.1c0,0.7 0.1,1.2 0.4,1.7s0.7,0.7 1.3,0.7c0.5,0 1,-0.2 1.2,-0.7s0.4,-1 0.4,-1.7l0,-0.1c0,-0.7 -0.1,-1.2 -0.4,-1.7s-0.7,-0.7 -1.3,-0.7s-1,0.2 -1.3,0.7s-0.4,1 -0.4,1.7l0,0.1l0.1,0z" id="svg_110"/>
<path class="st42" d="m385.7,102.8c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.6,0.2 -1,0.2c-0.8,0 -1.4,-0.3 -1.8,-0.8s-0.7,-1.3 -0.7,-2.2l0,-0.1c0,-1 0.2,-1.8 0.7,-2.5s1,-0.9 1.8,-0.9c0.4,0 0.7,0.1 1,0.2s0.5,0.3 0.7,0.6l0,-2.6l-1,-0.2l0,-0.7l1,0l1.2,0l0,8.2l1,0.2l0,0.7l-2,0l-0.1,-0.8zm-3.1,-2.2c0,0.6 0.1,1.1 0.4,1.5s0.7,0.6 1.2,0.6c0.3,0 0.6,-0.1 0.9,-0.2s0.4,-0.4 0.6,-0.7l0,-2.9c-0.1,-0.3 -0.3,-0.5 -0.6,-0.6s-0.5,-0.2 -0.9,-0.2c-0.6,0 -1,0.2 -1.2,0.7s-0.4,1.1 -0.4,1.8l0,0z" id="svg_111"/>
<path class="st42" d="m391.5,103.6c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_112"/>
</g>
<line class="st62" x1="360" y1="203.9" x2="188.6" y2="203.9" id="svg_113"/>
<g id="svg_114">
<path class="st42" d="m369,176.9c1.1,0 2,0.3 2.7,1s1,1.6 1,2.7l0,1.2c0,1.1 -0.3,2 -1,2.7s-1.6,1 -2.7,1l-3.6,0l0,-0.7l1,-0.2l0,-6.7l-1,-0.2l0,-0.7l1,0l2.6,0l0,-0.1zm-1.4,0.9l0,6.7l1.5,0c0.8,0 1.4,-0.3 1.9,-0.8s0.7,-1.2 0.7,-2l0,-1.2c0,-0.8 -0.2,-1.5 -0.7,-2s-1.1,-0.8 -1.9,-0.8l-1.5,0l0,0.1z" id="svg_115"/>
<path class="st42" d="m376.8,185.5c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_116"/>
<path class="st42" d="m380.2,187.1l1,-0.2l0,-7l-1,-0.2l0,-0.7l1.9,0l0.1,0.8c0.2,-0.3 0.5,-0.5 0.8,-0.7s0.7,-0.2 1.1,-0.2c0.8,0 1.4,0.3 1.8,0.9s0.7,1.4 0.7,2.5l0,0.1c0,0.9 -0.2,1.7 -0.7,2.2s-1,0.8 -1.8,0.8c-0.4,0 -0.7,-0.1 -1,-0.2s-0.5,-0.3 -0.8,-0.6l0,2.2l1,0.2l0,0.7l-3.1,0l0,-0.6zm5.2,-4.8c0,-0.7 -0.1,-1.3 -0.4,-1.8s-0.7,-0.7 -1.3,-0.7c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.4 -0.6,0.6l0,3.1c0.1,0.3 0.3,0.5 0.6,0.6s0.5,0.2 0.9,0.2c0.5,0 1,-0.2 1.2,-0.6s0.4,-0.9 0.4,-1.6l0,0z" id="svg_117"/>
<path class="st42" d="m387.4,177l0,-0.7l2.1,0l0,8.2l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-7.3l-1,-0.2z" id="svg_118"/>
<path class="st42" d="m391.2,182.2c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 2.1,-0.9c0.9,0 1.6,0.3 2.1,0.9s0.8,1.4 0.8,2.3l0,0.1c0,0.9 -0.3,1.7 -0.8,2.3s-1.2,0.9 -2.1,0.9c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.1zm1.1,0.1c0,0.7 0.1,1.2 0.4,1.7s0.7,0.7 1.3,0.7c0.5,0 1,-0.2 1.2,-0.7s0.4,-1 0.4,-1.7l0,-0.1c0,-0.7 -0.1,-1.2 -0.4,-1.7s-0.7,-0.7 -1.3,-0.7s-1,0.2 -1.3,0.7s-0.4,1 -0.4,1.7l0,0.1l0.1,0z" id="svg_119"/>
<path class="st42" d="m403.9,179.8l-0.6,0.1l-2.4,6.5c-0.2,0.4 -0.4,0.8 -0.7,1.1s-0.7,0.5 -1.2,0.5c-0.1,0 -0.2,0 -0.4,0s-0.3,0 -0.3,-0.1l0.1,-0.9c0,0 0,0 0.2,0s0.3,0 0.3,0c0.2,0 0.4,-0.1 0.6,-0.3s0.3,-0.5 0.4,-0.7l0.3,-0.7l-2.1,-5.4l-0.6,-0.1l0,-0.7l2.6,0l0,0.7l-0.7,0.1l1.1,3.1l0.2,0.8l0,0l1.3,-3.9l-0.7,-0.1l0,-0.7l2.6,0l0,0.7z" id="svg_120"/>
<path class="st42" d="m404.4,184.7l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2,0l0.1,0.8c0.2,-0.3 0.5,-0.5 0.8,-0.7s0.7,-0.2 1.1,-0.2s0.8,0.1 1.1,0.3s0.5,0.5 0.7,0.9c0.2,-0.4 0.5,-0.6 0.8,-0.9s0.7,-0.3 1.1,-0.3c0.6,0 1.2,0.2 1.5,0.7s0.6,1.1 0.6,2l0,2.9l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-2.9c0,-0.6 -0.1,-1.1 -0.3,-1.3s-0.5,-0.4 -1,-0.4c-0.4,0 -0.7,0.1 -1,0.4s-0.4,0.6 -0.4,1.1l0,3.1l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-2.9c0,-0.6 -0.1,-1 -0.3,-1.3s-0.5,-0.4 -1,-0.4c-0.4,0 -0.6,0.1 -0.9,0.2s-0.4,0.3 -0.5,0.6l0,3.8l1,0.2l0,0.7l-3.1,0l0,-0.8l-0.1,0z" id="svg_121"/>
<path class="st42" d="m418.8,185.5c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.6c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_122"/>
<path class="st42" d="m422.2,184.7l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.5,-0.6 0.8,-0.8s0.7,-0.3 1.1,-0.3c0.7,0 1.2,0.2 1.6,0.6s0.6,1 0.6,1.9l0,3.1l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-3.1c0,-0.6 -0.1,-1 -0.3,-1.2s-0.6,-0.4 -1,-0.4c-0.3,0 -0.6,0.1 -0.9,0.2s-0.5,0.4 -0.6,0.7l0,3.7l1,0.2l0,0.7l-3.1,0l0,-0.6l-0.2,0z" id="svg_123"/>
<path class="st42" d="m432.1,177.5l0,1.5l1.2,0l0,0.9l-1.2,0l0,3.8c0,0.3 0.1,0.5 0.2,0.6s0.3,0.2 0.5,0.2c0.1,0 0.2,0 0.3,0s0.2,0 0.3,-0.1l0.2,0.8c-0.1,0.1 -0.3,0.1 -0.5,0.2s-0.4,0.1 -0.6,0.1c-0.5,0 -0.8,-0.1 -1.1,-0.4s-0.4,-0.7 -0.4,-1.3l0,-3.8l-1,0l0,-0.9l1,0l0,-1.5l1.1,0l0,-0.1z" id="svg_124"/>
</g>
<line class="st62" x1="360" y1="181.7" x2="181.5" y2="181.7" id="svg_125"/>
<line class="st62" x1="360" y1="229" x2="159.4" y2="229" id="svg_126"/>
<line class="st62" x1="360" y1="101.5" x2="242.8" y2="101.5" id="svg_127"/>
<g id="svg_128">
<path class="st42" d="m368.5,126.3c0.4,0 0.7,-0.1 1,-0.4s0.4,-0.5 0.4,-0.9l1,0l0,0c0,0.5 -0.2,1 -0.7,1.5s-1.1,0.6 -1.8,0.6c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.7,-1.4 -0.7,-2.3l0,-0.2c0,-0.9 0.2,-1.7 0.7,-2.3s1.2,-0.9 2.1,-0.9c0.5,0 1,0.1 1.4,0.3s0.7,0.4 1,0.7l0.1,1.4l-0.9,0l-0.3,-1c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3s-0.5,-0.1 -0.7,-0.1c-0.6,0 -1,0.2 -1.3,0.7s-0.4,1 -0.4,1.6l0,0.2c0,0.6 0.1,1.2 0.4,1.6s0.6,0.7 1.3,0.7z" id="svg_129"/>
<path class="st42" d="m372.1,123.8c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 2.1,-0.9c0.9,0 1.6,0.3 2.1,0.9s0.8,1.4 0.8,2.3l0,0.1c0,0.9 -0.3,1.7 -0.8,2.3s-1.2,0.9 -2.1,0.9c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.1zm1.1,0.1c0,0.7 0.1,1.2 0.4,1.7s0.7,0.7 1.3,0.7c0.5,0 1,-0.2 1.2,-0.7s0.4,-1 0.4,-1.7l0,-0.1c0,-0.7 -0.1,-1.2 -0.4,-1.7s-0.7,-0.7 -1.3,-0.7s-1,0.2 -1.3,0.7s-0.4,1 -0.4,1.7l0,0.1l0.1,0z" id="svg_130"/>
<path class="st42" d="m378.6,126.3l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.5,-0.6 0.8,-0.8s0.7,-0.3 1.1,-0.3c0.7,0 1.2,0.2 1.6,0.6s0.6,1 0.6,1.9l0,3.1l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-3.1c0,-0.6 -0.1,-1 -0.3,-1.2s-0.6,-0.4 -1,-0.4c-0.3,0 -0.6,0.1 -0.9,0.2s-0.5,0.4 -0.6,0.7l0,3.7l1,0.2l0,0.7l-3.1,0l0,-0.6l-0.2,0z" id="svg_131"/>
<path class="st42" d="m388.6,119.2l0,1.5l1.2,0l0,0.9l-1.2,0l0,3.8c0,0.3 0.1,0.5 0.2,0.6s0.3,0.2 0.5,0.2c0.1,0 0.2,0 0.3,0s0.2,0 0.3,-0.1l0.2,0.8c-0.1,0.1 -0.3,0.1 -0.5,0.2s-0.4,0.1 -0.6,0.1c-0.5,0 -0.8,-0.1 -1.1,-0.4s-0.4,-0.7 -0.4,-1.3l0,-3.8l-1,0l0,-0.9l1,0l0,-1.5l1.1,0l0,-0.1z" id="svg_132"/>
<path class="st42" d="m395.1,127c0,-0.2 -0.1,-0.3 -0.1,-0.5s0,-0.3 0,-0.4c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.7,0.3 -1.1,0.3c-0.7,0 -1.2,-0.2 -1.5,-0.5s-0.5,-0.8 -0.5,-1.4c0,-0.6 0.2,-1.1 0.7,-1.4s1.2,-0.5 2,-0.5l1.2,0l0,-0.7c0,-0.4 -0.1,-0.7 -0.4,-0.9s-0.6,-0.3 -1,-0.3c-0.3,0 -0.5,0 -0.8,0.1s-0.4,0.2 -0.5,0.3l-0.1,0.7l-0.9,0l0,-1.2c0.3,-0.2 0.6,-0.4 1,-0.6s0.9,-0.2 1.3,-0.2c0.7,0 1.3,0.2 1.7,0.6s0.7,0.9 0.7,1.6l0,3.1c0,0.1 0,0.2 0,0.2s0,0.2 0,0.2l0.5,0.1l0,0.7l-1.4,0zm-1.9,-0.8c0.4,0 0.7,-0.1 1,-0.3s0.5,-0.4 0.7,-0.7l0,-1l-1.2,0c-0.5,0 -0.8,0.1 -1.1,0.3s-0.4,0.5 -0.4,0.8c0,0.3 0.1,0.5 0.3,0.6s0.4,0.3 0.7,0.3z" id="svg_133"/>
<path class="st42" d="m397.5,126.3l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2.1,0l0,5.4l1,0.2l0,0.7l-3.1,0l0,-0.7zm2.1,-7.2l-1.2,0l0,-1.2l1.2,0l0,1.2z" id="svg_134"/>
<path class="st42" d="m401.3,126.3l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.5,-0.6 0.8,-0.8s0.7,-0.3 1.1,-0.3c0.7,0 1.2,0.2 1.6,0.6s0.6,1 0.6,1.9l0,3.1l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-3.1c0,-0.6 -0.1,-1 -0.3,-1.2s-0.6,-0.4 -1,-0.4c-0.3,0 -0.6,0.1 -0.9,0.2s-0.5,0.4 -0.6,0.7l0,3.7l1,0.2l0,0.7l-3.1,0l0,-0.6l-0.2,0z" id="svg_135"/>
<path class="st42" d="m412,127.2c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.8,0.3 -1.4,0.3zm-0.1,-5.7c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.6,-0.4 -1,-0.4z" id="svg_136"/>
<path class="st42" d="m415.6,121.4l0,-0.7l2,0l0.1,0.9c0.2,-0.3 0.4,-0.6 0.7,-0.8s0.6,-0.3 0.9,-0.3c0.1,0 0.2,0 0.3,0s0.2,0 0.2,0l-0.2,1.1l-0.7,0c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.3 -0.5,0.6l0,3.6l1,0.2l0,0.7l-3.1,0l0,-0.7l1,-0.2l0,-4.5l-0.9,-0.1z" id="svg_137"/>
<path class="st42" d="m420.6,126.3l1,-0.2l0,-4.5l-1,-0.2l0,-0.7l2.1,0l0,5.4l1,0.2l0,0.7l-3.1,0l0,-0.7zm2.1,-7.2l-1.2,0l0,-1.2l1.2,0l0,1.2z" id="svg_138"/>
<path class="st42" d="m426,126.1l2.6,0l0.1,-1l1,0l0,1.9l-5,0l0,-0.8l3.4,-4.6l-2.3,0l-0.1,1l-1,0l0,-1.9l4.8,0l0,0.8l-3.5,4.6z" id="svg_139"/>
<path class="st42" d="m433.7,127.2c-0.9,0 -1.6,-0.3 -2.1,-0.9s-0.8,-1.4 -0.8,-2.3l0,-0.3c0,-0.9 0.3,-1.7 0.8,-2.3s1.2,-0.9 1.9,-0.9c0.9,0 1.5,0.3 1.9,0.8s0.7,1.2 0.7,2.1l0,0.7l-4.1,0l0,0c0,0.6 0.2,1.1 0.5,1.5s0.7,0.6 1.2,0.6c0.4,0 0.7,-0.1 1,-0.2s0.5,-0.3 0.8,-0.5l0.5,0.8c-0.2,0.2 -0.5,0.4 -0.9,0.6s-0.9,0.3 -1.4,0.3zm-0.2,-5.7c-0.4,0 -0.7,0.2 -1,0.5s-0.4,0.7 -0.5,1.2l0,0l2.9,0l0,-0.2c0,-0.5 -0.1,-0.8 -0.4,-1.1s-0.5,-0.4 -1,-0.4z" id="svg_140"/>
<path class="st42" d="m441.5,126.3c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.6,0.2 -1,0.2c-0.8,0 -1.4,-0.3 -1.8,-0.8s-0.7,-1.3 -0.7,-2.2l0,-0.2c0,-1 0.2,-1.8 0.7,-2.5s1,-0.9 1.8,-0.9c0.4,0 0.7,0.1 1,0.2s0.5,0.3 0.7,0.6l0,-2.6l-1,-0.2l0,-0.7l1,0l1.2,0l0,8.2l1,0.2l0,0.7l-2,0l-0.1,-0.7zm-3.1,-2.2c0,0.6 0.1,1.1 0.4,1.5s0.7,0.6 1.2,0.6c0.3,0 0.6,-0.1 0.9,-0.2s0.4,-0.4 0.6,-0.7l0,-2.9c-0.1,-0.3 -0.3,-0.5 -0.6,-0.6s-0.5,-0.2 -0.9,-0.2c-0.6,0 -1,0.2 -1.2,0.7s-0.4,1.1 -0.4,1.8l0,0z" id="svg_141"/>
<path class="st42" d="m451.5,127c0,-0.2 -0.1,-0.3 -0.1,-0.5s0,-0.3 0,-0.4c-0.2,0.3 -0.5,0.5 -0.8,0.7s-0.7,0.3 -1.1,0.3c-0.7,0 -1.2,-0.2 -1.5,-0.5s-0.5,-0.8 -0.5,-1.4c0,-0.6 0.2,-1.1 0.7,-1.4s1.2,-0.5 2,-0.5l1.2,0l0,-0.7c0,-0.4 -0.1,-0.7 -0.4,-0.9s-0.6,-0.3 -1,-0.3c-0.3,0 -0.5,0 -0.8,0.1s-0.4,0.2 -0.5,0.3l-0.1,0.7l-0.9,0l0,-1.2c0.3,-0.2 0.6,-0.4 1,-0.6s0.9,-0.2 1.3,-0.2c0.7,0 1.3,0.2 1.7,0.6s0.7,0.9 0.7,1.6l0,3.1c0,0.1 0,0.2 0,0.2s0,0.2 0,0.2l0.5,0.1l0,0.7l-1.4,0zm-1.8,-0.8c0.4,0 0.7,-0.1 1,-0.3s0.5,-0.4 0.7,-0.7l0,-1l-1.2,0c-0.5,0 -0.8,0.1 -1.1,0.3s-0.4,0.5 -0.4,0.8c0,0.3 0.1,0.5 0.3,0.6s0.3,0.3 0.7,0.3z" id="svg_142"/>
<path class="st42" d="m453.9,128.7l1,-0.2l0,-7l-1,-0.2l0,-0.7l1.9,0l0.1,0.8c0.2,-0.3 0.5,-0.5 0.8,-0.7s0.7,-0.2 1.1,-0.2c0.8,0 1.4,0.3 1.8,0.9s0.7,1.4 0.7,2.5l0,0.1c0,0.9 -0.2,1.7 -0.7,2.2s-1,0.8 -1.8,0.8c-0.4,0 -0.7,-0.1 -1,-0.2s-0.5,-0.3 -0.8,-0.6l0,2.2l1,0.2l0,0.7l-3.1,0l0,-0.6zm5.2,-4.7c0,-0.7 -0.1,-1.3 -0.4,-1.8s-0.7,-0.7 -1.3,-0.7c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.4 -0.6,0.6l0,3.1c0.1,0.3 0.3,0.5 0.6,0.6s0.5,0.2 0.9,0.2c0.5,0 1,-0.2 1.2,-0.6s0.4,-0.9 0.4,-1.6l0,0z" id="svg_143"/>
<path class="st42" d="m461.1,128.7l1,-0.2l0,-7l-1,-0.2l0,-0.7l1.9,0l0.1,0.8c0.2,-0.3 0.5,-0.5 0.8,-0.7s0.7,-0.2 1.1,-0.2c0.8,0 1.4,0.3 1.8,0.9s0.7,1.4 0.7,2.5l0,0.1c0,0.9 -0.2,1.7 -0.7,2.2s-1,0.8 -1.8,0.8c-0.4,0 -0.7,-0.1 -1,-0.2s-0.5,-0.3 -0.8,-0.6l0,2.2l1,0.2l0,0.7l-3.1,0l0,-0.6zm5.2,-4.7c0,-0.7 -0.1,-1.3 -0.4,-1.8s-0.7,-0.7 -1.3,-0.7c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.4 -0.6,0.6l0,3.1c0.1,0.3 0.3,0.5 0.6,0.6s0.5,0.2 0.9,0.2c0.5,0 1,-0.2 1.2,-0.6s0.4,-0.9 0.4,-1.6l0,0z" id="svg_144"/>
</g>
<line class="st62" x1="360" y1="123.7" x2="150.2" y2="123.7" id="svg_145"/>
<g id="svg_146">
<path class="st42" d="m120.9,356.9l1,-0.2l0,-6.9l-1,-0.2l0,-1.2l4,0l0,1.2l-1,0.2l0,2.6l0.8,0l1.9,-2.7l-0.6,-0.1l0,-1.2l3.8,0l0,1.2l-1,0.2l-2.4,3.2l2.7,3.8l1,0.2l0,1.2l-3.8,0l0,-1.2l0.6,-0.1l-1.9,-2.9l-1.1,0l0,2.7l1,0.2l0,1.2l-4,0l0,-1.2z" id="svg_147"/>
<path class="st42" d="m135.6,357.1c-0.2,0.3 -0.5,0.6 -0.9,0.8s-0.7,0.3 -1.2,0.3c-0.8,0 -1.4,-0.2 -1.8,-0.7c-0.4,-0.5 -0.6,-1.2 -0.6,-2.3l0,-3l-0.8,-0.2l0,-1.2l0.8,0l1.9,0l0,4.3c0,0.5 0.1,0.9 0.3,1.1s0.4,0.3 0.8,0.3c0.3,0 0.6,0 0.8,-0.1s0.4,-0.2 0.5,-0.4l0,-3.9l-0.8,-0.2l0,-1.2l0.8,0l1.9,0l0,5.8l0.9,0.2l0,1.2l-2.6,0l0,-0.8z" id="svg_148"/>
<path class="st42" d="m146,354.6c0,1.1 -0.2,1.9 -0.7,2.6c-0.5,0.6 -1.2,1 -2.1,1c-0.4,0 -0.8,-0.1 -1.1,-0.3s-0.6,-0.4 -0.8,-0.8l-0.1,0.9l-1.7,0l0,-9l-1,-0.2l0,-1.2l3,0l0,3.9c0.2,-0.3 0.5,-0.5 0.7,-0.7s0.6,-0.2 1,-0.2c0.9,0 1.6,0.3 2.1,1c0.5,0.7 0.7,1.6 0.7,2.8l0,0.2zm-1.9,-0.1c0,-0.7 -0.1,-1.3 -0.3,-1.7s-0.6,-0.6 -1.1,-0.6c-0.3,0 -0.6,0.1 -0.8,0.2s-0.4,0.3 -0.5,0.5l0,3c0.1,0.2 0.3,0.4 0.5,0.5c0.2,0.1 0.5,0.2 0.8,0.2c0.5,0 0.8,-0.2 1,-0.5c0.2,-0.4 0.3,-0.9 0.3,-1.5l0,-0.1l0.1,0z" id="svg_149"/>
<path class="st42" d="m150.3,358.2c-1,0 -1.9,-0.3 -2.5,-1c-0.6,-0.7 -0.9,-1.5 -0.9,-2.5l0,-0.3c0,-1.1 0.3,-1.9 0.9,-2.6c0.6,-0.7 1.4,-1 2.4,-1c1,0 1.7,0.3 2.3,0.9c0.5,0.6 0.8,1.4 0.8,2.4l0,1.1l-4.3,0l0,0c0,0.5 0.2,0.9 0.5,1.2c0.3,0.3 0.7,0.5 1.1,0.5c0.4,0 0.8,0 1.1,-0.1s0.6,-0.2 0.9,-0.4l0.5,1.2c-0.3,0.2 -0.7,0.4 -1.2,0.6s-1,0 -1.6,0zm-0.1,-6c-0.4,0 -0.6,0.1 -0.8,0.4c-0.2,0.3 -0.3,0.6 -0.4,1.1l0,0l2.4,0l0,-0.2c0,-0.4 -0.1,-0.7 -0.3,-1c-0.3,-0.2 -0.6,-0.3 -0.9,-0.3z" id="svg_150"/>
<path class="st42" d="m154.3,356.9l0.9,-0.2l0,-4.5l-1,-0.2l0,-1.2l2.8,0l0.1,1c0.2,-0.4 0.4,-0.7 0.7,-0.9c0.3,-0.2 0.6,-0.3 0.9,-0.3c0.1,0 0.2,0 0.3,0c0.1,0 0.2,0 0.3,0.1l-0.2,1.8l-0.8,0c-0.3,0 -0.5,0.1 -0.7,0.2c-0.2,0.1 -0.3,0.3 -0.4,0.5l0,3.5l0.9,0.2l0,1.2l-3.8,0l0,-1.2z" id="svg_151"/>
<path class="st42" d="m159.7,356.9l0.9,-0.2l0,-4.5l-1,-0.2l0,-1.2l2.8,0l0.1,1c0.2,-0.4 0.5,-0.7 0.9,-0.9s0.7,-0.3 1.2,-0.3c0.7,0 1.3,0.2 1.7,0.7c0.4,0.5 0.6,1.2 0.6,2.1l0,3.1l0.9,0.2l0,1.2l-3.7,0l0,-1.2l0.8,-0.2l0,-3.1c0,-0.5 -0.1,-0.8 -0.3,-1c-0.2,-0.2 -0.5,-0.3 -0.9,-0.3c-0.3,0 -0.5,0.1 -0.7,0.2c-0.2,0.1 -0.4,0.3 -0.5,0.4l0,3.9l0.8,0.2l0,1.2l-3.7,0l0,-1.1l0.1,0z" id="svg_152"/>
<path class="st42" d="m171.9,358.2c-1,0 -1.9,-0.3 -2.5,-1c-0.6,-0.7 -0.9,-1.5 -0.9,-2.5l0,-0.3c0,-1.1 0.3,-1.9 0.9,-2.6c0.6,-0.7 1.4,-1 2.4,-1c1,0 1.7,0.3 2.3,0.9c0.5,0.6 0.8,1.4 0.8,2.4l0,1.1l-4.3,0l0,0c0,0.5 0.2,0.9 0.5,1.2c0.3,0.3 0.7,0.5 1.1,0.5c0.4,0 0.8,0 1.1,-0.1s0.6,-0.2 0.9,-0.4l0.5,1.2c-0.3,0.2 -0.7,0.4 -1.2,0.6s-1.1,0 -1.6,0zm-0.2,-6c-0.4,0 -0.6,0.1 -0.8,0.4c-0.2,0.3 -0.3,0.6 -0.4,1.1l0,0l2.4,0l0,-0.2c0,-0.4 -0.1,-0.7 -0.3,-1c-0.2,-0.2 -0.5,-0.3 -0.9,-0.3z" id="svg_153"/>
<path class="st42" d="m178.5,349.1l0,1.8l1.3,0l0,1.4l-1.3,0l0,3.7c0,0.3 0.1,0.5 0.2,0.6c0.1,0.1 0.3,0.2 0.5,0.2c0.1,0 0.2,0 0.3,0c0.1,0 0.2,0 0.3,-0.1l0.2,1.4c-0.2,0.1 -0.4,0.1 -0.6,0.1c-0.2,0 -0.4,0 -0.7,0c-0.7,0 -1.2,-0.2 -1.5,-0.6c-0.4,-0.4 -0.5,-0.9 -0.5,-1.7l0,-3.7l-1.1,0l0,-1.4l1.1,0l0,-1.8l1.8,0l0,0.1z" id="svg_154"/>
<path class="st42" d="m184.2,358.2c-1,0 -1.9,-0.3 -2.5,-1c-0.6,-0.7 -0.9,-1.5 -0.9,-2.5l0,-0.3c0,-1.1 0.3,-1.9 0.9,-2.6c0.6,-0.7 1.4,-1 2.4,-1c1,0 1.7,0.3 2.3,0.9c0.5,0.6 0.8,1.4 0.8,2.4l0,1.1l-4.3,0l0,0c0,0.5 0.2,0.9 0.5,1.2c0.3,0.3 0.7,0.5 1.1,0.5c0.4,0 0.8,0 1.1,-0.1s0.6,-0.2 0.9,-0.4l0.5,1.2c-0.3,0.2 -0.7,0.4 -1.2,0.6s-1,0 -1.6,0zm-0.2,-6c-0.4,0 -0.6,0.1 -0.8,0.4c-0.2,0.3 -0.3,0.6 -0.4,1.1l0,0l2.4,0l0,-0.2c0,-0.4 -0.1,-0.7 -0.3,-1c-0.2,-0.2 -0.5,-0.3 -0.9,-0.3z" id="svg_155"/>
<path class="st42" d="m193.69375,353.3l-1.3,0l-0.2,-0.9c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3c-0.2,-0.1 -0.4,-0.1 -0.7,-0.1c-0.3,0 -0.6,0.1 -0.8,0.2c-0.2,0.2 -0.3,0.3 -0.3,0.6c0,0.2 0.1,0.4 0.3,0.5s0.6,0.3 1.1,0.4c0.9,0.2 1.5,0.4 2,0.8s0.6,0.8 0.6,1.4c0,0.6 -0.3,1.2 -0.8,1.6c-0.6,0.4 -1.3,0.6 -2.2,0.6c-0.6,0 -1.1,-0.1 -1.5,-0.2c-0.5,-0.2 -0.9,-0.4 -1.2,-0.7l0,-1.6l1.4,0l0.3,0.9c0.1,0.1 0.3,0.2 0.5,0.2c0.2,0 0.4,0.1 0.6,0.1c0.4,0 0.7,-0.1 0.9,-0.2s0.3,-0.3 0.3,-0.6c0,-0.2 -0.1,-0.4 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.5,-0.4 -1.9,-0.8s-0.6,-0.8 -0.6,-1.4c0,-0.6 0.2,-1.1 0.7,-1.6s1.2,-0.7 2.1,-0.7c0.6,0 1.1,0.1 1.6,0.2s0.9,0.3 1.2,0.6l-0.2,2z" id="svg_157" fill="black"/>
<path class="st42" d="m211.9,351.6l-1.4,0l-0.2,-1.3c-0.2,-0.2 -0.4,-0.3 -0.7,-0.5s-0.6,-0.2 -1,-0.2c-0.8,0 -1.5,0.3 -1.9,0.9c-0.5,0.6 -0.7,1.4 -0.7,2.4l0,0.3c0,1 0.2,1.8 0.7,2.4c0.5,0.6 1.1,0.9 1.9,0.9c0.4,0 0.7,-0.1 1,-0.2c0.3,-0.1 0.6,-0.3 0.7,-0.5l0.2,-1.3l1.4,0l0,1.9c-0.4,0.5 -0.9,0.8 -1.5,1.1c-0.6,0.3 -1.3,0.4 -2,0.4c-1.3,0 -2.4,-0.4 -3.2,-1.3s-1.2,-2.1 -1.2,-3.5l0,-0.1c0,-1.4 0.4,-2.6 1.2,-3.5c0.8,-0.9 1.9,-1.4 3.2,-1.4c0.7,0 1.4,0.1 2,0.4c0.6,0.3 1.1,0.6 1.5,1.1l0,2z" id="svg_158"/>
<path class="st42" d="m212.6,348.8l0,-1.2l3,0l0,9l0.9,0.2l0,1.2l-3.8,0l0,-1.2l0.9,-0.2l0,-7.6l-1,-0.2z" id="svg_159"/>
<path class="st42" d="m222.2,357.1c-0.2,0.3 -0.5,0.6 -0.9,0.8s-0.7,0.3 -1.2,0.3c-0.8,0 -1.4,-0.2 -1.8,-0.7c-0.4,-0.5 -0.6,-1.2 -0.6,-2.3l0,-3l-0.7,-0.2l0,-1.2l0.8,0l1.9,0l0,4.3c0,0.5 0.1,0.9 0.3,1.1s0.4,0.3 0.8,0.3c0.3,0 0.6,0 0.8,-0.1s0.4,-0.2 0.5,-0.4l0,-3.9l-0.8,-0.2l0,-1.2l0.8,0l1.9,0l0,5.8l0.9,0.2l0,1.2l-2.6,0l-0.1,-0.8z" id="svg_160"/>
<path class="st42" d="m231.5,353.3l-1.3,0l-0.2,-0.9c-0.1,-0.1 -0.3,-0.2 -0.5,-0.3c-0.2,-0.1 -0.4,-0.1 -0.7,-0.1c-0.3,0 -0.6,0.1 -0.8,0.2c-0.2,0.2 -0.3,0.3 -0.3,0.6c0,0.2 0.1,0.4 0.3,0.5s0.6,0.3 1.1,0.4c0.9,0.2 1.5,0.4 2,0.8s0.6,0.8 0.6,1.4c0,0.6 -0.3,1.2 -0.8,1.6c-0.6,0.4 -1.3,0.6 -2.2,0.6c-0.6,0 -1.1,-0.1 -1.5,-0.2c-0.5,-0.2 -0.9,-0.4 -1.2,-0.7l0,-1.6l1.4,0l0.3,0.9c0.1,0.1 0.3,0.2 0.5,0.2c0.2,0 0.4,0.1 0.6,0.1c0.4,0 0.7,-0.1 0.9,-0.2s0.3,-0.3 0.3,-0.6c0,-0.2 -0.1,-0.4 -0.3,-0.6s-0.6,-0.3 -1.1,-0.4c-0.8,-0.2 -1.5,-0.4 -1.9,-0.8s-0.6,-0.8 -0.6,-1.4c0,-0.6 0.2,-1.1 0.7,-1.6s1.2,-0.7 2.1,-0.7c0.6,0 1.1,0.1 1.6,0.2s0.9,0.3 1.2,0.6l-0.2,2z" id="svg_161"/>
<path class="st42" d="m235.5,349.1l0,1.8l1.3,0l0,1.4l-1.3,0l0,3.7c0,0.3 0.1,0.5 0.2,0.6c0.1,0.1 0.3,0.2 0.5,0.2c0.1,0 0.2,0 0.3,0c0.1,0 0.2,0 0.3,-0.1l0.2,1.4c-0.2,0.1 -0.4,0.1 -0.6,0.1c-0.2,0 -0.4,0 -0.7,0c-0.7,0 -1.2,-0.2 -1.5,-0.6c-0.4,-0.4 -0.5,-0.9 -0.5,-1.7l0,-3.7l-1.1,0l0,-1.4l1.1,0l0,-1.8l1.8,0l0,0.1z" id="svg_162"/>
<path class="st42" d="m241.2,358.2c-1,0 -1.9,-0.3 -2.5,-1c-0.6,-0.7 -0.9,-1.5 -0.9,-2.5l0,-0.3c0,-1.1 0.3,-1.9 0.9,-2.6c0.6,-0.7 1.4,-1 2.4,-1c1,0 1.7,0.3 2.3,0.9c0.5,0.6 0.8,1.4 0.8,2.4l0,1.1l-4.3,0l0,0c0,0.5 0.2,0.9 0.5,1.2c0.3,0.3 0.7,0.5 1.1,0.5c0.4,0 0.8,0 1.1,-0.1s0.6,-0.2 0.9,-0.4l0.5,1.2c-0.3,0.2 -0.7,0.4 -1.2,0.6s-1,0 -1.6,0zm-0.2,-6c-0.4,0 -0.6,0.1 -0.8,0.4c-0.2,0.3 -0.3,0.6 -0.4,1.1l0,0l2.4,0l0,-0.2c0,-0.4 -0.1,-0.7 -0.3,-1c-0.2,-0.2 -0.5,-0.3 -0.9,-0.3z" id="svg_163"/>
<path class="st42" d="m245.1,356.9l0.9,-0.2l0,-4.5l-1,-0.2l0,-1.2l2.8,0l0.1,1c0.2,-0.4 0.4,-0.7 0.7,-0.9c0.3,-0.2 0.6,-0.3 0.9,-0.3c0.1,0 0.2,0 0.3,0c0.1,0 0.2,0 0.3,0.1l-0.2,1.8l-0.8,0c-0.3,0 -0.5,0.1 -0.7,0.2c-0.2,0.1 -0.3,0.3 -0.4,0.5l0,3.5l0.9,0.2l0,1.2l-3.8,0l0,-1.2z" id="svg_164"/>
</g>
</g>
<g id="Layer_14"/>
</g>
</svg>
Before Width: | Height: | Size: 52 KiB After Width: | Height: | Size: 32 KiB |
@ -148,7 +148,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment
### Creating the Frontend Service
The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services-service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user you can use `kubectl port-forward` to access the Service even though it uses a `ClusterIP`.
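To make the frontend externally visible as described above, the frontend Service manifest might look like this; a sketch based on the guestbook example, where the labels and the choice of `NodePort` are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # NodePort (or LoadBalancer on supported cloud providers) exposes the
  # Service outside the cluster; the default ClusterIP type does not.
  type: NodePort
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```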
@ -12,7 +12,7 @@ spec:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
args:
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
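For context, the container snippet in the hunk above sits inside a CronJob manifest along these lines; a sketch, where the schedule value and the `batch/v1beta1` API version are assumptions:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # run once per minute (assumed)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```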
@ -85,6 +85,6 @@ Perhatikan bahwa hal ini dapat terjadi apabila TTL diaktifkan dengan nilai selai
[Membersihkan Job secara Otomatis](/id/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically)
[Dokumentasi Rancangan](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
[Dokumentasi Rancangan](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
@ -75,5 +75,5 @@ terhadap dokumentasi Kubernetes, tetapi daftar ini dapat membantumu memulainya.
- Untuk berkontribusi ke komunitas Kubernetes melalui forum-forum daring seperti Twitter atau Stack Overflow, atau mengetahui tentang pertemuan komunitas (_meetup_) lokal dan acara-acara Kubernetes, kunjungi [situs komunitas Kubernetes](/community/).
- Untuk mulai berkontribusi ke pengembangan fitur, baca [_cheatsheet_ kontributor](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet).
- Untuk kontribusi khusus ke halaman Bahansa Indonesia, baca [Dokumentasi Khusus Untuk Translasi Bahasa Indonesia](/docs/contribute/localization_id.md)
- Untuk kontribusi khusus ke halaman Bahasa Indonesia, baca [Dokumentasi Khusus Untuk Translasi Bahasa Indonesia](/docs/contribute/localization_id.md)
@ -11,7 +11,7 @@ spec:
containers:
- name: hello
image: busybox
args:
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
@ -0,0 +1,128 @@
---
title: Container Lifecycle Hooks
content_type: concept
weight: 30
---

<!-- overview -->
This page describes how kubelet-managed Containers can use the Container lifecycle
hook framework to run code triggered by events during their lifecycle.

<!-- body -->

## Overview

Analogous to many programming language frameworks that have component lifecycle hooks,
such as Angular, Kubernetes provides Containers with lifecycle hooks.
The hooks enable Containers to be aware of events in their management lifecycle
and run code implemented in a handler when the corresponding lifecycle hook is
executed.

## Container hooks

There are two hooks that are exposed to Containers:

`PostStart`

This hook is executed immediately after a container is created.
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
No parameters are passed to the handler.

`PreStop`

This hook is called immediately before a container is terminated due to an API request or
management event such as a liveness/startup probe failure, preemption,
resource contention and others. A call to the `PreStop` hook fails if the container is
already in a terminated or completed state, and the hook must finish before the TERM signal to
stop the container can be sent. The Pod's termination grace period countdown begins before
the `PreStop` hook is executed, so regardless of the outcome of the handler, the container will
terminate within the configured grace period. No parameters are passed to the handler.

A more detailed description of the Pod termination process can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

### Hook handler implementations

Containers can access a hook by implementing and registering a handler for that hook.
There are two types of hook handlers that can be implemented for Containers:

* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
  Resources consumed by the command are counted against the Container.
* HTTP - Executes an HTTP request against a specific endpoint of the Container.

### Hook handler execution

When a Container lifecycle hook is called, the Kubernetes management system
executes the handler according to the hook action: `httpGet` and `tcpSocket` are executed by the kubelet process,
while `exec` is executed in the Container.

Hook handler calls are synchronous within the context of the Pod containing the Container.
This means that for a `PostStart` hook, the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs, the container cannot reach a
`running` state.

`PreStop` hooks are not executed asynchronously from the signal to stop the container; the hook
must complete its execution before the TERM signal can be sent. If a `PreStop` hook
hangs during execution, the Pod's phase will remain `Terminating` until the Pod is killed forcibly
after its `terminationGracePeriodSeconds` expires. This grace period applies to the total time
needed for both the `PreStop` hook to execute and for the container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, the hook takes 55 seconds to complete,
and the container takes 10 seconds to stop normally after receiving the signal, then the container
will be killed before it can finish stopping, since `terminationGracePeriodSeconds` is less than the
total time (55+10) it takes for these two things to happen.

If either a `PostStart` or `PreStop` hook fails, the container is killed.

Users should make their hook handlers as lightweight as possible.
There are cases, however, when long running commands make sense,
such as saving the container's state before it stops.

### Hook delivery guarantees

Hook delivery is intended to be *at least once*, which means
that a hook may be called multiple times for any given event, such as for `PostStart`
or `PreStop`.
It is up to the hook implementation to handle this correctly.

Generally, only single deliveries are made.
If, for example, an HTTP hook receiver is temporarily unable to take traffic,
there is no attempt to resend.
In some rare cases, however, double delivery may occur.
For instance, if a kubelet restarts in the middle of sending a hook, the hook might be
called a second time after the kubelet comes back up.

### Debugging hook handlers

The logs for a hook handler are not exposed in Pod events.
If a handler fails for some reason, it broadcasts an event.
For `PostStart`, this is the `FailedPostStartHook` event,
and for `PreStop`, this is the `FailedPreStopHook` event.
You can see these events by running `kubectl describe pod <pod_name>`.
Here is some example output of events from running this command:

```
Events:
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  ---------  --------  -----  ----  -------------  --------  ------  -------
  1m  1m  1  {default-scheduler }  Normal  Scheduled  Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
  1m  1m  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Pulling  pulling image "test:1.0"
  1m  1m  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Created  Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
  1m  1m  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Pulled  Successfully pulled image "test:1.0"
  1m  1m  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Started  Started container with docker id 5c6a256a2567
  38s  38s  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Killing  Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
  37s  37s  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Killing  Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
  38s  37s  2  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
  1m  22s  2  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Warning  FailedPostStartHook
```

## {{% heading "whatsnext" %}}

* Learn more about the [Container environment](/docs/concepts/containers/container-environment/).
* Try a tutorial on how to
  [define handlers for Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
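A minimal sketch of how the exec handlers described in the lifecycle-hooks page above attach to a container spec; the image and the commands are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          # give nginx a chance to shut down cleanly before TERM is sent
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```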
@ -1,3 +1,4 @@
---
title: ドキュメント
linktitle: Kubernetesドキュメント
title: ドキュメント
---
@ -6,11 +6,12 @@ weight: 10
<!-- overview -->
Kubernetesはコンテナを_Node_上で実行されるPodに配置することで、ワークロードを実行します。
Kubernetesはコンテナを _Node_ 上で実行されるPodに配置することで、ワークロードを実行します。
ノードはクラスターによりますが、1つのVMまたは物理的なマシンです。
各ノードは{{< glossary_tooltip text="Pod" term_id="pod" >}}やそれを制御する{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}を実行するのに必要なサービスを含んでいます。
通常、1つのクラスターで複数のノードを持ちます。学習用途やリソースの制限がある環境では、1ノードかもしれません。
1つのノード上の[コンポーネント](/ja/docs/concepts/overview/components/#node-components)には、{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}、{{< glossary_tooltip text="コンテナランタイム" term_id="container-runtime" >}}、{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}が含まれます。
<!-- body -->
@ -22,7 +23,7 @@ Kubernetesはコンテナを_Node_上で実行されるPodに配置すること
1. ノード上のkubeletが、コントロールプレーンに自己登録する。
2. あなた、もしくは他のユーザーが手動でNodeオブジェクトを追加する。
Nodeオブジェクトの作成、もしくはノード上のkubeketによる自己登録の後、コントロールプレーンはNodeオブジェクトが有効かチェックします。例えば、下記のjsonマニフェストでノードを作成してみましょう。
Nodeオブジェクトの作成、もしくはノード上のkubeketによる自己登録の後、コントロールプレーンはNodeオブジェクトが有効かチェックします。例えば、下記のjsonマニフェストでノードを作成してみましょう:
```json
{
@ -72,9 +73,9 @@ kubeletのフラグ `--register-node`がtrue(デフォルト)のとき、kub
管理者が手動でNodeオブジェクトを作成したい場合は、kubeletフラグ `--register-node = false`を設定してください。
管理者は`--register-node`の設定に関係なくNodeオブジェクトを変更することができます。
変更には、ノードにラベルを設定し、それをunschedulableとしてマークすることが含まれます。
例えば、ノードにラベルを設定し、それをunschedulableとしてマークすることが含まれます。
ノード上のラベルは、スケジューリングを制御するためにPod上のノードセレクタと組み合わせて使用できます。
ノード上のラベルは、スケジューリングを制御するためにPod上のノードセレクターと組み合わせて使用できます。
例えば、Podをノードのサブセットでのみ実行する資格があるように制限します。
ノードをunschedulableとしてマークすると、新しいPodがそのノードにスケジュールされるのを防ぎますが、ノード上の既存のPodには影響しません。
@ -124,7 +125,7 @@ kubectl describe node <ノード名をここに挿入>
{{< table caption = "ノードのConditionと、各condition適用時の概要" >}}
| ノードのCondition | 概要 |
|----------------------|-------------|
| `Ready` | ノードの状態がHealthyでPodを配置可能な場合に`True`になります。ノードの状態に問題があり、Podが配置できない場合に`False`になります。ノードコントローラーが、`node-monitor-grace-period`で設定された時間内(デフォルトでは40秒)に該当ノードと疎通できない場合、`Unknown`になります。 |
| `Ready` | ノードの状態が有効でPodを配置可能な場合に`True`になります。ノードの状態に問題があり、Podが配置できない場合に`False`になります。ノードコントローラーが、`node-monitor-grace-period`で設定された時間内(デフォルトでは40秒)に該当ノードと疎通できない場合、`Unknown`になります。 |
| `DiskPressure` | ノードのディスク容量が圧迫されているときに`True`になります。圧迫とは、ディスクの空き容量が少ないことを指します。それ以外のときは`False`です。 |
| `MemoryPressure` | ノードのメモリが圧迫されているときに`True`になります。圧迫とは、メモリの空き容量が少ないことを指します。それ以外のときは`False`です。 |
| `PIDPressure` | プロセスが圧迫されているときに`True`になります。圧迫とは、プロセス数が多すぎることを指します。それ以外のときは`False`です。 |
@ -241,7 +242,7 @@ kubeletが`NodeStatus`とLeaseオブジェクトの作成および更新を担
このような場合、ノードコントローラーはマスター接続に問題があると見なし、接続が回復するまですべての退役を停止します。
ノードコントローラーは、Podがtaintを許容しない場合、 `NoExecute`のtaintを持つノード上で実行されているPodを排除する責務もあります。
さらに、デフォルトで無効になっているアルファ機能として、ノードコントローラーはノードに到達できない、または準備ができていないなどのノードの問題に対応する{{< glossary_tooltip text="taint" term_id="taint" >}}を追加する責務があります。これはスケジューラーが、問題のあるノードにPodを配置しない事を意味しています。
さらに、ノードコントローラーはノードに到達できない、または準備ができていないなどのノードの問題に対応する{{< glossary_tooltip text="taint" term_id="taint" >}}を追加する責務があります。これはスケジューラーが、問題のあるノードにPodを配置しない事を意味しています。
{{< caution >}}
`kubectl cordon`はノードに'unschedulable'としてマークします。それはロードバランサーのターゲットリストからノードを削除するという
@ -254,8 +255,7 @@ Nodeオブジェクトはノードのリソースキャパシティ(CPUの数
[自己登録](#self-registration-of-nodes)したノードは、Nodeオブジェクトを作成するときにキャパシティを報告します。
[手動によるノード管理](#manual-node-administration)を実行している場合は、ノードを追加するときにキャパシティを設定する必要があります。
Kubernetes{{< glossary_tooltip text="スケジューラー" term_id="kube-scheduler" >}}は、ノード上のすべてのPodに十分なリソースがあることを確認します。
ノード上のコンテナが要求するリソースの合計がノードキャパシティ以下であることを確認します。
Kubernetes{{< glossary_tooltip text="スケジューラー" term_id="kube-scheduler" >}}は、ノード上のすべてのPodに十分なリソースがあることを確認します。スケジューラーは、ノード上のコンテナが要求するリソースの合計がノードキャパシティ以下であることを確認します。
これは、kubeletによって管理されたすべてのコンテナを含みますが、コンテナランタイムによって直接開始されたコンテナやkubeletの制御外で実行されているプロセスは含みません。
{{< note >}}
@ -45,7 +45,7 @@ KubernetesのIPアドレスは`Pod`スコープに存在します。`Pod`内の
`Pod`に転送する`ノード`自体のポート(ホストポートと呼ばれる)を要求することは可能ですが、これは非常にニッチな操作です。このポート転送の実装方法も、コンテナランタイムの詳細部分です。`Pod`自体は、ホストポートの有無を認識しません。
## Kubernetesネットワークモデルの実装方法
## Kubernetesネットワークモデルの実装方法 {#how-to-implement-the-kubernetes-networking-model}
このネットワークモデルを実装する方法はいくつかあります。このドキュメントは、こうした方法を網羅的にはカバーしませんが、いくつかの技術の紹介として、また出発点として役立つことを願っています。
@ -84,7 +84,7 @@ CPUは常に相対量としてではなく、絶対量として要求されま
### メモリーの意味
`メモリー`の制限と要求はバイト単位で測定されます。
E、P、T、G、M、Kのいずれかのサフィックスを使用して、メモリーを整数または固定小数点整数として表すことができます。
E、P、T、G、M、Kのいずれかのサフィックスを使用して、メモリーを整数または固定小数点数として表すことができます。
また、Ei、Pi、Ti、Gi、Mi、Kiのような2の累乗の値を使用することもできます。
たとえば、以下はほぼ同じ値を表しています。
@ -104,11 +104,9 @@ metadata:
name: frontend
spec:
containers:
- name: db
image: mysql
- name: app
image: images.my-company.example/app:v4
env:
- name: MYSQL_ROOT_PASSWORD
value: "password"
resources:
requests:
memory: "64Mi"
@ -116,8 +114,8 @@ spec:
limits:
memory: "128Mi"
cpu: "500m"
- name: wp
image: wordpress
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
requests:
memory: "64Mi"
@ -185,7 +183,7 @@ kubeletは、ローカルのエフェメラルストレージを使用して、P
また、kubeletはこの種類のストレージを使用して、[Nodeレベルのコンテナログ](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)、コンテナイメージ、実行中のコンテナの書き込み可能なレイヤーを保持します。
{{< caution >}}
Nodeに障害が発生すると、そのエフェメラルストレージ内のデータが失われる可能性があります。
Nodeに障害が発生すると、そのエフェメラルストレージ内のデータが失われる可能性があります。
アプリケーションは、ローカルのエフェメラルストレージにパフォーマンスのサービス品質保証(ディスクのIOPSなど)を期待することはできません。
{{< /caution >}}
@ -242,7 +240,7 @@ Podの各コンテナは、次の1つ以上を指定できます。
* `spec.containers[].resources.requests.ephemeral-storage`
`ephemeral-storage`の制限と要求はバイト単位で記します。
ストレージは、次のいずれかの接尾辞を使用して、通常の整数または固定小数点整数として表すことができます。
ストレージは、次のいずれかの接尾辞を使用して、通常の整数または固定小数点数として表すことができます。
E、P、T、G、M、K。Ei、Pi、Ti、Gi、Mi、Kiの2のべき乗を使用することもできます。
たとえば、以下はほぼ同じ値を表しています。
@ -262,18 +260,15 @@ metadata:
name: frontend
spec:
containers:
- name: db
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "password"
- name: app
image: images.my-company.example/app:v4
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
- name: wp
image: wordpress
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
requests:
ephemeral-storage: "2Gi"
@ -300,6 +295,7 @@ kubeletがローカルのエフェメラルストレージをリソースとし
Podが許可するよりも多くのエフェメラルストレージを使用している場合、kubeletはPodの排出をトリガーするシグナルを設定します。
コンテナレベルの分離の場合、コンテナの書き込み可能なレイヤーとログ使用量がストレージの制限を超えると、kubeletはPodに排出のマークを付けます。
Podレベルの分離の場合、kubeletはPod内のコンテナの制限を合計し、Podの全体的なストレージ制限を計算します。
このケースでは、すべてのコンテナからのローカルのエフェメラルストレージの使用量とPodの`emptyDir`ボリュームの合計がPod全体のストレージ制限を超過する場合、
kubeletはPodをまた排出対象としてマークします。
@ -345,7 +341,7 @@ Kubernetesでは、`1048576`から始まるプロジェクトIDを使用しま
Kubernetesが使用しないようにする必要があります。
クォータはディレクトリスキャンよりも高速で正確です。
ディレクトリがプロジェクトに割り当てられると、ディレクトリ配下に作成されたファイルはすべてそのプロジェクト内に作成され、カーネルはそのプロジェクト内のファイルによって使用されているブロックの数を追跡するだけです。
ディレクトリがプロジェクトに割り当てられると、ディレクトリ配下に作成されたファイルはすべてそのプロジェクト内に作成され、カーネルはそのプロジェクト内のファイルによって使用されているブロックの数を追跡するだけです。
ファイルが作成されて削除されても、開いているファイルディスクリプタがあれば、スペースを消費し続けます。
クォータトラッキングはそのスペースを正確に記録しますが、ディレクトリスキャンは削除されたファイルが使用するストレージを見落としてしまいます。
@ -354,7 +350,7 @@ Kubernetesが使用しないようにする必要があります。
* kubelet設定で、`LocalStorageCapacityIsolationFSQuotaMonitoring=true`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gate/)を有効にします。
|
||||
|
||||
* ルートファイルシステム(またはオプションのランタイムファイルシステム)がプロジェクトクォータを有効にしていることを確認してください。
|
||||
すべてのXFSファイルシステムはプロジェクトクォータをサポートしています。
|
||||
ext4ファイルシステムでは、ファイルシステムがマウントされていない間は、プロジェクトクォータ追跡機能を有効にする必要があります。
|
||||
```bash
|
||||
# ext4の場合、/dev/block-deviceがマウントされていません
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ Kubernetesの名称は、ギリシャ語に由来し、操舵手やパイロッ
|
|||
Kubernetesは以下を提供します。
|
||||
|
||||
* **サービスディスカバリーと負荷分散**
|
||||
Kubernetesは、DNS名または独自のIPアドレスを使ってコンテナを公開することができます。コンテナへのトラフィックが多い場合は、Kubernetesは負荷分散し、ネットワークトラフィックを振り分けることができるたため、デプロイが安定します。
|
||||
Kubernetesは、DNS名または独自のIPアドレスを使ってコンテナを公開することができます。コンテナへのトラフィックが多い場合は、Kubernetesは負荷分散し、ネットワークトラフィックを振り分けることができるため、デプロイが安定します。
|
||||
* **ストレージ オーケストレーション**
|
||||
Kubernetesは、ローカルストレージやパブリッククラウドプロバイダーなど、選択したストレージシステムを自動でマウントすることができます。
|
||||
* **自動化されたロールアウトとロールバック**
|
||||
|
|
|
|||
|
|
@ -20,7 +20,7 @@ Serviceのすべてのネットワークエンドポイントが単一のEndpoin
|
|||
|
||||
## EndpointSliceリソース {#endpointslice-resource}
|
||||
|
||||
Kubernetes内ではEndpointSliceにはネットワークエンドポイントの集合へのリファレンスが含まれます。EndpointSliceコントローラーは、{{< glossary_tooltip text="セレクター" term_id="selector" >}}が指定されると、Kubernetes Serviceに対するEndpointSliceを自動的に作成します。これらのEndpointSliceにはServiceセレクターに一致する任意のPodへのリファレクンスが含まれます。EndpointSliceはネットワークエンドポイントをユニークなServiceとPortの組み合わせでグループ化します。EndpointSliceオブジェクトの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。
|
||||
Kubernetes内ではEndpointSliceにはネットワークエンドポイントの集合へのリファレンスが含まれます。EndpointSliceコントローラーは、{{< glossary_tooltip text="セレクター" term_id="selector" >}}が指定されると、Kubernetes Serviceに対するEndpointSliceを自動的に作成します。これらのEndpointSliceにはServiceセレクターに一致する任意のPodへのリファレンスが含まれます。EndpointSliceはネットワークエンドポイントをユニークなServiceとPortの組み合わせでグループ化します。EndpointSliceオブジェクトの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。
|
||||
|
||||
一例として、以下に`example`というKubernetes Serviceに対するサンプルのEndpointSliceリソースを示します。
|
||||
|
||||
|
|
|
|||
|
|
@ -37,7 +37,7 @@ Ingressリソースが動作するためには、クラスターでIngressコン
|
|||
|
||||
## 複数のIngressコントローラーの使用 {#using-multiple-ingress-controllers}
|
||||
|
||||
[Ingressコントローラーは、好きな数だけ](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers))クラスターにデプロイすることができます。Ingressを作成する際には、クラスター内に複数のIngressコントローラーが存在する場合にどのIngressコントローラーを使用するかを示すために適切な[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)のアノテーションを指定します。
|
||||
[Ingressコントローラーは、好きな数だけ](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)クラスターにデプロイすることができます。Ingressを作成する際には、クラスター内に複数のIngressコントローラーが存在する場合にどのIngressコントローラーを使用するかを示すために適切な[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)のアノテーションを指定します。
|
||||
|
||||
クラスを定義しない場合、クラウドプロバイダーはデフォルトのIngressコントローラーを使用する場合があります。
|
||||
|
||||
|
|
|
|||
|
|
@ -712,7 +712,7 @@ NLBの背後にあるインスタンスに対してクライアントのトラ
|
|||
|------|----------|---------|------------|---------------------|
|
||||
| ヘルスチェック | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
|
||||
| クライアントのトラフィック | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
|
||||
| MTCによるサービスディスカバリー | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
|
||||
| MTUによるサービスディスカバリー | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (デフォルト: `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
|
||||
|
||||
どのクライアントIPがNLBにアクセス可能かを制限するためには、`loadBalancerSourceRanges`を指定してください。
|
||||
|
||||
|
|
|
|||
|
|
@ -31,8 +31,9 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
* ReplicaSetをロールアウトするために[Deploymentの作成](#creating-a-deployment)を行う: ReplicaSetはバックグラウンドでPodを作成します。Podの作成が完了したかどうかは、ロールアウトのステータスを確認してください。
|
||||
* DeploymentのPodTemplateSpecを更新することにより[Podの新しい状態を宣言する](#updating-a-deployment): 新しいReplicaSetが作成され、Deploymentは指定された頻度で古いReplicaSetから新しいReplicaSetへのPodの移行を管理します。新しいReplicaSetはDeploymentのリビジョンを更新します。
|
||||
* Deploymentの現在の状態が不安定な場合、[Deploymentのロールバック](#rolling-back-a-deployment)をする: ロールバックによる各更新作業は、Deploymentのリビジョンを更新します。
|
||||
* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する
|
||||
* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する。
|
||||
* PodTemplateSpecに対する複数の修正を適用するために[Deploymentを停止(Pause)し](#pausing-and-resuming-a-deployment)、それを再開して新しいロールアウトを開始します。
|
||||
* [Deploymentのステータス](#deployment-status) をロールアウトが失敗したサインとして利用する。
|
||||
* 今後必要としない[古いReplicaSetのクリーンアップ](#clean-up-policy)
|
||||
|
||||
## Deploymentの作成 {#creating-a-deployment}
|
||||
|
|
@ -82,7 +83,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
```
|
||||
クラスターにてDeploymentを調査するとき、以下のフィールドが出力されます。
|
||||
* `NAME`は、クラスター内にあるDeploymentの名前一覧です。
|
||||
* `READY`は、ユーザーが使用できるアプリケーションのレプリカの数です。
|
||||
* `READY`は、ユーザーが使用できるアプリケーションのレプリカの数です。使用可能な数/理想的な数の形式で表示されます。
|
||||
* `UP-TO-DATE`は、理想的な状態を満たすためにアップデートが完了したレプリカの数です。
|
||||
* `AVAILABLE`は、ユーザーが利用可能なレプリカの数です。
|
||||
* `AGE`は、アプリケーションが稼働してからの時間です。
|
||||
|
|
@ -133,7 +134,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
{{< note >}}
|
||||
Deploymentに対して適切なセレクターとPodテンプレートのラベルを設定する必要があります(このケースでは`app: nginx`)。
|
||||
|
||||
ラベルやセレクターを他のコントローラーと重複させないでください(他のDeploymentやStatefulSetを含む)。Kubernetesはユーザーがラベルを重複させることを止めないため、複数のコントローラーでセレクターの重複が発生すると、コントローラー間で衝突し予期せぬふるまいをすることになります。
|
||||
ラベルやセレクターを他のコントローラーと重複させないでください(他のDeploymentやStatefulSetを含む)。Kubernetesはユーザーがラベルを重複させることを阻止しないため、複数のコントローラーでセレクターの重複が発生すると、コントローラー間で衝突し予期せぬふるまいをすることになります。
|
||||
{{< /note >}}
|
||||
|
||||
### pod-template-hashラベル
|
||||
|
|
@ -146,7 +147,7 @@ Deploymentに対して適切なセレクターとPodテンプレートのラベ
|
|||
|
||||
このラベルはDeploymentが管理するReplicaSetが重複しないことを保証します。このラベルはReplicaSetの`PodTemplate`をハッシュ化することにより生成され、生成されたハッシュ値はラベル値としてReplicaSetセレクター、Podテンプレートラベル、ReplicaSetが作成した全てのPodに対して追加されます。
|
||||
|
||||
## Deploymentの更新
|
||||
## Deploymentの更新 {#updating-a-deployment}
|
||||
|
||||
{{< note >}}
|
||||
Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナーイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。
|
||||
|
|
@ -589,13 +590,11 @@ Deploymentのローリングアップデートは、同時に複数のバージ
|
|||
```
|
||||
|
||||
* クラスター内で、解決できない新しいイメージに更新します。
|
||||
* You update to a new image which happens to be unresolvable from inside the cluster.
|
||||
```shell
|
||||
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
|
||||
```
|
||||
|
||||
実行結果は以下のとおりです。
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment image updated
|
||||
```
|
||||
|
|
@ -604,7 +603,8 @@ Deploymentのローリングアップデートは、同時に複数のバージ
|
|||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
実行結果は以下のとおりです。
|
||||
|
||||
実行結果は以下のとおりです。
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1989198191 5 5 0 9s
|
||||
|
|
@ -615,24 +615,26 @@ Deploymentのローリングアップデートは、同時に複数のバージ
|
|||
|
||||
上記の例では、3つのレプリカが古いReplicaSetに追加され、2つのレプリカが新しいReplicaSetに追加されました。ロールアウトの処理では、新しいレプリカ数のPodが正常になったと仮定すると、最終的に新しいReplicaSetに全てのレプリカを移動させます。これを確認するためには以下のコマンドを実行して下さい。
|
||||
|
||||
```shell
|
||||
kubectl get deploy
|
||||
```
|
||||
実行結果は以下のとおりです。
|
||||
```
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 15 18 7 8 7m
|
||||
```
|
||||
ロールアウトのステータスでレプリカがどのように各ReplicaSetに追加されるか確認できます。
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
実行結果は以下のとおりです。
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1989198191 7 7 0 7m
|
||||
nginx-deployment-618515232 11 11 11 7m
|
||||
```
|
||||
```shell
|
||||
kubectl get deploy
|
||||
```
|
||||
|
||||
実行結果は以下のとおりです。
|
||||
```
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 15 18 7 8 7m
|
||||
```
|
||||
ロールアウトのステータスでレプリカがどのように各ReplicaSetに追加されるか確認できます。
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
|
||||
実行結果は以下のとおりです。
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1989198191 7 7 0 7m
|
||||
nginx-deployment-618515232 11 11 11 7m
|
||||
```
|
||||
|
||||
## Deployment更新の一時停止と再開 {#pausing-and-resuming-a-deployment}
|
||||
|
||||
|
|
@ -752,7 +754,7 @@ Deploymentのローリングアップデートは、同時に複数のバージ
|
|||
nginx-3926361531 3 3 3 28s
|
||||
```
|
||||
{{< note >}}
|
||||
一時停止したDeploymentの稼働を再開させない限り、Deploymentをロールバックすることはできません。
|
||||
Deploymentの稼働を再開させない限り、一時停止したDeploymentをロールバックすることはできません。
|
||||
{{< /note >}}
|
||||
|
||||
## Deploymentのステータス {#deployment-status}
|
||||
|
|
@ -937,13 +939,13 @@ Deploymentが管理する古いReplicaSetをいくつ保持するかを指定す
|
|||
|
||||
## カナリアパターンによるデプロイ
|
||||
|
||||
Deploymentを使って一部のユーザーやサーバーに対してリリースのロールアウトをしたい場合、[リソースの管理](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments)に記載されているカナリアパターンに従って、リリース毎に1つずつ、複数のDeploymentを作成できます。
|
||||
Deploymentを使って一部のユーザーやサーバーに対してリリースのロールアウトをしたい場合、[リソースの管理](/ja/docs/concepts/cluster-administration/manage-deployment/#canary-deployments-カナリアデプロイ)に記載されているカナリアパターンに従って、リリース毎に1つずつ、複数のDeploymentを作成できます。
|
||||
|
||||
## Deployment Specの記述
|
||||
|
||||
他の全てのKubernetesの設定と同様に、Deploymentは`.apiVersion`、`.kind`や`.metadata`フィールドを必要とします。
|
||||
設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/docs/concepts/overview/working-with-objects/object-management/)を参照してください。
|
||||
Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。
|
||||
設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/ja/docs/concepts/overview/working-with-objects/object-management/)を参照してください。
|
||||
Deploymentオブジェクトの名前は、有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。
|
||||
Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要とします。
|
||||
|
||||
### Podテンプレート
|
||||
|
|
@ -992,25 +994,25 @@ Deploymentのセレクターに一致するラベルを持つPodを直接作成
|
|||
|
||||
`.spec.strategy.type==RollingUpdate`と指定されているとき、DeploymentはローリングアップデートによりPodを更新します。ローリングアップデートの処理をコントロールするために`maxUnavailable`と`maxSurge`を指定できます。
|
||||
|
||||
##### maxUnavailable
|
||||
##### Max Unavailable {#max-unavailable}
|
||||
|
||||
`.spec.strategy.rollingUpdate.maxUnavailable`はオプションのフィールドで、更新処理において利用不可となる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り捨てされて計算されます。`.spec.strategy.rollingUpdate.maxSurge`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。
|
||||
|
||||
例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると古いReplicaSetはすぐに理想状態の70%にスケールダウンされます。一度新しいPodが稼働できる状態になると、古いReplicaSetはさらにスケールダウンされ、続いて新しいReplicaSetがスケールアップされます。この間、利用可能なPodの総数は理想状態のPodの少なくとも70%以上になるように保証されます。
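`maxUnavailable`のパーセンテージが小数切り捨てで計算されることは、次の簡単な計算で確かめられます(`replicas=10`、`maxUnavailable=30%`は説明用の仮の値です)。

```shell
# 仮の例: replicas=10、maxUnavailable=30%(パーセンテージは小数切り捨て)
replicas=10
max_unavailable=$(( replicas * 30 / 100 ))       # bashの整数除算は切り捨て → 3
min_available=$(( replicas - max_unavailable ))  # 常に利用可能な最小Pod数 → 7
echo "$max_unavailable $min_available"
```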
|
||||
|
||||
##### maxSurge
|
||||
##### Max Surge {#max-surge}
|
||||
|
||||
`.spec.strategy.rollingUpdate.maxSurge`はオプションのフィールドで、理想状態のPod数を超えて作成できる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り上げで計算されます。`MaxUnavailable`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。
|
||||
|
||||
例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると新しいReplicaSetはすぐに更新されます。このとき古いPodと新しいPodの総数は理想状態の130%を超えないように更新されます。一度古いPodが削除されると、新しいReplicaSetはさらにスケールアップされます。この間、利用可能なPodの総数は理想状態のPodに対して最大130%になるように保証されます。
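同様に、`maxSurge`のパーセンテージは小数切り上げで計算されます。以下は説明用の仮の値(`replicas=10`、`maxSurge=30%`)での計算のスケッチです。

```shell
# 仮の例: replicas=10、maxSurge=30%(パーセンテージは小数切り上げ)
replicas=10
max_surge=$(( (replicas * 30 + 99) / 100 ))  # 切り上げ → 3
max_total=$(( replicas + max_surge ))        # 同時に存在できるPodの最大数 → 13
echo "$max_surge $max_total"
```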
|
||||
|
||||
### progressDeadlineSeconds
|
||||
### Progress Deadline Seconds
|
||||
|
||||
`.spec.progressDeadlineSeconds`はオプションのフィールドで、システムがDeploymentの[更新に失敗](#failed-deployment)したと判断するまでに待つ秒数を指定します。更新に失敗したと判断されたとき、リソースのステータスは`Type=Progressing`、`Status=False`かつ`Reason=ProgressDeadlineExceeded`となるのを確認できます。DeploymentコントローラーはDeploymentの更新をリトライし続けます。デフォルト値は600です。今後、自動的なロールバックが実装されたとき、更新失敗状態になるとすぐにDeploymentコントローラーがロールバックを行うようになります。
|
||||
|
||||
この値が指定されているとき、`.spec.minReadySeconds`より大きい値を指定する必要があります。
|
||||
|
||||
### minReadySeconds {#min-ready-seconds}
|
||||
### Min Ready Seconds {#min-ready-seconds}
|
||||
|
||||
`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナーがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/ja/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。
|
||||
|
||||
|
|
@ -1020,7 +1022,7 @@ Deploymentのリビジョン履歴は、Deploymentが管理するReplicaSetに
|
|||
|
||||
`.spec.revisionHistoryLimit`はオプションのフィールドで、ロールバック可能な古いReplicaSetの数を指定します。この古いReplicaSetは`etcd`内のリソースを消費し、`kubectl get rs`の出力結果を見にくくします。Deploymentの各リビジョンの設定はReplicaSetに保持されます。このため一度古いReplicaSetが削除されると、そのリビジョンのDeploymentにロールバックすることができなくなります。デフォルトでは10もの古いReplicaSetが保持されます。しかし、この値の最適値は新しいDeploymentの更新頻度と安定性に依存します。
|
||||
|
||||
さらに詳しく言うと、この値を0にすると、0のレプリカを持つ古い全てのReplicaSetが削除されます。このケースでは、リビジョン履歴が完全に削除されているため新しいDeploymentのロールアウトを完了することができません。
|
||||
さらに詳しく言うと、この値を0にすると、0のレプリカを持つ古い全てのReplicaSetが削除されます。このケースでは、リビジョン履歴が完全に削除されているため新しいDeploymentのロールアウトを元に戻すことができません。
|
||||
|
||||
### paused
|
||||
|
||||
|
|
|
|||
|
|
@ -1,28 +1,11 @@
|
|||
---
|
||||
title: Kubernetesドキュメントがサポートしているバージョン
|
||||
content_type: concept
|
||||
title: 利用可能なドキュメントバージョン
|
||||
content_type: custom
|
||||
layout: supported-versions
|
||||
card:
|
||||
name: about
|
||||
weight: 10
|
||||
title: ドキュメントがサポートしているバージョン
|
||||
title: 利用可能なドキュメントバージョン
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
本ウェブサイトには、現行版とその直前4バージョンのKubernetesドキュメントがあります。
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 現行版
|
||||
|
||||
現在のバージョンは[{{< param "version" >}}](/)です。
|
||||
|
||||
## 以前のバージョン
|
||||
|
||||
{{< versions-other >}}
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,83 @@
|
|||
---
|
||||
title: Kubelet 認証/認可
|
||||
---
|
||||
|
||||
|
||||
## 概要
|
||||
|
||||
kubeletのHTTPSエンドポイントは、さまざまな感度のデータへのアクセスを提供するAPIを公開し、
|
||||
ノードとコンテナ内のさまざまなレベルの権限でタスクを実行できるようにします。
|
||||
|
||||
このドキュメントでは、kubeletのHTTPSエンドポイントへのアクセスを認証および承認する方法について説明します。
|
||||
|
||||
## Kubelet 認証
|
||||
|
||||
デフォルトでは、他の構成済み認証方法によって拒否されないkubeletのHTTPSエンドポイントへのリクエストは
|
||||
匿名リクエストとして扱われ、ユーザー名は`system:anonymous`、
|
||||
グループは`system:unauthenticated`になります。
|
||||
|
||||
匿名アクセスを無効にし、認証されていないリクエストに対して`401 Unauthorized`応答を送信するには:
|
||||
|
||||
* `--anonymous-auth=false`フラグでkubeletを開始します。
|
||||
|
||||
kubeletのHTTPSエンドポイントに対するX509クライアント証明書認証を有効にするには:
|
||||
|
||||
* `--client-ca-file`フラグでkubeletを起動し、クライアント証明書を確認するためのCAバンドルを提供します。
|
||||
* `--kubelet-client-certificate`および`--kubelet-client-key`フラグを使用してapiserverを起動します。
|
||||
* 詳細については、[apiserver認証ドキュメント](/ja/docs/reference/access-authn-authz/authentication/#x509-client-certs)を参照してください。
|
||||
|
||||
APIベアラートークン(サービスアカウントトークンを含む)を使用して、kubeletのHTTPSエンドポイントへの認証を行うには:
|
||||
|
||||
* APIサーバーで`authentication.k8s.io/v1beta1`グループが有効になっていることを確認します。
|
||||
* `--authentication-token-webhook`および`--kubeconfig`フラグを使用してkubeletを開始します。
|
||||
* kubeletは、構成済みのAPIサーバーで `TokenReview` APIを呼び出して、ベアラートークンからユーザー情報を判別します。
|
||||
|
||||
## Kubelet 承認
|
||||
|
||||
認証に成功した要求(匿名要求を含む)はすべて許可されます。デフォルトの認可モードは、すべての要求を許可する`AlwaysAllow`です。
|
||||
|
||||
kubelet APIへのアクセスを細分化する理由としては、次のようなものが考えられます:
|
||||
|
||||
* 匿名認証は有効になっていますが、匿名ユーザーがkubeletのAPIを呼び出す機能は制限する必要があります。
|
||||
* ベアラートークン認証は有効になっていますが、kubeletのAPIを呼び出す任意のAPIユーザー(サービスアカウントなど)の機能を制限する必要があります。
|
||||
* クライアント証明書の認証は有効になっていますが、構成されたCAによって署名されたクライアント証明書の一部のみがkubeletのAPIの使用を許可されている必要があります。
|
||||
|
||||
kubeletのAPIへのアクセスを細分化するには、APIサーバーに承認を委任します:
|
||||
|
||||
* APIサーバーで`authorization.k8s.io/v1beta1` APIグループが有効になっていることを確認します。
|
||||
* `--authorization-mode=Webhook`と`--kubeconfig`フラグでkubeletを開始します。
|
||||
* kubeletは、構成されたAPIサーバーで`SubjectAccessReview` APIを呼び出して、各リクエストが承認されているかどうかを判断します。
|
||||
|
||||
kubeletは、apiserverと同じ[リクエスト属性](/docs/reference/access-authn-authz/authorization/#review-your-request-attributes)アプローチを使用してAPIリクエストを承認します。
|
||||
|
||||
動詞は、受けとったリクエストのHTTP動詞から決定されます:
|
||||
|
||||
HTTP動詞 | リクエスト動詞
|
||||
----------|---------------
|
||||
POST | create
|
||||
GET, HEAD | get
|
||||
PUT | update
|
||||
PATCH | patch
|
||||
DELETE | delete
|
||||
|
||||
リソースとサブリソースは、受けとったリクエストのパスから決定されます:
|
||||
|
||||
Kubelet API | リソース | サブリソース
|
||||
-------------|----------|------------
|
||||
/stats/\* | nodes | stats
|
||||
/metrics/\* | nodes | metrics
|
||||
/logs/\* | nodes | log
|
||||
/spec/\* | nodes | spec
|
||||
*all others* | nodes | proxy
|
||||
|
||||
名前空間とAPIグループの属性は常に空の文字列であり、
|
||||
リソース名は常にkubeletの`Node` APIオブジェクトの名前です。
|
||||
|
||||
このモードで実行する場合は、apiserverに渡される`--kubelet-client-certificate`フラグと`--kubelet-client-key`
|
||||
フラグで識別されるユーザーが次の属性に対して許可されていることを確認します:
|
||||
|
||||
* verb=\*, resource=nodes, subresource=proxy
|
||||
* verb=\*, resource=nodes, subresource=stats
|
||||
* verb=\*, resource=nodes, subresource=log
|
||||
* verb=\*, resource=nodes, subresource=spec
|
||||
* verb=\*, resource=nodes, subresource=metrics
|
||||
|
|
@ -2,7 +2,7 @@
|
|||
title: APIサーバー
|
||||
id: kube-apiserver
|
||||
date: 2018-04-12
|
||||
full_link: /docs/reference/generated/kube-apiserver/
|
||||
full_link: /docs/concepts/overview/components/#kube-apiserver
|
||||
short_description: >
|
||||
Kubernetes APIを提供するコントロールプレーンのコンポーネントです。
|
||||
|
||||
|
|
|
|||
|
|
@ -8,16 +8,10 @@ card:
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
[Kubectl概要](/ja/docs/reference/kubectl/overview/)と[JsonPathガイド](/docs/reference/kubectl/jsonpath)も合わせてご覧ください。
|
||||
|
||||
このページは`kubectl`コマンドの概要です。
|
||||
|
||||
|
||||
このページには、一般的によく使われる`kubectl`コマンドとフラグのリストが含まれています。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
# kubectl - チートシート
|
||||
|
||||
## Kubectlコマンドの補完
|
||||
|
||||
### BASH
|
||||
|
|
@ -76,7 +70,7 @@ kubectl config set-context gce --user=cluster-admin --namespace=foo \
|
|||
kubectl config unset users.foo # ユーザーfooを削除します
|
||||
```
|
||||
|
||||
## Apply
|
||||
## Kubectl Apply
|
||||
|
||||
`apply`はKubernetesリソースを定義するファイルを通じてアプリケーションを管理します。`kubectl apply`を実行して、クラスター内のリソースを作成および更新します。これは、本番環境でKubernetesアプリケーションを管理する推奨方法です。
|
||||
詳しくは[Kubectl Book](https://kubectl.docs.kubernetes.io)をご覧ください。
|
||||
|
|
@ -372,6 +366,7 @@ kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.
|
|||
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
|
||||
```
|
||||
|
||||
kubectlに関するより多くのサンプルは[カスタムカラムのリファレンス](/ja/docs/reference/kubectl/overview/#custom-columns)を参照してください。
|
||||
|
||||
### Kubectlのログレベルとデバッグ
|
||||
kubectlのログレベルは、レベルを表す整数が後に続く`-v`または`--v`フラグで制御されます。一般的なKubernetesのログ記録規則と関連するログレベルについて、[こちら](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)で説明します。
|
||||
|
|
@ -392,11 +387,10 @@ kubectlのログレベルは、レベルを表す整数が後に続く`-v`また
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* kubectlについてより深く学びたい方は[kubectl概要](/ja/docs/reference/kubectl/overview/)をご覧ください。
|
||||
* kubectlについてより深く学びたい方は[kubectl概要](/ja/docs/reference/kubectl/overview/)や[JsonPath](/docs/reference/kubectl/jsonpath)をご覧ください。
|
||||
|
||||
* オプションについては[kubectl options](/docs/reference/kubectl/kubectl/)をご覧ください。
|
||||
|
||||
|
||||
* また[kubectlの利用パターン](/docs/reference/kubectl/conventions/)では再利用可能なスクリプトでkubectlを利用する方法を学べます。
|
||||
|
||||
* コミュニティ版[kubectlチートシート](https://github.com/dennyzhang/cheatsheet-kubernetes-A4)もご覧ください。
|
||||
|
|
|
|||
|
|
@ -26,7 +26,7 @@ MinikubeのサポートするKubernetesの機能:
|
|||
|
||||
## インストール
|
||||
|
||||
[Minikubeのインストール](/ja/docs/tasks/tools/install-minikube/)を参照してください。
|
||||
ツールのインストールについて知りたい場合は、公式の[Get Started!](https://minikube.sigs.k8s.io/docs/start/)のガイドに従ってください。
|
||||
|
||||
## クイックスタート
|
||||
|
||||
|
|
|
|||
|
|
@ -130,7 +130,7 @@ yum install -y yum-utils device-mapper-persistent-data lvm2
|
|||
```
|
||||
|
||||
```shell
|
||||
### Dockerリポジトリの追加
|
||||
## Dockerリポジトリの追加
|
||||
yum-config-manager --add-repo \
|
||||
https://download.docker.com/linux/centos/docker-ce.repo
|
||||
```
|
||||
|
|
@ -215,73 +215,107 @@ sysctl --system
|
|||
{{< tabs name="tab-cri-cri-o-installation" >}}
|
||||
{{% tab name="Debian" %}}
|
||||
|
||||
CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。
|
||||
|
||||
| Operating system | $OS |
|
||||
| ---------------- | ----------------- |
|
||||
| Debian Unstable | `Debian_Unstable` |
|
||||
| Debian Testing | `Debian_Testing` |
|
||||
|
||||
<br />
|
||||
そして、`$VERSION`にKubernetesのバージョンに合わせたCRI-Oのバージョンを設定します。例えば、CRI-O 1.18をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3をインストールするには、`VERSION=1.18:1.18.3` を設定します。
|
||||
<br />
|
||||
|
||||
以下を実行します。
|
||||
```shell
|
||||
# Debian Unstable/Sid
|
||||
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add -
|
||||
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
|
||||
|
||||
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
|
||||
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
|
||||
|
||||
apt-get update
|
||||
apt-get install cri-o cri-o-runc
|
||||
```
|
||||
|
||||
```shell
|
||||
# Debian Testing
|
||||
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add -
|
||||
```
|
||||
```shell
|
||||
# Debian 10
|
||||
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add -
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
```shell
|
||||
# Raspbian 10
|
||||
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add -
|
||||
```
|
||||
{{% tab name="Ubuntu" %}}
|
||||
|
||||
それでは、CRI-Oをインストールします:
|
||||
CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。
|
||||
|
||||
| Operating system | $OS |
|
||||
| ---------------- | ----------------- |
|
||||
| Ubuntu 20.04 | `xUbuntu_20.04` |
|
||||
| Ubuntu 19.10 | `xUbuntu_19.10` |
|
||||
| Ubuntu 19.04 | `xUbuntu_19.04` |
|
||||
| Ubuntu 18.04 | `xUbuntu_18.04` |
|
||||
|
||||
<br />
|
||||
次に、`$VERSION`をKubernetesのバージョンと一致するCRI-Oのバージョンに設定します。例えば、CRI-O 1.18をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3 をインストールするには、`VERSION=1.18:1.18.3` を設定します。
|
||||
<br />
|
||||
|
||||
以下を実行します。
|
||||
```shell
|
||||
sudo apt-get install cri-o-1.17
|
||||
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
||||
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
|
||||
|
||||
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
|
||||
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
|
||||
|
||||
apt-get update
|
||||
apt-get install cri-o cri-o-runc
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="Ubuntu 18.04, 19.04 and 19.10" %}}
|
||||
{{% tab name="CentOS" %}}
|
||||
|
||||
CRI-Oを以下のOSにインストールするには、環境変数$OSを以下の表の適切なフィールドに設定します。
|
||||
|
||||
| Operating system | $OS |
|
||||
| ---------------- | ----------------- |
|
||||
| Centos 8 | `CentOS_8` |
|
||||
| Centos 8 Stream | `CentOS_8_Stream` |
|
||||
| Centos 7 | `CentOS_7` |
|
||||
|
||||
<br />
|
||||
次に、`$VERSION`をKubernetesのバージョンと一致するCRI-Oのバージョンに設定します。例えば、CRI-O 1.18 をインストールしたい場合は、`VERSION=1.18` を設定します。インストールを特定のリリースに固定することができます。バージョン 1.18.3 をインストールするには、`VERSION=1.18:1.18.3` を設定します。
|
||||
<br />
|
||||
|
||||
以下を実行します。
|
||||
```shell
|
||||
# パッケージレポジトリを設定する
|
||||
. /etc/os-release
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
|
||||
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
|
||||
sudo apt-get update
|
||||
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
|
||||
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
|
||||
yum install cri-o
|
||||
```
|
||||
|
||||
```shell
|
||||
# CRI-Oのインストール
|
||||
sudo apt-get install cri-o-1.17
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="CentOS/RHEL 7.4+" %}}
|
||||
|
||||
```shell
|
||||
# 必要なパッケージのインストール
|
||||
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
|
||||
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}/CentOS_7/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo
|
||||
```
|
||||
|
||||
```shell
|
||||
# CRI-Oのインストール
|
||||
yum install -y cri-o
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
||||
{{% tab name="openSUSE Tumbleweed" %}}
|
||||
|
||||
```shell
|
||||
sudo zypper install cri-o
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="Fedora" %}}
|
||||
|
||||
$VERSIONには、Kubernetesのバージョンと一致するCRI-Oのバージョンを設定します。例えば、CRI-O 1.18をインストールしたい場合は、`VERSION=1.18`を設定します。
|
||||
以下のコマンドで、利用可能なバージョンを見つけることができます。
|
||||
```shell
|
||||
dnf module list cri-o
|
||||
```
|
||||
CRI-OはFedoraの特定のリリースにピン留めすることをサポートしていません。
|
||||
|
||||
以下を実行します。
|
||||
```shell
|
||||
dnf module enable cri-o:$VERSION
|
||||
dnf install cri-o
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
### CRI-Oの起動
|
||||
|
||||
```shell
|
||||
|
|
@ -321,7 +355,7 @@ sysctl --system
|
|||
### containerdのインストール
|
||||
|
||||
{{< tabs name="tab-cri-containerd-installation" >}}
|
||||
{{< tab name="Ubuntu 16.04" codelang="bash" >}}
|
||||
{{% tab name="Ubuntu 16.04" %}}
|
||||
|
||||
```shell
|
||||
# (containerdのインストール)
|
||||
|
|
@ -335,7 +369,7 @@ apt-get update && apt-get install -y apt-transport-https ca-certificates curl so
|
|||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
|
||||
```
|
||||
|
||||
```
|
||||
```shell
|
||||
## Dockerのaptリポジトリの追加
|
||||
add-apt-repository \
|
||||
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ content_type: concept
|
|||
|
||||
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). There are several ways to deploy Kubernetes on CloudStack, depending on the cloud being used and which images are made available. CloudStack also has a Vagrant plugin, so Vagrant can be used to deploy Kubernetes either with the existing shell provisioner or with new Salt-based recipes.
|
||||
|
||||
[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
|
||||
[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
|
||||
|
||||
This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
|
||||
|
||||
|
|
|
|||
|
|
@ -27,7 +27,7 @@ kops is an automated provisioning system:
|
|||
|
||||
* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.
|
||||
|
||||
* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them.
|
||||
* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).
|
||||
|
||||
|
||||
|
||||
|
|
@ -140,7 +140,7 @@ you choose for organization reasons (e.g. you are allowed to create records unde
|
|||
but not under `example.com`).
|
||||
|
||||
Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
|
||||
the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
|
||||
the [normal process](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html), or
|
||||
with a command such as `aws route53 create-hosted-zone --name dev.example.com --caller-reference 1`.
|
||||
|
||||
You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here,
|
||||
|
|
@@ -231,7 +231,7 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
 ## {{% heading "whatsnext" %}}
 
-* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
 * Learn more about `kops` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options.
 * Follow `kops` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors)
 * Contribute to `kops` by addressing or raising an issue [GitHub Issues](https://github.com/kubernetes/kops/issues)
@@ -1,12 +1,12 @@
 ---
-title: kubeadmを使用したシングルコントロールプレーンクラスターの作成
+title: kubeadmを使用したクラスターの作成
 content_type: task
 weight: 30
 ---
 
 <!-- overview -->
 
-<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">`kubeadm`ツールは、ベストプラクティスに準拠した実用最小限のKubernetesクラスターをブートストラップする手助けをします。実際、`kubeadm`を使用すれば、[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification)に通るクラスターをセットアップすることができます。`kubeadm`は、[ブートストラップトークン](/docs/reference/access-authn-authz/bootstrap-tokens/)やクラスターのアップグレードなどのその他のクラスターのライフサイクルの機能もサポートします。
+<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">ベストプラクティスに準拠した実用最小限のKubernetesクラスターを作成します。実際、`kubeadm`を使用すれば、[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification)に通るクラスターをセットアップすることができます。`kubeadm`は、[ブートストラップトークン](/docs/reference/access-authn-authz/bootstrap-tokens/)やクラスターのアップグレードなどのその他のクラスターのライフサイクルの機能もサポートします。
 
 `kubeadm`ツールは、次のようなときに適しています。
@@ -41,7 +41,7 @@ kubeadmツールの全体の機能の状態は、一般利用可能(GA)です。
 
 ## 目的
 
-* シングルコントロールプレーンのKubernetesクラスターまたは[高可用性クラスター](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/)をインストールする
+* シングルコントロールプレーンのKubernetesクラスターをインストールする
 * クラスター上にPodネットワークをインストールして、Podがお互いに通信できるようにする
 
 ## 手順
@@ -76,7 +76,7 @@ kubeadm init <args>
 
 `--apiserver-advertise-address`は、この特定のコントロールプレーンノードのAPIサーバーへのadvertise addressを設定するために使えますが、`--control-plane-endpoint`は、すべてのコントロールプレーンノード共有のエンドポイントを設定するために使えます。
 
-`--control-plane-endpoint`はIPアドレスを受け付けますが、IPアドレスへマッピングされるDNSネームも使用できます。利用可能なソリューションをそうしたマッピングの観点から評価するには、ネットワーク管理者に相談してください。
+`--control-plane-endpoint`はIPアドレスと、IPアドレスへマッピングできるDNS名を使用できます。利用可能なソリューションをそうしたマッピングの観点から評価するには、ネットワーク管理者に相談してください。
 
 以下にマッピングの例を示します。
@@ -203,9 +203,14 @@ export KUBECONFIG=/etc/kubernetes/admin.conf
 
 {{< /caution >}}
 
-CNIを使用するKubernetes Podネットワークを提供する外部のプロジェクトがいくつかあります。一部のプロジェクトでは、[ネットワークポリシー](/docs/concepts/services-networking/networkpolicies/)もサポートしています。
-
-利用できる[ネットワークアドオンとネットワークポリシーアドオン](/docs/concepts/cluster-administration/addons/#networking-and-network-policy)のリストを確認してください。
+CNIを使用するKubernetes Podネットワークを提供する外部のプロジェクトがいくつかあります。一部のプロジェクトでは、[ネットワークポリシー](/ja/docs/concepts/services-networking/network-policies/)もサポートしています。
+{{< note >}}
+現在、Calicoはkubeadmプロジェクトがe2eテストを実施している唯一のCNIプラグインです。
+もしCNIプラグインに関する問題を見つけた場合、kubeadmやkubernetesではなく、そのCNIプラグインの課題管理システムへ問題を報告してください。
+{{< /note >}}
+
+[Kubernetesのネットワークモデル](/ja/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)を実装したアドオンの一覧も確認してください。
 
 Podネットワークアドオンをインストールするには、コントロールプレーンノード上またはkubeconfigクレデンシャルを持っているノード上で、次のコマンドを実行します。
@@ -213,91 +218,7 @@ Podネットワークアドオンをインストールするには、コント
 kubectl apply -f <add-on.yaml>
 ```
 
-インストールできるPodネットワークは、クラスターごとに1つだけです。以下の手順で、いくつかのよく使われるPodネットワークプラグインをインストールできます。
-
-{{< tabs name="tabs-pod-install" >}}
-
-{{% tab name="Calico" %}}
-[Calico](https://docs.projectcalico.org/latest/introduction/)は、ネットワークとネットワークポリシーのプロバイダーです。Calicoは柔軟なさまざまなネットワークオプションをサポートするため、自分の状況に適した最も効果的なオプションを選択できます。たとえば、ネットワークのオーバーレイの有無や、BGPの有無が選べます。Calicoは、ホスト、Pod、(もしIstioとEnvoyを使っている場合)サービスメッシュレイヤー上のアプリケーションに対してネットワークポリシーを強制するために、同一のエンジンを使用しています。Calicoは、`amd64`、`arm64`、`ppc64le`を含む複数のアーキテクチャで動作します。
-
-デフォルトでは、Calicoは`192.168.0.0/16`をPodネットワークのCIDRとして使いますが、このCIDRはcalico.yamlファイルで設定できます。Calicoを正しく動作させるためには、これと同じCIDRを`--pod-network-cidr=192.168.0.0/16`フラグまたはkubeadmの設定を使って、`kubeadm init`コマンドに渡す必要があります。
-
-```shell
-kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
-```
-
-{{% /tab %}}
-
-{{% tab name="Cilium" %}}
-Ciliumを正しく動作させるためには、`kubeadm init`に `--pod-network-cidr=10.217.0.0/16`を渡さなければなりません。
-
-Ciliumのデプロイは、次のコマンドを実行するだけでできます。
-
-```shell
-kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.6/install/kubernetes/quick-install.yaml
-```
-
-すべてのCilium Podが`READY`とマークされたら、クラスターを使い始められます。
-
-```shell
-kubectl get pods -n kube-system --selector=k8s-app=cilium
-```
-
-出力は次のようになります。
-
-```
-NAME           READY   STATUS    RESTARTS   AGE
-cilium-drxkl   1/1     Running   0          18m
-```
-
-Ciliumはkube-proxyの代わりに利用することもできます。詳しくは[Kubernetes without kube-proxy](https://docs.cilium.io/en/stable/gettingstarted/kubeproxy-free)を読んでください。
-
-KubernetesでのCiliumの使い方に関するより詳しい情報は、[Kubernetes Install guide for Cilium](https://docs.cilium.io/en/stable/kubernetes/)を参照してください。
-{{% /tab %}}
-
-{{% tab name="Contiv-VPP" %}}
-[Contiv-VPP](https://contivpp.io/)は、[FD.io VPP](https://fd.io/)をベースとするプログラマブルなCNF vSwitchを採用し、機能豊富で高性能なクラウドネイティブなネットワーキングとサービスを提供します。
-
-Contiv-VPPは、k8sサービスとネットワークポリシーを(VPP上の)ユーザースペースで実装しています。
-
-こちらのインストールガイドを参照してください: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md)
-{{% /tab %}}
-
-{{% tab name="Flannel" %}}
-`flannel`を正しく動作させるためには、`--pod-network-cidr=10.244.0.0/16`を`kubeadm init`に渡す必要があります。
-
-オーバーレイネットワークに参加しているすべてのホスト上で、ファイアウォールのルールが、UDPポート8285と8472のトラフィックを許可するように設定されていることを確認してください。この設定に関するより詳しい情報は、Flannelのトラブルシューティングガイドの[Firewall](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls)のセクションを参照してください。
-
-Flannelは、Linux下の`amd64`、`arm`、`arm64`、`ppc64le`、`s390x`アーキテクチャ上で動作します。Windows(`amd64`)はv0.11.0でサポートされたとされていますが、使用方法はドキュメントに書かれていません。
-
-```shell
-kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
-```
-
-`flannel`に関するより詳しい情報は、[GitHub上のCoreOSのflannelリポジトリ](https://github.com/coreos/flannel)を参照してください。
-{{% /tab %}}
-
-{{% tab name="Kube-router" %}}
-Kube-routerは、ノードへのPod CIDRの割り当てをkube-controller-managerに依存しています。そのため、`kubeadm init`時に`--pod-network-cidr`フラグを使用する必要があります。
-
-Kube-routerは、Podネットワーク、ネットワークポリシー、および高性能なIP Virtual Server(IPVS)/Linux Virtual Server(LVS)ベースのサービスプロキシーを提供します。
-
-Kube-routerを有効にしたKubernetesクラスターをセットアップするために`kubeadm`ツールを使用する方法については、公式の[セットアップガイド](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md)を参照してください。
-{{% /tab %}}
-
-{{% tab name="Weave Net" %}}
-Weave Netを使用してKubernetesクラスターをセットアップするより詳しい情報は、[アドオンを使用してKubernetesを統合する](https://www.weave.works/docs/net/latest/kube-addon/)を読んでください。
-
-Weave Netは、`amd64`、`arm`、`arm64`、`ppc64le`プラットフォームで追加の操作なしで動作します。Weave Netはデフォルトでharipinモードをセットします。このモードでは、Pod同士はPodIPを知らなくても、Service IPアドレス経由でアクセスできます。
-
-```shell
-kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
-```
-
-{{% /tab %}}
-
-{{< /tabs >}}
-
+インストールできるPodネットワークは、クラスターごとに1つだけです。
 
 Podネットワークがインストールされたら、`kubectl get pods --all-namespaces`の出力結果でCoreDNS Podが`Running`状態であることをチェックすることで、ネットワークが動作していることを確認できます。そして、一度CoreDNS Podが動作すれば、続けてノードを追加できます。
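Hedged aside on the CoreDNS check mentioned in the hunk above: the "all CoreDNS Pods are `Running`" condition can be sketched as a small shell filter. The sample output below is hypothetical (pod names and ages are made up); on a real cluster you would pipe the actual `kubectl get pods --all-namespaces` output instead.

```shell
# Hypothetical `kubectl get pods --all-namespaces` output (illustrative only).
sample='NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-abcde   1/1     Running   0          3m
kube-system   coredns-558bd4d5db-fghij   1/1     Running   0          3m'

# Count CoreDNS Pods whose STATUS column is not "Running";
# 0 means the Pod network is working and nodes can be joined.
not_ready=$(printf '%s\n' "$sample" | awk '/coredns/ && $4 != "Running" {n++} END {print n+0}')
echo "coredns pods not Running: $not_ready"
```

On a live cluster the same filter would be `kubectl get pods --all-namespaces | awk '/coredns/ && $4 != "Running"'`, which prints any CoreDNS Pod still waiting on the network add-on.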
@@ -375,7 +296,7 @@ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outfor
 ```
 
 {{< note >}}
-IPv6タプルを`<control-plane-host>:<control-plane-ip>`に指定するためには、IPv6アドレスをブラケットで囲みます。たとえば、`[fd00::101]:2073`のように書きます。
+IPv6タプルを`<control-plane-host>:<control-plane-port>`と指定するためには、IPv6アドレスを角括弧で囲みます。たとえば、`[fd00::101]:2073`のように書きます。
 {{< /note >}}
 
 出力は次のようになります。
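Hedged aside on the bracket rule in the note above: a minimal shell sketch (example address and port, not taken from a real cluster) showing how an IPv6 host is wrapped in square brackets before the port is appended.

```shell
host="fd00::101"   # example IPv6 control-plane address
port=2073

# IPv6 addresses contain ':', so they must be bracketed before the
# port is appended; plain hostnames and IPv4 addresses are left as-is.
case "$host" in
  *:*) endpoint="[$host]:$port" ;;
  *)   endpoint="$host:$port" ;;
esac
echo "$endpoint"
```

For the example values this prints `[fd00::101]:2073`, the form the note requires.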
@@ -407,7 +328,7 @@ kubectl --kubeconfig ./admin.conf get nodes
 {{< note >}}
 上の例では、rootユーザーに対するSSH接続が有効であることを仮定しています。もしそうでない場合は、`admin.conf`ファイルを誰か他のユーザーからアクセスできるようにコピーした上で、代わりにそのユーザーを使って`scp`してください。
 
-`admin.conf`ファイルはユーザーにクラスターに対する _特権ユーザー_ の権限を与えます。そのため、このファイルを使うのは控えめにしなければなりません。通常のユーザーには、一部の権限をホワイトリストに加えたユニークなクレデンシャルを生成することを推奨します。これには、`kubeadm alpha kubeconfig user --client-name <CN>`コマンドが使えます。このコマンドを実行すると、KubeConfigファイルがSTDOUTに出力されるので、ファイルに保存してユーザーに配布します。その後、`kubectl create (cluster)rolebinding`コマンドを使って権限をホワイトリストに加えます。
+`admin.conf`ファイルはユーザーにクラスターに対する _特権ユーザー_ の権限を与えます。そのため、このファイルを使うのは控えめにしなければなりません。通常のユーザーには、明示的に許可した権限を持つユニークなクレデンシャルを生成することを推奨します。これには、`kubeadm alpha kubeconfig user --client-name <CN>`コマンドが使えます。このコマンドを実行すると、KubeConfigファイルがSTDOUTに出力されるので、ファイルに保存してユーザーに配布します。その後、`kubectl create (cluster)rolebinding`コマンドを使って権限を付与します。
 {{< /note >}}
 
 ### (オプション)APIサーバーをlocalhostへプロキシする
@@ -433,10 +354,9 @@ kubectl --kubeconfig ./admin.conf proxy
 
 ```bash
 kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
-kubectl delete node <node name>
 ```
 
-その後、ノードが削除されたら、`kubeadm`のインストール状態をすべてリセットします。
+ノードが削除される前に、`kubeadm`によってインストールされた状態をリセットします。
 
 ```bash
 kubeadm reset
@@ -454,6 +374,11 @@ IPVS tablesをリセットしたい場合は、次のコマンドを実行する
 ipvsadm -C
 ```
 
+ノードを削除します。
+
+```bash
+kubectl delete node <node name>
+```
+
 クラスターのセットアップを最初から始めたいときは、`kubeadm init`や`kubeadm join`を適切な引数を付けて実行すればいいだけです。
 
 ### コントロールプレーンのクリーンアップ
@@ -469,7 +394,7 @@ ipvsadm -C
 * [Sonobuoy](https://github.com/heptio/sonobuoy)を使用してクラスターが適切に動作しているか検証する。
 * <a id="lifecycle" />`kubeadm`を使用したクラスターをアップグレードする方法について、[kubeadmクラスターをアップグレードする](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)を読む。
 * `kubeadm`の高度な利用方法について[kubeadmリファレンスドキュメント](/docs/reference/setup-tools/kubeadm/kubeadm)で学ぶ。
-* Kubernetesの[コンセプト](/ja/docs/concepts/)や[`kubectl`](/docs/user-guide/kubectl-overview/)についてもっと学ぶ。
+* Kubernetesの[コンセプト](/ja/docs/concepts/)や[`kubectl`](/ja/docs/reference/kubectl/overview/)についてもっと学ぶ。
 * Podネットワークアドオンのより完全なリストを[クラスターのネットワーク](/docs/concepts/cluster-administration/networking/)で確認する。
 * <a id="other-addons" />ロギング、モニタリング、ネットワークポリシー、仮想化、Kubernetesクラスターの制御のためのツールなど、その他のアドオンについて、[アドオンのリスト](/docs/concepts/cluster-administration/addons/)で確認する。
 * クラスターイベントやPod内で実行中のアプリケーションから送られるログをクラスターがハンドリングする方法を設定する。関係する要素の概要を理解するために、[ロギングのアーキテクチャ](/docs/concepts/cluster-administration/logging/)を読んでください。
@@ -486,9 +411,9 @@ ipvsadm -C
 
 ## バージョン互換ポリシー {#version-skew-policy}
 
-バージョンvX.Yの`kubeadm`ツールは、バージョンvX.YまたはvX.(Y-1)のコントロールプレーンを持つクラスターをデプロイできます。また、`kubeadm` vX.Yは、kubeadmで構築された既存のvX.(Y-1)のクラスターをアップグレートできます。
+バージョンv{{< skew latestVersion >}}の`kubeadm`ツールは、バージョンv{{< skew latestVersion >}}またはv{{< skew prevMinorVersion >}}のコントロールプレーンを持つクラスターをデプロイできます。また、バージョンv{{< skew latestVersion >}}の`kubeadm`は、バージョンv{{< skew prevMinorVersion >}}のkubeadmで構築されたクラスターをアップグレードできます。
 
-未来を見ることはできないため、kubeadm CLI vX.YはvX.(Y+1)をデプロイすることはできません。
+未来を見ることはできないため、kubeadm CLI v{{< skew latestVersion >}}はv{{< skew nextMinorVersion >}}をデプロイできないかもしれません。
 
 例: `kubeadm` v1.8は、v1.7とv1.8のクラスターをデプロイでき、v1.7のkubeadmで構築されたクラスターをv1.8にアップグレードできます。
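Hedged aside on the skew rule above: a kubeadm at minor version Y may deploy a control plane at Y or Y-1. This can be sketched as a tiny shell check on minor version numbers only — an illustration, not the full Kubernetes version-skew policy.

```shell
# Succeeds when kubeadm at minor version $1 may deploy a control plane
# at minor version $2: same minor, or exactly one minor older.
can_deploy() {
  [ "$2" -eq "$1" ] || [ "$2" -eq "$(($1 - 1))" ]
}

can_deploy 21 21 && echo "kubeadm v1.21 -> control plane v1.21: ok"
can_deploy 21 20 && echo "kubeadm v1.21 -> control plane v1.20: ok"
can_deploy 21 22 || echo "kubeadm v1.21 -> control plane v1.22: not guaranteed"
```

This mirrors the v1.8 example in the hunk above: `can_deploy 8 8` and `can_deploy 8 7` succeed, while a newer target is rejected.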
@@ -507,7 +432,7 @@ kubeletとコントロールプレーンの間や、他のKubernetesコンポー
 
 * 定期的に[etcdをバックアップ](https://coreos.com/etcd/docs/latest/admin_guide.html)する。kubeadmが設定するetcdのデータディレクトリは、コントロールプレーンノードの`/var/lib/etcd`にあります。
 
-* 複数のコントロールプレーンノードを使用する。[高可用性トポロジーのオプション](/docs/setup/production-environment/tools/kubeadm/ha-topology/)では、より高い可用性を提供するクラスターのトポロジーの選択について説明してます。
+* 複数のコントロールプレーンノードを使用する。[高可用性トポロジーのオプション](/ja/docs/setup/production-environment/tools/kubeadm/ha-topology/)では、[より高い可用性](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/)を提供するクラスターのトポロジーの選択について説明しています。
 
 ### プラットフォームの互換性 {#multi-platform}
@@ -520,4 +445,3 @@ kubeadmのdeb/rpmパッケージおよびバイナリは、[multi-platform propo
 ## トラブルシューティング {#troubleshooting}
 
 kubeadmに関する問題が起きたときは、[トラブルシューティングドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)を確認してください。
-
@@ -15,7 +15,10 @@ HAクラスターは次の方法で設定できます。
 
 HAクラスターをセットアップする前に、各トポロジーの利点と欠点について注意深く考慮する必要があります。
 
+{{< note >}}
+kubeadmは、etcdクラスターを静的にブートストラップします。
+詳細については、etcdの[クラスタリングガイド](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)をご覧ください。
+{{< /note >}}
+
 <!-- body -->
@@ -57,10 +57,10 @@ weight: 60
 
 - ロードバランサーは、apiserverポートで、全てのコントロールプレーンノードと通信できなければなりません。また、リスニングポートに対する流入トラフィックも許可されていなければなりません。
 
 - [HAProxy](http://www.haproxy.org/)をロードバランサーとして使用することができます。
 
 - ロードバランサーのアドレスは、常にkubeadmの`ControlPlaneEndpoint`のアドレスと一致することを確認してください。
 
 - 詳細は[Options for Software Load Balancing](https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing)をご覧ください。
 
 1. ロードバランサーに、最初のコントロールプレーンノードを追加し、接続をテストする:
 
    ```sh
@@ -87,7 +87,7 @@ weight: 60
 
 {{< note >}}`kubeadm init`の`--config`フラグと`--certificate-key`フラグは混在させることはできないため、[kubeadm configuration](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2)を使用する場合は`certificateKey`フィールドを適切な場所に追加する必要があります(`InitConfiguration`と`JoinConfiguration: controlPlane`の配下)。{{< /note >}}
 
-{{< note >}}CalicoなどのいくつかのCNIネットワークプラグインは`192.168.0.0/16`のようなCIDRを必要としますが、Weaveなどは必要としません。[CNIネットワークドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)を参照してください。PodにCIDRを設定するには、`ClusterConfiguration`の`networking`オブジェクトに`podSubnet: 192.168.0.0/16`フィールドを設定してください。{{< /note >}}
+{{< note >}}いくつかのCNIネットワークプラグインはPodのIPのCIDRの指定など追加の設定を必要としますが、必要としないプラグインもあります。[CNIネットワークドキュメント](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)を参照してください。PodにCIDRを設定するには、`ClusterConfiguration`の`networking`オブジェクトに`podSubnet: 192.168.0.0/16`フィールドを設定してください。{{< /note >}}
 
 - このような出力がされます:
@@ -64,7 +64,7 @@ ComponentConfigの詳細については、[このセクション](#configure-kub
 
 ### `kubeadm init`実行時の流れ
 
-`kubeadm init`を実行した場合、kubeletの設定は`/var/lib/kubelet/config.yaml`に格納され、クラスターのConfigMapにもアップロードされます。ConfigMapは`kubelet-config-1.X`という名前で、`.X`は初期化するKubernetesのマイナーバージョンを表します。またこの設定ファイルは、クラスタ内の全てのkubeletのために、クラスター全体設定の基準と共に`/etc/kubernetes/kubelet.conf`にも書き込まれます。この設定ファイルは、kubeletがAPIサーバと通信するためのクライアント証明書を指し示します。これは、[各kubeletにクラスターレベルの設定を配布](#propagating-cluster-level-configuration-to-each-kubelet)することの必要性を示しています。
+`kubeadm init`を実行した場合、kubeletの設定は`/var/lib/kubelet/config.yaml`に格納され、クラスターのConfigMapにもアップロードされます。ConfigMapは`kubelet-config-1.X`という名前で、`X`は初期化するKubernetesのマイナーバージョンを表します。またこの設定ファイルは、クラスタ内の全てのkubeletのために、クラスター全体設定の基準と共に`/etc/kubernetes/kubelet.conf`にも書き込まれます。この設定ファイルは、kubeletがAPIサーバと通信するためのクライアント証明書を指し示します。これは、[各kubeletにクラスターレベルの設定を配布](#propagating-cluster-level-configuration-to-each-kubelet)することの必要性を示しています。
 
 二つ目のパターンである、[インスタンス固有の設定内容を適用](#providing-instance-specific-configuration-details)するために、kubeadmは環境ファイルを`/var/lib/kubelet/kubeadm-flags.env`へ書き出します。このファイルは以下のように、kubelet起動時に渡されるフラグのリストを含んでいます。
@@ -99,7 +99,7 @@ kubeletが新たな設定を読み込むと、kubeadmは、KubeConfigファイ
 `kubeadm`には、systemdがどのようにkubeletを実行するかを指定した設定ファイルが同梱されています。
 kubeadm CLIコマンドは決してこのsystemdファイルには触れないことに注意してください。
 
-kubeadmの[DEBパッケージ](https://github.com/kubernetes/kubernetes/blob/master/build/debs/10-kubeadm.conf)または[RPMパッケージ](https://github.com/kubernetes/kubernetes/blob/master/build/rpms/10-kubeadm.conf)によってインストールされたこの設定ファイルは、`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`に書き込まれ、systemdで使用されます。基本的な`kubelet.service`([RPM用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service)または、[DEB用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service))を拡張します。
+kubeadmの[DEBパッケージ](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf)または[RPMパッケージ](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubeadm/10-kubeadm.conf)によってインストールされたこの設定ファイルは、`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`に書き込まれ、systemdで使用されます。基本的な`kubelet.service`([RPM用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service)または、[DEB用](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service))を拡張します。
 
 ```none
 [Service]
@@ -134,6 +134,5 @@ Kubernetesに同梱されるDEB、RPMのパッケージは以下の通りです
 | `kubeadm` | `/usr/bin/kubeadm`CLIツールと、[kubelet用のsystemdファイル](#the-kubelet-drop-in-file-for-systemd)をインストールします。 |
 | `kubelet` | kubeletバイナリを`/usr/bin`に、CNIバイナリを`/opt/cni/bin`にインストールします。 |
 | `kubectl` | `/usr/bin/kubectl`バイナリをインストールします。 |
-| `kubernetes-cni` | 公式のCNIバイナリを`/opt/cni/bin`ディレクトリにインストールします。 |
 | `cri-tools` | `/usr/bin/crictl`バイナリを[cri-tools gitリポジトリ](https://github.com/kubernetes-incubator/cri-tools)からインストールします。 |
@@ -1,68 +1,48 @@
 ---
-title: Configuring your kubernetes cluster to self-host the control plane
+title: コントロールプレーンをセルフホストするようにkubernetesクラスターを構成する
 content_type: concept
 weight: 100
 ---
 
 <!-- overview -->
 
-### Self-hosting the Kubernetes control plane {#self-hosting}
+### コントロールプレーンのセルフホスティング {#self-hosting}
 
-kubeadm allows you to experimentally create a _self-hosted_ Kubernetes control
-plane. This means that key components such as the API server, controller
-manager, and scheduler run as [DaemonSet pods](/ja/docs/concepts/workloads/controllers/daemonset/)
-configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/)
-configured in the kubelet via static files.
+kubeadmを使用すると、セルフホスト型のKubernetesコントロールプレーンを実験的に作成できます。これは、APIサーバー、コントローラーマネージャー、スケジューラーなどの主要コンポーネントが、静的ファイルを介してkubeletで構成された[static pods](/docs/tasks/configure-pod-container/static-pod/)ではなく、Kubernetes APIを介して構成された[DaemonSet pods](/ja/docs/concepts/workloads/controllers/daemonset/)として実行されることを意味します。
 
-To create a self-hosted cluster see the
-[kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting) command.
+セルフホスト型クラスターを作成する場合は[kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting)を参照してください。
 
 <!-- body -->
 
-#### Caveats
+#### 警告
 
 {{< caution >}}
-This feature pivots your cluster into an unsupported state, rendering kubeadm unable
-to manage you cluster any longer. This includes `kubeadm upgrade`.
+この機能により、クラスターがサポートされていない状態になり、kubeadmがクラスターを管理できなくなります。これには`kubeadm upgrade`が含まれます。
 {{< /caution >}}
 
-1. Self-hosting in 1.8 and later has some important limitations. In particular, a
-   self-hosted cluster _cannot recover from a reboot of the control-plane node_
-   without manual intervention.
+1. 1.8以降のセルフホスティングには、いくつかの重要な制限があります。特に、セルフホスト型クラスターは、手動の介入なしにコントロールプレーンのNode再起動から回復することはできません。
 
-1. By default, self-hosted control plane Pods rely on credentials loaded from
-   [`hostPath`](/docs/concepts/storage/volumes/#hostpath)
-   volumes. Except for initial creation, these credentials are not managed by
-   kubeadm.
+1. デフォルトでは、セルフホスト型のコントロールプレーンのPodは、[`hostPath`](/docs/concepts/storage/volumes/#hostpath)ボリュームからロードされた資格情報に依存しています。最初の作成を除いて、これらの資格情報はkubeadmによって管理されません。
 
-1. The self-hosted portion of the control plane does not include etcd,
-   which still runs as a static Pod.
+1. コントロールプレーンのセルフホストされた部分にはetcdは含まれず、etcdは引き続き静的Podとして実行されます。
 
-#### Process
+#### プロセス
 
-The self-hosting bootstrap process is documented in the [kubeadm design
-document](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-self-hosting).
+セルフホスティングのブートストラッププロセスは、[kubeadm design document](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-self-hosting)に記載されています。
 
-In summary, `kubeadm alpha selfhosting` works as follows:
+要約すると、`kubeadm alpha selfhosting`は次のように機能します。
 
-1. Waits for this bootstrap static control plane to be running and
-   healthy. This is identical to the `kubeadm init` process without self-hosting.
+1. ブートストラップされた静的コントロールプレーンが起動し、正常になるのを待ちます。これはセルフホスティングを使用しない`kubeadm init`のプロセスと同じです。
 
-1. Uses the static control plane Pod manifests to construct a set of
-   DaemonSet manifests that will run the self-hosted control plane.
-   It also modifies these manifests where necessary, for example adding new volumes
-   for secrets.
+1. 静的コントロールプレーンのPodのマニフェストを使用して、セルフホスト型コントロールプレーンを実行する一連のDaemonSetのマニフェストを構築します。また、必要に応じてこれらのマニフェストを変更します。たとえば、シークレット用の新しいボリュームを追加します。
 
-1. Creates DaemonSets in the `kube-system` namespace and waits for the
-   resulting Pods to be running.
+1. `kube-system`ネームスペースにDaemonSetを作成し、作成されたPodが起動するのを待ちます。
 
-1. Once self-hosted Pods are operational, their associated static Pods are deleted
-   and kubeadm moves on to install the next component. This triggers kubelet to
-   stop those static Pods.
+1. セルフホスト型のPodが操作可能になると、関連する静的Podが削除され、kubeadmは次のコンポーネントのインストールに進みます。これによりkubeletがトリガーされて、それらの静的Podが停止します。
 
-1. When the original static control plane stops, the new self-hosted control
-   plane is able to bind to listening ports and become active.
+1. 元の静的なコントロールプレーンが停止すると、新しいセルフホスト型コントロールプレーンはリスニングポートにバインドしてアクティブになります。
@@ -29,7 +29,8 @@ when using kubeadm to set up a kubernetes cluster.
 * Three hosts that can talk to each other over ports 2379 and 2380. This
   document assumes these default ports. However, they are configurable through
   the kubeadm config file.
-* Each host must [have docker, kubelet, and kubeadm installed][toolbox].
+* Each host must [have docker, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
+* Each host should have access to the Kubernetes container image registry (`k8s.gcr.io`) or list/pull the required etcd image using `kubeadm config images list/pull`. This guide will setup etcd instances as [static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
 * Some infrastructure to copy files between hosts. For example `ssh` and `scp`
   can satisfy this requirement.
@@ -6,68 +6,100 @@ weight: 20
 <!-- overview -->
 
-As with any program, you might run into an error installing or running kubeadm.
-This page lists some common failure scenarios and have provided steps that can help you understand and fix the problem.
+どのプログラムでもそうですが、kubeadmのインストールや実行でエラーが発生することがあります。このページでは、一般的な失敗例をいくつか挙げ、問題を理解して解決するための手順を示しています。
 
-If your problem is not listed below, please follow the following steps:
+本ページに問題が記載されていない場合は、以下の手順を行ってください:
 
-- If you think your problem is a bug with kubeadm:
-  - Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
-  - If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.
+- 問題がkubeadmのバグによるものと思った場合:
+  - [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues)にアクセスして、既存のIssueを探してください。
+  - Issueがない場合は、テンプレートにしたがって[新しくIssueを立ててください](https://github.com/kubernetes/kubeadm/issues/new)。
 
-- If you are unsure about how kubeadm works, you can ask on [Slack](http://slack.k8s.io/) in #kubeadm, or open a question on [StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes). Please include
-  relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.
+- kubeadmがどのように動作するかわからない場合は、[Slack](http://slack.k8s.io/)の#kubeadmチャンネルで質問するか、[StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)で質問をあげてください。その際は、他の方が助けを出しやすいように`#kubernetes`や`#kubeadm`といったタグをつけてください。
 
 <!-- body -->
 
+## RBACがないため、v1.18ノードをv1.17クラスタに結合できない
+
+v1.18では、同名のノードが既に存在する場合にクラスタ内のノードに参加しないようにする機能を追加しました。これには、ブートストラップトークンユーザがNodeオブジェクトをGETできるようにRBACを追加する必要がありました。
+
+しかし、これによりv1.18の`kubeadm join`がkubeadm v1.17で作成したクラスタに参加できないという問題が発生します。
+
+この問題を回避するには、次の2つの方法があります。
+
+- kubeadm v1.18を用いて、コントロールプレーンノード上で`kubeadm init phase bootstrap-token`を実行します。
+  これには、ブートストラップトークンの残りのパーミッションも同様に有効にすることに注意してください。
+
+- `kubectl apply -f ...`を使って以下のRBACを手動で適用します。
+
+  ```yaml
+  apiVersion: rbac.authorization.k8s.io/v1
+  kind: ClusterRole
+  metadata:
+    name: kubeadm:get-nodes
+  rules:
+  - apiGroups:
+    - ""
+    resources:
+    - nodes
+    verbs:
+    - get
+  ---
+  apiVersion: rbac.authorization.k8s.io/v1
+  kind: ClusterRoleBinding
+  metadata:
+    name: kubeadm:get-nodes
+  roleRef:
+    apiGroup: rbac.authorization.k8s.io
+    kind: ClusterRole
+    name: kubeadm:get-nodes
+  subjects:
+  - apiGroup: rbac.authorization.k8s.io
+    kind: Group
+    name: system:bootstrappers:kubeadm:default-node-token
+  ```
+
 ## インストール中に`ebtables`もしくは他の似たような実行プログラムが見つからない
 
-If you see the following warnings while running `kubeadm init`
+`kubeadm init`の実行中に以下のような警告が表示された場合は、以降に記載するやり方を行ってください。
 
 ```sh
 [preflight] WARNING: ebtables not found in system path
 [preflight] WARNING: ethtool not found in system path
 ```
 
-Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:
+このような場合、ノード上に`ebtables`、`ethtool`などの実行ファイルがない可能性があります。これらをインストールするには、以下のコマンドを実行します。
 
-- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
-- For CentOS/Fedora users, run `yum install ebtables ethtool`.
+- Ubuntu/Debianユーザーは、`apt install ebtables ethtool`を実行してください。
+- CentOS/Fedoraユーザーは、`yum install ebtables ethtool`を実行してください。
 
 ## インストール中にkubeadmがコントロールプレーンを待ち続けて止まる
 
-If you notice that `kubeadm init` hangs after printing out the following line:
+以下の行を出力した後に`kubeadm init`が止まる場合:
 
 ```sh
 [apiclient] Created API client, waiting for the control plane to become ready
 ```
 
-This may be caused by a number of problems. The most common are:
+これはいくつかの問題が原因となっている可能性があります。最も一般的なのは:
 
-- network connection problems. Check that your machine has full network connectivity before continuing.
-- the default cgroup driver configuration for the kubelet differs from that used by Docker.
-  Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:
+- ネットワーク接続の問題が挙げられます。続行する前に、お使いのマシンがネットワークに完全に接続されていることを確認してください。
+- kubeletのデフォルトのcgroupドライバの設定がDockerで使用されているものとは異なっている場合も考えられます。
+  システムログファイル(例: `/var/log/message`)をチェックするか、`journalctl -u kubelet`の出力に以下のようなエラーがないか調べてください:
 
 ```shell
 error: failed to run Kubelet: failed to create kubelet:
 misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
 ```
 
-There are two common ways to fix the cgroup driver problem:
+以上のようなエラーが現れていた場合、cgroupドライバの問題を解決するには、以下の2つの方法があります:
 
-1. Install Docker again following instructions
-   [here](/ja/docs/setup/independent/install-kubeadm/#installing-docker).
+1. [ここ](/ja/docs/setup/independent/install-kubeadm/#installing-docker)の指示に従ってDockerを再度インストールします。
 
-1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
-   [Configure cgroup driver used by kubelet on Master Node](/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
+1. Dockerのcgroupドライバに合わせてkubeletの設定を手動で変更します。その際は、[マスターノード上でkubeletが使用するcgroupドライバを設定する](/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)を参照してください。
 
-- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
+- コントロールプレーンのDockerコンテナがクラッシュループしたり、ハングしたりしている場合も考えられます。これは`docker ps`を実行し、`docker logs`で各コンテナを調査することで確認できます。
 
 ## 管理コンテナを削除する時にkubeadmが止まる
 
-The following could happen if Docker halts and does not remove any Kubernetes-managed containers:
+Dockerが停止して、Kubernetesで管理されているコンテナを削除しないと、以下のようなことが起こる可能性があります:
 
 ```bash
 sudo kubeadm reset
@ -78,95 +110,70 @@ sudo kubeadm reset
|
|||
(block)
|
||||
```
|
||||
|
||||
A possible solution is to restart the Docker service and then re-run `kubeadm reset`:
|
||||
考えられる解決策は、Dockerサービスを再起動してから`kubeadm reset`を再実行することです:
|
||||
|
||||
```bash
|
||||
sudo systemctl restart docker.service
|
||||
sudo kubeadm reset
|
||||
```
|
||||
|
||||
Inspecting the logs for docker may also be useful:
|
||||
dockerのログを調べるのも有効な場合があります:
|
||||
|
||||
```sh
|
||||
journalctl -ul docker
|
||||
journalctl -u docker
|
||||
```
|
||||
|
||||
## Podの状態が`RunContainerError`、`CrashLoopBackOff`、または`Error`
|
||||
## Podの状態が`RunContainerError`、`CrashLoopBackOff`、または`Error`となる
|
||||
|
||||
Right after `kubeadm init` there should not be any pods in these states.
|
||||
`kubeadm init`の直後には、これらの状態ではPodは存在しないはずです。

- If there are pods in one of these states _right after_ `kubeadm init`, please open an
issue in the kubeadm repo. `coredns` (or `kube-dns`) should be in the `Pending` state
until you have deployed the network solution.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
it's very likely that the Pod Network solution that you installed is somehow broken.
You might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network provider's issue tracker and get the issue triaged there.
- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option
when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`.
MountFlags can interfere with volumes mounted by Kubernetes, and put the Pods in the `CrashLoopBackOff` state.
The error happens when Kubernetes cannot find the `/var/run/secrets/kubernetes.io/serviceaccount` files.

## `coredns` (or `kube-dns`) is stuck in the `Pending` state

This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
of choice. You have to install a Pod Network
before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.

## `HostPort` services do not work

The `HostPort` and `HostIP` functionality is available depending on your Pod Network
provider. Please contact the author of the Pod Network solution to find out whether
`HostPort` and `HostIP` functionality are available.

Calico, Canal, and Flannel CNI providers are verified to support HostPort.

For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).

If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
services](/ja/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`.

## Pods are not accessible via their Service IP

- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
which allows pods to access themselves via their Service IP. This is an issue related to
[CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network
add-on provider to get the latest status of their support for hairpin mode.

- If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that `hostname -i` returns a routable IP address. By default the first
interface is connected to a non-routable host-only network. A workaround
is to modify `/etc/hosts`; see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.

## TLS certificate errors

The following error indicates a possible certificate mismatch.

```none
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```

- Verify that the `$HOME/.kube/config` file contains a valid certificate, and
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 --decode` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
- Unset the `KUBECONFIG` environment variable using:

```sh
unset KUBECONFIG
```

Or set it to the default `KUBECONFIG` location:

```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
```
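
The certificate check described in the first bullet can be exercised end to end. The sketch below generates a throwaway self-signed certificate, base64-encodes it the way a kubeconfig file stores certificate data, then decodes and inspects it; against a real cluster you would instead decode the `client-certificate-data` field from `$HOME/.kube/config`:

```shell
# Create a throwaway self-signed certificate to stand in for client-certificate-data.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=demo-user" 2>/dev/null

# kubeconfig files store certificates base64-encoded on a single line.
b64=$(base64 -w0 </tmp/demo.crt)

# Decode and inspect, exactly as you would with the real kubeconfig data.
echo "$b64" | base64 --decode | openssl x509 -noout -subject
```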

- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:

```sh
mv $HOME/.kube $HOME/.kube.bak

@ -177,38 +184,38 @@ Unable to connect to the server: x509: certificate signed by unknown authority (

## Default NIC when using flannel as the pod network in Vagrant

The following error might indicate that something was wrong in the pod network:

```sh
Error from server (NotFound): the server could not find the requested resource
```

- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.

Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.

This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
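
How the flag is passed depends on how flannel was deployed; with the usual `kube-flannel` DaemonSet manifest it is appended to the container arguments. The excerpt below is a sketch only (surrounding fields elided; the image tag is illustrative, and `eth1` assumes Vagrant's second, routable interface):

```yaml
# Excerpt from the kube-flannel DaemonSet container spec (sketch).
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
```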

## Non-public IP used for containers

In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:

```sh
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```

- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.

Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to DigitalOcean allows to query for the anchor IP from the droplet:

```sh
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```

The workaround is to tell `kubelet` which IP to use using `--node-ip`. When using DigitalOcean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. The [`KubeletExtraArgs` section of the kubeadm `NodeRegistrationOptions` structure](https://github.com/kubernetes/kubernetes/blob/release-1.13/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go) can be used for this.
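
With kubeadm, one way to set this flag on a worker is through the configuration file passed to `kubeadm join`; the fragment below is a sketch only (the API version matches the type linked above, and the IP address is a placeholder for the node's routable address):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Placeholder address: use the IP kubelet should advertise for this node.
    node-ip: "203.0.113.10"
```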

Then restart `kubelet`:

```sh
systemctl daemon-reload

@ -217,13 +224,12 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6

## `coredns` pods have `CrashLoopBackOff` or `Error` state

If you have nodes that are running SELinux with an older version of Docker you might experience a scenario
where the `coredns` pods are not starting. To solve that you can try one of the following options:

- Upgrade to a [newer version of Docker](/ja/docs/setup/independent/install-kubeadm/#installing-docker).
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:

```bash
kubectl -n kube-system get deployment coredns -o yaml | \
@ -231,108 +237,84 @@ kubectl -n kube-system get deployment coredns -o yaml | \
kubectl apply -f -
```

Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. [A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.

{{< warning >}}
Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise
the security of your cluster.
{{< /warning >}}

## etcd pods restart continually

If you encounter the following error:

```
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\""
```

this issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.

To work around the issue, choose one of these options:

- Roll back to an earlier version of Docker, such as 1.13.1-75

```
yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
```

- Install one of the more recent recommended versions, such as 18.06:

```bash
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64
```

## Not possible to pass a comma separated list of values to arguments inside a `--component-extra-args` flag

`kubeadm init` flags such as `--component-extra-args` allow you to pass custom arguments to a control-plane
component like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing
the values (`mapStringString`).

If you decide to pass an argument that supports multiple, comma-separated values such as
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists"` this flag will fail with
`flag: malformed pair, expect string=string`. This happens because the list of arguments for
`--apiserver-extra-args` expects `key=value` pairs and in this case `NamespacesExists` is considered
as a key that is missing a value.

Alternatively, you can try separating the `key=value` pairs like so:
`--apiserver-extra-args "enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists"`
but this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`.

A known workaround is to use the kubeadm [configuration file](/ja/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags).
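
Passed through the configuration file, the whole comma-separated list survives because the value is an ordinary string. A sketch, with illustrative plugin names:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # The full list reaches kube-apiserver unchanged.
    enable-admission-plugins: "LimitRanger,NamespaceExists"
```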

## kube-proxy scheduled before node is initialized by cloud-controller-manager

In cloud provider scenarios, kube-proxy can end up being scheduled on new worker nodes before
the cloud-controller-manager has initialized the node addresses. This causes kube-proxy to fail
to pick up the node's IP address properly and has knock-on effects to the proxy function managing
load balancers.

The following error can be seen in kube-proxy Pods:

```
server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []
proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
```

A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane
nodes regardless of their conditions, keeping it off of other nodes until their initial guarding
conditions abate:

```
kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" } ] } } } }'
```

The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027).

## The NodeRegistration.Taints field is omitted when marshalling kubeadm configuration

*Note: This [issue](https://github.com/kubernetes/kubeadm/issues/1358) only applies to tools that marshal kubeadm types (e.g. to a YAML configuration file). It will be fixed in kubeadm API v1beta2.*

By default, kubeadm applies the `node-role.kubernetes.io/master:NoSchedule` taint to control-plane nodes.
If you prefer kubeadm to not taint the control-plane node, and set `InitConfiguration.NodeRegistration.Taints` to an empty slice,
the field will be omitted when marshalling. When the field is omitted, kubeadm applies the default taint.

There are at least two workarounds:

1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/scheduling-eviction/taint-and-toleration/), unless other nodes have capacity.

2. Remove the taint after kubeadm init exits:

```bash
kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-
```
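
For the first workaround, the taint can be written directly into the `InitConfiguration`; a sketch (field names follow the `NodeRegistrationOptions` type discussed above):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  taints:
  - key: "node-role.kubernetes.io/master"
    effect: "PreferNoSchedule"
```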

## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}

On Linux distributions such as Fedora CoreOS, the directory `/usr` is mounted as a read-only filesystem.
For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/contributors/devel/sig-storage/flexvolume.md),
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
for the feature to work.

To work around this issue you can configure the flex-volume directory using the kubeadm
[configuration file](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2).

On the primary control-plane Node (created using `kubeadm init`) pass the following
file using `--config`:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
@ -348,7 +330,7 @@ controllerManager:
flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```

On joining Nodes:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
@ -358,5 +340,9 @@ nodeRegistration:
volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```

Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please
be advised that this is modifying a design principle of the Linux distribution.

## `kubeadm upgrade plan` prints out `context deadline exceeded` error message

This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`.

This issue is fixed as of version 1.19.

@ -8,7 +8,7 @@ weight: 30

This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).

Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:

* a highly available cluster
* composable attributes

@ -21,7 +21,8 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in
* openSUSE Leap 15
* continuous integration tests

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).

@ -50,7 +51,7 @@ Kubespray provides the following utilities to help provision your environment:

### (2/5) Compose an inventory file

After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
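
A minimal hand-written inventory might look like the sketch below; hostnames and addresses are placeholders, and the group names follow the conventions used by Kubespray's sample inventory:

```ini
# inventory/mycluster/inventory.ini (sketch)
[all]
node1 ansible_host=192.0.2.11 ip=192.0.2.11
node2 ansible_host=192.0.2.12 ip=192.0.2.12

[kube-master]
node1

[etcd]
node1

[kube-node]
node1
node2

[k8s-cluster:children]
kube-master
kube-node
```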

### (3/5) Plan your cluster deployment

@ -68,7 +69,7 @@ Kubespray provides the ability to customize many aspects of the deployment:
* {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods

Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.

### (4/5) Deploy a cluster

@ -110,7 +111,7 @@ When running the reset playbook, be sure not to accidentally target your product

## Feedback

* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/))
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues)

@ -20,9 +20,7 @@ To create a Kubernetes cluster on AWS, you will need an access

* [Kubernetes Operations](https://github.com/kubernetes/kops) - production-grade Kubernetes installation, upgrades, and management. Supports Debian, Ubuntu, CentOS, and RHEL on AWS.

* [kube-aws](https://github.com/kubernetes-incubator/kube-aws) creates and manages Kubernetes clusters with [Flatcar Linux](https://www.flatcar-linux.org/) nodes, using EC2, CloudFormation and Auto Scaling.

* [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades and manages highly-available Kubernetes clusters.
@ -46,10 +44,10 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```

An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/reference/kubectl/kubectl/)

By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
For more information, please read [kubeconfig files](/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

### Examples
@ -61,7 +59,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH

## Scaling the cluster

Adding and removing nodes through `kubectl` is not supported. You can still scale the amount of nodes manually by adjusting the 'Desired' and 'Max' properties within the [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.

## Tearing down the cluster
@ -77,12 +75,8 @@ cluster/kube-down.sh

IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------------- | ------------ | --------------------------------------------- | ---------| ----------------------------
AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
AWS | CoreOS | CoreOS | flannel | - | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | - | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community

## Further reading

Please see the [Kubernetes docs](/ja/docs/) for more details on administering and using a Kubernetes cluster.

@ -1,340 +0,0 @@

---
title: Running Kubernetes on CenturyLink Cloud
---

These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud.

You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).

## Finding Help

If you run into any problems or want help with anything, we are here to help. Reach out to us via any of the following ways:

- Submit a github issue
- Send an email to Kubernetes AT ctl DOT io
- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes)

## Clusters of VMs or Physical Servers, your choice.

- We support Kubernetes clusters on both Virtual Machines and Physical Servers. If you want to use physical servers for the worker nodes (minions), simply use the `--minion_type=bareMetal` flag.
- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/)
- Physical servers are only available in the VA1 and GB3 data centers.
- VMs are available in all 13 of our public cloud locations.

## Requirements

The requirements to run this script are:

- A linux administrative host (tested on ubuntu and macOS)
- python 2 (tested on 2.7.11)
- pip (installed with python as of 2.7.9)
- git
- A CenturyLink Cloud account with rights to create new hosts
- An active VPN connection to the CenturyLink Cloud from your linux host

## Script Installation

After you have all the requirements met, please follow these instructions to install this script.

1) Clone this repository and cd into it.

```shell
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc
```

2) Install all requirements, including

* Ansible
* CenturyLink Cloud SDK
* Ansible Modules

```shell
sudo pip install -r ansible/requirements.txt
```

3) Create the credentials file from the template and use it to set your ENV variables

```shell
cp ansible/credentials.sh.template ansible/credentials.sh
vi ansible/credentials.sh
source ansible/credentials.sh
```

4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [configuring a VPN connection to the CenturyLink Cloud network](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/).

#### Script Installation Example: Ubuntu 14 Walkthrough

If you use Ubuntu 14, for your convenience we have provided a step by step
guide to install the requirements and install the script.

```shell
# system
apt-get update
apt-get install -y git python python-crypto
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py

# installing this repository
mkdir -p ~/k8s-on-clc
cd ~/k8s-on-clc
git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git
cd adm-kubernetes-on-clc/
pip install -r requirements.txt

# getting started
cd ansible
cp credentials.sh.template credentials.sh; vi credentials.sh
source credentials.sh
```

## Cluster Creation

To create a new Kubernetes cluster, run the ```kube-up.sh``` script. A complete
list of script options and some examples are listed below.

```shell
CLC_CLUSTER_NAME=[name of kubernetes cluster]
cd ./adm-kubernetes-on-clc
bash kube-up.sh -c="$CLC_CLUSTER_NAME"
```

It takes about 15 minutes to create the cluster. Once the script completes, it
outputs some commands that will help you set up kubectl on your machine to
point to the new cluster.

When cluster creation is complete, its configuration files are stored
locally on your administrative host, in the following directory:

```shell
> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/
```
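
A tiny helper for computing that artifact directory for a given cluster name (our sketch; the layout is the one shown above):

```shell
# Path where kube-up.sh stores a cluster's artifacts, per the layout above.
clc_cluster_home() {
  echo "$HOME/.clc_kube/$1"
}
```

Later sections (kubectl config, PKI files, the admin password) all live under this directory.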

#### Cluster Creation: Script Options

```shell
Usage: kube-up.sh [OPTIONS]
Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster
Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
order to access the CenturyLinkCloud API

All options (both short and long form) require arguments, and must include "="
between option name and option value.

-h (--help)                   display this help and exit
-c= (--clc_cluster_name=)     set the name of the cluster, as used in CLC group names
-t= (--minion_type=)          standard -> VM (default), bareMetal -> physical
-d= (--datacenter=)           VA1 (default)
-m= (--minion_count=)         number of kubernetes minion nodes
-mem= (--vm_memory=)          number of GB of RAM for each minion
-cpu= (--vm_cpu=)             number of virtual CPUs for each minion node
-phyid= (--server_conf_id=)   physical server configuration id, one of
                                physical_server_20_core_conf_id
                                physical_server_12_core_conf_id
                                physical_server_4_core_conf_id (default)
-etcd_separate_cluster=yes    create a separate cluster of three etcd nodes,
                              otherwise run etcd on the master node
```
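
The usage text above insists that every option carry its argument after an `=`. A small validator sketch of that convention (ours, not part of kube-up.sh):

```shell
# Returns 0 when an argument follows the "=" convention described above.
valid_opt() {
  case "$1" in
    -h|--help) return 0 ;;   # help takes no argument
    -*=?*)     return 0 ;;   # e.g. -c=k8s_1, --minion_count=3
    *)         return 1 ;;
  esac
}
```

A wrapper script could run each argument through this check before handing it to `kube-up.sh`, turning a silent misparse into an early error.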

## Cluster Expansion

To expand an existing Kubernetes cluster, run the ```add-kube-node.sh```
script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options).
This script must be run from the same host that created the cluster (or a host
that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```).

```shell
cd ./adm-kubernetes-on-clc
bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2
```

#### Cluster Expansion: Script Options

```shell
Usage: add-kube-node.sh [OPTIONS]
Create servers in the CenturyLinkCloud environment and add to an
existing CLC kubernetes cluster

Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in
order to access the CenturyLinkCloud API

-h (--help)                   display this help and exit
-c= (--clc_cluster_name=)     set the name of the cluster, as used in CLC group names
-m= (--minion_count=)         number of kubernetes minion nodes to add
```

## Cluster Deletion

There are two ways to delete an existing cluster:

1) Use our Python script:

```shell
python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1
```

2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink
Cloud control portal and delete the parent server group that contains the
Kubernetes cluster. We hope to add a scripted option to do this soon.

## Examples

Create a cluster named k8s_1 with 1 master node and 3 worker minions (on physical machines), in VA1:

```shell
bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1
```

Create a cluster named k8s_2 with an HA etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1:

```shell
bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes
```

Create a cluster named k8s_3 with 1 master node and 10 worker minions (on VMs) with higher mem/cpu, in UC1:

```shell
bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=UC1 -mem=6 -cpu=4
```

## Cluster Features and Architecture

We configure the Kubernetes cluster with the following features:

* KubeDNS: DNS resolution and service discovery
* Heapster/InfluxDB: for metric collection, needed for Grafana and auto-scaling
* Grafana: Kubernetes/Docker metric dashboard
* KubeUI: simple web interface to view Kubernetes state
* Kube Dashboard: new web interface to interact with your cluster

We use the following to create the Kubernetes cluster:

* Kubernetes 1.1.7
* Ubuntu 14.04
* Flannel 0.5.4
* Docker 1.9.1-0~trusty
* Etcd 2.2.2

## Optional Add-ons

* Logging: We offer an integrated, centralized logging ELK platform so that all
  Kubernetes and Docker logs get sent to the ELK stack. To install the ELK stack
  and configure Kubernetes to send logs to it, follow [the log
  aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: we don't install this by default, as
  its footprint isn't trivial.

## Cluster Administration

The most widely used tool for managing a Kubernetes cluster is the command-line
utility ```kubectl```. If you do not already have a copy of this binary on your
administrative machine, you may run the script ```install_kubectl.sh```, which
downloads it and installs it in ```/usr/local/bin```.

The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined. ```install_kubectl.sh``` also writes a configuration file which embeds the necessary
authentication certificates for the particular cluster. The configuration file is
written to the ```${CLC_CLUSTER_HOME}/kube``` directory.

```shell
export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config
kubectl version
kubectl cluster-info
```
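
If you script this step, it is worth guarding against a missing kubeconfig before exporting it. A sketch assuming the directory layout described above (the function name is ours):

```shell
# Export KUBECONFIG only if the expected file exists; report what happened.
use_cluster_config() {
  if [ -f "$1/kube/config" ]; then
    KUBECONFIG="$1/kube/config"
    export KUBECONFIG
    echo "using $KUBECONFIG"
  else
    echo "no kubeconfig under $1/kube" >&2
    return 1
  fi
}
```

Call it with the cluster's `CLC_CLUSTER_HOME` path; a non-zero return means the cluster artifacts are not on this host.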

### Accessing the Cluster Programmatically

It's possible to use the locally stored client certificates to access the API server. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice.

To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate with the master API server via HTTPS:

```shell
curl \
   --cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \
   --key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \
   --cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443
```

Note that this *does not* work out of the box with the ```curl``` binary
distributed with macOS.
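
Before issuing the `curl` call above, it is easy to verify that all three PKI files it needs are in place. A small sketch (the function name is ours):

```shell
# Check the CA cert, client key, and client cert used by the curl example above.
pki_complete() {
  for f in ca.crt kubecfg.key kubecfg.crt; do
    if [ ! -f "$1/$f" ]; then
      echo "missing: $1/$f" >&2
      return 1
    fi
  done
  echo "pki complete"
}

# usage: pki_complete "${CLC_CLUSTER_HOME}/pki"
```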

### Accessing the Cluster with a Browser

We install [the Kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you
create a cluster, the script outputs URLs for these interfaces like this:

kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```.

Note on authentication to the UIs:

The cluster is set up to use basic authentication for the user _admin_.
Hitting the URL at ```https://${MASTER_IP}:6443``` requires accepting the
self-signed certificate from the API server, and then presenting the admin
password written to the file ```${CLC_CLUSTER_HOME}/kube/admin_password.txt```.
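
Scripting the basic-auth login is then a one-liner. A sketch that reads the password file mentioned above (the function name is ours; the path is the one the installer uses):

```shell
# Print the admin password that the installer wrote for this cluster.
admin_password() {
  cat "$1/kube/admin_password.txt"
}

# usage: admin_password "${CLC_CLUSTER_HOME}"
```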

### Configuration Files

Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under ```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files
to access the cluster from machines other than the one where you created the cluster.

* ```config/```: Ansible variable files containing parameters describing the master and minion hosts
* ```hosts/```: hosts files listing access information for the Ansible playbooks
* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API
* ```pki/```: public key infrastructure files enabling TLS communication in the cluster
* ```ssh/```: SSH keys for root access to the hosts

## ```kubectl``` usage examples

There are a great many features of _kubectl_. Here are a few examples.

List existing nodes, pods, services, and more, in all namespaces, or in just one:

```shell
kubectl get nodes
kubectl get --all-namespaces pods
kubectl get --all-namespaces services
kubectl get --namespace=kube-system replicationcontrollers
```

The Kubernetes API server exposes services on web URLs, which are protected by requiring
client certificates. If you run a kubectl proxy locally, ```kubectl``` provides
the necessary certificates and serves locally over HTTP:

```shell
kubectl proxy -p 8001
```

Then you can access URLs like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without needing client certificates in your browser.
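
The proxied URL shape used above can be generated for any namespaced service. A helper sketch (ours) matching the port chosen in the `kubectl proxy` example:

```shell
# Build the local kubectl-proxy URL for a service; the port defaults to 8001.
proxy_url() {
  echo "http://127.0.0.1:${3:-8001}/api/v1/namespaces/$1/services/$2/proxy/"
}

# usage: proxy_url kube-system kubernetes-dashboard
```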

## Which Kubernetes Features Do Not Work on CenturyLink Cloud

These are the known items that don't work on CenturyLink Cloud but do work on other cloud providers:

- At this time, there is no support for services of type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.

- At this time, there is no support for persistent storage volumes provided by
  CenturyLink Cloud. However, customers can bring their own persistent storage
  offering. We ourselves use Gluster.

## Ansible Files

If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md).

## Further Reading

Please see the [Kubernetes docs](/ja/docs/) for more details on administering
and using a Kubernetes cluster.

@ -67,7 +67,7 @@ cluster/kube-up.sh

 If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.

 If you run into trouble, please see the section on [troubleshooting](/ja/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
-[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on [Slack](/docs/troubleshooting/#slack).
+[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on the `#gke` Slack channel.

 The next few steps will show you:

@ -80,7 +80,7 @@ The next few steps will show you:

 The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.

-The [kubectl](/docs/user-guide/kubectl/) tool controls the Kubernetes cluster
+The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
 manager. It lets you inspect your cluster resources, create, delete, and update
 components, and much more. You will use it to look at your new cluster and bring
 up example apps.

@ -93,7 +93,7 @@ gcloud components install kubectl

 {{< note >}}
 The kubectl version bundled with `gcloud` may be older than the one
-downloaded by the get.k8s.io install script. See [Installing kubectl](/docs/tasks/kubectl/install/)
+The [kubectl](/ja/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
 document to see how you can set up the latest `kubectl` on your workstation.
 {{< /note >}}

@ -107,7 +107,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster. E.g., ru
 kubectl get --all-namespaces services
 ```

-should show a set of [services](/docs/user-guide/services) that look something like this:
+should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:

 ```shell
 NAMESPACE    NAME          TYPE      CLUSTER_IP    EXTERNAL_IP  PORT(S)        AGE

@ -117,7 +117,7 @@ kube-system  kube-ui       ClusterIP  10.0.0.3     <none>
 ...
 ```

-Similarly, you can take a look at the set of [pods](/docs/user-guide/pods) that were created during cluster startup.
+Similarly, you can take a look at the set of [pods](/ja/docs/concepts/workloads/pods/) that were created during cluster startup.
 You can do this via the

 ```shell

@ -144,7 +144,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh

 ### Running some examples

-Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster.
+Then, see [a simple nginx example](/ja/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.

 For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.

@ -215,10 +215,3 @@ IaaS Provider        | Config. Mgmt | OS     | Networking  | Docs
 -------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
 GCE                  | Saltstack    | Debian | GCE        | [docs](/ja/docs/setup/production-environment/turnkey/gce/) | | Project
-
-## Further Reading
-
-Please see the [Kubernetes docs](/ja/docs/) for more details on administering
-and using a Kubernetes cluster.

@ -25,13 +25,9 @@ The following modules are available where you can deploy IBM Cloud Private by us

 ## IBM Cloud Private on AWS

-You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform.
+You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) using Terraform.

-IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide.
-
-This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/).
-
-IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md).
+IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws).

 ## IBM Cloud Private on Azure

@ -64,4 +60,4 @@ You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. F

 The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.

-For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/services/vmwaresolutions/vmonic?topic=vmware-solutions-prod_overview#ibm-cloud-private-hosted).
+For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview).

@ -14,7 +14,7 @@ Windows applications make up a large portion of the services and applications running in many organizations

 ## Windows containers in Kubernetes

-Enabling the orchestration of Windows containers in Kubernetes is as simple as including Windows nodes in your existing Linux cluster. Scheduling Windows containers in [Pods](/ja/docs/concepts/workloads/pods/pod-overview/) in Kubernetes is as simple and easy as scheduling Linux-based containers.
+Enabling the orchestration of Windows containers in Kubernetes is as simple as including Windows nodes in your existing Linux cluster. Scheduling Windows containers in {{< glossary_tooltip text="Pods" term_id="pod" >}} in Kubernetes is as simple and easy as scheduling Linux-based containers.

 In order to run Windows containers, your Kubernetes cluster must include multiple operating systems: control plane nodes run Linux, and worker nodes run Windows or Linux depending on your workload needs. Windows Server 2019 is the only supported Windows operating system, enabling a [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including the kubelet, the [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For details on Windows distribution channels, see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19).

@ -52,7 +52,7 @@ The Windows Server host operating system is subject to Windows Server licensing

 Key Kubernetes elements work the same way on Windows as they do on Linux. This section describes some of the key workload enablers and how they map to Windows.

-* [Pods](/ja/docs/concepts/workloads/pods/pod-overview/)
+* [Pods](/ja/docs/concepts/workloads/pods/)

 A Pod is the most basic building block of Kubernetes - the smallest and simplest unit in the Kubernetes object model that you create or deploy. Windows and Linux containers may not be deployed in the same Pod. All containers in a Pod are scheduled onto a single Node, where each Node represents a specific platform and architecture. The following Pod capabilities, properties, and events are supported with Windows containers:

@ -96,7 +96,27 @@ Pods, Controllers, and Services are critical elements to managing Windows workloads on Kubernetes

 #### Container runtime

-Windows Server 2019/1809 nodes in Kubernetes require Docker EE-basic 18.09. This works with the dockershim code included in the kubelet. Additional runtimes such as CRI-ContainerD may be supported in later versions of Kubernetes.
+##### Docker EE
+
+{{< feature-state for_k8s_version="v1.14" state="stable" >}}
+
+Docker EE-basic 18.09+ is the recommended container runtime for Windows Server 2019/1809 nodes running Kubernetes. It works with the dockershim code included in the kubelet.
+
+##### CRI-ContainerD
+
+{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
+
+ContainerD is an OCI-compliant runtime that works with Kubernetes on Linux. Kubernetes v1.18 adds support for {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} on Windows. Progress for ContainerD on Windows can be tracked at [enhancements#1001](https://github.com/kubernetes/enhancements/issues/1001).
+
+{{< caution >}}
+ContainerD on Windows in Kubernetes v1.18 has the following known shortcomings:
+
+* ContainerD does not have an official release with Windows support; all development in Kubernetes has been performed against active ContainerD development branches. Production deployments should always use official releases that are fully tested and supported with security fixes.
+* Group Managed Service Accounts are not implemented when using ContainerD - see [containerd/cri#1276](https://github.com/containerd/cri/issues/1276) for details.
+{{< /caution >}}

 #### Persistent storage

@ -404,7 +424,6 @@ The main source of help for troubleshooting your Kubernetes cluster is

 # Register kubelet.exe
 # Microsoft releases the pause infrastructure container at mcr.microsoft.com/k8s/core/pause:1.2.0
-# For more detail, search for "pause" in the "Guide for adding Windows Nodes in Kubernetes"
 nssm install kubelet C:\k\kubelet.exe
 nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.2.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
 nssm set kubelet AppDirectory C:\k

@ -516,7 +535,7 @@ The main source of help for troubleshooting your Kubernetes cluster is

 Check that your pause image is compatible with your OS version. The [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) assume that both the OS and the containers are version 1803. If you have a later version of Windows, such as an Insider build, you will need to adjust the images accordingly. Please refer to Microsoft's [Docker registry](https://hub.docker.com/u/microsoft/) for images. Regardless, both the pause image Dockerfile and the sample service expect the image to be tagged as :latest.

-Since Kubernetes v1.14, Microsoft has released the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`. For more information, search for "pause" in the [Guide for adding Windows Nodes in Kubernetes](../user-guide-windows-nodes).
+Since Kubernetes v1.14, Microsoft has released the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`.

 1. DNS name resolution is not working correctly

@ -568,18 +587,16 @@ The main source of help for troubleshooting your Kubernetes cluster is

 There are many features on the roadmap. An abbreviated high-level list is included below, but we encourage you to view the [roadmap project](https://github.com/orgs/kubernetes/projects/8) and help us make Windows support better by [contributing](https://github.com/kubernetes/community/blob/master/sig-windows/).

-### CRI-ContainerD
-
-{{< glossary_tooltip term_id="containerd" >}} is another OCI-compliant runtime that recently graduated as a {{< glossary_tooltip text="CNCF" term_id="cncf" >}} project. It is currently tested on Linux, but 1.3 will bring support for Windows and Hyper-V. [[reference](https://blog.docker.com/2019/02/containerd-graduates-within-the-cncf/)]
-
-The CRI-ContainerD interface will be able to manage sandboxes based on Hyper-V. This provides a foundation where RuntimeClass could be implemented for new use cases, including:
+### Hyper-V isolation
+
+Hyper-V isolation is required to enable the following use cases for Windows containers in Kubernetes:

 * Hypervisor-based isolation between Pods for additional security
 * Backwards compatibility allowing a node to run a newer Windows Server version without requiring containers to be rebuilt
 * Specific CPU/NUMA settings for a Pod
 * Memory isolation and reservations

-### Hyper-V isolation
-
 The existing Hyper-V isolation support, an experimental feature as of v1.10, will be deprecated in the future in favor of the CRI-ContainerD and RuntimeClass features mentioned above. To use the current feature and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`. In the experimental release, this feature is limited to one container per Pod.

@ -609,7 +626,7 @@ spec:

 ### Deployment with kubeadm and cluster API

-kubeadm is becoming the de facto standard for deploying a Kubernetes cluster. Windows node support in kubeadm will come in a future release. We are also making investments in cluster API to ensure Windows nodes are properly provisioned.
+kubeadm is becoming the de facto standard for deploying a Kubernetes cluster. Windows node support in kubeadm is in progress, but a guide is already available [here](/ja/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/). We are also making investments in cluster API to ensure Windows nodes are properly provisioned.

 ### Other key features
 * Beta support for Group Managed Service Accounts

Binary file not shown.
Before Width: | Height: | Size: 6.9 MiB
Binary file not shown.
Before Width: | Height: | Size: 3.5 MiB
Binary file not shown.
Before Width: | Height: | Size: 3.4 MiB
@ -19,7 +19,7 @@ Windows applications make up a large portion of the services and applications running in many organizations

 ## Before you begin

-* Create a Kubernetes cluster that includes [master and worker nodes running Windows Server](/ja/docs/setup/production-environment/windows/user-guide-windows-nodes/)
+* Create a Kubernetes cluster that includes [master and worker nodes running Windows Server](/ja/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
 * Note that creating and deploying Services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. The [kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The examples in the section below are provided to jumpstart your experience with Windows containers.

 ## Getting Started: Deploying a Windows container

@ -1,306 +0,0 @@
---
title: Guide for adding Windows Nodes in Kubernetes
content_type: concept
weight: 70
---

<!-- overview -->

The Kubernetes platform can now be used to run both Linux and Windows containers. One or more Windows nodes can be registered to a cluster. This guide shows how to:

* Register a Windows node to the cluster
* Configure networking so pods on Linux and Windows can communicate

<!-- body -->

## Before you begin

* Obtain a [Windows Server license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) in order to configure the Windows node that hosts Windows containers. You can use your organization's licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A [time-limited trial](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial) is also available.

* Build a Linux-based Kubernetes cluster in which you have access to the control plane (some examples include [Creating a single control-plane cluster with kubeadm](/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/ja/docs/setup/production-environment/turnkey/azure/), [GCE](/ja/docs/setup/production-environment/turnkey/gce/), and [AWS](/ja/docs/setup/production-environment/turnkey/aws/)).

## Getting Started: Adding a Windows Node to Your Cluster

### Plan IP Addressing

Kubernetes cluster management requires careful planning of your IP addresses so that you do not inadvertently cause network collisions. This guide assumes that you are familiar with the [Kubernetes networking concepts](/docs/concepts/cluster-administration/networking/).

In order to deploy your cluster you need the following address spaces:

| Subnet / address range | Description | Default value |
| --- | --- | --- |
| Service Subnet | A non-routable, purely virtual subnet that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by `kube-proxy` running on the nodes. | 10.96.0.0/12 |
| Cluster Subnet | This is a global subnet that is used by all pods in the cluster. Each node is assigned a smaller /24 subnet from this for their pods to use. It must be large enough to accommodate all pods used in your cluster. To calculate the *minimum* subnet size: `(number of nodes) + (number of nodes * maximum pods per node that you configure)`. Example: for a 5-node cluster with 100 pods per node: `(5) + (5 * 100) = 505.` | 10.244.0.0/16 |
| Kubernetes DNS Service IP | IP address of the `kube-dns` service that is used for DNS resolution & cluster service discovery. | 10.96.0.10 |

Review the networking options supported in 'Intro to Windows containers in Kubernetes: Supported Functionality: Networking' to determine how you need to allocate IP addresses for your cluster.
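
The subnet-sizing rule in the table above is simple arithmetic; a sketch for experimenting with node and pod counts (the function name is ours, not from the guide):

```shell
# Minimum number of cluster-subnet IPs per the rule above:
# (number of nodes) + (number of nodes * max pods per node)
min_cluster_ips() {
  echo $(( $1 + $1 * $2 ))
}

min_cluster_ips 5 100   # the guide's example: 505
```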

### Components that run on Windows

While the Kubernetes control plane runs on your Linux node(s), the following components are configured and run on your Windows node(s).

1. kubelet
2. kube-proxy
3. kubectl (optional)
4. Container runtime

Get the latest binaries from [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases), starting with v1.14 or later. The Windows-amd64 binaries for kubeadm, kubectl, kubelet, and kube-proxy can be found under the CHANGELOG link.

### Networking Configuration

Once you have a Linux-based Kubernetes master node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.

#### Configuring Flannel in VXLAN mode on the Linux controller

1. Prepare Kubernetes master for Flannel

   Some minor preparation is recommended on the Kubernetes master in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. This can be done using the following command:

   ```bash
   sudo sysctl net.bridge.bridge-nf-call-iptables=1
   ```

1. Download & configure Flannel

   Download the most recent Flannel manifest:

   ```bash
   wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
   ```

   There are two sections you should modify to enable the vxlan networking backend:

   After applying the steps below, the `net-conf.json` section of `kube-flannel.yml` should look as follows:

   ```json
   net-conf.json: |
       {
         "Network": "10.244.0.0/16",
         "Backend": {
           "Type": "vxlan",
           "VNI" : 4096,
           "Port": 4789
         }
       }
   ```

   {{< note >}}The VNI must be set to 4096 and the port to 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)
   for an explanation of these fields.{{< /note >}}
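
A quick grep-based sanity check (our sketch) that the two values the note above requires actually made it into your edited manifest:

```shell
# Succeeds and prints a message only if both required VXLAN values are present.
vxlan_settings_ok() {
  if grep -q '"VNI"' "$1" && grep -q '4096' "$1" \
     && grep -q '"Port"' "$1" && grep -q '4789' "$1"; then
    echo "vxlan settings present"
  else
    echo "check VNI/Port in $1" >&2
    return 1
  fi
}

# usage: vxlan_settings_ok kube-flannel.yml
```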
|
||||
|
||||
1. In the `net-conf.json` section of your `kube-flannel.yml`, double-check:
|
||||
1. The cluster subnet (e.g. "10.244.0.0/16") is set as per your IP plan.
|
||||
* VNI 4096 is set in the backend
|
||||
* Port 4789 is set in the backend
|
||||
1. In the `cni-conf.json` section of your `kube-flannel.yml`, change the network name to `vxlan0`.
|
||||
|
||||
|
||||
Your `cni-conf.json` should look as follows:
|
||||
|
||||
```json
|
||||
cni-conf.json: |
|
||||
{
|
||||
"name": "vxlan0",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "flannel",
|
||||
"delegate": {
|
||||
"hairpinMode": true,
|
||||
"isDefaultGateway": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"capabilities": {
|
||||
"portMappings": true
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
1. Apply the Flannel yaml and validate

   Apply the Flannel configuration:

   ```bash
   kubectl apply -f kube-flannel.yml
   ```

   After a few minutes, you should see all the pods running if the Flannel pod network was deployed:

   ```bash
   kubectl get pods --all-namespaces
   ```

   The output looks as follows:

   ```
   NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
   kube-system   etcd-flannel-master                      1/1     Running   0          1m
   kube-system   kube-apiserver-flannel-master            1/1     Running   0          1m
   kube-system   kube-controller-manager-flannel-master   1/1     Running   0          1m
   kube-system   kube-dns-86f4d74b45-hcx8x                3/3     Running   0          12m
   kube-system   kube-flannel-ds-54954                    1/1     Running   0          1m
   kube-system   kube-proxy-zjlxz                         1/1     Running   0          1m
   kube-system   kube-scheduler-flannel-master            1/1     Running   0          1m
   ```

   Verify that the Flannel DaemonSet has the NodeSelector applied:

   ```bash
   kubectl get ds -n kube-system
   ```

   The output looks as follows; the NodeSelector `beta.kubernetes.io/os=linux` is applied:

   ```
   NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                               AGE
   kube-flannel-ds   2         2         2       2            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   21d
   kube-proxy        2         2         2       2            2           beta.kubernetes.io/os=linux                                 26d
   ```
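Before moving on, you can also confirm programmatically that nothing in the cluster is stuck by counting pods whose status is not `Running`. This is a sketch: `not_running` is an illustrative helper name, not a kubectl feature, and the demo runs against a captured sample of the output above rather than a live cluster.

```shell
# Count pods whose STATUS column is not "Running" (header line skipped).
not_running() { awk 'NR > 1 && $4 != "Running" { n++ } END { print n + 0 }'; }

# Demo on a captured sample of `kubectl get pods --all-namespaces` output;
# on a live cluster, pipe the command output in directly.
not_running <<'EOF'
NAMESPACE     NAME                    READY   STATUS    RESTARTS   AGE
kube-system   etcd-flannel-master     1/1     Running   0          1m
kube-system   kube-flannel-ds-54954   1/1     Running   0          1m
EOF
```

A result of `0` means every listed pod is running.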
#### Join Windows Worker

In this section we'll cover configuring a Windows node from scratch to join a cluster on-premises. If your cluster is in the cloud, you'll likely want to follow the cloud-specific guides in the next section.

#### Preparing a Windows Node

{{< note >}}
All code snippets in the Windows sections are to be run in a PowerShell environment with elevated (Admin) permissions.
{{< /note >}}
1. Install Docker (requires a system reboot)

   Kubernetes uses [Docker](https://www.docker.com/) as its container engine, so we need to install it. You can follow the [official Docs instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon#install-docker), the [Docker instructions](https://store.docker.com/editions/enterprise/docker-ee-server-windows), or try the following *recommended* steps:

   ```PowerShell
   Enable-WindowsOptionalFeature -FeatureName Containers
   Restart-Computer -Force
   Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
   Install-Package -Name Docker -ProviderName DockerMsftProvider
   ```

   If you are behind a proxy, the following PowerShell environment variables must be defined:

   ```PowerShell
   [Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine)
   [Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine)
   ```

   After the reboot, you can verify that the Docker service is ready with the command below:

   ```PowerShell
   docker version
   ```

   If you see an error message like the following, you need to start the Docker service manually:

   ```
   Client:
    Version:      17.06.2-ee-11
    API version:  1.30
    Go version:   go1.8.7
    Git commit:   06fc007
    Built:        Thu May 17 06:14:39 2018
    OS/Arch:      windows/amd64
   error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.30/version: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
   ```

   You can start the Docker service manually like this:

   ```PowerShell
   Start-Service docker
   ```

   {{< note >}}
   The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using `docker pull mcr.microsoft.com/k8s/core/pause:1.2.0`. The Dockerfile is available at https://github.com/kubernetes-sigs/windows-testing/blob/master/images/pause/Dockerfile.
   {{< /note >}}
1. Prepare a Windows directory for Kubernetes

   Create a "Kubernetes for Windows" directory to store Kubernetes binaries as well as any deployment scripts and config files:

   ```PowerShell
   mkdir c:\k
   ```

1. Copy the Kubernetes certificate

   Copy the Kubernetes certificate file `$HOME/.kube/config` [from the Linux controller](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/creating-a-linux-master#collect-cluster-information) to this new `C:\k` directory on your Windows node.

   Tip: You can use tools such as [xcopy](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/xcopy), [WinSCP](https://winscp.net/eng/download.php), or this [PowerShell wrapper for WinSCP](https://www.powershellgallery.com/packages/WinSCP/5.13.2.0) to transfer the config file between nodes.

1. Download Kubernetes binaries

   To run Kubernetes, you first need to download the `kubelet` and `kube-proxy` binaries. You download these from the Node Binaries links in the CHANGELOG.md file of the [latest releases](https://github.com/kubernetes/kubernetes/releases/), for example `kubernetes-node-windows-amd64.tar.gz`. You may also optionally download `kubectl` to run on Windows, which you can find under Client Binaries.

   Use the [Expand-Archive](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.archive/expand-archive?view=powershell-6) PowerShell command to extract the archive and place the binaries into `C:\k`.
#### Join the Windows node to the Flannel cluster

The Flannel overlay deployment scripts and documentation are available in [this repository](https://github.com/Microsoft/SDN/tree/master/Kubernetes/flannel/overlay). The following steps are a simple walkthrough of the more comprehensive instructions available there.

Download the [Flannel start.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1) script and place it in `C:\k`:

```PowerShell
cd c:\k
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/start.ps1 -o c:\k\start.ps1
```

{{< note >}}
[start.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1) references [install.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/install.ps1), which downloads additional files such as the `flanneld` executable and the [Dockerfile for the infrastructure pod](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/Dockerfile) and installs them for you. For overlay networking mode, the [firewall](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/helper.psm1#L111) is opened for local UDP port 4789. Multiple PowerShell windows may be opened and closed, and there may be a few seconds of network outage while the new external vSwitch for the pod network is created for the first time. Run the script using the arguments specified below:
{{< /note >}}

```PowerShell
cd c:\k
.\start.ps1 -ManagementIP <Windows Node IP> `
  -NetworkMode overlay `
  -ClusterCIDR <Cluster CIDR> `
  -ServiceCIDR <Service CIDR> `
  -KubeDnsServiceIP <Kube-dns Service IP> `
  -LogDir <Log directory>
```

| Parameter | Default Value | Notes |
| --- | --- | --- |
| -ManagementIP | N/A (required) | The IP address assigned to the Windows node. You can use `ipconfig` to find it. |
| -NetworkMode | l2bridge | We're using `overlay` here. |
| -ClusterCIDR | 10.244.0.0/16 | Refer to your cluster IP plan. |
| -ServiceCIDR | 10.96.0.0/12 | Refer to your cluster IP plan. |
| -KubeDnsServiceIP | 10.96.0.10 | |
| -InterfaceName | Ethernet | The name of the network interface of the Windows host. You can use `ipconfig` to find it. |
| -LogDir | C:\k | The directory where kubelet and kube-proxy logs are redirected into their respective output files. |

Now you can view the Windows nodes in your cluster by running:

```bash
kubectl get nodes
```
{{< note >}}
You may want to configure your Windows node components, such as kubelet and kube-proxy, to run as services. View the services and background processes section under [troubleshooting](#troubleshooting) for additional instructions. Once you are running the node components as services, collecting logs becomes an important part of troubleshooting. View the [gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs) section of the contributing guide for further instructions.
{{< /note >}}

### Public Cloud Providers

#### Azure

AKS-Engine can deploy a complete, customizable Kubernetes cluster with both Linux and Windows nodes. There is a step-by-step walkthrough available in the [docs on GitHub](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md).

#### GCP

Users can deploy a complete Kubernetes cluster on GCE by following this step-by-step walkthrough on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md).

#### Deployment with kubeadm and cluster API

kubeadm is becoming the de facto standard for deploying Kubernetes clusters. Windows node support in kubeadm will come in a future release. We are also investing in the Cluster API to ensure Windows nodes are properly provisioned.

### Next Steps

Now that you've configured a Windows worker in your cluster to run Windows containers, you may want to add one or more Linux nodes as well to run Linux containers. You are now ready to schedule Windows containers on your cluster.
## Supported versions {#supported-versions}

Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. For more information, see [Kubernetes Release Versioning](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning).
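As a quick illustration of the **x.y.z** scheme, the three parts can be pulled apart with plain shell parameter expansion; the version `1.21.3` below is just an example value:

```shell
# Split a Kubernetes version string x.y.z into major / minor / patch.
v="1.21.3"
major="${v%%.*}"      # everything before the first dot -> "1"
rest="${v#*.}"        # drop "major." -> "21.3"
minor="${rest%%.*}"   # -> "21"
patch="${rest#*.}"    # -> "3"
echo "major=$major minor=$minor patch=$patch"
```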
The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}).

Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. Patch releases are cut from those branches at a [regular cadence](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence), plus additional urgent releases when required. The [Release Managers](https://git.k8s.io/sig-release/release-managers.md) group owns this decision. For more information, see the [Kubernetes patch releases](https://github.com/kubernetes/sig-release/blob/master/releases/patch-releases.md) page.

Since minor releases occur approximately every 3 months, each minor release branch is maintained for approximately 9 months.

## Supported version skew
Example:

* the newest `kube-apiserver` is at **{{< skew latestVersion >}}**
* other `kube-apiserver` instances are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}**

### kubelet
Example:

* `kube-apiserver` is at **{{< skew latestVersion >}}**
* `kubelet` is supported at **{{< skew latestVersion >}}**, **{{< skew prevMinorVersion >}}**, and **{{< skew oldestMinorVersion >}}**

{{< note >}}
If there is version skew between `kube-apiserver` instances in an HA cluster, the range of supported `kubelet` versions narrows.
{{< /note >}}
Example:

* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}**
* `kubelet` is supported at **{{< skew prevMinorVersion >}}** and **{{< skew oldestMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported, because it would be newer than the `kube-apiserver` instance at version **{{< skew prevMinorVersion >}}**)

### kube-controller-manager, kube-scheduler, and cloud-controller-manager
Example:

* `kube-apiserver` is at **{{< skew latestVersion >}}**
* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}**

{{< note >}}
If there is version skew between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), the range of supported versions for these components narrows.
{{< /note >}}
Example:

* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}**
* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` that communicate with a load balancer routing to any `kube-apiserver` instance are supported at **{{< skew prevMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported, because it would be newer than the `kube-apiserver` instances at version **{{< skew prevMinorVersion >}}**)

### kubectl
Example:

* `kube-apiserver` is at **{{< skew latestVersion >}}**
* `kubectl` is supported at **{{< skew nextMinorVersion >}}**, **{{< skew latestVersion >}}**, and **{{< skew prevMinorVersion >}}**

{{< note >}}
If there is version skew between `kube-apiserver` instances in an HA cluster, the range of supported `kubectl` versions narrows.
{{< /note >}}
Example:

* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}**
* `kubectl` is supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** (other versions could be more than one minor version away from one of the `kube-apiserver` components)

## Supported component upgrade order

The supported version skew between components affects the order in which components must be upgraded. This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew prevMinorVersion >}}** to **{{< skew latestVersion >}}**.
### kube-apiserver

Prerequisites:

* In a single-instance cluster, the existing `kube-apiserver` instance is at **{{< skew prevMinorVersion >}}**
* In an HA cluster, existing `kube-apiserver` instances are at **{{< skew prevMinorVersion >}}** or **{{< skew latestVersion >}}** (this ensures a maximum skew of one minor version between the oldest and newest `kube-apiserver` instances)
* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew prevMinorVersion >}}** (this ensures they are no newer than the existing API server version, and within one minor version of the new API server version)
* `kubelet` instances on all nodes are at version **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}** (this ensures they are no newer than the existing API server version, and within two minor versions of the new API server version)
* Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them:
  * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew latestVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+)
  * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added in **{{< skew latestVersion >}}**

Upgrade `kube-apiserver` to **{{< skew latestVersion >}}**.

{{< note >}}
Per the project policies for [API deprecation](/docs/reference/using-api/deprecation-policy/) and the [API change guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md), `kube-apiserver` must not skip minor versions when upgrading, even in single-instance clusters.
{{< /note >}}
### kube-controller-manager, kube-scheduler, and cloud-controller-manager

Prerequisites:

* The `kube-apiserver` instances these components communicate with are at **{{< skew latestVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components)

Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew latestVersion >}}**.

### kubelet

Prerequisites:

* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew latestVersion >}}**

Optionally upgrade `kubelet` instances to **{{< skew latestVersion >}}** (or they can be left at **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}**).

{{< warning >}}
Running a cluster with `kubelet` instances that are persistently two minor versions behind `kube-apiserver` is not recommended:

* they must be upgraded to within one minor version of `kube-apiserver` before the control plane can be upgraded
* it increases the likelihood of running `kubelet` versions older than the three maintained minor releases
{{</ warning >}}
### kube-proxy

* The `kube-proxy` minor version must match the minor version of the `kubelet` on the node
* `kube-proxy` must not be newer than `kube-apiserver`
* `kube-proxy` must be no more than two minor versions older than `kube-apiserver`

Example:

If the `kube-proxy` version is **{{< skew oldestMinorVersion >}}**:

* the `kubelet` version must be **{{< skew oldestMinorVersion >}}**
* the `kube-apiserver` version must be between **{{< skew oldestMinorVersion >}}** and **{{< skew latestVersion >}}**, inclusive
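The skew rules above can be condensed into a few shell checks. This is a sketch, not an official tool: versions are plain `x.y` strings, the helper names are invented, and the same major version is assumed throughout.

```shell
# Extract the minor version from an "x.y" string (same major assumed).
minor() { echo "${1#*.}"; }

# kubelet: never newer than kube-apiserver, at most two minors older.
kubelet_ok() {
  a=$(minor "$1"); k=$(minor "$2")
  [ "$k" -le "$a" ] && [ $((a - k)) -le 2 ]
}

# kubectl: within one minor of kube-apiserver, in either direction.
kubectl_ok() {
  a=$(minor "$1"); c=$(minor "$2"); d=$((a - c))
  [ "${d#-}" -le 1 ]
}

# kube-proxy: same minor as kubelet, never newer than kube-apiserver,
# and at most two minors older than kube-apiserver.
kube_proxy_ok() {
  a=$(minor "$1"); k=$(minor "$2"); p=$(minor "$3")
  [ "$p" -eq "$k" ] && [ "$p" -le "$a" ] && [ $((a - p)) -le 2 ]
}

kubelet_ok 1.21 1.19         && echo "kubelet 1.19 with kube-apiserver 1.21: ok"
kubectl_ok 1.21 1.22         && echo "kubectl 1.22 with kube-apiserver 1.21: ok"
kube_proxy_ok 1.21 1.19 1.19 && echo "kube-proxy 1.19: ok"
```

Note that these checks only encode the minor-version arithmetic; in an HA cluster the effective range is further narrowed by the oldest `kube-apiserver` instance, as described above.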
[Minikube](https://minikube.sigs.k8s.io/) is a tool that runs Kubernetes locally. Minikube runs a single-node Kubernetes cluster on your personal computer (including Windows, macOS, and Linux PCs) so that you can try out Kubernetes or use it for daily development work.

To install the tool, follow the official [Get Started!](https://minikube.sigs.k8s.io/docs/start/) guide.

Once Minikube is up, you can try [running a sample application](/ja/docs/tutorials/hello-minikube/).