Update create-cluster-kubeadm.md with the latest en version.
parent d951f1eb6d
commit 432be022b6
@@ -1,84 +1,71 @@
---
title: Creating a single control-plane cluster with kubeadm
content_template: templates/task
weight: 30
---

{{% capture overview %}}

<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">The `kubeadm` tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.

The `kubeadm` tool is good if you need:

- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger
  scope.

You can install and use `kubeadm` on various machines: your laptop, a set
of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
as Ansible or Terraform.

{{% /capture %}}

{{% capture prerequisites %}}

To follow this guide, you need:

- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine--any less leaves little room for your
  apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a
  public or a private network.

You also need to use a version of `kubeadm` that can deploy the version
of Kubernetes that you want to use in your new cluster.

[Kubernetes' version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about what versions of Kubernetes and `kubeadm`
are supported. This page is written for Kubernetes {{< param "version" >}}.

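For example, you can check which Kubernetes version your installed `kubeadm` targets; this is a quick sanity check, and the `-o short` output format shown here is just one convenient option:

```bash
# Print only the kubeadm version string, e.g. v1.17.0; the cluster it
# deploys defaults to the matching Kubernetes version.
kubeadm version -o short
```
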
The `kubeadm` tool's overall feature state is General Availability (GA). Some sub-features are
still under active development. The implementation of creating the cluster may change
slightly as the tool evolves, but the overall implementation should be pretty stable.

{{< note >}}
Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
{{< /note >}}

{{% /capture %}}

{{% capture steps %}}

## Objectives

* Install a single control-plane Kubernetes cluster or [high-availability cluster](/docs/setup/production-environment/tools/kubeadm/high-availability/)
* Install a Pod network on the cluster so that your Pods can
  talk to each other

## Instructions

### Installing kubeadm on your hosts

See ["Installing kubeadm"](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).

{{< note >}}
If you have already installed kubeadm, run `apt-get update &&
@@ -89,30 +76,32 @@ kubeadm to tell it what to do. This crashloop is expected and normal.
After you initialize your control-plane, the kubelet runs normally.
{{< /note >}}

### Initializing your control-plane node

The control-plane node is the machine where the control plane components run, including
{{< glossary_tooltip term_id="etcd" >}} (the cluster database) and the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
(which the {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} command line tool
communicates with).

1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
   to high availability you should specify the `--control-plane-endpoint` to set the shared endpoint
   for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to
   be passed to `kubeadm init`. Depending on which
   third-party provider you choose, you might need to set the `--pod-network-cidr` to
   a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (Optional) Since version 1.14, `kubeadm` tries to detect the container runtime on Linux
   by using a list of well-known domain socket paths. To use a different container runtime, or
   if there is more than one runtime installed on the provisioned node, specify the `--cri-socket`
   argument to `kubeadm init`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
   with the default gateway to set the advertise address for this particular control-plane node's API server.
   To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
   to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
   must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`.
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
   connectivity to the gcr.io container image registry.
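As an illustrative sketch only, several of the options above might be combined as follows. The endpoint name, Pod CIDR, and node address are placeholder values, not defaults:

```bash
# Placeholder values: substitute your own load-balancer DNS name,
# Pod network CIDR, and this node's IP address.
kubeadm init \
  --control-plane-endpoint=cluster-endpoint.example.com:6443 \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=10.0.0.4
```
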

To initialize the control-plane node, run:

@@ -143,13 +132,13 @@ high availability scenario.
Turning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster
is not supported by kubeadm.

### More information

For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).

For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).

To customize control plane components, including optional IPv6 assignment to the liveness probe for control plane components and the etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).

To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).

@@ -251,32 +240,48 @@ The token is used for mutual authentication between the control-plane node and t
nodes. The token included here is secret. Keep it safe, because anyone with this
token can add authenticated nodes to your cluster. These tokens can be listed,
created, and deleted with the `kubeadm token` command. See the
[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
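
For instance, two common token operations look like the following; the exact output varies per cluster:

```bash
# List the bootstrap tokens currently known to the cluster
kubeadm token list

# Create a fresh token and print a complete 'kubeadm join' command that uses it
kubeadm token create --print-join-command
```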

### Installing a Pod network add-on {#pod-network}

{{< caution >}}
This section contains important information about networking setup and
deployment order.
Read all of this advice carefully before proceeding.

**You must deploy a
{{< glossary_tooltip text="Container Network Interface" term_id="cni" >}}
(CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.**

- Take care that your Pod network must not overlap with any of the host
  networks: you are likely to see problems if there is any overlap.
  (If you find a collision between your network plugin’s preferred Pod
  network and some of your host networks, you should think of a suitable
  CIDR block to use instead, then use that during `kubeadm init` with
  `--pod-network-cidr` and as a replacement in your network plugin’s YAML).

- By default, `kubeadm` sets up your cluster to use and enforce use of
  [RBAC](/docs/reference/access-authn-authz/rbac/) (role based access control).
  Make sure that your Pod network plugin supports RBAC, and so do any manifests
  that you use to deploy it.

- If you want to use IPv6--either dual-stack, or single-stack IPv6 only
  networking--for your cluster, make sure that your Pod network plugin
  supports IPv6.
  IPv6 support was added to CNI in [v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).

{{< /caution >}}

Several external projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/).

See the list of available
[networking and network policy add-ons](https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy).

You can install a Pod network add-on with the following command on the
control-plane node or a node that has the kubeconfig credentials:

```bash
kubectl apply -f <add-on.yaml>
```

@@ -288,12 +293,12 @@ Below you can find installation instructions for some popular Pod network plugin
{{< tabs name="tabs-pod-install" >}}

{{% tab name="Calico" %}}
[Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. Calico works on several architectures, including `amd64`, `arm64`, and `ppc64le`.

By default, Calico uses `192.168.0.0/16` as the Pod network CIDR, though this can be configured in the `calico.yaml` file. For Calico to work correctly, you need to pass this same CIDR to the `kubeadm init` command using the `--pod-network-cidr=192.168.0.0/16` flag or via kubeadm's configuration.

```shell
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
```

{{% /tab %}}

@@ -337,15 +342,9 @@ Please refer to this installation guide: [Contiv-VPP Manual Installation](https:

For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init`.

Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. The [Firewall](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls) section of Flannel's troubleshooting guide explains this in more detail.
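
As one sketch, on hosts managed with `ufw` you could open those ports as shown below; adapt this to whatever firewall tooling your hosts actually use:

```bash
# Allow flannel's udp backend (port 8285) and VXLAN backend (port 8472)
# between the hosts participating in the overlay network.
sudo ufw allow 8285/udp
sudo ufw allow 8472/udp
```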

Flannel works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` architectures under Linux.

Windows (`amd64`) is claimed as supported in v0.11.0 but the usage is undocumented.

```shell
@@ -357,25 +356,19 @@ For more information about `flannel`, see [the CoreOS flannel repository on GitH
{{% /tab %}}

{{% tab name="Kube-router" %}}
Kube-router relies on kube-controller-manager to allocate Pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag.

Kube-router provides Pod networking, network policy, and high-performing IP Virtual Server (IPVS)/Linux Virtual Server (LVS) based service proxy.

For information on using the `kubeadm` tool to set up a Kubernetes cluster with Kube-router, please see the official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
{{% /tab %}}

{{% tab name="Weave Net" %}}
|
{{% tab name="Weave Net" %}}
|
||||||
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
|
|
||||||
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
|
|
||||||
please see [here](/ja/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
|
|
||||||
|
|
||||||
The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).
|
For more information on setting up your Kubernetes cluster with Weave Net, please see [Integrating Kubernetes via the Addon]((https://www.weave.works/docs/net/latest/kube-addon/).
|
||||||
|
|
||||||
Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required.
|
Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` platforms without any extra action required.
|
||||||
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
|
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
|
||||||
if they don't know their PodIP.
|
if they don't know their PodIP.
|
||||||
|
|
||||||
|
|
@ -388,15 +381,17 @@ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl versio
|
||||||

Once a Pod network has been installed, you can confirm that it is working by
checking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`.
And once the CoreDNS Pod is up and running, you can continue by joining your nodes.
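
One way to narrow that check, assuming the default CoreDNS deployment (its Pods carry the historical `k8s-app=kube-dns` label), is:

```bash
# Show only the CoreDNS Pods and check that they reach Running
kubectl get pods -n kube-system -l k8s-app=kube-dns
```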

If your network is not working or CoreDNS is not in the `Running` state, check out the
[troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)
for `kubeadm`.

### Control plane node isolation

By default, your cluster will not schedule Pods on the control-plane node for security
reasons. If you want to be able to schedule Pods on the control-plane node, for example for a
single-machine Kubernetes cluster for development, run:

```bash
@@ -415,7 +410,7 @@ This will remove the `node-role.kubernetes.io/master` taint from any nodes that
have it, including the control-plane node, meaning that the scheduler will then be able
to schedule Pods everywhere.

### Joining your nodes {#join-nodes}

The nodes are where your workloads (containers, Pods, and so on) run. To add new nodes to your cluster, do the following for each machine:

@@ -463,7 +458,7 @@ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outfor
openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is similar to:

```console
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
```

@@ -491,7 +486,7 @@ Run 'kubectl get nodes' on control-plane to see this machine join.
A few seconds later, you should notice this node in the output from `kubectl get
nodes` when run on the control-plane node.

### (Optional) Controlling your cluster from machines other than the control-plane node

In order to get kubectl on some other computer (for example, a laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your control-plane node

@@ -516,7 +511,7 @@ should save to a file and distribute to your user. After that, whitelist
privileges by using `kubectl create (cluster)rolebinding`.
{{< /note >}}

### (Optional) Proxying API Server to localhost

If you want to connect to the API Server from outside the cluster, you can use
`kubectl proxy`:

@@ -528,11 +523,18 @@ kubectl --kubeconfig ./admin.conf proxy

You can now access the API Server locally at `http://localhost:8001/api/v1`
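
For example, with the proxy running in another terminal you can query the API like this; any API path works, and `/api/v1/nodes` is just an illustration:

```bash
# List nodes through the authenticated local proxy
curl http://localhost:8001/api/v1/nodes
```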

## Clean up {#tear-down}

If you used disposable servers for your cluster, for testing, you can
switch those off and do no further clean up. You can use
`kubectl config delete-cluster` to delete your local references to the
cluster.
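
A short sketch of that local clean up; `my-cluster` below is a placeholder for whatever name appears in your kubeconfig:

```bash
# See which cluster entries your kubeconfig holds,
# then drop the one you no longer need.
kubectl config get-clusters
kubectl config delete-cluster my-cluster
```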

However, if you want to deprovision your cluster more cleanly, you should
first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
and make sure that the node is empty, then deconfigure the node.

### Remove the node

Talking to the control-plane node with the appropriate credentials, run:

@@ -541,7 +543,7 @@ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
```

Then, on the node being removed, reset all `kubeadm`-installed state:

```bash
kubeadm reset
```

@@ -562,55 +564,80 @@ ipvsadm -C
If you wish to start over, simply run `kubeadm init` or `kubeadm join` with the
appropriate arguments.

### Clean up the control plane

You can use `kubeadm reset` on the control plane host to trigger a best-effort
clean up.

See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
reference documentation for more information about this subcommand and its
options.

{{% /capture %}}

{{% capture discussion %}}

## What's next {#whats-next}

* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
  for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
  of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
  explore other add-ons, including tools for logging, monitoring, network policy, visualization &
  control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
  applications running in Pods.
  See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
  an overview of what is involved.

### Feedback {#feedback}

* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
  [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
  [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
  [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)

## Version skew policy {#version-skew-policy}

The `kubeadm` tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
`kubeadm` vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).

Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.

Example: `kubeadm` v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
v1.8.

These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:

* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)

## Limitations {#limitations}

### Cluster resilience {#resilience}

The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.

Workarounds:

* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
  etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node;
  a backup sketch is shown after this list.
* Use multiple control-plane nodes. You can read
  [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
  topology that provides higher availability.
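
The backup sketch referenced above, assuming `etcdctl` is installed on the control-plane node and using the certificate paths that `kubeadm` generates by default:

```bash
# Snapshot the kubeadm-managed etcd instance to a local file
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```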

### Platform compatibility {#multi-platform}

kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
following the [multi-platform
@@ -622,20 +649,8 @@ Only some of the network providers offer solutions for all platforms. Please con
network providers above or the documentation from each provider to figure out whether the provider
supports your chosen platform.

## Troubleshooting {#troubleshooting}

If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).

{{% /capture %}}