Remove the old docker-multinode in favor of the new one

This commit is contained in:
Lucas Käldström 2016-07-23 00:52:00 +03:00
parent 2ac973c0dd
commit 696c4ed0ab
11 changed files with 120 additions and 1248 deletions

View File

@ -163,14 +163,6 @@ toc:
path: /docs/getting-started-guides/azure/
- title: Running Kubernetes on CenturyLink Cloud
path: /docs/getting-started-guides/clc/
- title: Portable Multi-Node Clusters
section:
- title: Installing a Kubernetes Master Node via Docker
path: /docs/getting-started-guides/docker-multinode/master/
- title: Adding a Kubernetes Worker Node via Docker
path: /docs/getting-started-guides/docker-multinode/worker/
- title: Deploying DNS
path: /docs/getting-started-guides/docker-multinode/deployDNS/
- title: Running Kubernetes on Custom Solutions
section:
- title: Creating a Custom Cluster from Scratch
@ -235,6 +227,8 @@ toc:
path: /docs/getting-started-guides/coreos/bare_metal_calico/
- title: Ubuntu Nodes with Calico
path: /docs/getting-started-guides/ubuntu-calico/
- title: Portable Multi-Node Cluster
path: /docs/getting-started-guides/docker-multinode/
- title: Building Large Clusters
path: /docs/admin/cluster-large/
- title: Running in Multiple Zones

View File

@ -0,0 +1,118 @@
---
---
* TOC
{:toc}
## Prerequisites
The only thing you need is a Linux machine with **Docker 1.10.0 or higher**.
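You can check which version you're running with:

```shell
# Should report version 1.10.0 or higher
$ docker --version
```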
## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](k8s-docker.png)
### Bootstrap Docker
This guide uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start `etcd` and `flanneld`, on which the Kubernetes components depend
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
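For illustration, the _bootstrap_ daemon the scripts start for you looks roughly like this (a sketch; the scripts set the exact flags and handle logging):

```shell
# A second Docker daemon on its own socket, with its own graph directory
# and no bridge/iptables rules, so flannel can manage networking freely
$ sudo docker daemon \
    -H unix:///var/run/docker-bootstrap.sock \
    -p /var/run/docker-bootstrap.pid \
    --iptables=false --ip-masq=false --bridge=none \
    --graph=/var/lib/docker-bootstrap
```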
### Versions supported
v1.2.x and v1.3.x are supported versions for this deployment.
v1.3.0 alphas and betas might work, but be sure you know what you're doing if you're trying them out.
### Multi-arch solution
Yes, it's true: you can run this deployment setup seamlessly on `amd64`, `arm`, `arm64` and `ppc64le` hosts.
See this tracking issue for more details: https://github.com/kubernetes/kubernetes/issues/17981
v1.3.0 ships with support for amd64, arm and arm64. ppc64le isn't supported: due to a bug in the Go runtime, `hyperkube` (and only `hyperkube`) isn't built for the stable v1.3.0 release, so this guide can't run on ppc64le. You may still run Kubernetes on ppc64le via custom deployments.
`hyperkube` was pushed for ppc64le at versions `v1.3.0-alpha.3` and `v1.3.0-alpha.4`; feel free to try them out, but there might be some unexpected bugs.
### Options/configuration
The scripts will output something like this when starting:
```shell
+++ [0611 12:50:12] K8S_VERSION is set to: v1.3.0
+++ [0611 12:50:12] ETCD_VERSION is set to: 2.2.5
+++ [0611 12:50:12] FLANNEL_VERSION is set to: 0.5.5
+++ [0611 12:50:12] FLANNEL_IPMASQ is set to: true
+++ [0611 12:50:12] FLANNEL_NETWORK is set to: 10.1.0.0/16
+++ [0611 12:50:12] FLANNEL_BACKEND is set to: udp
+++ [0611 12:50:12] RESTART_POLICY is set to: unless-stopped
+++ [0611 12:50:12] MASTER_IP is set to: 192.168.1.50
+++ [0611 12:50:12] ARCH is set to: amd64
+++ [0611 12:50:12] NET_INTERFACE is set to: eth0
```
Each of these options can be overridden by `export`ing the values before running the script.
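For example, to pin the Kubernetes version, target an ARM board and switch flannel to the `vxlan` backend, export the variables before invoking the script:

```shell
# Override any of the defaults printed above
$ export K8S_VERSION=v1.3.0
$ export ARCH=arm
$ export FLANNEL_BACKEND=vxlan
```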
## Set up the master node
The first step in the process is to initialize the master node.
Clone the `kube-deploy` repo, and run [master.sh](master.sh) on the master machine _with root_:
```shell
$ git clone https://github.com/kubernetes/kube-deploy
$ cd kube-deploy/docker-multinode
$ ./master.sh
```
First, the `bootstrap` docker daemon is started; then `etcd` and `flannel` are started as containers in the bootstrap daemon.
Then, the main docker daemon is restarted; this is an OS/distro-specific task, so if it doesn't work for your distro, feel free to contribute!
Lastly, it launches `kubelet` in the main docker daemon, and the `kubelet` in turn launches the control plane (apiserver, controller-manager and scheduler) as static pods.
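Once the script finishes, you can verify that everything came up by listing the containers in both daemons:

```shell
# etcd and flannel run in the bootstrap daemon
$ sudo docker -H unix:///var/run/docker-bootstrap.sock ps
# kubelet and the control plane pods run in the main daemon
$ sudo docker ps
```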
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
Clone the `kube-deploy` repo, and run [worker.sh](worker.sh) on the worker machine _with root_:
```shell
$ git clone https://github.com/kubernetes/kube-deploy
$ cd kube-deploy/docker-multinode
$ export MASTER_IP=${SOME_IP}
$ ./worker.sh
```
First, the `bootstrap` docker daemon is started; then `flannel` is started as a container in the bootstrap daemon in order to set up the overlay network.
Then, the main docker daemon is restarted, and lastly `kubelet` is launched as a container in the main docker daemon.
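When the worker script completes, the node registers itself with the master; from the master machine you should shortly see it with:

```shell
# Run on the master; the new worker should appear in the list
$ kubectl get nodes
```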
## Addons
kube-dns and the dashboard are deployed automatically with v1.3.0.
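You can confirm that the addons came up by listing the pods in the `kube-system` namespace:

```shell
$ kubectl get pods --namespace=kube-system
```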
### Deploy DNS manually for v1.2.x
Just specify the architecture, and deploy via these commands:
```shell
# Possible options: amd64, arm, arm64 and ppc64le
$ export ARCH=amd64
# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create namespace kube-system
$ sed -e "s/ARCH/${ARCH}/g;" skydns.yaml | kubectl create -f -
```
### Test if DNS works
Follow [this link](https://releases.k8s.io/release-1.2/cluster/addons/dns#how-do-i-test-if-it-is-working) to check it out.
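As a quick smoke test (a sketch of the approach described at that link), create a throwaway `busybox` pod and use it to resolve the `kubernetes` service:

```shell
# Create a minimal pod that just sleeps
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
# Once it's Running, DNS lookups from inside the cluster should succeed
$ kubectl exec busybox -- nslookup kubernetes.default
```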

View File

@ -1,36 +0,0 @@
---
---
### Get the template file
First of all, download the dns template
[skydns template](/docs/getting-started-guides/docker-multinode/skydns.yaml.in)
### Set environment variables
Then you need to set the `DNS_REPLICAS`, `DNS_DOMAIN` and `DNS_SERVER_IP` environment variables:
```shell
$ export DNS_REPLICAS=1
$ export DNS_DOMAIN=cluster.local # specify in startup parameter `--cluster-domain` for containerized kubelet
$ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns` for containerized kubelet
```
### Replace the corresponding value in the template and create the pod
```shell{% raw %}
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{{ pillar\['dns_server'\] }}/${DNS_SERVER_IP}/g" skydns.yaml.in > ./skydns.yaml
# If the kube-system namespace isn't already created, create it
$ kubectl get ns
$ kubectl create namespace kube-system
$ kubectl create -f ./skydns.yaml{% endraw %}
```
### Test if DNS works
Follow [this link](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns#how-do-i-test-if-it-is-working) to check it out.

View File

@ -1,99 +0,0 @@
---
---
_Note_:
These instructions are significantly more advanced than the [single node](/docs/getting-started-guides/docker) instructions. If you are
interested in just starting to explore Kubernetes, we recommend that you start there.
* TOC
{:toc}
## Prerequisites
The only thing you need is a machine with **Docker 1.7.1 or higher**.
## Overview
This guide will set up a 2-node Kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Single Node on Docker](/images/docs/k8s-docker.png)
### Bootstrap Docker
This guide also uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
You can specify the version on every node before install:
```shell
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
Otherwise, we'll use the latest `hyperkube` image as the default Kubernetes version.
## Master Node
The first step in the process is to initialize the master node.
The `MASTER_IP` step here is optional; it defaults to the first value of `hostname -I`.
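For reference, the default is equivalent to:

```shell
# First address reported by the host
$ hostname -I | awk '{print $1}'
```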
Clone the Kubernetes repo, and run [master.sh](/docs/getting-started-guides/docker-multinode/master.sh) on the master machine _with root_:
```shell
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./master.sh
```
`Master done!`
See [here](/docs/getting-started-guides/docker-multinode/master) for a detailed explanation.
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
Clone the Kubernetes repo, and run [worker.sh](/docs/getting-started-guides/docker-multinode/worker.sh) on the worker machine _with root_:
```shell
$ export MASTER_IP=<your_master_ip (e.g. 1.2.3.4)>
$ cd kubernetes/docs/getting-started-guides/docker-multinode/
$ ./worker.sh
```
`Worker done!`
See [here](/docs/getting-started-guides/docker-multinode/worker) for a detailed explanation.
## Deploy DNS
See [here](/docs/getting-started-guides/docker-multinode/deployDNS) for instructions.
## Testing your cluster
Once your cluster has been created, you can [test it out](/docs/getting-started-guides/docker-multinode/testing).
For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
Docker Multi Node | custom | N/A | flannel | [docs](/docs/getting-started-guides/docker-multinode) | | Project ([@brendandburns](https://github.com/brendandburns))
For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart.

View File

@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: kube-system

View File

@ -1,244 +0,0 @@
---
---
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine
is `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.1"
Environment variables used:
```shell
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.1)>
export ETCD_VERSION=<your_etcd_version (e.g. 2.2.1)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
There are two main phases to installing the master:
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
## Setting up flanneld and etcd
_Note_:
This guide expects **Docker 1.7.1 or higher**.
### Setup Docker Bootstrap
We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd, on which it relies) will run inside
Docker containers. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
`--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.
Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```shell
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
### Startup etcd for flannel and the API server to use
Run:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
--advertise-client-urls=http://${MASTER_IP}:4001 \
--data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run \
--net=host \
gcr.io/google_containers/etcd-amd64:${ETCD_VERSION} \
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our Pods of containers.
Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker.
Taking Docker down is system-dependent; it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or
```shell
sudo service docker stop
```
or it may be something else.
#### Run flannel
Now run flanneld itself:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq=${FLANNEL_IPMASQ} \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash: the container ID. Copy this hash.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in `/etc/default/docker` or `/etc/systemd/system/docker.service`, or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
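On Debian/Ubuntu, for example, that could look like this (a sketch matching what the automated scripts in this repo do; substitute the `FLANNEL_SUBNET` and `FLANNEL_MTU` values you read out of `subnet.env` above):

```shell
# Append flannel's subnet and MTU to the Docker daemon options
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" | sudo tee -a /etc/default/docker
```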
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
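If it's missing, install it with your distribution's package manager, for example:

```shell
sudo apt-get install bridge-utils    # Debian/Ubuntu
sudo yum install bridge-utils        # CentOS/Fedora/Amazon Linux
```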
#### Restart Docker
Again, this is system-dependent; it may be:
```shell
sudo /etc/init.d/docker start
```
or it may be:
```shell
systemctl start docker
```
## Starting the Kubernetes Master
OK, now that your networking is set up, you can start up Kubernetes. This is the same as the single-node case; we will use the "main" instance of the Docker daemon for the Kubernetes components.
```shell
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
> Note that `--cluster-dns` and `--cluster-domain` are used to deploy DNS; feel free to omit them if DNS is not needed.
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary for `${K8S_VERSION}` ({{page.version}}) and make it available by editing your PATH environment variable.
([OS X/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/amd64/kubectl))
([OS X/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/darwin/386/kubectl))
([linux/amd64](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/amd64/kubectl))
([linux/386](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/386/kubectl))
([linux/arm](http://storage.googleapis.com/kubernetes-release/release/{{page.version}}.0-alpha.7/bin/linux/arm/kubectl))
For example, OS X:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/darwin/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Linux:
```shell
$ wget http://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl
$ chmod 755 kubectl
$ PATH=$PATH:`pwd`
```
Now you can list the nodes:
```shell
kubectl get nodes
```
This should print something like:
```shell
NAME        LABELS                             STATUS
127.0.0.1   kubernetes.io/hostname=127.0.0.1   Ready
```
If the status of the node is `NotReady` or `Unknown`, please check that all of the containers you created are successfully running.
If all else fails, ask questions on [Slack](/docs/troubleshooting/#slack).
### Next steps
Move on to [adding one or more workers](/docs/getting-started-guides/docker-multinode/worker/) or [deploying DNS](/docs/getting-started-guides/docker-multinode/deployDNS/).

View File

@ -1,243 +0,0 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A script to set up the k8s master in docker containers.
# Authors @wizard_cxy @resouer
set -e
# Make sure docker daemon is running
if ( ! ps -ef | grep "/usr/bin/docker" | grep -v 'grep' &> /dev/null ); then
echo "Docker is not running on this machine!"
exit 1
fi
# Make sure k8s version env is properly set
K8S_VERSION=${K8S_VERSION:-"1.2.2"}
ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
FLANNEL_IPMASQ=${FLANNEL_IPMASQ:-"true"}
FLANNEL_IFACE=${FLANNEL_IFACE:-"eth0"}
ARCH=${ARCH:-"amd64"}
# Run as root
if [ "$(id -u)" != "0" ]; then
echo >&2 "Please run as root"
exit 1
fi
# Make sure master ip is properly set
if [ -z ${MASTER_IP} ]; then
MASTER_IP=$(hostname -I | awk '{print $1}')
fi
echo "K8S_VERSION is set to: ${K8S_VERSION}"
echo "ETCD_VERSION is set to: ${ETCD_VERSION}"
echo "FLANNEL_VERSION is set to: ${FLANNEL_VERSION}"
echo "FLANNEL_IFACE is set to: ${FLANNEL_IFACE}"
echo "FLANNEL_IPMASQ is set to: ${FLANNEL_IPMASQ}"
echo "MASTER_IP is set to: ${MASTER_IP}"
echo "ARCH is set to: ${ARCH}"
# Check if a command is valid
command_exists() {
command -v "$@" > /dev/null 2>&1
}
lsb_dist=""
# Detect the OS distro; we support ubuntu, debian, amzn, centos and fedora
detect_lsb() {
# TODO: remove this when ARM support is fully merged
case "$(uname -m)" in
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
esac
if command_exists lsb_release; then
lsb_dist="$(lsb_release -si)"
fi
if [ -z ${lsb_dist} ] && [ -r /etc/lsb-release ]; then
lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")"
fi
if [ -z ${lsb_dist} ] && [ -r /etc/debian_version ]; then
lsb_dist='debian'
fi
if [ -z ${lsb_dist} ] && [ -r /etc/fedora-release ]; then
lsb_dist='fedora'
fi
if [ -z ${lsb_dist} ] && [ -r /etc/os-release ]; then
lsb_dist="$(. /etc/os-release && echo "$ID")"
fi
lsb_dist="$(echo ${lsb_dist} | tr '[:upper:]' '[:lower:]')"
case "${lsb_dist}" in
amzn|centos|debian|ubuntu|fedora)
;;
*)
echo "Error: We currently only support ubuntu|debian|amzn|centos|fedora."
exit 1
;;
esac
}
# Start the bootstrap daemon
# TODO: do not start docker-bootstrap if it's already running
bootstrap_daemon() {
# Detecting docker version so we could run proper docker_daemon command
[[ $(eval "docker --version") =~ ([0-9][.][0-9][.][0-9]*) ]] && version="${BASH_REMATCH[1]}"
local got=$(echo -e "${version}\n1.8.0" | sed '/^$/d' | sort -nr | head -1)
if [[ "${got}" = "${version}" ]]; then
docker_daemon="docker -d"
else
docker_daemon="docker daemon"
fi
${docker_daemon} \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--graph=/var/lib/docker-bootstrap \
2> /var/log/docker-bootstrap.log \
1> /dev/null &
sleep 5
}
# Start k8s components in containers
DOCKER_CONF=""
start_k8s(){
# Start etcd
docker -H unix:///var/run/docker-bootstrap.sock run \
--restart=on-failure \
--net=host \
-d \
gcr.io/google_containers/etcd-${ARCH}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001,http://${MASTER_IP}:4001 \
--advertise-client-urls=http://${MASTER_IP}:4001 \
--data-dir=/var/etcd/data
sleep 5
# Set flannel net config
docker -H unix:///var/run/docker-bootstrap.sock run \
--net=host gcr.io/google_containers/etcd-${ARCH}:${ETCD_VERSION} \
etcdctl \
set /coreos.com/network/config \
'{ "Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}'
# iface may change to a private network interface; eth0 is the default
flannelCID=$(docker -H unix:///var/run/docker-bootstrap.sock run \
--restart=on-failure \
-d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq="${FLANNEL_IPMASQ}" \
--iface="${FLANNEL_IFACE}")
sleep 8
# Copy flannel env out and source it on the host
docker -H unix:///var/run/docker-bootstrap.sock \
cp ${flannelCID}:/run/flannel/subnet.env .
source subnet.env
# Configure docker net settings, then restart it
case "${lsb_dist}" in
amzn)
DOCKER_CONF="/etc/sysconfig/docker"
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && service docker restart
;;
centos|fedora)
DOCKER_CONF="/etc/sysconfig/docker"
sed -i "/^OPTIONS=/ s|\( --mtu=.*\)\?'$| --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}'|" ${DOCKER_CONF}
if ! command_exists ifconfig; then
yum -y -q install net-tools
fi
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
ubuntu|debian)
DOCKER_CONF="/etc/default/docker"
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
apt-get install bridge-utils
brctl delbr docker0
service docker stop
while [ `ps aux | grep /usr/bin/docker | grep -v grep | wc -l` -gt 0 ]; do
echo "Waiting for docker to terminate"
sleep 1
done
service docker start
;;
*)
echo "Unsupported operations system ${lsb_dist}"
exit 1
;;
esac
# sleep a little bit
sleep 5
# Start kubelet and then start master components as pods
docker run \
--net=host \
--pid=host \
--privileged \
--restart=on-failure \
-d \
-v /sys:/sys:ro \
-v /var/run:/var/run:rw \
-v /:/rootfs:ro \
-v /var/lib/docker/:/var/lib/docker:rw \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube kubelet \
--address=0.0.0.0 \
--allow-privileged=true \
--enable-server \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--containerized \
--v=2
}
echo "Detecting your OS distro ..."
detect_lsb
echo "Starting bootstrap docker ..."
bootstrap_daemon
echo "Starting k8s ..."
start_k8s
echo "Master done!"

View File

@ -1,136 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v10
namespace: kube-system
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ pillar['dns_replicas'] }}
selector:
k8s-app: kube-dns
version: v10
template:
metadata:
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.12
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/kube2sky"
- --domain={{ pillar['dns_domain'] }}
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain={{ pillar['dns_domain'] }}.
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 1
timeoutSeconds: 5
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -1,68 +0,0 @@
---
---
To validate that your node(s) have been added, run:
```shell
kubectl get nodes
```
That should show something like:
```shell
NAME           LABELS                                STATUS
10.240.99.26   kubernetes.io/hostname=10.240.99.26   Ready
127.0.0.1      kubernetes.io/hostname=127.0.0.1      Ready
```
If the status of any node is `Unknown` or `NotReady`, your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on [Slack](/docs/troubleshooting/#slack).
### Run an application
```shell
kubectl run nginx --image=nginx --port=80
```
Now run `docker ps`; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
### Expose it as a service
```shell
kubectl expose rc nginx --port=80
```
Run the following command to obtain the IP of the service we just created. There are two IPs: the first one is internal (CLUSTER_IP), and the second one is the external, load-balanced IP.
```shell
kubectl get svc nginx
```
Alternatively, you can obtain only the first IP (CLUSTER_IP) by running:
```shell
{% raw %}kubectl get svc nginx --template={{.spec.clusterIP}}{% endraw %}
```
Hit the webserver with the first IP (CLUSTER_IP):
```shell
curl <insert-cluster-ip-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
### Scaling
Now try to scale up the nginx you created before:
```shell
kubectl scale rc nginx --replicas=3
```
And list the pods
```shell
kubectl get pods
```
You should see pods landing on the newly added machine.

View File

@ -1,179 +0,0 @@
---
---
These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that the IP address of the master you created in the [master instructions](/docs/getting-started-guides/docker-multinode/master/) is stored in `${MASTER_IP}`. We'll need to run several versioned Kubernetes components, so we'll assume that the version we want
to run is `${K8S_VERSION}`, which should hold a released version of Kubernetes >= "1.2.0-alpha.6"
Environment variables used:
```shell
export MASTER_IP=<the_master_ip_here>
export K8S_VERSION=<your_k8s_version (e.g. 1.2.0-alpha.6)>
export FLANNEL_VERSION=<your_flannel_version (e.g. 0.5.5)>
export FLANNEL_IFACE=<flannel_interface (defaults to eth0)>
export FLANNEL_IPMASQ=<flannel_ipmasq_flag (defaults to true)>
```
For each worker node, there are three steps:
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start Kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
_Note_:
This guide expects **Docker 1.7.1 or higher**.
#### Set up a bootstrap docker
As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```shell
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_If you have Docker 1.8.0 or higher run this instead_
```shell
sudo sh -c 'docker daemon -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted
across reboots and failures.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker.
Taking Docker down is system-dependent; it may be:
```shell
sudo /etc/init.d/docker stop
```
or
```shell
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq=${FLANNEL_IPMASQ} \
--etcd-endpoints=http://${MASTER_IP}:4001 \
--iface=${FLANNEL_IFACE}
```
The previous command should have printed a really long hash: the container ID. Copy this hash.
Now get the subnet settings from flannel:
```shell
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
This may be in `/etc/default/docker` or `/etc/systemd/system/docker.service`, or it may be elsewhere.
Regardless, you need to add the following to the docker command line:
```shell
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge
Docker creates a bridge named `docker0` by default. You need to remove this:
```shell
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the `bridge-utils` package for the `brctl` binary.
#### Restart Docker
Again, this is system-dependent; it may be:
```shell
sudo /etc/init.d/docker start
```
or it may be:
```shell
systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
Again, this is similar to the above, but `--api-servers` now points to the master we set up at the beginning.
```shell
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://${MASTER_IP}:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`.
```shell
sudo docker run -d \
--net=host \
--privileged \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
```
### Next steps
Move on to [testing your cluster](/docs/getting-started-guides/docker-multinode/testing/) or [adding another node](#adding-a-kubernetes-worker-node-via-docker).

View File

@ -1,231 +0,0 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A script to set up the k8s worker in docker containers.
# Authors @wizard_cxy @resouer
set -e
# Make sure docker daemon is running
if ( ! ps -ef | grep "/usr/bin/docker" | grep -v 'grep' &> /dev/null ); then
echo "Docker is not running on this machine!"
exit 1
fi
# Make sure k8s version env is properly set
K8S_VERSION=${K8S_VERSION:-"1.2.0-alpha.7"}
FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
FLANNEL_IFACE=${FLANNEL_IFACE:-"eth0"}
FLANNEL_IPMASQ=${FLANNEL_IPMASQ:-"true"}
ARCH=${ARCH:-"amd64"}
# Run as root
if [ "$(id -u)" != "0" ]; then
echo >&2 "Please run as root"
exit 1
fi
# Make sure master ip is properly set
if [ -z ${MASTER_IP} ]; then
echo "Please export MASTER_IP in your env"
exit 1
fi
echo "K8S_VERSION is set to: ${K8S_VERSION}"
echo "FLANNEL_VERSION is set to: ${FLANNEL_VERSION}"
echo "FLANNEL_IFACE is set to: ${FLANNEL_IFACE}"
echo "FLANNEL_IPMASQ is set to: ${FLANNEL_IPMASQ}"
echo "MASTER_IP is set to: ${MASTER_IP}"
echo "ARCH is set to: ${ARCH}"
# Check if a command is valid
command_exists() {
command -v "$@" > /dev/null 2>&1
}
lsb_dist=""
# Detect the OS distro; we support ubuntu, debian, amzn and centos
detect_lsb() {
case "$(uname -m)" in
*64)
;;
*)
echo "Error: We currently only support 64-bit platforms."
exit 1
;;
esac
if command_exists lsb_release; then
lsb_dist="$(lsb_release -si)"
fi
if [ -z ${lsb_dist} ] && [ -r /etc/lsb-release ]; then
lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")"
fi
if [ -z ${lsb_dist} ] && [ -r /etc/debian_version ]; then
lsb_dist='debian'
fi
if [ -z ${lsb_dist} ] && [ -r /etc/fedora-release ]; then
lsb_dist='fedora'
fi
if [ -z ${lsb_dist} ] && [ -r /etc/os-release ]; then
lsb_dist="$(. /etc/os-release && echo "$ID")"
fi
lsb_dist="$(echo ${lsb_dist} | tr '[:upper:]' '[:lower:]')"
case "${lsb_dist}" in
amzn|centos|debian|ubuntu)
;;
*)
echo "Error: We currently only support ubuntu|debian|amzn|centos."
exit 1
;;
esac
}
# Start the bootstrap daemon
bootstrap_daemon() {
# Detecting docker version so we could run proper docker_daemon command
[[ $(eval "docker --version") =~ ([0-9][.][0-9][.][0-9]*) ]] && version="${BASH_REMATCH[1]}"
local got=$(echo -e "${version}\n1.8.0" | sed '/^$/d' | sort -nr | head -1)
if [[ "${got}" = "${version}" ]]; then
docker_daemon="docker -d"
else
docker_daemon="docker daemon"
fi
${docker_daemon} \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--graph=/var/lib/docker-bootstrap \
2> /var/log/docker-bootstrap.log \
1> /dev/null &
sleep 5
}
DOCKER_CONF=""
# Start k8s components in containers
start_k8s() {
# Start flannel
flannelCID=$(docker -H unix:///var/run/docker-bootstrap.sock run \
-d \
--restart=on-failure \
--net=host \
--privileged \
-v /dev/net:/dev/net \
quay.io/coreos/flannel:${FLANNEL_VERSION} \
/opt/bin/flanneld \
--ip-masq="${FLANNEL_IPMASQ}" \
--etcd-endpoints=http://${MASTER_IP}:4001 \
--iface="${FLANNEL_IFACE}")
sleep 10
# Copy flannel env out and source it on the host
docker -H unix:///var/run/docker-bootstrap.sock \
cp ${flannelCID}:/run/flannel/subnet.env .
source subnet.env
# Configure docker net settings, then restart it
case "${lsb_dist}" in
centos)
DOCKER_CONF="/etc/sysconfig/docker"
sed -i "/^OPTIONS=/ s|\( --mtu=.*\)\?'$| --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}'|" ${DOCKER_CONF}
if ! command_exists ifconfig; then
yum -y -q install net-tools
fi
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && systemctl restart docker
;;
amzn)
DOCKER_CONF="/etc/sysconfig/docker"
echo "OPTIONS=\"\$OPTIONS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
yum -y -q install bridge-utils && brctl delbr docker0 && service docker restart
;;
ubuntu|debian) # TODO: today ubuntu uses systemd. Handle that too
DOCKER_CONF="/etc/default/docker"
echo "DOCKER_OPTS=\"\$DOCKER_OPTS --mtu=${FLANNEL_MTU} --bip=${FLANNEL_SUBNET}\"" | tee -a ${DOCKER_CONF}
ifconfig docker0 down
apt-get install bridge-utils
brctl delbr docker0
service docker stop
while [ `ps aux | grep /usr/bin/docker | grep -v grep | wc -l` -gt 0 ]; do
echo "Waiting for docker to terminate"
sleep 1
done
service docker start
;;
*)
echo "Unsupported operations system ${lsb_dist}"
exit 1
;;
esac
# sleep a little bit
sleep 5
# Start kubelet & proxy in container
# TODO: Use secure port for communication
docker run \
--net=host \
--pid=host \
--privileged \
--restart=on-failure \
-d \
-v /sys:/sys:ro \
-v /var/run:/var/run:rw \
-v /:/rootfs:ro \
-v /var/lib/docker/:/var/lib/docker:rw \
-v /var/lib/kubelet/:/var/lib/kubelet:rw \
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://${MASTER_IP}:8080 \
--address=0.0.0.0 \
--enable-server \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--containerized \
--v=2
docker run \
-d \
--net=host \
--privileged \
--restart=on-failure \
gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
/hyperkube proxy \
--master=http://${MASTER_IP}:8080 \
--v=2
}
echo "Detecting your OS distro ..."
detect_lsb
echo "Starting bootstrap docker ..."
bootstrap_daemon
echo "Starting k8s ..."
start_k8s
echo "Worker done!"