This commit is contained in:
fabriziopandini 2019-03-21 22:06:49 +01:00
parent 72914baa68
commit 9bd09c3234
45 changed files with 5168 additions and 209 deletions

14
.gitignore vendored Normal file

@@ -0,0 +1,14 @@
# OSX trash
.DS_Store
# Eclipse files
.classpath
.project
.settings/**
# Files generated by JetBrains IDEs, e.g. IntelliJ IDEA
.idea/
*.iml
# Vscode files
.vscode

3
go.mod Normal file

@@ -0,0 +1,3 @@
module k8s.io/kubeadm
go 1.12

2
kinder/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
# tmp folder for testing node variants
/tmp

9
kinder/OWNERS Normal file

@@ -0,0 +1,9 @@
# See the OWNERS file documentation:
# https://github.com/kubernetes/community/blob/master/contributors/devel/owners.md
approvers:
- fabriziopandini
- neolit123
reviewers:
labels:
- area/kubeadm
- sig/cluster-lifecycle


@@ -5,7 +5,7 @@ kinder is an example of [kind](https://github.com/kubernetes-sigs/kind) used as
All the kind commands will be available in kinder, side by side with additional commands
designed for helping kubeadm contributors.
**kinder is a work in progress. Test it! Break it! Send feeback!**
**kinder is a work in progress. Test it! Break it! Send feedback!**
## Prerequisites
@@ -21,7 +21,7 @@ git --version
### Install Go
To work on kinds codebase you will need [Go](https://golang.org/doc/install).
To work with kinder you will need [Go](https://golang.org/doc/install).
Install or upgrade [Go using the instructions for your operating system](https://golang.org/doc/install).
You can check if Go is in your system with the following command:
@@ -35,10 +35,7 @@ in lower versions.
## Getting started
Read the [kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/) first, because kinder
is built using kind as a library.
Clone the kubeadm repo:
Clone the kubeadm repository:
```bash
git clone https://github.com/kubernetes/kubeadm.git
@@ -53,205 +50,17 @@ GO111MODULE=on go install
This will put kinder in $(go env GOPATH)/bin.
## Create a test cluster
## Usage
You can create a cluster in kinder using `kinder create cluster`, which is a wrapper around the `kind create cluster` command.
Read the [kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/) first.
However, additional flags are implemented for enabling the following use cases:
Then [Prepare for tests](doc/prepare-for-tests.md)
### Create *only* nodes
Follow the how to guides:
By default kinder stops the cluster creation process before executing `kubeadm init` and `kubeadm join`;
This will give you nodes ready for installing Kubernetes and, more specifically,
- [Getting started (test single control-plane)](doc/getting-started.md)
- [Testing HA](doc/test-HA.md)
- [Testing upgrades](doc/test-upgrades.md)
- [Testing X on Y](doc/test-XonY.md)
- the necessary prerequisites already installed on all nodes
- a pre-built kubeadm config file in `/kind/kubeadm.conf`
- in case more than one control-plane node exists in the cluster, a pre-configured external load balancer
If instead you want to revert to the default kind behaviour, you can use the `--setup-kubernetes` flag:
```bash
kinder create cluster --setup-kubernetes=true
```
### Testing different cluster topologies
You can use the `--control-plane-nodes <num>` flag and/or the `--worker-nodes <num>` flag
as a shortcut for creating different cluster topologies, e.g.
```bash
# create a cluster with two worker nodes
kinder create cluster --worker-nodes=2
# create a cluster with two control-plane nodes
kinder create cluster --control-plane-nodes=2
```
Please note that a load balancer node will be automatically created when there is more than
one control-plane node; if necessary, you can use the `--external-load-balancer` flag to explicitly
request the creation of an external load balancer node.
More sophisticated cluster topologies can be achieved using the kind config file, e.g. customizing
the kubeadm config or specifying volume mounts. See the [kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/#configuring-your-kind-cluster)
for more details.
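As a sketch, a config file combining a custom topology with a kubeadm-config customization might look like the following; the `v1alpha2` field names (`role`, `replicas`, `kubeadmConfigPatches`) mirror the ones used by the kinder code in this commit, while the patch content itself is just an illustrative example:

```yaml
# hypothetical kind config: two control-plane nodes and two workers,
# with a kubeadm config patch applied to the control-plane nodes
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
nodes:
- role: control-plane
  replicas: 2
  kubeadmConfigPatches:
  - |
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    metadata:
      name: config
    dns:
      type: "kube-dns"
- role: worker
  replicas: 2
```

With such a file you would run `kinder create cluster --config=<file>` instead of the topology flags; the two are mutually exclusive.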
### Testing Kubernetes cluster variants
kinder gives you shortcuts for testing Kubernetes cluster variants supported by kubeadm:
```bash
# create a cluster using kube-dns instead of CoreDNS
kinder create cluster --kube-dns
# create a cluster using an external etcd
kinder create cluster --external-etcd
```
## Working on nodes
You can use `docker exec` and `docker cp` to work on nodes.
```bash
# check the content of the /kind/kubeadm.conf file
docker exec kind-control-plane cat /kind/kubeadm.conf
# execute a command on the kind-control-plane container (the control-plane node)
docker exec kind-control-plane \
kubeadm init --config=/kind/kubeadm.conf --ignore-preflight-errors=all
# override the kubeadm binary on the kind-control-plane container
# with a locally built kubeadm binary
docker cp \
$working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm \
kind-control-plane:/usr/bin/kubeadm
```
On top of that, kinder offers you three commands for helping you work on nodes:
- `kinder do`, allowing you to execute actions (repetitive tasks/sequences of commands) on nodes
- `kinder exec`, a topology aware wrapper on `docker exec`
- `kinder cp`, a topology aware wrapper on `docker cp`
### kinder do
`kinder do` is the kinder Swiss Army knife.
It allows you to execute actions (repetitive tasks/sequences of commands) on one or more nodes
in the local Kubernetes cluster.
```bash
# execute kubeadm init, install the CNI plugin and copy the kubeconfig file to the host
kinder do kubeadm-init
```
All the actions implemented in kinder are by design "developer friendly", in the sense that
all the command output will be echoed and all the steps will be documented.
The following actions are available:
| action | Notes |
| --------------- | ------------------------------------------------------------ |
| kubeadm-init | Executes the kubeadm-init workflow, installs the CNI plugin and then copies the kubeconfig file to the host machine. Available options are:<br /> `--use-phases` triggers execution of the init workflow by invoking single phases. <br />`--automatic-copy-certs` instructs kubeadm to use the automatic copy certs feature.|
| manual-copy-certs | Implements the manual copy of certificates to be shared across control-plane nodes (n.b. manual means not managed by kubeadm). Available options are:<br /> `--only-node` to execute this action only on a specific node. |
| kubeadm-join | Executes the kubeadm-join workflow both on secondary control-plane nodes and on worker nodes. Available options are:<br /> `--use-phases` triggers execution of the join workflow by invoking single phases.<br />`--automatic-copy-certs` instructs kubeadm to use the automatic copy certs feature.<br /> `--only-node` to execute this action only on a specific node. |
| kubeadm-upgrade | Executes the kubeadm upgrade workflow, upgrading Kubernetes. Available options are:<br /> `--upgrade-version` for defining the target Kubernetes version.<br />`--only-node` to execute this action only on a specific node. |
| kubeadm-reset | Executes the kubeadm-reset workflow on all the nodes. Available options are:<br /> `--only-node` to execute this action only on a specific node. |
| cluster-info | Returns a summary of cluster info including<br />- List of nodes<br />- list of pods<br />- list of images used by pods<br />- list of etcd members |
| smoke-test | Implements a non-exhaustive set of tests that aim at ensuring that the most important functions of a Kubernetes cluster work |
### kinder exec
`kinder exec` provides a topology aware wrapper on `docker exec`.
```bash
# check the kubeadm version on all the nodes
kinder exec @all -- kubeadm version
# run kubeadm join on all the worker nodes
kinder exec @w* -- kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef ...
# run kubectl command inside the bootstrap control-plane node
kinder exec @cp1 -- kubectl --kubeconfig=/etc/kubernetes/admin.conf cluster-info
```
The following node selectors are available:
| selector | return the following nodes |
| -------- | ------------------------------------------------------------ |
| @all | all the Kubernetes nodes in the cluster.<br />(control-plane and worker nodes are included; the load balancer and etcd are not) |
| @cp* | all the control-plane nodes |
| @cp1 | the bootstrap-control plane node |
| @cpN | the secondary control-plane nodes |
| @w* | all the worker nodes |
| @lb | the external load balancer |
| @etcd | the external etcd |
As an alternative to node selectors, the node name (the container name without the cluster name prefix) can be used to target actions at a specific node.
```bash
# run kubeadm join on the first worker node only
kinder exec worker1 -- kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef ...
```
### kinder cp
`kinder cp` provides a topology aware wrapper on `docker cp`. The following features are supported:
```bash
# copy to the host the /kind/kubeadm.conf file existing on the bootstrap control-plane node
kinder cp @cp1:kind/kubeadm.conf kubeadm.conf
# copy to the bootstrap control-plane node a local kubeadm.conf file
kinder cp kubeadm.conf @cp1:kind/kubeadm.conf
# override the kubeadm binary on all the nodes with a locally built kubeadm binary
kinder cp \
$working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm \
@all:/usr/bin/kubeadm
```
> Please note that `docker cp` or `kinder cp` allows you to replace the kubeadm binary on existing nodes. If you want to replace the kubeadm binary on nodes that you create in the future, please check the altering node images paragraph.
## Altering node images
Kind can be extremely efficient when the node image contains all the necessary artifacts.
kinder allows kubeadm contributors to exploit this feature by implementing the `kinder build node-variant` command, which takes a node image and builds variants by:
- Adding new pre-loaded images that will be made available on all nodes at cluster creation time
- Replacing the kubeadm binary installed in the cluster, e.g. with a locally built version of kubeadm
- Adding deb packages for a second Kubernetes version to be used for upgrade testing
The above options can be combined in one command, if necessary.
### Add images
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-images $my-local-images/nginx.tar
```
Either a single file or a folder can be used as an argument for `--with-images`, but only image tar files will be considered; image tar files will be placed in a well-known folder, and kind(er) will load them during the initialization of each node.
### Replace kubeadm binary
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-kubeadm $working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm
```
> Please note that replacing the kubeadm binary in the node image only affects nodes that you create in the future; if you want to replace the kubeadm binary on existing nodes, you should use `docker cp` or `kinder cp` instead.
### Add upgrade packages
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-upgrade-packages $my-local-packages/v1.12.2/
```
Either a single file or a folder can be used as an argument for `--with-upgrade-packages`, but only deb packages will be considered; deb files will be placed in a well-known folder, and the kubeadm-upgrade action will use them during the upgrade sequence.
Or have a look at the [Kinder reference](doc/reference.md)


@@ -0,0 +1,43 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package build implements the `build` command
package build
import (
"github.com/spf13/cobra"
"sigs.k8s.io/kind/cmd/kind/build/baseimage"
"sigs.k8s.io/kind/cmd/kind/build/nodeimage"
"k8s.io/kubeadm/kinder/cmd/kinder/build/nodevariant"
)
// NewCommand returns a new cobra.Command for building
func NewCommand() *cobra.Command {
cmd := &cobra.Command{
Args: cobra.NoArgs,
// TODO(bentheelder): more detailed usage
Use: "build",
Short: "Build one of [base-image, node-image, node-variant]",
Long: "Build the base node image (base-image), the node image (node-image), or node image variants (node-variant)",
}
// add subcommands
cmd.AddCommand(baseimage.NewCommand())
cmd.AddCommand(nodeimage.NewCommand())
cmd.AddCommand(nodevariant.NewCommand())
return cmd
}


@@ -0,0 +1,92 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package nodevariant
import (
"github.com/pkg/errors"
"github.com/spf13/cobra"
"k8s.io/kubeadm/kinder/pkg/build/alter"
"sigs.k8s.io/kind/pkg/build/node"
)
type flagpole struct {
Image string
BaseImage string
ImageTars []string
UpgradeBinaries string
Kubeadm string
}
// NewCommand returns a new cobra.Command for building the node image
func NewCommand() *cobra.Command {
flags := &flagpole{}
cmd := &cobra.Command{
Args: cobra.NoArgs,
Use: "node-variant",
Short: "build the node image variant",
Long: "build the variant for a node image by adding packages, images or replacing the kubeadm binary",
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
}
cmd.Flags().StringVar(
&flags.Image, "image",
node.DefaultImage,
"name:tag of the resulting image to be built",
)
cmd.Flags().StringVar(
&flags.BaseImage, "base-image",
node.DefaultImage,
"name:tag of the base image to use for the build",
)
cmd.Flags().StringSliceVar(
&flags.ImageTars, "with-images",
nil,
"image tar file or folder with image tar files to be added to the image",
)
cmd.Flags().StringVar(
&flags.UpgradeBinaries, "with-upgrade-binaries",
"",
"path to a folder with kubernetes binaries [kubelet, kubeadm, kubectl] to be used for testing the kubeadm-upgrade workflow",
)
cmd.Flags().StringVar(
&flags.Kubeadm, "with-kubeadm",
"",
"override the kubeadm binary existing in the image with the given file",
)
return cmd
}
func runE(flags *flagpole, cmd *cobra.Command, args []string) error {
ctx, err := alter.NewContext(
// base build options
alter.WithImage(flags.Image),
alter.WithBaseImage(flags.BaseImage),
// bits to be added to the image
alter.WithImageTars(flags.ImageTars),
alter.WithUpgradeBinaries(flags.UpgradeBinaries),
alter.WithKubeadm(flags.Kubeadm),
)
if err != nil {
return errors.Wrap(err, "error creating alter context")
}
if err := ctx.Alter(); err != nil {
return errors.Wrap(err, "error altering node image")
}
return nil
}


@@ -0,0 +1,83 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cp implements the `cp` command
package cp
import (
"github.com/pkg/errors"
"github.com/spf13/cobra"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster"
)
type flagpole struct {
Name string
}
// NewCommand returns a new cobra.Command for exec
func NewCommand() *cobra.Command {
flags := &flagpole{}
cmd := &cobra.Command{
Args: cobra.ExactArgs(2),
Use: "cp [flags] [NODE_NAME|NODE_SELECTOR:]SRC_PATH DEST_PATH |-\n" +
" kinder cp [flags] SRC_PATH [NODE_NAME|NODE_SELECTOR:]DEST_PATH\n\n" +
"Args:\n" +
" NODE_NAME is the container name without the cluster name prefix\n" +
" NODE_SELECTOR can be one of:\n" +
" @all all the control-plane and worker nodes \n" +
" @cp* all the control-plane nodes \n" +
" @cp1 the bootstrap-control plane node \n" +
" @cpN the secondary control-plane nodes \n" +
" @w* all the worker nodes\n" +
" @lb the external load balancer\n" +
" @etcd the external etcd",
Short: "Copy files/folders between a node and the local filesystem",
Long: "kinder cp is a \"topology aware\" wrapper on docker cp",
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
}
cmd.Flags().StringVar(&flags.Name, "name", cluster.DefaultName, "cluster context name")
return cmd
}
func runE(flags *flagpole, cmd *cobra.Command, args []string) error {
// Check if the cluster name already exists
known, err := cluster.IsKnown(flags.Name)
if err != nil {
return err
}
if !known {
return errors.Errorf("a cluster with the name %q does not exist", flags.Name)
}
// create a cluster context from current nodes
ctx := cluster.NewContext(flags.Name)
kcfg, err := kcluster.NewKContext(ctx)
if err != nil {
return errors.Wrap(err, "failed to create cluster context")
}
err = kcfg.Copy(args[0], args[1])
if err != nil {
return errors.Wrap(err, "failed to copy files to/from cluster nodes")
}
return nil
}


@@ -0,0 +1,231 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cluster implements the `create cluster` command
// Nb. re-implemented in Kinder in order to add the --setup-kubernetes flag
package cluster
import (
"fmt"
"strings"
"time"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster/config"
"sigs.k8s.io/kind/pkg/cluster/config/encoding"
"sigs.k8s.io/kind/pkg/cluster/config/v1alpha2"
"sigs.k8s.io/kind/pkg/cluster/create"
"sigs.k8s.io/kind/pkg/util"
)
const (
configFlagName = "config"
controlPlaneNodesFlagName = "control-plane-nodes"
workerNodesFLagName = "worker-nodes"
kubeDNSFLagName = "kube-dns"
externalEtcdFlagName = "external-etcd"
externalLoadBalancerFlagName = "external-load-balancer"
)
type flagpole struct {
Name string
Config string
ImageName string
Workers int32
ControlPlanes int32
KubeDNS bool
Retain bool
Wait time.Duration
SetupKubernetes bool
ExternalLoadBalancer bool
ExternalEtcd bool
}
// NewCommand returns a new cobra.Command for cluster creation
func NewCommand() *cobra.Command {
flags := &flagpole{}
cmd := &cobra.Command{
Args: cobra.NoArgs,
Use: "cluster",
Short: "Creates a local Kubernetes cluster",
Long: "Creates a local Kubernetes cluster using Docker container 'nodes'",
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
}
cmd.Flags().StringVar(&flags.Name, "name", cluster.DefaultName, "cluster context name")
cmd.Flags().StringVar(&flags.Config, configFlagName, "", "path to a kind config file")
cmd.Flags().Int32Var(&flags.ControlPlanes, controlPlaneNodesFlagName, 1, "number of control-plane nodes in the cluster")
cmd.Flags().Int32Var(&flags.Workers, workerNodesFLagName, 0, "number of worker nodes in the cluster")
cmd.Flags().StringVar(&flags.ImageName, "image", "", "node docker image to use for booting the cluster")
cmd.Flags().BoolVar(&flags.Retain, "retain", false, "retain nodes for debugging when cluster creation fails")
cmd.Flags().DurationVar(&flags.Wait, "wait", time.Duration(0), "Wait for control plane node to be ready (default 0s)")
cmd.Flags().BoolVar(&flags.SetupKubernetes, "setup-kubernetes", false, "setup Kubernetes on cluster nodes")
cmd.Flags().BoolVar(&flags.KubeDNS, kubeDNSFLagName, false, "setup kubeadm for installing kube-dns instead of CoreDNS")
cmd.Flags().BoolVar(&flags.ExternalEtcd, externalEtcdFlagName, false, "create an external etcd and setup kubeadm for using it")
cmd.Flags().BoolVar(&flags.ExternalLoadBalancer, externalLoadBalancerFlagName, false, "add an external load balancer to the cluster (implicit if number of control-plane nodes>1)")
return cmd
}
func runE(flags *flagpole, cmd *cobra.Command, args []string) error {
// refactor this...
if cmd.Flags().Changed(configFlagName) && (cmd.Flags().Changed(controlPlaneNodesFlagName) ||
cmd.Flags().Changed(workerNodesFLagName) ||
cmd.Flags().Changed(kubeDNSFLagName) ||
cmd.Flags().Changed(externalEtcdFlagName) ||
cmd.Flags().Changed(externalLoadBalancerFlagName)) {
return errors.Errorf("flag --%s can't be used in combination with --%s flags", configFlagName, strings.Join([]string{controlPlaneNodesFlagName, workerNodesFLagName, kubeDNSFLagName, externalEtcdFlagName, externalLoadBalancerFlagName}, ","))
}
if flags.ControlPlanes < 0 || flags.Workers < 0 {
return errors.Errorf("flags --%s and --%s should not be a negative number", controlPlaneNodesFlagName, workerNodesFLagName)
}
// Check if the cluster name already exists
known, err := cluster.IsKnown(flags.Name)
if err != nil {
return err
}
if known {
return errors.Errorf("a cluster with the name %q already exists", flags.Name)
}
//TODO: this should go away as soon as kind supports etcd nodes
var externalEtcdIP string
if flags.ExternalEtcd {
fmt.Printf("Creating external etcd for the cluster %q ...\n", flags.Name)
var err error
externalEtcdIP, err = kcluster.CreateExternalEtcd(flags.Name)
if err != nil {
return errors.Wrap(err, "failed to create cluster")
}
}
cfg := NewConfig(flags.ControlPlanes, flags.Workers, flags.KubeDNS, externalEtcdIP, flags.ExternalLoadBalancer)
// override the config with the one from file, if specified
if flags.Config != "" {
// load the config
cfg, err = encoding.Load(flags.Config) // assign (not :=) so the loaded config is not shadowed
if err != nil {
return errors.Wrap(err, "error loading config")
}
// validate the config
err = cfg.Validate()
if err != nil {
log.Error("Invalid configuration!")
configErrors := err.(*util.Errors)
for _, problem := range configErrors.Errors() {
log.Error(problem)
}
return errors.New("aborting due to invalid configuration")
}
}
// create a cluster context and create the cluster
ctx := cluster.NewContext(flags.Name)
if flags.ImageName != "" {
// Apply image override to all the Nodes defined in Config
// TODO(Fabrizio Pandini): this should be reconsidered when implementing
// https://github.com/kubernetes-sigs/kind/issues/133
for i := range cfg.Nodes {
cfg.Nodes[i].Image = flags.ImageName
}
err := cfg.Validate()
if err != nil {
log.Errorf("Invalid flags, configuration failed validation: %v", err)
return errors.New("aborting due to invalid configuration")
}
}
fmt.Printf("Creating cluster %q ...\n", flags.Name)
if err = ctx.Create(cfg,
create.Retain(flags.Retain),
create.WaitForReady(flags.Wait),
create.SetupKubernetes(flags.SetupKubernetes),
); err != nil {
return errors.Wrap(err, "failed to create cluster")
}
fmt.Printf("\nYou can also use kinder commands:\n\n")
fmt.Printf("- kinder do, the kinder swiss knife 🚀!\n")
fmt.Printf("- kinder exec, a \"topology aware\" wrapper on docker exec\n")
fmt.Printf("- kinder cp, a \"topology aware\" wrapper on docker cp\n")
return nil
}
// NewConfig returns the default config according to requested number of control-plane
// and worker nodes
func NewConfig(controlPlanes, workers int32, kubeDNS bool, externalEtcdIP string, externalLoadBalancer bool) *config.Cluster {
var latestPublicConfig = &v1alpha2.Config{}
// create default config according to requested number of control-plane and worker nodes
// adds the control-plane node(s)
controlPlaneNodes := v1alpha2.Node{Role: v1alpha2.ControlPlaneRole, Replicas: &controlPlanes}
controlPlaneNodes.KubeadmConfigPatches = []string{}
if kubeDNS {
controlPlaneNodes.KubeadmConfigPatches = append(controlPlaneNodes.KubeadmConfigPatches, kubeDNSPatch)
}
if externalEtcdIP != "" {
controlPlaneNodes.KubeadmConfigPatches = append(controlPlaneNodes.KubeadmConfigPatches, fmt.Sprintf(externalEtcdPatch, externalEtcdIP))
}
latestPublicConfig.Nodes = append(latestPublicConfig.Nodes, controlPlaneNodes)
// if requested, or when more than one control-plane node exists, add an external load balancer
if externalLoadBalancer || controlPlanes > 1 {
latestPublicConfig.Nodes = append(latestPublicConfig.Nodes, v1alpha2.Node{Role: v1alpha2.ExternalLoadBalancerRole})
}
// adds the worker node(s), if any
if workers > 0 {
latestPublicConfig.Nodes = append(latestPublicConfig.Nodes, v1alpha2.Node{Role: v1alpha2.WorkerRole, Replicas: &workers})
}
// apply defaults
encoding.Scheme.Default(latestPublicConfig)
// converts to internal config
var cfg = &config.Cluster{}
encoding.Scheme.Convert(latestPublicConfig, cfg, nil)
// return the internal config
return cfg
}
const kubeDNSPatch = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
metadata:
name: config
dns:
type: "kube-dns"`
const externalEtcdPatch = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
metadata:
name: config
etcd:
external:
endpoints:
- http://%s:2379`


@@ -0,0 +1,39 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package create implements the `create` command
// Nb. re-implemented in Kinder in order to import the kinder version of createcluster
package create
import (
"github.com/spf13/cobra"
createcluster "k8s.io/kubeadm/kinder/cmd/kinder/create/cluster"
)
// NewCommand returns a new cobra.Command for cluster creation
func NewCommand() *cobra.Command {
cmd := &cobra.Command{
Args: cobra.NoArgs,
Use: "create",
Short: "Creates one of [cluster, worker-node, control-plane-node]",
Long: "Creates a local Kubernetes cluster (cluster) or nodes in a local Kubernetes cluster (worker-node, control-plane-node)",
}
cmd.AddCommand(createcluster.NewCommand())
//cmd.AddCommand(createnode.NewCommand(constants.ControlPlaneNodeRoleValue)) //this is currently super hacky, looking for a better solution
//cmd.AddCommand(createnode.NewCommand(constants.WorkerNodeRoleValue))
return cmd
}


@@ -0,0 +1,82 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package cluster implements the `create worker-node` and `create control-plane-node` commands
package cluster
import (
"fmt"
"github.com/pkg/errors"
"github.com/spf13/cobra"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster"
)
type flagpole struct {
Name string
ImageName string
}
// NewCommand returns a new cobra.Command for cluster creation
func NewCommand(role string) *cobra.Command {
roleNode := fmt.Sprintf("%s-node", role)
flags := &flagpole{}
cmd := &cobra.Command{
Args: cobra.NoArgs,
Use: roleNode,
Short: fmt.Sprintf("Creates a %s for a Kubernetes cluster", roleNode),
Long: fmt.Sprintf("Creates a %s for a local Kubernetes cluster using Docker container 'nodes'", roleNode),
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, role, cmd, args)
},
}
cmd.Flags().StringVar(&flags.Name, "name", cluster.DefaultName, "cluster context name")
cmd.Flags().StringVar(&flags.ImageName, "image", "", "node docker image to use for booting the cluster")
return cmd
}
func runE(flags *flagpole, role string, cmd *cobra.Command, args []string) error {
//TODO: fail if image name is empty
// Check if the cluster name already exists
known, err := cluster.IsKnown(flags.Name)
if err != nil {
return err
}
if !known {
return errors.Errorf("a cluster with the name %q does not exist", flags.Name)
}
// create a cluster context from current nodes
ctx := cluster.NewContext(flags.Name)
kcfg, err := kcluster.NewKContext(ctx)
if err != nil {
return errors.Wrap(err, "failed to create cluster context")
}
fmt.Printf("Creating %s node in cluster %s ...\n", role, flags.Name)
err = kcfg.CreateNode(role, flags.ImageName)
if err != nil {
return errors.Wrap(err, "failed to create node")
}
return nil
}

105
kinder/cmd/kinder/do/do.go Normal file

@@ -0,0 +1,105 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package do implements the `do` command
package do
import (
"fmt"
"strings"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/util/version"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster"
)
type flagpole struct {
Name string
OnlyNode string
UsePhases bool
UpgradeVersion string
CopyCerts bool
}
// NewCommand returns a new cobra.Command for exec
func NewCommand() *cobra.Command {
flags := &flagpole{}
actions := kcluster.KnownActions()
cmd := &cobra.Command{
Args: cobra.MinimumNArgs(1),
Use: "do [flags] ACTION\n\n" +
"Args:\n" +
fmt.Sprintf(" ACTION is one of [%s]", strings.Join(actions, ", ")),
Short: "Executes actions (tasks/sequence of commands) on one or more nodes in the local Kubernetes cluster",
Long: "An action defines a set of tasks/sequence of commands to be executed on a cluster. Usage of actions allows \n" +
"you to automate repetitive operations.",
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
}
cmd.Flags().StringVar(&flags.Name, "name", cluster.DefaultName, "cluster context name")
cmd.Flags().StringVar(&flags.OnlyNode, "only-node", "", "exec the action only on the selected node")
cmd.Flags().BoolVar(&flags.UsePhases, "use-phases", false, "use the kubeadm phases subcommands instead of the kubeadm top-level commands")
cmd.Flags().StringVar(&flags.UpgradeVersion, "upgrade-version", "", "defines the target upgrade version (it should match the version of the upgrade binaries)")
cmd.Flags().BoolVar(&flags.CopyCerts, "automatic-copy-certs", false, "use automatic copy certs instead of manual copy certs when joining new control-plane nodes")
return cmd
}
func runE(flags *flagpole, cmd *cobra.Command, args []string) error {
actionFlags := kcluster.ActionFlags{
UsePhases: flags.UsePhases,
CopyCerts: flags.CopyCerts,
}
//TODO: upgrade version mandatory for updates
if flags.UpgradeVersion != "" {
v, err := version.ParseSemantic(flags.UpgradeVersion)
if err != nil {
return err
}
actionFlags.UpgradeVersion = v
}
// Check if the cluster name already exists
known, err := cluster.IsKnown(flags.Name)
if err != nil {
return err
}
if !known {
return errors.Errorf("a cluster with the name %q does not exist", flags.Name)
}
// create a cluster context from current nodes
ctx := cluster.NewContext(flags.Name)
kcfg, err := kcluster.NewKContext(ctx)
if err != nil {
return errors.Wrap(err, "failed to create cluster context")
}
err = kcfg.Do(args, actionFlags, flags.OnlyNode)
if err != nil {
return errors.Wrap(err, "failed to exec action on cluster nodes")
}
return nil
}


@ -0,0 +1,82 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package exec implements the `exec` command
package exec
import (
"github.com/pkg/errors"
"github.com/spf13/cobra"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster"
)
type flagpole struct {
Name string
}
// NewCommand returns a new cobra.Command for exec
func NewCommand() *cobra.Command {
flags := &flagpole{}
cmd := &cobra.Command{
Args: cobra.MinimumNArgs(2),
Use: "exec [flags] NODE_NAME|NODE_SELECTOR -- COMMAND [ARG...]\n\n" +
"Args:\n" +
" NODE_NAME is the container name without the cluster name prefix\n" +
" NODE_SELECTOR can be one of:\n" +
" @all all the control-plane and worker nodes \n" +
" @cp* all the control-plane nodes \n" +
" @cp1 the bootstrap control-plane node \n" +
" @cpN the secondary control-plane nodes \n" +
" @w* all the worker nodes\n" +
" @lb the external load balancer\n" +
" @etcd the external etcd",
Short: "Executes command on one or more nodes in the local Kubernetes cluster",
Long: "Exec is a \"topology aware\" wrapper on docker exec, allowing you to run commands on one or more nodes in the local Kubernetes cluster\n",
RunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
}
cmd.Flags().StringVar(&flags.Name, "name", cluster.DefaultName, "cluster context name")
return cmd
}
func runE(flags *flagpole, cmd *cobra.Command, args []string) error {
// Check if the cluster name already exists
known, err := cluster.IsKnown(flags.Name)
if err != nil {
return err
}
if !known {
return errors.Errorf("a cluster with the name %q does not exist", flags.Name)
}
// create a cluster context from current nodes
ctx := cluster.NewContext(flags.Name)
kcfg, err := kcluster.NewKContext(ctx)
if err != nil {
return errors.Wrap(err, "failed to create cluster context")
}
err = kcfg.Exec(args[0], args[1:])
if err != nil {
return errors.Wrap(err, "failed to exec command on cluster nodes")
}
return nil
}

128
kinder/cmd/kinder/kinder.go Normal file

@ -0,0 +1,128 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package kinder implements the root kinder cobra command, and the cli Main()
package kinder
import (
"os"
"github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
logutil "sigs.k8s.io/kind/pkg/log"
kbuild "k8s.io/kubeadm/kinder/cmd/kinder/build"
kcp "k8s.io/kubeadm/kinder/cmd/kinder/cp"
kcreate "k8s.io/kubeadm/kinder/cmd/kinder/create"
kdo "k8s.io/kubeadm/kinder/cmd/kinder/do"
kexec "k8s.io/kubeadm/kinder/cmd/kinder/exec"
kversion "k8s.io/kubeadm/kinder/cmd/kinder/version"
"sigs.k8s.io/kind/cmd/kind/delete"
"sigs.k8s.io/kind/cmd/kind/export"
"sigs.k8s.io/kind/cmd/kind/get"
"sigs.k8s.io/kind/cmd/kind/load"
)
const defaultLevel = logrus.WarnLevel
// Flags for the kinder command
type Flags struct {
LogLevel string
}
// NewCommand returns a new cobra.Command implementing the root command for kinder
func NewCommand() *cobra.Command {
flags := &Flags{}
cmd := &cobra.Command{
Args: cobra.NoArgs,
Use: "kinder",
Short: "kinder is an example of kind (https://github.com/kubernetes-sigs/kind) used as a library",
Long: " _ ___ _\n" +
" | |/ (_)_ _ __| |___ _ _\n" +
" | ' <| | ' \\/ _` / -_) '_|\n" +
" |_|\\_\\_|_||_\\__,_\\___|_|\n\n" +
"kinder is an example of kind (https://github.com/kubernetes-sigs/kind) used as a library.\n\n" +
"All the kind commands will be available in kinder, side by side with additional commands \n" +
"designed for helping kubeadm contributors.\n\n" +
"kinder is still a work in progress. Test it! Break it! Send feedback!",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
return runE(flags, cmd, args)
},
SilenceUsage: true,
Version: kversion.Version,
}
cmd.PersistentFlags().StringVar(
&flags.LogLevel,
"loglevel",
defaultLevel.String(),
"logrus log level "+logutil.LevelsString(),
)
// add kind top level subcommands re-used without changes
cmd.AddCommand(delete.NewCommand())
cmd.AddCommand(export.NewCommand())
cmd.AddCommand(get.NewCommand())
cmd.AddCommand(load.NewCommand())
// add kind commands customized in kinder
cmd.AddCommand(kbuild.NewCommand())
cmd.AddCommand(kcreate.NewCommand())
cmd.AddCommand(kversion.NewCommand())
// add kinder only commands
cmd.AddCommand(kcp.NewCommand())
cmd.AddCommand(kdo.NewCommand())
cmd.AddCommand(kexec.NewCommand())
return cmd
}
func runE(flags *Flags, cmd *cobra.Command, args []string) error {
level := defaultLevel
parsed, err := log.ParseLevel(flags.LogLevel)
if err != nil {
log.Warnf("Invalid log level '%s', defaulting to '%s'", flags.LogLevel, level)
} else {
level = parsed
}
log.SetLevel(level)
return nil
}
// Run runs the `kinder` root command
func Run() error {
return NewCommand().Execute()
}
// Main wraps Run and sets the log formatter
func Main() {
// let's explicitly set stdout
log.SetOutput(os.Stdout)
// this formatter is the default, but the timestamp output isn't
// particularly useful; it's relative to the command start
log.SetFormatter(&log.TextFormatter{
FullTimestamp: true,
TimestampFormat: "15:04:05",
// we force colors because this only forces over the isTerminal check
// and this will not be accurately checkable later on when we wrap
// the logger output with our logutil.StatusFriendlyWriter
ForceColors: logutil.IsTerminal(log.StandardLogger().Out),
})
if err := Run(); err != nil {
os.Exit(1)
}
}


@ -0,0 +1,45 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package version implements the `version` command
package version
import (
"fmt"
"github.com/spf13/cobra"
kindversion "sigs.k8s.io/kind/cmd/kind/version"
)
// Version is the kinder CLI version
const Version = "0.1.0-alpha.1"
// NewCommand returns a new cobra.Command for version
func NewCommand() *cobra.Command {
cmd := &cobra.Command{
Args: cobra.NoArgs,
// TODO(bentheelder): more detailed usage
Use: "version",
Short: "prints the kinder and kind CLI versions",
Long: "prints the kinder and kind CLI versions",
RunE: func(cmd *cobra.Command, args []string) error {
fmt.Printf("kinder version: %s\nkind version: %s\n", Version, kindversion.Version)
return nil
},
}
return cmd
}


@ -0,0 +1,58 @@
# Getting started (test single control-plane)
This document assumes vX is the Kubernetes release to be tested.
See [Prepare for tests](prepare-for-tests.md) for how to create a node-image for Kubernetes vX.
```bash
# create a cluster, optionally with some worker nodes
kinder create cluster --image kindest/node:vX --worker-nodes 0
# initialize the bootstrap control plane
kinder do kubeadm-init
# join nodes
kinder do kubeadm-join
```
Please note that if you need finer control of the pre-defined actions run by `kinder do`, you can use
the `--only-node` flag to execute actions only on a selected node.
As an alternative to the kinder pre-defined actions run by `kinder do`, it is possible to
use `docker exec` and `docker cp` to work on nodes, invoking `kubeadm`, `kubectl` or
any other shell command directly.
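For example, a join limited to a single node and the low-level `docker exec` equivalent can be sketched as follows (the node names `worker1`/`kind-worker1` are placeholders depending on your topology, and each command is guarded so the snippet is a no-op when kinder or the node container is not available):

```shell
if command -v kinder >/dev/null 2>&1; then
  # run the pre-defined join action only on a single node
  kinder do kubeadm-join --only-node worker1
else
  echo "kinder not installed; command shown for illustration only"
fi

if command -v docker >/dev/null 2>&1 && docker inspect kind-worker1 >/dev/null 2>&1; then
  # low-level alternative: invoke kubeadm directly inside the node container
  docker exec kind-worker1 kubeadm version
else
  echo "docker/node not available; command shown for illustration only"
fi
```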
## Test variants
1. add `--kube-dns` flag to `kinder create cluster` to test usage of kube-dns instead of CoreDNS
2. add `--external-etcd` flag to `kinder create cluster` to test usage of an external etcd cluster
3. add `--use-phases` flag to `kubeadm-init` and/or `kubeadm-join` to test phases
4. any combination of the above
## Validation
```bash
# verify kubeadm commands outputs
# get an overview of the resulting cluster
kinder do cluster-info
# > check for nodes, Kubernetes version x, ready
# > check all the components running, Kubernetes version x + related dependencies
# > check for etcd member
# run a smoke test
kinder do smoke-test
```
Also in this case:
- you can use the `--only-node` flag to execute actions only on a selected node.
- as an alternative to `kinder do`, it is possible to use `docker exec` and `docker cp`
## Cleanup
```bash
kinder do kubeadm-reset
kinder delete cluster
```


@ -0,0 +1,74 @@
# Prepare for tests
Before starting tests with kinder, it is necessary to get a node-image to be used as a base for the nodes in the cluster.
As in kind, in order to make your tests fast and repeatable, it is recommended to
pack whatever you need during your tests into the node-images.
kind already gives you what you need in most cases (kubeadm, Kubernetes binaries, pre-pulled images); kinder
on top of that allows you to build node variants addressing the following use cases:
- Adding new pre-loaded images that will be made available on all nodes at cluster creation time
- Replacing the kubeadm binary installed in the cluster, e.g. with a locally built version of kubeadm
- Adding binaries for a second Kubernetes version to be used for upgrade testing
## Use a public kind node-image
The easiest way to get a node-image for a major/minor Kubernetes version is to use the kind images available
on Docker Hub, e.g.
```bash
docker pull kindest/node:vX
```
## Build a node-image
For building a node-image you can refer to the kind documentation; below is a short recap of the necessary steps.
Build a base-image (or download one from Docker Hub)
```bash
kinder build base-image --image kindest/base:latest
```
Build a node-image starting from the above base-image
```bash
# To build a node-image using the latest Kubernetes apt packages available
kinder build node-image --base-image kindest/base:latest --image kindest/node:vX --type apt
# To build a node-image using a local Kubernetes repository
kinder build node-image --base-image kindest/base:latest --image kindest/node:vX --type bazel
```
> NB: see https://github.com/kubernetes/kubeadm/blob/master/testing-pre-releases.md#change-the-target-version-number-when-building-a-local-release for overriding
the build version in case of `--type bazel`
## Customize a node-image
As a third option for building a node-image, it is possible to pick an existing node-image and customize it by:
1. overriding the kubeadm binary
```bash
kinder build node-variant \
--base-image kindest/node:vX \
--image kindest/node:vX-variant \
--with-kubeadm $mylocalbinary/kubeadm
```
2. adding/overriding the pre-loaded images in the `/kind/images` folder
```bash
kinder build node-variant \
--base-image kindest/node:vX \
--image kindest/node:vX-variant \
--with-images $mylocalimages/nginx.tar
```
3. adding a second Kubernetes version in the `/kinder/upgrades` folder for testing upgrades
```bash
kinder build node-variant \
--base-image kindest/node:vX \
--image kindest/node:vX-variant \
--with-upgrade-binaries $mylocalbinaries/vY
```

204
kinder/doc/reference.md Normal file

@ -0,0 +1,204 @@
# Kinder
## Create a test cluster
You can create a cluster in kinder using `kinder create cluster`, which is a wrapper on the `kind create cluster` command.
However, additional flags are implemented to enable the following use cases:
### Create *only* nodes
By default kinder stops the cluster creation process before executing `kubeadm init` and `kubeadm join`.
This gives you nodes ready for installing Kubernetes, and more specifically:
- the necessary prerequisites already installed on all nodes
- a pre-built kubeadm config file in `/kind/kubeadm.conf`
- in case more than one control-plane node exists in the cluster, a pre-configured external load balancer
If instead you want to revert to the default kind behavior, you can use the `--setup-kubernetes` flag:
```bash
kinder create cluster --setup-kubernetes=true
```
### Testing different cluster topologies
You can use the `--control-plane-nodes <num>` flag and/or the `--worker-nodes <num>` flag
as a shortcut for creating different cluster topologies. e.g.
```bash
# create a cluster with two worker nodes
kinder create cluster --worker-nodes=2
# create a cluster with two control-plane nodes
kinder create cluster --control-plane-nodes=2
```
Please note that a load balancer node will be automatically created when there is more than
one control-plane node; if necessary, you can use the `--external-load-balancer` flag to explicitly
request the creation of an external load balancer node.
More sophisticated cluster topologies can be achieved using the kind config file, e.g. customizing
the kubeadm config or specifying volume mounts. See the [kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/#configuring-your-kind-cluster)
for more details.
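As a sketch, a topology that the shortcut flags cannot express can be written as a kind config file and passed at creation time (the `v1alpha3` schema version is an assumption tied to the kind release in use, and the `kinder` invocation is guarded so the snippet is a no-op when kinder is not installed):

```shell
# write a kind config describing 2 control-plane nodes and 1 worker
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker
EOF

# create the cluster from the config
if command -v kinder >/dev/null 2>&1; then
  kinder create cluster --config kind-config.yaml
else
  echo "kinder not installed; config written to kind-config.yaml for illustration"
fi
```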
### Testing Kubernetes cluster variants
kinder gives you shortcuts for testing Kubernetes cluster variants supported by kubeadm:
```bash
# create a cluster using kube-dns instead of CoreDNS
kinder create cluster --kube-dns
# create a cluster using an external etcd
kinder create cluster --external-etcd
```
## Working on nodes
You can use `docker exec` and `docker cp` to work on nodes.
```bash
# check the content of the /kind/kubeadm.conf file
docker exec kind-control-plane cat /kind/kubeadm.conf
# execute a command on the kind-control-plane container (the control-plane node)
docker exec kind-control-plane \
kubeadm init --config=/kind/kubeadm.conf --ignore-preflight-errors=all
# override the kubeadm binary on the kind-control-plane container
# with a locally built kubeadm binary
docker cp \
$working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm \
kind-control-plane:/usr/bin/kubeadm
```
On top of that, kinder offers you three commands to help you work on nodes:
- `kinder do`, allowing you to execute actions (repetitive tasks/sequences of commands) on nodes
- `kinder exec`, a topology-aware wrapper on `docker exec`
- `kinder cp`, a topology-aware wrapper on `docker cp`
### kinder do
`kinder do` is the kinder swiss army knife.
It allows you to execute actions (repetitive tasks/sequences of commands) on one or more nodes
in the local Kubernetes cluster.
```bash
# execute the kubeadm init workflow, install the CNI plugin and copy the kubeconfig file to the host
kinder do kubeadm-init
```
All the actions implemented in kinder are by design "developer friendly", in the sense that
all the command output will be echoed and all the steps will be documented.
The following actions are available:
| action | Notes |
| --------------- | ------------------------------------------------------------ |
| kubeadm-init | Executes the kubeadm-init workflow, installs the CNI plugin and then copies the kubeconfig file to the host machine. Available options are:<br /> `--use-phases` triggers execution of the init workflow by invoking single phases. <br />`--automatic-copy-certs` instructs kubeadm to use the automatic copy certs feature.|
| manual-copy-certs | Implements the manual copy of certificates to be shared across control-plane nodes (n.b. manual means not managed by kubeadm). Available options are:<br /> `--only-node` to execute this action only on a specific node. |
| kubeadm-join | Executes the kubeadm-join workflow both on secondary control-plane nodes and on worker nodes. Available options are:<br /> `--use-phases` triggers execution of the join workflow by invoking single phases.<br />`--automatic-copy-certs` instructs kubeadm to use the automatic copy certs feature.<br /> `--only-node` to execute this action only on a specific node. |
| kubeadm-upgrade | Executes the kubeadm upgrade workflow, upgrading Kubernetes. Available options are:<br /> `--upgrade-version` for defining the target Kubernetes version.<br />`--only-node` to execute this action only on a specific node. |
| kubeadm-reset | Executes the kubeadm-reset workflow on all the nodes. Available options are:<br /> `--only-node` to execute this action only on a specific node. |
| cluster-info | Returns a summary of cluster info including<br />- List of nodes<br />- list of pods<br />- list of images used by pods<br />- list of etcd members |
| smoke-test | Implements a non-exhaustive set of tests that aim at ensuring that the most important functions of a Kubernetes cluster work |
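The options in the table above can be combined; a sketch (the version `v1.14.0` and node name `worker1` are placeholder values, and the commands are guarded so the snippet is a no-op when kinder is not installed):

```shell
if command -v kinder >/dev/null 2>&1; then
  # init the bootstrap control plane via single kubeadm phases, with automatic cert copy
  kinder do kubeadm-init --use-phases --automatic-copy-certs
  # upgrade only one node to a target version
  kinder do kubeadm-upgrade --upgrade-version v1.14.0 --only-node worker1
else
  echo "kinder not installed; commands shown for illustration only"
fi
```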
### kinder exec
`kinder exec` provides a topology-aware wrapper on `docker exec`.
```bash
# check the kubeadm version on all the nodes
kinder exec @all -- kubeadm version
# run kubeadm join on all the worker nodes
kinder exec @w* -- kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef ...
# run kubectl command inside the bootstrap control-plane node
kinder exec @cp1 -- kubectl --kubeconfig=/etc/kubernetes/admin.conf cluster-info
```
The following node selectors are available:
| selector | return the following nodes |
| -------- | ------------------------------------------------------------ |
| @all | all the Kubernetes nodes in the cluster.<br />(control-plane and worker nodes are included, load balancer and etcd not) |
| @cp* | all the control-plane nodes |
| @cp1 | the bootstrap control-plane node |
| @cpN | the secondary control-plane nodes |
| @w* | all the worker nodes |
| @lb | the external load balancer |
| @etcd | the external etcd |
As an alternative to node selectors, the node name (the container name without the cluster name prefix) can be used to target actions at a specific node.
```bash
# run kubeadm join on the first worker node only
kinder exec worker1 -- kubeadm join 172.17.0.2:6443 --token abcdef.0123456789abcdef ...
```
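The selector semantics can be pictured with a toy shell sketch (purely illustrative, not kinder code; the node list below is hypothetical for a 2 control-plane / 2 worker cluster with a load balancer):

```shell
# hypothetical node list: container names without the cluster name prefix
nodes="control-plane control-plane2 worker worker2 lb"

# expand a kinder-style node selector into matching node names, one per line
resolve() {
  sel="$1"
  for n in $nodes; do
    case "$sel" in
      "@all") case "$n" in lb|etcd) ;; *) echo "$n" ;; esac ;;  # lb/etcd excluded
      "@cp*") case "$n" in control-plane*) echo "$n" ;; esac ;;
      "@cp1") case "$n" in control-plane) echo "$n" ;; esac ;;
      "@w*")  case "$n" in worker*) echo "$n" ;; esac ;;
      "@lb")  case "$n" in lb) echo "$n" ;; esac ;;
    esac
  done
}

resolve '@w*'   # prints the two worker nodes, one per line
```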
### kinder cp
`kinder cp` provides a topology-aware wrapper on `docker cp`. The following features are supported:
```bash
# copy to the host the /kind/kubeadm.conf file existing on the bootstrap control-plane node
kinder cp @cp1:kind/kubeadm.conf kubeadm.conf
# copy to the bootstrap control-plane node a local kubeadm.conf file
kinder cp kubeadm.conf @cp1:kind/kubeadm.conf
# override the kubeadm binary on all the nodes with a locally built kubeadm binary
kinder cp \
$working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm \
@all:/usr/bin/kubeadm
```
> Please note that `docker cp` or `kinder cp` allows you to replace the kubeadm binary on existing nodes. If you want to replace the kubeadm binary on nodes that you will create in the future, please check the altering node images paragraph below
## Altering node images
Kind can be extremely efficient when the node image contains all the necessary artifacts.
kinder allows kubeadm contributors to exploit this feature via the `kinder build node-variant` command, which takes a node-image and builds variants by:
- Adding new pre-loaded images that will be made available on all nodes at cluster creation time
- Replacing the kubeadm binary installed in the cluster, e.g. with a locally built version of kubeadm
- Adding binaries for a second Kubernetes version to be used for upgrade testing
The above options can be combined in one command, if necessary.
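Combining all three customizations in a single invocation might look like this (a sketch; the image tags and local paths are placeholders, and the command is guarded so the snippet is a no-op when kinder is not installed):

```shell
if command -v kinder >/dev/null 2>&1; then
  kinder build node-variant \
    --base-image kindest/node:latest \
    --image kindest/node:PR12345 \
    --with-images ./images/nginx.tar \
    --with-kubeadm ./bin/kubeadm \
    --with-upgrade-binaries ./binaries/v1.12.2
else
  echo "kinder not installed; command shown for illustration only"
fi
```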
### Add images
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-images $my-local-images/nginx.tar
```
Either a single file or a folder can be used as the argument for `--with-images`, but only image tar files will be considered; image tar files will be placed in a well-known folder, and kind(er) will load them during the initialization of each node.
### Replace kubeadm binary
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-kubeadm $working_dir/kubernetes/bazel-bin/cmd/kubeadm/linux_amd64_pure_stripped/kubeadm
```
> Please note that replacing the kubeadm binary in the node-image only affects nodes that you create in the future; if you want to replace the kubeadm binary on existing nodes, you should use `docker cp` or `kinder cp` instead.
### Add upgrade packages
```bash
kinder build node-variant \
--base-image kindest/node:latest \
--image kindest/node:PR12345 \
--with-upgrade-binaries $my-local-packages/v1.12.2/
```
Either a single file or a folder can be used as the argument for `--with-upgrade-binaries`, but only binaries will be considered; binary files will be placed in a well-known folder, and the kubeadm-upgrade action will use them during the upgrade sequence.

59
kinder/doc/test-HA.md Normal file

@ -0,0 +1,59 @@
# Test HA
This document assumes vX is the Kubernetes release to be tested.
See [Prepare for tests](prepare-for-tests.md) for how to create a node-image for Kubernetes vX.
```bash
# create a cluster with at least two control-plane nodes (and optionally some worker nodes)
kinder create cluster --image kindest/node:vX --control-plane-nodes 2 --worker-nodes 0
# initialize the bootstrap control plane
kinder do kubeadm-init
# join secondary control planes and nodes (if any)
kinder do kubeadm-join
```
Please note that if you need finer control of the pre-defined actions run by `kinder do`, you can use
the `--only-node` flag to execute actions only on a selected node.
As an alternative to the kinder pre-defined actions run by `kinder do`, it is possible to
use `docker exec` and `docker cp` to work on nodes, invoking `kubeadm`, `kubectl` or
any other shell command directly.
## Test variants
1. add `--kube-dns` flag to `kinder create cluster` to test usage of kube-dns instead of CoreDNS
2. add `--external-etcd` flag to `kinder create cluster` to test usage of an external etcd cluster
3. add `--use-phases` flag to `kubeadm-init` and/or `kubeadm-join` to test phases
4. add `--automatic-copy-certs` flag both to `kubeadm-init` and `kubeadm-join` to test the automatic copy certs feature
5. any combination of the above
## Validation
```bash
# verify kubeadm commands outputs
# get an overview of the resulting cluster
kinder do cluster-info
# > check for nodes, Kubernetes version x, ready
# > check all the components running, Kubernetes version x + related dependencies
# > check for etcd member
# run a smoke test
kinder do smoke-test
```
Also in this case:
- you can use the `--only-node` flag to execute actions only on a selected node.
- as an alternative to `kinder do`, it is possible to use `docker exec` and `docker cp`
## Cleanup
```bash
kinder do kubeadm-reset
kinder delete cluster
```

65
kinder/doc/test-XonY.md Normal file

@ -0,0 +1,65 @@
# Testing X on Y
X on Y tests are meant to verify kubeadm vX managing a Kubernetes cluster of a different version vY,
with vY being one minor version lower than, or the same minor version as, vX.
## Preparation
In order to test X on Y, the recommended approach in kinder is to:
1. take a node-image for Kubernetes vY, e.g. `kindest/node:vY`
2. prepare the kubeadm vX binary locally
3. create a variant of `kindest/node:vY`, e.g. `kindest/node:vX.on.Y`, that replaces kubeadm vY
with kubeadm vX
e.g. assuming the vX artifacts are stored in $artifacts:
```bash
kinder build node-variant --base-image kindest/node:vY --image kindest/node:vX.on.Y \
--with-kubeadm $artifacts/binaries/kubeadm
```
See [Prepare for tests](prepare-for-tests.md) for more details.
## Creating and initializing the cluster
See [getting started (test single control-plane)](getting-started.md) or [testing HA](test-HA.md);
in summary:
```bash
# create a cluster (choose the desired number of control-plane/worker nodes)
kinder create cluster --image kindest/node:vX.on.Y --control-plane-nodes 1 --worker-nodes 0
# initialize the bootstrap control plane
kinder do kubeadm-init
# join secondary control planes and nodes (if any)
kinder do kubeadm-join
```
Also in this case:
- test variants can be achieved by adding `--kube-dns`, `--external-etcd`, `--automatic-copy-certs` flags to `kinder create cluster`, or `--use-phases` to `kubeadm-init` and/or `kubeadm-join`
## Validation
```bash
# verify kubeadm commands outputs
# get an overview of the resulting cluster
kinder do cluster-info
# > check for nodes, Kubernetes version y, ready
# > check all the components running, Kubernetes version y + related dependencies
# > check for etcd member
# run a smoke test
kinder do smoke-test
```
## Cleanup
```bash
kinder do kubeadm-reset
kinder delete cluster
```


@ -0,0 +1,85 @@
# Testing upgrades
## Preparation
This document assumes vX is the Kubernetes release to be used when creating the cluster, and vY
is the target Kubernetes version for the upgrade.
In order to test upgrades, the recommended approach in kinder is to bundle in one
image both vX and vY:
1. take a node-image for Kubernetes vX, e.g. `kindest/node:vX`
2. prepare the Kubernetes vY binaries and docker images locally
3. create a variant of `kindest/node:vX`, e.g. `kindest/node:vX.to.Y`, that contains the vY artifacts as well
e.g. assuming the vY artifacts are stored in $artifacts:
```bash
kinder build node-variant --base-image kindest/node:vX --image kindest/node:vX.to.Y \
--with-upgrade-binaries $artifacts/binaries \
--with-images $artifacts/images
```
> vY images will be saved in the `/kind/images` folder; kind will pre-load all the images in
> this folder when the node container starts.
> vY binaries will be saved in the `/kinder/upgrades` folder; those binaries will be used
> by the kinder `kubeadm-upgrade` action.
## Creating and initializing the cluster
See [getting started (test single control-plane)](getting-started.md) or [testing HA](test-HA.md);
in summary:
```bash
# create a cluster (choose the desired number of control-plane/worker nodes)
kinder create cluster --image kindest/node:vX.to.Y --control-plane-nodes 1 --worker-nodes 0
# initialize the bootstrap control plane
kinder do kubeadm-init
# join secondary control planes and nodes (if any)
kinder do kubeadm-join
```
Also in this case:
- test variants can be achieved by adding `--kube-dns`, `--external-etcd`, `--automatic-copy-certs` flags to `kinder create cluster`, or `--use-phases` to `kubeadm-init` and/or `kubeadm-join`
## Testing upgrades
```bash
# upgrade the cluster from vX to vY
# - upgrade kubeadm
# - run kubeadm upgrade (or upgrade node)
# - upgrade kubelet
kinder do kubeadm-upgrade --upgrade-version vY
```
As usual:
- you can use the `--only-node` flag to execute actions only on a selected node.
- as alternative to `kinder do`, it is possible to use `docker exec` and `docker cp`
## Validation
```bash
# verify kubeadm commands outputs
# get an overview of the resulting cluster
kinder do cluster-info
# > check for nodes, Kubernetes version y, ready
# > check all the components running, Kubernetes version y + related dependencies
# > check for etcd member
# run a smoke test
kinder do smoke-test
```
## Cleanup
```bash
kinder do kubeadm-reset
kinder delete cluster
```

9
kinder/go.mod Normal file

@ -0,0 +1,9 @@
module k8s.io/kubeadm/kinder
require (
github.com/pkg/errors v0.8.1
github.com/sirupsen/logrus v1.4.0
github.com/spf13/cobra v0.0.3
k8s.io/apimachinery v0.0.0-20190320104356-82cbdc1b6ac2
sigs.k8s.io/kind v0.0.0-20190320191144-d4ff13e4808e
)

443
kinder/go.sum Normal file

@ -0,0 +1,443 @@
cloud.google.com/go v0.0.0-20170206221025-ce650573d812/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.33.1/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.37.0/go.mod h1:TS1dMSSfndXH133OKGwekG838Om/cQT0BUHV3HcBgoo=
dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
dmitri.shuralyov.com/app/changes v0.0.0-20181114035150-5af16e21babb/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/service/change v0.0.0-20190301072032-c25fb47d71b3/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
git.apache.org/thrift.git v0.12.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/zstd v1.3.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190129172621-c8b1d7a94ddf/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190312192040-a2a65ffce834/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
github.com/PuerkitoBio/purell v1.1.0 h1:rmGxhojJlM0tuKtfdvliR84CFHljx9ag64t2xmVkjK4=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/sarama v1.21.0/go.mod h1:yuqtN/pe8cXRWG5zPaO7hCfNJp5MwmkoJEoLjkm5tCQ=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/aclements/go-gg v0.0.0-20170118225347-6dbb4e4fefb0/go.mod h1:55qNq4vcpkIuHowELi5C8e+1yUHtoLoOUR9QU5j7Tes=
github.com/aclements/go-gg v0.0.0-20170323211221-abd1f791f5ee/go.mod h1:55qNq4vcpkIuHowELi5C8e+1yUHtoLoOUR9QU5j7Tes=
github.com/aclements/go-moremath v0.0.0-20161014184102-0ff62e0875ff/go.mod h1:idZL3yvz4kzx1dsBOAC+oYv6L92P1oFEhUXUB1A/lwQ=
github.com/aclements/go-moremath v0.0.0-20180329182055-b1aff36309c7/go.mod h1:idZL3yvz4kzx1dsBOAC+oYv6L92P1oFEhUXUB1A/lwQ=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190212144455-93d5ec2c7f76/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/emicklei/go-restful v2.8.0+incompatible h1:wN8GCRDPGHguIynsnBartv5GUgGUg1LAU7+xnSn1j7Q=
github.com/emicklei/go-restful v2.8.0+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.0+incompatible h1:YKhDcF/NL19iSAQcyCATL1MkFXCzxfdaTiuJKr18Ank=
github.com/emicklei/go-restful v2.9.0+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/evanphx/json-patch v3.0.0+incompatible h1:l91aby7TzBXBdmF8heZqjskeH9f3g7ZOL8/sSe+vTlU=
github.com/evanphx/json-patch v3.0.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.1.0+incompatible h1:K1MDoo4AZ4wU0GIU/fPmtZg7VpzLjCxu+UwBD1FvwOc=
github.com/evanphx/json-patch v4.1.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/gliderlabs/ssh v0.1.3/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonpointer v0.17.2 h1:3ekBy41gar/iJi2KSh/au/PrC2vpLr85upF/UZmm3W0=
github.com/go-openapi/jsonpointer v0.17.2/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonpointer v0.18.0 h1:KVRzjXpMzgdM4GEMDmDTnGcY5yBwGWreJwmmk4k35yU=
github.com/go-openapi/jsonpointer v0.18.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonreference v0.17.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/jsonreference v0.17.2 h1:lF3z7AH8dd0IKXc1zEBi1dj0B4XgVb5cVjn39dCK3Ls=
github.com/go-openapi/jsonreference v0.17.2/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/jsonreference v0.18.0 h1:oP2OUNdG1l2r5kYhrfVMXO54gWmzcfAwP/GFuHpNTkE=
github.com/go-openapi/jsonreference v0.18.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/spec v0.17.2 h1:eb2NbuCnoe8cWAxhtK6CfMWUYmiFEZJ9Hx3Z2WRwJ5M=
github.com/go-openapi/spec v0.17.2/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/spec v0.18.0 h1:aIjeyG5mo5/FrvDkpKKEGZPmF9MPHahS72mzfVqeQXQ=
github.com/go-openapi/spec v0.18.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.17.2 h1:K/ycE/XTUDFltNHSO32cGRUhrVGJD64o8WgAIZNyc3k=
github.com/go-openapi/swag v0.17.2/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.18.0 h1:1DU8Km1MRGv9Pj7BNLmkA+umwTStwDHttXvx3NhJA70=
github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.0 h1:kbxbvI4Un1LUWKxufD+BiE6AEExYYgkQLQmLFqA1LFk=
github.com/golang/protobuf v1.3.0/go.mod h1:Qd/q+1AKNOZr9uGQzbzCmRO6sUih6GTPZv6a1/R87v0=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gonum/blas v0.0.0-20181208220705-f22b278b28ac/go.mod h1:P32wAyui1PQ58Oce/KYkOqQv8cVw1zAapXOl+dRFGbc=
github.com/gonum/floats v0.0.0-20181209220543-c233463c7e82/go.mod h1:PxC8OnwL11+aosOB5+iEPoV3picfs8tUpkVd0pDo+Kg=
github.com/gonum/internal v0.0.0-20181124074243-f884aa714029/go.mod h1:Pu4dmpkhSyOzRwuXkOgAvijx4o+4YMUJJo9OvPYMkks=
github.com/gonum/lapack v0.0.0-20181123203213-e4cdc5a0bff9/go.mod h1:XA3DeT6rxh2EAE789SSiSJNqxPaC0aE9J8NTOI0Jo/A=
github.com/gonum/matrix v0.0.0-20181209220409-c518dec07be9/go.mod h1:0EXg4mc1CNP0HCqCz+K4ts155PXIlUywf0wqN+GfPZw=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf h1:+RRA9JqSOZFfKrOeqr2z77+8R2RKyh8PG66dcu1V0ck=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190309163659-77426154d546/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/googleapis/gax-go v0.0.0-20161107002406-da06d194a00e/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go v2.0.2+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/googleapis/gnostic v0.2.0 h1:l6N3VoaVzTncYYW+9yOz2LJJammFZGBO13sqgEhpy9g=
github.com/googleapis/gnostic v0.2.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190309154008-847fc94819f9/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.6.2/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.8.3/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.8.4/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/json-iterator/go v1.1.5 h1:gL2yXlmiIo4+t+y32d4WGwOjKGYcGOuyrg46vadswDE=
github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.6 h1:MrUvLMLTMxbqFJ9kzlvat/rYZqZnW3u4wkLzWTaFwKs=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jteeuwen/go-bindata v0.0.0-20180305030458-6025e8de665b/go.mod h1:JVvhzYOiGBnFSYRyV00iY8q7/0PThjIYav1p9h5dmKs=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329 h1:2gxZ0XQIU/5z3Z3bUBu+FXuk2pFbkN6tcwi/pjyaDic=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe h1:W/GaMY0y69G4cFlmsC6B9sbuo2fP8OFP1ABjt4kPz+w=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mattn/go-sqlite3 v0.0.0-20161215041557-2d44decb4941/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.10.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/microcosm-cc/bluemonday v1.0.2/go.mod h1:iVP4YcDBq+n/5fb23BhYFvIMq/leAFZyRl6bYmGDlGc=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 h1:Esafd1046DLDQ0W1YjYsBW+p8U2u7vzgW2SQVmlNazg=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/openzipkin/zipkin-go v0.1.3/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/openzipkin/zipkin-go v0.1.5/go.mod h1:8NDCjKHoHW1XOp/vf3lClHem0b91r4433B67KXyKXAQ=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190306233201-d0f344d83b0c/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.0.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v2.0.0+incompatible/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/component v0.0.0-20170202220835-f88ec8f54cc4/go.mod h1:XhFIlyj5a1fBNx5aJTbKoIq0mNaPvOagO+HjB3EtxrY=
github.com/shurcooL/events v0.0.0-20181021180414-410e4ca65f48/go.mod h1:5u70Mqkb5O5cxEA8nxTsgrgLehJeAw6Oc4Ab1c/P1HM=
github.com/shurcooL/github_flavored_markdown v0.0.0-20181002035957-2122de532470/go.mod h1:2dOwnU2uBioM+SGy2aZoq1f/Sd1l9OkAeAUvjSyvgU0=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go v0.0.0-20190121191506-3fef8c783dec/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/gofontwoff v0.0.0-20180329035133-29b52fc0a18d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gofontwoff v0.0.0-20181114050219-180f79e6909d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gopherjslib v0.0.0-20160914041154-feb6d3990c2c/go.mod h1:8d3azKNyqcHP1GaQE/c6dDgjkgSx2BZ4IoEi4F1reUI=
github.com/shurcooL/highlight_diff v0.0.0-20170515013008-09bb4053de1b/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_diff v0.0.0-20181222201841-111da2e7d480/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_go v0.0.0-20181028180052-98c3abbbae20/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/highlight_go v0.0.0-20181215221002-9d8641ddf2e1/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/home v0.0.0-20181020052607-80b7ffcb30f9/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/home v0.0.0-20190204141146-5c8ae21d4240/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/htmlg v0.0.0-20170918183704-d01228ac9e50/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/htmlg v0.0.0-20190120222857-1e8a37b806f3/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/httperror v0.0.0-20170206035902-86b7830d14cc/go.mod h1:aYMfkZ6DWSJPJ6c4Wwz3QtW22G7mf/PEgaB9k/ik5+Y=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpfs v0.0.0-20181222201310-74dc9339e414/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpgzip v0.0.0-20180522190206-b1c53ac65af9/go.mod h1:919LwcH0M7/W4fcZ0/jy0qGght1GIhqyS/EgWGH2j5Q=
github.com/shurcooL/issues v0.0.0-20181008053335-6292fdc1e191/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issues v0.0.0-20190120000219-08d8dadf8acb/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issuesapp v0.0.0-20180602232740-048589ce2241/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/issuesapp v0.0.0-20181229001453-b8198a402c58/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/notifications v0.0.0-20181007000457-627ab5aea122/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/notifications v0.0.0-20181111060504-bcc2b3082a7a/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/octicon v0.0.0-20181028054416-fa4f57f9efb2/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/octicon v0.0.0-20181222203144-9ff1a4cf27f4/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/reactions v0.0.0-20181006231557-f2e0b4ca5b82/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/reactions v0.0.0-20181222204718-145cd5e7f3d1/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20170918181015-86672fcb3f95/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/users v0.0.0-20180125191416-49c67e49c537/go.mod h1:QJTqeLYEDaXHZDBsXlPCDqdhQuJkuw4NOtaxYe3xii4=
github.com/shurcooL/webdavfs v0.0.0-20170829043945-18c3829fa133/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
github.com/shurcooL/webdavfs v0.0.0-20181215192745-5988b2d638f6/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.3.0 h1:hI/7Q+DtNZ2kINb6qt/lS+IyXnHQe9e90POfeewL/ME=
github.com/sirupsen/logrus v1.3.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.0 h1:yKenngtzGh+cUSSh6GWbxW2abRqhYUSR/t/6+2QqNvE=
github.com/sirupsen/logrus v1.4.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/pflag v1.0.2 h1:Fy0orTDgHdbnzHcsOgfCN4LtHf0ec3wwtiwJqwvf3Gc=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go.opencensus.io v0.19.1/go.mod h1:gug0GbSHa8Pafr0d2urOSgoXHZ6x/RUlaiT0d9pqb4A=
go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
go4.org v0.0.0-20190218023631-ce4c26f7be8e/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
go4.org v0.0.0-20190313082347-94abd6928b1d/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
golang.org/x/build v0.0.0-20190111050920-041ab4dc3f9d/go.mod h1:OWs+y06UdEOHN4y+MfF/py+xQ/tYqIWW03b70/CG9Rw=
golang.org/x/build v0.0.0-20190311235527-86650285478d/go.mod h1:LS5++pZInCkeGSsPGP/1yB0yvU9gfqv2yD1PQgIbDYI=
golang.org/x/build v0.0.0-20190313044741-1b471b8bf26e/go.mod h1:LS5++pZInCkeGSsPGP/1yB0yvU9gfqv2yD1PQgIbDYI=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b h1:2b9XGzhjiYsYPnKXoEfL7klWZQIt8IfyRCz62gCqqlQ=
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a h1:YX8ljsm6wXlHZO+aRz9Exqr0evNhKRNe5K/gi+zKh4U=
golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190312203227-4b39c73a6495/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181217174547-8f45f776aaf1/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190313035508-a4d62f3683ec/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58 h1:otZG8yDCO4LVps5+9bxOeNiCvgmOyt96J3roHTYs7oE=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190313082753-5c2c250b6a70 h1:0OwHPyvXNyZS9VW4XXoGkWOwhrMN52Y4n/gSxvJOgj0=
golang.org/x/net v0.0.0-20190313082753-5c2c250b6a70/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/oauth2 v0.0.0-20170207211851-4464e7848382/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181120190819-8f65e3013eba/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/perf v0.0.0-20190306144031-151b6387e3f2/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/perf v0.0.0-20190312170614-0655857e383f/go.mod h1:FrqOtQDO3iMDVUtw5nNTDFpR1HUCGh00M3kj2wiSzLQ=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181218192612-074acd46bca6/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313 h1:pczuHS43Cp2ktBEEmLwScxgjWsBSzdaQiKzUyf3DTTc=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2 h1:z99zHgr7hKfrUcX/KsoJk5FJfjTceCKIp96+biqP4To=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180911133044-677d2ff680c1/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030000716-a0a13e073c7b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181122213734-04b5d21e00f1/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181219222714-6e267b5cc78e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
google.golang.org/api v0.0.0-20170206182103-3d017632ea10/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.0.0-20181220000619-583d854617af/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181109154231-b5d43981345b/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
google.golang.org/genproto v0.0.0-20181219182458-5a97ab628bfb/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
google.golang.org/genproto v0.0.0-20190306203927-b5d61aea6440/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/grpc v0.0.0-20170208002647-2a6bf6142e96/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20180920025451-e3ad64cb4ed3/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190215041234-466a0476246c/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.0.0-20181026145037-6e4b5aa967ee h1:ZsUDk0d2jnzG/9MjfhX1vaGxrjSaKgO6Kx8CCMod6c8=
k8s.io/api v0.0.0-20181026145037-6e4b5aa967ee/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/api v0.0.0-20190311155512-f01a027e4c26 h1:5Aq6o/tRwhZ2PEK2D/aIHF4vmHWCGT5oVtQC8QPxn4E=
k8s.io/api v0.0.0-20190311155512-f01a027e4c26/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/api v0.0.0-20190313115550-3c12c96769cc h1:m/JS6kQd00rICnXLWlnJzMFQB4AplcURUopS8dKiWmI=
k8s.io/api v0.0.0-20190313115550-3c12c96769cc/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/apimachinery v0.0.0-20181126191516-4a9a8137c0a1 h1:u/v3rSGNjiTxclqUNHYgSrCIotyczPebwV1FPXtdKRQ=
k8s.io/apimachinery v0.0.0-20181126191516-4a9a8137c0a1/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/apimachinery v0.0.0-20190311155258-f9b45bc4494d h1:8yHHbjNUBWYo3KXE/R2RS1Kfadsbng2IEcBj9Ak89SY=
k8s.io/apimachinery v0.0.0-20190311155258-f9b45bc4494d/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/apimachinery v0.0.0-20190313115320-c9defaaddf6f h1:6ojhffWUv9DZ8i4L2LIvSjbWH3fXfP6PmrTNwXHHMhM=
k8s.io/apimachinery v0.0.0-20190313115320-c9defaaddf6f/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/apimachinery v0.0.0-20190320104356-82cbdc1b6ac2 h1:kAl8fP8Gk3mJ4hZBQOkQ1HkrD1i5n22S3ZKlVGTJsBA=
k8s.io/apimachinery v0.0.0-20190320104356-82cbdc1b6ac2/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/client-go v7.0.0+incompatible h1:kiH+Y6hn+pc78QS/mtBfMJAMIIaWevHi++JvOGEEQp4=
k8s.io/client-go v7.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
k8s.io/client-go v10.0.0+incompatible h1:F1IqCqw7oMBzDkqlcBymRq1450wD0eNqLE9jzUrIi34=
k8s.io/client-go v10.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
k8s.io/code-generator v0.0.0-20181116211957-405721ab9678/go.mod h1:MYiN+ZJZ9HkETbgVZdWw2AsuAi9PZ4V80cwfuf2axe8=
k8s.io/code-generator v0.0.0-20190311155051-e4c2b1329cf7/go.mod h1:MYiN+ZJZ9HkETbgVZdWw2AsuAi9PZ4V80cwfuf2axe8=
k8s.io/gengo v0.0.0-20181113154421-fd15ee9cc2f7/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20190308184658-b90029ef6cd8/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog v0.1.0 h1:I5HMfc/DtuVaGR1KPwUrTc476K8NCqNBldC7H4dYEzk=
k8s.io/klog v0.1.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.2.0 h1:0ElL0OHzF3N+OhoJTL0uca20SxtYt4X4+bzHeqrB83c=
k8s.io/klog v0.2.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/kube-openapi v0.0.0-20181025202442-3a9b63ab1e39 h1:4I91xwvUbr2jxozOwbUBFBmo6XvBdoC1RLc85Kfg2dY=
k8s.io/kube-openapi v0.0.0-20181025202442-3a9b63ab1e39/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=
k8s.io/kube-openapi v0.0.0-20190306001800-15615b16d372 h1:zia7dTzfEtdiSUxi9cXUDsSQH2xE6igmGKyFn2on/9A=
k8s.io/kube-openapi v0.0.0-20190306001800-15615b16d372/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=
k8s.io/kubeadm v0.0.0-20190212133445-79b49271d4dd h1:mpr8m8yaAUsCeNv8THAWjmXs6a+GqfKon4MYfrmEk4g=
k8s.io/utils v0.0.0-20181115163542-0d26856f57b3/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
k8s.io/utils v0.0.0-20190308190857-21c4ce38f2a7/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
sigs.k8s.io/kind v0.0.0-20190223144119-c64da77e4150 h1:qxVG0hUkStZJAcD4gt9hMYV4vQ3U14VV219UelOFwkk=
sigs.k8s.io/kind v0.0.0-20190223144119-c64da77e4150/go.mod h1:J6Zyw2Z8Fkr3M+dJvT0VDr+tpqLfoZjCOa16SQQhaGQ=
sigs.k8s.io/kind v0.0.0-20190307044858-45ed3d230167 h1:0C/UenQivP94yXDcZ871zuApJSQSiDEZXqZlAGuZfTA=
sigs.k8s.io/kind v0.0.0-20190307044858-45ed3d230167/go.mod h1:5AWVsbr5tH8e/oMjlJhj0uy5rUF2PgEKXnwXldtEG18=
sigs.k8s.io/kind v0.0.0-20190311234037-971e4056261d h1:Q+W1iQsRXsBPVHq3BFzs+BJzJqCVwJCLNcsRft9SgwU=
sigs.k8s.io/kind v0.0.0-20190311234037-971e4056261d/go.mod h1:5AWVsbr5tH8e/oMjlJhj0uy5rUF2PgEKXnwXldtEG18=
sigs.k8s.io/kind v0.0.0-20190312201840-ff7503b14a04 h1:bvGpiqMt6V+n3ale+OHhcprxd1W7g5qvJID72f3SHgQ=
sigs.k8s.io/kind v0.0.0-20190312201840-ff7503b14a04/go.mod h1:5AWVsbr5tH8e/oMjlJhj0uy5rUF2PgEKXnwXldtEG18=
sigs.k8s.io/kind v0.0.0-20190320191144-d4ff13e4808e h1:gUdrCRcbbdmNc6IzfqeMspRRlAH8A3DqQn41gWHiAEc=
sigs.k8s.io/kind v0.0.0-20190320191144-d4ff13e4808e/go.mod h1:5AWVsbr5tH8e/oMjlJhj0uy5rUF2PgEKXnwXldtEG18=
sigs.k8s.io/kustomize v2.0.1+incompatible h1:rfBSJpFHTgrxoQ79w5JnTo00KUuod/U7mwIf36ozNw0=
sigs.k8s.io/kustomize v2.0.1+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/kustomize v2.0.3+incompatible h1:JUufWFNlI44MdtnjUqVnvh29rR37PQFzPbLXqhyOyX0=
sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=
sourcegraph.com/sqs/pbtypes v1.0.0/go.mod h1:3AciMUv4qUuRHRHhOG4TZOB+72GdPVz5k+c648qsFS4=

29
kinder/main.go Normal file
View File

@ -0,0 +1,29 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This package is a stub main wrapping cmd/kinder.Main()
package main
import (
kinder "k8s.io/kubeadm/kinder/cmd/kinder"
// forces kinder actions to register
_ "k8s.io/kubeadm/kinder/pkg/actions"
)
func main() {
kinder.Main()
}

View File

@ -0,0 +1,93 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"github.com/pkg/errors"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// infoAction implements an action for getting a summary of cluster info
type infoAction struct{}
func init() {
kcluster.RegisterAction("cluster-info", newInfoAction)
}
func newInfoAction() kcluster.Action {
return &infoAction{}
}
// Tasks returns the list of action tasks for the infoAction
func (b *infoAction) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Cluster-info ⛵",
TargetNodes: "@cp1",
Run: runInfo,
},
}
}
func runInfo(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> List nodes 🖥",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "nodes", "-o=wide",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> List pods 📦",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "pods", "--all-namespaces", "-o=wide",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> Check image versions for each pod 🐋",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "pods", "--all-namespaces",
"-o=jsonpath={range .items[*]}{\"\\n\"}{.metadata.name}{\" << \"}{range .spec.containers[*]}{.image}{\", \"}{end}{end}",
); err != nil {
return err
}
ip, err := kn.IP()
if err != nil {
return errors.Wrap(err, "Error getting node ip")
}
if kctx.ExternalEtcd() == nil {
if err := kn.DebugCmd(
"\n==> List etcd members 📦",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "exec", "-n=kube-system", fmt.Sprintf("etcd-%s", kn.Name()),
"--",
"etcdctl", fmt.Sprintf("--endpoints=https://%s:2379", ip),
"--ca-file=/etc/kubernetes/pki/etcd/ca.crt", "--cert-file=/etc/kubernetes/pki/etcd/peer.crt", "--key-file=/etc/kubernetes/pki/etcd/peer.key",
"member", "list",
); err != nil {
return err
}
} else {
fmt.Println("\n==> Using external etcd 📦")
}
return nil
}

View File

@ -0,0 +1,31 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
// APIServerPort is the expected default APIServerPort on the control plane node(s)
// https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips
const APIServerPort = 6443
// Token defines a dummy, well known token for automating the TLS bootstrap process
const Token = "abcdef.0123456789abcdef"
// CertificateKey defines a dummy, well known CertificateKey for automating the automatic copy certs process
// const CertificateKey = "d02db674b27811f4508bf8a5fa19fbe060921340552f13c15c9feb05aaa96824"
const CertificateKey = "0123456789012345678901234567890123456789012345678901234567890123"
// ControlPlanePort defines the port where the control plane is listening on the load balancer node
const ControlPlanePort = 6443

View File

@ -0,0 +1,52 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// manualCopyCerts implements the copy of certs from the bootstrap control-plane to secondary control-planes
type manualCopyCerts struct{}
func init() {
kcluster.RegisterAction("manual-copy-certs", newManualCopyCerts)
}
func newManualCopyCerts() kcluster.Action {
return &manualCopyCerts{}
}
// Tasks returns the list of action tasks for the manualCopyCerts
func (b *manualCopyCerts) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Copying certificates to secondary control-plane nodes ☸",
TargetNodes: "@cpN",
Run: runManualCopyCerts,
},
}
}
func runManualCopyCerts(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
err := doManualCopyCerts(kctx, kn)
if err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,189 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"github.com/pkg/errors"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// initAction implements a developer friendly kubeadm init workflow
type initAction struct{}
func init() {
kcluster.RegisterAction("kubeadm-init", newInitAction)
}
func newInitAction() kcluster.Action {
return &initAction{}
}
// Tasks returns the list of action tasks for the initAction
func (b *initAction) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Starting Kubernetes using kubeadm init (this may take a minute) ☸",
TargetNodes: "@cp1",
Run: func(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
switch flags.UsePhases {
case true:
return runInitPhases(kctx, kn, flags)
default:
return runInit(kctx, kn, flags)
}
},
},
}
}
func runInit(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
initArgs := []string{
"init",
"--ignore-preflight-errors=all",
"--config=/kind/kubeadm.conf",
}
if flags.CopyCerts {
// automatic copy certs is supported starting from v1.14
if err := atLeastKubeadm(kn, "v1.14.0-0"); err != nil {
return errors.Wrapf(err, "--automatic-copy-certs can't be used")
}
initArgs = append(initArgs,
"--experimental-upload-certs",
fmt.Sprintf("--certificate-key=%s", CertificateKey),
)
}
if err := kn.DebugCmd(
"==> kubeadm init 🚀",
"kubeadm", initArgs...,
); err != nil {
return err
}
if err := postInit(
kctx, kn,
); err != nil {
return err
}
return nil
}
func runInitPhases(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> kubeadm init phase preflight 🚀",
"kubeadm", "init", "phase", "preflight", "--ignore-preflight-errors=all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase kubelet-start 🚀",
"kubeadm", "init", "phase", "kubelet-start", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase certs all 🚀",
"kubeadm", "init", "phase", "certs", "all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase kubeconfig all 🚀",
"kubeadm", "init", "phase", "kubeconfig", "all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase control-plane all 🚀",
"kubeadm", "init", "phase", "control-plane", "all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase etcd local 🚀",
"kubeadm", "init", "phase", "etcd", "local", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> wait for kube-api server 🗻",
"/bin/bash", "-c", //use a shell so that $(...) gets resolved inside the container
fmt.Sprintf("while [[ \"$(curl -k https://localhost:%d/healthz -s -o /dev/null -w ''%%{http_code}'')\" != \"200\" ]]; do echo -n \".\"; sleep 1; done", APIServerPort),
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase upload-config all 🚀",
"kubeadm", "init", "phase", "upload-config", "all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if flags.CopyCerts {
if err := atLeastKubeadm(kn, "v1.14.0-0"); err != nil {
return errors.Wrapf(err, "--automatic-copy-certs can't be used")
}
if err := kn.DebugCmd(
"==> kubeadm init phase upload-certs 🚀",
"kubeadm", "init", "phase", "upload-certs", "--config=/kind/kubeadm.conf",
"--experimental-upload-certs", fmt.Sprintf("--certificate-key=%s", CertificateKey),
); err != nil {
return err
}
}
if err := kn.DebugCmd(
"==> kubeadm init phase mark-control-plane 🚀",
"kubeadm", "init", "phase", "mark-control-plane", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase bootstrap-token 🚀",
"kubeadm", "init", "phase", "bootstrap-token", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm init phase addon all 🚀",
"kubeadm", "init", "phase", "addon", "all", "--config=/kind/kubeadm.conf",
); err != nil {
return err
}
if err := postInit(
kctx, kn,
); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,234 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"github.com/pkg/errors"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// joinAction implements a developer friendly kubeadm join workflow
type joinAction struct{}
func init() {
kcluster.RegisterAction("kubeadm-join", newJoinAction)
}
func newJoinAction() kcluster.Action {
return &joinAction{}
}
// Tasks returns the list of action tasks for the joinAction
func (b *joinAction) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Joining control-plane node to Kubernetes ☸",
TargetNodes: "@cpN",
Run: func(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
switch flags.UsePhases {
case true:
return runJoinControlPlanePhases(kctx, kn, flags)
default:
return runJoinControlPlane(kctx, kn, flags)
}
},
},
{
Description: "Joining worker node to Kubernetes ☸",
TargetNodes: "@w*",
Run: func(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
switch flags.UsePhases {
case true:
return runJoinWorkersPhases(kctx, kn, flags)
default:
return runJoinWorkers(kctx, kn, flags)
}
},
},
}
}
func runJoinWorkers(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// get the join address
joinAddress, err := getJoinAddress(kctx)
if err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm join worker 🚀",
"kubeadm", "join", joinAddress, "--token", Token, "--discovery-token-unsafe-skip-ca-verification", "--ignore-preflight-errors=all",
); err != nil {
return err
}
return nil
}
func runJoinWorkersPhases(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// join phases are supported starting from v1.14
if err := atLeastKubeadm(kn, "v1.14.0-0"); err != nil {
return errors.Wrapf(err, "join phases can't be used")
}
// get the join address
joinAddress, err := getJoinAddress(kctx)
if err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm join phase preflight 🚀",
"kubeadm", "join", "phase", "preflight", joinAddress, "--token", Token, "--discovery-token-unsafe-skip-ca-verification", "--ignore-preflight-errors=all",
); err != nil {
return err
}
// NB. control-plane-prepare does not execute actions when joining a worker node
//if err := kn.DebugCmd(
// "==> kubeadm join phase control-plane-prepare 🚀",
// "kubeadm", "join", "phase", "control-plane-prepare", "all", joinAddress, "--discovery-token", Token, "--discovery-token-unsafe-skip-ca-verification",
//); err != nil {
// return err
//}
if err := kn.DebugCmd(
"==> kubeadm join phase kubelet-start 🚀",
"kubeadm", "join", "phase", "kubelet-start", joinAddress, "--discovery-token", Token, "--discovery-token-unsafe-skip-ca-verification",
); err != nil {
return err
}
// NB. control-plane-join does not execute actions when joining a worker node
//if err := kn.DebugCmd(
// "==> kubeadm join phase control-plane-join all 🚀",
// "kubeadm", "join", "phase", "control-plane-join", "all",
//); err != nil {
// return err
//}
return nil
}
func runJoinControlPlane(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// automatic copy certs is supported starting from v1.14
if flags.CopyCerts {
if err := atLeastKubeadm(kn, "v1.14.0-0"); err != nil {
return errors.Wrapf(err, "--automatic-copy-certs can't be used")
}
}
// if automatic copy certs is not used, simulate the manual copy of certs
if !flags.CopyCerts {
err := doManualCopyCerts(kctx, kn)
if err != nil {
return err
}
}
// get the join address
joinAddress, err := getJoinAddress(kctx)
if err != nil {
return err
}
joinArgs := []string{
"join", joinAddress, "--experimental-control-plane", "--token", Token, "--discovery-token-unsafe-skip-ca-verification", "--ignore-preflight-errors=all",
}
if flags.CopyCerts {
joinArgs = append(joinArgs,
fmt.Sprintf("--certificate-key=%s", CertificateKey),
)
}
if err := kn.DebugCmd(
"==> kubeadm join control plane 🚀",
"kubeadm", joinArgs...,
); err != nil {
return err
}
return nil
}
func runJoinControlPlanePhases(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// join phases are supported starting from v1.14
if err := atLeastKubeadm(kn, "v1.14.0-0"); err != nil {
return errors.Wrapf(err, "join phases can't be used")
}
// if automatic copy certs is not used, simulate the manual copy of certs
if !flags.CopyCerts {
err := doManualCopyCerts(kctx, kn)
if err != nil {
return err
}
}
// get the join address
joinAddress, err := getJoinAddress(kctx)
if err != nil {
return err
}
preflightArgs := []string{
"join", "phase", "preflight", joinAddress, "--experimental-control-plane", "--token", Token, "--discovery-token-unsafe-skip-ca-verification", "--ignore-preflight-errors=all",
}
if flags.CopyCerts {
preflightArgs = append(preflightArgs,
fmt.Sprintf("--certificate-key=%s", CertificateKey),
)
}
if err := kn.DebugCmd(
"==> kubeadm join phase preflight 🚀",
"kubeadm", preflightArgs...,
); err != nil {
return err
}
prepareArgs := []string{
"join", "phase", "control-plane-prepare", "all", joinAddress, "--experimental-control-plane", "--discovery-token", Token, "--discovery-token-unsafe-skip-ca-verification",
}
if flags.CopyCerts {
prepareArgs = append(prepareArgs,
fmt.Sprintf("--certificate-key=%s", CertificateKey),
)
}
if err := kn.DebugCmd(
"==> kubeadm join phase control-plane-prepare 🚀",
"kubeadm", prepareArgs...,
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm join phase kubelet-start 🚀",
"kubeadm", "join", "phase", "kubelet-start", joinAddress, "--discovery-token", Token, "--discovery-token-unsafe-skip-ca-verification",
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> kubeadm join phase control-plane-join all 🚀",
"kubeadm", "join", "phase", "control-plane-join", "all", "--experimental-control-plane",
); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,54 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// resetAction implements a developer friendly kubeadm reset workflow
type resetAction struct{}
func init() {
kcluster.RegisterAction("kubeadm-reset", newResetAction)
}
func newResetAction() kcluster.Action {
return &resetAction{}
}
// Tasks returns the list of action tasks for the resetAction
func (b *resetAction) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Destroy the Kubernetes cluster ⛵",
TargetNodes: "@all",
Run: runReset,
},
}
}
func runReset(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> Kubeadm reset 🖥",
"kubeadm", "reset", "--force",
); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,174 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"path/filepath"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
//TODO: use const for paths and filenames
// upgradeAction implements a developer friendly kubeadm upgrade workflow.
// please note that the upgrade will be executed by replacing kubeadm/kubelet/kubectl binaries;
// for the sake of simplicity, we are skipping drain/uncordon when upgrading nodes
//
// this action assumes that:
// 1) all the necessary images are already pre-loaded (otherwise kubeadm/kubelet will attempt to download images as usual)
// 2) the kubeadm/kubelet/kubectl binaries for the new kubernetes version are available in a well known place
//
// TODO:
// - apt upgrade, similar to user procedure (NB. currently only the apt mode uses deb during node-image creation, and the installation doesn't mark packages
// for preventing uncontrolled upgrades)
// - drain/uncordon of worker nodes
// - checking consistency of version among the provided binaries and the declared target version; if possible remove version flag
type upgradeAction struct{}
func init() {
kcluster.RegisterAction("kubeadm-upgrade", newUpgradeAction)
}
func newUpgradeAction() kcluster.Action {
return &upgradeAction{}
}
// Tasks returns the list of action tasks for the upgradeAction
func (b *upgradeAction) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Upgrade the kubeadm binary ⛵",
TargetNodes: "@all",
Run: runUpgradeKubeadmBinary,
},
{
Description: "Upgrade bootstrap control-plane ⛵",
TargetNodes: "@cp1",
Run: runKubeadmUpgrade,
},
{
Description: "Upgrade secondary control-planes ⛵",
TargetNodes: "@cpN",
Run: runKubeadmUpgradeControlPlane,
},
{
Description: "Upgrade workers config ⛵",
TargetNodes: "@w*",
Run: runKubeadmUpgradeWorkers,
},
{
Description: "Upgrade kubelet and kubectl ⛵",
TargetNodes: "@all",
Run: runUpgradeKubeletKubectl,
},
}
}
func runUpgradeKubeadmBinary(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
src := filepath.Join("/kinder", "upgrade", "kubeadm")
dest := filepath.Join("/usr", "bin", "kubeadm")
fmt.Println("==> upgrading kubeadm 🚀")
if err := kn.Command(
"cp", src, dest,
).Run(); err != nil {
return err
}
return nil
}
func runKubeadmUpgrade(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> kubeadm upgrade apply 🚀",
"kubeadm", "upgrade", "apply", "-f", flags.UpgradeVersion.String(),
); err != nil {
return err
}
//TODO: check if download config included (and if restart kubelet included)
return nil
}
func runKubeadmUpgradeControlPlane(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> kubeadm upgrade node experimental-control-plane 🚀",
"kubeadm", "upgrade", "node", "experimental-control-plane",
); err != nil {
return err
}
//TODO: check if download config included (and if restart kubelet included)
return nil
}
func runKubeadmUpgradeWorkers(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
if err := kn.DebugCmd(
"==> kubeadm upgrade node config 🚀",
"kubeadm", "upgrade", "node", "config", "--kubelet-version", flags.UpgradeVersion.String(),
); err != nil {
return err
}
//TODO: check if restart kubelet included
return nil
}
func runUpgradeKubeletKubectl(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// upgrade kubectl
fmt.Println("==> upgrading kubectl 🚀")
src := filepath.Join("/kinder", "upgrade", "kubectl")
dest := filepath.Join("/usr", "bin", "kubectl")
if err := kn.Command(
"cp", src, dest,
).Run(); err != nil {
return err
}
// upgrade kubelet
fmt.Println("==> upgrading kubelet 🚀")
src = filepath.Join("/kinder", "upgrade", "kubelet")
dest = filepath.Join("/usr", "bin", "kubelet")
if err := kn.Command(
"cp", src, dest,
).Run(); err != nil {
return err
}
fmt.Println("==> restart kubelet 🚀")
if err := kn.Command(
"systemctl", "restart", "kubelet",
).Run(); err != nil {
return err
}
// write the new version to "/kind/version"
// NOTE: run through a shell so that the > redirection happens inside the container
if err := kn.Command(
"/bin/sh", "-c", fmt.Sprintf("echo \"%s\" > /kind/version", flags.UpgradeVersion.String()),
).Run(); err != nil {
return err
}
return nil
}


@ -0,0 +1,119 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"strings"
"github.com/pkg/errors"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
)
// smokeTest implements a quick test of the proper functioning of a Kubernetes cluster
type smokeTest struct{}
func init() {
kcluster.RegisterAction("smoke-test", newSmokeTest)
}
func newSmokeTest() kcluster.Action {
return &smokeTest{}
}
// Tasks returns the list of action tasks for the smokeTest
func (b *smokeTest) Tasks() []kcluster.Task {
return []kcluster.Task{
{
Description: "Running smoke test on the cluster ☸",
TargetNodes: "@cp1",
Run: runSmokeTest,
},
}
}
func runSmokeTest(kctx *kcluster.KContext, kn *kcluster.KNode, flags kcluster.ActionFlags) error {
// best-effort cleanup of leftovers from a previous run (errors are intentionally ignored)
kn.Command("kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "delete", "deployments/nginx").Run()
kn.Command("kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "delete", "service/nginx").Run()
// Test deployments
if err := kn.DebugCmd(
"==> Test deployments 🖥",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "run", "nginx", "--image=nginx:1.15.9-alpine", "--image-pull-policy=IfNotPresent",
); err != nil {
return err
}
if err := waitForPodsRunning(kn, "nginx", 1); err != nil {
return err
}
// Test service type NodePort
if err := kn.DebugCmd(
"==> service type NodePort 🖥",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "expose", "deployment", "nginx", "--port=80", "--type=NodePort",
); err != nil {
return err
}
nodePort, err := getNodePort(kn, "nginx")
if err != nil {
return err
}
err = checkNodePort(kctx, nodePort)
if err != nil {
return err
}
podName, err := getPodName(kn, "nginx")
if err != nil {
return err
}
fmt.Printf("==> Test kubectl logs 🖥\n\n")
lines, err := kn.CombinedOutputLines(
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "logs", podName,
)
if err != nil {
return errors.Wrapf(err, "failed to run kubectl logs")
}
fmt.Printf("%d logs lines returned\n\n", len(lines))
fmt.Printf("==> Test kubectl exec 🖥\n\n")
lines, err = kn.CombinedOutputLines(
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "exec", podName, "--", "nslookup", "kubernetes",
)
if err != nil {
return errors.Wrapf(err, "failed to run kubectl exec")
}
fmt.Printf("%d output lines returned\n\n", len(lines))
fmt.Printf("==> Test DNS resolution 🖥\n\n")
if len(lines) < 4 || !strings.Contains(lines[3], "kubernetes.default.svc.cluster.local") {
return errors.New("dns resolution error")
}
fmt.Printf("kubernetes service answers to %s\n\n", lines[3])
kn.Command("kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "delete", "deployments/nginx").Run()
kn.Command("kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "delete", "service/nginx").Run()
fmt.Printf("==> Smoke test passed!\n\n")
return nil
}

kinder/pkg/actions/util.go Normal file

@ -0,0 +1,300 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package actions
import (
"fmt"
"os"
"path/filepath"
"strings"
"time"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/util/version"
kcluster "k8s.io/kubeadm/kinder/pkg/cluster"
"sigs.k8s.io/kind/pkg/fs"
)
// copyKubeConfigToHost copies the admin.conf file to the host,
// taking care of replacing the server address with localhost:port
func copyKubeConfigToHost(kctx *kcluster.KContext, kn *kcluster.KNode) error {
//TODO: use host port from external lb in case it exists
hostPort, err := kn.Ports(6443)
if err != nil {
return errors.Wrap(err, "failed to get api server port mapping")
}
kubeConfigPath := kctx.KubeConfigPath()
if err := kn.WriteKubeConfig(kubeConfigPath, hostPort); err != nil {
return errors.Wrap(err, "failed to get kubeconfig from node")
}
return nil
}
// postInit executes post-init tasks: copying the admin.conf file to the host,
// installing the CNI plugin and, if there are no worker nodes, removing the master taint
func postInit(kctx *kcluster.KContext, kn *kcluster.KNode) error {
if err := copyKubeConfigToHost(
kctx, kn,
); err != nil {
return err
}
if err := kn.DebugCmd(
"==> install cni 🗻",
"/bin/sh", "-c", //use shell to get $(...) resolved into the container
`kubectl apply --kubeconfig=/etc/kubernetes/admin.conf -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version --kubeconfig=/etc/kubernetes/admin.conf | base64 | tr -d '\n')"`,
); err != nil {
return err
}
if len(kctx.Workers()) == 0 {
if err := kn.DebugCmd(
"==> remove master taint 🗻",
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "taint", "nodes", "--all", "node-role.kubernetes.io/master-",
); err != nil {
return err
}
}
/*TODO
// add the default storage class
if err := addDefaultStorageClass(node); err != nil {
return errors.Wrap(err, "failed to add default storage class")
}
// Wait for the control plane node to reach Ready status.
isReady := nodes.WaitForReady(node, time.Now().Add(ec.waitForReady))
if ec.waitForReady > 0 {
if !isReady {
log.Warn("timed out waiting for control plane to be ready")
}
}
*/
fmt.Printf(
"Cluster creation complete. You can now use the cluster with:\n\n"+
"export KUBECONFIG=\"$(kind get kubeconfig-path --name=%q)\"\n"+
"kubectl cluster-info\n",
kctx.Name(),
)
return nil
}
// getJoinAddress returns the join address: the control plane endpoint in case the cluster has
// an external load balancer in front of the control-plane nodes, otherwise the address of the
// bootstrap control plane node.
func getJoinAddress(kctx *kcluster.KContext) (string, error) {
// get the control plane endpoint, in case the cluster has an external load balancer in
// front of the control-plane nodes
if kctx.ExternalLoadBalancer() != nil {
// gets the IP of the load balancer
loadBalancerIP, err := kctx.ExternalLoadBalancer().IP()
if err != nil {
return "", errors.Wrapf(err, "failed to get IP for node: %s", kctx.ExternalLoadBalancer().Name())
}
return fmt.Sprintf("%s:%d", loadBalancerIP, ControlPlanePort), nil
}
// gets the IP of the bootstrap control plane node
controlPlaneIP, err := kctx.BootStrapControlPlane().IP()
if err != nil {
return "", errors.Wrapf(err, "failed to get IP for node: %s", kctx.BootStrapControlPlane().Name())
}
return fmt.Sprintf("%s:%d", controlPlaneIP, APIServerPort), nil
}
// doManualCopyCerts copies certs from the bootstrap control plane node to the joining node
func doManualCopyCerts(kctx *kcluster.KContext, kn *kcluster.KNode) error {
fmt.Printf("==> copy certificate\n")
// creates the folder tree for pre-loading necessary cluster certificates
// on the joining node
if err := kn.Command("mkdir", "-p", "/etc/kubernetes/pki/etcd").Run(); err != nil {
return errors.Wrap(err, "failed to create pki folder")
}
// define the list of necessary cluster certificates
fileNames := []string{
"ca.crt", "ca.key",
"front-proxy-ca.crt", "front-proxy-ca.key",
"sa.pub", "sa.key",
}
if kctx.ExternalEtcd() == nil {
fileNames = append(fileNames, "etcd/ca.crt", "etcd/ca.key")
}
// creates a temporary folder on the host that acts as a transit area
// for moving the necessary cluster certificates
tmpDir, err := fs.TempDir("", "")
if err != nil {
return err
}
defer os.RemoveAll(tmpDir)
err = os.MkdirAll(filepath.Join(tmpDir, "/etcd"), os.ModePerm)
if err != nil {
return err
}
// copies certificates from the bootstrap control plane node to the joining node
for _, fileName := range fileNames {
fmt.Printf("%s\n", fileName)
// the path of the certificate on the node
containerPath := filepath.Join("/etc/kubernetes/pki", fileName)
// the path of the certificate in the tmp area on the host
tmpPath := filepath.Join(tmpDir, fileName)
// copies from bootstrap control plane node to tmp area
if err := kctx.BootStrapControlPlane().CopyFrom(containerPath, tmpPath); err != nil {
return errors.Wrapf(err, "failed to copy certificate %s", fileName)
}
// copies from tmp area to joining node
if err := kn.CopyTo(tmpPath, containerPath); err != nil {
return errors.Wrapf(err, "failed to copy certificate %s", fileName)
}
}
fmt.Println()
return nil
}
// atLeastKubeadm checks that the kubeadm version installed on the node is at least v
func atLeastKubeadm(kn *kcluster.KNode, v string) error {
kubeadmVersion, err := kn.KubeadmVersion()
if err != nil {
return err
}
vS, err := version.ParseSemantic(v)
if err != nil {
return err
}
if !kubeadmVersion.AtLeast(vS) {
return errors.Errorf("At least kubeadm version %s is required on node %s, currently %q", v, kn.Name(), kubeadmVersion)
}
return nil
}
func waitForPodsRunning(kn *kcluster.KNode, label string, replicas int) error {
for i := 0; i < 10; i++ {
fmt.Printf(".")
time.Sleep(time.Duration(i) * time.Second)
if lines, err := kn.CombinedOutputLines(
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "pods", "-l", fmt.Sprintf("run=%s", label), "-o", "jsonpath='{.items[*].status.phase}'",
); err == nil {
if len(lines) != 1 {
return errors.New("Error checking pod status")
}
statuses := strings.Split(strings.Trim(lines[0], "'"), " ")
// if pod number not yet converged, wait
if len(statuses) != replicas {
continue
}
// check for pods status
running := true
for j := 0; j < replicas; j++ {
if statuses[j] != "Running" {
running = false
}
}
if running {
fmt.Printf("%d pods running!\n\n", replicas)
return nil
}
}
}
return errors.New("Pod not yet started :-(")
}
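waitForPodsRunning above retries with linearly increasing sleeps (0s, 1s, ... 9s, roughly 45s in total) before giving up. The generic poll-until-true pattern it uses can be sketched on its own (the names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check with linearly increasing delays (0, step, 2*step, ...),
// like waitForPodsRunning's time.Sleep(time.Duration(i) * time.Second) loop.
func pollUntil(attempts int, step time.Duration, check func() bool) error {
	for i := 0; i < attempts; i++ {
		time.Sleep(time.Duration(i) * step)
		if check() {
			return nil
		}
	}
	return errors.New("condition not met before timeout")
}

func main() {
	calls := 0
	err := pollUntil(10, time.Millisecond, func() bool {
		calls++
		return calls == 3 // becomes true on the third attempt
	})
	fmt.Println(err, calls) // prints "<nil> 3"
}
```

The linear backoff keeps the first attempts cheap while still waiting long enough for slow pod starts.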
func getNodePort(kn *kcluster.KNode, svc string) (string, error) {
lines, err := kn.CombinedOutputLines(
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "svc", svc, "--output=jsonpath='{range .spec.ports[0]}{.nodePort}'",
)
if err != nil {
return "", errors.Wrapf(err, "failed to get node port")
}
if len(lines) != 1 {
return "", errors.New("failed to parse node port")
}
return strings.Trim(lines[0], "'"), nil
}
func checkNodePort(kctx *kcluster.KContext, port string) error {
for _, n := range kctx.KubernetesNodes() {
fmt.Printf("checking node port %s on node %s...", port, n.Name())
ip, err := n.IP()
if err != nil {
return err
}
lines, err := n.CombinedOutputLines(
"curl", "-I", fmt.Sprintf("http://%s:%s", ip, port),
)
if err != nil {
return errors.Wrapf(err, "error checking node port")
}
if len(lines) < 1 {
return errors.New("error checking node port: empty answer")
}
if strings.Trim(lines[0], "\n\r") == "HTTP/1.1 200 OK" {
fmt.Printf("pass!\n")
continue
}
return errors.Errorf("node port %s on node %s does not work", port, n.Name())
}
fmt.Printf("\n")
return nil
}
func getPodName(kn *kcluster.KNode, label string) (string, error) {
lines, err := kn.CombinedOutputLines(
"kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "get", "pods", "-l", fmt.Sprintf("run=%s", label), "-o", "jsonpath='{.items[0].metadata.name}'",
)
if err != nil {
return "", errors.Wrapf(err, "failed to get pod name")
}
if len(lines) != 1 {
return "", errors.New("failed to parse pod name")
}
return strings.Trim(lines[0], "'"), nil
}
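atLeastKubeadm above relies on version.ParseSemantic and AtLeast from k8s.io/apimachinery. For illustration, here is a stdlib-only sketch of the same "at least version X" comparison for plain MAJOR.MINOR.PATCH strings (pre-release tags, which the real parser handles, are ignored here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "MAJOR.MINOR.PATCH" (optionally "v"-prefixed) into three ints.
func parse(v string) ([3]int, error) {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) != 3 {
		return out, fmt.Errorf("invalid version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, fmt.Errorf("invalid version %q: %v", v, err)
		}
		out[i] = n
	}
	return out, nil
}

// atLeast reports whether have >= want, comparing component by component.
func atLeast(have, want string) bool {
	h, err1 := parse(have)
	w, err2 := parse(want)
	if err1 != nil || err2 != nil {
		return false
	}
	for i := 0; i < 3; i++ {
		if h[i] != w[i] {
			return h[i] > w[i]
		}
	}
	return true // versions are equal
}

func main() {
	fmt.Println(atLeast("1.14.1", "1.14.0")) // true
	fmt.Println(atLeast("1.13.2", "1.14.0")) // false
}
```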


@ -0,0 +1,232 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alter
import (
"fmt"
"os"
"path"
"time"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"sigs.k8s.io/kind/pkg/container/docker"
"sigs.k8s.io/kind/pkg/exec"
"sigs.k8s.io/kind/pkg/fs"
)
// DefaultBaseImage is the default base image used
const DefaultBaseImage = "kindest/node:latest"
// DefaultImage is the default name:tag for the alter image
const DefaultImage = DefaultBaseImage
// AlterContainerLabelKey is applied to each altered container
const AlterContainerLabelKey = "io.k8s.sigs.kinder.alter"
// Context is used to alter the kind node image, and contains
// alter configuration
type Context struct {
baseImage string
image string
imagePaths []string
upgradeBinariesPath string
kubeadmPath string
bits []bits
}
// Option is Context configuration option supplied to NewContext
type Option func(*Context)
// WithImage configures a NewContext to tag the built image with `image`
func WithImage(image string) Option {
return func(b *Context) {
b.image = image
}
}
// WithBaseImage configures a NewContext to use `image` as the base image
func WithBaseImage(image string) Option {
return func(b *Context) {
b.baseImage = image
}
}
// WithImageTars configures a NewContext to include additional images tars
func WithImageTars(paths []string) Option {
return func(b *Context) {
b.imagePaths = append(b.imagePaths, paths...)
}
}
// WithUpgradeBinaries configures a NewContext to include binaries for upgrade
func WithUpgradeBinaries(upgradeBinariesPath string) Option {
return func(b *Context) {
b.upgradeBinariesPath = upgradeBinariesPath
}
}
// WithKubeadm configures a NewContext to override the kubeadm binary
func WithKubeadm(path string) Option {
return func(b *Context) {
b.kubeadmPath = path
}
}
// NewContext creates a new Context with default configuration,
// overridden by the options supplied in the order that they are supplied
func NewContext(options ...Option) (ctx *Context, err error) {
// default options
ctx = &Context{
baseImage: DefaultBaseImage,
image: DefaultBaseImage,
}
// apply user options
for _, option := range options {
option(ctx)
}
// initialize bits
if len(ctx.imagePaths) > 0 {
ctx.bits = append(ctx.bits, newImageBits(ctx.imagePaths))
}
if ctx.upgradeBinariesPath != "" {
ctx.bits = append(ctx.bits, newUpgradeBinaryBits(ctx.upgradeBinariesPath))
}
if ctx.kubeadmPath != "" {
ctx.bits = append(ctx.bits, newKubeadmBits(ctx.kubeadmPath))
}
return ctx, nil
}
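NewContext above is a standard use of the functional options pattern: defaults are set first, then each Option mutates the Context in the order supplied. The same pattern on a toy config (the names are illustrative stand-ins for alter.Context and alter.Option):

```go
package main

import "fmt"

// config is a toy analogue of alter.Context.
type config struct {
	baseImage string
	image     string
}

// option mutates a config, like alter.Option.
type option func(*config)

func withImage(image string) option {
	return func(c *config) { c.image = image }
}

func withBaseImage(image string) option {
	return func(c *config) { c.baseImage = image }
}

// newConfig applies defaults first, then user options in order,
// so later options win over defaults and over earlier options.
func newConfig(opts ...option) *config {
	c := &config{baseImage: "kindest/node:latest", image: "kindest/node:latest"}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newConfig(withImage("kindest/node:altered"))
	fmt.Println(c.baseImage, c.image)
}
```

The pattern keeps the constructor signature stable while new knobs (image tars, upgrade binaries, kubeadm override) are added as options.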
// Alter alters the cluster node image
func (c *Context) Alter() (err error) {
// create tempdir to alter the image in
alterDir, err := fs.TempDir("", "kinder-alter-image")
if err != nil {
return err
}
defer os.RemoveAll(alterDir)
log.Infof("Altering node image in: %s", alterDir)
// populate the kubernetes artifacts first
if err := c.populateBits(alterDir); err != nil {
return err
}
// then perform the actual docker image alter
return c.alterImage(alterDir)
}
func (c *Context) populateBits(alterDir string) error {
log.Info("Starting populate bits ...")
// always create bits dir
bitsDir := path.Join(alterDir, "bits")
if err := os.Mkdir(bitsDir, 0777); err != nil {
return errors.Wrap(err, "failed to make bits dir")
}
// copy all bits from their source path to where we will COPY them into
// the dockerfile, see images/node/Dockerfile
for _, bits := range c.bits {
bitPaths := bits.Paths()
for src, dest := range bitPaths {
realDest := path.Join(bitsDir, dest)
log.Debugf("Copying: %s to %s", src, dest)
// NOTE: we use Copy, not CopyFile, because Copy ensures that the dest dir exists
if err := fs.Copy(src, realDest); err != nil {
return errors.Wrap(err, "failed to copy alter bits")
}
}
}
return nil
}
func (c *Context) alterImage(dir string) error {
// alter the image, tagged as c.image, using our tempdir as the context
log.Debug("Starting image alter ...")
// create alter container
// NOTE: we are using docker run + docker commit, so we can install
// debs without permanently copying them into the image.
// if docker gets proper squash support, we can rm them instead
// This also allows the bits implementations to perform programmatic
// installs in the image
containerID, err := c.createAlterContainer(dir)
// ensure we will delete it
if containerID != "" {
defer func() {
exec.Command("docker", "rm", "-f", "-v", containerID).Run()
}()
}
if err != nil {
log.Errorf("Image alter Failed! %v", err)
return err
}
// install the kube bits
log.Info("Starting bits install ...")
ic := &installContext{
basePath: dir,
containerID: containerID,
}
for _, bits := range c.bits {
if err = bits.Install(ic); err != nil {
log.Errorf("Image build Failed! %v", err)
return err
}
}
// Save the image changes to a new image
cmd := exec.Command("docker", "commit", containerID, c.image)
exec.InheritOutput(cmd)
if err = cmd.Run(); err != nil {
log.Errorf("Image alter Failed! %v", err)
return err
}
log.Info("Image alter completed.")
return nil
}
func (c *Context) createAlterContainer(alterDir string) (id string, err error) {
// attempt to explicitly pull the image if it doesn't exist locally
// we don't care if this errors, we'll still try to run which also pulls
_, _ = docker.PullIfNotPresent(c.baseImage, 4)
id, err = docker.Run(
c.baseImage,
docker.WithRunArgs(
"-d", // make the client exit while the container continues to run
// label the container to make them easier to track
"--label", fmt.Sprintf("%s=%s", AlterContainerLabelKey, time.Now().Format(time.RFC3339Nano)),
"-v", fmt.Sprintf("%s:/alter", alterDir),
// the container should hang forever so we can exec in it
"--entrypoint=sleep",
),
docker.WithContainerArgs(
"infinity", // sleep infinitely to keep the container around
),
)
if err != nil {
return id, errors.Wrap(err, "failed to create alter container")
}
return id, nil
}


@ -0,0 +1,66 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alter
import "sigs.k8s.io/kind/pkg/exec"
// installContext is passed to bits.Install and provides the
// operations needed for installing the bits in the alter container
type installContext struct {
basePath string
containerID string
}
// BasePath returns the base path that Paths() entries were populated relative to
func (ic *installContext) BasePath() string {
return ic.basePath
}
func (ic *installContext) Run(command string, args ...string) error {
cmd := exec.Command(
"docker",
append(
[]string{"exec", ic.containerID, command},
args...,
)...,
)
exec.InheritOutput(cmd)
return cmd.Run()
}
func (ic *installContext) CombinedOutputLines(command string, args ...string) ([]string, error) {
cmd := exec.Command(
"docker",
append(
[]string{"exec", ic.containerID, command},
args...,
)...,
)
return exec.CombinedOutputLines(cmd)
}
// bits provides the locations of Kubernetes Binaries / Images
// needed on the cluster nodes
type bits interface {
// Paths returns a map of path on host machine to desired path in the alter folder
// Note: if Images are populated to images/, the cluster provisioning
// will load these prior to calling kubeadm
Paths() map[string]string
// Install should install (deploy) the bits on the node, assuming paths
// have been populated
Install(*installContext) error
}
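The bits interface splits the work into two phases: Paths declares which host files to stage into the alter folder, and Install acts inside the container once they are staged. A toy, self-contained version of the same two-phase contract (the runner type here stands in for installContext and only records commands instead of exec-ing them):

```go
package main

import "fmt"

// runner is a stand-in for installContext: it records the commands
// it is asked to run instead of exec-ing them in a container.
type runner struct {
	commands []string
}

func (r *runner) Run(command string, args ...string) error {
	r.commands = append(r.commands, fmt.Sprintf("%s %v", command, args))
	return nil
}

// bits mirrors the two-phase contract: declare paths, then install.
type bits interface {
	Paths() map[string]string
	Install(*runner) error
}

// fileBits stages one host file into the alter folder, then fixes ownership.
type fileBits struct{ src, dest string }

func (b *fileBits) Paths() map[string]string {
	return map[string]string{b.src: b.dest}
}

func (b *fileBits) Install(r *runner) error {
	return r.Run("chown", "root:root", "/kind/"+b.dest)
}

func main() {
	var b bits = &fileBits{src: "/tmp/kubeadm", dest: "kubeadm"}
	r := &runner{}
	fmt.Println(b.Paths())
	_ = b.Install(r)
	fmt.Println(r.commands)
}
```

Separating the two phases lets the caller stage all files once (into the Dockerfile COPY context) before any in-container command runs.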


@ -0,0 +1,118 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alter
import (
"io/ioutil"
"os"
"path/filepath"
log "github.com/sirupsen/logrus"
)
//TODO: use const for paths
// imageBits implements bits for copying additional image tar files into the node image
type imageBits struct {
srcs []string
}
var _ bits = &imageBits{}
func newImageBits(args []string) bits {
return &imageBits{
srcs: args,
}
}
// Paths implements bits.Paths
func (b *imageBits) Paths() map[string]string {
var paths = map[string]string{}
// for each of the given path
for _, src := range b.srcs {
// gets the src descriptor
info, err := os.Stat(src)
if err != nil {
log.Warningf("Error getting file descriptor for %q: %v", src, err)
continue
}
// if src is a Directory
if info.IsDir() {
// gets all the entries in the folder
entries, err := ioutil.ReadDir(src)
if err != nil {
log.Warningf("Error getting directory content for %q: %v", src, err)
}
// for each entry in the folder
for _, entry := range entries {
// check if the file is a valid tar file (if not discard)
name := entry.Name()
if !(filepath.Ext(name) == ".tar" && entry.Mode().IsRegular()) {
log.Warningf("Image file %q is not a valid .tar file. Removed from imageBits", name)
continue
}
// Add to the path list; the dest path is a subfolder into the alterDir
entrySrc := filepath.Join(src, name)
entryDest := filepath.Join("images", name)
paths[entrySrc] = entryDest
log.Debugf("imageBits %s added to paths", entrySrc)
}
continue
}
// check if the file is a valid tar file (if not discard)
if !(filepath.Ext(src) == ".tar" && info.Mode().IsRegular()) {
log.Warningf("Image file %q is not a valid .tar file. Removed from imageBits", src)
continue
}
// Add to the path list; the dest path is a subfolder into the alterDir
dest := filepath.Join("images", filepath.Base(src))
paths[src] = dest
log.Debugf("imageBits %s added to paths", src)
}
return paths
}
// Install implements bits.Install
func (b *imageBits) Install(ic *installContext) error {
// The src path is a subfolder into the alterDir, that is mounted in the
// container as /alter
src := filepath.Join("/alter", "bits", "images")
// The dest path is /kind/images, a well known folder where kind(er) will
// search for pre-loaded images during `kind(er) create`
dest := filepath.Join("/kind")
// copy artifacts in
if err := ic.Run("rsync", "-r", src, dest); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
// make sure we own the tarballs
// TODO: someday we might need a different user ...
if err := ic.Run("chown", "-R", "root:root", filepath.Join("/kind", "images")); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
return nil
}


@ -0,0 +1,89 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alter
import (
"os"
"path/filepath"
log "github.com/sirupsen/logrus"
)
//TODO: use const for paths and filenames
// kubeadmBits implements bits for overriding the kubeadm binary in the node image
type kubeadmBits struct {
src string
}
var _ bits = &kubeadmBits{}
// newKubeadmBits returns a new bits backed by the kubeadm binary
func newKubeadmBits(arg string) bits {
return &kubeadmBits{
src: arg,
}
}
// Paths implements bits.Paths
func (b *kubeadmBits) Paths() map[string]string {
var paths = map[string]string{}
// gets the src descriptor
info, err := os.Stat(b.src)
if err != nil {
log.Warningf("Error getting file descriptor for %q: %v", b.src, err)
return paths
}
// check if the file is a valid kubeadm binary (if not discard)
if !(filepath.Base(b.src) == "kubeadm" && info.Mode().IsRegular()) {
log.Warningf("File %q is not a valid kubeadm binary. Removed from kubeadmBits", b.src)
return paths
}
// Add to the path list; the dest path is a kubeadm file into the alterDir
dest := "kubeadm"
paths[b.src] = dest
log.Debugf("kubeadmBits %s added to paths", b.src)
return paths
}
// Install implements bits.Install
func (b *kubeadmBits) Install(ic *installContext) error {
// The src path is a subfolder into the alterDir, that is mounted in the
// container as /alter
src := filepath.Join("/alter", "bits", "kubeadm")
// The dest path is /usr/bin/kubeadm, the location of the kubeadm binary
// installed in the node image during kind(er) build node-image
dest := filepath.Join("/usr", "bin", "kubeadm")
// copy artifacts in
if err := ic.Run("cp", src, dest); err != nil {
log.Errorf("Image alter Failed! %v", err)
return err
}
// make sure we own the kubeadm binary
// TODO: someday we might need a different user ...
if err := ic.Run("chown", "-R", "root:root", "/usr/bin/kubeadm"); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
return nil
}


@ -0,0 +1,107 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package alter
import (
"os"
"path/filepath"
log "github.com/sirupsen/logrus"
)
// upgradeBinaryBits implements bits for copying the binaries used by the upgrade procedure into the node image
type upgradeBinaryBits struct {
src string
}
var _ bits = &upgradeBinaryBits{}
func newUpgradeBinaryBits(arg string) bits {
return &upgradeBinaryBits{
src: arg,
}
}
// Paths implements bits.Paths
func (b *upgradeBinaryBits) Paths() map[string]string {
var paths = map[string]string{}
addPathForBinary := func(binary string) {
// gets the src descriptor
src := filepath.Join(b.src, binary)
info, err := os.Stat(src)
if err != nil {
log.Warningf("Error getting file descriptor for %q: %v", src, err)
return
}
// check if the file is a valid regular file (if not discard)
if !info.Mode().IsRegular() {
log.Warningf("%q is not a valid binary file. Removed from upgradeBinaryBits", src)
return
}
// Add to the path list; the dest path is subfolder into the alterDir
dest := filepath.Join("upgrade", binary)
paths[src] = dest
log.Debugf("upgradeBinaryBits %s added to paths", src)
}
addPathForBinary("kubeadm")
addPathForBinary("kubelet")
addPathForBinary("kubectl")
return paths
}
// Install implements bits.Install
func (b *upgradeBinaryBits) Install(ic *installContext) error {
// The src path is a subfolder into the alterDir, that is mounted in the
// container as /alter
src := filepath.Join("/alter", "bits", "upgrade")
// The dest path is /kinder/upgrade, a well known folder where kinder will
// search when executing the upgrade procedure
dest := filepath.Join("/kinder")
// create dest folder
if err := ic.Run("mkdir", "-p", dest); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
// copy artifacts in
if err := ic.Run("rsync", "-r", src, dest); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
// make sure we own the binary
// TODO: someday we might need a different user ...
if err := ic.Run("chown", "-R", "root:root", filepath.Join("/kinder", "upgrade")); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
// make sure the binaries are executable
// TODO: someday we might need a different user ...
if err := ic.Run("chmod", "-R", "+x", filepath.Join("/kinder", "upgrade")); err != nil {
log.Errorf("Image alter failed! %v", err)
return err
}
return nil
}


@ -0,0 +1,187 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"sort"
"sync"
"github.com/pkg/errors"
)
// Action defines a set of tasks to be executed on a `kind` cluster.
// Usage of actions allows defining repetitive, high level abstractions/workflows
// by composing lower level tasks
type Action interface {
// Tasks returns the list of tasks that are identified by this action.
// Please note that the order of tasks is important, and it will be
// respected during execution
Tasks() []Task
}
// Task defines a logical step of an action to be executed on a `kind` cluster.
// At exec time the logical step will then apply to the current cluster
// topology, and be planned for execution zero, one or many times accordingly.
type Task struct {
// Description of the task
Description string
// TargetNodes defines the node selector that identifies the nodes where this
// task should be executed
TargetNodes string
// Run is the func that implements the task action
Run func(*KContext, *KNode, ActionFlags) error
}
// plannedTask defines a Task planned for execution on a given node.
type plannedTask struct {
// task to be executed
Task Task
// node where the task should be executed
Node *KNode
// plannedTask should respect the given order of actions and tasks
actionIndex int
taskIndex int
}
// executionPlan contains an ordered list of planned tasks.
// Please note that the planning order is critical for providing a
// predictable, "kubeadm friendly" and consistent execution order.
type executionPlan []*plannedTask
// internal registry of named Action implementations
var actionImpls = struct {
impls map[string]func() Action
sync.Mutex
}{
impls: map[string]func() Action{},
}
// RegisterAction registers a new named actionBuilder function for use
func RegisterAction(name string, actionBuilderFunc func() Action) {
actionImpls.Lock()
actionImpls.impls[name] = actionBuilderFunc
actionImpls.Unlock()
}
// getAction returns one instance of a registered action
func getAction(name string) (Action, error) {
actionImpls.Lock()
actionBuilderFunc, ok := actionImpls.impls[name]
actionImpls.Unlock()
if !ok {
return nil, errors.Errorf("no Action implementation with name: %s", name)
}
return actionBuilderFunc(), nil
}
// KnownActions returns the list of known actions
func KnownActions() (actions []string) {
actionImpls.Lock()
for k := range actionImpls.impls {
actions = append(actions, k)
}
actionImpls.Unlock()
return actions
}
// newExecutionPlan creates an execution plan by applying the logical steps/tasks
// defined for each action to the actual cluster topology. As a result a task
// could be executed zero, one or more times according to the target nodes
// selector defined for each task.
// The execution plan is ordered, providing a predictable, "kubeadm friendly"
// and consistent execution order; with this regard please note that the order
// of actions is important, and it will be respected by planning.
// TODO(fabrizio pandini): probably it will be necessary to add another criteria
// for ordering planned task for the most complex workflows (e.g.
// init-join-upgrade and then join again)
// e.g. it should be something like "action group" where each action
// group is a list of actions
func newExecutionPlan(cfg *KContext, actions []string) (executionPlan, error) {
// for each actionName
var plan = executionPlan{}
for i, name := range actions {
// get the action implementation instance
actionImpl, err := getAction(name)
if err != nil {
return nil, err
}
		// for each logical task defined for the action
		for j, t := range actionImpl.Tasks() {
			// get the list of target nodes in the current topology
			targetNodes, err := cfg.selectNodes(t.TargetNodes)
			if err != nil {
				return nil, err
			}
for _, n := range targetNodes {
// creates the planned task
taskContext := &plannedTask{
Node: n,
Task: t,
actionIndex: i,
taskIndex: j,
}
plan = append(plan, taskContext)
}
}
}
	// sorts the list of planned tasks ensuring a predictable, "kubeadm friendly"
// and consistent execution order
sort.Sort(plan)
return plan, nil
}
// Len of the executionPlan.
// It is required for making ExecutionPlan sortable.
func (t executionPlan) Len() int {
return len(t)
}
// Less reports whether element i of the executionPlan should be executed
// before element j.
// It is required for making ExecutionPlan sortable.
func (t executionPlan) Less(i, j int) bool {
return t[i].ExecutionOrder() < t[j].ExecutionOrder()
}
// ExecutionOrder returns a string that can be used for sorting planned tasks
// into a predictable, "kubeadm friendly" and consistent order.
// NB. we are using a string to combine all the item considered into something
// that can be easily sorted using a lexicographical order
func (p *plannedTask) ExecutionOrder() string {
return fmt.Sprintf("Node.ProvisioningOrder: %d - Node.Name: %s - actionIndex: %d - taskIndex: %d",
// Then PlannedTask are grouped by machines, respecting the kubeadm node
// ProvisioningOrder: first complete provisioning on bootstrap control
// plane, then complete provisioning of secondary control planes, and
// finally provision worker nodes.
p.Node.ProvisioningOrder(),
// Node name is considered in order to get a predictable/repeatable ordering
// in case of many nodes with the same ProvisioningOrder
p.Node.Name(),
// If both the two criteria above are equal, the given order of actions will
// be respected and, for each action, the predefined order of tasks
// will be used
p.actionIndex,
p.taskIndex,
)
}
// Swap two elements of the ExecutionPlan.
// It is required for making ExecutionPlan sortable.
func (t executionPlan) Swap(i, j int) {
t[i], t[j] = t[j], t[i]
}
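The string-key trick used by `ExecutionOrder` above can be illustrated in isolation. The following is a minimal, self-contained sketch (type and function names are illustrative, not the package's own). One caveat of the approach: comparing unpadded `%d` fields lexicographically would misorder indices once they reach two digits (e.g. "10" sorts before "2"), which is fine for small plans but worth keeping in mind.

```go
package main

import (
	"fmt"
	"sort"
)

// task mirrors the four fields plannedTask uses for ordering
// (names here are illustrative stand-ins).
type task struct {
	provisioningOrder int
	nodeName          string
	actionIndex       int
	taskIndex         int
}

// key reproduces the ExecutionOrder idea: encode all sort criteria
// into one string so lexicographical order gives the execution order.
func (t task) key() string {
	return fmt.Sprintf("%d - %s - %d - %d",
		t.provisioningOrder, t.nodeName, t.actionIndex, t.taskIndex)
}

// sortByKey orders tasks by machine first (provisioning order, then
// node name), then by action order, then by task order.
func sortByKey(plan []task) {
	sort.Slice(plan, func(i, j int) bool { return plan[i].key() < plan[j].key() })
}

func main() {
	plan := []task{
		{4, "worker1", 0, 1},
		{3, "control-plane1", 1, 0},
		{3, "control-plane1", 0, 0},
	}
	sortByKey(plan)
	for _, t := range plan {
		fmt.Println(t.key())
	}
}
```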


@ -0,0 +1,239 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"reflect"
"sort"
"testing"
"sigs.k8s.io/kind/pkg/cluster/constants"
)
func TestExecutionPlanSorting(t *testing.T) {
cases := []struct {
TestName string
actual executionPlan
expected executionPlan
}{
{
TestName: "ExecutionPlan is ordered by provisioning order as a first criteria",
actual: executionPlan{
&plannedTask{Node: newTestNode("worker2", constants.WorkerNodeRoleValue)},
&plannedTask{Node: newTestNode("control-plane2", constants.ControlPlaneNodeRoleValue)},
&plannedTask{Node: newTestNode("etcd", constants.ExternalEtcdNodeRoleValue)},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue)},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue)},
},
expected: executionPlan{
&plannedTask{Node: newTestNode("etcd", constants.ExternalEtcdNodeRoleValue)},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue)},
&plannedTask{Node: newTestNode("control-plane2", constants.ControlPlaneNodeRoleValue)},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue)},
&plannedTask{Node: newTestNode("worker2", constants.WorkerNodeRoleValue)},
},
},
{
TestName: "ExecutionPlan respects the given action order as a second criteria",
actual: executionPlan{
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 3},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 2},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1},
},
expected: executionPlan{
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 2},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 3},
},
},
{
TestName: "ExecutionPlan respects the predefined order for each action as a third criteria",
actual: executionPlan{
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1, taskIndex: 2},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1, taskIndex: 2},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1, taskIndex: 1},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1, taskIndex: 1},
},
expected: executionPlan{
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1, taskIndex: 1},
&plannedTask{Node: newTestNode("control-plane1", constants.ControlPlaneNodeRoleValue), actionIndex: 1, taskIndex: 2},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1, taskIndex: 1},
&plannedTask{Node: newTestNode("worker1", constants.WorkerNodeRoleValue), actionIndex: 1, taskIndex: 2},
},
},
}
for _, c := range cases {
t.Run(c.TestName, func(t2 *testing.T) {
// sorting planned task
sort.Sort(c.actual)
// checking planned tasks are properly sorted
if !reflect.DeepEqual(c.actual, c.expected) {
t2.Errorf("Expected planned tasks")
for _, m := range c.expected {
t2.Logf(" %s on %s, actionIndex %d taskIndex %d", m.Task.Description, m.Node.Name(), m.actionIndex, m.taskIndex)
}
t2.Log("Saw")
for _, m := range c.actual {
t2.Logf(" %s on %s, actionIndex %d taskIndex %d", m.Task.Description, m.Node.Name(), m.actionIndex, m.taskIndex)
}
}
})
}
}
// dummy action with single task targeting all nodes
type action0 struct{}
func newAction0() Action {
return &action0{}
}
func (b *action0) Tasks() []Task {
return []Task{
{
Description: "action0 - task 0/all",
TargetNodes: "@all",
},
}
}
// dummy action with single task targeting control-plane nodes
type action1 struct{}
func newAction1() Action {
return &action1{}
}
func (b *action1) Tasks() []Task {
return []Task{
{
Description: "action1 - task 0/control-planes",
TargetNodes: "@cp*",
},
}
}
// dummy action with multiple tasks each with different targets
type action2 struct{}
func newAction2() Action {
return &action2{}
}
func (b *action2) Tasks() []Task {
return []Task{
{
Description: "action2 - task 0/all",
TargetNodes: "@all",
},
{
Description: "action2 - task 1/control-planes",
TargetNodes: "@cp*",
},
{
Description: "action2 - task 2/workers",
TargetNodes: "@w*",
},
}
}
func TestNewExecutionPlan(t *testing.T) {
testTopology := newTestCluster("test", KNodes{
newTestNode("test-cp", constants.ControlPlaneNodeRoleValue),
newTestNode("test-w1", constants.WorkerNodeRoleValue),
newTestNode("test-w2", constants.WorkerNodeRoleValue),
})
RegisterAction("action0", newAction0) // Task 0 -> allMachines
RegisterAction("action1", newAction1) // Task 0 -> controlPlaneMachines
RegisterAction("action2", newAction2) // Task 0 -> allMachines, Task 1 -> controlPlaneMachines, Task 2 -> workerMachines
cases := []struct {
TestName string
Actions []string
ExpectedPlan []string
}{
{
TestName: "Action with task targeting all machines is planned",
Actions: []string{"action0"},
ExpectedPlan: []string{
"action0 - task 0/all on test-cp",
"action0 - task 0/all on test-w1",
"action0 - task 0/all on test-w2",
},
},
{
TestName: "Action with task targeting control-plane nodes is planned",
Actions: []string{"action1"},
ExpectedPlan: []string{
"action1 - task 0/control-planes on test-cp",
},
},
{
TestName: "Action with many task and targets is planned",
Actions: []string{"action2"},
ExpectedPlan: []string{ // tasks are grouped by machine/provision order and task order is preserved
"action2 - task 0/all on test-cp",
"action2 - task 1/control-planes on test-cp",
"action2 - task 0/all on test-w1",
"action2 - task 2/workers on test-w1",
"action2 - task 0/all on test-w2",
"action2 - task 2/workers on test-w2",
},
},
{
TestName: "Many actions are planned",
Actions: []string{"action0", "action1", "action2"},
ExpectedPlan: []string{ // tasks are grouped by machine/provision order and action order/task order is preserved
"action0 - task 0/all on test-cp",
"action1 - task 0/control-planes on test-cp",
"action2 - task 0/all on test-cp",
"action2 - task 1/control-planes on test-cp",
"action0 - task 0/all on test-w1",
"action2 - task 0/all on test-w1",
"action2 - task 2/workers on test-w1",
"action0 - task 0/all on test-w2",
"action2 - task 0/all on test-w2",
"action2 - task 2/workers on test-w2",
},
},
}
for _, c := range cases {
t.Run(c.TestName, func(t2 *testing.T) {
			// Creating the execution plan
			tasks, err := newExecutionPlan(testTopology, c.Actions)
			if err != nil {
				t2.Fatalf("unexpected error creating the execution plan: %v", err)
			}
			// Checking planned tasks are properly created (and sorted)
if len(tasks) != len(c.ExpectedPlan) {
t2.Fatalf("Invalid PlannedTask expected %d elements, saw %d", len(c.ExpectedPlan), len(tasks))
}
for i, mt := range tasks {
r := fmt.Sprintf("%s on %s", mt.Task.Description, mt.Node.Name())
if r != c.ExpectedPlan[i] {
t2.Errorf("Invalid PlannedTask %d expected %v, saw %v", i, c.ExpectedPlan[i], r)
}
}
})
}
}


@ -0,0 +1,375 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"strings"
"sigs.k8s.io/kind/pkg/cluster/constants"
"sigs.k8s.io/kind/pkg/cluster/nodes"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/version"
"sigs.k8s.io/kind/pkg/cluster"
"sigs.k8s.io/kind/pkg/exec"
)
// KContext is used to create / manipulate kubernetes-in-docker clusters
// See: NewContext()
type KContext struct {
*cluster.Context
kubernetesNodes KNodes
controlPlanes KNodes
workers KNodes
externalEtcd *KNode
externalLoadBalancer *KNode
}
// NewKContext returns a new cluster management context; the context
// is initialized by discovering the actual container nodes
func NewKContext(ctx *cluster.Context) (c *KContext, err error) {
c = &KContext{
Context: ctx,
}
nodes, err := ctx.ListNodes()
if err != nil {
return nil, err
}
for _, n := range nodes {
node, err := NewKNode(n)
if err != nil {
return nil, err
}
if err = c.add(node); err != nil {
return nil, err
}
}
// There should be at least one control plane
if c.BootStrapControlPlane() == nil {
return nil, errors.Errorf("please add at least one node with role %q", constants.ControlPlaneNodeRoleValue)
}
// There should be one load balancer if more than one control plane exists in the cluster
if len(c.ControlPlanes()) > 1 && c.ExternalLoadBalancer() == nil {
		return nil, errors.Errorf("please add a node with role %s because the cluster has more than one node with role %s",
			constants.ExternalLoadBalancerNodeRoleValue, constants.ControlPlaneNodeRoleValue)
}
return c, nil
}
// add a KNode to the KContext, filling the derived lists of KNodes by role
func (c *KContext) add(node *KNode) error {
if node.IsControlPlane() || node.IsWorker() {
c.kubernetesNodes = append(c.kubernetesNodes, node)
c.kubernetesNodes.Sort()
}
if node.IsControlPlane() {
c.controlPlanes = append(c.controlPlanes, node)
c.controlPlanes.Sort()
}
if node.IsWorker() {
c.workers = append(c.workers, node)
c.workers.Sort()
}
if node.IsExternalEtcd() {
if c.externalEtcd != nil {
return errors.Errorf("invalid config. there are two nodes with role %q", constants.ExternalEtcdNodeRoleValue)
}
c.externalEtcd = node
}
if node.IsExternalLoadBalancer() {
if c.externalLoadBalancer != nil {
return errors.Errorf("invalid config. there are two nodes with role %q", constants.ExternalLoadBalancerNodeRoleValue)
}
c.externalLoadBalancer = node
}
return nil
}
// CreateNode creates a new node of the given role
func (c *KContext) CreateNode(role string, image string) error {
clusterLabel := fmt.Sprintf("%s=%s", constants.ClusterLabelKey, c.Name())
switch role {
case constants.WorkerNodeRoleValue:
n := len(c.workers) + 1
name := fmt.Sprintf("%s-%s%d", c.Name(), role, n)
_, err := nodes.CreateWorkerNode(name, image, clusterLabel, nil)
if err != nil {
return errors.Wrap(err, "failed to create worker node")
}
return nil
case constants.ControlPlaneNodeRoleValue:
		// this is currently super hacky, looking for a better solution
		if c.externalLoadBalancer == nil {
			return errors.Errorf("unable to create a new control plane node in a cluster without a load balancer")
}
n := len(c.controlPlanes) + 1
name := fmt.Sprintf("%s-%s%d", c.Name(), role, n)
node, err := nodes.CreateControlPlaneNode(name, image, clusterLabel, "127.0.0.1", 6443, nil)
if err != nil {
return errors.Wrap(err, "failed to create control-plane node")
}
ip, err := node.IP()
if err != nil {
return errors.Wrap(err, "failed to get new control-plane node ip")
}
		if err := c.ExternalLoadBalancer().Command("/bin/bash", "-c",
			fmt.Sprintf("echo \" server %s %s:6443 check\" >> /kind/haproxy.cfg", name, ip),
).Run(); err != nil {
return errors.Wrap(err, "failed to update load balancer config")
}
		if err := c.ExternalLoadBalancer().Command("docker", "kill", "-s", "HUP", "haproxy").Run(); err != nil { // this assumes haproxy having a well-known name
return errors.Wrap(err, "failed to reload load balancer config")
}
return nil
}
return errors.Errorf("creation of new %s nodes not supported", role)
}
// KubernetesNodes returns all the Kubernetes nodes in the cluster
func (c *KContext) KubernetesNodes() KNodes {
return c.kubernetesNodes
}
// ControlPlanes returns all the nodes with control-plane role
func (c *KContext) ControlPlanes() KNodes {
return c.controlPlanes
}
// BootStrapControlPlane returns the first node with control-plane role
// This is the node where kubeadm init will be executed.
func (c *KContext) BootStrapControlPlane() *KNode {
if len(c.controlPlanes) == 0 {
return nil
}
return c.controlPlanes[0]
}
// SecondaryControlPlanes returns all the nodes with control-plane role
// except the BootStrapControlPlane node, if any.
func (c *KContext) SecondaryControlPlanes() KNodes {
if len(c.controlPlanes) <= 1 {
return nil
}
return c.controlPlanes[1:]
}
// Workers returns all the nodes with Worker role, if any
func (c *KContext) Workers() KNodes {
return c.workers
}
// ExternalEtcd returns the node with external-etcd role, if defined
func (c *KContext) ExternalEtcd() *KNode {
return c.externalEtcd
}
// ExternalLoadBalancer returns the node with external-load-balancer role, if defined
func (c *KContext) ExternalLoadBalancer() *KNode {
return c.externalLoadBalancer
}
// TODO: refactor how we are exposing these flags
// ActionFlags groups the options that influence the execution of actions
type ActionFlags struct {
UsePhases bool
UpgradeVersion *version.Version
CopyCerts bool
}
// Do executes actions on the kubernetes-in-docker cluster.
// Actions are repetitive, high-level abstractions/workflows composed
// of one or more lower-level tasks that automatically adapt to the
// current cluster topology
func (c *KContext) Do(actions []string, flags ActionFlags, onlyNode string) error {
// Create an ExecutionPlan that applies the given actions to the
// topology defined in the config
executionPlan, err := newExecutionPlan(c, actions)
if err != nil {
return err
}
	// Execute all the selected actions
for _, plannedTask := range executionPlan {
if onlyNode != "" {
onlyNodeName := fmt.Sprintf("%s-%s", c.Name(), onlyNode)
if !strings.EqualFold(onlyNodeName, plannedTask.Node.Name()) {
continue
}
}
fmt.Printf("[%s] %s\n\n", plannedTask.Node.Name(), plannedTask.Task.Description)
err := plannedTask.Task.Run(c, plannedTask.Node, flags)
if err != nil {
// in case of error, the execution plan is halted
log.Error(err)
return err
}
}
return nil
}
// Exec is a topology-aware wrapper of docker exec
func (c *KContext) Exec(nodeSelector string, args []string) error {
nodes, err := c.selectNodes(nodeSelector)
if err != nil {
return err
}
log.Infof("%d nodes selected as target for the command", len(nodes))
for _, node := range nodes {
fmt.Printf("🚀 Executing command on node %s 🚀\n", node.Name())
cmdArgs := append([]string{"exec",
node.Name(),
}, args...)
cmd := exec.Command("docker", cmdArgs...)
exec.InheritOutput(cmd)
err := cmd.Run()
if err != nil {
return errors.Wrapf(err, "failed to execute command on node %s", node.Name())
}
}
return nil
}
// Copy is a topology-aware wrapper of docker cp
func (c *KContext) Copy(source, target string) error {
	sourceNodes, sourcePath, err := c.resolveNodesPath(source)
	if err != nil {
		return err
	}
	targetNodes, targetPath, err := c.resolveNodesPath(target)
	if err != nil {
		return err
	}
	if sourceNodes == nil && targetNodes == nil {
		return errors.Errorf("at least one of source and target must be a node/nodes in the cluster")
	}
	if sourceNodes != nil {
		switch len(sourceNodes) {
		case 1:
			break // one source node selected: continue
		case 0:
			return errors.Errorf("no source node matches given criteria")
		default:
			return errors.Errorf("source can't be more than one node")
		}
	}
	if targetNodes != nil && len(targetNodes) == 0 {
		return errors.Errorf("no target node matches given criteria")
	}
	if sourceNodes != nil && targetNodes != nil {
		// create tmp folder
		// cp locally
		return errors.Errorf("copy between nodes not implemented yet")
	}
	if targetNodes == nil {
		fmt.Printf("Copying from %s ...\n", sourceNodes[0].Name())
		sourceNodes[0].CopyFrom(sourcePath, targetPath)
	}
	for _, n := range targetNodes {
		fmt.Printf("Copying to %s ...\n", n.Name())
		n.CopyTo(sourcePath, targetPath)
	}
	return nil
}
// resolveNodesPath takes a "topology aware" path and resolves it to one (or more) real paths
func (c *KContext) resolveNodesPath(nodesPath string) (nodes KNodes, path string, err error) {
t := strings.Split(nodesPath, ":")
switch len(t) {
case 1:
nodes = nil
path = t[0]
case 2:
nodes, err = c.selectNodes(t[0])
if err != nil {
return nil, "", err
}
path = t[1]
default:
return nil, "", errors.Errorf("invalid nodesPath %q", nodesPath)
}
return nodes, path, nil
}
// selectNodes returns KNodes according to the given selector
func (c *KContext) selectNodes(nodeSelector string) (nodes KNodes, err error) {
if strings.HasPrefix(nodeSelector, "@") {
switch strings.ToLower(nodeSelector) {
case "@all": // all the kubernetes nodes
return c.KubernetesNodes(), nil
case "@cp*": // all the control-plane nodes
return c.ControlPlanes(), nil
case "@cp1": // the bootstrap-control plane
return toKNodes(c.BootStrapControlPlane()), nil
case "@cpn":
return c.SecondaryControlPlanes(), nil
case "@w*":
return c.Workers(), nil
case "@lb":
return toKNodes(c.ExternalLoadBalancer()), nil
case "@etcd":
return toKNodes(c.ExternalEtcd()), nil
default:
return nil, errors.Errorf("Invalid node selector %q. Use one of [@all, @cp*, @cp1, @cpn, @w*, @lb, @etcd]", nodeSelector)
}
}
nodeName := fmt.Sprintf("%s-%s", c.Name(), nodeSelector)
for _, n := range c.KubernetesNodes() {
if strings.EqualFold(nodeName, n.Name()) {
return toKNodes(n), nil
}
}
return nil, nil
}
func toKNodes(node *KNode) KNodes {
if node != nil {
return KNodes{node}
}
return nil
}
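The "topology aware" path convention handled by `resolveNodesPath` above can be sketched in isolation; the function name and return shape below are illustrative, not the package's own. A design consequence of splitting on `:` is that paths containing a colon are rejected as invalid.

```go
package main

import (
	"fmt"
	"strings"
)

// splitNodesPath mirrors resolveNodesPath's parsing rules: a bare
// "path" targets the host, "selector:path" targets cluster nodes,
// and anything with more than one ":" is invalid.
func splitNodesPath(nodesPath string) (selector, path string, err error) {
	t := strings.Split(nodesPath, ":")
	switch len(t) {
	case 1:
		return "", t[0], nil
	case 2:
		return t[0], t[1], nil
	default:
		return "", "", fmt.Errorf("invalid nodesPath %q", nodesPath)
	}
}

func main() {
	// node selector plus path: both parts are returned
	sel, path, err := splitNodesPath("@all:/etc/kubernetes")
	fmt.Println(sel, path, err)
}
```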


@ -0,0 +1,277 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"strings"
"testing"
"sigs.k8s.io/kind/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster/constants"
)
func newTestNode(name string, role string) *KNode {
return &KNode{name: name, role: role}
}
func (ns *KNodes) Names() string {
var s = []string{}
for _, n := range *ns {
s = append(s, n.Name())
}
return fmt.Sprintf("[%s]", strings.Join(s, ", "))
}
var defaultNodes = KNodes{
newTestNode("test-cp1", constants.ControlPlaneNodeRoleValue),
}
var haNodes = KNodes{
newTestNode("test-lb", constants.ExternalLoadBalancerNodeRoleValue),
newTestNode("test-cp1", constants.ControlPlaneNodeRoleValue),
newTestNode("test-cp2", constants.ControlPlaneNodeRoleValue),
newTestNode("test-cp3", constants.ControlPlaneNodeRoleValue),
newTestNode("test-w1", constants.WorkerNodeRoleValue),
newTestNode("test-w2", constants.WorkerNodeRoleValue),
}
func newTestCluster(name string, nodes KNodes) (c *KContext) {
c = &KContext{
Context: cluster.NewContext(name),
}
for _, n := range nodes {
c.add(n)
}
return c
}
func TestSelectNodes(t *testing.T) {
cases := []struct {
TestName string
Nodes KNodes
NodeSelector string
ExpectNodes string
ExpectError bool
}{
{
TestName: "all on default cluster",
Nodes: defaultNodes,
NodeSelector: "@all",
ExpectNodes: "[test-cp1]",
},
{
TestName: "lb selector on default cluster",
Nodes: defaultNodes,
NodeSelector: "@lb",
ExpectNodes: "[]",
},
{
TestName: "cp* selector on default cluster",
Nodes: defaultNodes,
NodeSelector: "@cp*",
ExpectNodes: "[test-cp1]",
},
{
TestName: "cp1 selector on default cluster",
Nodes: defaultNodes,
NodeSelector: "@cp1",
ExpectNodes: "[test-cp1]",
},
{
TestName: "cpN selector on default cluster",
Nodes: defaultNodes,
NodeSelector: "@cpN",
ExpectNodes: "[]",
},
{
TestName: "w* selector on default cluster",
Nodes: defaultNodes,
NodeSelector: "@w*",
ExpectNodes: "[]",
},
{
TestName: "select by node name on default cluster",
Nodes: defaultNodes,
NodeSelector: "cp1",
ExpectNodes: "[test-cp1]",
},
{
TestName: "all on ha cluster",
Nodes: haNodes,
NodeSelector: "@all",
ExpectNodes: "[test-cp1, test-cp2, test-cp3, test-w1, test-w2]",
},
{
TestName: "lb selector on ha cluster",
Nodes: haNodes,
NodeSelector: "@lb",
ExpectNodes: "[test-lb]",
},
{
TestName: "cp* selector on ha cluster",
Nodes: haNodes,
NodeSelector: "@cp*",
ExpectNodes: "[test-cp1, test-cp2, test-cp3]",
},
{
TestName: "cp1 selector on ha cluster",
Nodes: haNodes,
NodeSelector: "@cp1",
ExpectNodes: "[test-cp1]",
},
{
TestName: "cpN selector on ha cluster",
Nodes: haNodes,
NodeSelector: "@cpN",
ExpectNodes: "[test-cp2, test-cp3]",
},
{
TestName: "w* selector on ha cluster",
Nodes: haNodes,
NodeSelector: "@w*",
ExpectNodes: "[test-w1, test-w2]",
},
{
TestName: "select by node name on ha cluster",
Nodes: haNodes,
NodeSelector: "cp1",
ExpectNodes: "[test-cp1]",
},
{
TestName: "node selectors are case insensitive",
Nodes: haNodes,
NodeSelector: "@ALL",
ExpectNodes: "[test-cp1, test-cp2, test-cp3, test-w1, test-w2]",
},
{
TestName: "invalid selector",
Nodes: defaultNodes,
NodeSelector: "@invalid",
ExpectError: true,
},
{
TestName: "node does not exist",
Nodes: defaultNodes,
ExpectNodes: "[]",
},
}
for _, c := range cases {
t.Run(c.TestName, func(t *testing.T) {
var testCluster = newTestCluster("test", c.Nodes)
n, err := testCluster.selectNodes(c.NodeSelector)
			// if the error is not nil, the test should expect it (or fail)
			if err != nil {
				if !c.ExpectError {
					t.Fatalf("unexpected error while selecting nodes: %v", err)
				}
				return
			}
			// if the error is nil, the test should not expect one (or fail)
			if c.ExpectError {
				t.Fatalf("unexpected lack of error while selecting nodes")
			}
if n.Names() != c.ExpectNodes {
t.Errorf("saw %q as nodes, expected %q", n.Names(), c.ExpectNodes)
}
})
}
}
func TestResolveNodesPath(t *testing.T) {
var testCluster = newTestCluster("test", haNodes)
cases := []struct {
TestName string
NodesPath string
ExpectNodes string
ExpectPath string
ExpectError bool
}{
{
TestName: "path without node (local path)",
NodesPath: "path",
ExpectNodes: "[]",
ExpectPath: "path",
},
{
TestName: "nodeSelector path",
NodesPath: "@all:path",
ExpectNodes: "[test-cp1, test-cp2, test-cp3, test-w1, test-w2]",
ExpectPath: "path",
},
{
TestName: "nodeName path",
NodesPath: "cp1:path",
ExpectNodes: "[test-cp1]",
ExpectPath: "path",
},
{
TestName: "node-that-does-not-exists path",
NodesPath: "node-that-does-not-exists:path",
ExpectNodes: "[]",
ExpectPath: "path",
},
{
TestName: "invalid node path",
NodesPath: "@all:path:invalid",
ExpectError: true,
},
{
TestName: "invalid-node selector path",
NodesPath: "@invalid:path",
ExpectError: true,
},
}
for _, c := range cases {
t.Run(c.TestName, func(t *testing.T) {
nodes, path, err := testCluster.resolveNodesPath(c.NodesPath)
			// if the error is not nil, the test should expect it (or fail)
			if err != nil {
				if !c.ExpectError {
					t.Fatalf("unexpected error while resolving nodes and path: %v", err)
				}
				return
			}
			// if the error is nil, the test should not expect one (or fail)
			if c.ExpectError {
				t.Fatalf("unexpected lack of error while resolving nodes and path")
			}
if nodes.Names() != c.ExpectNodes {
t.Errorf("saw %q as nodes, expected %q", nodes.Names(), c.ExpectNodes)
}
if path != c.ExpectPath {
t.Errorf("saw %q as path, expected %q", path, c.ExpectPath)
}
})
}
}


@ -0,0 +1,68 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"github.com/pkg/errors"
"sigs.k8s.io/kind/pkg/cluster/constants"
"sigs.k8s.io/kind/pkg/cluster/nodes"
"sigs.k8s.io/kind/pkg/container/docker"
)
// CreateExternalEtcd creates a docker container mocking a kind external etcd node
// this is temporary and should go away as soon as kind supports external etcd nodes
func CreateExternalEtcd(name string) (ip string, err error) {
// define name and labels mocking a kind external etcd node
containerName := fmt.Sprintf("%s-%s", name, constants.ExternalEtcdNodeRoleValue)
runArgs := []string{
"-d", // run the container detached
"--hostname", containerName, // make hostname match container name
"--name", containerName, // ... and set the container name
// label the node with the cluster ID
"--label", fmt.Sprintf("%s=%s", constants.ClusterLabelKey, name),
// label the node with the role ID
"--label", fmt.Sprintf("%s=%s", constants.NodeRoleKey, constants.ExternalEtcdNodeRoleValue),
}
// define a minimal etcd (insecure, single node, not exposed to the host machine)
containerArgs := []string{
"etcd",
"--name", fmt.Sprintf("%s-etcd", name),
"--advertise-client-urls", "http://127.0.0.1:2379",
"--listen-client-urls", "http://0.0.0.0:2379",
}
_, err = docker.Run(
"k8s.gcr.io/etcd:3.2.24",
docker.WithRunArgs(runArgs...),
docker.WithContainerArgs(containerArgs...),
)
if err != nil {
return "", errors.Wrap(err, "failed to create external etcd container")
}
kn, err := NewKNode(*nodes.FromName(containerName))
if err != nil {
return "", errors.Wrap(err, "failed to create external etcd node")
}
return kn.IP()
}
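The container naming and labeling performed by `CreateExternalEtcd` can be sketched without invoking docker. In the sketch below the label keys are passed in as parameters and the role value is an assumed placeholder, rather than the real kind constants:

```go
package main

import "fmt"

// buildEtcdRunArgs sketches how CreateExternalEtcd composes the mock
// etcd container name and `docker run` arguments.
func buildEtcdRunArgs(clusterName, clusterLabelKey, nodeRoleKey string) (string, []string) {
	const etcdRole = "external-etcd" // assumed role value
	containerName := fmt.Sprintf("%s-%s", clusterName, etcdRole)
	runArgs := []string{
		"-d", // run the container detached
		"--hostname", containerName, // make hostname match container name
		"--name", containerName, // ... and set the container name
		"--label", fmt.Sprintf("%s=%s", clusterLabelKey, clusterName), // cluster ID
		"--label", fmt.Sprintf("%s=%s", nodeRoleKey, etcdRole), // node role
	}
	return containerName, runArgs
}

func main() {
	name, args := buildEtcdRunArgs("test", "clusterKey", "roleKey")
	fmt.Println(name, len(args))
}
```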

162
kinder/pkg/cluster/node.go Normal file

@ -0,0 +1,162 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cluster
import (
"fmt"
"sort"
"strings"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/util/version"
"sigs.k8s.io/kind/pkg/cluster/constants"
"sigs.k8s.io/kind/pkg/cluster/nodes"
"sigs.k8s.io/kind/pkg/exec"
)
// KNode implements a test-friendly wrapper on nodes.Node
type KNode struct {
	nodes.Node
	// local properties used to avoid access to the real nodes.Node during tests
	// TODO: move local properties in test file
name string
role string
}
// NewKNode returns a new nodes.Node wrapper
func NewKNode(node nodes.Node) (n *KNode, err error) {
_, err = node.Role()
if err != nil {
return nil, err
}
return &KNode{Node: node}, nil
}
// Name returns the name of the node
func (n *KNode) Name() string {
	// if the local name is set, use it to avoid access to the real nodes.Node during tests
if n.name != "" {
return n.name
}
return n.Node.String()
}
// Role returns the role of the node
func (n *KNode) Role() string {
	// if the local role is set, use it to avoid access to the real nodes.Node during tests
if n.role != "" {
return n.role
}
role, _ := n.Node.Role()
return role
}
// IsControlPlane returns true if the node hosts a control plane instance
// NB. in single-node clusters, control-plane nodes also act as worker nodes
func (n *KNode) IsControlPlane() bool {
return n.Role() == constants.ControlPlaneNodeRoleValue
}
// IsWorker returns true if the node hosts a worker instance
func (n *KNode) IsWorker() bool {
return n.Role() == constants.WorkerNodeRoleValue
}
// IsExternalEtcd returns true if the node hosts an external etcd member
func (n *KNode) IsExternalEtcd() bool {
return n.Role() == constants.ExternalEtcdNodeRoleValue
}
// IsExternalLoadBalancer returns true if the node hosts an external load balancer
func (n *KNode) IsExternalLoadBalancer() bool {
return n.Role() == constants.ExternalLoadBalancerNodeRoleValue
}
// ProvisioningOrder returns the provisioning order for nodes, that
// should be defined according to the assigned NodeRole
func (n *KNode) ProvisioningOrder() int {
switch n.Role() {
// External dependencies should be provisioned first; we are defining an arbitrary
// precedence between etcd and load balancer in order to get predictable/repeatable results
case constants.ExternalEtcdNodeRoleValue:
return 1
case constants.ExternalLoadBalancerNodeRoleValue:
return 2
// Then control plane nodes
case constants.ControlPlaneNodeRoleValue:
return 3
// Finally workers
case constants.WorkerNodeRoleValue:
return 4
default:
return 99
}
}
// DebugCmd executes a command on a node and prints the command output on the screen
func (n *KNode) DebugCmd(message string, command string, args ...string) error {
fmt.Println(message)
fmt.Println()
fmt.Printf("%s %s\n\n", command, strings.Join(args, " "))
cmd := n.Command(command, args...)
exec.InheritOutput(cmd)
if err := cmd.Run(); err != nil {
return errors.Wrapf(err, "Error executing %s", message)
}
fmt.Println()
return nil
}
func (n *KNode) CombinedOutputLines(command string, args ...string) (lines []string, err error) {
cmd := n.Command(command, args...)
return exec.CombinedOutputLines(cmd)
}
// KubeadmVersion returns the kubeadm version installed on the node
func (n *KNode) KubeadmVersion() (*version.Version, error) {
// NB. we are not caching version, because it can change e.g. after upgrades
cmd := n.Command("kubeadm", "version", "-o=short")
lines, err := exec.CombinedOutputLines(cmd)
if err != nil {
return nil, errors.Wrap(err, "failed to get kubeadm version")
}
if len(lines) != 1 {
return nil, errors.Errorf("kubeadm version should only be one line, got %d lines", len(lines))
}
kubeadmVersion, err := version.ParseSemantic(lines[0])
if err != nil {
return nil, errors.Wrapf(err, "%q is not a valid kubeadm version", lines[0])
}
return kubeadmVersion, nil
}
// KNodes defines a list of nodes.Node wrapper
type KNodes []*KNode
// Sort the list of nodes.Node wrapper by node provisioning order and by name
func (l KNodes) Sort() {
sort.Slice(l, func(i, j int) bool {
return l[i].ProvisioningOrder() < l[j].ProvisioningOrder() ||
(l[i].ProvisioningOrder() == l[j].ProvisioningOrder() && l[i].Name() < l[j].Name())
})
}


@@ -11,23 +11,23 @@ High level goals for kinder v0.1 include:
- [ ] Provide a local test environment for kubeadm development
- [x] Allow creation of nodes "ready for installing Kubernetes"
- [x] Provide pre built “developer” workflows for kubedam init, join, reset
- [x] Provide pre built developer-workflows for kubeadm init, join, reset
- [x] init and init with phases
- [x] join and join with phases
- [x] init and join with automatic copy certs
- [x] Provide pre built “developer” workflow for kubeadm upgrades
- [x] Provide pre built developer-workflow for kubeadm upgrades
- [x] reset
- [x] Allow build of node-image variants
- [x] add pre-loaded images to a node-image
- [x] replace the kubeadm binary into a node-image
- [x] add kubernetes binaries for a second kubernetes versions (target for upgrades)
- [x] add Kubernetes binaries for a second Kubernetes version (target for upgrades)
- [x] Allow test of kubeadm cluster variations
- [x] external etcd
- [x] kube-dns
- [x] Provide "topology aware" wrappers for `docker exec` and `docker cp`
- [x] Provide a way to add nodes to an existing cluster
- [x] Add worker node
- [x] Add control plane node (and reconfigure load balancer)
- [ ] Provide a way to add nodes to an existing cluster
- [ ] Add worker node
- [ ] Add control plane node (and reconfigure load balancer)
- [x] Provide smoke test action
- [ ] Provide E2E run action(s)