Added code to enable nodeup and protokube building and execution for vSphere VM (#11)

* Added code to enable nodeup and protokube building and execution for vSphere VM.

* Fixed nodeup template for vSphere.
This commit is contained in:
prashima 2017-03-27 13:51:30 -07:00 committed by Miao Luo
parent 3075258ca3
commit bc3e8c3734
9 changed files with 306 additions and 74 deletions


@@ -158,6 +158,27 @@ version-dist: nodeup-dist kops-dist protokube-export utils-dist
cp .build/dist/linux/amd64/utils.tar.gz .build/upload/kops/${VERSION}/linux/amd64/utils.tar.gz
cp .build/dist/linux/amd64/utils.tar.gz.sha1 .build/upload/kops/${VERSION}/linux/amd64/utils.tar.gz.sha1
vsphere-setup:
hack/vsphere/vsphere_env.sh --set
vsphere-version-dist: vsphere-setup nodeup-dist protokube-export
rm -rf .build/upload
mkdir -p .build/upload/kops/${VERSION}/linux/amd64/
mkdir -p .build/upload/kops/${VERSION}/darwin/amd64/
mkdir -p .build/upload/kops/${VERSION}/images/
mkdir -p .build/upload/utils/${VERSION}/linux/amd64/
cp .build/dist/nodeup .build/upload/kops/${VERSION}/linux/amd64/nodeup
cp .build/dist/nodeup.sha1 .build/upload/kops/${VERSION}/linux/amd64/nodeup.sha1
cp .build/dist/images/protokube.tar.gz .build/upload/kops/${VERSION}/images/protokube.tar.gz
cp .build/dist/images/protokube.tar.gz.sha1 .build/upload/kops/${VERSION}/images/protokube.tar.gz.sha1
scp -r .build/dist/nodeup* ${TARGET}:${TARGET_PATH}/nodeup
scp -r .build/dist/images/protokube.tar.gz* ${TARGET}:${TARGET_PATH}/protokube/
make kops-dist
cp .build/dist/linux/amd64/kops .build/upload/kops/${VERSION}/linux/amd64/kops
cp .build/dist/linux/amd64/kops.sha1 .build/upload/kops/${VERSION}/linux/amd64/kops.sha1
cp .build/dist/darwin/amd64/kops .build/upload/kops/${VERSION}/darwin/amd64/kops
cp .build/dist/darwin/amd64/kops.sha1 .build/upload/kops/${VERSION}/darwin/amd64/kops.sha1
upload: kops version-dist
aws s3 sync --acl public-read .build/upload/ ${S3_BUCKET}
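A usage sketch for the new vsphere-version-dist target (the host and path are hypothetical; in practice TARGET and TARGET_PATH come from hack/vsphere/vsphere_env.sh):
```bash
# Build nodeup and protokube, then copy them to a web-accessible host
# so NODEUP_URL and PROTOKUBE_IMAGE can point at them.
make vsphere-version-dist TARGET=jdoe@build-host.example.com TARGET_PATH=/var/www/html/kops/
```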


@@ -67,79 +67,38 @@ make
kops create cluster ...
```
## Hacks
## Kops with vSphere
vSphere cloud provider support in kops is a work in progress. To try deploying a Kubernetes cluster on vSphere using kops, some extra steps are required.
### Nodeup and protokube testing
This section describes how to test nodeup and protokube changes on a standalone VM running on a standalone ESX host or vSphere.
### Pre-requisites
+ vSphere with at least one ESX host that has sufficient free disk space on its attached datastore. ESX VMs should have internet connectivity.
+ Set up DNS following the steps given in the relevant section above.
+ Create the VM using this template (TBD).
+ Currently the vSphere code uses AWS S3 to store all configurations, specs, addon YAMLs, etc. You need valid AWS credentials to try out kops on vSphere. The s3://your-objectstore/cluster1.skydns.local folder will hold all configuration, specs, addons, etc., required to configure the Kubernetes cluster. (If you don't know how to set up AWS, read more about kops and how to deploy a cluster using kops on AWS.)
+ Update ```[kops_dir]/hack/vsphere/vsphere_env.sh```, setting the necessary environment variables.
#### Pre-requisites
The following manual steps are prerequisites for this testing, until kops' vSphere support can create this infrastructure itself.
### Building
Execute the following command to build all components required to run kops for vSphere:
```bash
make vsphere-version-dist
```
+ Set up password-free SSH to the VM:
```bash
cat ~/.ssh/id_rsa.pub | ssh <username>@<vm_ip> 'cat >> .ssh/authorized_keys'
```
+ The nodeup configuration file needs to be present on the VM. It can be copied from an existing AWS-created master (or worker, whichever you are testing) at /var/cache/kubernetes-install/kube_env.yaml on your existing cluster node; see the sketch after the sample below. Sample nodeup configuration file:
```yaml
Assets:
- 5e486d4a2700a3a61c4edfd97fb088984a7f734f@https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubelet
- 10e675883b167140f78ddf7ed92f936dca291647@https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
- 19d49f7b2b99cd2493d5ae0ace896c64e289ccbb@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz
ClusterName: cluster3.mangoreviews.com
ConfigBase: s3://your-objectstore/cluster1.yourdomain.com
InstanceGroupName: master-us-west-2a
Tags:
- _automatic_upgrades
- _aws
- _cni_bridge
- _cni_host_local
- _cni_loopback
- _cni_ptp
- _kubernetes_master
- _kubernetes_pool
- _protokube
channels:
- s3://your-objectstore/cluster1.yourdomain.com/addons/bootstrap-channel.yaml
protokubeImage:
  hash: 6805cba0ea13805b2fa439914679a083be7ac959
  name: protokube:1.5.1
  source: https://kubeupv2.s3.amazonaws.com/kops/1.5.1/images/protokube.tar.gz
```
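A minimal sketch of that copy step, assuming SSH access to both machines (placeholders as above; adjust the SSH user for your image):
```bash
# Pull the nodeup configuration off an existing AWS-created cluster node...
scp admin@<aws-node-ip>:/var/cache/kubernetes-install/kube_env.yaml .
# ...and push it to the test VM.
scp kube_env.yaml <username>@<vm_ip>:~/kube_env.yaml
```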
+ Currently the vSphere code uses AWS S3 to store all configurations, specs, etc. You need valid AWS credentials.
+ The s3://your-objectstore/cluster1.yourdomain.com folder should contain all necessary configuration, specs, addons, etc. (If you don't know how to get this, read more about kops and how to deploy a cluster using kops.)
Currently vSphere support is not part of any kops release, so all modified components (kops, nodeup, protokube) need to be built at least once. ```make vsphere-version-dist``` will do that and copy the protokube image and nodeup binary to the target location you specified in ```vsphere_env.sh```. dns-controller has also been modified to support vSphere. You can continue to use ```export VSPHERE_DNSCONTROLLER_IMAGE=luomiao/dns-controller```, unless you are making changes to dns-controller and would like to use your own custom image.
#### Testing your changes
Once you have made your changes to the nodeup and protokube code, you will want to test them on a VM. To do so, you need to build the nodeup binary and copy it to the desired VM. You will also want to modify the nodeup code so that it pulls the protokube container image containing your changes. All of this can be done by setting a few environment variables, making minor code updates, and running 'make push-vsphere'.
### Creating cluster
Execute the following command to create a Kubernetes cluster on vSphere using kops:
+ Create a 'protokube' repo for your custom image in a new or existing Docker Hub registry. Update the registry details in the Makefile by modifying the DOCKER_REGISTRY variable. Don't forget to run 'docker login' once with your registry credentials.
+ Export the TARGET environment variable, setting its value to username@vm_ip of your test VM (see the sketch after these steps).
+ Update $KOPS_DIR/upup/models/nodeup/_protokube/services/protokube.service.template:
```
ExecStart=/usr/bin/docker run -v /:/rootfs/ -v /var/run/dbus:/var/run/dbus -v /run/systemd:/run/systemd --net=host --privileged -e AWS_ACCESS_KEY_ID='something' -e AWS_SECRET_ACCESS_KEY='something' <your-registry>/protokube:<image-tag> /usr/bin/protokube "$DAEMON_ARGS"
```
+ Run 'make push-vsphere'. This builds the nodeup binary, scps it to your test VM, builds the protokube image, and uploads the image to your registry.
+ SSH to your test VM and set the following environment variables:
```bash
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
```
+ Run './nodeup --conf kube_env.yaml' to test your custom-built nodeup and protokube.
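Condensed, the build-and-push loop above might look like this (a sketch; the VM address is hypothetical):
```bash
export TARGET=ubuntu@<vm_ip>   # test VM that receives the nodeup binary
docker login                   # once, with your registry credentials
make push-vsphere              # build nodeup, scp it to ${TARGET}, build and push protokube
```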
```bash
.build/dist/darwin/amd64/kops create cluster --cloud=vsphere --name=yourcluster.skydns.local --zones=us-west-2a --vsphere-server=<vsphere-server-ip> --vsphere-datacenter=<datacenter-name> --vsphere-resource-pool=<cluster-name> --vsphere-datastore=<datastore-name> --dns=private --vsphere-coredns-server=http://<dns-server-ip>:2379 --dns-zone=skydns.local --image=<template-vm-name> --yes
```
**Tip:** Consider adding the following code to $KOPS_DIR/upup/pkg/fi/nodeup/nodetasks/load_image.go to avoid downloading the protokube image. Your custom image will be pulled directly when systemd runs protokube.service (because of the changes made in protokube.service.template).
```go
// Add this after the url variable has been populated.
// Requires "strings" and "fmt" in the file's imports.
if strings.Contains(url, "protokube") {
	fmt.Println("Skipping protokube image download and loading.")
	return nil
}
```
Use .build/dist/linux/amd64/kops instead if you are working on a Linux machine rather than a Mac.
### Deleting cluster
Cluster deletion hasn't been fully implemented yet, so for now you will have to delete the vSphere VMs manually.
**Note:** The same testing can also be done using the following alternate steps (currently _not working_ due to a hash mismatch failure):
+ Run 'make protokube-export' and 'make nodeup' to build and export the protokube image as a tar.gz and to build the nodeup binary, located at $KOPS_DIR/.build/dist/images/protokube.tar.gz and $KOPS_DIR/.build/dist/nodeup, respectively.
+ Copy the nodeup binary to the test VM.
+ Upload $KOPS_DIR/.build/dist/images/protokube.tar.gz and $KOPS_DIR/.build/dist/images/protokube.tar.gz.sha1, with appropriate permissions, to a location accessible from the test VM, e.g. your development machine's public_html if you are working on a Linux machine (see the sketch after these steps).
+ In kube_env.yaml, update the hash value to the contents of protokube.tar.gz.sha1 and the source to the uploaded location (see the prerequisite steps).
+ SSH to your test VM, set the necessary environment variables, and run './nodeup --conf kube_env.yaml'.
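For the upload step above, a minimal sketch on a Linux development machine (paths are hypothetical):
```bash
# Publish the protokube archive and its checksum where the test VM can fetch them over HTTP.
cp .build/dist/images/protokube.tar.gz* ~/public_html/kops/
chmod a+r ~/public_html/kops/protokube.tar.gz*
```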
Configuration and spec data can be removed from S3 using the following command:
```bash
.build/dist/darwin/amd64/kops delete cluster yourcluster.skydns.local --yes
```

hack/vsphere/vsphere_env.sh (new executable file, 98 lines)

@@ -0,0 +1,98 @@
#!/usr/bin/env bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -ne 1 ]; then
echo "Usage: vsphere_env [options]"
echo "Options:"
echo -e "\t -s, --set \t Set environment variables."
echo -e "\t -u, --unset \t Unset environment variables."
exit 1
fi
option="$1"
flag=0
case $option in
-s | --set)
# If set, coredns will be used for vsphere cloud provider.
export VSPHERE_DNS=coredns
# If set, this dns controller image will be used.
# Leave this value unmodified if you are not building a new dns-controller image.
export VSPHERE_DNSCONTROLLER_IMAGE=luomiao/dns-controller
# S3 bucket that kops should use.
export KOPS_STATE_STORE=s3://your-obj-store
# AWS credentials
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
# vSphere credentials
export VSPHERE_USERNAME=administrator@vsphere.local
export VSPHERE_PASSWORD=Admin!23
# Set TARGET and TARGET_PATH to values where you want nodeup and protokube binaries to get copied.
# This should be same location as set for NODEUP_URL and PROTOKUBE_IMAGE.
export TARGET=jdoe@pa-dbc1131.eng.vmware.com
export TARGET_PATH=/dbc/pa-dbc1131/jdoe/misc/kops/
export NODEUP_URL=http://pa-dbc1131.eng.vmware.com/jdoe/misc/kops/nodeup/nodeup
export PROTOKUBE_IMAGE=http://pa-dbc1131.eng.vmware.com/jdoe/misc/kops/protokube/protokube.tar.gz
flag=1
;;
-u | --unset)
export VSPHERE_DNS=
export VSPHERE_DNSCONTROLLER_IMAGE=
export KOPS_STATE_STORE=
export AWS_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export VSPHERE_USERNAME=
export VSPHERE_PASSWORD=
export TARGET=
export TARGET_PATH=
export NODEUP_URL=
export PROTOKUBE_IMAGE=
flag=1
;;
--default)
echo "Usage: vsphere_env [options]"
echo "Options:"
echo -e "\t -s, --set \t Set environment variables."
echo -e "\t -u, --unset \t Unset environment variables."
exit 1
;;
*)
esac
if [[ $flag -ne 0 ]]; then
echo "VSPHERE_DNS=${VSPHERE_DNS}"
echo "VSPHERE_DNSCONTROLLER_IMAGE=${VSPHERE_DNSCONTROLLER_IMAGE}"
echo "KOPS_STATE_STORE=${KOPS_STATE_STORE}"
echo "AWS_REGION=${AWS_REGION}"
echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}"
echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}"
echo "VSPHERE_USERNAME=${VSPHERE_USERNAME}"
echo "VSPHERE_PASSWORD=${VSPHERE_PASSWORD}"
echo "NODEUP_URL=${NODEUP_URL}"
echo "PROTOKUBE_IMAGE=${PROTOKUBE_IMAGE}"
echo "TARGET=${TARGET}"
echo "TARGET_PATH=${TARGET_PATH}"
fi
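Note that these exports persist in your shell only when the script is sourced rather than executed as a child process; a minimal usage sketch:
```bash
# Load the vSphere build environment into the current shell...
source hack/vsphere/vsphere_env.sh --set
# ...build, deploy, and test, then clear the variables when done.
source hack/vsphere/vsphere_env.sh --unset
```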


@@ -28,8 +28,11 @@ import (
// BootstrapScript creates the bootstrap script
type BootstrapScript struct {
NodeUpSource string
NodeUpSourceHash string
NodeUpSource string
NodeUpSourceHash string
// TODO temporary field to enable workflow for vSphere cloud provider.
AddAwsEnvironmentVariables bool
NodeUpConfigBuilder func(ig *kops.InstanceGroup) (*nodeup.NodeUpConfig, error)
}


@@ -0,0 +1,145 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resources
var VsphereNodeUpTemplate = `#!/bin/bash
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
NODEUP_URL={{ NodeUpSource }}
NODEUP_HASH={{ NodeUpSourceHash }}
{{ Env1 }}
{{ Env2 }}
{{ Env3 }}
function ensure-install-dir() {
INSTALL_DIR="/var/cache/kubernetes-install"
mkdir -p ${INSTALL_DIR}
cd ${INSTALL_DIR}
}
# Retry a download until we get it. Takes a hash and a set of URLs.
#
# $1 is the sha1 of the URL. Can be "" if the sha1 is unknown.
# $2+ are the URLs to download.
download-or-bust() {
local -r hash="$1"
shift 1
urls=( $* )
while true; do
for url in "${urls[@]}"; do
local file="${url##*/}"
rm -f "${file}"
if ! curl -f --ipv4 -Lo "${file}" --connect-timeout 20 --retry 6 --retry-delay 10 "${url}"; then
echo "== Failed to download ${url}. Retrying. =="
elif [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
echo "== Hash validation of ${url} failed. Retrying. =="
else
if [[ -n "${hash}" ]]; then
echo "== Downloaded ${url} (SHA1 = ${hash}) =="
else
echo "== Downloaded ${url} =="
fi
return
fi
done
echo "All downloads failed; sleeping before retrying"
sleep 60
done
}
validate-hash() {
local -r file="$1"
local -r expected="$2"
local actual
actual=$(sha1sum ${file} | awk '{ print $1 }') || true
if [[ "${actual}" != "${expected}" ]]; then
echo "== ${file} corrupted, sha1 ${actual} doesn't match expected ${expected} =="
return 1
fi
}
function split-commas() {
echo $1 | tr "," "\n"
}
function try-download-release() {
# TODO(zmerlynn): Now we REALLY have no excuse not to do the reboot
# optimization.
local -r nodeup_urls=( $(split-commas "${NODEUP_URL}") )
local -r nodeup_filename="${nodeup_urls[0]##*/}"
if [[ -n "${NODEUP_HASH:-}" ]]; then
local -r nodeup_hash="${NODEUP_HASH}"
else
# TODO: Remove?
echo "Downloading sha1 (not found in env)"
download-or-bust "" "${nodeup_urls[@]/%/.sha1}"
local -r nodeup_hash=$(cat "${nodeup_filename}.sha1")
fi
echo "Downloading nodeup (${nodeup_urls[@]})"
download-or-bust "${nodeup_hash}" "${nodeup_urls[@]}"
chmod +x nodeup
}
function download-release() {
# In case of failure checking integrity of release, retry.
until try-download-release; do
sleep 15
echo "Couldn't download release. Retrying..."
done
echo "Running nodeup"
# We can't run in the foreground because of https://github.com/docker/docker/issues/23793
( cd ${INSTALL_DIR}; ./nodeup --install-systemd-unit --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8 )
}
####################################################################################
/bin/systemd-machine-id-setup || echo "failed to ensure machine-id is configured"
echo "== nodeup node config starting =="
ensure-install-dir
cat > kube_env.yaml << __EOF_KUBE_ENV
{{ KubeEnv }}
__EOF_KUBE_ENV
download-release
echo "== nodeup node config done =="
`
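For reference, after Go template substitution the rendered script header might look like the following (values are hypothetical; Env1 through Env3 presumably carry the AWS credential exports enabled by AddAwsEnvironmentVariables):
```bash
NODEUP_URL=http://build-host.example.com/kops/nodeup/nodeup
NODEUP_HASH=5e486d4a2700a3a61c4edfd97fb088984a7f734f
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
```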


@@ -31,8 +31,6 @@ type AutoscalingGroupModelBuilder struct {
var _ fi.ModelBuilder = &AutoscalingGroupModelBuilder{}
const defaultVmTemplateName = "Ubuntu_16_10"
func (b *AutoscalingGroupModelBuilder) Build(c *fi.ModelBuilderContext) error {
// Note that we are creating a VM per instance group. Instance group represents a group of VMs.
// The following logic should considerably change once we add support for multiple master/worker nodes,
@@ -41,7 +39,7 @@ func (b *AutoscalingGroupModelBuilder) Build(c *fi.ModelBuilderContext) error {
name := b.AutoscalingGroupName(ig)
createVmTask := &vspheretasks.VirtualMachine{
Name: &name,
VMTemplateName: fi.String(defaultVmTemplateName),
VMTemplateName: fi.String(ig.Spec.Image),
}
c.AddTask(createVmTask)
@@ -53,6 +51,8 @@ func (b *AutoscalingGroupModelBuilder) Build(c *fi.ModelBuilderContext) error {
IG: ig,
BootstrapScript: b.BootstrapScript,
}
attachISOTask.BootstrapScript.AddAwsEnvironmentVariables = true
c.AddTask(attachISOTask)
powerOnTaskName := "PowerON-" + name


@@ -556,9 +556,10 @@ func (c *ApplyClusterCmd) Run() error {
}
bootstrapScriptBuilder := &model.BootstrapScript{
NodeUpConfigBuilder: renderNodeUpConfig,
NodeUpSourceHash: "",
NodeUpSource: c.NodeUpSource,
NodeUpConfigBuilder: renderNodeUpConfig,
NodeUpSourceHash: "",
NodeUpSource: c.NodeUpSource,
AddAwsEnvironmentVariables: false,
}
switch fi.CloudProviderID(cluster.Spec.CloudProvider) {
case fi.CloudProviderAWS:


@@ -42,6 +42,8 @@ const (
defaultMasterMachineTypeGCE = "n1-standard-1"
defaultMasterMachineTypeAWS = "m3.medium"
defaultMasterMachineTypeVSphere = "vsphere_master"
defaultVSphereNodeImage = "ubuntu_16_04"
)
var masterMachineTypeExceptions = map[string]string{
@@ -250,8 +252,9 @@ func defaultImage(cluster *api.Cluster, channel *api.Channel) string {
return image.Name
}
}
} else if fi.CloudProviderID(cluster.Spec.CloudProvider) == fi.CloudProviderVSphere {
return defaultVSphereNodeImage
}
glog.Infof("Cannot set default Image for CloudProvider=%q", cluster.Spec.CloudProvider)
return ""
}


@@ -22,6 +22,7 @@ import (
"runtime"
"text/template"
"bytes"
"github.com/golang/glog"
"k8s.io/apimachinery/pkg/util/sets"
api "k8s.io/kops/pkg/apis/kops"
@@ -30,6 +31,7 @@ import (
"k8s.io/kops/upup/pkg/fi"
"k8s.io/kops/upup/pkg/fi/secrets"
"k8s.io/kops/util/pkg/vfs"
"os"
)
const TagMaster = "_kubernetes_master"