Support user-defined S3 endpoint.

When the environment variable S3_ENDPOINT is not empty, kops will use
the bucket on that specific S3 endpoint, instead of defaulting to AWS S3.
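For example, a minimal sketch of pointing kops at a Minio server instead of AWS S3 (the endpoint address, credentials, and bucket name below are placeholders):

```bash
# All values are placeholders; substitute your own S3-compatible
# endpoint, credentials, and state-store bucket.
export S3_ENDPOINT=http://10.0.0.5:9000
export S3_REGION=us-west-2
export S3_ACCESS_KEY_ID=minio_access_key
export S3_SECRET_ACCESS_KEY=minio_secret_key
export KOPS_STATE_STORE=s3://kops-state

kops get clusters
```

When S3_ENDPOINT is unset, kops keeps using AWS S3 as before.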
Miao Luo 2017-04-12 15:25:35 -07:00
parent 3bfe3b6e18
commit 58197e6dab
7 changed files with 65 additions and 181 deletions

View File

@@ -10,7 +10,6 @@ Here is a [list of requirements and tasks](https://docs.google.com/document/d/10
## Setting up DNS
Since vSphere doesn't have a built-in DNS service, we use CoreDNS to meet the DNS requirement in the vSphere provider. This requires users to set up a CoreDNS server before creating a Kubernetes cluster; please follow the instructions below.
**Until CoreDNS support becomes stable, set the environment variable "VSPHERE_DNS=coredns"** to enable CoreDNS. Otherwise AWS Route53 will be the default DNS service. To use Route53, follow the instructions at: https://github.com/vmware/kops/blob/vsphere-develop/docs/aws.md
For now the DNS zone is hardcoded to skydns.local, so your cluster name must have the suffix skydns.local, for example: "mycluster.skydns.local"
@@ -57,22 +56,55 @@ ns1.ns.dns.skydns.local. 160 IN A 192.168.0.1
Add ```--dns=private --vsphere-coredns-server=http://[DNS server's IP]:2379``` to the ```kops create cluster``` command line.
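For example, a sketch of the full command (the DNS server IP and cluster name are placeholders; append whatever other flags your environment requires):

```bash
export VSPHERE_DNS=coredns
kops create cluster \
  --dns=private \
  --vsphere-coredns-server=http://192.168.0.1:2379 \
  mycluster.skydns.local
```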
### Use CoreDNS supported DNS Controller
Information about DNS Controller can be found [here](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)
Information about DNS Controller can be found [here](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md).
Currently the DNS Controller is an add-on container and the image is from kope/dns-controller.
Before the vSphere support is officially merged into upstream, we need to set up CoreDNS supported DNS controller manually.
Before the vSphere support is officially merged into upstream, please use the following CoreDNS supported DNS controller.
```bash
DOCKER_REGISTRY=[your docker hub repo] make dns-controller-push
export VSPHERE_DNSCONTROLLER_IMAGE=[your docker hub repo]
make
kops create cluster ...
export DNSCONTROLLER_IMAGE=cnastorage/dns-controller
```
(The above environment variable is already set in [kops_dir]/hack/vsphere/set_env)
## Setting up cluster state storage
kops requires cluster state to be stored in a storage service; AWS S3 is the default option.
More about using AWS S3 for the cluster state store can be found under "Cluster State storage" on this [page](https://github.com/kubernetes/kops/blob/master/docs/aws.md).
Users can also set up their own S3 server and follow the instructions below to use a user-defined S3-compatible application for cluster state storage.
This is recommended if you don't have an AWS account, or if you don't want to store the status of your clusters on public cloud storage.
Minio is an S3-compatible object storage application. We have included the Minio components inside the same OVA template used for the CoreDNS service.
If you haven't set up CoreDNS according to the "Setup CoreDNS server" section of this document, please follow Steps 1 through 6 of that section first.
Then SSH into the CoreDNS/Minio VM and execute:
```bash
/root/start-minio.sh [bucket_name]
```
Output of the script should look like:
```bash
Please set the following environment variables into hack/vsphere/set_env accordingly, before using kops create cluster:
KOPS_STATE_STORE=s3://[s3_bucket]
S3_ACCESS_KEY_ID=[s3_access_key]
S3_SECRET_ACCESS_KEY=[s3_secret_key]
S3_REGION=[s3_region]
```
Update [kops_dir]/hack/vsphere/set_env according to the output of the script and the IP address/service port of the Minio server:
```bash
export KOPS_STATE_STORE=s3://[s3_bucket]
export S3_ACCESS_KEY_ID=[s3_access_key]
export S3_SECRET_ACCESS_KEY=[s3_secret_key]
export S3_REGION=[s3_region]
export S3_ENDPOINT=http://[s3_server_ip]:[s3_service_port]
```
Users can also choose their own S3-compatible storage applications by setting these environment variables similarly.
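Before running kops, you can sanity-check the endpoint and credentials with any S3-compatible client; for example, with the AWS CLI (the bracketed values are the placeholders from the script output above):

```bash
export AWS_ACCESS_KEY_ID=[s3_access_key]
export AWS_SECRET_ACCESS_KEY=[s3_secret_key]
# --endpoint-url points the client at the user-defined S3 server instead of AWS.
aws s3 ls s3://[s3_bucket] --region [s3_region] --endpoint-url http://[s3_server_ip]:[s3_service_port]
```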
## Kops with vSphere
vSphere cloud provider support in kops is a work in progress. To try out deploying a Kubernetes cluster on vSphere using kops, some extra steps are required.
### Pre-requisites
+ vSphere with at least one ESX host that has sufficient free disk space on its attached datastore. ESX VMs should have internet connectivity.
+ Setup DNS following steps given in relevant Section above.
+ Setup DNS and S3 storage service following steps given in relevant Section above.
+ Upload VM template. Steps:
1. Log in to vSphere Client.
2. Right-click on the ESX host on which you want to deploy the template.
@@ -80,8 +112,8 @@ vSphere cloud provider support in kops is a work in progress
4. Copy and paste the URL for the [OVA](https://storage.googleapis.com/kops-vsphere/kops_ubuntu_16_04.ova) (uploaded 04/18/2017).
5. Follow the remaining steps in the wizard.
**NOTE: DO NOT POWER ON THE IMPORTED TEMPLATE VM.**
+ Currently the vSphere code uses AWS S3 for storing all configurations, specs, addon YAMLs, etc. You need valid AWS credentials to try out kops on vSphere. The s3://your-objectstore/cluster1.skydns.local folder will contain all configuration, specs, addons, etc., required to configure the Kubernetes cluster. (If you don't know how to set up AWS, read more about kops and how to deploy a cluster using kops on AWS.)
+ Update ```[kops_dir]/hack/vsphere/set_env``` to set the necessary environment variables.
+ ```source [kops_dir]/hack/vsphere/set_env```
### Installing
Currently vSphere support is not part of upstream kops releases. Please follow the instructions below to use binaries/images with vSphere support.
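As of this commit, ```[kops_dir]/hack/vsphere/set_env``` points at the following prebuilt artifacts:

```bash
export NODEUP_URL=https://storage.googleapis.com/kops-vsphere/nodeup
export PROTOKUBE_IMAGE=https://storage.googleapis.com/kops-vsphere/protokube.tar.gz
```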

View File

@@ -15,12 +15,12 @@
# limitations under the License.
export KOPS_FEATURE_FLAGS=
export VSPHERE_DNS=
export VSPHERE_DNSCONTROLLER_IMAGE=
export DNSCONTROLLER_IMAGE=
export KOPS_STATE_STORE=
export AWS_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export S3_REGION=
export S3_ACCESS_KEY_ID=
export S3_SECRET_ACCESS_KEY=
export S3_ENDPOINT=
export VSPHERE_USERNAME=
export VSPHERE_PASSWORD=
export TARGET=
@@ -29,15 +29,15 @@ export NODEUP_URL=
export PROTOKUBE_IMAGE=
echo "KOPS_FEATURE_FLAGS=${KOPS_FEATURE_FLAGS}"
echo "VSPHERE_DNS=${VSPHERE_DNS}"
echo "VSPHERE_DNSCONTROLLER_IMAGE=${VSPHERE_DNSCONTROLLER_IMAGE}"
echo "DNSCONTROLLER_IMAGE=${VSPHERE_DNSCONTROLLER_IMAGE}"
echo "KOPS_STATE_STORE=${KOPS_STATE_STORE}"
echo "AWS_REGION=${AWS_REGION}"
echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}"
echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}"
echo "S3_REGION=${S3_REGION}"
echo "S3_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}"
echo "S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}"
echo "S3_ENDPOINT=${S3_ENDPOINT}"
echo "VSPHERE_USERNAME=${VSPHERE_USERNAME}"
echo "VSPHERE_PASSWORD=${VSPHERE_PASSWORD}"
echo "NODEUP_URL=${NODEUP_URL}"
echo "PROTOKUBE_IMAGE=${PROTOKUBE_IMAGE}"
echo "TARGET=${TARGET}"
echo "TARGET_PATH=${TARGET_PATH}"
echo "TARGET_PATH=${TARGET_PATH}"

View File

@@ -24,10 +24,11 @@ export DNSCONTROLLER_IMAGE=cnastorage/dns-controller
# S3 bucket that kops should use.
export KOPS_STATE_STORE=s3://your-obj-store
# AWS credentials
export AWS_REGION=us-west-2
export AWS_ACCESS_KEY_ID=something
export AWS_SECRET_ACCESS_KEY=something
# S3 state store credentials
export S3_REGION=us-west-2
export S3_ACCESS_KEY_ID=something
export S3_SECRET_ACCESS_KEY=something
export S3_ENDPOINT=http://endpoint_ip:port
# vSphere credentials
export VSPHERE_USERNAME=administrator@vsphere.local
@@ -47,12 +48,12 @@ export NODEUP_URL=https://storage.googleapis.com/kops-vsphere/nodeup
export PROTOKUBE_IMAGE=https://storage.googleapis.com/kops-vsphere/protokube.tar.gz
echo "KOPS_FEATURE_FLAGS=${KOPS_FEATURE_FLAGS}"
echo "VSPHERE_DNS=${VSPHERE_DNS}"
echo "VSPHERE_DNSCONTROLLER_IMAGE=${VSPHERE_DNSCONTROLLER_IMAGE}"
echo "DNSCONTROLLER_IMAGE=${VSPHERE_DNSCONTROLLER_IMAGE}"
echo "KOPS_STATE_STORE=${KOPS_STATE_STORE}"
echo "AWS_REGION=${AWS_REGION}"
echo "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}"
echo "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}"
echo "S3_REGION=${S3_REGION}"
echo "S3_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}"
echo "S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}"
echo "S3_ENDPOINT=${S3_ENDPOINT}"
echo "VSPHERE_USERNAME=${VSPHERE_USERNAME}"
echo "VSPHERE_PASSWORD=${VSPHERE_PASSWORD}"
echo "NODEUP_URL=${NODEUP_URL}"

View File

@@ -30,8 +30,6 @@ import (
type BootstrapScript struct {
NodeUpSource string
NodeUpSourceHash string
// TODO temporary field to enable workflow for vSphere cloud provider.
AddAwsEnvironmentVariables bool
NodeUpConfigBuilder func(ig *kops.InstanceGroup) (*nodeup.NodeUpConfig, error)
}

View File

@@ -1,145 +0,0 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package resources
var VsphereNodeUpTemplate = `#!/bin/bash
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
NODEUP_URL={{ NodeUpSource }}
NODEUP_HASH={{ NodeUpSourceHash }}
{{ Env1 }}
{{ Env2 }}
{{ Env3 }}
function ensure-install-dir() {
INSTALL_DIR="/var/cache/kubernetes-install"
mkdir -p ${INSTALL_DIR}
cd ${INSTALL_DIR}
}
# Retry a download until we get it. Takes a hash and a set of URLs.
#
# $1 is the sha1 of the URL. Can be "" if the sha1 is unknown.
# $2+ are the URLs to download.
download-or-bust() {
local -r hash="$1"
shift 1
urls=( $* )
while true; do
for url in "${urls[@]}"; do
local file="${url##*/}"
rm -f "${file}"
if ! curl -f --ipv4 -Lo "${file}" --connect-timeout 20 --retry 6 --retry-delay 10 "${url}"; then
echo "== Failed to download ${url}. Retrying. =="
elif [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
echo "== Hash validation of ${url} failed. Retrying. =="
else
if [[ -n "${hash}" ]]; then
echo "== Downloaded ${url} (SHA1 = ${hash}) =="
else
echo "== Downloaded ${url} =="
fi
return
fi
done
echo "All downloads failed; sleeping before retrying"
sleep 60
done
}
validate-hash() {
local -r file="$1"
local -r expected="$2"
local actual
actual=$(sha1sum ${file} | awk '{ print $1 }') || true
if [[ "${actual}" != "${expected}" ]]; then
echo "== ${file} corrupted, sha1 ${actual} doesn't match expected ${expected} =="
return 1
fi
}
function split-commas() {
echo $1 | tr "," "\n"
}
function try-download-release() {
# TODO(zmerlynn): Now we REALLY have no excuse not to do the reboot
# optimization.
local -r nodeup_urls=( $(split-commas "${NODEUP_URL}") )
local -r nodeup_filename="${nodeup_urls[0]##*/}"
if [[ -n "${NODEUP_HASH:-}" ]]; then
local -r nodeup_hash="${NODEUP_HASH}"
else
# TODO: Remove?
echo "Downloading sha1 (not found in env)"
download-or-bust "" "${nodeup_urls[@]/%/.sha1}"
local -r nodeup_hash=$(cat "${nodeup_filename}.sha1")
fi
echo "Downloading nodeup (${nodeup_urls[@]})"
download-or-bust "${nodeup_hash}" "${nodeup_urls[@]}"
chmod +x nodeup
}
function download-release() {
# In case of failure checking integrity of release, retry.
until try-download-release; do
sleep 15
echo "Couldn't download release. Retrying..."
done
echo "Running nodeup"
# We can't run in the foreground because of https://github.com/docker/docker/issues/23793
( cd ${INSTALL_DIR}; ./nodeup --install-systemd-unit --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8 )
}
####################################################################################
/bin/systemd-machine-id-setup || echo "failed to ensure machine-id is configured"
echo "== nodeup node config starting =="
ensure-install-dir
cat > kube_env.yaml << __EOF_KUBE_ENV
{{ KubeEnv }}
__EOF_KUBE_ENV
download-release
echo "== nodeup node config done =="
`

View File

@@ -62,7 +62,6 @@ func (b *AutoscalingGroupModelBuilder) Build(c *fi.ModelBuilderContext) error {
BootstrapScript: b.BootstrapScript,
EtcdClusters: b.Cluster.Spec.EtcdClusters,
}
attachISOTask.BootstrapScript.AddAwsEnvironmentVariables = true
c.AddTask(attachISOTask)

View File

@@ -556,10 +556,9 @@ func (c *ApplyClusterCmd) Run() error {
}
bootstrapScriptBuilder := &model.BootstrapScript{
NodeUpConfigBuilder: renderNodeUpConfig,
NodeUpSourceHash: "",
NodeUpSource: c.NodeUpSource,
AddAwsEnvironmentVariables: false,
NodeUpConfigBuilder: renderNodeUpConfig,
NodeUpSourceHash: "",
NodeUpSource: c.NodeUpSource,
}
switch fi.CloudProviderID(cluster.Spec.CloudProvider) {
case fi.CloudProviderAWS: