mirror of https://github.com/kubernetes/kops.git
Merge pull request #8 from justinsb/fintegration
cloudup & nodeup: direct, terraform, cloud-init
commit 72a79d7998
@@ -0,0 +1,2 @@
vendor/
.build/
@@ -0,0 +1,42 @@
Random scribblings useful for development...

## Developing nodeup

    ssh ${HOST} sudo mkdir -p /opt/nodeup/state
    ssh ${HOST} sudo chown -R ${USER} /opt/nodeup

    go install k8s.io/kube-deploy/upup/... && rsync ~/k8s/bin/nodeup ${HOST}:/opt/nodeup/nodeup && rsync --delete -avz trees/ ${HOST}:/opt/nodeup/trees/ \
      && rsync state/node.yaml ${HOST}:/opt/nodeup/state/node.yaml \
      && ssh ${HOST} sudo /opt/nodeup/nodeup --v=2 --template=/opt/nodeup/trees/nodeup --state=/opt/nodeup/state --tags=kubernetes_pool,debian_family,gce,systemd

# Random misc

Extract the master node config from a terraform output:

    cat tf/k8s.tf.json | jq -r '.resource.google_compute_instance["kubernetes-master"].metadata.config' > state/node.yaml
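The jq one-liner above digs the master instance's `config` metadata value out of the generated Terraform JSON. The same extraction can be sketched in Go (the minimal input document here is illustrative, not real cloudup output):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractMasterConfig pulls the "config" metadata value for the
// kubernetes-master instance, mirroring the jq path
// .resource.google_compute_instance["kubernetes-master"].metadata.config
func extractMasterConfig(tfJSON []byte) (string, error) {
	var doc struct {
		Resource struct {
			Instances map[string]struct {
				Metadata map[string]string `json:"metadata"`
			} `json:"google_compute_instance"`
		} `json:"resource"`
	}
	if err := json.Unmarshal(tfJSON, &doc); err != nil {
		return "", err
	}
	return doc.Resource.Instances["kubernetes-master"].Metadata["config"], nil
}

func main() {
	// A tiny stand-in for tf/k8s.tf.json
	tf := []byte(`{"resource":{"google_compute_instance":{"kubernetes-master":{"metadata":{"config":"InstancePrefix: kubernetes"}}}}}`)
	config, err := extractMasterConfig(tf)
	if err != nil {
		panic(err)
	}
	fmt.Println(config) // prints the embedded node config YAML
}
```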

TODOS
======

* Implement number-of-tags prioritization
* Allow files ending in .md to be ignored. Useful for comments.
* Better dependency tracking on systemd services?
* Automatically use a different file mode if the file starts with #! ?
* Support .static under files to allow for files ending in .template?
* How to inherit options?
* Allow customization of ordering? Maybe prefix-based.
* Cache hashes in-process (along with a timestamp?) so we don't hash the kubernetes binary bundle repeatedly
* Fix the fact that we hash assets twice
* Confirm that we drop support for init.d
* Can we just use JSON custom marshalling instead of all our reflection stuff (or at least lighten the load)?

* Do we officially publish https://storage.googleapis.com/kubernetes-release/release/v1.2.2/kubernetes-server-linux-amd64.tar.gz (i.e. just the server tar.gz)?

* Need to start docker-healthcheck once

* Can we replace some or all of nodeup config with pkg/apis/componentconfig/types.go ?
@@ -0,0 +1,31 @@
gocode:
	glide install
	go install k8s.io/kube-deploy/upup/cmd/...

tar: gocode
	rm -rf .build/tar
	mkdir -p .build/tar/nodeup/root
	cp ${GOPATH}/bin/nodeup .build/tar/nodeup/root
	cp -r models/nodeup/ .build/tar/nodeup/root/model/
	tar czvf .build/nodeup.tar.gz -C .build/tar/ .
	tar tvf .build/nodeup.tar.gz
	(sha1sum .build/nodeup.tar.gz | cut -d' ' -f1) > .build/nodeup.tar.gz.sha1

upload: tar
	rm -rf .build/s3
	mkdir -p .build/s3/nodeup
	cp .build/nodeup.tar.gz .build/s3/nodeup/
	cp .build/nodeup.tar.gz.sha1 .build/s3/nodeup/
	aws s3 sync .build/s3/ s3://kubeupv2/
	aws s3api put-object-acl --bucket kubeupv2 --key nodeup/nodeup.tar.gz --acl public-read
	aws s3api put-object-acl --bucket kubeupv2 --key nodeup/nodeup.tar.gz.sha1 --acl public-read

push: tar
	scp .build/nodeup.tar.gz ${TARGET}:/tmp/
	ssh ${TARGET} sudo tar zxf /tmp/nodeup.tar.gz -C /var/cache/kubernetes-install

push-dry: push
	ssh ${TARGET} sudo SKIP_PACKAGE_UPDATE=1 /var/cache/kubernetes-install/nodeup/root/nodeup --conf=metadata://gce/config --dryrun --v=8 --template=/var/cache/kubernetes-install/nodeup/root/model

push-run: push
	ssh ${TARGET} sudo SKIP_PACKAGE_UPDATE=1 /var/cache/kubernetes-install/nodeup/root/nodeup --conf=metadata://gce/config --v=8 --template=/var/cache/kubernetes-install/nodeup/root/model
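The `tar` target above stores the bare hex SHA-1 of the bundle in a `.sha1` sidecar file. A minimal Go sketch of computing the same value, so a consumer can check a downloaded bundle against the sidecar (the helper name is illustrative, not part of the Makefile):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// sha1Hex returns the lowercase hex SHA-1 digest, matching the output of
// `sha1sum FILE | cut -d' ' -f1` used by the tar target.
func sha1Hex(data []byte) string {
	sum := sha1.Sum(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	bundle := []byte("hello") // stand-in for the nodeup.tar.gz bytes
	got := sha1Hex(bundle)
	want := "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d" // value read from the .sha1 sidecar
	fmt.Println(got == want)
}
```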
@@ -0,0 +1,51 @@
## UpUp - CloudUp & NodeUp

CloudUp and NodeUp are two tools that aim to replace kube-up:
the easiest way to get a production Kubernetes up and running.

(Currently a work in progress, but working. Some of these statements are forward-looking.)

Some of the more interesting features:

* Written in Go, so hopefully easier to maintain and extend as complexity inevitably increases
* Uses a state-sync model, so we get things like a dry-run mode and idempotency automatically
* Based on a simple meta-model defined in a directory tree
* Can produce configurations in other formats (currently Terraform & Cloud-Init), so that we can have working
  configurations for other tools also

## Bringing up a cluster

Set `YOUR_GCE_PROJECT`, then:

```
cd upup
make
${GOPATH}/bin/cloudup --v=0 --logtostderr -cloud=gce -zone=us-central1-f -project=$YOUR_GCE_PROJECT -name=kubernetes -kubernetes-version=1.2.2
```

If you have problems, please set `--v=8 --logtostderr`, open an issue, and ping justinsb on slack!

For now, we don't build a local kubectl file. So just ssh to the master, and run kubectl from there:

```
gcloud compute ssh kubernetes-master
...
kubectl get nodes
kubectl get pods --all-namespaces
```

## Other interesting modes

See changes that would be applied: `${GOPATH}/bin/cloudup --dryrun`

Build a terraform model: `${GOPATH}/bin/cloudup $NORMAL_ARGS --target=terraform > tf/k8s.tf.json`

# How it works

Everything is driven by a local configuration directory tree, called the "model". The model represents
the desired state of the world.

Each file in the tree describes a Task.

On the nodeup side, Tasks can manage files, systemd services, packages etc.
On the cloudup side, Tasks manage cloud resources: instances, networks, disks etc.
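The state-sync idea behind Tasks can be sketched roughly like this (the names are illustrative, not the actual upup API): each Task compares desired state with actual state, and a dry-run target reports the pending changes instead of applying them, which is how dry-run and idempotency fall out automatically.

```go
package main

import "fmt"

// Task is a hypothetical, simplified version of the state-sync contract:
// given the actual state, report what would have to change.
type Task interface {
	Changes(actual map[string]string) map[string]string
}

// FileTask declares a desired file; nodeup-style Tasks for services or
// packages would follow the same pattern.
type FileTask struct{ Path, Contents string }

func (t *FileTask) Changes(actual map[string]string) map[string]string {
	changes := map[string]string{}
	if actual[t.Path] != t.Contents {
		changes[t.Path] = t.Contents // desired differs from actual
	}
	return changes
}

func main() {
	var task Task = &FileTask{Path: "/etc/motd", Contents: "new"}
	actual := map[string]string{"/etc/motd": "old"}
	// A dry-run target prints the diff instead of applying it; running the
	// same task again after applying would produce no changes (idempotency).
	for path, contents := range task.Changes(actual) {
		fmt.Printf("would write %s: %q\n", path, contents)
	}
}
```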
@@ -0,0 +1,244 @@
package main

import (
	"flag"
	"fmt"
	"github.com/golang/glog"
	"io/ioutil"
	"k8s.io/kube-deploy/upup/pkg/fi"
	"k8s.io/kube-deploy/upup/pkg/fi/cloudup"
	"k8s.io/kube-deploy/upup/pkg/fi/cloudup/gce"
	"k8s.io/kube-deploy/upup/pkg/fi/cloudup/gcetasks"
	"k8s.io/kube-deploy/upup/pkg/fi/cloudup/terraform"
	"k8s.io/kube-deploy/upup/pkg/fi/loader"
	"k8s.io/kube-deploy/upup/pkg/fi/utils"
	"os"
	"path"
	"strings"
)

func main() {
	dryrun := false
	flag.BoolVar(&dryrun, "dryrun", false, "Don't create cloud resources; just show what would be done")
	target := "direct"
	flag.StringVar(&target, "target", target, "Target - direct, terraform")
	configFile := ""
	flag.StringVar(&configFile, "conf", configFile, "Configuration file to load")
	modelDir := "models/cloudup"
	flag.StringVar(&modelDir, "model", modelDir, "Source directory to use as model")
	stateDir := "./state"
	flag.StringVar(&stateDir, "state", stateDir, "Directory to use to store local state")
	nodeModelDir := "models/nodeup"
	flag.StringVar(&nodeModelDir, "nodemodel", nodeModelDir, "Source directory to use as model for node configuration")

	// TODO: Replace all these with a direct binding to the CloudConfig
	// (we have plenty of reflection helpers if one isn't already available!)
	config := &cloudup.CloudConfig{}
	flag.StringVar(&config.CloudProvider, "cloud", config.CloudProvider, "Cloud provider to use - gce, aws")
	flag.StringVar(&config.Zone, "zone", config.Zone, "Cloud zone to target (warning - will be replaced by region)")
	flag.StringVar(&config.Project, "project", config.Project, "Project to use (must be set on GCE)")
	flag.StringVar(&config.ClusterName, "name", config.ClusterName, "Name for cluster")
	flag.StringVar(&config.KubernetesVersion, "kubernetes-version", config.KubernetesVersion, "Version of kubernetes to run")
	//flag.StringVar(&config.Region, "region", config.Region, "Cloud region to target")

	flag.Parse()

	if dryrun {
		target = "dryrun"
	}

	cmd := &CreateClusterCmd{
		Config:       config,
		ModelDir:     modelDir,
		StateDir:     stateDir,
		Target:       target,
		NodeModelDir: nodeModelDir,
	}

	if configFile != "" {
		//confFile := path.Join(cmd.StateDir, "kubernetes.yaml")
		err := cmd.LoadConfig(configFile)
		if err != nil {
			glog.Errorf("error loading config: %v", err)
			os.Exit(1)
		}
	}

	err := cmd.Run()
	if err != nil {
		glog.Errorf("error running command: %v", err)
		os.Exit(1)
	}

	glog.Infof("Completed successfully")
}

type CreateClusterCmd struct {
	// Config is the cluster configuration
	Config *cloudup.CloudConfig
	// ModelDir is the directory in which the cloudup model is found
	ModelDir string
	// StateDir is a directory in which we store state (such as the PKI tree)
	StateDir string
	// Target specifies how we are operating e.g. direct to GCE, or AWS, or dry-run, or terraform
	Target string
	// NodeModelDir is the directory in which the node model is found
	NodeModelDir string
}

func (c *CreateClusterCmd) LoadConfig(configFile string) error {
	conf, err := ioutil.ReadFile(configFile)
	if err != nil {
		return fmt.Errorf("error loading configuration file %q: %v", configFile, err)
	}
	err = utils.YamlUnmarshal(conf, c.Config)
	if err != nil {
		return fmt.Errorf("error parsing configuration file %q: %v", configFile, err)
	}
	return nil
}

func (c *CreateClusterCmd) Run() error {
	if c.StateDir == "" {
		return fmt.Errorf("state dir is required")
	}

	if c.Config.CloudProvider == "" {
		return fmt.Errorf("must specify CloudProvider. Specify with -cloud")
	}

	tags := make(map[string]struct{})

	l := &cloudup.Loader{}
	l.Init()

	caStore, err := fi.NewFilesystemCAStore(path.Join(c.StateDir, "pki"))
	if err != nil {
		return fmt.Errorf("error building CA store: %v", err)
	}
	secretStore, err := fi.NewFilesystemSecretStore(path.Join(c.StateDir, "secrets"))
	if err != nil {
		return fmt.Errorf("error building secret store: %v", err)
	}

	if len(c.Config.Assets) == 0 {
		if c.Config.KubernetesVersion == "" {
			return fmt.Errorf("must either specify a KubernetesVersion (-kubernetes-version) or provide an asset with the release bundle")
		}
		defaultReleaseAsset := fmt.Sprintf("https://storage.googleapis.com/kubernetes-release/release/v%s/kubernetes-server-linux-amd64.tar.gz", c.Config.KubernetesVersion)
		glog.Infof("Adding default kubernetes release asset: %s", defaultReleaseAsset)
		// TODO: Verify it exists, get the hash (that will check that KubernetesVersion is valid)
		c.Config.Assets = append(c.Config.Assets, defaultReleaseAsset)
	}

	if c.Config.NodeUp.Location == "" {
		location := "https://kubeupv2.s3.amazonaws.com/nodeup/nodeup.tar.gz"
		glog.Infof("Using default nodeup location: %q", location)
		c.Config.NodeUp.Location = location
	}

	var cloud fi.Cloud

	var project string
	var region string

	checkExisting := true

	switch c.Config.CloudProvider {
	case "gce":
		tags["_gce"] = struct{}{}
		l.AddTypes(map[string]interface{}{
			"persistentDisk":       &gcetasks.PersistentDisk{},
			"instance":             &gcetasks.Instance{},
			"instanceTemplate":     &gcetasks.InstanceTemplate{},
			"network":              &gcetasks.Network{},
			"managedInstanceGroup": &gcetasks.ManagedInstanceGroup{},
			"firewallRule":         &gcetasks.FirewallRule{},
			"ipAddress":            &gcetasks.IPAddress{},
		})

		// For now a zone must be specified...
		// This will be replaced with a region when we go full HA
		zone := c.Config.Zone
		if zone == "" {
			return fmt.Errorf("must specify a zone (use -zone)")
		}
		tokens := strings.Split(zone, "-")
		if len(tokens) <= 2 {
			return fmt.Errorf("invalid zone: %v", zone)
		}
		region = tokens[0] + "-" + tokens[1]

		project = c.Config.Project
		if project == "" {
			return fmt.Errorf("project is required for GCE")
		}
		gceCloud, err := gce.NewGCECloud(region, project)
		if err != nil {
			return err
		}
		cloud = gceCloud

	default:
		return fmt.Errorf("unknown CloudProvider %q", c.Config.CloudProvider)
	}

	l.Tags = tags
	l.CAStore = caStore
	l.SecretStore = secretStore
	l.StateDir = c.StateDir
	l.NodeModelDir = c.NodeModelDir
	l.OptionsLoader = loader.NewOptionsLoader(c.Config)

	taskMap, err := l.Build(c.ModelDir)
	if err != nil {
		glog.Exitf("error building: %v", err)
	}

	if c.Config.ClusterName == "" {
		return fmt.Errorf("ClusterName is required")
	}

	if c.Config.Zone == "" {
		return fmt.Errorf("Zone is required")
	}

	var target fi.Target

	switch c.Target {
	case "direct":
		switch c.Config.CloudProvider {
		case "gce":
			target = gce.NewGCEAPITarget(cloud.(*gce.GCECloud))
		default:
			return fmt.Errorf("direct configuration not supported with CloudProvider:%q", c.Config.CloudProvider)
		}

	case "terraform":
		checkExisting = false
		target = terraform.NewTerraformTarget(region, project, os.Stdout)

	case "dryrun":
		target = fi.NewDryRunTarget(os.Stdout)
	default:
		return fmt.Errorf("unsupported target type %q", c.Target)
	}

	context, err := fi.NewContext(target, cloud, caStore, checkExisting)
	if err != nil {
		glog.Exitf("error building context: %v", err)
	}
	defer context.Close()

	err = context.RunTasks(taskMap)
	if err != nil {
		glog.Exitf("error running tasks: %v", err)
	}

	err = target.Finish(taskMap)
	if err != nil {
		glog.Exitf("error closing target: %v", err)
	}

	return nil
}
@@ -0,0 +1,51 @@
package main

import (
	"flag"
	"fmt"
	"github.com/golang/glog"
	"k8s.io/kube-deploy/upup/pkg/fi/nodeup"
	"os"
)

func main() {
	flagModel := "model"
	flag.StringVar(&flagModel, "model", flagModel, "directory to use as model for desired configuration")
	var flagConf string
	flag.StringVar(&flagConf, "conf", "node.yaml", "configuration location")
	var flagAssetDir string
	flag.StringVar(&flagAssetDir, "assets", "/var/cache/nodeup", "the location for the local asset cache")

	dryrun := false
	flag.BoolVar(&dryrun, "dryrun", false, "Don't create cloud resources; just show what would be done")
	target := "direct"
	flag.StringVar(&target, "target", target, "Target - direct, cloudinit")

	flag.Parse()

	if dryrun {
		target = "dryrun"
	}

	flag.Set("logtostderr", "true")
	flag.Parse()

	if flagConf == "" {
		glog.Exitf("--conf is required")
	}

	config := &nodeup.NodeConfig{}
	cmd := &nodeup.NodeUpCommand{
		Config:         config,
		ConfigLocation: flagConf,
		ModelDir:       flagModel,
		Target:         target,
		AssetDir:       flagAssetDir,
	}
	err := cmd.Run(os.Stdout)
	if err != nil {
		glog.Exitf("error running nodeup: %v", err)
		os.Exit(1)
	}
	fmt.Printf("success")
}
@@ -0,0 +1,53 @@
hash: 67b60195692c44c9e3be82ced106118324e5de46357a1c11a2942aef8675816e
updated: 2016-05-06T15:48:38.735466083-04:00
imports:
- name: github.com/cloudfoundry-incubator/candiedyaml
  version: 99c3df83b51532e3615f851d8c2dbb638f5313bf
- name: github.com/ghodss/yaml
  version: e8e0db9016175449df0e9c4b6e6995a9433a395c
- name: github.com/golang/glog
  version: 23def4e6c14b4da8ac2ed8007337bc5eb5007998
- name: github.com/golang/protobuf
  version: 7cc19b78d562895b13596ddce7aafb59dd789318
  subpackages:
  - proto
- name: golang.org/x/net
  version: 7e42c0e1329bb108f7376a7618a2871ab90f1c4d
  subpackages:
  - context
  - context/ctxhttp
- name: golang.org/x/oauth2
  version: e86e2718db89775a4604abc10a5d3a5672e7336e
  subpackages:
  - google
  - internal
  - jws
  - jwt
- name: google.golang.org/api
  version: f9a4669e07732c84854dce1f5c451c22427228fb
  subpackages:
  - compute/v1
  - googleapi
  - storage/v1
  - gensupport
  - googleapi/internal/uritemplates
- name: google.golang.org/appengine
  version: e234e71924d4aa52444bc76f2f831f13fa1eca60
  subpackages:
  - urlfetch
  - internal
  - internal/app_identity
  - internal/modules
  - internal/urlfetch
  - internal/base
  - internal/datastore
  - internal/log
  - internal/remote_api
- name: google.golang.org/cloud
  version: 200292f09e3aaa34878d801ab71fe823b1f7d36a
  subpackages:
  - compute/metadata
  - internal
- name: google.golang.org/grpc
  version: 9604a2bb7dd81d87c2873a9580258465f3c311c8
devImports: []
@@ -0,0 +1,16 @@
package: k8s.io/kube-deploy/upup
import:
- package: github.com/ghodss/yaml
- package: github.com/golang/glog
- package: golang.org/x/net
  subpackages:
  - context
- package: golang.org/x/oauth2
  subpackages:
  - google
- package: google.golang.org/api
  subpackages:
  - compute/v1
  - googleapi
  - storage/v1
- package: google.golang.org/grpc
@@ -0,0 +1,53 @@
ClusterName: {{ .InstancePrefix }}
InstancePrefix: kubernetes
AllocateNodeCIDRs: true
Multizone: true

ServiceClusterIPRange: 10.0.0.0/16
ClusterIPRange: 10.244.0.0/16
MasterInternalIP: 172.20.0.9
MasterIPRange: 10.246.0.0/24
NetworkProvider: none

AdmissionControl: NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,PersistentVolumeLabel

EnableClusterMonitoring: none
EnableL7LoadBalancing: none
EnableClusterUI: true

EnableClusterDNS: true
DNSReplicas: 1
DNSServerIP: 10.0.0.10
DNSDomain: cluster.local

EnableClusterLogging: true
EnableNodeLogging: true
LoggingDestination: elasticsearch
ElasticsearchLoggingReplicas: 1

MasterImage: k8s-1-2-debian-jessie-amd64-2016-04-17
MasterName: {{ .InstancePrefix }}-master
MasterTag: {{ .InstancePrefix }}-master
{{ if gt .NodeCount 500 }}
MasterMachineType: n1-standard-32
{{ else if gt .NodeCount 250 }}
MasterMachineType: n1-standard-16
{{ else if gt .NodeCount 100 }}
MasterMachineType: n1-standard-8
{{ else if gt .NodeCount 10 }}
MasterMachineType: n1-standard-4
{{ else if gt .NodeCount 5 }}
MasterMachineType: n1-standard-2
{{ else }}
MasterMachineType: n1-standard-1
{{ end }}
MasterVolumeType: pd-ssd
MasterVolumeSize: 20

NodeImage: k8s-1-2-debian-jessie-amd64-2016-04-17
NodeCount: 2
NodeTag: {{ .InstancePrefix }}-minion
NodeInstancePrefix: {{ .InstancePrefix }}-minion
NodeMachineType: n1-standard-2

KubeUser: admin
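The `MasterMachineType` tiering above is ordinary Go template logic. A minimal sketch of evaluating that same `gt`/`else if` chain with `text/template` (the real cloudup loader registers more template functions than this, and `masterMachineType` is an illustrative helper name):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The same conditional chain as the model file, collapsed onto one line so
// the output is just the machine type.
const tierTemplate = `{{ if gt .NodeCount 500 }}n1-standard-32` +
	`{{ else if gt .NodeCount 250 }}n1-standard-16` +
	`{{ else if gt .NodeCount 100 }}n1-standard-8` +
	`{{ else if gt .NodeCount 10 }}n1-standard-4` +
	`{{ else if gt .NodeCount 5 }}n1-standard-2` +
	`{{ else }}n1-standard-1{{ end }}`

func masterMachineType(nodeCount int) string {
	t := template.Must(template.New("master").Parse(tierTemplate))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]int{"NodeCount": nodeCount}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	for _, n := range []int{2, 50, 600} {
		fmt.Printf("%d nodes -> %s\n", n, masterMachineType(n))
	}
}
```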
@@ -0,0 +1,44 @@
# TODO: Support multiple masters

persistentDisk/{{ .MasterName }}-pd:
  zone: {{ .Zone }}
  sizeGB: {{ or .MasterVolumeSize 20 }}
  volumeType: {{ or .MasterVolumeType "pd-ssd" }}

# Open master HTTPS
firewallRule/{{ .MasterName }}-https:
  network: network/default
  sourceRanges: 0.0.0.0/0
  targetTags: {{ .MasterTag }}
  allowed: tcp:443

# Allocate master IP
ipAddress/{{ .MasterName }}-ip:
  address: {{ .MasterPublicIP }}

# Master instance
instance/{{ .MasterName }}:
  ipaddress: ipAddress/{{ .MasterName }}-ip
  zone: {{ .Zone }}
  machineType: {{ .MasterMachineType }}
  image: {{ .MasterImage }}
  tags: {{ .MasterTag }}
  network: network/default
  scopes:
    - storage-ro
    - compute-rw
    - monitoring
    - logging-write
  canIpForward: true
  disks:
    master-pd: persistentDisk/{{ .MasterName }}-pd
  metadata:
    #kube-env: resources/kube-env
{{ if eq .NodeInit "cloudinit" }}
    config: resources/cloudinit.yaml _kubernetes_master
{{ else }}
    startup-script: resources/nodeup.sh
    config: resources/config.yaml _kubernetes_master
{{ end }}
    cluster-name: resources/cluster-name
  preemptible: false
@@ -0,0 +1,20 @@
{{ $networkName := "default" }}

network/{{ $networkName }}:
  cidr: 10.240.0.0/16

# Allow all internal traffic
firewallRule/{{ $networkName }}-default-internal:
  network: network/{{ $networkName }}
  sourceRanges: 10.0.0.0/8
  allowed:
    - tcp:1-65535
    - udp:1-65535
    - icmp

# SSH is open to the world
firewallRule/{{ $networkName }}-default-ssh:
  network: network/default
  sourceRanges: 0.0.0.0/0
  allowed: tcp:22
@@ -0,0 +1,48 @@
# TODO: Support multiple instance groups

instanceTemplate/{{ .NodeInstancePrefix }}-template:
  network: network/default
  machineType: {{ .NodeMachineType }}
  # TODO: Make configurable
  bootDiskType: pd-standard
  bootDiskSizeGB: 100
  bootDiskImage: {{ .NodeImage }}
  canIpForward: true
  # TODO: Support preemptible nodes?
  preemptible: false
  scopes:
    - compute-rw
    - monitoring
    - logging-write
    - storage-ro
  metadata:
    # kube-env: resources/kube-env
{{ if eq .NodeInit "cloudinit" }}
    # TODO: we should probably always store the config somewhere
    config: resources/cloudinit.yaml _kubernetes_master
{{ else }}
    startup-script: resources/nodeup.sh
    config: resources/config.yaml _kubernetes_pool
{{ end }}
    cluster-name: resources/cluster-name
  tags:
    - {{ .NodeTag }}

managedInstanceGroup/{{ .NodeInstancePrefix }}-group:
  zone: {{ .Zone }}
  baseInstanceName: {{ .NodeInstancePrefix }}
  targetSize: {{ .NodeCount }}
  instanceTemplate: instanceTemplate/{{ .NodeInstancePrefix }}-template

# Allow traffic from nodes -> nodes
firewallRule/{{ .NodeTag }}-all:
  network: network/default
  sourceRanges: {{ .ClusterIPRange }}
  targetTags: {{ .NodeTag }}
  allowed:
    - tcp
    - udp
    - icmp
    - esp
    - ah
    - sctp
@@ -0,0 +1 @@
{{ BuildNodeConfig "cloudinit" "resources/config.yaml.template" Args }}
@@ -0,0 +1 @@
{{ .ClusterName }}
@@ -0,0 +1,39 @@
Kubelet:
  Certificate: {{ Base64Encode (CA.Cert "kubelet").AsString }}
  Key: {{ Base64Encode (CA.PrivateKey "kubelet").AsString }}

NodeUp:
  Location: https://kubeupv2.s3.amazonaws.com/nodeup/nodeup.tar.gz

CACertificate: {{ Base64Encode (CA.Cert "ca").AsString }}

APIServer:
  Certificate: {{ Base64Encode (CA.Cert "master").AsString }}
  Key: {{ Base64Encode (CA.PrivateKey "master").AsString }}

KubeUser: {{ .KubeUser }}
KubePassword: {{ (Secrets.Secret "kube").AsString }}

Tokens:
  admin: {{ (Secrets.Secret "admin").AsString }}
  kubelet: {{ (Secrets.Secret "kubelet").AsString }}
  kube-proxy: {{ (Secrets.Secret "kube-proxy").AsString }}
  "system:scheduler": {{ (Secrets.Secret "system:scheduler").AsString }}
  "system:controller_manager": {{ (Secrets.Secret "system:controller_manager").AsString }}
  "system:logging": {{ (Secrets.Secret "system:logging").AsString }}
  "system:monitoring": {{ (Secrets.Secret "system:monitoring").AsString }}
  "system:dns": {{ (Secrets.Secret "system:dns").AsString }}

Tags:
{{ range $tag := Args }}
  - {{ $tag }}
{{ end }}
  - _gce
  - _jessie
  - _debian_family
  - _systemd

Assets:
{{ range $asset := .Assets }}
  - {{ $asset }}
{{ end }}
@@ -0,0 +1,150 @@
INSTANCE_PREFIX: {{ .InstancePrefix }}
NODE_INSTANCE_PREFIX: {{ .NodeInstancePrefix }}
CLUSTER_IP_RANGE: {{ .ClusterIPRange }}

#{
#url, hash, err := k.ServerBinaryTar.Resolve(fi.HashAlgorithmSHA1)
#if err != nil {
#return nil, err
#}
#SERVER_BINARY_TAR_URL"] = url
#SERVER_BINARY_TAR_HASH"] = hash
#}

#{
#url, hash, err := k.SaltTar.Resolve(fi.HashAlgorithmSHA1)
#if err != nil {
#return nil, err
#}
#SALT_TAR_URL"] = url
#SALT_TAR_HASH"] = hash
#}

SERVICE_CLUSTER_IP_RANGE: {{ .ServiceClusterIPRange }}

KUBERNETES_MASTER_NAME: {{ .MasterName }}

ALLOCATE_NODE_CIDRS: {{ .AllocateNodeCIDRs }}

ENABLE_CLUSTER_MONITORING: {{ .EnableClusterMonitoring }}
ENABLE_L7_LOADBALANCING: {{ .EnableL7LoadBalancing }}
ENABLE_CLUSTER_LOGGING: {{ .EnableClusterLogging }}
ENABLE_CLUSTER_UI: {{ .EnableClusterUI }}
ENABLE_NODE_LOGGING: {{ .EnableNodeLogging }}
LOGGING_DESTINATION: {{ .LoggingDestination }}
ELASTICSEARCH_LOGGING_REPLICAS: {{ .ElasticsearchLoggingReplicas }}
ENABLE_CLUSTER_DNS: {{ .EnableClusterDNS }}
ENABLE_CLUSTER_REGISTRY: {{ .EnableClusterRegistry }}
CLUSTER_REGISTRY_DISK: {{ .ClusterRegistryDisk }}
CLUSTER_REGISTRY_DISK_SIZE: {{ .ClusterRegistryDiskSize }}
DNS_REPLICAS: {{ .DNSReplicas }}
DNS_SERVER_IP: {{ .DNSServerIP }}
DNS_DOMAIN: {{ .DNSDomain }}

KUBELET_TOKEN: {{ .KubeletToken }}
KUBE_PROXY_TOKEN: {{ .KubeProxyToken }}
ADMISSION_CONTROL: {{ .AdmissionControl }}
MASTER_IP_RANGE: {{ .MasterIPRange }}
RUNTIME_CONFIG: {{ .RuntimeConfig }}

CA_CERT: {{ Base64Encode (CA.Cert "ca").AsString }}
KUBELET_CERT: {{ Base64Encode (CA.Cert "kubelet").AsString }}
KUBELET_KEY: {{ Base64Encode (CA.PrivateKey "kubelet").AsString }}

NETWORK_PROVIDER: {{ .NetworkProvider }}
HAIRPIN_MODE: {{ .HairpinMode }}
OPENCONTRAIL_TAG: {{ .OpencontrailTag }}
OPENCONTRAIL_KUBERNETES_TAG: {{ .OpencontrailKubernetesTag }}
OPENCONTRAIL_PUBLIC_SUBNET: {{ .OpencontrailPublicSubnet }}
E2E_STORAGE_TEST_ENVIRONMENT: {{ .E2EStorageTestEnvironment }}
KUBE_IMAGE_TAG: {{ .KubeImageTag }}
KUBE_DOCKER_REGISTRY: {{ .KubeDockerRegistry }}
KUBE_ADDON_REGISTRY: {{ .KubeAddonRegistry }}
MULTIZONE: {{ .Multizone }}
NON_MASQUERADE_CIDR: {{ .NonMasqueradeCidr }}

KUBELET_PORT: {{ .KubeletPort }}

KUBE_APISERVER_REQUEST_TIMEOUT: {{ .KubeApiserverRequestTimeout }}

TERMINATED_POD_GC_THRESHOLD: {{ .TerminatedPodGcThreshold }}

#if k.OsDistribution == "trusty" {
#KUBE_MANIFESTS_TAR_URL: .KubeManifestsTarURL }}
#KUBE_MANIFESTS_TAR_HASH: .KubeManifestsTarSha256 }}
#}

TEST_CLUSTER: {{ .TestCluster }}

KUBELET_TEST_ARGS: {{ .KubeletTestArgs }}
KUBELET_TEST_LOG_LEVEL: {{ .KubeletTestLogLevel }}
DOCKER_TEST_LOG_LEVEL: {{ .DockerTestLogLevel }}
ENABLE_CUSTOM_METRICS: {{ .EnableCustomMetrics }}

# if .Target.IsMaster

# If the user requested that the master be part of the cluster, set the
# environment variable to program the master kubelet to register itself.
{{ if .RegisterMasterKubelet }}
KUBELET_APISERVER: {{ .MasterName }}
{{ end }}

KUBERNETES_MASTER: true
KUBE_USER: {{ .KubeUser }}
KUBE_PASSWORD: {{ .KubePassword }}
KUBE_BEARER_TOKEN: {{ .BearerToken }}
MASTER_CERT: {{ Base64Encode (CA.Cert "master").AsString }}
MASTER_KEY: {{ Base64Encode (CA.PrivateKey "master").AsString }}
KUBECFG_CERT: {{ Base64Encode (CA.Cert "kubecfg").AsString }}
KUBECFG_KEY: {{ Base64Encode (CA.PrivateKey "kubecfg").AsString }}

ENABLE_MANIFEST_URL: {{ .EnableManifestURL }}
MANIFEST_URL: {{ .ManifestURL }}
MANIFEST_URL_HEADER: {{ .ManifestURLHeader }}
NUM_NODES: {{ .NodeCount }}

APISERVER_TEST_ARGS: {{ .ApiserverTestArgs }}
APISERVER_TEST_LOG_LEVEL: {{ .ApiserverTestLogLevel }}
CONTROLLER_MANAGER_TEST_ARGS: {{ .ControllerManagerTestArgs }}
CONTROLLER_MANAGER_TEST_LOG_LEVEL: {{ .ControllerManagerTestLogLevel }}
SCHEDULER_TEST_ARGS: {{ .SchedulerTestArgs }}
SCHEDULER_TEST_LOG_LEVEL: {{ .SchedulerTestLogLevel }}

# else

# Node-only vars

KUBERNETES_MASTER: false
ZONE: {{ .Zone }}
EXTRA_DOCKER_OPTS: {{ .ExtraDockerOpts }}
MANIFEST_URL: {{ .ManifestURL }}

KUBEPROXY_TEST_ARGS: {{ .KubeProxyTestArgs }}
KUBEPROXY_TEST_LOG_LEVEL: {{ .KubeProxyTestLogLevel }}

# end

NODE_LABELS: {{ .NodeLabels }}

#if k.OsDistribution == "coreos" {
#// CoreOS-only env vars. TODO(yifan): Make them available on other distros.
#KUBE_MANIFESTS_TAR_URL: .KubeManifestsTarURL }}
#KUBE_MANIFESTS_TAR_HASH: .KubeManifestsTarSha256 }}
#KUBERNETES_CONTAINER_RUNTIME: .ContainerRuntime }}
#RKT_VERSION: .RktVersion }}
#RKT_PATH: .RktPath }}
#KUBERNETES_CONFIGURE_CBR0: .KubernetesConfigureCbr0 }}
#}

# This next bit for changes vs kube-up:
# https://github.com/kubernetes/kubernetes/issues/23264
CA_KEY: {{ Base64Encode (CA.PrivateKey "ca").AsString }}
|
|
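The certificate and key values above are base64-encoded into the node environment; a consumer on the node would typically decode them back into PEM files. A minimal sketch of that decoding step (the value and output path are illustrative, not from the source):

```shell
# Hypothetical CA_KEY-style value; the real value comes from the rendered env file.
ca_key_b64="dGVzdC1rZXk="
printf '%s' "${ca_key_b64}" | base64 -d > /tmp/ca.key
cat /tmp/ca.key   # -> test-key
```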
@ -0,0 +1,139 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -o errexit
set -o nounset
set -o pipefail

NODEUP_TAR_URL={{ .NodeUp.Location }}
NODEUP_TAR_HASH={{ .NodeUp.Hash }}

function ensure-basic-networking() {
  # Deal with GCE networking bring-up race. (We rely on DNS for a lot,
  # and it's just not worth doing a whole lot of startup work if this
  # isn't ready yet.)
  until getent hosts metadata.google.internal &>/dev/null; do
    echo 'Waiting for functional DNS (trying to resolve metadata.google.internal)...'
    sleep 3
  done
  until getent hosts $(hostname -f || echo _error_) &>/dev/null; do
    echo 'Waiting for functional DNS (trying to resolve my own FQDN)...'
    sleep 3
  done
  until getent hosts $(hostname -i || echo _error_) &>/dev/null; do
    echo 'Waiting for functional DNS (trying to resolve my own IP)...'
    sleep 3
  done

  echo "Networking functional on $(hostname) ($(hostname -i))"
}

function ensure-install-dir() {
  INSTALL_DIR="/var/cache/kubernetes-install"
  mkdir -p ${INSTALL_DIR}
  cd ${INSTALL_DIR}
}

function curl-metadata() {
  curl --fail --retry 5 --silent -H 'Metadata-Flavor: Google' "http://metadata/computeMetadata/v1/instance/attributes/${1}"
}

# Retry a download until we get it. Takes a hash and a set of URLs.
#
# $1 is the sha1 of the URL. Can be "" if the sha1 is unknown.
# $2+ are the URLs to download.
download-or-bust() {
  local -r hash="$1"
  shift 1

  urls=( $* )
  while true; do
    for url in "${urls[@]}"; do
      local file="${url##*/}"
      rm -f "${file}"
      if ! curl -f --ipv4 -Lo "${file}" --connect-timeout 20 --retry 6 --retry-delay 10 "${url}"; then
        echo "== Failed to download ${url}. Retrying. =="
      elif [[ -n "${hash}" ]] && ! validate-hash "${file}" "${hash}"; then
        echo "== Hash validation of ${url} failed. Retrying. =="
      else
        if [[ -n "${hash}" ]]; then
          echo "== Downloaded ${url} (SHA1 = ${hash}) =="
        else
          echo "== Downloaded ${url} =="
        fi
        return
      fi
    done
  done
}

validate-hash() {
  local -r file="$1"
  local -r expected="$2"
  local actual

  actual=$(sha1sum ${file} | awk '{ print $1 }') || true
  if [[ "${actual}" != "${expected}" ]]; then
    echo "== ${file} corrupted, sha1 ${actual} doesn't match expected ${expected} =="
    return 1
  fi
}

function split-commas() {
  echo $1 | tr "," "\n"
}

function try-download-release() {
  # TODO(zmerlynn): Now we REALLY have no excuse not to do the reboot
  # optimization.

  local -r nodeup_tar_urls=( $(split-commas "${NODEUP_TAR_URL}") )
  local -r nodeup_tar="${nodeup_tar_urls[0]##*/}"
  if [[ -n "${NODEUP_TAR_HASH:-}" ]]; then
    local -r nodeup_tar_hash="${NODEUP_TAR_HASH}"
  else
    # TODO: Remove?
    echo "Downloading binary release sha1 (not found in env)"
    download-or-bust "" "${nodeup_tar_urls[@]/.tar.gz/.tar.gz.sha1}"
    local -r nodeup_tar_hash=$(cat "${nodeup_tar}.sha1")
  fi

  echo "Downloading binary release tar (${nodeup_tar_urls[@]})"
  download-or-bust "${nodeup_tar_hash}" "${nodeup_tar_urls[@]}"

  echo "Unpacking and checking integrity of nodeup"
  rm -rf nodeup

  tar xzf "${nodeup_tar}" && tar tzf "${nodeup_tar}" > /dev/null
}

function download-release() {
  # In case of failure checking integrity of release, retry.
  until try-download-release; do
    sleep 15
    echo "Couldn't download release. Retrying..."
  done

  echo "Running release install script"
  ( cd nodeup/root; ./nodeup --conf=metadata://{{ .CloudProvider }}/config --v=8 )
}

####################################################################################

echo "== nodeup node config starting =="
ensure-basic-networking
ensure-install-dir
download-release
echo "== nodeup node config done =="
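The download loop above pairs curl retries with a sha1 check. The check itself can be exercised in isolation with the same `sha1sum | awk` pipeline the script's `validate-hash` uses; a small sketch (file path and payload are illustrative):

```shell
# Write a known payload and verify it against its expected sha1.
printf 'hello' > /tmp/nodeup-demo.bin
expected="aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"   # sha1("hello")
actual=$(sha1sum /tmp/nodeup-demo.bin | awk '{ print $1 }')
if [[ "${actual}" == "${expected}" ]]; then
  echo "hash ok"
else
  echo "hash mismatch: ${actual}"
fi
```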
@ -0,0 +1,3 @@
subject:
  CommonName: kubecfg
type: client
@ -0,0 +1,3 @@
subject:
  CommonName: kubelet
type: client
@ -0,0 +1,12 @@
subject:
  CommonName: kubernetes-master
type: server
alternateNames:
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.{{ .DNSDomain }}
- {{ .MasterName }}
- {{ .MasterPublicIP }}
- {{ .MasterInternalIP }}
- {{ .WellKnownServiceIP 1 }}
@ -0,0 +1,66 @@
#!/bin/bash

# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# loadedImageFlags is a bit-flag to track which docker images loaded successfully.
let loadedImageFlags=0

while true; do
  restart_docker=false

  if which docker 1>/dev/null 2>&1; then

    timeout 30 docker load -i /srv/salt/kube-bins/kube-apiserver.tar 1>/dev/null 2>&1
    rc=$?
    if [[ $rc == 0 ]]; then
      let loadedImageFlags="$loadedImageFlags|1"
    elif [[ $rc == 124 ]]; then
      restart_docker=true
    fi

    timeout 30 docker load -i /srv/salt/kube-bins/kube-scheduler.tar 1>/dev/null 2>&1
    rc=$?
    if [[ $rc == 0 ]]; then
      let loadedImageFlags="$loadedImageFlags|2"
    elif [[ $rc == 124 ]]; then
      restart_docker=true
    fi

    timeout 30 docker load -i /srv/salt/kube-bins/kube-controller-manager.tar 1>/dev/null 2>&1
    rc=$?
    if [[ $rc == 0 ]]; then
      let loadedImageFlags="$loadedImageFlags|4"
    elif [[ $rc == 124 ]]; then
      restart_docker=true
    fi
  fi

  # All required docker images are installed; exit the while loop.
  if [[ $loadedImageFlags == 7 ]]; then break; fi

  # Sometimes docker load hangs; restarting the docker daemon resolves the issue.
  if [[ "${restart_docker}" == "true" ]]; then
    if ! service docker restart; then # Try systemctl if there's no service command.
      systemctl restart docker
    fi
  fi

  # Sleep for 15 seconds before attempting to load docker images again.
  sleep 15

done

# Now exit. After kube-push, salt will notice that the service is down and it
# will start it and new docker images will be loaded.
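The loader above tracks the three image loads with a bit-flag (1, 2, 4) and exits once all bits are set (1|2|4 == 7). The bookkeeping in isolation, written with plain arithmetic expansion:

```shell
# Each successful load sets one bit; 7 means all three images are in.
flags=0
flags=$(( flags | 1 ))   # kube-apiserver loaded
flags=$(( flags | 4 ))   # kube-controller-manager loaded
echo "${flags}"          # -> 5 (kube-scheduler, bit 2, still missing)
flags=$(( flags | 2 ))   # kube-scheduler loaded
[[ ${flags} == 7 ]] && echo "all images loaded"
```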
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,9 @@
[Unit]
Description=Kubernetes-Master Addon Object Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/etc/kubernetes/kube-master-addons.sh

[Install]
WantedBy=multi-user.target
@ -0,0 +1 @@
{{ .CACertificate.AsString }}
@ -0,0 +1 @@
{{ .APIServer.Certificate.AsString }}
@ -0,0 +1 @@
{{ .APIServer.Key.AsString }}
@ -0,0 +1,65 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "etcd-server-events",
    "namespace": "kube-system"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "etcd-container",
        "image": "gcr.io/google_containers/etcd:2.2.1",
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
        "command": [
          "/bin/sh",
          "-c",
          "/usr/local/bin/etcd --listen-peer-urls http://127.0.0.1:2381 --addr 127.0.0.1:4002 --bind-addr 127.0.0.1:4002 --data-dir /var/etcd/data-events 1>>/var/log/etcd-events.log 2>&1"
        ],
        "livenessProbe": {
          "httpGet": {
            "host": "127.0.0.1",
            "port": 4002,
            "path": "/health"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        },
        "ports": [
          {
            "name": "serverport",
            "containerPort": 2381,
            "hostPort": 2381
          },
          {
            "name": "clientport",
            "containerPort": 4002,
            "hostPort": 4002
          }
        ],
        "volumeMounts": [
          {
            "name": "varetcd",
            "mountPath": "/var/etcd",
            "readOnly": false
          },
          {
            "name": "varlogetcd",
            "mountPath": "/var/log/etcd-events.log",
            "readOnly": false
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "varetcd",
        "hostPath": {
          "path": "/mnt/master-pd/var/etcd"
        }
      },
      {
        "name": "varlogetcd",
        "hostPath": {
          "path": "/var/log/etcd-events.log"
        }
      }
    ]
  }
}
@ -0,0 +1,65 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "etcd-server",
    "namespace": "kube-system"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "etcd-container",
        "image": "gcr.io/google_containers/etcd:2.2.1",
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "command": [
          "/bin/sh",
          "-c",
          "/usr/local/bin/etcd --listen-peer-urls http://127.0.0.1:2380 --addr 127.0.0.1:4001 --bind-addr 127.0.0.1:4001 --data-dir /var/etcd/data 1>>/var/log/etcd.log 2>&1"
        ],
        "livenessProbe": {
          "httpGet": {
            "host": "127.0.0.1",
            "port": 4001,
            "path": "/health"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        },
        "ports": [
          {
            "name": "serverport",
            "containerPort": 2380,
            "hostPort": 2380
          },
          {
            "name": "clientport",
            "containerPort": 4001,
            "hostPort": 4001
          }
        ],
        "volumeMounts": [
          {
            "name": "varetcd",
            "mountPath": "/var/etcd",
            "readOnly": false
          },
          {
            "name": "varlogetcd",
            "mountPath": "/var/log/etcd.log",
            "readOnly": false
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "varetcd",
        "hostPath": {
          "path": "/mnt/master-pd/var/etcd"
        }
      },
      {
        "name": "varlogetcd",
        "hostPath": {
          "path": "/var/log/etcd.log"
        }
      }
    ]
  }
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
@ -0,0 +1,514 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The business logic for whether a given object should be created
# was already enforced by salt, and /etc/kubernetes/addons is the
# managed result of that. Start everything below that directory.

# Parameters
# $1 path to add-ons

# LIMITATIONS
# 1. controllers are not updated unless their name is changed
# 3. Services will not be updated unless their name is changed,
#    but for services we actually want updates without name change.
# 4. Json files are not handled at all. Currently addons must be
#    in yaml files
# 5. exit code is probably not always correct (I haven't checked
#    carefully if it works in 100% cases)
# 6. There are no unittests
# 8. Will not work if the total length of paths to addons is greater than
#    bash can handle. Probably it is not a problem: ARG_MAX=2097152 on GCE.
# 9. Performance issue: yaml files are read many times in a single execution.

# cosmetic improvements to be done
# 1. improve the log function; add timestamp, file name, etc.
# 2. logging doesn't work from files that print things out.
# 3. kubectl prints the output to stderr (the output should be captured and then
#    logged)

# global config
KUBECTL=${TEST_KUBECTL:-}     # substitute for tests
KUBECTL=${KUBECTL:-${KUBECTL_BIN:-}}
KUBECTL=${KUBECTL:-/usr/local/bin/kubectl}
if [[ ! -x ${KUBECTL} ]]; then
  echo "ERROR: kubectl command (${KUBECTL}) not found or is not executable" 1>&2
  exit 1
fi

# If an add-on definition is incorrect, or a definition has just disappeared
# from the local directory, the script will still keep on retrying.
# The script does not end until all retries are done, so
# one invalid manifest may block updates of other add-ons.
# Be careful how you set these parameters
NUM_TRIES=1    # will be updated based on input parameters
DELAY_AFTER_ERROR_SEC=${TEST_DELAY_AFTER_ERROR_SEC:=10}

# remember that you can't log from functions that print some output (because
# logs are also printed on stdout)
# $1 level
# $2 message
function log() {
  # manage log levels manually here

  # add the timestamp if you find it useful
  case $1 in
    DB3 )
#        echo "$1: $2"
        ;;
    DB2 )
#        echo "$1: $2"
        ;;
    DBG )
#        echo "$1: $2"
        ;;
    INFO )
        echo "$1: $2"
        ;;
    WRN )
        echo "$1: $2"
        ;;
    ERR )
        echo "$1: $2"
        ;;
    * )
        echo "INVALID_LOG_LEVEL $1: $2"
        ;;
  esac
}

#$1 yaml file path
function get-object-kind-from-file() {
  # prints to stdout, so log cannot be used
  #WARNING: only yaml is supported
  cat $1 | ${PYTHON} -c '''
try:
  import pipes,sys,yaml
  y = yaml.load(sys.stdin)
  labels = y["metadata"]["labels"]
  if ("kubernetes.io/cluster-service", "true") not in labels.iteritems():
    # all add-ons must have the label "kubernetes.io/cluster-service".
    # Otherwise we are ignoring them (the update will not work anyway)
    print "ERROR"
  else:
    print y["kind"]
except Exception, ex:
  print "ERROR"
'''
}

# $1 yaml file path
# returns a string of the form <namespace>/<name> (we call it nsnames)
function get-object-nsname-from-file() {
  # prints to stdout, so log cannot be used
  #WARNING: only yaml is supported
  #addons that do not specify a namespace are assumed to be in "default".
  cat $1 | ${PYTHON} -c '''
try:
  import pipes,sys,yaml
  y = yaml.load(sys.stdin)
  labels = y["metadata"]["labels"]
  if ("kubernetes.io/cluster-service", "true") not in labels.iteritems():
    # all add-ons must have the label "kubernetes.io/cluster-service".
    # Otherwise we are ignoring them (the update will not work anyway)
    print "ERROR"
  else:
    try:
      print "%s/%s" % (y["metadata"]["namespace"], y["metadata"]["name"])
    except Exception, ex:
      print "default/%s" % y["metadata"]["name"]
except Exception, ex:
  print "ERROR"
'''
}

# $1 addon directory path
# $2 addon type (e.g. ReplicationController)
# echoes the string with paths to files containing addon for the given type
# works only for yaml files (!) (ignores json files)
function get-addon-paths-from-disk() {
  # prints to stdout, so log cannot be used
  local -r addon_dir=$1
  local -r obj_type=$2
  local kind
  local file_path
  for file_path in $(find ${addon_dir} -name \*.yaml); do
    kind=$(get-object-kind-from-file ${file_path})
    # WARNING: assumption that the topmost indentation is zero (I'm not sure yaml allows for topmost indentation)
    if [[ "${kind}" == "${obj_type}" ]]; then
      echo ${file_path}
    fi
  done
}

# waits for all subprocesses
# returns 0 if all of them were successful and 1 otherwise
function wait-for-jobs() {
  local rv=0
  local pid
  for pid in $(jobs -p); do
    wait ${pid}
    if [[ $? -ne 0 ]]; then
      rv=1;
      log ERR "error in pid ${pid}"
    fi
    log DB2 "pid ${pid} completed, current error code: ${rv}"
  done
  return ${rv}
}

function run-until-success() {
  local -r command=$1
  local tries=$2
  local -r delay=$3
  local -r command_name=$1
  while [ ${tries} -gt 0 ]; do
    log DBG "executing: '$command'"
    # let's give the command as an argument to bash -c, so that we can use
    # && and || inside the command itself
    /bin/bash -c "${command}" && \
      log DB3 "== Successfully executed ${command_name} at $(date -Is) ==" && \
      return 0
    let tries=tries-1
    log INFO "== Failed to execute ${command_name} at $(date -Is). ${tries} tries remaining. =="
    sleep ${delay}
  done
  return 1
}

# $1 object type
# returns a list of <namespace>/<name> pairs (nsnames)
function get-addon-nsnames-from-server() {
  local -r obj_type=$1
  "${KUBECTL}" get "${obj_type}" --all-namespaces -o go-template="{{range.items}}{{.metadata.namespace}}/{{.metadata.name}} {{end}}" --api-version=v1 -l kubernetes.io/cluster-service=true
}

# returns the characters after the last separator (including it)
# If the separator is empty or if it doesn't appear in the string,
# an empty string is printed
# $1 input string
# $2 separator (must be single character, or empty)
function get-suffix() {
  # prints to stdout, so log cannot be used
  local -r input_string=$1
  local -r separator=$2
  local suffix

  if [[ "${separator}" == "" ]]; then
    echo ""
    return
  fi

  if [[ "${input_string}" == *"${separator}"* ]]; then
    suffix=$(echo "${input_string}" | rev | cut -d "${separator}" -f1 | rev)
    echo "${separator}${suffix}"
  else
    echo ""
  fi
}

# returns the characters up to the last separator (without it)
# $1 input string
# $2 separator
function get-basename() {
  # prints to stdout, so log cannot be used
  local -r input_string=$1
  local -r separator=$2
  local suffix
  suffix="$(get-suffix ${input_string} ${separator})"
  # this will strip the suffix (if it matches)
  echo ${input_string%$suffix}
}

function delete-object() {
  local -r obj_type=$1
  local -r namespace=$2
  local -r obj_name=$3
  log INFO "Deleting ${obj_type} ${namespace}/${obj_name}"

  run-until-success "${KUBECTL} delete --namespace=${namespace} ${obj_type} ${obj_name}" ${NUM_TRIES} ${DELAY_AFTER_ERROR_SEC}
}

function create-object() {
  local -r obj_type=$1
  local -r file_path=$2

  local nsname_from_file
  nsname_from_file=$(get-object-nsname-from-file ${file_path})
  if [[ "${nsname_from_file}" == "ERROR" ]]; then
    log INFO "Cannot read object name from ${file_path}. Ignoring"
    return 1
  fi
  IFS='/' read namespace obj_name <<< "${nsname_from_file}"

  log INFO "Creating new ${obj_type} from file ${file_path} in namespace ${namespace}, name: ${obj_name}"
  # this will keep on failing if the ${file_path} disappeared in the meantime.
  # Do not use too many retries.
  run-until-success "${KUBECTL} create --namespace=${namespace} -f ${file_path}" ${NUM_TRIES} ${DELAY_AFTER_ERROR_SEC}
}

function update-object() {
  local -r obj_type=$1
  local -r namespace=$2
  local -r obj_name=$3
  local -r file_path=$4
  log INFO "updating the ${obj_type} ${namespace}/${obj_name} with the new definition ${file_path}"
  delete-object ${obj_type} ${namespace} ${obj_name}
  create-object ${obj_type} ${file_path}
}

# deletes the objects from the server
# $1 object type
# $2 a list of object nsnames
function delete-objects() {
  local -r obj_type=$1
  local -r obj_nsnames=$2
  local namespace
  local obj_name
  for nsname in ${obj_nsnames}; do
    IFS='/' read namespace obj_name <<< "${nsname}"
    delete-object ${obj_type} ${namespace} ${obj_name} &
  done
}

# creates objects from the given files
# $1 object type
# $2 a list of paths to definition files
function create-objects() {
  local -r obj_type=$1
  local -r file_paths=$2
  local file_path
  for file_path in ${file_paths}; do
    # Remember that the file may have disappeared by now.
    # But we don't want to check it here because
    # such a race condition may always happen after
    # we check it. Let's have the race
    # condition happen a bit more often so that
    # we see that our tests pass anyway.
    create-object ${obj_type} ${file_path} &
  done
}

# updates objects
# $1 object type
# $2 a list of update specifications
# each update specification is a ';' separated pair: <nsname>;<file path>
function update-objects() {
  local -r obj_type=$1      # ignored
  local -r update_spec=$2
  local objdesc
  local nsname
  local obj_name
  local namespace

  for objdesc in ${update_spec}; do
    IFS=';' read nsname file_path <<< "${objdesc}"
    IFS='/' read namespace obj_name <<< "${nsname}"

    update-object ${obj_type} ${namespace} ${obj_name} ${file_path} &
  done
}

# Global variables set by function match-objects.
nsnames_for_delete=""   # a list of object nsnames to be deleted
for_update=""           # a list of pairs <nsname>;<filePath> for objects that should be updated
nsnames_for_ignore=""   # a list of object nsnames that will be ignored
new_files=""            # a list of file paths that weren't matched by any existing objects (these objects must be created now)

# $1 path to files with objects
# $2 object type in the API (ReplicationController or Service)
# $3 name separator (single character or empty)
function match-objects() {
  local -r addon_dir=$1
  local -r obj_type=$2
  local -r separator=$3

  # output variables (globals)
  nsnames_for_delete=""
  for_update=""
  nsnames_for_ignore=""
  new_files=""

  addon_nsnames_on_server=$(get-addon-nsnames-from-server "${obj_type}")
  # if the api server is unavailable then abandon the update for this cycle
  if [[ $? -ne 0 ]]; then
    log ERR "unable to query ${obj_type} - exiting"
    exit 1
  fi

  addon_paths_in_files=$(get-addon-paths-from-disk "${addon_dir}" "${obj_type}")

  log DB2 "addon_nsnames_on_server=${addon_nsnames_on_server}"
  log DB2 "addon_paths_in_files=${addon_paths_in_files}"

  local matched_files=""

  local basensname_on_server=""
  local nsname_on_server=""
  local suffix_on_server=""
  local nsname_from_file=""
  local suffix_from_file=""
  local found=0
  local addon_path=""

  # objects that were moved between namespaces will have a different nsname
  # because the namespace is included. So they will be treated
  # like different objects and not updated but deleted and created again
  # (in the current version update is also delete+create, so it does not matter)
  for nsname_on_server in ${addon_nsnames_on_server}; do
    basensname_on_server=$(get-basename ${nsname_on_server} ${separator})
    suffix_on_server="$(get-suffix ${nsname_on_server} ${separator})"

    log DB3 "Found existing addon ${nsname_on_server}, basename=${basensname_on_server}"

    # check if the addon is present in the directory and decide
    # what to do with it
    # this is not optimal because we're reading the files over and over
    # again. But for a small number of addons it doesn't matter so much.
    found=0
    for addon_path in ${addon_paths_in_files}; do
      nsname_from_file=$(get-object-nsname-from-file ${addon_path})
      if [[ "${nsname_from_file}" == "ERROR" ]]; then
        log INFO "Cannot read object name from ${addon_path}. Ignoring"
        continue
      else
        log DB2 "Found object name '${nsname_from_file}' in file ${addon_path}"
      fi
      suffix_from_file="$(get-suffix ${nsname_from_file} ${separator})"

      log DB3 "matching: ${basensname_on_server}${suffix_from_file} == ${nsname_from_file}"
      if [[ "${basensname_on_server}${suffix_from_file}" == "${nsname_from_file}" ]]; then
        log DB3 "matched existing ${obj_type} ${nsname_on_server} to file ${addon_path}; suffix_on_server=${suffix_on_server}, suffix_from_file=${suffix_from_file}"
        found=1
        matched_files="${matched_files} ${addon_path}"
        if [[ "${suffix_on_server}" == "${suffix_from_file}" ]]; then
          nsnames_for_ignore="${nsnames_for_ignore} ${nsname_from_file}"
        else
          for_update="${for_update} ${nsname_on_server};${addon_path}"
        fi
        break
      fi
    done
    if [[ ${found} -eq 0 ]]; then
      log DB2 "No definition file found for replication controller ${nsname_on_server}. Scheduling for deletion"
      nsnames_for_delete="${nsnames_for_delete} ${nsname_on_server}"
    fi
  done

  log DB3 "matched_files=${matched_files}"

  # note that if the addon file is invalid (or got removed after listing files
  # but before we managed to match it) it will not be matched to any
  # of the existing objects. So we will treat it as a new file
  # and try to create its object.
  for addon_path in ${addon_paths_in_files}; do
    echo ${matched_files} | grep "${addon_path}" >/dev/null
    if [[ $? -ne 0 ]]; then
      new_files="${new_files} ${addon_path}"
    fi
  done
}

function reconcile-objects() {
  local -r addon_path=$1
  local -r obj_type=$2
  local -r separator=$3    # name separator
  match-objects ${addon_path} ${obj_type} ${separator}

  log DBG "${obj_type}: nsnames_for_delete=${nsnames_for_delete}"
  log DBG "${obj_type}: for_update=${for_update}"
  log DBG "${obj_type}: nsnames_for_ignore=${nsnames_for_ignore}"
|
log DBG "${obj_type}: new_files=${new_files}"
|
||||||
|
|
||||||
|
delete-objects "${obj_type}" "${nsnames_for_delete}"
|
||||||
|
# wait for jobs below is a protection against changing the basename
|
||||||
|
# of a replication controllerm without changing the selector.
|
||||||
|
# If we don't wait, the new rc may be created before the old one is deleted
|
||||||
|
# In such case the old one will wait for all its pods to be gone, but the pods
|
||||||
|
# are created by the new replication controller.
|
||||||
|
# passing --cascade=false could solve the problem, but we want
|
||||||
|
# all orphan pods to be deleted.
|
||||||
|
wait-for-jobs
|
||||||
|
deleteResult=$?
|
||||||
|
|
||||||
|
create-objects "${obj_type}" "${new_files}"
|
||||||
|
update-objects "${obj_type}" "${for_update}"
|
||||||
|
|
||||||
|
local nsname
|
||||||
|
for nsname in ${nsnames_for_ignore}; do
|
||||||
|
log DB2 "The ${obj_type} ${nsname} is already up to date"
|
||||||
|
done
|
||||||
|
|
||||||
|
wait-for-jobs
|
||||||
|
createUpdateResult=$?
|
||||||
|
|
||||||
|
if [[ ${deleteResult} -eq 0 ]] && [[ ${createUpdateResult} -eq 0 ]]; then
|
||||||
|
return 0
|
||||||
|
else
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
function update-addons() {
|
||||||
|
local -r addon_path=$1
|
||||||
|
# be careful, reconcile-objects uses global variables
|
||||||
|
reconcile-objects ${addon_path} ReplicationController "-" &
|
||||||
|
reconcile-objects ${addon_path} Deployment "-" &
|
||||||
|
|
||||||
|
# We don't expect names to be versioned for the following kinds, so
|
||||||
|
# we match the entire name, ignoring version suffix.
|
||||||
|
# That's why we pass an empty string as the version separator.
|
||||||
|
# If the description differs on disk, the object should be recreated.
|
||||||
|
# This is not implemented in this version.
|
||||||
|
reconcile-objects ${addon_path} Service "" &
|
||||||
|
reconcile-objects ${addon_path} PersistentVolume "" &
|
||||||
|
reconcile-objects ${addon_path} PersistentVolumeClaim "" &
|
||||||
|
|
||||||
|
wait-for-jobs
|
||||||
|
if [[ $? -eq 0 ]]; then
|
||||||
|
log INFO "== Kubernetes addon update completed successfully at $(date -Is) =="
|
||||||
|
else
|
||||||
|
log WRN "== Kubernetes addon update completed with errors at $(date -Is) =="
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# input parameters:
|
||||||
|
# $1 input directory
|
||||||
|
# $2 retry period in seconds - the script will retry api-server errors for approximately
|
||||||
|
# this amound of time (it is not very precise), at interval equal $DELAY_AFTER_ERROR_SEC.
|
||||||
|
#
|
||||||
|
|
||||||
|
if [[ $# -ne 2 ]]; then
|
||||||
|
echo "Illegal number of parameters. Usage $0 addon-dir [retry-period]" 1>&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
NUM_TRIES=$(($2 / ${DELAY_AFTER_ERROR_SEC}))
|
||||||
|
if [[ ${NUM_TRIES} -le 0 ]]; then
|
||||||
|
NUM_TRIES=1
|
||||||
|
fi
|
||||||
|
|
||||||
|
addon_path=$1
|
||||||
|
update-addons ${addon_path}
|
|
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,125 @@
#!/bin/bash

# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The business logic for whether a given object should be created
# was already enforced by salt, and /etc/kubernetes/addons is the
# managed result of that. Start everything below that directory.
KUBECTL=${KUBECTL_BIN:-/usr/local/bin/kubectl}

ADDON_CHECK_INTERVAL_SEC=${TEST_ADDON_CHECK_INTERVAL_SEC:-600}

SYSTEM_NAMESPACE=kube-system
trusty_master=${TRUSTY_MASTER:-false}

function ensure_python() {
  if ! python --version > /dev/null 2>&1; then
    echo "No python on the machine, will use a python image"
    local -r PYTHON_IMAGE=gcr.io/google_containers/python:v1
    export PYTHON="docker run --interactive --rm --net=none ${PYTHON_IMAGE} python"
  else
    export PYTHON=python
  fi
}

# $1 filename of addon to start.
# $2 count of tries to start the addon.
# $3 delay in seconds between two consecutive tries
# $4 namespace
function start_addon() {
  local -r addon_filename=$1;
  local -r tries=$2;
  local -r delay=$3;
  local -r namespace=$4

  create-resource-from-string "$(cat ${addon_filename})" "${tries}" "${delay}" "${addon_filename}" "${namespace}"
}

# $1 string with json or yaml.
# $2 count of tries to start the addon.
# $3 delay in seconds between two consecutive tries
# $4 name of this object to use when logging about it.
# $5 namespace for this object
function create-resource-from-string() {
  local -r config_string=$1;
  local tries=$2;
  local -r delay=$3;
  local -r config_name=$4;
  local -r namespace=$5;
  while [ ${tries} -gt 0 ]; do
    echo "${config_string}" | ${KUBECTL} --namespace="${namespace}" apply -f - && \
      echo "== Successfully started ${config_name} in namespace ${namespace} at $(date -Is)" && \
      return 0;
    let tries=tries-1;
    echo "== Failed to start ${config_name} in namespace ${namespace} at $(date -Is). ${tries} tries remaining. =="
    sleep ${delay};
  done
  return 1;
}

# The business logic for whether a given object should be created
# was already enforced by salt, and /etc/kubernetes/addons is the
# managed result of that. Start everything below that directory.
echo "== Kubernetes addon manager started at $(date -Is) with ADDON_CHECK_INTERVAL_SEC=${ADDON_CHECK_INTERVAL_SEC} =="

ensure_python

# Load the kube-env, which has all the environment variables we care
# about, in a flat yaml format.
kube_env_yaml="/var/cache/kubernetes-install/kube_env.yaml"
if [ ! -e "${kubelet_kubeconfig_file}" ]; then
  eval $(${PYTHON} -c '''
import pipes,sys,yaml

for k,v in yaml.load(sys.stdin).iteritems():
  print("readonly {var}={value}".format(var = k, value = pipes.quote(str(v))))
''' < "${kube_env_yaml}")
fi


# Create the namespace that will be used to host the cluster-level add-ons.
start_addon /etc/kubernetes/addons/namespace.yaml 100 10 "" &

# Wait for the default service account to be created in the kube-system namespace.
token_found=""
while [ -z "${token_found}" ]; do
  sleep .5
  token_found=$(${KUBECTL} get --namespace="${SYSTEM_NAMESPACE}" serviceaccount default -o go-template="{{with index .secrets 0}}{{.name}}{{end}}" || true)
done

echo "== default service account in the ${SYSTEM_NAMESPACE} namespace has token ${token_found} =="

# Create admission_control objects if defined before any other addon services. If the limits
# are defined in a namespace other than default, we should still create the limits for the
# default namespace.
for obj in $(find /etc/kubernetes/admission-controls \( -name \*.yaml -o -name \*.json \)); do
  start_addon "${obj}" 100 10 default &
  echo "++ obj ${obj} is created ++"
done

# Check if the configuration has changed recently - in case the user
# created/updated/deleted the files on the master.
while true; do
  start_sec=$(date +"%s")
  # kube-addon-update.sh must be deployed in the same directory as this file
  `dirname $0`/kube-addon-update.sh /etc/kubernetes/addons ${ADDON_CHECK_INTERVAL_SEC}
  end_sec=$(date +"%s")
  len_sec=$((${end_sec}-${start_sec}))
  # subtract the time passed from the sleep time
  if [[ ${len_sec} -lt ${ADDON_CHECK_INTERVAL_SEC} ]]; then
    sleep_time=$((${ADDON_CHECK_INTERVAL_SEC}-${len_sec}))
    sleep ${sleep_time}
  fi
done
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,9 @@
[Unit]
Description=Kubernetes Addon Object Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/etc/kubernetes/kube-addons.sh

[Install]
WantedBy=multi-user.target
@ -0,0 +1,100 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "{{ .APIServer.Image }}",
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "command": [
          "/bin/sh",
          "-c",
          "/usr/local/bin/kube-apiserver {{ BuildFlags .APIServer }} 1>>/var/log/kube-apiserver.log 2>&1"
        ],
        "livenessProbe": {
          "httpGet": {
            "host": "127.0.0.1",
            "port": 8080,
            "path": "/healthz"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        },
        "ports": [
          {"name": "https", "containerPort": {{ .APIServer.SecurePort }}, "hostPort": {{ .APIServer.SecurePort }}},
          {"name": "local", "containerPort": 8080, "hostPort": 8080}
        ],
        "volumeMounts": [
          {"name": "usrsharessl", "mountPath": "/usr/share/ssl", "readOnly": true},
          {"name": "usrssl", "mountPath": "/usr/ssl", "readOnly": true},
          {"name": "usrlibssl", "mountPath": "/usr/lib/ssl", "readOnly": true},
          {"name": "usrlocalopenssl", "mountPath": "/usr/local/openssl", "readOnly": true},
          {"name": "srvkube", "mountPath": "{{ .APIServer.PathSrvKubernetes }}", "readOnly": true},
          {"name": "logfile", "mountPath": "/var/log/kube-apiserver.log", "readOnly": false},
          {"name": "etcssl", "mountPath": "/etc/ssl", "readOnly": true},
          {"name": "varssl", "mountPath": "/var/ssl", "readOnly": true},
          {"name": "etcopenssl", "mountPath": "/etc/openssl", "readOnly": true},
          {"name": "etcpkitls", "mountPath": "/etc/pki/tls", "readOnly": true},
          {"name": "srvsshproxy", "mountPath": "{{ .APIServer.PathSrvSshproxy }}", "readOnly": false}
        ]
      }
    ],
    "volumes": [
      {"name": "usrsharessl", "hostPath": {"path": "/usr/share/ssl"}},
      {"name": "usrssl", "hostPath": {"path": "/usr/ssl"}},
      {"name": "usrlibssl", "hostPath": {"path": "/usr/lib/ssl"}},
      {"name": "usrlocalopenssl", "hostPath": {"path": "/usr/local/openssl"}},
      {"name": "srvkube", "hostPath": {"path": "{{ .APIServer.PathSrvKubernetes }}"}},
      {"name": "logfile", "hostPath": {"path": "/var/log/kube-apiserver.log"}},
      {"name": "etcssl", "hostPath": {"path": "/etc/ssl"}},
      {"name": "varssl", "hostPath": {"path": "/var/ssl"}},
      {"name": "etcopenssl", "hostPath": {"path": "/etc/openssl"}},
      {"name": "etcpkitls", "hostPath": {"path": "/etc/pki/tls"}},
      {"name": "srvsshproxy", "hostPath": {"path": "{{ .APIServer.PathSrvSshproxy }}"}}
    ]
  }
}
@ -0,0 +1 @@
{{ .KubePassword }},{{ .KubeUser }},admin
@ -0,0 +1,3 @@
{
  "mode": "0600"
}
@ -0,0 +1,3 @@
{{ range $id, $token := .Tokens }}
{{ $token }},{{ $id }},{{ $id }}
{{ end }}
@ -0,0 +1,3 @@
{
  "mode": "0600"
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,2 @@
APIServer:
  CloudProvider: aws
@ -0,0 +1,2 @@
APIServer:
  CloudProvider: gce
@ -0,0 +1,17 @@
APIServer:
  SecurePort: 443
  PathSrvKubernetes: /srv/kubernetes
  PathSrvSshproxy: /srv/sshproxy
  Image: gcr.io/google_containers/kube-apiserver:v1.2.2
  Address: 127.0.0.1
  EtcdServers: http://127.0.0.1:4001
  EtcdServersOverrides: /events#http://127.0.0.1:4002
  AdmissionControl: NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,PersistentVolumeLabel
  ServiceClusterIPRange: 10.0.0.0/16
  ClientCAFile: /srv/kubernetes/ca.crt
  BasicAuthFile: /srv/kubernetes/basic_auth.csv
  TLSCertFile: /srv/kubernetes/server.cert
  TLSPrivateKeyFile: /srv/kubernetes/server.key
  TokenAuthFile: /srv/kubernetes/known_tokens.csv
  LogLevel: 2
  AllowPrivileged: true
@ -0,0 +1,84 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "{{ .KubeControllerManager.Image }}",
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "command": [
          "/bin/sh",
          "-c",
          "/usr/local/bin/kube-controller-manager {{ BuildFlags .KubeControllerManager }} 1>>/var/log/kube-controller-manager.log 2>&1"
        ],
        "livenessProbe": {
          "httpGet": {
            "host": "127.0.0.1",
            "port": 10252,
            "path": "/healthz"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        },
        "volumeMounts": [
          {"name": "usrsharessl", "mountPath": "/usr/share/ssl", "readOnly": true},
          {"name": "usrssl", "mountPath": "/usr/ssl", "readOnly": true},
          {"name": "usrlibssl", "mountPath": "/usr/lib/ssl", "readOnly": true},
          {"name": "usrlocalopenssl", "mountPath": "/usr/local/openssl", "readOnly": true},
          {"name": "srvkube", "mountPath": "{{ .KubeControllerManager.PathSrvKubernetes }}", "readOnly": true},
          {"name": "logfile", "mountPath": "/var/log/kube-controller-manager.log", "readOnly": false},
          {"name": "etcssl", "mountPath": "/etc/ssl", "readOnly": true},
          {"name": "varssl", "mountPath": "/var/ssl", "readOnly": true},
          {"name": "etcopenssl", "mountPath": "/etc/openssl", "readOnly": true},
          {"name": "etcpkitls", "mountPath": "/etc/pki/tls", "readOnly": true}
        ]
      }
    ],
    "volumes": [
      {"name": "usrsharessl", "hostPath": {"path": "/usr/share/ssl"}},
      {"name": "usrssl", "hostPath": {"path": "/usr/ssl"}},
      {"name": "usrlibssl", "hostPath": {"path": "/usr/lib/ssl"}},
      {"name": "usrlocalopenssl", "hostPath": {"path": "/usr/local/openssl"}},
      {"name": "srvkube", "hostPath": {"path": "{{ .KubeControllerManager.PathSrvKubernetes }}"}},
      {"name": "logfile", "hostPath": {"path": "/var/log/kube-controller-manager.log"}},
      {"name": "etcssl", "hostPath": {"path": "/etc/ssl"}},
      {"name": "varssl", "hostPath": {"path": "/var/ssl"}},
      {"name": "etcopenssl", "hostPath": {"path": "/etc/openssl"}},
      {"name": "etcpkitls", "hostPath": {"path": "/etc/pki/tls"}}
    ]
  }
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,2 @@
KubeControllerManager:
  CloudProvider: aws
@ -0,0 +1,2 @@
KubeControllerManager:
  CloudProvider: gce
@ -0,0 +1,10 @@
KubeControllerManager:
  PathSrvKubernetes: /srv/kubernetes
  Image: gcr.io/google_containers/kube-controller-manager:v1.2.2
  Master: 127.0.0.1:8080
  ClusterName: kubernetes
  ClusterCIDR: 10.244.0.0/16
  AllocateNodeCIDRs: true
  ServiceAccountPrivateKeyFile: /srv/kubernetes/server.key
  LogLevel: 2
  RootCAFile: /srv/kubernetes/ca.crt
@ -0,0 +1,119 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v10
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v10
    kubernetes.io/cluster-service: "true"
spec:
  replicas: {{ .DNS.Replicas }}
  selector:
    k8s-app: kube-dns
    version: v10
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v10
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.12
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /kube2sky
        args:
        - -domain={{ .DNS.Domain }}
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /skydns
        args:
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain={{ .DNS.Domain }}.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        command:
        - /exechealthz
        args:
        - -cmd=nslookup kubernetes.default.svc.{{ .DNS.Domain }} 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: {{ .DNS.ServerIP }}
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
@ -0,0 +1,4 @@
DNS:
  Replicas: 1
  ServerIP: 10.0.0.10
  Domain: cluster.local
@ -0,0 +1,48 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "kube-scheduler",
    "namespace": "kube-system"
  },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "{{ .KubeScheduler.Image }}",
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
        "command": [
          "/bin/sh",
          "-c",
          "/usr/local/bin/kube-scheduler {{ BuildFlags .KubeScheduler }} 1>>/var/log/kube-scheduler.log 2>&1"
        ],
        "livenessProbe": {
          "httpGet": {
            "host": "127.0.0.1",
            "port": 10251,
            "path": "/healthz"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15
        },
        "volumeMounts": [
          {
            "name": "logfile",
            "mountPath": "/var/log/kube-scheduler.log",
            "readOnly": false
          }
        ]
      }
    ],
    "volumes": [
      {"name": "logfile", "hostPath": {"path": "/var/log/kube-scheduler.log"}}
    ]
  }
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,4 @@
KubeScheduler:
  Image: gcr.io/google_containers/kube-scheduler:v1.2.2
  Master: 127.0.0.1:8080
  LogLevel: 2
@ -0,0 +1,46 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# loadedImageFlags is a bit-flag to track which docker images loaded successfully.
let loadedImageFlags=0

while true; do
  restart_docker=false

  if which docker 1>/dev/null 2>&1; then

    timeout 30 docker load -i /srv/salt/kube-bins/kube-proxy.tar 1>/dev/null 2>&1
    rc=$?
    if [[ "${rc}" == 0 ]]; then
      let loadedImageFlags="${loadedImageFlags}|1"
    elif [[ "${rc}" == 124 ]]; then
      restart_docker=true
    fi
  fi

  # All required docker images got installed; exit the while loop.
  if [[ "${loadedImageFlags}" == 1 ]]; then break; fi

  # Sometimes docker load hangs; restarting the docker daemon resolves the issue.
  if [[ "${restart_docker}" == "true" ]]; then service docker restart; fi

  # sleep for 15 seconds before attempting to load docker images again
  sleep 15

done

# Now exit. After kube-push, salt will notice that the service is down and it
# will start it and new docker images will be loaded.
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,2 @@
{
}
@ -0,0 +1,9 @@
[Unit]
Description=Kubernetes Node Unpacker
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/etc/kubernetes/kube-node-unpacker.sh

[Install]
WantedBy=multi-user.target
@ -0,0 +1,147 @@
|
||||||
|
#! /bin/bash
|
||||||
|
# Copyright 2013 Google Inc. All Rights Reserved.
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
|
||||||
|
# Mount a disk, formatting it if necessary. If the disk looks like it may
|
||||||
|
# have been formatted before, we will not format it.
|
||||||
|
#
|
||||||
|
# This script uses blkid and file to search for magic "formatted" bytes
|
||||||
|
# at the beginning of the disk. Furthermore, it attempts to use fsck to
|
||||||
|
# repair the filesystem before formatting it.
|
||||||
|
|
||||||
|
FSCK=fsck.ext4
|
||||||
|
MOUNT_OPTIONS="discard,defaults"
|
||||||
|
MKFS="mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F"
|
||||||
|
if [ -e /etc/redhat-release ]; then
|
||||||
|
if grep -q '6\..' /etc/redhat-release; then
|
||||||
|
# lazy_journal_init is not recognized in redhat 6
|
||||||
|
MKFS="mkfs.ext4 -E lazy_itable_init=0 -F"
|
||||||
|
elif grep -q '7\..' /etc/redhat-release; then
|
||||||
|
FSCK=fsck.xfs
|
||||||
|
MKFS=mkfs.xfs
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
LOGTAG=safe_format_and_mount
|
||||||
|
LOGFACILITY=user
|
||||||
|
|
||||||
|
function log() {
  local readonly severity=$1; shift;
  logger -t ${LOGTAG} -p ${LOGFACILITY}.${severity} -s "$@"
}

function log_command() {
  local log_file=$(mktemp)
  local retcode
  log info "Running: $*"
  $* > ${log_file} 2>&1
  retcode=$?
  # Only return the last 1000 lines of the logfile, just in case it's HUGE.
  tail -1000 ${log_file} | logger -t ${LOGTAG} -p ${LOGFACILITY}.info -s
  rm -f ${log_file}
  return ${retcode}
}

function help() {
  cat >&2 <<EOF
$0 [-f fsck_cmd] [-m mkfs_cmd] [-o mount_opts] <device> <mountpoint>
EOF
  exit 0
}

while getopts ":hf:o:m:" opt; do
  case $opt in
    h) help;;
    f) FSCK=$OPTARG;;
    o) MOUNT_OPTIONS=$OPTARG;;
    m) MKFS=$OPTARG;;
    -) break;;
    \?) log error "Invalid option: -${OPTARG}"; exit 1;;
    :) log error "Option -${OPTARG} requires an argument."; exit 1;;
  esac
done

shift $(($OPTIND - 1))
readonly DISK=$1
readonly MOUNTPOINT=$2

[[ -z ${DISK} ]] && help
[[ -z ${MOUNTPOINT} ]] && help

# Note: despite the name, this returns 0 (success) when the disk DOES appear
# to hold a filesystem, and 1 when it looks unformatted.
function disk_looks_unformatted() {
  blkid ${DISK}
  if [[ $? == 0 ]]; then
    return 0
  fi

  local file_type=$(file --special-files ${DISK})
  case ${file_type} in
    *filesystem*)
      return 0;;
  esac

  return 1
}

function format_disk() {
  log_command ${MKFS} ${DISK}
}

function try_repair_disk() {
  log_command ${FSCK} -a ${DISK}
  local fsck_return=$?
  if [[ ${fsck_return} -ge 8 ]]; then
    log error "Fsck could not correct errors on ${DISK}"
    return 1
  fi
  if [[ ${fsck_return} -gt 0 ]]; then
    log warning "Fsck corrected errors on ${DISK}"
  fi
  return 0
}

function try_mount() {
  local mount_retcode
  try_repair_disk

  log_command mount -o ${MOUNT_OPTIONS} ${DISK} ${MOUNTPOINT}
  mount_retcode=$?
  if [[ ${mount_retcode} == 0 ]]; then
    return 0
  fi

  # Check to see if it looks like a filesystem before formatting it.
  disk_looks_unformatted ${DISK}
  if [[ $? == 0 ]]; then
    log error "Disk ${DISK} looks formatted but won't mount. Giving up."
    return ${mount_retcode}
  fi

  # The disk looks like it has not been formatted before.
  format_disk
  if [[ $? != 0 ]]; then
    log error "Format of ${DISK} failed."
  fi

  log_command mount -o ${MOUNT_OPTIONS} ${DISK} ${MOUNTPOINT}
  mount_retcode=$?
  if [[ ${mount_retcode} == 0 ]]; then
    return 0
  fi
  log error "Tried everything we could, but could not mount ${DISK}."
  return ${mount_retcode}
}

try_mount
exit $?
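The thresholds in `try_repair_disk` follow fsck's exit-status convention: the value is a bitmask where 1 means errors were corrected and anything >= 8 means fsck could not fix the disk. A stand-alone sketch of that interpretation (hypothetical helper name, not part of the script):

```shell
# Map an fsck exit status to the decision the script makes.
interpret_fsck() {
  rc=$1
  if [ "${rc}" -ge 8 ]; then
    echo "unrecoverable"     # the script logs an error and gives up
  elif [ "${rc}" -gt 0 ]; then
    echo "corrected"         # fsck fixed something; safe to continue
  else
    echo "clean"
  fi
}

interpret_fsck 0    # -> clean
interpret_fsck 1    # -> corrected
interpret_fsck 8    # -> unrecoverable
```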
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,40 @@
# kube-proxy podspec
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: {{ .KubeProxy.Image }}
    resources:
      requests:
        cpu: {{ .KubeProxy.CPURequest }}
    command:
    - /bin/sh
    - -c
    - kube-proxy --kubeconfig=/var/lib/kube-proxy/kubeconfig --resource-container="" {{ BuildFlags .KubeProxy }} 1>>/var/log/kube-proxy.log 2>&1
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /var/log
      name: varlog
      readOnly: false
    - mountPath: /var/lib/kube-proxy/kubeconfig
      name: kubeconfig
      readOnly: false
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /var/lib/kube-proxy/kubeconfig
    name: kubeconfig
  - hostPath:
      path: /var/log
    name: varlog
@ -0,0 +1,16 @@
apiVersion: v1
kind: Config
users:
- name: kube-proxy
  user:
    token: {{ .GetToken "kube-proxy" }}
clusters:
- name: local
  cluster:
    certificate-authority-data: {{ Base64Encode .CACertificate.AsString }}
contexts:
- context:
    cluster: local
    user: kube-proxy
  name: service-account-context
current-context: service-account-context
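The `Base64Encode` template function inlines the CA certificate as `certificate-authority-data`. The same encoding can be reproduced from a shell; this sketch uses a stand-in string rather than a real PEM bundle:

```shell
# Stand-in content; a real kubeconfig embeds the base64 of the CA PEM here.
printf '%s' 'fake-ca-pem' | base64
# -> ZmFrZS1jYS1wZW0=
```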
@ -0,0 +1,3 @@
{
  "mode": "0400"
}
@ -0,0 +1,3 @@
{
  "ifNotExists": true
}
@ -0,0 +1,10 @@
KubeProxy:
  Image: gcr.io/google_containers/kube-proxy:v1.2.2
  Master: https://kubernetes-master
  LogLevel: 2
  # 20m might cause kube-proxy CPU starvation on full nodes, resulting in
  # delayed service updates. But, giving it more would be a breaking change
  # to the overhead requirements for existing clusters.
  # Any change here should be accompanied by a proportional change in CPU
  # requests of other per-node add-ons (e.g. fluentd).
  CPURequest: 20m
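The options above are rendered into the kube-proxy command line by the `BuildFlags` template helper. A hypothetical stand-in showing just the shape of that transformation (not the real implementation):

```shell
# Turn key/value pairs into --key=value flags, the way an options struct
# is flattened onto a command line.
render_flags() {
  out=""
  while [ $# -ge 2 ]; do
    out="${out} --${1}=${2}"
    shift 2
  done
  echo "${out# }"
}

render_flags master https://kubernetes-master v 2
# -> --master=https://kubernetes-master --v=2
```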
@ -0,0 +1,4 @@
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

APT::Periodic::AutocleanInterval "7";
@ -0,0 +1,2 @@
# Kubernetes
net.ipv4.ip_forward=1
@ -0,0 +1,3 @@
{
  "onChangeExecute": [ "sysctl", "--system" ]
}
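`onChangeExecute` asks the provisioner to run `sysctl --system` only when the managed file's content actually changes. A rough sketch of that change detection (temporary stand-in file; not nodeup's real implementation):

```shell
conf=$(mktemp)
before=$(sha1sum "${conf}" | cut -d' ' -f1)   # hash of the (empty) file
echo 'net.ipv4.ip_forward=1' > "${conf}"
after=$(sha1sum "${conf}" | cut -d' ' -f1)
if [ "${before}" != "${after}" ]; then
  echo "content changed; would run: sysctl --system"
fi
rm -f "${conf}"
```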
@ -0,0 +1,2 @@
DOCKER_OPTS="{{ BuildFlags .Docker }}"
DOCKER_NOFILE=1000000
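This fragment is an environment file: docker's systemd unit reads it via `EnvironmentFile`, and a shell consumes it the same way by sourcing. A sketch with a temporary stand-in file and example flag values:

```shell
env_file=$(mktemp)
cat > "${env_file}" <<'EOF'
DOCKER_OPTS="--log-level=warn --iptables=false"
DOCKER_NOFILE=1000000
EOF
. "${env_file}"          # source it, as an init script would
echo "${DOCKER_OPTS}"
# -> --log-level=warn --iptables=false
rm -f "${env_file}"
```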
@ -0,0 +1,45 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This script is intended to be run periodically, to check the health
# of docker. If it detects a failure, it will restart docker using systemctl.

if timeout 10 docker version > /dev/null; then
  echo "docker healthy"
  exit 0
fi

echo "docker failed"
echo "Giving docker 30 seconds grace before restarting"
sleep 30

if timeout 10 docker version > /dev/null; then
  echo "docker recovered"
  exit 0
fi

echo "docker still down; triggering docker restart"
systemctl restart docker

echo "Waiting 60 seconds to give docker time to start"
sleep 60

if timeout 10 docker version > /dev/null; then
  echo "docker recovered"
  exit 0
fi

echo "docker still failing"
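The healthcheck above follows a probe / grace-period / restart / re-probe pattern. A condensed sketch of that control flow with a stand-in probe command instead of `docker version`, and the sleeps elided (hypothetical function name):

```shell
probe_with_grace() {
  probe=$1
  if ${probe}; then echo "healthy"; return 0; fi
  # In the real script: sleep 30 for grace, re-probe, then restart the
  # service, sleep 60, and probe one final time.
  if ${probe}; then echo "recovered"; return 0; fi
  echo "still down; would restart the service"
  return 1
}

probe_with_grace true
# -> healthy
probe_with_grace false || true
# -> still down; would restart the service
```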
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,21 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This script is intended to be run before we start Docker.

# Clean up the docker network checkpoint, to avoid running into a known
# docker issue (https://github.com/docker/docker/issues/18283)
rm -rf /var/lib/docker/network
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1,9 @@
[Unit]
Description=Run docker-healthcheck once

[Service]
Type=oneshot
ExecStart=/opt/kubernetes/helpers/docker-healthcheck

[Install]
WantedBy=multi-user.target
@ -0,0 +1,3 @@
{
  "manageState": false
}
@ -0,0 +1,9 @@
[Unit]
Description=Trigger docker-healthcheck periodically

[Timer]
OnUnitInactiveSec=10s
Unit=docker-healthcheck.service

[Install]
WantedBy=multi-user.target
@ -0,0 +1,21 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/docker daemon -H fd:// "$DOCKER_OPTS"
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
Restart=always
RestartSec=2s
StartLimitInterval=0
ExecStartPre=/opt/kubernetes/helpers/docker-prestart

[Install]
WantedBy=multi-user.target
@ -0,0 +1,3 @@
{
  "manageState": false
}
@ -0,0 +1,4 @@
{
  "source": "https://storage.googleapis.com/kubernetes-release/docker/apache2.txt",
  "hash": "2b8b815229aa8a61e483fb4ba0588b8b6c491890"
}
@ -0,0 +1,2 @@
Docker:
  Storage: devicemapper
@ -0,0 +1,5 @@
{% set log_level = "--log-level=warn" -%}
{% if pillar['docker_test_log_level'] is defined -%}
{% set log_level = pillar['docker_test_log_level'] -%}
{% endif -%}
docker.bridge=
@ -0,0 +1,5 @@
Docker:
  Bridge: cbr0
  LogLevel: warn
  IPTables: false
  IPMasq: false
@ -0,0 +1,7 @@
{
  "version": "1.9.1-0~jessie",
  "source": "http://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_1.9.1-0~jessie_amd64.deb",
  "hash": "c58c39008fd6399177f6b2491222e4438f518d78",

  "preventStart": true
}
@ -0,0 +1,2 @@
{
}
@ -0,0 +1,3 @@
{
  "mode": "0755"
}
@ -0,0 +1 @@
DAEMON_ARGS="{{ BuildFlags .Kubelet }}"
@ -0,0 +1,2 @@
{
}
@ -0,0 +1,3 @@
{
  "mode": "0755"
}