mirror of https://github.com/kubernetes/kops.git

Merge pull request #7593 from mikesplain/docs_site_map

Docs cleanup / mkdocs migration

This commit is contained in: commit 18cfa5f5b2

 Makefile | 8
@@ -871,7 +871,7 @@ prow-postsubmit: bazel-version-dist
 
 .PHONY: live-docs
 live-docs:
-	@docker build --pull -t kops/mkdocs images/mkdocs
+	@docker build -t kops/mkdocs images/mkdocs
 	@docker run --rm -it -p 3000:3000 -v ${PWD}:/docs kops/mkdocs
 
 .PHONY: build-docs
@@ -879,6 +879,12 @@ build-docs:
 	@docker build --pull -t kops/mkdocs images/mkdocs
 	@docker run --rm -v ${PWD}:/docs kops/mkdocs build
 
+# TODO: Remove before merging
+.PHONY: build-docs-netlify
+build-docs-netlify:
+	@pip install -r ${MAKEDIR}/images/mkdocs/requirements.txt
+	@mkdocs build
+
 # Update machine_types.go
 .PHONY: update-machine-types
 update-machine-types:
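The temporary `build-docs-netlify` target boils down to two commands; the sketch below echoes a hypothetical local equivalent rather than running it (pip, mkdocs, and the requirements path are assumptions about the environment):

```shell
# Hypothetical local equivalent of the build-docs-netlify recipe; the commands
# are echoed, not executed, since pip/mkdocs may not be installed here.
MAKEDIR="$(pwd)"
echo "pip install -r ${MAKEDIR}/images/mkdocs/requirements.txt"
echo "mkdocs build"
```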
@@ -30,15 +30,15 @@ in alpha, and other platforms planned.
 
 ## Launching a Kubernetes cluster hosted on AWS or GCE
 
-To replicate the demo above exactly, see the [tutorial](/docs/aws.md) for
+To replicate the demo above exactly, see the [tutorial](/docs/getting_started/aws.md) for
 launching a Kubernetes cluster hosted on AWS.
 
-To install a Kubernetes cluster on GCE please follow this [guide](/docs/tutorial/gce.md).
+To install a Kubernetes cluster on GCE please follow this [guide](/docs/getting_started/gce.md).
 
 
 ## Features
 
-* Automates the provisioning of Kubernetes clusters in [AWS](/docs/aws.md) and [GCE](/docs/tutorial/gce.md)
+* Automates the provisioning of Kubernetes clusters in [AWS](/docs/getting_started/aws.md) and [GCE](/docs/getting_started/gce.md)
 * Deploys Highly Available (HA) Kubernetes Masters
 * Built on a state-sync model for automatic **dry-runs** and **idempotency**
 * Ability to generate [Terraform](/docs/terraform.md)
@@ -32,10 +32,10 @@ in alpha, and other platforms planned.
 
 ## Launching a Kubernetes cluster hosted on AWS, GCE, DigitalOcean or OpenStack
 
-To replicate the above demo, check out our [tutorial](/docs/aws.md) for
+To replicate the above demo, check out our [tutorial](/docs/getting_started/aws.md) for
 launching a Kubernetes cluster hosted on AWS.
 
-To install a Kubernetes cluster on GCE please follow this [guide](/docs/tutorial/gce.md).
+To install a Kubernetes cluster on GCE please follow this [guide](/docs/getting_started/gce.md).
 
 To install a Kubernetes cluster on DigitalOcean, follow this [guide](/docs/tutorial/digitalocean.md).
@@ -45,7 +45,7 @@ To install a Kubernetes cluster on OpenStack, follow this [guide](/docs/tutorial
 
 ## Features
 
-* Automates the provisioning of Kubernetes clusters in [AWS](/docs/aws.md), [OpenStack](/docs/tutorial/openstack.md) and [GCE](/docs/tutorial/gce.md)
+* Automates the provisioning of Kubernetes clusters in [AWS](/docs/getting_started/aws.md), [OpenStack](/docs/getting_started/openstack.md) and [GCE](/docs/getting_started/gce.md)
 * Deploys Highly Available (HA) Kubernetes Masters
 * Built on a state-sync model for **dry-runs** and automatic **idempotency**
 * Ability to generate [Terraform](/docs/terraform.md)
@@ -7,4 +7,4 @@ arrange:
 - examples
 - tutorial
 - releases
-- aws.md
+- getting_started/aws.md
@@ -1,27 +1,33 @@
 # Documentation Index
 
 ## Quick start
-* [Getting started on AWS](aws.md)
-* [Getting started on GCE](tutorial/gce.md)
+* [Getting started on AWS](getting_started/aws.md)
+* [Getting started on GCE](getting_started/gce.md)
 * [CLI reference](cli/kops.md)
 
 
 ## Overview
 
-* [Command-line interface](#command-line-interface)
-* [Inspection](#inspection)
-* [`kops` design documents](#kops-design-documents)
-* [Networking](#networking)
-* [Operations](#operations)
-* [Security](#security)
-* [Development](#development)
+- [Documentation Index](#documentation-index)
+  - [Quick start](#quick-start)
+  - [Overview](#overview)
+  - [Command-line interface](#command-line-interface)
+  - [Advanced / Detailed List of Configurations](#advanced--detailed-list-of-configurations)
+  - [API / Configuration References](#api--configuration-references)
+  - [API Usage Guides](#api-usage-guides)
+  - [Operations](#operations)
+  - [Networking](#networking)
+  - [`kops` design documents](#kops-design-documents)
+  - [Security](#security)
+  - [Inspection](#inspection)
+  - [Development](#development)
 
 
 ## Command-line interface
 
 * [CLI argument explanations](arguments.md)
 * [CLI reference](cli/kops.md)
-* [Commands](commands.md)
+* [Commands](usage/commands.md)
   * miscellaneous CLI-related remarks
 * [Experimental features](experimental.md)
   * list of and how to enable experimental flags in the CLI
@@ -46,7 +52,7 @@
 * [Cluster addons](addons.md)
 * [Cluster configuration management](changing_configuration.md)
 * [Cluster desired configuration creation from template](cluster_template.md)
-* [Cluster upgrades and migrations](cluster_upgrades_and_migrations.md)
+* [Cluster upgrades and migrations](operations/cluster_upgrades_and_migrations.md)
 * [`etcd` volume encryption setup](etcd_volume_encryption.md)
 * [`etcd` backup/restore](etcd/backup-restore.md)
 * [GPU setup](gpu.md)
 docs/addons.md | 155
@@ -1,155 +0,0 @@
-## Installing Kubernetes Addons
-
-With kops you manage addons by using kubectl.
-
-(For a description of the addon-manager, please see [addon_manager.md](addon_manager.md).)
-
-Addons in Kubernetes are traditionally done by copying files to `/etc/kubernetes/addons` on the master. But this
-doesn't really make sense in HA master configurations. We also have kubectl available, and addons are just a thin
-wrapper over calling kubectl.
-
-The command `kops create cluster` does not support specifying addons to be added to the cluster when it is created. Instead they can be added after cluster creation using kubectl. Alternatively, when creating a cluster from a YAML manifest, addons can be specified using `spec.addons`.
-```yaml
-spec:
-  addons:
-  - manifest: kubernetes-dashboard
-  - manifest: s3://kops-addons/addon.yaml
-```
-
-This document describes how to install some common addons and how to create your own custom ones.
-
-### Custom addons
-
-The docs about the [addon manager](addon_manager.md) describe in more detail how to define an addon resource with regards to versioning.
-Here is a minimal example of an addon manifest that would install two different addons.
-
-```yaml
-kind: Addons
-metadata:
-  name: example
-spec:
-  addons:
-  - name: foo.addons.org.io
-    version: 0.0.1
-    selector:
-      k8s-addon: foo.addons.org.io
-    manifest: foo.addons.org.io/v0.0.1.yaml
-  - name: bar.addons.org.io
-    version: 0.0.1
-    selector:
-      k8s-addon: bar.addons.org.io
-    manifest: bar.addons.org.io/v0.0.1.yaml
-```
-
-In this example the folder structure should look like this:
-
-```
-addon.yaml
-foo.addons.org.io
-  v0.0.1.yaml
-bar.addons.org.io
-  v0.0.1.yaml
-```
-
-The yaml files in the foo/bar folders can be any kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced as above in `spec.addons`. In order for master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the master nodes using `spec.additionalPolicies`, like so:
-```yaml
-spec:
-  additionalPolicies:
-    master: |
-      [
-        {
-          "Effect": "Allow",
-          "Action": [
-            "s3:GetObject"
-          ],
-          "Resource": ["arn:aws:s3:::kops-addons/*"]
-        },
-        {
-          "Effect": "Allow",
-          "Action": [
-            "s3:GetBucketLocation",
-            "s3:ListBucket"
-          ],
-          "Resource": ["arn:aws:s3:::kops-addons"]
-        }
-      ]
-```
-The masters will poll for changes in the bucket and keep the addons up to date.
-
-
-### Dashboard
-
-The [dashboard project](https://github.com/kubernetes/dashboard) provides a nice administrative UI.
-
-Install using:
-```
-kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
-```
-
-Then follow the instructions in the [dashboard documentation](https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above) to access the dashboard.
-
-The login credentials are:
-
-* Username: `admin`
-* Password: get by running `kops get secrets kube --type secret -oplaintext` or `kubectl config view --minify`
-
-#### RBAC
-
-For k8s version > 1.6 with [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) enabled, it's necessary to add your own permissions to the dashboard. Please read the [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) docs before applying permissions.
-
-Below is an example giving **full access** to the dashboard.
-
-```
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: kubernetes-dashboard
-  labels:
-    k8s-app: kubernetes-dashboard
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-- kind: ServiceAccount
-  name: kubernetes-dashboard
-  namespace: kube-system
-```
-
-### Monitoring with Heapster - Standalone
-
-Monitoring supports the horizontal pod autoscaler.
-
-Install using:
-```
-kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.11.0.yaml
-```
-Please note that [heapster is retired](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md). Consider using [metrics-server](https://github.com/kubernetes-incubator/metrics-server) and a third-party metrics pipeline to gather Prometheus-format metrics instead.
-
-### Monitoring with Prometheus Operator + kube-prometheus
-
-The [Prometheus Operator](https://github.com/coreos/prometheus-operator/) makes the Prometheus configuration Kubernetes native, and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.
-
-[kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus) combines the Prometheus Operator with a collection of manifests to help getting started with monitoring Kubernetes itself and applications running on top of it.
-
-```console
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml
-```
-
-### Route53 Mapper
-
-Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
-The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
-use one or the other.
-[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)
-
-route53-mapper automates creation and updating of entries on Route53 with `A` records pointing
-to ELB-backed `LoadBalancer` services created by Kubernetes.
-
-The project is created by wearemolecule, and maintained at
-[wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes).
-[Usage instructions](https://github.com/kubernetes/kops/blob/master/addons/route53-mapper/README.md)
-
-Install using:
-```
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/route53-mapper/v1.3.0.yml
-```
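Since the `additionalPolicies` value shown in the removed addons doc is raw JSON embedded in a YAML string, a quick syntax check before running `kops edit cluster` can catch a malformed policy early. This is a hedged sketch (assumes `python3` is on the PATH; bucket ARNs are illustrative):

```shell
# Validate an additionalPolicies JSON fragment before pasting it into the
# cluster spec; json.tool exits non-zero on a parse error.
cat > /tmp/addon-policy.json <<'EOF'
[
  {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::kops-addons/*"]},
  {"Effect": "Allow", "Action": ["s3:GetBucketLocation", "s3:ListBucket"], "Resource": ["arn:aws:s3:::kops-addons"]}
]
EOF
python3 -m json.tool /tmp/addon-policy.json > /dev/null && echo "policy JSON is valid"
```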
@@ -37,9 +37,9 @@ kops create cluster $NAME \
 
 You can simply use the kops command `kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml`
 
-Note: for the above command to work the cluster NAME and the KOPS_STATE_STORE will have to be exported in your environment.
+Note: for the above command to work, the cluster NAME and the KOPS_STATE_STORE will have to be exported in your environment.
 
-For more information on how to use and modify the configurations see [here](manifests_and_customizing_via_api.md).
+For more information on how to use and modify the configurations see [here](../manifests_and_customizing_via_api.md).
 
 ## Managing instance groups
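The note about exported variables can be sketched concretely; the cluster and bucket names below are illustrative, and the `kops` command itself is only echoed:

```shell
# Both variables must be present in the environment for the redirect-to-YAML
# one-liner to work (illustrative values, not a real cluster).
export NAME="a-fun-cluster.example.com"
export KOPS_STATE_STORE="s3://my-kops-state-store"
echo "would run: kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml"
```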
@@ -14,4 +14,4 @@ The following experimental features are currently available:
 * `+EnableSeparateConfigBase` - Allow a config-base that is different from the state store.
 * `+SpecOverrideFlag` - Allow setting spec values on `kops create`.
 * `+ExperimentalClusterDNS` - Turns off validation of the kubelet cluster dns flag.
-* `+EnableNodeAuthorization` - Enable support of Node Authorization, see [node_authorization.md](node_authorization.md).
+* `+EnableNodeAuthorization` - Enable support of Node Authorization, see [node_authorization.md](../node_authorization.md).
@@ -119,7 +119,7 @@ data:
 
 ### Creating a new cluster with IAM Authenticator on.
 
-* Create a cluster following the [AWS getting started guide](https://github.com/kubernetes/kops/blob/master/docs/aws.md)
+* Create a cluster following the [AWS getting started guide](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
 * When you reach the "Customize Cluster Configuration" section of the guide modify the cluster spec and add the Authentication and Authorization configs to the YAML config.
 * Continue following the cluster creation guide to build the cluster.
 * :warning: When the cluster first comes up the aws-iam-authenticator PODs will be in a bad state.
@@ -4,15 +4,15 @@
 
 Kops used to only support Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2 `gossip` has been added, which makes it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, it's officially supported to provision a fully-functional kubernetes cluster in AWS China Region [which doesn't have Route53 so far][1] since [1.7][2]. Both `cn-north-1` and `cn-northwest-1` should be supported, but only `cn-north-1` is tested.
 
-Most of the following procedures to provision a cluster are the same as [the guide to use kops in AWS](aws.md). The differences will be highlighted and the similar parts will be omitted.
+Most of the following procedures to provision a cluster are the same as [the guide to use kops in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.
 
-*NOTE: THE FOLLOWING PROCEDURES ARE ONLY TESTED WITH KOPS 1.10.0, 1.10.1 AND KUBERNETES 1.9.11, 1.10.12*
+*NOTE: THE FOLLOWING PROCEDURES ARE ONLY TESTED WITH KOPS 1.10.0, 1.10.1 AND KUBERNETES 1.9.11, 1.10.12*
 
-### [Install kops](aws.md#install-kops)
+### [Install kops](getting_started/aws.md#install-kops)
 
-### [Install kubectl](aws.md#install-kubectl)
+### [Install kubectl](getting_started/aws.md#install-kubectl)
 
-### [Setup your environment](aws.md#setup-your-environment)
+### [Setup your environment](getting_started/aws.md#setup-your-environment)
 
 #### AWS
@@ -31,15 +31,15 @@ And export it correctly.
 export AWS_REGION=$(aws configure get region)
 ```
 
-## [Configure DNS](aws.md#configure-dns)
+## [Configure DNS](getting_started/aws.md#configure-dns)
 
 As the note kindly points out, a gossip-based cluster can be easily created by having the cluster name end with `.k8s.local`. We will adopt this trick below. The rest of this section can be skipped safely.
 
-## [Testing your DNS setup](aws.md#testing-your-dns-setup)
+## [Testing your DNS setup](getting_started/aws.md#testing-your-dns-setup)
 
 Thanks to `gossip`, this section can be skipped safely as well.
 
-## [Cluster State storage](aws.md#cluster-state-storage)
+## [Cluster State storage](getting_started/aws.md#cluster-state-storage)
 
 Since we are provisioning a cluster in AWS China Region, we need to create a dedicated S3 bucket in AWS China Region.
@@ -47,7 +47,7 @@ Since we are provisioning a cluster in AWS China Region, we need to create a ded
 aws s3api create-bucket --bucket prefix-example-com-state-store --create-bucket-configuration LocationConstraint=$AWS_REGION
 ```
 
-## [Creating your first cluster](aws.md#creating-your-first-cluster)
+## [Creating your first cluster](getting_started/aws.md#creating-your-first-cluster)
 
 ### Ensure you have a VPC which can access the internet NORMALLY
@@ -55,7 +55,7 @@ First of all, we have to solve the slow and unstable connection to the internet
 
 ### Prepare kops ami
 
-We have to build our own AMI because there is [no official kops ami in AWS China Regions][3]. There're two ways to accomplish so.
+We have to build our own AMI because there is [no official kops ami in AWS China Regions][3]. There are two ways to accomplish this.
 
 #### ImageBuilder **RECOMMENDED**
@@ -99,7 +99,7 @@ Following [the comment][5] to copy the kops image from another region, e.g. `ap-
 
 No matter how the AMI is built, we end up with one, e.g. `k8s-1.9-debian-jessie-amd64-hvm-ebs-2018-07-18`.
 
-### [Prepare local environment](aws.md#prepare-local-environment)
+### [Prepare local environment](getting_started/aws.md#prepare-local-environment)
 
 Set up a few environment variables.
@@ -108,7 +108,7 @@ export NAME=example.k8s.local
 export KOPS_STATE_STORE=s3://prefix-example-com-state-store
 ```
 
-### [Create cluster configuration](aws.md#create-cluster-configuration)
+### [Create cluster configuration](getting_started/aws.md#create-cluster-configuration)
 
 We will need to note which availability zones are available to us. AWS China (Beijing) Region only has two availability zones. It will have [the same problem][6] as other regions with fewer than three AZs: there is no true HA support with only two AZs. You can [add more master nodes](#add-more-master-nodes) to improve reliability within one AZ.
@@ -135,7 +135,7 @@ kops create cluster \
 ${NAME}
 ```
 
-### [Customize Cluster Configuration](aws.md#prepare-local-environment)
+### [Customize Cluster Configuration](getting_started/aws.md#prepare-local-environment)
 
 Now that we have a cluster configuration, we adjust the subnet config to reuse [shared subnets](run_in_existing_vpc.md#shared-subnets) by editing the description.
@@ -169,13 +169,13 @@ spec:
 
 Please note that this mirror *MIGHT* not be suitable for some cases. It can be replaced by any other registry mirror as long as it's compatible with the docker api.
 
-### [Build the Cluster](aws.md#build-the-cluster)
+### [Build the Cluster](getting_started/aws.md#build-the-cluster)
 
-### [Use the Cluster](aws.md#use-the-cluster)
+### [Use the Cluster](getting_started/aws.md#use-the-cluster)
 
-### [Delete the Cluster](aws.md#delete-the-cluster)
+### [Delete the Cluster](getting_started/aws.md#delete-the-cluster)
 
-## [What's next?](aws.md#whats-next)
+## [What's next?](getting_started/aws.md#whats-next)
 
 ### Add more master nodes
@@ -14,7 +14,7 @@ This strategy can be extended to sequentially upgrade Kops on multiple clusters,
 This page provides examples for managing Kops clusters in CI environments.
 The [Manifest documentation](./manifests_and_customizing_via_api.md) describes how to create the YAML manifest files locally and includes high level examples of commands described below.
 
-If you have a solution for a different CI platform or deployment strategy, feel free to open a Pull Request!
+If you have a solution for a different CI platform or deployment strategy, feel free to open a Pull Request!
 
 ## GitLab CI
@@ -79,9 +79,9 @@ roll:
 * This pipeline setup will create and update existing clusters in place. It does not perform a "blue/green" deployment of multiple clusters.
 * The pipeline can be extended to support multiple clusters by making separate jobs per cluster for each stage.
   Ensure the `KOPS_CLUSTER_NAME` variable is set correctly for each set of jobs.
 
 
 In this case, it is possible to use `kops toolbox template` to manage one YAML template and per-cluster values files with which to render the template.
-See the [Cluster Template](./cluster_template.md) documentation for more information.
+See the [Cluster Template](./operations/cluster_template.md) documentation for more information.
 `kops toolbox template` would then be run before `kops replace`.
 
 ### Limitations
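The render-then-replace ordering described above can be sketched as follows (file names are hypothetical and the commands are echoed rather than executed):

```shell
# The template must be rendered before `kops replace` sees it; reversing the
# order would hand kops an unrendered template.
RENDERED="cluster.yaml"
echo "kops toolbox template --template cluster.tmpl.yaml --values values.yaml --output ${RENDERED}"
echo "kops replace -f ${RENDERED} --force"
```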
@@ -19,3 +19,8 @@ It does not update the cloud resources; to apply the changes use "kops update cl
 
 * `Example`: Example(s) of how to use the command. This field is formatted as a code snippet in the docs, so make sure any comments are written as bash comments (e.g. `# this is a comment`).
 
+## Mkdocs
+
+`make live-docs` runs a docker container to live-build and view the docs when working on them locally.
+
+`make build-docs` will build a final version of the docs, which will be checked in via automation.
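Both targets rely on a docker bind mount of the working tree; this small sketch (hypothetical, shell only) shows how the `-v` argument used by `live-docs` is assembled, without actually invoking docker:

```shell
# Build the docker bind-mount argument the live-docs target uses: the host
# side is the current checkout, the container side is /docs.
DOCS_MOUNT="${PWD}:/docs"
echo "docker run --rm -it -p 3000:3000 -v ${DOCS_MOUNT} kops/mkdocs"
```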
@@ -1,65 +0,0 @@
-# Etcd Volume Encryption
-
-You must configure etcd volume encryption before bringing up your cluster. You cannot add etcd volume encryption to an already running cluster.
-
-## Encrypting Etcd Volumes Using the Default AWS KMS Key
-
-Edit your cluster to add `encryptedVolume: true` to each etcd volume:
-
-`kops edit cluster ${CLUSTER_NAME}`
-
-```
-...
-etcdClusters:
-- etcdMembers:
-  - instanceGroup: master-us-east-1a
-    name: a
-    encryptedVolume: true
-  name: main
-- etcdMembers:
-  - instanceGroup: master-us-east-1a
-    name: a
-    encryptedVolume: true
-  name: events
-...
-```
-
-Update your cluster:
-
-```
-kops update cluster ${CLUSTER_NAME}
-# Review changes before applying
-kops update cluster ${CLUSTER_NAME} --yes
-```
-
-## Encrypting Etcd Volumes Using a Custom AWS KMS Key
-
-Edit your cluster to add `encryptedVolume: true` to each etcd volume:
-
-`kops edit cluster ${CLUSTER_NAME}`
-
-```
-...
-etcdClusters:
-- etcdMembers:
-  - instanceGroup: master-us-east-1a
-    name: a
-    encryptedVolume: true
-    kmsKeyId: <full-arn-of-your-kms-key>
-  name: main
-- etcdMembers:
-  - instanceGroup: master-us-east-1a
-    name: a
-    encryptedVolume: true
-    kmsKeyId: <full-arn-of-your-kms-key>
-  name: events
-...
-```
-
-Update your cluster:
-
-```
-kops update cluster ${CLUSTER_NAME}
-# Review changes before applying
-kops update cluster ${CLUSTER_NAME} --yes
-```
@@ -1,6 +1,6 @@
 # COMMON BASIC REQUIREMENTS FOR KOPS-RELATED LABS. PRE-FLIGHT CHECK:
 
-Before rushing in to replicate any of the exercises, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial for more information](../aws.md).
+Before rushing in to replicate any of the exercises, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial for more information](../getting_started/aws.md).
 
 Ensure that the following points are covered and working in your environment:
@@ -47,7 +47,7 @@ export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state
 Some things to note from here:
 
 - "NAME" will be an environment variable that we'll use from now on to refer to our cluster name. For this practical exercise, our cluster name is "coreosbasedkopscluster.k8s.local".
-- Because we'll use gossip DNS instead of a valid DNS domain on the AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered in our AWS tutorials). You can see more about this in our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/aws.md)
+- Because we'll use gossip DNS instead of a valid DNS domain on the AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered in our AWS tutorials). You can see more about this in our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
 
 
 ## COREOS IMAGE INFORMATION:
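Because gossip mode is selected purely by the cluster-name suffix, a small guard like this hypothetical snippet can verify the name before `kops create cluster` is ever invoked:

```shell
# Gossip DNS is selected when the cluster name ends in ".k8s.local";
# any other name makes kops expect a real Route53 / Cloud DNS zone.
NAME="coreosbasedkopscluster.k8s.local"
case "$NAME" in
  *.k8s.local) MODE="gossip" ;;
  *)           MODE="dns" ;;
esac
echo "$MODE"
```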
@@ -119,7 +119,7 @@ aws ec2 describe-images --region=us-east-1 --owner=595879546273 \
 --query 'sort_by(Images,&CreationDate)[-1].{id:ImageLocation}' \
 --output table
 
 
 ---------------------------------------------------
 |                 DescribeImages                  |
 +----+--------------------------------------------+
@@ -273,7 +273,7 @@ curl http://54.210.119.98
 curl http://34.200.247.63
 <html><body><h1>It works!</h1></body></html>
 
-```
+```
 
 **NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer in order to expose your replicated services. We are just testing things here, so that is not a concern right now; but for a "real" production environment, either use an AWS ELB service, or an nginx ingress controller as described in our documentation: [NGINX Based ingress controller](https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx).
@@ -48,7 +48,7 @@ export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state
 Some things to note from here:
 
 - "NAME" will be an environment variable that we'll use from now on to refer to our cluster name. For this practical exercise, our cluster name is "privatekopscluster.k8s.local".
-- Because we'll use gossip DNS instead of a valid DNS domain on the AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered in our AWS tutorials). You can see more about this in our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/aws.md)
+- Because we'll use gossip DNS instead of a valid DNS domain on the AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered in our AWS tutorials). You can see more about this in our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
 
 
 ## KOPS PRIVATE CLUSTER CREATION:
@@ -19,7 +19,7 @@ Examples:
 
 **YAML:**
 
-See the docs in [cluster_spec.md#adminaccess](cluster_spec.md#adminaccess)
+See the docs in [../cluster_spec.md#adminaccess](../cluster_spec.md#adminaccess)
 
 ## dns-zone
@@ -67,7 +67,7 @@ Values:
 
 # API only Arguments
 
-Certain arguments can only be passed via the API, e.g. `kops edit cluster`. The following documents some of the more interesting or lesser-known options. See the [Cluster Spec](./cluster_spec.md) page for more fields.
+Certain arguments can only be passed via the API, e.g. `kops edit cluster`. The following documents some of the more interesting or lesser-known options. See the [Cluster Spec](./../cluster_spec.md) page for more fields.
 
 ## kubeletPreferredAddressTypes
@@ -82,4 +82,4 @@ kubeAPIServer:
 - ExternalIP
 ```
 
-More information about using YAML is available [here](manifests_and_customizing_via_api.md).
+More information about using YAML is available [here](../manifests_and_customizing_via_api.md).
@@ -1,22 +1,6 @@
-<p align="center">
-  <img src="img/k8s-aws.png"> </image>
-</p>
-# Getting Started
+# Getting Started with kops on AWS
 
-## Install kops
-
-Before we can bring up the cluster we need to [install the CLI tool](install.md) `kops`.
-
-## Install kubectl
-
-In order to control Kubernetes clusters we need to [install the CLI tool](install.md) `kubectl`.
-
-#### Other Platforms
-
-* [Kubernetes Latest Release](https://github.com/kubernetes/kubernetes/releases/latest)
-
-* [Installation Guide](http://kubernetes.io/docs/user-guide/prereqs/)
+Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md).
 
 ## Setup your environment
@@ -112,7 +96,7 @@ This is copying the NS servers of your **SUBDOMAIN** up to the **PARENT**
 domain in Route53. To do this you should:
 
 * Create the subdomain, and note your **SUBDOMAIN** name servers (If you have
-  already done this you can also [get the values](ns.md))
+  already done this you can also [get the values](../advanced/ns.md))
 
 ```bash
 # Note: This example assumes you have jq installed locally.
@@ -186,7 +170,7 @@ You might need to grab [jq](https://github.com/stedolan/jq/wiki/Installation)
 for some of these instructions.
 
 * Create the subdomain, and note your name servers (If you have already done
-  this you can also [get the values](ns.md))
+  this you can also [get the values](../advanced/ns.md))
 
 ```bash
 ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain.example.com --caller-reference $ID | jq .DelegationSet.NameServers
@@ -271,7 +255,7 @@ to revert or recover a previous state store.
 aws s3api put-bucket-versioning --bucket prefix-example-com-state-store --versioning-configuration Status=Enabled
 ```
 
-Information regarding the cluster state store location must be set when using the `kops` cli; see [state store](state.md) for further information.
+Information regarding the cluster state store location must be set when using the `kops` cli; see [state store](../state.md) for further information.
 
 ### Using S3 default bucket encryption
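The bucket named in `put-bucket-versioning` and the `KOPS_STATE_STORE` URL must refer to the same bucket; a hedged sketch (illustrative names, the aws call left commented out) that derives one from the other so the two cannot drift:

```shell
# Strip the s3:// scheme from the state-store URL to get the raw bucket name.
export KOPS_STATE_STORE="s3://prefix-example-com-state-store"
BUCKET="${KOPS_STATE_STORE#s3://}"
echo "$BUCKET"
# aws s3api put-bucket-versioning --bucket "$BUCKET" --versioning-configuration Status=Enabled
```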
@ -284,7 +268,7 @@ aws s3api put-bucket-encryption --bucket prefix-example-com-state-store --server
|
|||
```
|
||||
|
||||
If the default encryption is not set or it cannot be checked, kops will resort to using client side AES256 encryption.
|
||||
|
||||
|
||||
### Sharing an S3 bucket across multiple accounts

It is possible to use a single S3 bucket for storing kops state for clusters

@ -334,7 +318,7 @@ aws ec2 describe-availability-zones --region us-west-2

```

Below is a create cluster command. We'll use the most basic example possible,
with more verbose examples in [high availability](high_availability.md#advanced-example).
with more verbose examples in [high availability](../operations/high_availability.md#advanced-example).
The command below will generate a cluster configuration, but not start building
it. Make sure that you have generated an SSH key pair before creating the cluster.
@ -431,9 +415,9 @@ modes](commands.md#other-interesting-modes) to learn more about generating

Terraform configurations, or running your cluster in an HA (Highly Available)
mode.

The [cluster spec docs](cluster_spec.md) can help to configure these "other
The [cluster spec docs](../cluster_spec.md) can help to configure these "other
interesting modes". Also be sure to check out how to run a [private network
topology](topology.md) in AWS.
topology](../topology.md) in AWS.

## Feedback
@ -1,6 +1,5 @@

# Documentation

Please refer to the [cli](cli) directory for full documentation.
# Commands & Arguments

Please refer to the kops [cli reference](../cli/kops.md) for full documentation.

## `kops create cluster`

@ -16,6 +15,13 @@ creating it).

It is recommended that you run it first in 'preview' mode with `kops update cluster --name <name>`, and then
when you are happy that it is making the right changes you run `kops update cluster --name <name> --yes`.

## `kops rolling-update cluster`

`kops rolling-update cluster <clustername>` performs a rolling update of the cluster's instances, replacing them as needed so that the cluster matches the cloud and kops specifications.

It is recommended that you run it first in 'preview' mode with `kops rolling-update cluster --name <name>`, and then
when you are happy that it is making the right changes you run `kops rolling-update cluster --name <name> --yes`.

## `kops get clusters`

`kops get clusters` lists all clusters in the registry.
@ -32,24 +38,3 @@ when you are happy that it is deleting the right things you run `kops delete clu

## `kops version`

`kops version` will print the version of the code you are running.

## Other interesting modes:

* Build a terraform model: `--target=terraform` The terraform model will be built in `out/terraform`

* Build a Cloudformation model: `--target=cloudformation` The Cloudformation json file will be built in `out/cloudformation`

* Specify the k8s build to run: `--kubernetes-version=1.2.2`

* Run nodes in multiple zones: `--zones=us-east-1b,us-east-1c,us-east-1d`

* Run with a HA master: `--master-zones=us-east-1b,us-east-1c,us-east-1d`

* Specify the number of nodes: `--node-count=4`

* Specify the node size: `--node-size=m4.large`

* Specify the master size: `--master-size=m4.large`

* Override the default DNS zone: `--dns-zone=<my.hosted.zone>`
@ -154,7 +154,7 @@ At this point you have a kubernetes cluster - the core commands to do so are as

and `kops update cluster`. There's a lot more power in kops, and even more power in kubernetes itself, so we've
put a few jumping-off places here. But when you're done, don't forget to [delete your cluster](#deleting-the-cluster).

* [Manipulate InstanceGroups](working-with-instancegroups.md) to add more nodes, change image
* [Manipulate InstanceGroups](../tutorial/working-with-instancegroups.md) to add more nodes, change image

# Deleting the cluster
@ -0,0 +1,27 @@

## Prerequisite

`kubectl` is required, see [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

## macOS via Homebrew

```console
brew update && brew install kops
```

The `kops` binary is also available via our [releases](https://github.com/kubernetes/kops/releases/latest).

## Linux

```console
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
```
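The nested `curl | grep | cut` pipeline above resolves the latest release tag from the GitHub API. As a self-contained sketch of just the extraction step (using an abridged, canned API response instead of a live call):

```bash
# Abridged sample of the JSON the GitHub releases API returns.
response='{
  "tag_name": "v1.14.0",
  "name": "Version 1.14.0"
}'

# Same extraction as the one-liner: find the tag_name line and take
# the 4th double-quote-delimited field, which is the tag value.
tag=$(echo "$response" | grep tag_name | cut -d '"' -f 4)
echo "$tag"   # prints v1.14.0
```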
## Windows

1. Get `kops-windows-amd64` from our [releases](https://github.com/kubernetes/kops/releases/latest).
2. Rename `kops-windows-amd64` to `kops.exe` and store it in a preferred path.
3. Make sure the path you chose is added to your `Path` environment variable.
@ -11,10 +11,9 @@ We like to think of it as `kubectl` for clusters.

`kops` helps you create, destroy, upgrade and maintain production-grade, highly
available, Kubernetes clusters from the command line. AWS (Amazon Web Services)
is currently officially supported, with GCE in beta support, and VMware vSphere
is currently officially supported, with GCE and OpenStack in beta support, and VMware vSphere
in alpha, and other platforms planned.

## Can I see it in action?

<p align="center">
@ -26,14 +25,14 @@ in alpha, and other platforms planned.

## Features

* Automates the provisioning of Kubernetes clusters in [AWS](aws.md) and [GCE](tutorial/gce.md)
* Automates the provisioning of Kubernetes clusters in [AWS](getting_started/aws.md) and [GCE](getting_started/gce.md)
* Deploys Highly Available (HA) Kubernetes Masters
* Built on a state-sync model for **dry-runs** and automatic **idempotency**
* Ability to generate [Terraform](terraform.md)
* Supports custom Kubernetes [add-ons](addons.md)
* Supports custom Kubernetes [add-ons](operations/addons.md)
* Command line [autocompletion](cli/kops_completion.md)
* YAML Manifest Based API [Configuration](manifests_and_customizing_via_api.md)
* [Templating](cluster_template.md) and dry-run modes for creating
* [Templating](operations/cluster_template.md) and dry-run modes for creating
  Manifests
* Choose from eight different CNI [Networking](networking.md) providers out-of-the-box
* Supports upgrading from [kube-up](upgrade_from_kubeup.md)
@ -51,30 +50,43 @@ in alpha, and other platforms planned.

### Kubernetes Version Support

kops is intended to be backward compatible. It is always recommended to use the
latest version of kops with whatever version of Kubernetes you are using. Always
use the latest version of kops.
latest version of kops with whatever version of Kubernetes you are using. We suggest
that kops users run one of the [3 minor versions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew) of Kubernetes supported upstream; however, we
do our best to support previous releases for a period of time.

One exception with regard to compatibility: kops supports the equivalent
Kubernetes minor release number. A minor version is the second digit in the
release number. kops version 1.8.0 has a minor version of 8. The numbering
release number. kops version 1.13.0 has a minor version of 13. The numbering
follows the semantic versioning specification, MAJOR.MINOR.PATCH.

For example, kops 1.8.0 does not support Kubernetes 1.9.2, but kops 1.9.0
supports Kubernetes 1.9.2 and previous Kubernetes versions. Only when kops minor
version matches, the Kubernetes minor version does kops officially support the
For example, kops 1.12.0 does not support Kubernetes 1.13.0, but kops 1.13.0
supports Kubernetes 1.12.2 and previous Kubernetes versions. Only when the kops minor
version matches the Kubernetes minor version does kops officially support the
Kubernetes release. kops does not stop a user from installing mismatching
versions of K8s, but Kubernetes releases always require kops to install specific
versions of components, like docker, that are tested against the particular
Kubernetes version.
#### Compatibility Matrix

| kops version | k8s 1.5.x | k8s 1.6.x | k8s 1.7.x | k8s 1.8.x | k8s 1.9.x |
|--------------|-----------|-----------|-----------|-----------|-----------|
| 1.9.x        | Y         | Y         | Y         | Y         | Y         |
| 1.8.x        | Y         | Y         | Y         | Y         | N         |
| 1.7.x        | Y         | Y         | Y         | N         | N         |
| 1.6.x        | Y         | Y         | N         | N         | N         |
| kops version  | k8s 1.9.x | k8s 1.10.x | k8s 1.11.x | k8s 1.12.x | k8s 1.13.x | k8s 1.14.x |
|---------------|-----------|------------|------------|------------|------------|------------|
| 1.14.x - Beta | ✔         | ✔          | ✔          | ✔          | ✔          | ✔          |
| 1.13.x        | ✔         | ✔          | ✔          | ✔          | ✔          | ❌         |
| 1.12.x        | ✔         | ✔          | ✔          | ✔          | ❌         | ❌         |
| 1.11.x        | ✔         | ✔          | ✔          | ❌         | ❌         | ❌         |
| ~~1.10.x~~    | ✔         | ✔          | ❌         | ❌         | ❌         | ❌         |
| ~~1.9.x~~     | ✔         | ❌         | ❌         | ❌         | ❌         | ❌         |

Use the latest version of kops for all releases of Kubernetes, with the caveat
that higher versions of Kubernetes are not _officially_ supported by kops.
that higher versions of Kubernetes are not _officially_ supported by kops. Releases that are ~~crossed out~~ _should_ work, but we suggest upgrading soon.
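The minor-version rule described above can be checked mechanically. A sketch under the MAJOR.MINOR.PATCH layout (the version strings below are illustrative):

```bash
kops_version="1.13.0"
k8s_version="1.12.2"

# The minor version is the second dot-separated field.
kops_minor=$(echo "$kops_version" | cut -d '.' -f 2)
k8s_minor=$(echo "$k8s_version" | cut -d '.' -f 2)

# kops officially supports Kubernetes releases up to its own minor version.
if [ "$k8s_minor" -le "$kops_minor" ]; then
  echo "officially supported"
else
  echo "not officially supported"
fi
```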
### kops Release Schedule

This project does not follow the Kubernetes release schedule. `kops` aims to
provide a reliable installation experience for kubernetes, and typically
releases about a month after the corresponding Kubernetes release. This time
allows for the Kubernetes project to resolve any issues introduced by the new
version and ensures that we can support the latest features. kops will release
alpha and beta pre-releases for people that are eager to try the latest
Kubernetes release. Please only use pre-GA kops releases in environments that
can tolerate the quirks of new releases, and please do report any issues
encountered.
@ -305,7 +305,7 @@ The `ClusterSpec` allows a user to set configurations for such values as Docker

More information about some of the elements in the `ClusterSpec` is available in the following:

- Cluster Spec [document](cluster_spec.md) which outlines some of the values in the Cluster Specification.
- [Etcd Encryption](etcd_volume_encryption.md)
- [Etcd Encryption](operations/etcd_backup_restore_encryption.md)
- [GPU](gpu.md) setup
- [IAM Roles](iam_roles.md) - adding additional IAM roles.
- [Labels](labels.md)
@ -1,3 +1,160 @@

# Kubernetes Addons and Addon Manager

## Addons

With kops you manage addons by using kubectl.

(For a description of the addon manager, please see [addon management](#addon-management).)

Addons in Kubernetes are traditionally done by copying files to `/etc/kubernetes/addons` on the master. But this
doesn't really make sense in HA master configurations. We also have kubectl available, and addons are just a thin
wrapper over calling kubectl.

The command `kops create cluster` does not support specifying addons to be added to the cluster when it is created. Instead they can be added after cluster creation using kubectl. Alternatively, when creating a cluster from a yaml manifest, addons can be specified using `spec.addons`.

```yaml
spec:
  addons:
  - manifest: kubernetes-dashboard
  - manifest: s3://kops-addons/addon.yaml
```

This document describes how to install some common addons and how to create your own custom ones.
### Custom addons

The docs about the [addon manager](#addon-management) describe in more detail how to define an addon resource with regards to versioning.
Here is a minimal example of an addon manifest that would install two different addons.

```yaml
kind: Addons
metadata:
  name: example
spec:
  addons:
  - name: foo.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: foo.addons.org.io
    manifest: foo.addons.org.io/v0.0.1.yaml
  - name: bar.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: bar.addons.org.io
    manifest: bar.addons.org.io/v0.0.1.yaml
```

In this example the folder structure should look like this:

```
addon.yaml
foo.addons.org.io
  v0.0.1.yaml
bar.addons.org.io
  v0.0.1.yaml
```
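To make the layout concrete, here is a sketch that recreates the structure above in a temporary directory (empty files stand in for the real manifests):

```bash
# Recreate the addon repository layout from the example above.
dir=$(mktemp -d)
mkdir -p "$dir/foo.addons.org.io" "$dir/bar.addons.org.io"
touch "$dir/addon.yaml" \
      "$dir/foo.addons.org.io/v0.0.1.yaml" \
      "$dir/bar.addons.org.io/v0.0.1.yaml"

# List the files that would later be pushed to S3 and referenced
# from spec.addons.
(cd "$dir" && find . -type f | sort)
```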
The yaml files in the foo/bar folders can be any kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced as above in `spec.addons`. In order for master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the master nodes using `spec.additionalPolicies`, like so:

```yaml
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject"
          ],
          "Resource": ["arn:aws:s3:::kops-addons/*"]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
          ],
          "Resource": ["arn:aws:s3:::kops-addons"]
        }
      ]
```

The masters will poll for changes in the bucket and keep the addons up to date.
### Dashboard

The [dashboard project](https://github.com/kubernetes/dashboard) provides a nice administrative UI.

Install using:

```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
```

And then follow the instructions in the [dashboard documentation](https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above) to access the dashboard.

The login credentials are:

* Username: `admin`
* Password: get by running `kops get secrets kube --type secret -oplaintext` or `kubectl config view --minify`

#### RBAC

For k8s versions > 1.6 with [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) enabled, it's necessary to add your own permissions to the dashboard. Please read the [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) docs before applying permissions.

Below you see an example giving **full access** to the dashboard.

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
### Monitoring with Heapster - Standalone

Monitoring supports the horizontal pod autoscaler.

Install using:

```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.11.0.yaml
```

Please note that [heapster is retired](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md). Consider using [metrics-server](https://github.com/kubernetes-incubator/metrics-server) and a third-party metrics pipeline to gather Prometheus-format metrics instead.

### Monitoring with Prometheus Operator + kube-prometheus

The [Prometheus Operator](https://github.com/coreos/prometheus-operator/) makes the Prometheus configuration Kubernetes native, and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.

[kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus) combines the Prometheus Operator with a collection of manifests to help getting started with monitoring Kubernetes itself and applications running on top of it.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml
```
### Route53 Mapper

Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
use one or the other.
[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)

route53-mapper automates creation and updating of entries on Route53 with `A` records pointing
to ELB-backed `LoadBalancer` services created by Kubernetes.

The project is created by wearemolecule, and maintained at
[wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes).
[Usage instructions](https://github.com/kubernetes/kops/blob/master/addons/route53-mapper/README.md)

Install using:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/route53-mapper/v1.3.0.yml
```

## Addons Management

kops incorporates management of some addons; we _have_ to manage some addons which are needed before
@ -23,14 +180,14 @@ with a `kubectl apply -f https://...` or with the channels tool.

In future, we may as a convenience make it easy to add optional addons to the kops manifest,
though this will just be a convenience wrapper around doing it manually.

## Update BootStrap Addons
### Update BootStrap Addons

If you want to update the bootstrap addons, you can run the following command to show you which addons need updating. Add `--yes` to actually apply the updates.

`channels apply channel s3://KOPS_S3_BUCKET/CLUSTER_NAME/addons/bootstrap-channel.yaml`

## Versioning
### Versioning

The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
of the various manifest versions that are available. In this way kops can manage updates
@ -71,7 +228,7 @@ The `selector` determines the objects which make up the addon. This will be use

to construct a `--prune` argument (TODO), so that objects that existed in the
previous but not the new version will be removed as part of an upgrade.

## Kubernetes Version Selection
### Kubernetes Version Selection

The addon manager now supports a `kubernetesVersion` field, which is a semver range specifier
on the kubernetes version. If the targeted version of kubernetes does not match the semver

@ -104,7 +261,7 @@ versions 1.6 on we will install `v1.6.0.yaml`.

Note that we remove the `pre-release` field of the kubernetes semver, so that `1.6.0-beta.1`
will match `>=1.6.0`. This matches the way kubernetes does pre-releases.

## Semver is not enough: `id`
### Semver is not enough: `id`

However, semver alone is insufficient for kubernetes version selection. The problem
arises in the following scenario:
@ -1,8 +1,8 @@

# Cluster Templating

The command `kops replace` can replace a cluster desired configuration from the config in a yaml file (see [cli/kops_replace.md](cli/kops_replace.md)).
The command `kops replace` can replace a cluster desired configuration from the config in a yaml file (see [cli/kops_replace.md](../cli/kops_replace.md)).

It is possible to generate that yaml file from a template, using the command `kops toolbox template` (see [cli/kops_toolbox_template.md](cli/kops_toolbox_template.md)).
It is possible to generate that yaml file from a template, using the command `kops toolbox template` (see [cli/kops_toolbox_template.md](../cli/kops_toolbox_template.md)).

This document details the template language used.

@ -81,7 +81,7 @@ Running `kops toolbox template` replaces the placeholders in the template by values

Note: when creating a cluster desired configuration template, you can

- use `kops get k8s-cluster.example.com -o yaml > cluster-desired-config.yaml` to create the cluster desired configuration file (see [cli/kops_get.md](cli/kops_get.md)). The values in this file are defined in [cluster_spec.md](cluster_spec.md).
- use `kops get k8s-cluster.example.com -o yaml > cluster-desired-config.yaml` to create the cluster desired configuration file (see [cli/kops_get.md](../cli/kops_get.md)). The values in this file are defined in [cluster_spec.md](../cluster_spec.md).
- replace values by placeholders in that file to create the template.

### Templates
@ -5,13 +5,13 @@ At some point you will almost definitely want to upgrade the Kubernetes version

- Upgrade an existing `kube-up` managed cluster to one managed by `kops`
  + [The simple method with downtime](#kube-up---kops-downtime)
  + [The more complex method with zero-downtime](#kube-up---kops-sans-downtime)
- [Upgrade a `kops` cluster from one Kubernetes version to another](upgrade.md)
- [Upgrade a `kops` cluster from one Kubernetes version to another](cluster_upgrades_and_migrations.md)

## `kube-up` -> `kops`, with downtime

`kops` lets you upgrade an existing 1.x cluster, installed using `kube-up`, to a cluster managed by `kops` running the latest kubernetes version (or the version of your choice).

**This is an experimental and slightly risky procedure, so we recommend backing up important data before proceeding.
Take a snapshot of your EBS volumes; export all your data from kubectl etc.**

Limitations:
@ -188,14 +188,14 @@ This method provides zero-downtime when migrating a cluster from `kube-up` to `k

Limitations:

- If you're using the default networking (`kubenet`), there is an account limit of 50 entries in a VPC's route table. If your cluster contains more than ~25 nodes, this strategy, as-is, will not work.
  + Shift to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, or similar. See the [kops networking docs](networking.md) for more information.
  + Shift to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, or similar. See the [kops networking docs](../networking.md) for more information.
  + One solution is to gradually shift traffic from one cluster to the other, scaling down the number of nodes on the old cluster, and scaling up the number of nodes on the new cluster.

### Steps

1. If using another service to manage a domain's DNS records, delegate cluster-level DNS resolution to Route53 by adding appropriate NS records pointing `cluster.example.com` to Route53's Hosted Zone's nameservers.
2. Create the new cluster's configuration files with kops. For example:
   - `kops create cluster --cloud=aws --zones=us-east-1a,us-east-1b --admin-access=12.34.56.78/32 --dns-zone=cluster.example.com --kubernetes-version=1.4.0 --node-count=14 --node-size=c3.xlarge --master-zones=us-east-1a --master-size=m4.large --vpc=vpc-123abcdef --network-cidr=172.20.0.0/16 cluster.example.com`
   - `--vpc` is the resource id of the existing VPC.
   - `--network-cidr` is the CIDR of the existing VPC.
   - note that `kops` will propose re-naming the existing VPC but the change never occurs.
@ -212,7 +212,7 @@ Limitations:

- If you have a `LoadBalancer` service, you should be able to access the ELB's DNS name directly (although perhaps with an SSL error) and use your application as expected.
11. Transition traffic from the old cluster to the new cluster. This depends a bit on your infrastructure, but
    - if using a DNS server, update the `CNAME` record for `example.com` to point to the new ELB's DNS name.
    - if using a reverse proxy, update the upstream to point to the new ELB's DNS name.
    - note that if you're proxying through Cloudflare or similar, changes are instantaneous because it's technically a reverse proxy and not a DNS record.
    - if not using Cloudflare or similar, you'll want to update your DNS record's TTL to a very low duration about 48 hours in advance of this change (and then change it back to the previous value once the shift has been finalized).
12. Rejoice.
@ -1,10 +1,12 @@

# Backing up etcd
# Etcd

## Backing up etcd

Kubernetes relies on etcd for state storage. More details about the usage
can be found [here](https://kubernetes.io/docs/admin/etcd/) and
[here](https://coreos.com/etcd/docs/latest/).

## Backup requirement
### Backup requirement

A Kubernetes cluster deployed with kops stores the etcd state in two different
AWS EBS volumes per master node. One volume is used to store the Kubernetes
@ -13,16 +15,16 @@ result in six volumes for etcd data (one in each AZ). An EBS volume is designed

to have a [failure rate](https://aws.amazon.com/ebs/details/#AvailabilityandDurability)
of 0.1%-0.2% per year.

## Backups using etcd-manager
### Backups using etcd-manager

Backups are done periodically and before cluster modifications using [etcd-manager](./manager.md)
Backups are done periodically and before cluster modifications using [etcd-manager](../etcd/manager.md)
(introduced in kops 1.12). Backups for both the `main` and `events` etcd clusters
are stored in object storage (like S3) together with the cluster configuration.

## Volume backups (legacy etcd)
### Volume backups (legacy etcd)

If you are running your cluster in legacy etcd mode (without etcd-manager),
backups can be done through snapshots of the etcd volumes.

You can for example use CloudWatch to trigger an AWS Lambda with a defined schedule (e.g. once per
hour). The Lambda will then create a new snapshot of all etcd volumes. A complete
@ -33,13 +35,13 @@ Note: this is one of many examples on how to do scheduled snapshots.

## Restore using etcd-manager

In case of a disaster situation with etcd (lost data, cluster issues etc.) it's
possible to do a restore of the etcd cluster using `etcd-manager-ctl`.
Currently the `etcd-manager-ctl` binary is not shipped, so you will have to build it yourself.
Please check the documentation at the [etcd-manager repository](https://github.com/kopeio/etcd-manager).
It is not necessary to run `etcd-manager-ctl` in your cluster, as long as you have access to cluster storage (like S3).

Please note that this process involves downtime for your masters (and so the api server).
A restore cannot be undone (unless by restoring again), and you might lose pods, events
and other resources that were created after the backup.

For this example, we assume we have a cluster named `test.my.clusters` in an S3 bucket called `my.clusters`.
@ -58,10 +60,10 @@ etcd-manager-ctl --backup-store=s3://my.clusters/test.my.clusters/backups/etcd/m

etcd-manager-ctl --backup-store=s3://my.clusters/test.my.clusters/backups/etcd/events restore-backup [events backup file]
```

Note that this does not start the restore immediately; you need to restart etcd on all masters
(or roll your masters quickly). A new etcd cluster will be created and the backup will be
restored onto this new cluster. Please note that this process might take a short while,
depending on the size of your cluster.

You can follow the progress by reading the etcd logs (`/var/log/etcd(-events).log`)
on the master that is the leader of the cluster (you can find this out by checking the etcd logs on all masters).
@ -73,7 +75,7 @@ It's a good idea to temporarily increase the instance size of your masters and r

For more information and troubleshooting, please check the [etcd-manager documentation](https://github.com/kopeio/etcd-manager).

## Restore volume backups (legacy etcd)
### Restore volume backups (legacy etcd)

If you're using legacy etcd (without etcd-manager), it is possible to restore the volume from a snapshot we created
earlier. Details about creating a volume from a snapshot can be found in the

@ -93,3 +95,70 @@ protokube will look for the following tags:

After fully restoring the volume ensure that the old volume is no longer there,
or you've removed the tags from the old volume. After restarting the master node
Kubernetes should pick up the new volume and start running again.
## Etcd Volume Encryption

You must configure etcd volume encryption before bringing up your cluster. You cannot add etcd volume encryption to an already running cluster.

### Encrypting Etcd Volumes Using the Default AWS KMS Key

Edit your cluster to add `encryptedVolume: true` to each etcd volume:

`kops edit cluster ${CLUSTER_NAME}`

```yaml
...
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
    encryptedVolume: true
  name: main
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
    encryptedVolume: true
  name: events
...
```

Update your cluster:

```
kops update cluster ${CLUSTER_NAME}
# Review changes before applying
kops update cluster ${CLUSTER_NAME} --yes
```
### Encrypting Etcd Volumes Using a Custom AWS KMS Key
|
||||
|
||||
Edit your cluster to add `encryptedVolume: true` to each etcd volume:
|
||||
|
||||
`kops edit cluster ${CLUSTER_NAME}`
|
||||
|
||||
```
|
||||
...
|
||||
etcdClusters:
|
||||
- etcdMembers:
|
||||
- instanceGroup: master-us-east-1a
|
||||
name: a
|
||||
encryptedVolume: true
|
||||
kmsKeyId: <full-arn-of-your-kms-key>
|
||||
name: main
|
||||
- etcdMembers:
|
||||
- instanceGroup: master-us-east-1a
|
||||
name: a
|
||||
encryptedVolume: true
|
||||
kmsKeyId: <full-arn-of-your-kms-key>
|
||||
name: events
|
||||
...
|
||||
```
|
||||
|
||||
Update your cluster:
|
||||
|
||||
```
|
||||
kops update cluster ${CLUSTER_NAME}
|
||||
# Review changes before applying
|
||||
kops update cluster ${CLUSTER_NAME} --yes
|
||||
```
|
|
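As a quick sanity check, the requirement above (every etcd member must carry `encryptedVolume: true` before the cluster is first brought up) can be expressed as a small validation helper. This is a hypothetical sketch, not part of kops; it assumes the `etcdClusters` section of the spec has already been parsed into Python lists and dicts (e.g. by a YAML loader):

```python
def unencrypted_members(etcd_clusters):
    """Return (cluster_name, member_name) pairs whose volume is not encrypted."""
    missing = []
    for cluster in etcd_clusters:
        for member in cluster.get("etcdMembers", []):
            if not member.get("encryptedVolume", False):
                missing.append((cluster["name"], member["name"]))
    return missing

# Mirrors the spec excerpt above, with encryption forgotten on one member.
spec = [
    {"name": "main",
     "etcdMembers": [{"instanceGroup": "master-us-east-1a",
                      "name": "a", "encryptedVolume": True}]},
    {"name": "events",
     "etcdMembers": [{"instanceGroup": "master-us-east-1a",
                      "name": "a"}]},
]
print(unencrypted_members(spec))  # -> [('events', 'a')]
```

An empty result means every member is opted in; anything else should be fixed before `kops update cluster --yes`, since encryption cannot be added later.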
@@ -33,7 +33,7 @@ Using Kops HA
-------------

We can create HA clusters using kops, but it's important to note that migrating from a single-master
cluster to a multi-master cluster is a complicated operation (described [here](./single-to-multi-master.md)).
cluster to a multi-master cluster is a complicated operation (described [here](../single-to-multi-master.md)).
If possible, try to plan this at time of cluster creation.

When you first call `kops create cluster`, you specify the `--master-zones` flag listing the zones you want your masters
@@ -66,7 +66,7 @@ As a result there are a few considerations that need to be taken into account wh
Advanced Example
----------------

Another example `create cluster` invocation for HA with [a private network topology](topology.md):
Another example `create cluster` invocation for HA with [a private network topology](../topology.md):

```
kops create cluster \
@@ -1,3 +1,39 @@
# Updates and Upgrades

## Updating kops

### MacOS

From Homebrew:

```bash
brew update && brew upgrade kops
```

From GitHub:

```bash
rm -rf /usr/local/bin/kops
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
```

You can also rerun [these steps](../development/building.md) if previously built from source.

### Linux

From GitHub:

```bash
rm -rf /usr/local/bin/kops
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
```

You can also rerun [these steps](../development/building.md) if previously built from source.
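The `curl | grep tag_name | cut` pipeline in the download commands above scrapes the latest release tag out of the JSON returned by the GitHub releases API. The same extraction can be sketched in Python against a canned response; the sample JSON below is illustrative (trimmed to the fields the pipeline cares about), and a real client would parse JSON rather than grep it:

```python
import json

def latest_tag(api_response_body: str) -> str:
    """Parse the tag_name field from a GitHub /releases/latest response."""
    return json.loads(api_response_body)["tag_name"]

# Illustrative canned response; real responses carry many more fields.
sample = '{"tag_name": "1.14.0", "name": "1.14.0", "prerelease": false}'
print(latest_tag(sample))  # -> 1.14.0

# The download URL is then assembled the same way the shell pipeline does:
url = ("https://github.com/kubernetes/kops/releases/download/"
       f"{latest_tag(sample)}/kops-linux-amd64")
```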
## Upgrading Kubernetes

Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.
@@ -39,5 +75,4 @@ Upgrade uses the latest Kubernetes version considered stable by kops, defined in
* `kops rolling-update cluster $NAME` to preview, then `kops rolling-update cluster $NAME --yes`

### Other Notes:
* In general, we recommend that you upgrade your cluster one minor release at a time (1.7 --> 1.8 --> 1.9). Although jumping minor versions may work if you have not enabled alpha features, you run a greater risk of running into problems due to version deprecation.
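The "one minor release at a time" advice is easy to encode as a pre-flight check before editing `kubernetesVersion`. A hypothetical helper, not part of kops:

```python
def is_single_minor_step(current: str, target: str) -> bool:
    """True if target is at most one Kubernetes minor release ahead of current."""
    cur = [int(x) for x in current.split(".")[:2]]
    tgt = [int(x) for x in target.split(".")[:2]]
    # Same major version, and at most one minor version ahead (1.7 -> 1.8 is fine).
    return tgt[0] == cur[0] and cur[1] <= tgt[1] <= cur[1] + 1

print(is_single_minor_step("1.7.10", "1.8.4"))  # -> True
print(is_single_minor_step("1.7.10", "1.9.0"))  # -> False (skips 1.8)
```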
@@ -60,7 +60,7 @@

* We now automatically add a default NodeLabel with the InstanceGroup name

* Addons: added external-dns, kube-state-metrics addon. Updates for autoscaler, dashboard, heapster,

* Networking: initial support for kube-router & romana. Updates for weave, kopeio-networking, flannel, canal, calico.
@@ -136,7 +136,7 @@ or specify a different network (current using `--vpc` flag)
* added missing command in documentation [@gekart](https://github.com/gekart) [#3116](https://github.com/kubernetes/kops/pull/3116)
* Add k8s dashbard v1.6.2 [@Globegitter](https://github.com/Globegitter) [#3075](https://github.com/kubernetes/kops/pull/3075)
* Kube Proxy Feature Gates [@gambol99](https://github.com/gambol99) [#3130](https://github.com/kubernetes/kops/pull/3130)
* Update aws.md for SSH Key pair generation instructions [@sathiyas](https://github.com/sathiyas) [#3138](https://github.com/kubernetes/kops/pull/3138)
* Update getting_started/aws.md for SSH Key pair generation instructions [@sathiyas](https://github.com/sathiyas) [#3138](https://github.com/kubernetes/kops/pull/3138)
* MVP of templating [@mad01](https://github.com/mad01) [#3040](https://github.com/kubernetes/kops/pull/3040)
* Rename OWNERS assignees: to approvers: [@spiffxp](https://github.com/spiffxp) [#3133](https://github.com/kubernetes/kops/pull/3133)
* CoreOS: Ensure docker configuration is loaded [@johanneswuerbach](https://github.com/johanneswuerbach) [#3134](https://github.com/kubernetes/kops/pull/3134)

@@ -224,13 +224,13 @@ or specify a different network (current using `--vpc` flag)
* Add missed error handling on session.NewSession [@justinsb](https://github.com/justinsb) [#3280](https://github.com/kubernetes/kops/pull/3280)
* Refactor PKI classes into their own package [@justinsb](https://github.com/justinsb) [#3288](https://github.com/kubernetes/kops/pull/3288)
* baremetal: relax validation on subnets & networking [@justinsb](https://github.com/justinsb) [#3301](https://github.com/kubernetes/kops/pull/3301)
* Update aws.md pointing links to the k8s slack directly [@krishna-mk](https://github.com/krishna-mk) [#3306](https://github.com/kubernetes/kops/pull/3306)
* Update getting_started/aws.md pointing links to the k8s slack directly [@krishna-mk](https://github.com/krishna-mk) [#3306](https://github.com/kubernetes/kops/pull/3306)
* Kubelet Readonly Port [@gambol99](https://github.com/gambol99) [#3303](https://github.com/kubernetes/kops/pull/3303)
* Additional Kubelet Options [@gambol99](https://github.com/gambol99) [#3302](https://github.com/kubernetes/kops/pull/3302)
* Misc go vet fixes [@justinsb](https://github.com/justinsb) [#3307](https://github.com/kubernetes/kops/pull/3307)
* Adds DNSControllerSpec and WatchIngress flag [@geojaz](https://github.com/geojaz) [#2504](https://github.com/kubernetes/kops/pull/2504)
* Fixes #3317 allowing to spawn flannel on all nodes in the cluster [@BradErz](https://github.com/BradErz) [#3318](https://github.com/kubernetes/kops/pull/3318)
* Fix broken link in aws.md [@BlueMonday](https://github.com/BlueMonday) [#3324](https://github.com/kubernetes/kops/pull/3324)
* Fix broken link in getting_started/aws.md [@BlueMonday](https://github.com/BlueMonday) [#3324](https://github.com/kubernetes/kops/pull/3324)
* refactor resource tracker to be usable across packages [@andrewsykim](https://github.com/andrewsykim) [#3331](https://github.com/kubernetes/kops/pull/3331)
* Fix RenderGCE issue on Address [@justinsb](https://github.com/justinsb) [#3338](https://github.com/kubernetes/kops/pull/3338)
* Extract UserData from CloudFormation output during testing [@justinsb](https://github.com/justinsb) [#3299](https://github.com/kubernetes/kops/pull/3299)
@@ -30,7 +30,7 @@ None known at this time
* Update bazel / gazelle [@justinsb](https://github.com/justinsb) [#4000](https://github.com/kubernetes/kops/pull/4000)
* When using private DNS add ELB name to the api certificate [@vainu-arto](https://github.com/vainu-arto) [#3941](https://github.com/kubernetes/kops/pull/3941)
* Fixed minor typo in 1.8-NOTES.md file [@sellers](https://github.com/sellers) [#4013](https://github.com/kubernetes/kops/pull/4013)
* Minor update to docs/aws.md [@ysim](https://github.com/ysim) [#4008](https://github.com/kubernetes/kops/pull/4008)
* Minor update to docs/getting_started/aws.md [@ysim](https://github.com/ysim) [#4008](https://github.com/kubernetes/kops/pull/4008)
* Fix libcgroup dependency typo [@wannabesrevenge](https://github.com/wannabesrevenge) [#4030](https://github.com/kubernetes/kops/pull/4030)
* Spelling fix in instancegroups.go error msg [@sneako](https://github.com/sneako) [#4024](https://github.com/kubernetes/kops/pull/4024)
* Include roles in toolbox dump structured output [@justinsb](https://github.com/justinsb) [#3934](https://github.com/kubernetes/kops/pull/3934)
@@ -294,7 +294,7 @@ None known at this time
* Update Compatibility Matrix [@mikesplain](https://github.com/mikesplain) [#4580](https://github.com/kubernetes/kops/pull/4580)
* Typo fix "NAT Gateways" -> "NAT gateways" [@AdamDang](https://github.com/AdamDang) [#4576](https://github.com/kubernetes/kops/pull/4576)
* Force bazel builds to be pure. [@mikesplain](https://github.com/mikesplain) [#4602](https://github.com/kubernetes/kops/pull/4602)
* Update aws.md [@sanketjpatel](https://github.com/sanketjpatel) [#4605](https://github.com/kubernetes/kops/pull/4605)
* Update getting_started/aws.md [@sanketjpatel](https://github.com/sanketjpatel) [#4605](https://github.com/kubernetes/kops/pull/4605)
* Typo delete duplicated word [@AdamDang](https://github.com/AdamDang) [#4600](https://github.com/kubernetes/kops/pull/4600)
* Typo fix "kubernetes"->"Kubernetes" [@AdamDang](https://github.com/AdamDang) [#4577](https://github.com/kubernetes/kops/pull/4577)
* Fix distroless error [@mikesplain](https://github.com/mikesplain) [#4597](https://github.com/kubernetes/kops/pull/4597)
@@ -380,7 +380,7 @@ None known at this time
* Release 1.9.0-alpha.2 [@justinsb](https://github.com/justinsb) [#4750](https://github.com/kubernetes/kops/pull/4750)
* Update instance_groups.md [@AdamDang](https://github.com/AdamDang) [#4751](https://github.com/kubernetes/kops/pull/4751)
* Update cluster_upgrades_and_migrations.md [@AdamDang](https://github.com/AdamDang) [#4756](https://github.com/kubernetes/kops/pull/4756)
* Update aws.md [@kmaris](https://github.com/kmaris) [#4755](https://github.com/kubernetes/kops/pull/4755)
* Update getting_started/aws.md [@kmaris](https://github.com/kmaris) [#4755](https://github.com/kubernetes/kops/pull/4755)
* Update networking.md [@AdamDang](https://github.com/AdamDang) [#4754](https://github.com/kubernetes/kops/pull/4754)
* Update README.md [@AdamDang](https://github.com/AdamDang) [#4752](https://github.com/kubernetes/kops/pull/4752)
* Bump stable/alpha channels to 1.9.0-alpha.2 [@mikesplain](https://github.com/mikesplain) [#4757](https://github.com/kubernetes/kops/pull/4757)
@@ -4,7 +4,7 @@ The kops InstanceGroup is a declarative model of a group of nodes. By modifying
can change the instance type you're using, the number of nodes you have, the OS image you're running - essentially
all the per-node configuration is in the InstanceGroup.

We'll assume you have a working cluster - if not, you probably want to read [how to get started on GCE](gce.md).
We'll assume you have a working cluster - if not, you probably want to read [how to get started on GCE](../getting_started/gce.md).

## Changing the number of nodes
@@ -45,7 +45,7 @@ spec:

Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your editor. If you wanted to change
the image or the machineType, you could do that here as well. There are actually a lot more fields,
but most of them have their default values, so won't show up unless they are set. The general approach is the same though.

<!-- TODO link to API reference docs -->
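The `minSize`/`maxSize` edit above has an obvious invariant worth checking before applying: the group's minimum cannot exceed its maximum. A hypothetical sketch (not kops code), assuming the relevant InstanceGroup spec fields are available as a Python dict:

```python
def check_instance_group(spec):
    """Return a list of human-readable problems with an InstanceGroup spec."""
    problems = []
    if spec.get("minSize", 0) > spec.get("maxSize", 0):
        problems.append("minSize must not exceed maxSize")
    if spec.get("maxSize", 0) < 1:
        problems.append("maxSize must be at least 1")
    return problems

# The edit from the walkthrough: both bounds raised from 2 to 3.
print(check_instance_group({"machineType": "t2.medium",
                            "minSize": 3, "maxSize": 3}))  # -> []
print(check_instance_group({"minSize": 3, "maxSize": 2}))
```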
@@ -1,33 +0,0 @@
# Updating kops (Binaries)

## MacOS

From Homebrew:

```bash
brew update && brew upgrade kops
```

From Github:

```bash
rm -rf /usr/local/bin/kops
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
```

You can also rerun [these steps](development/building.md) if previously built from source.

## Linux

From Github:

```bash
rm -rf /usr/local/bin/kops
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
```

You can also rerun [these steps](development/building.md) if previously built from source.
@@ -66,7 +66,7 @@ export DNSCONTROLLER_IMAGE=cnastorage/dns-controller

## Setting up cluster state storage
Kops requires the state of clusters to be stored inside a storage service. AWS S3 is the default option.
More about using AWS S3 for the cluster state store can be found at "Cluster State storage" on this [page](https://github.com/kubernetes/kops/blob/master/docs/aws.md).
More about using AWS S3 for the cluster state store can be found at "Cluster State storage" on this [page](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md).

Users can also set up their own S3 server and use the following instructions to use user-defined S3-compatible applications for cluster state storage.
This is recommended if you don't have an AWS account or you don't want to store the status of your clusters on public cloud storage.
@@ -0,0 +1,72 @@
## Getting Involved and Contributing

Are you interested in contributing to kops? We, the maintainers and community,
would love your suggestions, contributions, and help! We have a quick-start
guide on [adding a feature](/docs/development/adding_a_feature.md). Also, the
maintainers can be contacted at any time to learn more about how to get
involved.

In the interest of getting newer folks involved with kops, we are starting to
tag issues with `good-starter-issue`. These are typically issues that have
smaller scope but are good ways to start to get acquainted with the codebase.

We also encourage ALL active community participants to act as if they are
maintainers, even if you don't have "official" write permissions. This is a
community effort, we are here to serve the Kubernetes community. If you have an
active interest and you want to get involved, you have real power! Don't assume
that the only people who can get things done around here are the "maintainers".

We also would love to add more "official" maintainers, so show us what you can
do!

What this means:

__Issues__
* Help read and triage issues, assist when possible.
* Point out issues that are duplicates, out of date, etc.
  - Even if you don't have tagging permissions, make a note and tag maintainers (`/close`, `/dupe #127`).

__Pull Requests__
* Read and review the code. Leave comments, questions, and critiques (`/lgtm`).
* Download, compile, and run the code and make sure the tests pass (`make test`).
  - Also verify that the new feature seems sane, follows best architectural patterns, and includes tests.

This repository uses the Kubernetes bots. See a full list of the commands [here](https://go.k8s.io/bot-commands).

## Office Hours

Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.

For more information, check out the [office hours page](office_hours.md).

### Other Ways to Communicate with the Contributors

Please check in with us in the [#kops-users](https://kubernetes.slack.com/messages/kops-users/) or [#kops-dev](https://kubernetes.slack.com/messages/kops-dev/) channel. Oftentimes, a well-crafted question or potential bug report in Slack will catch the attention of the right folks and help quickly get the ship righted.

## GitHub Issues

### Bugs

If you think you have found a bug, please follow the instructions below.

- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
- Set the `-v 10` command line option and save the log output. Please paste this into your issue.
- Note the version of kops you are running (from `kops version`), and the command line options you are using.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Feel free to reach out to the kops community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).

### Features

We also use the issue tracker to track features. If you have an idea for a feature, or think you can help kops become even more awesome, follow the steps below.

- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember that users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Clearly define the use case, using concrete examples. E.g.: I type `this` and kops does `that`.
- Some of our larger features will require some design. If you would like to include a technical design for your feature, please include it in the issue.
- After the new feature is well understood and the design agreed upon, we can start coding the feature. We would love for you to code it. So please open up a **WIP** *(work in progress)* pull request, and happy coding.
@@ -0,0 +1,24 @@
## Office Hours

Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.

Office hours are hosted on a [zoom video chat](https://zoom.us/my/k8ssigaws) on Fridays at [12 noon (Eastern Time)/9 am (Pacific Time)](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12) during weeks with odd "numbers". To check this week's number, run: `date +%V`. If the response is odd, join us on Friday for office hours!

### Office Hours Topics

We do maintain an [agenda](https://docs.google.com/document/d/12QkyL0FkNbWPcLFxxRGSPt_tNPBHbmni3YLY-lHny7E/edit) and stick to it as much as possible. If you want to hold the floor, put your item in this doc. Bullet/note form is fine. Even if your topic gets in late, we do our best to cover it.

Our office hours call is recorded, but the tone tends to be casual. First-timers are always welcome. Typical areas of discussion can include:
- Contributors with a feature proposal seeking feedback, assistance, etc
- Members planning for what we want to get done for the next release
- Strategizing for larger initiatives, such as those that involve more than one sig or potentially more moving pieces
- Help wanted requests
- Demonstrations of cool stuff. PoCs. Fresh ideas. Show us how you use kops to go beyond the norm - help us define the future!

Office hours are designed for ALL of those contributing to kops or the community. Contributions are not limited to those who commit source code. There are so many important ways to be involved -
- helping in the slack channels
- triaging/writing issues
- thinking about the topics raised at office hours, and forming and advocating for your good ideas
- testing pre-(and official) releases

Although not exhaustive, the above activities are extremely important to our continued success and are all worthy contributions. If you want to talk about kops and you have doubts, just come.
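The `date +%V` check above prints the ISO week number; office hours happen when it is odd. The same parity test can be sketched in Python (a purely illustrative helper, not part of any kops tooling):

```python
from datetime import date

def is_office_hours_week(d: date) -> bool:
    """True when the ISO week number of d is odd (the office-hours weeks)."""
    return d.isocalendar()[1] % 2 == 1

# Friday 2019-10-04 falls in ISO week 40 (even), so no office hours that week.
print(is_office_hours_week(date(2019, 10, 4)))   # -> False
print(is_office_hours_week(date(2019, 10, 11)))  # -> True (ISO week 41)
```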
@@ -0,0 +1,46 @@
# Kops Releases & Versioning

kops is intended to be backward compatible. It is always recommended to use the
latest version of kops with whatever version of Kubernetes you are using. We suggest
kops users run one of the [3 minor versions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew) of Kubernetes that are supported; however, we
do our best to support previous releases for a period of time.

One exception in regard to compatibility: kops supports the equivalent
Kubernetes minor release number. A minor version is the second digit in the
release number. kops version 1.13.0 has a minor version of 13. The numbering
follows the semantic versioning specification, MAJOR.MINOR.PATCH.

For example, kops 1.12.0 does not support Kubernetes 1.13.0, but kops 1.13.0
supports Kubernetes 1.12.2 and previous Kubernetes versions. Only when the kops minor
version matches the Kubernetes minor version does kops officially support the
Kubernetes release. kops does not stop a user from installing mismatching
versions of K8s, but Kubernetes releases always require kops to install specific
versions of components, like docker, that are tested against the particular
Kubernetes version.

## Compatibility Matrix

| kops version  | k8s 1.9.x | k8s 1.10.x | k8s 1.11.x | k8s 1.12.x | k8s 1.13.x | k8s 1.14.x |
|---------------|-----------|------------|------------|------------|------------|------------|
| 1.14.x - Beta | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| 1.13.x | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| 1.12.x | ✔ | ✔ | ✔ | ✔ | ❌ | ❌ |
| 1.11.x | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| ~~1.10.x~~ | ✔ | ✔ | ❌ | ❌ | ❌ | ❌ |
| ~~1.9.x~~ | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |

Use the latest version of kops for all releases of Kubernetes, with the caveat
that higher versions of Kubernetes are not _officially_ supported by kops. Releases that are ~~crossed out~~ _should_ work, but we suggest upgrading soon.

## Release Schedule

This project does not follow the Kubernetes release schedule. `kops` aims to
provide a reliable installation experience for Kubernetes, and typically
releases about a month after the corresponding Kubernetes release. This time
allows for the Kubernetes project to resolve any issues introduced by the new
version and ensures that we can support the latest features. kops will release
alpha and beta pre-releases for people that are eager to try the latest
Kubernetes release. Please only use pre-GA kops releases in environments that
can tolerate the quirks of new releases, and please do report any issues
encountered.
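The support rule described above - kops officially supports a Kubernetes release only when the kops minor version is at least the Kubernetes minor version - can be sketched as a small predicate. This is hypothetical illustration code, not a kops API, and it deliberately ignores the beta and crossed-out caveats from the matrix:

```python
def officially_supports(kops_version: str, k8s_version: str) -> bool:
    """True when the kops (major, minor) is >= the Kubernetes (major, minor)."""
    kops_mm = tuple(int(x) for x in kops_version.split(".")[:2])
    k8s_mm = tuple(int(x) for x in k8s_version.split(".")[:2])
    return kops_mm >= k8s_mm

# The examples from the text above:
print(officially_supports("1.13.0", "1.12.2"))  # -> True
print(officially_supports("1.12.0", "1.13.0"))  # -> False
```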
@@ -35,7 +35,7 @@
# NODEUP_BUCKET="s3-devel-bucket-name-store-nodeup" \
# IMAGE="kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02" \
# ./dev-build.sh
#
#
# # TLDR;
# 1. setup dns in route53
# 2. create s3 buckets - state store and nodeup bucket
@@ -45,10 +45,10 @@
# 6. use ssh-agent and ssh -A
# 7. your pem will be the access token
# 8. user is admin, and the default is debian
#
#
# # For more details see:
#
# https://github.com/kubernetes/kops/blob/master/docs/aws.md
# https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md
#
###############################################################################
@@ -124,11 +124,11 @@ if [[ $TOPOLOGY == "private" ]]; then
  kops_command+=" --bastion='true'"
fi

if [ -n "${KOPS_FEATURE_FLAGS+x}" ]; then
  kops_command=KOPS_FEATURE_FLAGS="${KOPS_FEATURE_FLAGS}" $kops_command
fi

if [[ $KOPS_CREATE == "yes" ]]; then
  kops_command="$kops_command --yes"
fi
129 mkdocs.yml

@@ -1,5 +1,5 @@
site_name: Kubernetes Operations - kops
strict: true
# strict: true
repo_name: 'kubernetes/kops'
repo_url: 'https://github.com/kubernetes/kops'
markdown_extensions:
@@ -9,6 +9,7 @@ markdown_extensions:
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.superfences
  - pymdownx.tilde
  - toc:
      permalink: true
theme:
@@ -24,4 +25,128 @@ plugins:
- search

extra_css: [extra.css]
#google_analytics: ['UA-XXXXXXX-X', 'kubernetes.github.io']

nav:
  - Welcome:
    - Welcome: "index.md"
    - Releases & Versioning: "welcome/releases.md"
    - Contributing: "welcome/contributing.md"
    - Office Hours: "welcome/office_hours.md"
  - Getting Started:
    - Installing: "getting_started/install.md"
    - Deploying to AWS: "getting_started/aws.md"
    - Deploying to GCE: "getting_started/gce.md"
    - Deploying to OpenStack - Beta: "getting_started/openstack.md"
    - Deploying to Digital Ocean - Alpha: "getting_started/digitalocean.md"
    - Kops Commands: "getting_started/commands.md"
    - Kops Arguments: "getting_started/arguments.md"
    - kubectl usage: "getting_started/kubectl.md"
  - CLI:
    - kops: "cli/kops.md"
    - kops completion: "cli/kops_completion.md"
    - kops create: "cli/kops_create.md"
    - kops delete: "cli/kops_delete.md"
    - kops describe: "cli/kops_describe.md"
    - kops edit: "cli/kops_edit.md"
    - kops export: "cli/kops_export.md"
    - kops get: "cli/kops_get.md"
    - kops import: "cli/kops_import.md"
    - kops replace: "cli/kops_replace.md"
    - kops rolling-update: "cli/kops_rolling-update.md"
    - kops set: "cli/kops_set.md"
    - kops toolbox: "cli/kops_toolbox.md"
    - kops update: "cli/kops_update.md"
    - kops upgrade: "cli/kops_upgrade.md"
    - kops validate: "cli/kops_validate.md"
    - kops version: "cli/kops_version.md"
  - API:
    - Cluster Spec: "cluster_spec.md"
    - Instance Group API: "instance_groups.md"
    - Using Manifests and Customizing: "manifests_and_customizing_via_api.md"
    - Godocs for Cluster - ClusterSpec: "https://godoc.org/k8s.io/kops/pkg/apis/kops#ClusterSpec"
    - Godocs for Instance Group - InstanceGroupSpec: "https://godoc.org/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec"
  - Operations:
    - Updates & Upgrades: "operations/updates_and_upgrades.md"
    - High Availability: "operations/high_availability.md"
    - etcd backup, restore and encryption: "operations/etcd_backup_restore_encryption.md"
    - Instancegroup images: "operations/images.md"
    - Cluster Addons & Manager: "operations/addons.md"
    - Cluster configuration management: "changing_configuration.md"
    - Cluster Templating: "operations/cluster_template.md"
    - Cluster upgrades and migrations: "operations/cluster_upgrades_and_migrations.md"
    - GPU setup: "gpu.md"
    - kube-up to kops upgrade: "upgrade_from_kubeup.md"
    - Label management: "labels.md"
    - Secret management: "secrets.md"
    - Moving from a Single Master to Multiple HA Masters: "single-to-multi-master.md"
    - Working with Instance Groups: "tutorial/working-with-instancegroups.md"
    - Running kops in a CI environment: "continuous_integration.md"
  - Networking:
    - Networking Overview including CNI: "networking.md"
    - Run kops in an existing VPC: "run_in_existing_vpc.md"
    - Supported network topologies: "topology.md"
    - Subdomain setup: "creating_subdomain.md"
  - Security:
    - Security: "security.md"
    - Advisories: "advisories/README.md"
    - Bastion setup: "bastion.md"
    - IAM roles: "iam_roles.md"
    - MFA setup: "mfa.md"
    - Security Groups: "security_groups.md"
  - Advanced:
    - Download Config: "advanced/download_config.md"
    - Subdomain NS Records: "advanced/ns.md"
    - Experimental: "advanced/experimental.md"
    - Cluster boot sequence: "boot-sequence.md"
    - Philosophy: "philosophy.md"
    - State store: "state.md"
    - AWS China: "aws-china.md"
    - Calico v3: "calico-v3.md"
    - Custom CA: "custom_ca.md"
    - etcd3 Migration: "etcd3-migration.md"
    - Horizontal Pod Autoscaling: "horizontal_pod_autoscaling.md"
    - Egress Proxy: "http_proxy.md"
    - Node Authorization: "node_authorization.md"
    - Node Resource Allocation: "node_resource_handling.md"
    - Rotate Secrets: "rotate-secrets.md"
    - Terraform: "terraform.md"
    - Authentication: "authentication.md"
  - Development:
    - Building: "development/building.md"
    - Releases: "releases.md"
    - New Kubernetes Version: "development/new_kubernetes_version.md"
    - Developing using Docker: "development/Docker.md"
    - Development with vSphere: "vsphere-dev.md"
    - vSphere support status: "vsphere-development-status.md"
    - Documentation Guidelines: "development/documentation.md"
    - E2E testing with `kops` clusters: "development/testing.md"
    - Example on how to add a feature: "development/adding_a_feature.md"
    - Hack Directory: "development/hack.md"
    - How to update `kops` API: "development/api_updates.md"
    - Low level description on how kops works: "development/how_it_works.md"
    - Notes on Gossip design: "development/gossip.md"
    - Notes on master instance sizing: "development/instancesizes.md"
    - Our release process: "development/release.md"
    - Releasing with Homebrew: "development/homebrew.md"
    - Rolling Update Diagrams: "development/rolling_update.md"
    - Bazel: "development/bazel.md"
    - Vendoring: "development/vendoring.md"
    - Ports: "development/ports.md"
  - Releases:
    - 1.15: releases/1.15-NOTES.md
    - 1.14: releases/1.14-NOTES.md
    - 1.13: releases/1.13-NOTES.md
    - 1.12: releases/1.12-NOTES.md
    - 1.11: releases/1.11-NOTES.md
    - 1.10: releases/1.10-NOTES.md
    - 1.9: releases/1.9-NOTES.md
    - 1.8.1: releases/1.8.1.md
    - 1.8: releases/1.8-NOTES.md
    - 1.7.1: releases/1.7.1.md
    - 1.7: releases/1.7-NOTES.md
    - 1.6.2: releases/1.6.2.md
    - 1.6.1: releases/1.6.1.md
    - 1.6.0: releases/1.6-NOTES.md
    - 1.6.0-alpha: releases/1.6.0-alpha.1.md
    - legacy-changes.md: releases/legacy-changes.md
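A small helper can flatten a nav tree like the one above into its page targets, which is handy for checking that every listed file actually exists in `docs/`. This is hypothetical illustration code (not part of the kops build), assuming the YAML has already been parsed into Python lists and dicts:

```python
def nav_pages(nav):
    """Recursively collect the page targets from an mkdocs-style nav tree."""
    pages = []
    for entry in nav:
        for title, target in entry.items():
            if isinstance(target, list):   # a nested section
                pages.extend(nav_pages(target))
            else:                          # a page path (or external URL)
                pages.append(target)
    return pages

# A tiny slice of the nav above, as a parsed structure.
nav = [
    {"Welcome": [{"Welcome": "index.md"},
                 {"Office Hours": "welcome/office_hours.md"}]},
    {"Networking": [{"Networking Overview including CNI": "networking.md"}]},
]
print(nav_pages(nav))  # -> ['index.md', 'welcome/office_hours.md', 'networking.md']
```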
@@ -12,7 +12,7 @@ In a broader sense, ExternalDNS allows you to control DNS records dynamically vi

The following tutorials are provided:

* [AWS](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md)
* [AWS](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/getting_started/aws.md)
* [Azure](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md)
* [Cloudflare](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/cloudflare.md)
* [DigitalOcean](https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/digitalocean.md)