mirror of https://github.com/kubernetes/kops.git
aws cleanup and addon management compaction
parent e5fd697283
commit 1fdca28a8b
155
docs/addons.md
@@ -1,155 +0,0 @@
## Installing Kubernetes Addons

With kops you manage addons by using kubectl.

(For a description of the addon-manager, please see [addon_manager.md](addon_manager.md).)

Addons in Kubernetes are traditionally done by copying files to `/etc/kubernetes/addons` on the master. But this
doesn't really make sense in HA master configurations. We also have kubectl available, and addons are just a thin
wrapper over calling kubectl.

The command `kops create cluster` does not support specifying addons to be added to the cluster when it is created. Instead they can be added after cluster creation using kubectl. Alternatively when creating a cluster from a yaml manifest, addons can be specified using `spec.addons`.
```yaml
spec:
  addons:
  - manifest: kubernetes-dashboard
  - manifest: s3://kops-addons/addon.yaml
```
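
As a sketch of that second path (assuming a `cluster.yaml` manifest that contains the `spec.addons` entries above, a configured state store, and any other secrets the cluster needs), the cluster could be created from the manifest with:

```
# hypothetical manifest-based creation; cluster.yaml holds the Cluster spec
# (including spec.addons), and example.cluster.k8s.local is a placeholder name
kops create -f cluster.yaml
kops update cluster example.cluster.k8s.local --yes
```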

This document describes how to install some common addons and how to create your own custom ones.

### Custom addons

The docs about the [addon manager](addon_manager.md) describe in more detail how to define an addon resource with regard to versioning.
Here is a minimal example of an addon manifest that would install two different addons.

```yaml
kind: Addons
metadata:
  name: example
spec:
  addons:
  - name: foo.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: foo.addons.org.io
    manifest: foo.addons.org.io/v0.0.1.yaml
  - name: bar.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: bar.addons.org.io
    manifest: bar.addons.org.io/v0.0.1.yaml
```

In this example the folder structure should look like this:

```
addon.yaml
foo.addons.org.io
  v0.0.1.yaml
bar.addons.org.io
  v0.0.1.yaml
```

The yaml files in the foo/bar folders can be any Kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced as above in `spec.addons`. In order for master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the master nodes using `spec.additionalPolicies`, like so:
```yaml
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject"
          ],
          "Resource": ["arn:aws:s3:::kops-addons/*"]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
          ],
          "Resource": ["arn:aws:s3:::kops-addons"]
        }
      ]
```
The masters will poll for changes in the bucket and keep the addons up to date.
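
For example, assuming the `aws` CLI is configured and the addon files live in a local `addons/` directory laid out as above, they could be pushed to the `kops-addons` bucket used in these examples with something like:

```
# sync the local addon tree to the bucket referenced in spec.addons
aws s3 sync ./addons/ s3://kops-addons/
```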

### Dashboard

The [dashboard project](https://github.com/kubernetes/dashboard) provides a nice administrative UI.

Install using:
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
```

Then follow the instructions in the [dashboard documentation](https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above) to access the dashboard.

The login credentials are:

* Username: `admin`
* Password: get by running `kops get secrets kube --type secret -oplaintext` or `kubectl config view --minify`
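
One way to reach the UI locally, assuming the default deployment into the `kube-system` namespace, is through `kubectl proxy` (the proxy URL below follows the dashboard documentation linked above):

```
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```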

#### RBAC

For k8s version > 1.6 with [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) enabled, it's necessary to grant the dashboard its own permissions. Please read the [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) docs before applying permissions.

Below is an example giving **full access** to the dashboard.

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
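
To apply it, save the manifest to a file (the filename here is just an example) and hand it to kubectl:

```
kubectl apply -f dashboard-admin.yaml
```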

### Monitoring with Heapster - Standalone

This standalone Heapster deployment supports the horizontal pod autoscaler.

Install using:
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.11.0.yaml
```
Please note that [heapster is retired](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md). Consider using [metrics-server](https://github.com/kubernetes-incubator/metrics-server) and a third-party metrics pipeline to gather Prometheus-format metrics instead.

### Monitoring with Prometheus Operator + kube-prometheus

The [Prometheus Operator](https://github.com/coreos/prometheus-operator/) makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.

[kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus) combines the Prometheus Operator with a collection of manifests to help you get started with monitoring Kubernetes itself and applications running on top of it.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml
```

### Route53 Mapper

Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
use one or the other.
[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)

route53-mapper automates creation and updating of entries on Route53 with `A` records pointing
to ELB-backed `LoadBalancer` services created by Kubernetes.

The project was created by wearemolecule and is maintained at
[wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes).
[Usage instructions](https://github.com/kubernetes/kops/blob/master/addons/route53-mapper/README.md)

Install using:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/route53-mapper/v1.3.0.yml
```

@@ -1,16 +1,6 @@
## Install kops
# Getting Started with kops on AWS

Before we can bring up the cluster we need to [install the CLI tool](install.md) `kops`.

## Install kubectl

In order to control Kubernetes clusters we need to [install the CLI tool](install.md) `kubectl`.

#### Other Platforms

* [Kubernetes Latest Release](https://github.com/kubernetes/kubernetes/releases/latest)

* [Installation Guide](http://kubernetes.io/docs/user-guide/prereqs/)
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md).

## Setup your environment

@@ -29,7 +29,7 @@ in alpha, and other platforms planned.
* Deploys Highly Available (HA) Kubernetes Masters
* Built on a state-sync model for **dry-runs** and automatic **idempotency**
* Ability to generate [Terraform](terraform.md)
* Supports custom Kubernetes [add-ons](addons.md)
* Supports custom Kubernetes [add-ons](operations/addons.md)
* Command line [autocompletion](cli/kops_completion.md)
* YAML Manifest Based API [Configuration](manifests_and_customizing_via_api.md)
* [Templating](cluster_template.md) and dry-run modes for creating

@@ -1,3 +1,160 @@
# Kubernetes Addons and Addon Manager

## Addons
With kops you manage addons by using kubectl.

(For a description of the addon-manager, please see [addon_manager.md](#addon-management).)

Addons in Kubernetes are traditionally done by copying files to `/etc/kubernetes/addons` on the master. But this
doesn't really make sense in HA master configurations. We also have kubectl available, and addons are just a thin
wrapper over calling kubectl.

The command `kops create cluster` does not support specifying addons to be added to the cluster when it is created. Instead they can be added after cluster creation using kubectl. Alternatively when creating a cluster from a yaml manifest, addons can be specified using `spec.addons`.
```yaml
spec:
  addons:
  - manifest: kubernetes-dashboard
  - manifest: s3://kops-addons/addon.yaml
```
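
As a sketch of that second path (assuming a `cluster.yaml` manifest that contains the `spec.addons` entries above, a configured state store, and any other secrets the cluster needs), the cluster could be created from the manifest with:

```
# hypothetical manifest-based creation; cluster.yaml holds the Cluster spec
# (including spec.addons), and example.cluster.k8s.local is a placeholder name
kops create -f cluster.yaml
kops update cluster example.cluster.k8s.local --yes
```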

This document describes how to install some common addons and how to create your own custom ones.

### Custom addons

The docs about the [addon manager](#addon-management) describe in more detail how to define an addon resource with regard to versioning.
Here is a minimal example of an addon manifest that would install two different addons.

```yaml
kind: Addons
metadata:
  name: example
spec:
  addons:
  - name: foo.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: foo.addons.org.io
    manifest: foo.addons.org.io/v0.0.1.yaml
  - name: bar.addons.org.io
    version: 0.0.1
    selector:
      k8s-addon: bar.addons.org.io
    manifest: bar.addons.org.io/v0.0.1.yaml
```

In this example the folder structure should look like this:

```
addon.yaml
foo.addons.org.io
  v0.0.1.yaml
bar.addons.org.io
  v0.0.1.yaml
```

The yaml files in the foo/bar folders can be any Kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced as above in `spec.addons`. In order for master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the master nodes using `spec.additionalPolicies`, like so:
```yaml
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject"
          ],
          "Resource": ["arn:aws:s3:::kops-addons/*"]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
          ],
          "Resource": ["arn:aws:s3:::kops-addons"]
        }
      ]
```
The masters will poll for changes in the bucket and keep the addons up to date.
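
For example, assuming the `aws` CLI is configured and the addon files live in a local `addons/` directory laid out as above, they could be pushed to the `kops-addons` bucket used in these examples with something like:

```
# sync the local addon tree to the bucket referenced in spec.addons
aws s3 sync ./addons/ s3://kops-addons/
```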

### Dashboard

The [dashboard project](https://github.com/kubernetes/dashboard) provides a nice administrative UI.

Install using:
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
```

Then follow the instructions in the [dashboard documentation](https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above) to access the dashboard.

The login credentials are:

* Username: `admin`
* Password: get by running `kops get secrets kube --type secret -oplaintext` or `kubectl config view --minify`
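
One way to reach the UI locally, assuming the default deployment into the `kube-system` namespace, is through `kubectl proxy` (the proxy URL below follows the dashboard documentation linked above):

```
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```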

#### RBAC

For k8s version > 1.6 with [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) enabled, it's necessary to grant the dashboard its own permissions. Please read the [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) docs before applying permissions.

Below is an example giving **full access** to the dashboard.

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
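
To apply it, save the manifest to a file (the filename here is just an example) and hand it to kubectl:

```
kubectl apply -f dashboard-admin.yaml
```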

### Monitoring with Heapster - Standalone

This standalone Heapster deployment supports the horizontal pod autoscaler.

Install using:
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.11.0.yaml
```
Please note that [heapster is retired](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md). Consider using [metrics-server](https://github.com/kubernetes-incubator/metrics-server) and a third-party metrics pipeline to gather Prometheus-format metrics instead.

### Monitoring with Prometheus Operator + kube-prometheus

The [Prometheus Operator](https://github.com/coreos/prometheus-operator/) makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.

[kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus) combines the Prometheus Operator with a collection of manifests to help you get started with monitoring Kubernetes itself and applications running on top of it.

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml
```

### Route53 Mapper

Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
use one or the other.
[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)

route53-mapper automates creation and updating of entries on Route53 with `A` records pointing
to ELB-backed `LoadBalancer` services created by Kubernetes.

The project was created by wearemolecule and is maintained at
[wearemolecule/route53-kubernetes](https://github.com/wearemolecule/route53-kubernetes).
[Usage instructions](https://github.com/kubernetes/kops/blob/master/addons/route53-mapper/README.md)

Install using:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/route53-mapper/v1.3.0.yml
```

## Addons Management

kops incorporates management of some addons; we _have_ to manage some addons which are needed before

@@ -23,14 +180,14 @@ with a `kubectl apply -f https://...` or with the channels tool.
In future, we may as a convenience make it easy to add optional addons to the kops manifest,
though this will just be a convenience wrapper around doing it manually.

## Update BootStrap Addons
### Update BootStrap Addons

If you want to update the bootstrap addons, you can run the following command to show you which addons need updating. Add `--yes` to actually apply the updates.
**channels apply channel s3://*KOPS_S3_BUCKET*/*CLUSTER_NAME*/addons/bootstrap-channel.yaml**
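
For example, with placeholder values substituted for the state store bucket and cluster name, checking and then applying the updates might look like:

```
# show which bootstrap addons need updating
channels apply channel s3://example-state-store/example.cluster.k8s.local/addons/bootstrap-channel.yaml
# apply the updates
channels apply channel s3://example-state-store/example.cluster.k8s.local/addons/bootstrap-channel.yaml --yes
```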

## Versioning
### Versioning

The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
of the various manifest versions that are available. In this way kops can manage updates

@@ -71,7 +228,7 @@ The `selector` determines the objects which make up the addon. This will be use
to construct a `--prune` argument (TODO), so that objects that existed in the
previous but not the new version will be removed as part of an upgrade.

## Kubernetes Version Selection
### Kubernetes Version Selection

The addon manager now supports a `kubernetesVersion` field, which is a semver range specifier
on the kubernetes version. If the targeted version of kubernetes does not match the semver

@@ -104,7 +261,7 @@ versions 1.6 on we will install `v1.6.0.yaml`.
Note that we remove the `pre-release` field of the kubernetes semver, so that `1.6.0-beta.1`
will match `>=1.6.0`. This matches the way kubernetes does pre-releases.

## Semver is not enough: `id`
### Semver is not enough: `id`

However, semver is insufficient here with the kubernetes version selection. The problem
arises in the following scenario:

@@ -68,8 +68,7 @@ nav:
    - Commands: "usage/commands.md"
    - Arguments: "usage/arguments.md"
  - Operations:
    - Cluster addon manager: "addon_manager.md"
    - Cluster addons: "addons.md"
    - Cluster Addons & Manager : "operations/addons.md"
    - Cluster configuration management: "changing_configuration.md"
    - Cluster desired configuration creation from template: "cluster_template.md"
    - Cluster upgrades and migrations: "cluster_upgrades_and_migrations.md"