Merge pull request #10077 from hakman/kOps

Rebrand kops to kOps
This commit is contained in:
Kubernetes Prow Robot 2020-10-29 18:16:06 -07:00 committed by GitHub
commit 9885df83ad
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
107 changed files with 530 additions and 532 deletions

@ -1,4 +1,4 @@
# kops - Kubernetes Operations
# kOps - Kubernetes Operations
[![Build Status](https://travis-ci.org/kubernetes/kops.svg?branch=master)](https://travis-ci.org/kubernetes/kops) [![Go Report Card](https://goreportcard.com/badge/k8s.io/kops)](https://goreportcard.com/report/k8s.io/kops) [![GoDoc Widget]][GoDoc]
@ -9,7 +9,7 @@
The easiest way to get a production-grade Kubernetes cluster up and running.
## What is kops?
## What is kOps?
We like to think of it as something like `kubectl` for clusters.
@ -62,26 +62,26 @@ The documentation is in the `/docs` directory, [and the index is here.](docs/
### Kubernetes Version Support
kops is intended to be backwards compatible. It is always recommended to use the
latest version of kops with whatever version of Kubernetes you are using. Always
use the latest version of kops.
kOps is intended to be backwards compatible. It is always recommended to use the
latest version of kOps with whatever version of Kubernetes you are using. Always
use the latest version of kOps.
One exception, as far as compatibility is concerned: kops supports the equivalent of
One exception, as far as compatibility is concerned: kOps supports the equivalent of
one Kubernetes minor version. A minor version is the second digit in the
version number. kops version 1.8.0 has a minor version of 8. The numbering
version number. kOps version 1.8.0 has a minor version of 8. The numbering
follows the semantic versioning specification, MAJOR.MINOR.PATCH.
For example, kops 1.8.0 does not support Kubernetes 1.9.2, but kops 1.9.0
For example, kOps 1.8.0 does not support Kubernetes 1.9.2, but kOps 1.9.0
supports Kubernetes 1.9.2 and earlier Kubernetes versions. Only when the minor version
of kops matches the minor version of kubernetes does kops officially support
the kubernetes release. kops does not prevent a user from installing mismatching
versions of K8s, but Kubernetes releases always require kops to install
of kOps matches the minor version of kubernetes does kOps officially support
the kubernetes release. kOps does not prevent a user from installing mismatching
versions of K8s, but Kubernetes releases always require kOps to install
specific versions of components such as docker, tested against the particular
Kubernetes version.
#### Compatibility Matrix
| kops version | k8s 1.12.x | k8s 1.13.x | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x |
| kOps version | k8s 1.12.x | k8s 1.13.x | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x |
|---------------|------------|------------|------------|------------|------------|
| 1.16.0 | ✔ | ✔ | ✔ | ✔ | ✔ |
| 1.15.x | ✔ | ✔ | ✔ | ✔ | ⚫ |
@ -89,14 +89,14 @@ particular de Kubernetes.
| ~~1.13.x~~ | ✔ | ✔ | ⚫ | ⚫ | ⚫ |
| ~~1.12.x~~ | ✔ | ⚫ | ⚫ | ⚫ | ⚫ |
Use the latest version of kops for all versions of Kubernetes, with the caveat that higher versions of Kubernetes do not have _official_ kops support.
Use the latest version of kOps for all versions of Kubernetes, with the caveat that higher versions of Kubernetes do not have _official_ kOps support.
### kops Release Schedule
### kOps Release Schedule
This project does not follow the Kubernetes release schedule. `kops` aims to
provide a reliable installation experience for Kubernetes, and is typically released
about a month after the corresponding Kubernetes release. This allows the Kubernetes project time to resolve any issues introduced by the new version and ensures that we can support the latest features. kops will publish alpha and beta pre-releases for people who are eager to try the latest version of Kubernetes.
Only use pre-GA kops releases in environments that can tolerate the quirks of new releases, and please report any issues you encounter.
about a month after the corresponding Kubernetes release. This allows the Kubernetes project time to resolve any issues introduced by the new version and ensures that we can support the latest features. kOps will publish alpha and beta pre-releases for people who are eager to try the latest version of Kubernetes.
Only use pre-GA kOps releases in environments that can tolerate the quirks of new releases, and please report any issues you encounter.
## Installation
@ -132,12 +132,12 @@ information about changes between releases.
## Getting Involved and Contributing
Are you interested in contributing to kops? We, the maintainers and the community,
Are you interested in contributing to kOps? We, the maintainers and the community,
would love your suggestions, contributions, and help.
We have a quick-start guide on [adding a feature](/docs/development/adding_a_feature.md). Also, you
can contact the maintainers at any time to learn more about
how to get involved.
In the interest of getting more people involved with kops, we are starting to
In the interest of getting more people involved with kOps, we are starting to
tag issues with `good-starter-issue`. These are typically issues that have
a smaller scope, but are good ways to get familiar with the codebase.
@ -168,7 +168,7 @@ This repository uses the Kubernetes bots. There is a full list of the commands
## Office Hours
Kops maintainers have set aside one hour every other week for public **office hours**. Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [5 pm UTC/12 noon ET/9 am US Pacific](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12), on odd-numbered weeks. We strive to meet and help developers, whether they are working on `kops` or interested in learning more about the project.
kOps maintainers have set aside one hour every other week for public **office hours**. Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [5 pm UTC/12 noon ET/9 am US Pacific](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12), on odd-numbered weeks. We strive to meet and help developers, whether they are working on `kops` or interested in learning more about the project.
### Open Topics for Office Hours
@ -176,12 +176,12 @@ Kops maintainers have set aside one hour every other week for public **office hours**.
Including but not limited to:
- Help and guidance for attendees who are interested in contributing.
- Discuss the current state of the kops project, including releases.
- Discuss the current state of the kOps project, including releases.
- Strategize how to move `kops` forward.
- Collaborate on open and upcoming PRs.
- Present demos.
This time is focused on developers, although we will never turn away a polite participant. Drop by, even if you've never installed Kops.
This time is focused on developers, although we will never turn away a polite participant. Drop by, even if you've never installed kOps.
We encourage you to reach out **beforehand** if you plan to attend. You can join any session, and feel free to add an item to the [agenda](https://docs.google.com/document/d/12QkyL0FkNbWPcLFxxRGSPt_tNPBHbmni3YLY-lHny7E/edit) where we track notes for office hours.
@ -193,7 +193,7 @@ You can check your week number using:
date +%V
```
The maintainers and other community members are generally available on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media) in [#kops](https://kubernetes.slack.com/messages/kops/), so come and chat with us about how kops can be better for you!
The maintainers and other community members are generally available on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media) in [#kops](https://kubernetes.slack.com/messages/kops/), so come and chat with us about how kOps can be better for you!
## GitHub Issues
@ -205,18 +205,18 @@ If you think you have found a bug, please follow the instructions below.
- Please spend a small amount of time doing due diligence on the issue tracker. Your issue may be a duplicate.
- Set the `-v 10` command line option and save the log output. Please paste this into your issue (see the example after this list).
- Note the version of kops you are running (from `kops version`), and the command line options you are using.
- Note the version of kOps you are running (from `kops version`), and the command line options you are using.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember that users might search for your issue in the future, so please give it a meaningful title to help others.
- Feel free to reach out to the kops community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
- Feel free to reach out to the kOps community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
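For example (the command and log file name below are only an illustration; any kops command accepts the verbosity flag):

```bash
kops update cluster --name $NAME -v 10 &> kops.log
```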
### Features
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help make kops even more awesome, follow the steps below.
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help make kOps even more awesome, follow the steps below.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember that users might search for your issue in the future, so please give it a meaningful title to help others.
- Clearly define the use case, using concrete examples. E.g.: I type `this` and kops does `that`.
- Clearly define the use case, using concrete examples. E.g.: I type `this` and kOps does `that`.
- Some of our larger features will require some design. If you would like to include a technical design for your feature, please include it in the issue.
- After the new feature is well understood and the design is agreed upon, we can start coding the feature. We would love for you to code it. So please open up a **WIP** *(work in progress)* pull request and happy coding!

@ -1,6 +1,4 @@
<img src="/docs/img/logo.jpg" width="500px" alt="kops logo">
# kops - Kubernetes Operations
# kOps - Kubernetes Operations
[![Build Status](https://travis-ci.org/kubernetes/kops.svg?branch=master)](https://travis-ci.org/kubernetes/kops) [![Go Report Card](https://goreportcard.com/badge/k8s.io/kops)](https://goreportcard.com/report/k8s.io/kops) [![GoDoc Widget]][GoDoc]
@ -12,9 +10,9 @@ The easiest way to get a production grade Kubernetes cluster up and running.
## 2020-05-06 etcd-manager Certificate Expiration Advisory
kops versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kops to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./docs/advisories/etcd-manager-certificate-expiration.md) for the full details.
kOps versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kOps to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./docs/advisories/etcd-manager-certificate-expiration.md) for the full details.
## What is kops?
## What is kOps?
We like to think of it as `kubectl` for clusters.
@ -54,7 +52,7 @@ See [Contributing](https://kops.sigs.k8s.io/welcome/contributing/)
### Office Hours
Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.
We do maintain an [agenda](https://docs.google.com/document/d/12QkyL0FkNbWPcLFxxRGSPt_tNPBHbmni3YLY-lHny7E/edit) and stick to it as much as possible. If you want to hold the floor, put your item in this doc. Bullet/note form is fine. Even if your topic gets in late, we do our best to cover it.

@ -1,42 +1,42 @@
# ROADMAP
## VERSION SUPPORT
kops 1.N.x _officially_ supports Kubernetes 1.N.x and prior versions. We understand that those in the community run a wide selection of versions and we do our best to maintain backward compatibility as far as we can.
kOps 1.N.x _officially_ supports Kubernetes 1.N.x and prior versions. We understand that those in the community run a wide selection of versions and we do our best to maintain backward compatibility as far as we can.
However, kops 1.N.x does NOT support Kubernetes 1.N+1.x. Sometimes you get lucky and kops 1.N will technically install a later version of Kubernetes, but we cannot guarantee or support this situation. As always, we recommend waiting for the official release of kops with minor version >= the version of Kubernetes you wish to install. Please see the [compatibility matrix](README.md#Compatibility_Matrix) for further questions.
However, kOps 1.N.x does NOT support Kubernetes 1.N+1.x. Sometimes you get lucky and kOps 1.N will technically install a later version of Kubernetes, but we cannot guarantee or support this situation. As always, we recommend waiting for the official release of kOps with minor version >= the version of Kubernetes you wish to install. Please see the [compatibility matrix](README.md#Compatibility_Matrix) for further questions.
## RELEASE SCHEDULE
There is a natural lag between the release of Kubernetes and the corresponding version of kops that has full support for it. While the first patch versions of a minor Kubernetes release are burning in, the kops team races to incorporate all the updates needed to release. Once we have both some stability in the upstream version of Kubernetes AND full support in kops, we will cut a release that includes version specific configuration and a selection of add-ons to match.
There is a natural lag between the release of Kubernetes and the corresponding version of kOps that has full support for it. While the first patch versions of a minor Kubernetes release are burning in, the kOps team races to incorporate all the updates needed to release. Once we have both some stability in the upstream version of Kubernetes AND full support in kOps, we will cut a release that includes version specific configuration and a selection of add-ons to match.
In practice, sometimes this means that the kops release lags the upstream release by 1 or more months. We sincerely try to avoid this scenario; we understand how important this project is and respect the need that teams have to maintain their clusters.
In practice, sometimes this means that the kOps release lags the upstream release by 1 or more months. We sincerely try to avoid this scenario; we understand how important this project is and respect the need that teams have to maintain their clusters.
Our goal is to have an official kops release no later than a month after the corresponding Kubernetes version is released. Please help us achieve this timeline and meet our goals by jumping in and giving us a hand. We always need assistance closing issues, reviewing PRs, and contributing code! Stop by office hours if you're interested.
Our goal is to have an official kOps release no later than a month after the corresponding Kubernetes version is released. Please help us achieve this timeline and meet our goals by jumping in and giving us a hand. We always need assistance closing issues, reviewing PRs, and contributing code! Stop by office hours if you're interested.
A rough outline of the timeline/release cycle with respect to the Kubernetes release follows. We are revising the automation around the release process so that we can get alpha and beta releases out to the community and other developers much faster for testing and to get more eyes on open issues.
Example release timeline based on Kubernetes quarterly release cycle:
July 1: Kubernetes 1.W.0 is released.
July 7: kops 1.W.beta1
July 21: kops 1.W.0 released
August 15: kops 1.W+1alpha1
August 31: kops 1.W+1alpha2
July 7: kOps 1.W.beta1
July 21: kOps 1.W.0 released
August 15: kOps 1.W+1alpha1
August 31: kOps 1.W+1alpha2
... etc
September 25: Kubernetes 1.W+1.RC-X
Oct 1: Kubernetes 1.W+1.0
Oct 7: kops 1.W+1beta1
Oct 21: kops 1.W+1.0
Oct 7: kOps 1.W+1beta1
Oct 21: kOps 1.W+1.0
## UPCOMING RELEASES
### kops 1.17
### kOps 1.17
* Full support for Kubernetes 1.17
### kops 1.18
### kOps 1.18
* Full support for Kubernetes 1.18
* Support for Containerd as an alternate container runtime

@ -13,7 +13,7 @@ Instructions for reporting a vulnerability can be found on the
## Supported Versions
Information about supported Kops versions and the Kubernetes versions they support can be found on the
Information about supported kOps versions and the Kubernetes versions they support can be found on the
[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the
[Kubernetes version and version skew support policy] page on the Kubernetes website.

@ -32,7 +32,7 @@ kubectl apply -f ${addon}
An enhanced script which also adds the IAM policies is included here [cluster-autoscaler.sh](cluster-autoscaler.sh)
Question: Which ASG group should be autoscaled?
Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kops instancegroup, and update the cluster so the maxSize propagates to the ASG.
Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancegroup, and update the cluster so the maxSize propagates to the ASG.
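For example, a minimal sketch of raising the ceiling on the default group (the instance group and cluster names are assumptions; substitute your own):

```bash
# Raise maxSize on the "nodes" instancegroup, then let kops propagate it to the ASG.
kops edit ig nodes --name $CLUSTER_NAME      # set spec.maxSize, e.g. maxSize: 10
kops update cluster --name $CLUSTER_NAME --yes
```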
Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM Policy. Which IAM Role should the Policy be attached to?
Answer: Kops creates two Roles, nodes.$CLUSTER_NAME and masters.$CLUSTER_NAME. Currently the example scripts run the autoscaler process on the k8s master node, so the IAM Policy should be assigned to masters.$CLUSTER_NAME (substituting that variable for your actual cluster name).
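As an illustration only (the policy file name and the use of the AWS CLI here are assumptions, not taken from the bundled scripts), the policy could be attached to the masters role like this:

```bash
# Attach the cluster-autoscaler IAM policy to the masters role of the cluster.
aws iam put-role-policy \
  --role-name masters.$CLUSTER_NAME \
  --policy-name cluster-autoscaler \
  --policy-document file://cluster-autoscaler-policy.json
```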

@ -111,7 +111,7 @@ kube-ingress-aws-controller, which we will use:
}
```
To apply the mentioned policy you have to add [additionalPolicies with kops](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md) for your cluster, so edit your cluster.
To apply the mentioned policy you have to add [additionalPolicies with kOps](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md) for your cluster, so edit your cluster.
```
kops edit cluster $KOPS_CLUSTER_NAME
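# A hedged sketch (the additionalPolicies field is documented in the link above) of what to add
# under "spec:" once the editor opens, embedding the policy statements shown earlier for the nodes:
#
#   additionalPolicies:
#     node: |
#       [ ...policy statements from the JSON document above... ]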

@ -1,6 +1,6 @@
# Prometheus Operator Addon
[Prometheus Operator](https://coreos.com/operators/prometheus) creates/configures/manages Prometheus clusters atop Kubernetes. This addon deploys prometheus-operator and [kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md) in a kops cluster.
[Prometheus Operator](https://coreos.com/operators/prometheus) creates/configures/manages Prometheus clusters atop Kubernetes. This addon deploys prometheus-operator and [kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md) in a kOps cluster.
## Prerequisites

@ -1,4 +1,4 @@
# Download kops config spec file
# Download kOps config spec file
kops operates off of a config spec file that is generated during the create phase. It is uploaded to the Amazon S3 bucket that is passed in during create.
@ -17,7 +17,7 @@ export NETWORKCIDR="10.240.0.0/16"
export MASTER_SIZE="m3.large"
export WORKER_SIZE="m4.large"
```
Next you call the kops command to create the cluster in your terminal:
Next you call the kOps command to create the cluster in your terminal:
```shell
kops create cluster $NAME \
@ -33,9 +33,9 @@ kops create cluster $NAME \
--ssh-public-key=$HOME/.ssh/lab_no_password.pub
```
## kops command
## kOps command
You can simply use the kops command `kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml`
You can simply use the kOps command `kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml`
Note: for the above command to work the cluster NAME and the KOPS_STATE_STORE will have to be exported in your environment.
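For instance (the cluster name and state store bucket below are placeholders, not values from this guide):

```bash
export NAME="k8stest.mydomain.com"
export KOPS_STATE_STORE="s3://prefix-example-com-state-store"
kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml
```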

@ -1,4 +1,4 @@
## kops Advisories
## kOps Advisories
- [etcd-manager certificate expiration](etcd-manager-certificate-expiration.md)

@ -5,7 +5,7 @@ kube-dns component. This component is the default DNS component installed in
Kubernetes. The vulnerability may be externally exploitable. Links below exist
with the full detail of the CVE. This exploit is not a Kubernetes specific vulnerability but exists in dnsmasq.
## Current kops Status
## Current kOps Status
`kops` release 1.7.1 addresses this CVE. This version of `kops` will upgrade and
create clusters. `kops` 1.8.0.alpha.1 release does not contain the required
@ -43,15 +43,15 @@ successfully, but we cannot validate full production stability. Local testing
in a non-production environment is always recommended. We are not able to
quantify the risk of using a non-tested version.
## Fixed kops releases
## Fixed kOps releases
We are planning to include the fix in the 1.8.x kops releases. The 1.7.1 release already ships with
We are planning to include the fix in the 1.8.x kOps releases. The 1.7.1 release already ships with
the needed changes. If you are using the 1.8.x alpha releases, we recommend
applying the hotfixes.
### Fixed kops Version Matrix
### Fixed kOps Version Matrix
| kops Version | Fixed | Released | Will Fix | URL |
| kOps Version | Fixed | Released | Will Fix | URL |
|---|---|---|---|---|
| 1.7.1 | Y | Y | Not Applicable | [here](https://github.com/kubernetes/kops/releases/tag/1.7.1) |
| master | Y | N | Not Applicable | [here](https://github.com/kubernetes/kops) |
@ -59,12 +59,12 @@ applying the hotfixes.
| 1.8.0.alpha.1 | N | Y | N | Not Applicable |
| 1.7.0 | N | Y | N | Not Applicable |
## kops PR fixes
## kOps PR fixes
- Fixed by @mikesplain in [#3511](https://github.com/kubernetes/kops/pull/3511)
- Fixed by @mikesplain in [#3538](https://github.com/kubernetes/kops/pull/3538)
## kops Tracking Issue
## kOps Tracking Issue
- Filed by @chrislovecnm [#3512](https://github.com/kubernetes/kops/issues/3512)

@ -5,13 +5,13 @@ overwrite the host runc binary and consequently obtain host root access. For
more information, please see the [NIST advisory][NIST advisory]
or the [kubernetes advisory][kubernetes advisory].
For kops, kops releases 1.11.1 or later include workarounds, but note that the
For kOps, kOps releases 1.11.1 or later include workarounds, but note that the
fixes depend on the version of kubernetes you are running. Because kubernetes
1.10 and 1.11 were only validated with Docker version 17.03.x (and earlier), and
because Docker has chosen not to offer fixes for 17.03 in OSS, there is no
patched version.
**You must update to kops 1.11.1 (or later) if you are running kubernetes <=
**You must update to kOps 1.11.1 (or later) if you are running kubernetes <=
1.11.x to get this fix**
However, there is [an alternative](https://seclists.org/oss-sec/2019/q1/134) to
@ -27,7 +27,7 @@ anyway) and pods that have explicitly been granted CAP_LINUX_IMMUTABLE in the
running kubernetes 1.11 (or earlier), you should consider one of the
alternative fixes listed below**
## Summary of fixes that ship with kops >= 1.11.1
## Summary of fixes that ship with kOps >= 1.11.1
* Kubernetes 1.11 (and earlier): we mark runc with the immutable attribute.
* Kubernetes 1.12 (and later): we install a version of docker that includes a
@ -36,13 +36,13 @@ anyway) and pods that have explicitly been granted CAP_LINUX_IMMUTABLE in the
## Alternative fixes for users of kubernetes 1.11 (or earlier)
* Anticipate upgrading to kubernetes 1.12 earlier than previously planned. We are
accelerating the kops 1.12 release to facilitate this.
accelerating the kOps 1.12 release to facilitate this.
* Consider replacing the docker version with 18.06.3 or later (see the sketch after this list). Note that this
will "pin" your docker version and in future you will want to remove this to
get future docker upgrades. (Do not select docker 18.06.2 on Redhat/Centos,
that version was mistakenly packaged by Docker without including the fix)
* Consider replacing just runc - some third parties have backported the fix to
runc 17.03, and our wonderful community of kops users has shared their
runc 17.03, and our wonderful community of kOps users has shared their
approaches to patching runc, see
[here](https://github.com/kubernetes/kops/issues/6459) and
[here](https://github.com/kubernetes/kops/issues/6476#issuecomment-465861406).
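A hedged sketch of the Docker-pinning option (the `docker.version` field comes from the cluster spec documentation; the cluster name is a placeholder, and the pin should be removed later to resume normal Docker upgrades):

```bash
kops edit cluster $NAME
# then, under spec:, pin the Docker version:
#   docker:
#     version: 18.06.3
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes
```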

@ -9,13 +9,13 @@ If these certificates are not rotated prior to their expiration, Kubernetes apis
## How do I know if I'm affected?
Clusters are affected by this issue if they're using a version of etcd-manager < `3.0.20200428`.
The etcd-manager version is set automatically based on the Kops version.
These Kops versions are affected:
The etcd-manager version is set automatically based on the kOps version.
These kOps versions are affected:
* Kops 1.10.0-alpha.1 through 1.15.2
* Kops 1.16.0-alpha.1 through 1.16.1
* Kops 1.17.0-alpha.1 through 1.17.0-beta.1
* Kops 1.18.0-alpha.1 through 1.18.0-alpha.2
* kOps 1.10.0-alpha.1 through 1.15.2
* kOps 1.16.0-alpha.1 through 1.16.1
* kOps 1.17.0-alpha.1 through 1.17.0-beta.1
* kOps 1.18.0-alpha.1 through 1.18.0-alpha.2
The issue can be confirmed by checking for the existence of etcd-manager pods and observing their image tags:
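For example, one hedged way to list the image tags (pod names and namespaces can vary between clusters):

```bash
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' \
  | grep etcd-manager
```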
@ -34,9 +34,9 @@ Upgrade etcd-manager. etcd-manager version >= `3.0.20200428` manages certificate
We have two suggested workflows to upgrade etcd-manager in your cluster. While both workflows require a rolling-update of the master nodes, neither requires control-plane downtime (if the clusters have highly available masters).
1. Upgrade to Kops 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3.
1. Upgrade to kOps 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3.
This is the recommended approach.
Follow the normal steps when upgrading Kops and confirm the etcd-manager image will be updated based on the output of `kops update cluster`.
Follow the normal steps when upgrading kOps and confirm the etcd-manager image will be updated based on the output of `kops update cluster`.
```
kops update cluster --yes
kops rolling-update cluster --instance-group-roles=Master --cloudonly

@ -12,8 +12,8 @@
* All unpatched versions of linux are vulnerable when running on affected hardware, across all platforms (AWS, GCE, etc)
* Patches are included in Linux 4.4.110 for 4.4, 4.9.75 for 4.9, 4.14.12 for 4.14.
* kops can run an image of your choice, so we can only provide detailed advice for the default image.
* By default, kops runs an image that includes the 4.4 kernel. An updated image is available with the patched version (4.4.110). Users running the default image are strongly encouraged to upgrade.
* kOps can run an image of your choice, so we can only provide detailed advice for the default image.
* By default, kOps runs an image that includes the 4.4 kernel. An updated image is available with the patched version (4.4.110). Users running the default image are strongly encouraged to upgrade.
* If running another image please see your distro for updated images.
## CVEs
@ -56,7 +56,7 @@ other vendors for the appropriate AMI version.
### Update Process
For all examples please replace `$CLUSTER` with the appropriate kops cluster
For all examples please replace `$CLUSTER` with the appropriate kOps cluster
name.
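For instance (the cluster name below is only an illustration):

```bash
export CLUSTER="mycluster.example.com"
kops get instancegroups --name $CLUSTER
```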
#### List instance groups

@ -1,6 +1,6 @@
# Authentication
Kops has support for configuring authentication systems. This should not be used with kubernetes versions
kOps has support for configuring authentication systems. This should not be used with kubernetes versions
before 1.8.5 because of a serious bug with apimachinery [#55022](https://github.com/kubernetes/kubernetes/issues/55022).
## kopeio authentication

@ -1,14 +1,14 @@
# How to use kops in AWS China Region
# How to use kOps in AWS China Region
## Getting Started
Kops used to support only Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2, `gossip` has been added, which makes it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, provisioning a fully-functional kubernetes cluster in the AWS China Region [which doesn't have Route53 so far][1] has been officially supported since [1.7][2]. It should support both `cn-north-1` and `cn-northwest-1`, but only `cn-north-1` is tested.
kOps used to support only Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2, `gossip` has been added, which makes it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, provisioning a fully-functional kubernetes cluster in the AWS China Region [which doesn't have Route53 so far][1] has been officially supported since [1.7][2]. It should support both `cn-north-1` and `cn-northwest-1`, but only `cn-north-1` is tested.
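Gossip-based clusters are selected by using a cluster name that ends in `.k8s.local`; a hedged example (the name and zone below are placeholders):

```bash
kops create cluster --name cluster.k8s.local --zones cn-north-1a
```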
Most of the following procedures to provision a cluster are the same with [the guide to use kops in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.
Most of the following procedures to provision a cluster are the same with [the guide to use kOps in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.
*NOTE: THE FOLLOWING PROCEDURES ARE ONLY TESTED WITH KOPS 1.10.0, 1.10.1 AND KUBERNETES 1.9.11, 1.10.12*
### [Install kops](getting_started/aws.md#install-kops)
### [Install kOps](getting_started/aws.md#install-kops)
### [Install kubectl](getting_started/aws.md#install-kubectl)
@ -53,9 +53,9 @@ aws s3api create-bucket --bucket prefix-example-com-state-store --create-bucket-
First of all, we have to solve the slow and unstable connection to the internet outside China, or the following processes won't work. One way to do that is to build a NAT instance which can route the traffic via some reliable connection. The details won't be discussed here.
### Prepare kops ami
### Prepare kOps ami
We have to build our own AMI because there is [no official kops ami in AWS China Regions][3]. There are two ways to accomplish this.
We have to build our own AMI because there is [no official kOps ami in AWS China Regions][3]. There are two ways to accomplish this.
#### ImageBuilder **RECOMMENDED**
@ -93,7 +93,7 @@ ${GOPATH}/bin/imagebuilder --config aws-1.9-jessie.yaml --v=8 --publish=false --
#### Copy AMI from another region
Follow [the comment][5] to copy the kops image from another region, e.g. `ap-southeast-1`.
Follow [the comment][5] to copy the kOps image from another region, e.g. `ap-southeast-1`.
#### Get the AMI id

@ -1,4 +1,4 @@
# Bastion in Kops
# Bastion in kOps
A bastion provides an external-facing point of entry into a network containing private network instances. This host can provide a single point of fortification or audit, and can be started and stopped to enable or disable inbound SSH communication from the Internet; some call the bastion the "jump server".
@ -126,7 +126,7 @@ ssh admin@<master_ip>
### Changing your ELB idle timeout
The bastion is accessed via an AWS ELB. The ELB is required to gain secure access into the private network and connect the user to the ASG that the bastion lives in. Kops will by default set the bastion ELB idle timeout to 5 minutes. This is important for SSH connections to the bastion that you plan to keep open.
The bastion is accessed via an AWS ELB. The ELB is required to gain secure access into the private network and connect the user to the ASG that the bastion lives in. kOps will by default set the bastion ELB idle timeout to 5 minutes. This is important for SSH connections to the bastion that you plan to keep open.
You can increase the ELB idle timeout by editing the main cluster config `kops edit cluster $NAME` and modifying the following block
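A hedged sketch of that change (the `idleTimeoutSeconds` field name is an assumption based on the bastion spec):

```bash
kops edit cluster $NAME
# then, under spec:, adjust the bastion idle timeout (seconds):
#   topology:
#     bastion:
#       idleTimeoutSeconds: 1200
```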

@ -4,7 +4,7 @@ This is an overview of how a Kubernetes cluster comes up, when using kops.
## From spec to complete configuration
The kops tool itself takes the (minimal) spec of a cluster that the user specifies,
The kOps tool itself takes the (minimal) spec of a cluster that the user specifies,
and computes a complete configuration, setting defaults where values are not specified,
and deriving appropriate dependencies. The "complete" specification includes the set
of all flags that will be passed to all components. All decisions about how to install the
@ -71,7 +71,7 @@ APIServer also listens on the HTTPS port (443) on all interfaces. This is a sec
and requires valid authentication/authorization to use it. This is the endpoint that node kubelets
will reach, and also that end-users will reach.
kops uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
kOps uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
/etc/kubernetes/manifests) includes annotations that will cause the dns-controller to create the
records. It creates `api.internal.mycluster.com` for use inside the cluster (using InternalIP addresses),
and it creates `api.mycluster.com` for use outside the cluster (using ExternalIP addresses).
@ -81,7 +81,7 @@ kops uses DNS to allow nodes and end-users to discover the api-server. The apis
etcd is where we have put all of our synchronization logic, so it is more complicated than most other pieces,
and we must be really careful when bringing it up.
kops follows CoreOS's recommended procedure for [bring-up of etcd on clouds](https://github.com/coreos/etcd/issues/5418):
kOps follows CoreOS's recommended procedure for [bring-up of etcd on clouds](https://github.com/coreos/etcd/issues/5418):
* We have one EBS volume for each etcd cluster member (in different nodes)
* We attach the EBS volume to a master, and bring up etcd on that master

@ -2,7 +2,7 @@
The `Cluster` resource contains the specification of the cluster itself.
The complete list of keys can be found at the [Cluster](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec) reference page.
The complete list of keys can be found at the [Cluster](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec) reference page.
On this page, we will expand on the more important configuration keys.
@ -50,7 +50,7 @@ spec:
You can use a valid SSL Certificate for your API Server Load Balancer. Currently, only AWS is supported.
Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of Kops 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of kOps 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
```yaml
spec:
@ -61,7 +61,7 @@ spec:
```
*Openstack only*
As of Kops 1.12.0 it is possible to use the load balancer internally by setting the `useForInternalApi: true`.
As of kOps 1.12.0 it is possible to use the load balancer internally by setting the `useForInternalApi: true`.
This will point both `masterPublicName` and `masterInternalName` to the load balancer. You can therefore set both of these to the same value in this configuration.
```yaml
@ -84,7 +84,7 @@ spec:
### The default etcd configuration
Kops defaults to etcd v3 using TLS. etcd provisioning and upgrades are handled by etcd-manager. By default, the spec looks like this:
kOps defaults to etcd v3 using TLS. etcd provisioning and upgrades are handled by etcd-manager. By default, the spec looks like this:
```yaml
etcdClusters:
@ -106,11 +106,11 @@ etcdClusters:
name: events
```
The etcd version used by kops follows the recommended etcd version for the given kubernetes version. It is possible to override this by adding the `version` key to each of the etcd clusters.
The etcd version used by kOps follows the recommended etcd version for the given kubernetes version. It is possible to override this by adding the `version` key to each of the etcd clusters.
By default, the volumes created for the etcd clusters are `gp2` and 20GB each. The volume size, type, and IOPS (for `io1`) can be configured via their parameters. Conversion between `gp2` and `io1` is not supported, nor are size changes.
As of Kops 1.12.0 it is also possible to modify the requests for your etcd cluster members using the `cpuRequest` and `memoryRequest` parameters.
As of kOps 1.12.0 it is also possible to modify the requests for your etcd cluster members using the `cpuRequest` and `memoryRequest` parameters.
```yaml
etcdClusters:
@ -219,7 +219,7 @@ spec:
zone: us-east-1a
```
In the case that you don't use NAT gateways or internet gateways, Kops 1.12.0 introduced the "External" flag for egress to force kops to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kops, typically with an existing cluster.
In the case that you don't use NAT gateways or internet gateways, kOps 1.12.0 introduced the "External" flag for egress to force kOps to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kOps, typically with an existing cluster.
```yaml
spec:
@ -406,7 +406,7 @@ spec:
## externalDns
This block contains configuration options for your `external-DNS` provider.
The current external-DNS provider is the kops `dns-controller`, which can set up DNS records for Kubernetes resources.
The current external-DNS provider is the kOps `dns-controller`, which can set up DNS records for Kubernetes resources.
`dns-controller` is scheduled to be phased out and replaced with `external-dns`.
```yaml
@ -415,7 +415,7 @@ spec:
watchIngress: true
```
Default _kops_ behavior is false. `watchIngress: true` uses the default _dns-controller_ behavior which is to watch the ingress controller for changes. Set this option at risk of interrupting Service updates in some cases.
Default kOps behavior is false. `watchIngress: true` uses the default _dns-controller_ behavior which is to watch the ingress controller for changes. Set this option at risk of interrupting Service updates in some cases.
## kubelet
@ -460,7 +460,7 @@ spec:
```
### Setting kubelet CPU management policies
Kops 1.12.0 added support for enabling cpu management policies in kubernetes as per [cpu management doc](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies)
kOps 1.12.0 added support for enabling cpu management policies in kubernetes as per [cpu management doc](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies)
To enable them, we have to set the flag `--cpu-manager-policy` to the appropriate value on all the kubelets. This must be specified in the `kubelet` spec in our cluster.yml.
```yaml
@ -489,7 +489,7 @@ spec:
### Configure a Flex Volume plugin directory
An optional flag can be provided within the KubeletSpec to set a volume plugin directory (must be accessible for read/write operations), which is additionally provided to the Controller Manager and mounted accordingly.
Kops will set this for you based on the Operating System in use:
kOps will set this for you based on the Operating System in use:
- ContainerOS: `/home/kubernetes/flexvolume/`
- Flatcar: `/var/lib/kubelet/volumeplugins/`
- Default (in-line with upstream k8s): `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`
@ -630,7 +630,7 @@ spec:
enableProfiling: false
```
For more details on `horizontalPodAutoscaler` flags see the [official HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and the [Kops guides on how to set it up](horizontal_pod_autoscaling.md).
For more details on `horizontalPodAutoscaler` flags see the [official HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and the [kOps guides on how to set it up](horizontal_pod_autoscaling.md).
## Cluster autoscaler
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.15') }}
@ -867,7 +867,7 @@ spec:
## containerd
It is possible to override the [containerd](https://github.com/containerd/containerd/blob/master/README.md) daemon options for all the nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ContainerdConfig) for the full list of options.
It is possible to override the [containerd](https://github.com/containerd/containerd/blob/master/README.md) daemon options for all the nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ContainerdConfig) for the full list of options.
```yaml
spec:
@ -879,7 +879,7 @@ spec:
## docker
It is possible to override Docker daemon options for all masters and nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#DockerConfig) for the full list of options.
It is possible to override Docker daemon options for all masters and nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#DockerConfig) for the full list of options.
### registryMirrors
@ -933,7 +933,7 @@ docker:
## sshKeyName
In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kops to create a new one.
In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kOps to create a new one.
Providing the name of a key already in AWS is an alternative to `--ssh-public-key`.
```yaml
@ -976,7 +976,7 @@ snip
## target
In some use-cases you may wish to augment the target output with extra options. `target` supports a minimal amount of options you can do this with. Currently only the terraform target supports this, but if other use cases present themselves, kops may eventually support more.
In some use-cases you may wish to augment the target output with extra options. `target` supports a minimal amount of options you can do this with. Currently only the terraform target supports this, but if other use cases present themselves, kOps may eventually support more.
```yaml
spec:
@ -992,12 +992,12 @@ Assets define alternative locations from where to retrieve static files and cont
### containerRegistry
The container registry enables kops / kubernetes to pull containers from a managed registry.
The container registry enables kOps / kubernetes to pull containers from a managed registry.
This is useful when pulling containers from the internet is not an option, e.g. because the
deployment is offline / internet restricted or because of special requirements that apply
for deployed artifacts, e.g. auditing of containers.
For a use case example, see [How to use kops in AWS China Region](https://github.com/kubernetes/kops/blob/master/docs/aws-china.md)
For a use case example, see [How to use kOps in AWS China Region](https://github.com/kubernetes/kops/blob/master/docs/aws-china.md)
```yaml
spec:

@ -1,17 +1,17 @@
# Continuous Integration
Using Kops' declarative manifests it is possible to create and manage clusters entirely in a CI environment.
Using kOps' declarative manifests it is possible to create and manage clusters entirely in a CI environment.
Rather than using `kops create cluster` and `kops edit cluster`, the cluster and instance group manifests can be stored in version control.
This allows cluster changes to be made through reviewable commits rather than on a local workstation.
This is ideal for larger teams in order to avoid possible collisions from simultaneous changes being made.
It also provides an audit trail, consistent environment, and centralized view for any Kops commands being run.
It also provides an audit trail, consistent environment, and centralized view for any kOps commands being run.
Running Kops in a CI environment can also be useful for upgrading Kops.
Running kOps in a CI environment can also be useful for upgrading kOps.
Simply download a newer version in the CI environment and run a new pipeline.
This will allow any changes to be reviewed prior to being applied.
This strategy can be extended to sequentially upgrade Kops on multiple clusters, allowing changes to be tested on non-production environments first.
This strategy can be extended to sequentially upgrade kOps on multiple clusters, allowing changes to be tested on non-production environments first.
This page provides examples for managing Kops clusters in CI environments.
This page provides examples for managing kOps clusters in CI environments.
The [Manifest documentation](./manifests_and_customizing_via_api.md) describes how to create the YAML manifest files locally and includes high level examples of commands described below.
If you have a solution for a different CI platform or deployment strategy, feel free to open a Pull Request!
@ -22,7 +22,7 @@ If you have a solution for a different CI platform or deployment strategy, feel
### Requirements
* The GitLab runners that run the jobs need the appropriate permissions to invoke the Kops commands.
* The GitLab runners that run the jobs need the appropriate permissions to invoke the kOps commands.
For clusters in AWS this means providing AWS IAM credentials either with IAM User Keys set as secret variables in the project, or having the runner run on an EC2 instance with an instance profile attached.

@ -1,5 +1,5 @@
In order to develop inside a Docker container you must mount your local copy of
the Kops repo into the container's `GOPATH`. For the official Golang Docker
the kOps repo into the container's `GOPATH`. For the official Golang Docker
image this is simply a matter of running the following command:
```bash

@ -35,7 +35,7 @@ Validation is done in validation.go, and is fairly simple - we just add an error
```go
if v.Ipam != "" {
// "azure" not supported by kops
// "azure" not supported by kOps
allErrs = append(allErrs, IsValidValue(fldPath.Child("ipam"), &v.Ipam, []string{"crd", "eni"})...)
if v.Ipam == kops.CiliumIpamEni {
@ -246,7 +246,7 @@ kops create cluster <clustername> --zones us-east-1b
...
```
If you have changed the dns or kops controllers, you would want to test them as well. To do so, run the respective snippets below before creating the cluster.
If you have changed the dns or kOps controllers, you would want to test them as well. To do so, run the respective snippets below before creating the cluster.
For dns-controller:
@ -282,7 +282,7 @@ Users would simply `kops edit cluster`, and add a value like:
```
Then `kops update cluster --yes` would create the new NodeUpConfig, which is included in the instance startup script
and thus requires a new LaunchConfiguration, and thus a `kops rolling update`. We're working on changing settings
and thus requires a new LaunchConfiguration, and thus a `kops rolling-update`. We're working on changing settings
without requiring a reboot, but likely for this particular setting it isn't the sort of thing you need to change
very often.

@ -1,11 +1,11 @@
# API machinery
Kops uses the Kubernetes API machinery. It is well designed, and very powerful, but you have to
kOps uses the Kubernetes API machinery. It is well designed, and very powerful, but you have to
jump through some hoops to use it.
Recommended reading: [kubernetes API convention doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) and [kubernetes API changes doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).
The kops APIs live in [pkg/apis/kops](https://github.com/kubernetes/kops/tree/master/pkg/apis/kops), both in
The kOps APIs live in [pkg/apis/kops](https://github.com/kubernetes/kops/tree/master/pkg/apis/kops), both in
that directory directly (the unversioned API) and in the versioned subdirectory (`v1alpha2`).
## Updating the generated API code

@ -11,7 +11,7 @@ While bazel works well for small projects, building with kubernetes still has a
* We strip bazel files from external dependencies, so we don't confuse gazelle
## Bazel versions:
For building kops release branches 1.14 and older, you may need to run an older version of bazel such as `0.24.0`. kops 1.15 and newer should be able to use more recent versions of bazel due to deprecation fixes that have not been backported.
For building kOps release branches 1.14 and older, you may need to run an older version of bazel such as `0.24.0`. kOps 1.15 and newer should be able to use more recent versions of bazel due to deprecation fixes that have not been backported.
## How to run

@ -1,6 +1,6 @@
# Building from source
[Installation from a binary](../install.md) is recommended for normal kops operation. However, if you want
[Installation from a binary](../install.md) is recommended for normal kOps operation. However, if you want
to build from source, it is straightforward:
If you don't have a GOPATH:
@ -29,14 +29,14 @@ Cross compiling for things like `nodeup` are now done automatically via `make no
## Debugging
To enable interactive debugging, the kops binary needs to be specially compiled to include debugging symbols.
To enable interactive debugging, the kOps binary needs to be specially compiled to include debugging symbols.
Add `DEBUGGING=true` to the `make` invocation to set the compile flags appropriately.
For example, `DEBUGGING=true make` will produce a kops binary that can be interactively debugged.
For example, `DEBUGGING=true make` will produce a kOps binary that can be interactively debugged.
### Interactive debugging with Delve
[Delve](https://github.com/derekparker/delve) can be used to interactively debug the kops binary.
[Delve](https://github.com/derekparker/delve) can be used to interactively debug the kOps binary.
After installing Delve, you can use it directly, or run it in headless mode for use with an
Interactive Development Environment (IDE).
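For example, a hedged sketch of a headless session (the binary path is an assumption; point Delve at your locally built kops binary):

```bash
dlv --listen=:2345 --headless=true --api-version=2 exec ./kops -- get clusters
```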
@ -46,6 +46,6 @@ and then configure your IDE to connect its debugger to port 2345 on localhost.
## Troubleshooting
- Make sure `$GOPATH` is set, and your [workspace](https://golang.org/doc/code.html#Workspaces) is configured.
- kops will not compile with symlinks in `$GOPATH`. See Go issue [17451](https://github.com/golang/go/issues/17451) for more information
- kOps will not compile with symlinks in `$GOPATH`. See Go issue [17451](https://github.com/golang/go/issues/17451) for more information
- Building kops requires Go 1.15
- Kops will only compile if the source is checked out in `$GOPATH/src/k8s.io/kops`. If you try to use `$GOPATH/src/github.com/kubernetes/kops` you will run into issues with package imports not working as expected.
- kOps will only compile if the source is checked out in `$GOPATH/src/k8s.io/kops`. If you try to use `$GOPATH/src/github.com/kubernetes/kops` you will run into issues with package imports not working as expected.

@ -1,6 +1,6 @@
# Installing Kops via Homebrew
# Installing kOps via Homebrew
Homebrew makes installing kops [very simple for MacOS.](../install.md)
Homebrew makes installing kOps [very simple for MacOS.](../install.md)
```bash
brew update && brew install kops
```
@ -13,7 +13,7 @@ brew update && brew install kops --HEAD
Previously we could also ship development updates to homebrew but their [policy has changed.](https://github.com/Homebrew/brew/pull/5060#issuecomment-428149176)
Note: if you already have kops installed, you need to substitute `upgrade` for `install`.
Note: if you already have kOps installed, you need to substitute `upgrade` for `install`.
You can switch between installed releases with:
```bash
@ -21,9 +21,9 @@ brew switch kops 1.17.0
brew switch kops 1.18.0
```
# Releasing kops to Brew
# Releasing kOps to Brew
Submitting a new release of kops to Homebrew is very simple.
Submitting a new release of kOps to Homebrew is very simple.
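One hedged way to do this from a machine with Homebrew installed (the version, URL, and checksum below are placeholders):

```bash
brew bump-formula-pr \
  --url=https://github.com/kubernetes/kops/archive/1.18.0.tar.gz \
  --sha256=<sha256 of the tarball> \
  kops
```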
### From a homebrew machine

@ -4,7 +4,7 @@
Everything in `kops` is currently driven by a command line interface. We use [cobra](https://github.com/spf13/cobra) to define all of our command line UX.
All of the CLI code for kops can be found in `/cmd/kops` [link](https://github.com/kubernetes/kops/tree/master/cmd/kops)
All of the CLI code for kOps can be found in `/cmd/kops` [link](https://github.com/kubernetes/kops/tree/master/cmd/kops)
For instance, if you are interested in finding the entry point to `kops create cluster` you would look in `/cmd/kops/create_cluster.go`. There you would find a function called `RunCreateCluster()`. That is the entry point of the command.
@ -38,7 +38,7 @@ The `kops` API is a definition of struct members in Go code found [here](https:/
The base level struct of the API is `api.Cluster{}` which is defined [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/cluster.go#L40). The top level struct contains meta information about the object such as the kind and version and the main data for the cluster itself can be found in `cluster.Spec`
It is important to note that the API members are a representation of a Kubernetes cluster. These values are stored in the `kops` **STATE STORE** mentioned above for later use. By design kops does not store information about the state of the cloud in the state store, if it can infer it from looking at the actual state of the cloud.
It is important to note that the API members are a representation of a Kubernetes cluster. These values are stored in the `kops` **STATE STORE** mentioned above for later use. By design kOps does not store information about the state of the cloud in the state store, if it can infer it from looking at the actual state of the cloud.
More information on the API can be found [here](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md).

@ -1,20 +1,20 @@
** This file documents the release process used through kops 1.18.
** This file documents the release process used through kOps 1.18.
For the new process that will be used for 1.19, please see
[the new release process](../release-process.md)**
# Release Process
The kops project is released on an as-needed basis. The process is as follows:
The kOps project is released on an as-needed basis. The process is as follows:
1. An issue is opened proposing a new release, with a changelog since the last release
1. All [OWNERS](https://github.com/kubernetes/kops/blob/master/OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION`, inserts the changelog into the tag message, and pushes the tag with `git push origin $VERSION` (see the sketch after this list)
1. The release issue is closed
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kops $VERSION is released`
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kOps $VERSION is released`
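As an illustrative sketch of the tagging step (the version and remote name are assumptions):
```bash
VERSION=1.18.2              # hypothetical release version
git tag -s ${VERSION}       # paste the changelog into the signed tag message
git push origin ${VERSION}  # assumes the remote is named "origin"
```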
## Branches
We maintain a `release-1.16` branch for kops 1.16.X, `release-1.17` for kops 1.17.X
We maintain a `release-1.16` branch for kOps 1.16.X, `release-1.17` for kOps 1.17.X
etc.
`master` is where development happens. We create new branches from master as a
@ -24,7 +24,7 @@ to focus on the new functionality, and start cherry-picking back more selectivel
to the release branches only as needed.
Generally we don't encourage users to run older kops versions, or older
branches, because newer versions of kops should remain compatible with older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.
Releases should be done from the `release-1.X` branch. The tags should be made
@ -35,15 +35,15 @@ the current `release-1.X` tag.
## New Kubernetes versions and release branches
Typically Kops alpha releases are created off the master branch and beta and stable releases are created off of release branches.
Typically kOps alpha releases are created off the master branch and beta and stable releases are created off of release branches.
In order to create a new release branch off of master prior to a beta release, perform the following steps:
1. Create a new presubmit E2E prow job for the new release branch [here](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/kops-presubmits.yaml).
2. Create a new milestone in the GitHub repo.
3. Update [prow's milestone_applier config](https://github.com/kubernetes/test-infra/blob/dc99617c881805981b85189da232d29747f87004/config/prow/plugins.yaml#L309-L313) to update master to use the new milestone and add an entry for the new branch that targets master's old milestone.
4. Create the new release branch in git and push it to the GitHub repo. This will trigger a warning in #kops-dev as well as trigger the postsubmit job that creates the `https://storage.googleapis.com/k8s-staging-kops/kops/releases/markers/release-1.XX/latest-ci.txt` version marker via cloudbuild.
5. Update [build-pipeline.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-pipeline.py), incrementing `master_k8s_version` and the list of branches to reflect all actively maintained kops branches. Run `make` to regenerate the pipeline jobs.
6. Update the [build-grid.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-grid.py), incrementing the single non-master `kops_version` list item and incrementing the `k8s_versions` values. Update the `ko` suffixes in the `skip_jobs` list to reflect the new kops release branch being tested. Run `make` to regenerate the grid jobs.
5. Update [build-pipeline.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-pipeline.py), incrementing `master_k8s_version` and the list of branches to reflect all actively maintained kOps branches. Run `make` to regenerate the pipeline jobs.
6. Update the [build-grid.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-grid.py), incrementing the single non-master `kops_version` list item and incrementing the `k8s_versions` values. Update the `ko` suffixes in the `skip_jobs` list to reflect the new kOps release branch being tested. Run `make` to regenerate the grid jobs.
An example set of PRs are linked from [this](https://github.com/kubernetes/kops/issues/10079) issue.
@ -167,7 +167,7 @@ k8s-container-image-promoter --snapshot gcr.io/k8s-staging-kops --snapshot-tag $
cd ~/k8s/src/k8s.io/k8s.io
git add k8s.gcr.io/images/k8s-staging-kops/images.yaml
git add artifacts/manifests/k8s-staging-kops/${VERSION}.yaml
git commit -m "Promote artifacts for kops ${VERSION}"
git commit -m "Promote artifacts for kOps ${VERSION}"
git push ${USER}
hub pull-request
```
@ -195,7 +195,7 @@ relnotes -config .shipbot.yaml < /tmp/prs >> docs/releases/${DOC}-NOTES.md
* Add notes
* Publish it
## Release kops to homebrew
## Release kOps to homebrew
* Following the [documentation](homebrew.md) we must release a compatible Homebrew formula with the release.
* This should be done at the same time as the release, and we will iterate on how to improve timing of this.
@ -204,11 +204,11 @@ relnotes -config .shipbot.yaml < /tmp/prs >> docs/releases/${DOC}-NOTES.md
Once we are satisfied the release is sound:
* Bump the kops recommended version in the alpha channel
* Bump the kOps recommended version in the alpha channel (see the sketch below)
Once we are satisfied the release is stable:
* Bump the kops recommended version in the stable channel
* Bump the kOps recommended version in the stable channel
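Bumping the recommended version means editing the channel manifests in the repository. A minimal sketch, assuming the manifests live at `channels/alpha` and `channels/stable`:
```bash
${EDITOR:-vi} channels/alpha    # bump the recommended kOps version for the new release
${EDITOR:-vi} channels/stable   # only once the release is considered stable
git commit -am "Bump recommended kOps version to ${VERSION}"
```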
## Update conformance results with CNCF

View File

@ -33,7 +33,7 @@ Following the examples below, kubetest will download artifacts such as a given K
### Running against an existing cluster
You can run something like the following to have `kubetest` re-use an existing cluster.
This assumes you have already built the kops binary from source. The exact path to the `kops` binary used in the `--kops` flag may differ.
This assumes you have already built the kOps binary from source. The exact path to the `kops` binary used in the `--kops` flag may differ.
```
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort" --provider=aws --deployment=kops --cluster=my.testcluster.com --kops-state=${KOPS_STATE_STORE} --kops ${GOPATH}/bin/kops --extract=ci/latest
@ -49,6 +49,6 @@ By adding the `--up` flag, `kubetest` will spin up a new cluster. In most cases,
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort" --provider=aws --check-version-skew=false --deployment=kops --kops-state=${KOPS_STATE_STORE} --kops ${GOPATH}/bin/kops --kops-args="--network-cidr=192.168.1.0/24" --cluster=my.testcluster.com --up --kops-ssh-key ~/.ssh/id_rsa --kops-admin-access=0.0.0.0/0
```
If you want to run the tests against your development version of kops, you need to upload the binaries and set the environment variables as described in [Adding a new feature](adding_a_feature.md).
If you want to run the tests against your development version of kOps, you need to upload the binaries and set the environment variables as described in [Adding a new feature](adding_a_feature.md).
Since we assume you are using this cluster for testing, we leave the cluster running after the tests have finished so that you can inspect the nodes if anything unexpected happens. If you do not need this, you can add the `--down` flag. Otherwise, just delete the cluster as any other cluster: `kops delete cluster my.testcluster.com --yes`

View File

@ -1,6 +1,6 @@
# Vendoring Go dependencies
kops uses [go mod](https://github.com/golang/go/wiki/Modules) to manage
kOps uses [go mod](https://github.com/golang/go/wiki/Modules) to manage
vendored dependencies.
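A typical workflow for updating a dependency and refreshing the vendor directory, as a sketch (the module path and version are placeholders):
```bash
go get example.com/some/module@v1.2.3   # hypothetical dependency bump
go mod tidy
go mod vendor
```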
## Prerequisites

View File

@ -4,7 +4,7 @@
Kubernetes has moved from etcd2 to etcd3, which is an upgrade that involves Kubernetes API Server
downtime. Technically there is no usable upgrade path from etcd2 to etcd3 that
supports HA scenarios, but kops has enabled it using etcd-manager.
supports HA scenarios, but kOps has enabled it using etcd-manager.
Nonetheless, this remains a *higher-risk upgrade* than most other kubernetes
upgrades - you are strongly recommended to plan accordingly: back up critical
@ -29,7 +29,7 @@ bottom of this doc that outlines how to do that.
## Default upgrades
When upgrading to kubernetes 1.12 with kops 1.12, by default:
When upgrading to kubernetes 1.12 with kOps 1.12, by default:
* Calico and Cilium will be updated to a configuration that uses CRDs
* We will automatically start using etcd-manager
@ -73,20 +73,20 @@ If you would like to upgrade more gradually, we offer the following strategies
to spread out the disruption over several steps. Note that they likely involve
more disruption and are not necessarily lower risk.
### Adopt etcd-manager with kops 1.11 / kubernetes 1.11
### Adopt etcd-manager with kOps 1.11 / kubernetes 1.11
If you don't already have TLS enabled with etcd, you can adopt etcd-manager before
kops 1.12 & kubernetes 1.12 by running:
kOps 1.12 & kubernetes 1.12 by running:
```bash
kops set cluster cluster.spec.etcdClusters[*].provider=manager
```
Then you can proceed to update to kops 1.12 & kubernetes 1.12, as this becomes the default.
Then you can proceed to update to kOps 1.12 & kubernetes 1.12, as this becomes the default.
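The update itself then follows the usual flow; as a sketch (the cluster name is a placeholder):
```bash
kops update cluster ${CLUSTER_NAME} --yes
kops rolling-update cluster ${CLUSTER_NAME} --yes
```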
### Delay adopting etcd-manager with kops 1.12
### Delay adopting etcd-manager with kOps 1.12
To delay adopting etcd-manager with kops 1.12, specify the provider as type `legacy`:
To delay adopting etcd-manager with kOps 1.12, specify the provider as type `legacy`:
```bash
kops set cluster cluster.spec.etcdClusters[*].provider=legacy
@ -94,9 +94,9 @@ kops set cluster cluster.spec.etcdClusters[*].provider=legacy
To remove, `kops edit` your cluster and delete the `provider: Legacy` lines from both etcdCluster blocks.
### Delay adopting etcd3 with kops 1.12
### Delay adopting etcd3 with kOps 1.12
To delay adopting etcd3 with kops 1.12, specify the etcd version as 2.2.1
To delay adopting etcd3 with kOps 1.12, specify the etcd version as 2.2.1
```bash
kops set cluster cluster.spec.etcdClusters[*].version=2.2.1

View File

@ -1,10 +1,10 @@
# KOPS USE-CASE EXAMPLES AND LABORATORY EXERCISES.
This section of our documentation contains typical use-cases for Kops. We'll cover here from the most basic things to very advanced use cases with a lot of technical detail. You can and will be able to reproduce all exercises (if you first read and understand what we did and why we did it) providing you have access to the proper resources.
This section of our documentation contains typical use-cases for kOps. We'll cover everything here, from the most basic setups to very advanced use cases with a lot of technical detail. You will be able to reproduce all the exercises (if you first read and understand what we did and why we did it), provided you have access to the proper resources.
All exercises require you to prepare your base environment (with kops and kubectl). You can see the ["basic requirements"](basic-requirements.md) document, which is a common set of procedures for all our exercises. Please note that all the exercises covered here are production-oriented.
Our exercises are divided on "chapters". Each chapter covers a specific use-case for Kops:
Our exercises are divided into "chapters". Each chapter covers a specific use-case for kOps:
- Chapter I: [USING KOPS WITH COREOS - A MULTI-MASTER/MULTI-NODE PRACTICAL EXAMPLE](coreos-kops-tests-multimaster.md).
- Chapter II: [USING KOPS WITH PRIVATE NETWORKING AND A BASTION HOST IN A HIGHLY-AVAILABLE SETUP](kops-tests-private-net-bastion-host.md).

View File

@ -1,10 +1,10 @@
# Common Basic Requirements For Kops-Related Labs. Pre-Flight Check:
# Common Basic Requirements For kOps-Related Labs. Pre-Flight Check:
Before rushing in to replicate any of the exercises, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial](../getting_started/aws.md) for more information.
Basic requirements:
- Configured AWS cli (aws account set-up with proper permissions/roles needed for kops). Depending on your distro, you can set-up directly from packages, or if you want the most updated version, use `pip` (python package manager) to install by running `pip install awscli` command from your local terminal. Your choice!
- Configured AWS CLI (an AWS account set up with the proper permissions/roles needed for kOps). Depending on your distro, you can install it directly from packages, or, if you want the most up-to-date version, use `pip` (the Python package manager) by running `pip install awscli` from your local terminal. Your choice! (See the sketch after this list.)
- Local ssh key ready on `~/.ssh/id_rsa` / `id_rsa.pub`. You can generate it using `ssh-keygen` command if you don't have one already: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""`.
- AWS Region set.
- Throughout most of the exercises, we'll deploy our clusters in the us-east-1 region (AZs: us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e and us-east-1f).
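The following sketch consolidates the requirements above (commands taken from the bullets; adjust them to your distro and any existing setup):
```bash
pip install awscli                          # or install the awscli package from your distro
aws configure                               # set credentials and the default region (e.g. us-east-1)
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""    # only if you don't already have a key pair
```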

View File

@ -196,7 +196,7 @@ If you don't want KOPS to auto-select the instance type, you can use the followi
But, before doing that, always ensure the instance types are available in your desired AZ.
NOTE: More arguments and kops commands are described [here](../cli/kops.md).
NOTE: More arguments and kOps commands are described [here](../cli/kops.md).
Let's continue exploring our cluster, but now with "kubectl":

View File

@ -279,7 +279,7 @@ ${NAME}
A few things to note here:
- The environment variable ${NAME} was previously exported with our cluster name: mycluster01.kopsclustertest.example.org.
- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- "--cloud=aws": As kOps grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e.
- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters).
- We are including the arguments "--node-size" and "--master-size" to specify the "instance types" for both our masters and worker nodes.
@ -312,7 +312,7 @@ I0906 09:42:29.215995 13538 executor.go:91] Tasks: 71 done / 75 total; 4 can r
I0906 09:42:30.073417 13538 executor.go:91] Tasks: 75 done / 75 total; 0 can run
I0906 09:42:30.073471 13538 dns.go:152] Pre-creating DNS records
I0906 09:42:32.403909 13538 update_cluster.go:247] Exporting kubecfg for cluster
Kops has set your kubectl context to mycluster01.kopsclustertest.example.org
kOps has set your kubectl context to mycluster01.kopsclustertest.example.org
Cluster is starting. It should be ready in a few minutes.
@ -787,11 +787,11 @@ You can see how your cluster scaled up to 3 nodes.
**SCALING RECOMMENDATIONS:**
- Always think ahead. If you want the ability to scale up to all available zones in the region, be sure to add them to the "--zones=" argument when using the "kops create cluster" command. Example: --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e. That will make things simpler later.
- For the masters, always consider "odd" numbers starting from 3. Like many other cluster, odd numbers starting from "3" are the proper way to create a fully redundant multi-master solution. In the specific case of "kops", you add masters by adding zones to the "--master-zones" argument on "kops create command".
- For the masters, always consider "odd" numbers starting from 3. As with many other clusters, odd numbers starting from "3" are the proper way to create a fully redundant multi-master solution. In the specific case of "kOps", you add masters by adding zones to the "--master-zones" argument of the "kops create cluster" command (see the sketch after this list).
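As an illustrative sketch (zones, sizes and counts are placeholders), a create command that leaves room to scale across all zones could look like:
```bash
kops create cluster \
  --cloud=aws \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e \
  --node-count=2 \
  --node-size=t3.medium \
  --master-size=t3.medium \
  ${NAME}
```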
## DELETING OUR CLUSTER AND CHECKING OUR DNS SUBDOMAIN:
If we don't need our cluster anymore, let's use a kops command in order to delete it:
If we don't need our cluster anymore, let's use a kOps command in order to delete it:
```bash
kops delete cluster ${NAME} --yes

View File

@ -71,13 +71,13 @@ ${NAME}
A few things to note here:
- The environment variable ${NAME} was previously exported with our cluster name: privatekopscluster.k8s.local.
- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- "--cloud=aws": As kOps grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- For true HA (high availability) at the master level, we need to pick a region with 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. We used "us-east-1a,us-east-1b,us-east-1c" for our masters.
- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). Again, real "HA" on Kubernetes control plane requires 3 masters.
- The "--topology private" argument will ensure that all our instances will have private IP's and no public IP's from amazon.
- We are including the arguments "--node-size" and "--master-size" to specify the "instance types" for both our masters and worker nodes.
- Because we are just doing a simple LAB, we are using "t3.micro" machines. Please DON'T USE t3.micro on real production systems. Start with "t3.medium" as a minimum realistic/workable machine type.
- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](../networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kOps which networking subsystem to use. More information about kOps supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](../networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
**NOTE**: You can add the "--bastion" argument here if you are not using "gossip dns" and want to create the bastion from the start, but if you are using "gossip-dns" this will make the cluster fail (this is a bug we are correcting now). For the moment, don't use "--bastion" when using gossip DNS. We'll show you how to get around this by first creating the private cluster, then creating the bastion instance group once the cluster is running.
@ -114,7 +114,7 @@ ip-172-20-74-55.ec2.internal master True
Your cluster privatekopscluster.k8s.local is ready
```
The ELB created by kops will expose the Kubernetes API trough "https" (configured on our ~/.kube/config file):
The ELB created by kOps will expose the Kubernetes API through "https" (configured in our ~/.kube/config file):
```bash
grep server ~/.kube/config
@ -138,7 +138,7 @@ kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --
**Explanation of this command:**
- This command will add to our cluster definition a new instance group called "bastions" with the "Bastion" role on the AWS subnet "utility-us-east-1a". Note that the "Bastion" role needs the first letter to be a capital (Bastion=ok, bastion=not ok).
- The subnet "utility-us-east-1a" was created when we created our cluster the first time. KOPS add the "utility-" prefix to all subnets created on all specified AZ's. In other words, if we instructed kops to deploy our instances on us-east-1a, use-east-1b and use-east-1c, kops will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". Because we need to tell kops where to deploy our bastion (or bastions), we need to specify the subnet.
- The subnet "utility-us-east-1a" was created when we created our cluster the first time. KOPS add the "utility-" prefix to all subnets created on all specified AZ's. In other words, if we instructed kOps to deploy our instances on us-east-1a, use-east-1b and use-east-1c, kOps will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". Because we need to tell kOps where to deploy our bastion (or bastions), we need to specify the subnet.
You'll see the following output in your editor, where you can change your bastion group size and add more networks.
@ -177,7 +177,7 @@ I0828 13:06:49.761535 16528 executor.go:91] Tasks: 100 done / 116 total; 9 can
I0828 13:06:50.897272 16528 executor.go:91] Tasks: 109 done / 116 total; 7 can run
I0828 13:06:51.516158 16528 executor.go:91] Tasks: 116 done / 116 total; 0 can run
I0828 13:06:51.944576 16528 update_cluster.go:247] Exporting kubecfg for cluster
Kops has set your kubectl context to privatekopscluster.k8s.local
kOps has set your kubectl context to privatekopscluster.k8s.local
Cluster changes have been applied to the cloud.
@ -185,7 +185,7 @@ Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
```
This is "kops" creating the instance group with your bastion instance. Let's validate our cluster:
This is "kOps" creating the instance group with your bastion instance. Let's validate our cluster:
```bash
kops validate cluster
@ -216,13 +216,13 @@ Our bastion instance group is there. Also, kops created an ELB for our "bastions
```bash
aws elb --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}'
bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
```
For this LAB, the "ELB" FQDN is "bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com" We can "ssh" to it:
For this LAB, the "ELB" FQDN is "bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com". We can "ssh" to it:
```bash
ssh -i ~/.ssh/id_rsa ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
ssh -i ~/.ssh/id_rsa ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
@ -250,7 +250,7 @@ Identity added: /home/kops/.ssh/id_rsa (/home/kops/.ssh/id_rsa)
Then, ssh to your bastion ELB FQDN
```bash
ssh -A ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
ssh -A ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
```
Or if you want to automate it:
@ -351,11 +351,11 @@ kops update cluster ${NAME} --yes
W0828 15:22:46.461033 5852 executor.go:109] error running task "LoadBalancer/bastion.privatekopscluster.k8s.local" (1m5s remaining to succeed): subnet changes on LoadBalancer not yet implemented: actual=[subnet-c029639a] -> expected=[subnet-23f8a90f subnet-4a24ef2e subnet-c029639a]
```
This happens because the original ELB created by "kops" only contained the subnet "utility-us-east-1a" and it can't add the additional subnets. In order to fix this, go to your AWS console and add the remaining subnets in your ELB. Then the recurring error will disappear and your bastion layer will be fully redundant.
This happens because the original ELB created by "kOps" only contained the subnet "utility-us-east-1a" and it can't add the additional subnets. In order to fix this, go to your AWS console and add the remaining subnets in your ELB. Then the recurring error will disappear and your bastion layer will be fully redundant.
**NOTE:** Always think ahead: If you are creating a fully redundant cluster (with fully redundant bastions), always configure the redundancy from the beginning.
When you are finished playing with kops, then destroy/delete your cluster:
When you are finished playing with kOps, then destroy/delete your cluster:
Finally, let's destroy our cluster:

View File

@ -31,10 +31,10 @@ Suppose you are creating a cluster named "dev.kubernetes.example.com`:
* You can specify a `--dns-zone=example.com` (you can have subdomains in a hosted zone)
* You could also use `--dns-zone=kubernetes.example.com`
You do have to set up the DNS nameservers so your hosted zone resolves. kops used to create the hosted
You do have to set up the DNS nameservers so your hosted zone resolves. kOps used to create the hosted
zone for you, but now (as you have to set up the nameservers anyway), there doesn't seem to be much reason to do so!
If you don't specify a dns-zone, kops will list all your hosted zones, and choose the longest that
If you don't specify a dns-zone, kOps will list all your hosted zones, and choose the longest that
is a suffix of your cluster name. So for `dev.kubernetes.example.com`, if you have `kubernetes.example.com`,
`example.com` and `somethingelse.example.com`, it would choose `kubernetes.example.com`. `example.com` matches
but is shorter; `somethingelse.example.com` is not a suffix-match.
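If you are unsure which hosted zones exist in your account, you can list them with the AWS CLI (illustrative):
```bash
aws route53 list-hosted-zones --query 'HostedZones[].Name'
```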
@ -66,7 +66,7 @@ Required packages are also updated during bootstrapping if the value is not set.
## out
`out` determines the directory into which Kops will write the target output for Terraform and CloudFormation. It defaults to `out/terraform` and `out/cloudformation` respectively.
`out` determines the directory into which kOps will write the target output for Terraform and CloudFormation. It defaults to `out/terraform` and `out/cloudformation` respectively.
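For example, an illustrative invocation that writes the Terraform output to a custom directory:
```bash
kops update cluster ${NAME} --target=terraform --out=./my-terraform-out
```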
# API only Arguments

View File

@ -1,6 +1,6 @@
# Getting Started with kops on AWS
# Getting Started with kOps on AWS
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md).
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md).
## Setup your environment
@ -31,7 +31,7 @@ IAMFullAccess
AmazonVPCFullAccess
```
You can create the kops IAM user from the command line using the following:
You can create the kOps IAM user from the command line using the following:
```bash
aws iam create-group --group-name kops
@ -182,7 +182,7 @@ ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain.example.com --c
* Information on adding NS records with [Google Cloud
Platform](https://cloud.google.com/dns/update-name-servers)
#### Using Public/Private DNS (Kops 1.5+)
#### Using Public/Private DNS (kOps 1.5+)
By default the assumption is that NS records are publicly available. If you
require private DNS records you should modify the commands we run later in this
@ -255,24 +255,24 @@ Information regarding cluster state store location must be set when using `kops`
### Using S3 default bucket encryption
`kops` supports [default bucket encryption](https://aws.amazon.com/de/blogs/aws/new-amazon-s3-encryption-security-features/) to encrypt its state in an S3 bucket. This way, the default server side encryption set for your bucket will be used for the kops state too. You may want to use this AWS feature, e.g., for easily encrypting every written object by default or when you need to use specific encryption keys (KMS, CMK) for compliance reasons.
`kops` supports [default bucket encryption](https://aws.amazon.com/de/blogs/aws/new-amazon-s3-encryption-security-features/) to encrypt its state in an S3 bucket. This way, the default server side encryption set for your bucket will be used for the kOps state too. You may want to use this AWS feature, e.g., for easily encrypting every written object by default or when you need to use specific encryption keys (KMS, CMK) for compliance reasons.
If your S3 bucket has a default encryption set up, kops will use it:
If your S3 bucket has a default encryption set up, kOps will use it:
```bash
aws s3api put-bucket-encryption --bucket prefix-example-com-state-store --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```
If the default encryption is not set or it cannot be checked, kops will resort to using server-side AES256 bucket encryption with [Amazon S3-Managed Encryption Keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html).
If the default encryption is not set or it cannot be checked, kOps will resort to using server-side AES256 bucket encryption with [Amazon S3-Managed Encryption Keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html).
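To check whether a default encryption configuration is already present on the bucket (illustrative, reusing the example bucket name from above):
```bash
aws s3api get-bucket-encryption --bucket prefix-example-com-state-store
```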
### Sharing an S3 bucket across multiple accounts
It is possible to use a single S3 bucket for storing kops state for clusters
It is possible to use a single S3 bucket for storing kOps state for clusters
located in different accounts by using [cross-account bucket policies](http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html#access-policies-walkthrough-cross-account-permissions-acctA-tasks).
Kops will be able to use buckets configured with cross-account policies by default.
kOps will be able to use buckets configured with cross-account policies by default.
In this case you may want to override the object ACLs which kops places on the
In this case you may want to override the object ACLs which kOps places on the
state files, as default AWS ACLs will make it possible for an account that has
delegated access to write files that the bucket owner cannot read.
@ -406,11 +406,11 @@ kops delete cluster --name ${NAME} --yes
## Next steps
Now that you have a working _kops_ cluster, read through the [recommendations for production setups guide](production.md)
Now that you have a working kOps cluster, read through the [recommendations for production setups guide](production.md)
## Feedback
There's an incredible team behind Kops and we encourage you to reach out to the
There's an incredible team behind kOps and we encourage you to reach out to the
community on the Kubernetes
[Slack](http://slack.k8s.io/). Bring your
questions, comments, and requests and meet the people behind the project!

View File

@ -1,6 +1,6 @@
# Commands & Arguments
This page lists the most common kops commands.
Please refer to the kops [cli reference](../cli/kops.md) for full documentation.
This page lists the most common kOps commands.
Please refer to the kOps [cli reference](../cli/kops.md) for full documentation.
## `kops create`
@ -8,7 +8,7 @@ Please refer to the kops [cli reference](../cli/kops.md) for full documentation.
### `kops create -f <cluster spec>`
`kops create -f <cluster spec>` will register a cluster using a kops spec yaml file. After the cluster has been registered you need to run `kops update cluster --yes` to create the cloud resources.
`kops create -f <cluster spec>` will register a cluster using a kOps spec yaml file. After the cluster has been registered you need to run `kops update cluster --yes` to create the cloud resources.
### `kops create cluster`
@ -24,7 +24,7 @@ the output matches your expectations, you can apply the changes by adding `--yes
## `kops rolling-update cluster`
`kops update cluster <clustername>` updates a kubernetes cluster to match the cloud and kops specifications.
`kops update cluster <clustername>` updates a kubernetes cluster to match the cloud and kOps specifications.
As a precaution, it is safer to run in 'preview' mode first using `kops rolling-update cluster --name <name>`, and once confirmed
the output matches your expectations, you can apply the changes by adding `--yes` to the command - `kops rolling-update cluster --name <name> --yes`.
@ -43,7 +43,7 @@ the output matches your expectations, you can perform the actual deletion by add
## `kops toolbox template`
`kops toolbox template` lets you generate a kops spec using `go` templates. This is very handy if you want to consistently manage multiple clusters.
`kops toolbox template` lets you generate a kOps spec using `go` templates. This is very handy if you want to consistently manage multiple clusters.
## `kops version`

View File

@ -1,6 +1,6 @@
# Getting Started with kops on DigitalOcean
# Getting Started with kOps on DigitalOcean
**WARNING**: digitalocean support on kops is currently **alpha** meaning it is in the early stages of development and subject to change, please use with caution.
**WARNING**: DigitalOcean support in kOps is currently **alpha**, meaning it is in the early stages of development and subject to change. Please use with caution.
## Requirements
@ -31,7 +31,7 @@ export KOPS_FEATURE_FLAGS="AlphaAllowDO"
## Creating a Single Master Cluster
In the following examples, `example.com` should be replaced with the DigitalOcean domain you created when going through the [Requirements](#requirements).
Note that you kops will only be able to successfully provision clusters in regions that support block storage (AMS3, BLR1, FRA1, LON1, NYC1, NYC3, SFO2, SGP1 and TOR1).
Note that kOps will only be able to successfully provision clusters in regions that support block storage (AMS3, BLR1, FRA1, LON1, NYC1, NYC3, SFO2, SGP1 and TOR1).
```bash
# debian (the default) + flannel overlay cluster in tor1
@ -65,10 +65,10 @@ kops delete cluster dev5.k8s.local --yes
## Features Still in Development
kops for DigitalOcean currently does not support these features:
kOps for DigitalOcean currently does not support these features:
* rolling update for instance groups
# Next steps
Now that you have a working _kops_ cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure _kops_ for production workloads.
Now that you have a working kOps cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure kOps for production workloads.

View File

@ -1,6 +1,6 @@
# Getting Started with kops on GCE
# Getting Started with kOps on GCE
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md), and installed
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md), and installed
the [gcloud tools](https://cloud.google.com/sdk/downloads).
You'll need a Google Cloud account, and make sure that gcloud is logged in to your account using `gcloud init`.
@ -13,7 +13,7 @@ You'll also need to [configure default credentials](https://developers.google.co
# Creating a state store
kops needs a state store, to hold the configuration for your clusters. The simplest configuration
kOps needs a state store, to hold the configuration for your clusters. The simplest configuration
for Google Cloud is to store it in a Google Cloud Storage bucket in the same account, so that's how we'll
start.
@ -30,7 +30,7 @@ You can also put this in your `~/.bashrc` or similar.
# Creating our first cluster
`kops create cluster` creates the Cluster object and InstanceGroup object you'll be working with in kops:
`kops create cluster` creates the Cluster object and InstanceGroup object you'll be working with in kOps:
PROJECT=`gcloud config get-value project`
@ -38,7 +38,7 @@ You can also put this in your `~/.bashrc` or similar.
kops create cluster simple.k8s.local --zones us-central1-a --state ${KOPS_STATE_STORE}/ --project=${PROJECT}
You can now list the Cluster objects in your kops state store (the GCS bucket
You can now list the Cluster objects in your kOps state store (the GCS bucket
we created).
@ -53,7 +53,7 @@ we created).
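For example, a sketch of listing them (the exact output will differ):
```bash
kops get clusters --state ${KOPS_STATE_STORE}/
```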
This shows that you have one Cluster object configured, named `simple.k8s.local`. The cluster holds the cluster-wide configuration for
a kubernetes cluster - things like the kubernetes version, and the authorization policy in use.
The `kops` tool should feel a lot like `kubectl` - kops uses the same API machinery as kubernetes,
The `kops` tool should feel a lot like `kubectl` - kOps uses the same API machinery as kubernetes,
so it should behave similarly, although now you are managing kubernetes clusters, instead of managing
objects on a kubernetes cluster.
@ -118,12 +118,12 @@ Similarly, you can also see your InstanceGroups using:
<!-- TODO: Fix subnets vs regions -->
InstanceGroups are the other main kops object - an InstanceGroup manages a set of cloud instances,
InstanceGroups are the other main kOps object - an InstanceGroup manages a set of cloud instances,
which then are registered in kubernetes as Nodes. You have multiple InstanceGroups for different types
of instances / Nodes - in our simple example we have one for our master (which only has a single member),
and one for our nodes (and we have two nodes configured).
We'll see a lot more of Cluster objects and InstanceGroups as we use kops to reconfigure clusters. But let's get
We'll see a lot more of Cluster objects and InstanceGroups as we use kOps to reconfigure clusters. But let's get
on with our first cluster.
# Creating a cluster
@ -133,7 +133,7 @@ but didn't actually create any instances or other cloud objects in GCE. To do t
`kops update cluster`.
`kops update cluster` without `--yes` will show you a preview of all the changes will be made;
it is very useful to see what kops is about to do, before actually making the changes.
it is very useful to see what kOps is about to do, before actually making the changes.
Run `kops update cluster simple.k8s.local` and peruse the changes.
@ -143,7 +143,7 @@ We're now finally ready to create the object: `kops update cluster simple.k8s.lo
<!-- TODO: We don't need this on GCE; remove SSH key requirement -->
Your cluster is created in the background - kops actually creates GCE Managed Instance Groups
Your cluster is created in the background - kOps actually creates GCE Managed Instance Groups
that run the instances; this ensures that even if instances are terminated, they will automatically
be relaunched by GCE and your cluster will self-heal.
@ -152,7 +152,7 @@ After a few minutes, you should be able to do `kubectl get nodes` and your first
# Enjoy
At this point you have a kubernetes cluster - the core commands to do so are as simple as `kops create cluster`
and `kops update cluster`. There's a lot more power in kops, and even more power in kubernetes itself, so we've
and `kops update cluster`. There's a lot more power in kOps, and even more power in kubernetes itself, so we've
put a few jumping off places here. But when you're done, don't forget to [delete your cluster](#deleting-the-cluster).
* [Manipulate InstanceGroups](../tutorial/working-with-instancegroups.md) to add more nodes, change image
@ -188,4 +188,4 @@ After you've double-checked you're deleting exactly what you want to delete, run
# Next steps
Now that you have a working _kops_ cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure _kops_ for production workloads.
Now that you have a working kOps cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure kOps for production workloads.

View File

@ -1,7 +1,7 @@
# kubectl cluster admin configuration
When you run `kops update cluster` during cluster creation, you automatically get a kubectl configuration for accessing the cluster. This configuration gives you full admin access to the cluster.
If you want to create this configuration on other machine, you can run the following as long as you have access to the kops state store.
If you want to create this configuration on another machine, you can run the following as long as you have access to the kOps state store.
To create the kubecfg configuration settings for use with kubectl:
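A minimal sketch, assuming `${NAME}` holds your cluster name and `KOPS_STATE_STORE` points at the state store:
```bash
kops export kubecfg ${NAME} --state ${KOPS_STATE_STORE}
```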

View File

@ -1,7 +1,7 @@
# Getting Started with kops on OpenStack
# Getting Started with kOps on OpenStack
OpenStack support on kops is currently **beta**, which means that OpenStack support is in good shape and could be used for production. However, it is not as rigorously tested as the stable cloud providers and there are some features not supported. In particular, kops tries to support a wide variety of OpenStack setups and not all of them are equally well tested.
OpenStack support on kOps is currently **beta**, which means that OpenStack support is in good shape and could be used for production. However, it is not as rigorously tested as the stable cloud providers and there are some features not supported. In particular, kOps tries to support a wide variety of OpenStack setups and not all of them are equally well tested.
## OpenStack requirements
@ -12,7 +12,7 @@ In order to deploy a kops-managed cluster on OpenStack, you need the following O
* Glance (image)
* Cinder (block storage)
In addition, kops can make use of the following services:
In addition, kOps can make use of the following services:
* Swift (object store)
* Designate (dns)
@ -33,7 +33,7 @@ We recommend using [Application Credentials](https://docs.openstack.org/keystone
## Environment Variables
kops stores its configuration in a state store. Before creating a cluster, we need to export the path to the state store:
kOps stores its configuration in a state store. Before creating a cluster, we need to export the path to the state store:
```bash
export KOPS_STATE_STORE=swift://<bucket-name> # where <bucket-name> is the name of the Swift container to use for kops state
@ -83,9 +83,9 @@ kops delete cluster my-cluster.k8s.local --yes
## Compute and volume zone names do not match
Some of the openstack users do not have compute zones named exactly the same than volume zones. Good example is that there are several compute zones for instance `zone-1`, `zone-2` and `zone-3`. Then there is only one volumezone which is usually called `nova`. By default this is problem in kops, because kops assumes that if you are deploying things to `zone-1` there should be compute and volume zone called `zone-1`.
Some OpenStack users do not have compute zones named exactly the same as their volume zones. A good example is having several compute zones, for instance `zone-1`, `zone-2` and `zone-3`, while there is only one volume zone, usually called `nova`. By default this is a problem in kOps, because kOps assumes that if you are deploying things to `zone-1` there should be a compute and a volume zone called `zone-1`.
However, you can still get kops working in your openstack by doing following:
However, you can still get kOps working in your OpenStack by doing the following:
Create cluster using your compute zones:
@ -120,7 +120,7 @@ spec:
kops update cluster my-cluster.k8s.local --state ${KOPS_STATE_STORE} --yes
```
Kops should create instances to all three zones, but provision volumes from the same zone.
kOps should create instances in all three zones, but provision volumes from the same zone.
# Using external cloud controller manager
If you want to use [External CCM](https://github.com/kubernetes/cloud-provider-openstack) in your installation, this section contains instructions on what you should do to get it up and running.
@ -159,9 +159,9 @@ In clusters without loadbalancer, the address of a single random master will be
# Using existing OpenStack network
You can have kops reuse existing network components instead of provisioning one per cluster. As OpenStack support is still beta, we recommend you take extra care when deleting clusters and ensure that kops do not try to remove any resources not belonging to the cluster.
You can have kOps reuse existing network components instead of provisioning one per cluster. As OpenStack support is still beta, we recommend you take extra care when deleting clusters and ensure that kOps does not try to remove any resources not belonging to the cluster.
## Let kops provision new subnets within an existing network
## Let kOps provision new subnets within an existing network
Use an existing network by using `--network <network id>`.
@ -175,7 +175,7 @@ spec:
## Use existing networks
Instead of kops creating new subnets for the cluster, you can reuse an existing subnet.
Instead of kOps creating new subnets for the cluster, you can reuse an existing subnet.
When you create a new cluster, you can specify subnets using the `--subnets` and `--utility-subnets` flags.
@ -207,8 +207,8 @@ kops create cluster \
# Using with self-signed certificates in OpenStack
Kops can be configured to use insecure mode towards OpenStack. However, this is not recommended as OpenStack cloudprovider in kubernetes does not support it.
If you use insecure flag in kops it might be that the cluster does not work correctly.
kOps can be configured to use insecure mode towards OpenStack. However, this is not recommended, as the OpenStack cloud provider in Kubernetes does not support it.
If you use the insecure flag in kOps, the cluster might not work correctly.
```yaml
spec:
@ -219,4 +219,4 @@ spec:
# Next steps
Now that you have a working _kops_ cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure _kops_ for production workloads.
Now that you have a working kOps cluster, read through the [recommendations for production setups guide](production.md) to learn more about how to configure kOps for production workloads.

View File

@ -1,6 +1,6 @@
# Recommendations for production setups
The getting started-documentation is a fast way of spinning up a Kubernetes cluster, but there are some aspects of _kops_ that require extra consideration. This document will highlight the most important things you should know about before deploying your production workload.
The getting started-documentation is a fast way of spinning up a Kubernetes cluster, but there are some aspects of kOps that require extra consideration. This document will highlight the most important things you should know about before deploying your production workload.
## High availability
@ -10,13 +10,13 @@ Read through the [high availability documentation](../operations/high_availabili
## Networking
The default networking of _kops_, kubenet, is **not** recommended for production. Most importantly, it does not support network policies, nor does it support internal networking.
The default networking of kOps, kubenet, is **not** recommended for production. Most importantly, it does not support network policies, nor does it support internal networking.
Read through the [networking page](../networking.md) and choose a stable CNI.
## Private topology
By default kops will create clusters using public topology, where all nodes and the Kubernetes API are exposed on public Internet.
By default kOps will create clusters using public topology, where all nodes and the Kubernetes API are exposed on public Internet.
Read through the [topology page](../topology.md) to understand the options you have running nodes in internal IP addresses and using a [bastion](../bastion.md) for SSH access.
@ -24,7 +24,7 @@ Read through the [topology page](../topology.md) to understand the options you h
The `kops` command allows you to configure some aspects of your cluster, but for almost any production cluster, you will want to change settings that are not accessible through the CLI. The cluster spec can be exported as a yaml file and checked into version control.
Read through the [cluster spec page](../cluster_spec.md) and familiarize yourself with the key options that kops offers.
Read through the [cluster spec page](../cluster_spec.md) and familiarize yourself with the key options that kOps offers.
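As a sketch of that workflow (names are placeholders):
```bash
kops get cluster ${NAME} -o yaml > cluster.yaml   # export the spec and check it into version control
# ...edit cluster.yaml as needed, then apply it back:
kops replace -f cluster.yaml
kops update cluster ${NAME} --yes
```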
## Templating

View File

@ -1,4 +1,4 @@
# Getting Started with kops on Spot Ocean
# Getting Started with kOps on Spot Ocean
[Ocean](https://spot.io/products/ocean/) by [Spot](https://spot.io/) simplifies infrastructure management for Kubernetes. With robust, container-driven infrastructure auto-scaling and intelligent right-sizing for container resource requirements, operations can literally "set and forget" the underlying cluster.
@ -20,7 +20,7 @@ Ocean not only intelligently leverages Spot Instances and reserved capacity to r
## Prerequisites
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md#installing-other-dependencies).
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md#installing-other-dependencies).
## Setup your environment

View File

@ -15,7 +15,7 @@ In order to use gossip-based DNS, configure the cluster domain name to end with
### Kubernetes API
When using gossip mode, you have to expose the kubernetes API using a loadbalancer. Since there is no hosted zone for gossip-based clusters, you simply use the load balancer address directly. The user experience is identical to standard clusters. Kops will add the ELB DNS name to the kops-generated kubernetes configuration.
When using gossip mode, you have to expose the kubernetes API using a loadbalancer. Since there is no hosted zone for gossip-based clusters, you simply use the load balancer address directly. The user experience is identical to standard clusters. kOps will add the ELB DNS name to the kops-generated kubernetes configuration.
### Bastion

View File

@ -35,7 +35,7 @@ Also note the node label we set. This will be used to ensure the GPU Operator re
## Install GPU Operator
GPU Operator is installed using `helm`. See the [general install instructions for GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html#install-gpu-operator).
In order to match the _kops_ environment, create a `values.yaml` file with the following content:
In order to match the kOps environment, create a `values.yaml` file with the following content:
```yaml
operator:

View File

@ -10,14 +10,14 @@ be found in the `autoscaling/v1` API version. The alpha version, which includes
support for scaling on memory and custom metrics, can be found in
`autoscaling/v2beta1` (and `autoscaling/v2beta2` in 1.12 and later).
Kops can assist in setting up HPA. Relevant reading you will need to go through:
kOps can assist in setting up HPA. Relevant reading you will need to go through:
* [Extending the Kubernetes API with the aggregation layer][k8s-extend-api]
* [Configure The Aggregation Layer][k8s-aggregation-layer]
* [Horizontal Pod Autoscaling][k8s-hpa]
While the above links go into details on how Kubernetes needs to be configured
to work with HPA, a lot of that work is already done for you by Kops.
to work with HPA, a lot of that work is already done for you by kOps.
Specifically:
* [x] Enable the [Aggregation Layer][k8s-aggregation-layer] via the following

View File

@ -6,7 +6,7 @@ It is possible to launch a Kubernetes cluster from behind an http forward proxy
It is assumed the proxy already exists. If you want a private topology on AWS, for example, with a proxy instead of a NAT instance, you'll need to create the proxy yourself. See [Running in a shared VPC](run_in_existing_vpc.md).
This configuration only manages proxy configurations for Kops and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.
This configuration only manages proxy configurations for kOps and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.
## Configuration

View File

@ -1,6 +1,6 @@
# IAM Roles
By default Kops creates two IAM roles for the cluster: one for the masters, and one for the nodes.
By default kOps creates two IAM roles for the cluster: one for the masters, and one for the nodes.
> Please note that currently all Pods running on your cluster have access to the instance IAM role.
> Consider using projects such as [kube2iam](https://github.com/jtblin/kube2iam) to prevent that.
@ -12,7 +12,7 @@ An example of the new IAM policies can be found here:
- Master Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_master_strict.json
- Compute Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_node_strict.json
On provisioning a new cluster with Kops v1.8.0 or above, by default you will be using the new stricter IAM policies. Upgrading an existing cluster will use the legacy IAM privileges to reduce risk of potential regression.
On provisioning a new cluster with kOps v1.8.0 or above, by default you will be using the new stricter IAM policies. Upgrading an existing cluster will use the legacy IAM privileges to reduce risk of potential regression.
In order to update your cluster to use the strict IAM privileges, add the following within your Cluster Spec:
```yaml
@ -40,7 +40,7 @@ Adding ECR permissions will extend the IAM policy documents as below:
The additional permissions are:
```json
{
"Sid": "kopsK8sECR",
"Sid": "kOpsK8sECR",
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
@ -59,15 +59,15 @@ The additional permissions are:
## Permissions Boundaries
{{ kops_feature_table(kops_added_default='1.19') }}
AWS Permissions Boundaries enable you to use a policy (managed or custom) to set the maximum permissions that roles created by Kops will be able to grant to instances they're attached to. It can be useful to prevent possible privilege escalations.
AWS Permissions Boundaries enable you to use a policy (managed or custom) to set the maximum permissions that roles created by kOps will be able to grant to instances they're attached to. It can be useful to prevent possible privilege escalations.
To set a Permissions Boundary for Kops' roles, update your Cluster Spec with the following and then perform a cluster update:
To set a Permissions Boundary for kOps' roles, update your Cluster Spec with the following and then perform a cluster update:
```yaml
iam:
permissionsBoundary: aws:arn:iam:123456789000:policy:test-boundary
```
*NOTE: Currently, Kops only supports using a single Permissions Boundary for all roles it creates. In case you need to set per-role Permissions Boundaries, we recommend that you refer to this [section](#use-existing-aws-instance-profiles) below, and provide your own roles to Kops.*
*NOTE: Currently, kOps only supports using a single Permissions Boundary for all roles it creates. In case you need to set per-role Permissions Boundaries, we recommend that you refer to this [section](#use-existing-aws-instance-profiles) below, and provide your own roles to kOps.*
## Adding External Policies
@ -86,13 +86,13 @@ spec:
- aws:arn:iam:123456789000:policy:test-policy
```
External Policy attachments are treated declaritively. Any policies declared will be attached to the role, any policies not specified will be detached _after_ new policies are attached. This does not replace or affect built in Kops policies in any way.
External Policy attachments are treated declaratively. Any policies declared will be attached to the role; any policies not specified will be detached _after_ new policies are attached. This does not replace or affect the built-in kOps policies in any way.
It's important to note that externalPolicies will only handle the attachment and detachment of policies, not creation, modification, or deletion.
## Adding Additional Policies
Sometimes you may need to extend the kops IAM roles to add additional policies. You can do this
Sometimes you may need to extend the kOps IAM roles to add additional policies. You can do this
through the `additionalPolicies` spec field. For instance, let's say you want
to add DynamoDB and Elasticsearch permissions to your nodes.
@ -151,7 +151,7 @@ Now you can run a cluster update to have the changes take effect:
kops update cluster ${CLUSTER_NAME} --yes
```
You can have an additional policy for each kops role (node, master, bastion). For instance, if you wanted to apply one set of additional permissions to the master instances, and another to the nodes, you could do the following:
You can have an additional policy for each kOps role (node, master, bastion). For instance, if you wanted to apply one set of additional permissions to the master instances, and another to the nodes, you could do the following:
```yaml
spec:
@ -176,12 +176,12 @@ spec:
## Use existing AWS Instance Profiles
Rather than having Kops create and manage IAM roles and instance profiles, it is possible to use an existing instance profile. This is useful in organizations where security policies prevent tools from creating their own IAM roles and policies.
Kops will still output any differences in the IAM Inline Policy for each IAM Role.
This is convenient for determining policy changes that need to be made when upgrading Kops.
Rather than having kOps create and manage IAM roles and instance profiles, it is possible to use an existing instance profile. This is useful in organizations where security policies prevent tools from creating their own IAM roles and policies.
kOps will still output any differences in the IAM Inline Policy for each IAM Role.
This is convenient for determining policy changes that need to be made when upgrading kOps.
**Using IAM Managed Policies will not output these differences, it is up to the user to track expected changes to policies.**
*NOTE: Currently Kops only supports using existing instance profiles for every instance group in the cluster, not a mix of existing and managed instance profiles.
*NOTE: Currently kOps only supports using existing instance profiles for every instance group in the cluster, not a mix of existing and managed instance profiles.
This is due to the lifecycle overrides being used to prevent creation of the IAM-related resources.*
To do this, get a list of instance group names for the cluster:

View File

@ -4,18 +4,18 @@
<hr>
</div>
# kops - Kubernetes Operations
# kOps - Kubernetes Operations
[GoDoc]: https://pkg.go.dev/k8s.io/kops
[GoDoc Widget]: https://godoc.org/k8s.io/kops?status.svg
[GoDoc]: https://pkg.go.dev/k8s.io/kops
[GoDoc Widget]: https://godoc.org/k8s.io/kops?status.svg
The easiest way to get a production grade Kubernetes cluster up and running.
## 2020-05-06 etcd-manager Certificate Expiration Advisory
kops versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kops to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./advisories/etcd-manager-certificate-expiration.md) for the full details.
kOps versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kOps to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./advisories/etcd-manager-certificate-expiration.md) for the full details.
## What is kops?
## What is kOps?
We like to think of it as `kubectl` for clusters.

View File

@ -1,4 +1,4 @@
# Installing kops (Binaries)
# Installing kOps (Binaries)
## MacOS

View File

@ -88,7 +88,7 @@ spec:
## additionalUserData
Kops utilizes cloud-init to initialize and set up a host at boot time. However, in certain cases you may already be leveraging certain features of cloud-init in your infrastructure and would like to continue doing so. More information on cloud-init can be found [here](http://cloudinit.readthedocs.io/en/latest/)
kOps utilizes cloud-init to initialize and set up a host at boot time. However, in certain cases you may already be leveraging certain features of cloud-init in your infrastructure and would like to continue doing so. More information on cloud-init can be found [here](http://cloudinit.readthedocs.io/en/latest/)
Additional user-data can be passed to the host provisioning by setting the `additionalUserData` field. A list of valid user-data content-types can be found [here](http://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive)
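A minimal sketch of the field in an instance group spec (the script name and content below are illustrative):

```yaml
spec:
  additionalUserData:
  - name: myscript.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # Illustrative example: runs once at boot via cloud-init.
      echo "Hello from additional user-data"
```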

View File

@ -1,3 +1,3 @@
# Kops HTTP API Server (Deprecated)
# kOps HTTP API Server (Deprecated)
The kops-server component has been deprecated in favor of CRDs.

View File

@ -1,6 +1,6 @@
# Labels
There are two main types of labels that kops can create:
There are two main types of labels that kOps can create:
* `cloudLabels` become tags in AWS on the instances
* `nodeLabels` become labels on the k8s Node objects

View File

@ -1,10 +1,10 @@
# Using A Manifest to Manage kops Clusters
# Using A Manifest to Manage kOps Clusters
This document also applies to using the `kops` API to customize a Kubernetes cluster with or without using YAML or JSON.
## Table of Contents
* [Using A Manifest to Manage kops Clusters](#using-a-manifest-to-manage-kops-clusters)
* [Using A Manifest to Manage kOps Clusters](#using-a-manifest-to-manage-kops-clusters)
* [Background](#background)
* [Exporting a Cluster](#exporting-a-cluster)
* [YAML Examples](#yaml-examples)
@ -19,7 +19,7 @@ This document also applies to using the `kops` API to customize a Kubernetes clu
Because of the above statement `kops` includes an API which provides a feature for users to utilize YAML or JSON manifests for managing their `kops` created Kubernetes installations. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a `kops` Kubernetes instance with a manifest. All of these values are also usable via the interactive editor with `kops edit`.
> You can see all the options that are currently supported in Kops [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
> You can see all the options that are currently supported in kOps [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
The following is a list of the benefits of using a file to manage instances.
@ -30,7 +30,7 @@ The following is a list of the benefits of using a file to manage instances.
## Exporting a Cluster
At this time you must run `kops create cluster` and then export the YAML from the state store. We plan in the future to have the capability to generate kops YAML via the command line. The following is an example of creating a cluster and exporting the YAML.
At this time you must run `kops create cluster` and then export the YAML from the state store. We plan in the future to have the capability to generate kOps YAML via the command line. The following is an example of creating a cluster and exporting the YAML.
```shell
export NAME=k8s.example.com
@ -290,7 +290,7 @@ spec:
api:
```
Full documentation is accessible via [godoc](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec).
Full documentation is accessible via [godoc](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec).
The `ClusterSpec` allows a user to set configurations for such values as Docker log driver, Kubernetes API server log level, VPC for reusing a VPC (`NetworkID`), and the Kubernetes version.
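As a rough illustration (all values below are placeholders, and not every cluster needs to set them), those knobs live directly in the cluster spec:

```yaml
spec:
  kubernetesVersion: 1.16.8          # illustrative version
  networkID: vpc-0123456789abcdef0   # reuse an existing VPC
  docker:
    logDriver: json-file
  kubeAPIServer:
    logLevel: 2
```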
@ -321,7 +321,7 @@ metadata:
spec:
```
Full documentation is accessible via [godocs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec).
Full documentation is accessible via [godocs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec).
Instance Groups map to Auto Scaling Groups in AWS, and Instance Groups in GCE. They are an API level description of a group of compute instances used as Masters or Nodes.
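A minimal InstanceGroup manifest might look like the following sketch (cluster name, machine type, sizes, and subnet are illustrative):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: nodes
spec:
  role: Node            # Master, Node, or Bastion
  machineType: t3.medium
  minSize: 2
  maxSize: 4
  subnets:
  - us-east-1a
```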
@ -329,10 +329,10 @@ More documentation is available in the [Instance Group](instance_groups.md) docu
## Closing Thoughts
Using YAML or JSON-based configuration for building and managing kops clusters is powerful, but use this strategy with caution.
Using YAML or JSON-based configuration for building and managing kOps clusters is powerful, but use this strategy with caution.
- If you do not need to define or customize a value, let kops set that value. Setting too many values prevents kops from doing its job in setting up the cluster and you may end up with strange bugs.
- If you end up with strange bugs, try letting kops do more.
- If you do not need to define or customize a value, let kOps set that value. Setting too many values prevents kOps from doing its job in setting up the cluster and you may end up with strange bugs.
- If you end up with strange bugs, try letting kOps do more.
- Be cautious, take care, and test outside of production!
If you need to run a custom version of Kubernetes Controller Manager, set `kubeControllerManager.image` and update your cluster. This is the beauty of using a manifest for your cluster!
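A sketch of such an override (the image tag below is illustrative):

```yaml
spec:
  kubeControllerManager:
    image: k8s.gcr.io/kube-controller-manager:v1.16.8  # illustrative custom image
```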

View File

@ -1,4 +1,4 @@
# kops & MFA
# kOps & MFA
You can secure `kops` with MFA by creating an AWS role & policy that requires MFA to access the `KOPS_STATE_STORE` bucket. Unfortunately the Go AWS SDK does not transparently support assuming roles with required MFA. This may change in a future version. `kops` plans to support this behavior eventually. You can track progress in this [Github issue](https://github.com/kubernetes/kops/issues/226). If you'd like to use MFA with `kops`, you'll need a workaround until then.

View File

@ -10,7 +10,7 @@ listed below, are available which implement and manage this abstraction.
## Supported networking options
The following table provides the support status for various networking providers with regards to Kops version:
The following table provides the support status for various networking providers with regards to kOps version:
| Network provider | Experimental | Stable | Deprecated | Removed |
| ------------ | -----------: | -----: | ---------: | ------: |
@ -29,7 +29,7 @@ The following table provides the support status for various networking providers
### Which networking provider should you use?
Kops maintainers have no bias over the CNI provider that you run; we only aim to be flexible and provide a working setup of the CNIs.
kOps maintainers have no bias over the CNI provider that you run; we only aim to be flexible and provide a working setup of the CNIs.
We do recommend something other than `kubenet` for production clusters due to `kubenet`'s limitations, as explained [below](#kubenet-default).
@ -39,7 +39,7 @@ You can specify the network provider via the `--networking` command line switch.
### Kubenet (default)
Kubernetes Operations (kops) uses `kubenet` networking by default. This sets up networking on AWS using VPC
Kubernetes Operations (kOps) uses `kubenet` networking by default. This sets up networking on AWS using VPC
networking, where the master allocates a /24 CIDR to each Node, drawing from the Node network.
Using `kubenet` mode routes for each node are then configured in the AWS VPC routing tables.
@ -68,7 +68,7 @@ For more on the `kubenet` networking provider, please see the [`kubenet` section
and libraries for writing plugins to configure network interfaces in Linux containers. Kubernetes
has built in support for CNI networking components.
Several CNI providers are currently built into kops:
Several CNI providers are currently built into kOps:
* [AWS VPC](networking/aws-vpc.md)
* [Calico](networking/calico.md)
@ -80,8 +80,8 @@ Several CNI providers are currently built into kops:
* [Romana](networking/romana.md)
* [Weave](networking/weave.md)
Kops makes it easy for cluster operators to choose one of these options. The manifests for the providers
are included with kops, and you simply use `--networking <provider-name>`. Replace the provider name
kOps makes it easy for cluster operators to choose one of these options. The manifests for the providers
are included with kOps, and you simply use `--networking <provider-name>`. Replace the provider name
with the name listed in the provider's documentation (from the list above) when you run
`kops cluster create`. For instance, for a default Calico installation, execute the following:
@ -93,19 +93,19 @@ Later, when you run `kops get cluster -oyaml`, you will see the option you chose
### Advanced
Kops makes a best-effort attempt to expose as many configuration options as possible for the upstream CNI options that it supports within the Kops cluster spec. However, as upstream CNI options are always changing, not all options may be available, or you may wish to use a CNI option which Kops doesn't support. There may also be edge-cases to operating a given CNI that were not considered by the Kops maintainers. Allowing Kops to manage the CNI installation is sufficient for the vast majority of production clusters; however, if this is not true in your case, then Kops provides an escape-hatch that allows you to take greater control over the CNI installation.
kOps makes a best-effort attempt to expose as many configuration options as possible for the upstream CNI options that it supports within the kOps cluster spec. However, as upstream CNI options are always changing, not all options may be available, or you may wish to use a CNI option which kOps doesn't support. There may also be edge-cases to operating a given CNI that were not considered by the kOps maintainers. Allowing kOps to manage the CNI installation is sufficient for the vast majority of production clusters; however, if this is not true in your case, then kOps provides an escape-hatch that allows you to take greater control over the CNI installation.
When using the flag `--networking cni` on `kops create cluster` or `spec.networking: cni {}`, Kops will not install any CNI at all, but will expect you to install it.
When using the flag `--networking cni` on `kops create cluster` or `spec.networking: cni {}`, kOps will not install any CNI at all, but will expect you to install it.
If you try to create a new cluster in this mode, the master nodes will come up in `not ready` state. You will then be able to deploy any CNI DaemonSet by following [the vanilla kubernetes install instructions](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network). Once the CNI DaemonSet has been deployed, the master nodes should enter `ready` state and the remaining nodes should join the cluster shortly thereafter.
#### Important Caveats
For some of the CNI implementations, Kops does more than just launch a DaemonSet with the relevant CNI pod. For example, when installing Calico, Kops installs client certificates for Calico to enable mTLS for connections to etcd. If you were to simply replace `spec.networking`'s Calico options with `spec.networking: cni {}`, you would cause an outage.
For some of the CNI implementations, kOps does more than just launch a DaemonSet with the relevant CNI pod. For example, when installing Calico, kOps installs client certificates for Calico to enable mTLS for connections to etcd. If you were to simply replace `spec.networking`'s Calico options with `spec.networking: cni {}`, you would cause an outage.
If you do decide to take manual responsibility for maintaining the CNI, you should familiarize yourself with the parts of the Kops codebase which install your CNI ([example](https://github.com/kubernetes/kops/tree/master/nodeup/pkg/model/networking)) to ensure that you are replicating any additional actions which Kops was applying for your CNI option. You should closely follow your upstream CNI's releases and Kops's releases, to ensure that you can apply any updates or fixes issued by your upstream CNI or by the Kops maintainers.
If you do decide to take manual responsibility for maintaining the CNI, you should familiarize yourself with the parts of the kOps codebase which install your CNI ([example](https://github.com/kubernetes/kops/tree/master/nodeup/pkg/model/networking)) to ensure that you are replicating any additional actions which kOps was applying for your CNI option. You should closely follow your upstream CNI's releases and kOps's releases, to ensure that you can apply any updates or fixes issued by your upstream CNI or by the kOps maintainers.
Additionally, you should bear in mind that the Kops maintainers run e2e testing over the variety of supported CNI options that a Kops update must pass in order to be released. If you take over maintaining the CNI for your cluster, you should test potential Kops, Kubernetes, and CNI updates in a test cluster before updating.
Additionally, you should bear in mind that the kOps maintainers run e2e testing over the variety of supported CNI options that a kOps update must pass in order to be released. If you take over maintaining the CNI for your cluster, you should test potential kOps, Kubernetes, and CNI updates in a test cluster before updating.
## Validating CNI Installation
@ -125,7 +125,7 @@ for Kubernetes specific CNI challenges.
Switching from `kubenet` providers to a CNI provider is considered safe. Just update the config and roll the cluster.
It is also possible to switch between CNI providers, but this usually is a disruptive change. Kops will also not clean up any resources left behind by the previous CNI, _including_ the CNI daemonset.
It is also possible to switch between CNI providers, but this usually is a disruptive change. kOps will also not clean up any resources left behind by the previous CNI, _including_ the CNI daemonset.
## Additional Reading

View File

@ -11,7 +11,7 @@ To use Amazon VPC, specify the following in the cluster spec:
amazonvpc: {}
```
in the cluster spec file or pass the `--networking amazonvpc` option on the command line to kops:
in the cluster spec file or pass the `--networking amazonvpc` option on the command line to kOps:
```sh
export ZONES=<mylistofzones>

View File

@ -56,7 +56,7 @@ To enable this mode in a cluster, add the following to the cluster spec:
crossSubnet: true
```
In the case of AWS, EC2 instances have source/destination checks enabled by default.
When you enable cross-subnet mode in kops 1.19+, it is equivalent to:
When you enable cross-subnet mode in kOps 1.19+, it is equivalent to:
```yaml
networking:
calico:
@ -64,14 +64,14 @@ When you enable cross-subnet mode in kops 1.19+, it is equivalent to:
IPIPMode: CrossSubnet
```
An IAM policy will be added to all nodes to allow Calico to execute `ec2:DescribeInstances` and `ec2:ModifyNetworkInterfaceAttribute`, as required when [awsSrcDstCheck](https://docs.projectcalico.org/reference/resources/felixconfig#spec) is set.
For older versions of kops, an addon controller ([k8s-ec2-srcdst](https://github.com/ottoyiu/k8s-ec2-srcdst))
For older versions of kOps, an addon controller ([k8s-ec2-srcdst](https://github.com/ottoyiu/k8s-ec2-srcdst))
will be deployed as a Pod (which will be scheduled on one of the masters) to facilitate the disabling of said source/destination address checks.
Only the control plane nodes have an IAM policy to allow k8s-ec2-srcdst to execute `ec2:ModifyInstanceAttribute`.
### Configuring Calico MTU
The Calico MTU is configurable by editing the cluster and setting `mtu` option in the calico configuration.
AWS VPCs support jumbo frames, so on cluster creation kops sets the calico MTU to 8912 bytes (9001 minus overhead).
AWS VPCs support jumbo frames, so on cluster creation kOps sets the calico MTU to 8912 bytes (9001 minus overhead).
For more details on Calico MTU please see the [Calico Docs](https://docs.projectcalico.org/networking/mtu#determine-mtu-size).
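For instance, a minimal sketch of that setting (the value shown is the jumbo-frame default mentioned above):

```yaml
spec:
  networking:
    calico:
      mtu: 8912
```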
@ -84,7 +84,7 @@ spec:
### Configuring Calico to use Typha
As of Kops 1.12 Calico uses the kube-apiserver as its datastore. The default setup does not make use of [Typha](https://github.com/projectcalico/typha) - a component intended to lower the impact of Calico on the k8s APIServer which is recommended in [clusters over 50 nodes](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastoremore-than-50-nodes) and is strongly recommended in clusters of 100+ nodes.
As of kOps 1.12 Calico uses the kube-apiserver as its datastore. The default setup does not make use of [Typha](https://github.com/projectcalico/typha) - a component intended to lower the impact of Calico on the k8s APIServer which is recommended in [clusters over 50 nodes](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastoremore-than-50-nodes) and is strongly recommended in clusters of 100+ nodes.
It is possible to configure Calico to use Typha by editing a cluster and adding a `typhaReplicas` option to the Calico spec:
```yaml
@ -113,7 +113,7 @@ For more details on enabling the eBPF dataplane please refer the [Calico Docs](h
### Configuring WireGuard
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.16') }}
Calico supports WireGuard to encrypt pod-to-pod traffic. If you enable this option, WireGuard encryption is automatically enabled for all nodes. At the moment, kops installs WireGuard automatically only when the host OS is *Ubuntu*. For other OSes, WireGuard has to be part of the base image or installed via a hook.
Calico supports WireGuard to encrypt pod-to-pod traffic. If you enable this option, WireGuard encryption is automatically enabled for all nodes. At the moment, kOps installs WireGuard automatically only when the host OS is *Ubuntu*. For other OSes, WireGuard has to be part of the base image or installed via a hook.
For more details of Calico WireGuard please refer to the [Calico Docs](https://docs.projectcalico.org/security/encrypt-cluster-pod-traffic).
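A minimal sketch of enabling it, assuming the `wireguardEnabled` field in the Calico spec:

```yaml
spec:
  networking:
    calico:
      wireguardEnabled: true
```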
@ -142,8 +142,8 @@ For more general information on options available with Calico see the official [
### New nodes are taking minutes for syncing ip routes and new pods on them can't reach kubedns
This is caused by nodes in the Calico etcd nodestore no longer existing. Due to the ephemeral nature of AWS EC2 instances, new nodes are brought up with different hostnames, and nodes that are taken offline remain in the Calico nodestore. This is unlike most datacentre deployments where the hostnames are mostly static in a cluster. Read more about this issue at https://github.com/kubernetes/kops/issues/3224
This has been solved in kops 1.9.0; when creating a new cluster no action is needed, but if the cluster was created with a prior kops version, the following actions should be taken:
This has been solved in kOps 1.9.0; when creating a new cluster no action is needed, but if the cluster was created with a prior kOps version, the following actions should be taken:
* Use kops to update the cluster ```kops update cluster <name> --yes``` and wait for calico-kube-controllers deployment and calico-node daemonset pods to be updated
* Use kOps to update the cluster ```kops update cluster <name> --yes``` and wait for calico-kube-controllers deployment and calico-node daemonset pods to be updated
* Decommission all invalid nodes, [see here](https://docs.projectcalico.org/v2.6/usage/decommissioning-a-node)
* All nodes that are deleted from the cluster after these actions should be cleaned from Calico's etcd storage, and the delay in programming routes should be resolved.

View File

@ -27,9 +27,9 @@ kops create cluster \
### Using etcd for agent state sync
This feature is in beta state as of kops 1.18.
This feature is in beta state as of kOps 1.18.
By default, Cilium will use CRDs for synchronizing agent state. This can cause performance problems on larger clusters. As of kops 1.18, kops can manage an etcd cluster using etcd-manager dedicated for cilium agent state sync. The [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-external-etcd/) contains recommendations for when this must be enabled.
By default, Cilium will use CRDs for synchronizing agent state. This can cause performance problems on larger clusters. As of kOps 1.18, kOps can manage an etcd cluster using etcd-manager dedicated for cilium agent state sync. The [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-external-etcd/) contains recommendations for when this must be enabled.
For new clusters you can use the `cilium-etcd` networking provider:
@ -75,7 +75,7 @@ Then enable etcd as kvstore:
### Enabling BPF NodePort
As of kops 1.19, BPF NodePort is enabled by default for new clusters if the kubernetes version is 1.12 or newer. It can be safely enabled as of kops 1.18.
As of kOps 1.19, BPF NodePort is enabled by default for new clusters if the kubernetes version is 1.12 or newer. It can be safely enabled as of kOps 1.18.
In this mode, the cluster is fully functional without kube-proxy, with Cilium replacing kube-proxy's NodePort implementation using BPF.
Read more about this in the [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/nodeport/)
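A rough sketch of opting in explicitly, assuming the `enableNodePort` field and disabling the kube-proxy addon alongside it:

```yaml
spec:
  kubeProxy:
    enabled: false        # kube-proxy is replaced by Cilium in this mode
  networking:
    cilium:
      enableNodePort: true
```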
@ -103,9 +103,9 @@ kops rolling-update cluster --yes
### Enabling Cilium ENI IPAM
This feature is in beta state as of kops 1.18.
This feature is in beta state as of kOps 1.18.
As of Kops 1.18, you can have Cilium provision AWS-managed addresses and attach them directly to Pods, much like Lyft VPC and AWS VPC. See [the Cilium docs for more information](https://docs.cilium.io/en/v1.6/concepts/ipam/eni/)
As of kOps 1.18, you can have Cilium provision AWS-managed addresses and attach them directly to Pods, much like Lyft VPC and AWS VPC. See [the Cilium docs for more information](https://docs.cilium.io/en/v1.6/concepts/ipam/eni/)
When using ENI IPAM you need to disable masquerading in Cilium as well.
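A minimal sketch of that combination, assuming the `ipam` and `disableMasquerade` fields in the Cilium spec:

```yaml
spec:
  networking:
    cilium:
      ipam: eni
      disableMasquerade: true
```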
@ -118,19 +118,19 @@ When using ENI IPAM you need to disable masquerading in Cilium as well.
Note that since Cilium Operator is the entity that interacts with the EC2 API to provision and attach ENIs, we force it to run on the master nodes when this IPAM is used.
Also note that this feature has only been tested on the default kops AMIs.
Also note that this feature has only been tested on the default kOps AMIs.
#### Enabling Encryption in Cilium
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.17') }}
As of Kops 1.19, it is possible to enable encryption for Cilium agent.
As of kOps 1.19, it is possible to enable encryption for Cilium agent.
In order to enable encryption, you must first generate the pre-shared key using this command:
```bash
cat <<EOF | kops create secret ciliumpassword -f -
keys: $(echo "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null| xxd -p -c 64)) 128")
EOF
```
The above command will create a dedicated secret for cilium and store it in the Kops secret store.
The above command will create a dedicated secret for cilium and store it in the kOps secret store.
Once the secret has been created, encryption can be enabled by setting `enableEncryption` option in `spec.networking.cilium` to `true`:
```yaml
networking:

View File

@ -17,7 +17,7 @@ kops create cluster \
### Configuring Flannel iptables resync period
As of Kops 1.12.0, Flannel iptables resync option is configurable via editing a cluster and adding
As of kOps 1.12.0, Flannel iptables resync option is configurable via editing a cluster and adding
`iptablesResyncSeconds` option to spec:
```yaml

View File

@ -13,7 +13,7 @@ To use the Lyft CNI, specify the following in the cluster spec.
lyftvpc: {}
```
in the cluster spec file or pass the `--networking lyftvpc` option on the command line to kops:
in the cluster spec file or pass the `--networking lyftvpc` option on the command line to kOps:
```console
$ export ZONES=mylistofzones

View File

@ -1,6 +1,6 @@
# Romana
Support for Romana is deprecated as of kops 1.18 and removed in kops 1.19.
Support for Romana is deprecated as of kOps 1.18 and removed in kOps 1.19.
## Installing

View File

@ -23,7 +23,7 @@ kops create cluster \
### Configuring Weave MTU
The Weave MTU is configurable by editing the cluster and setting `mtu` option in the weave configuration.
AWS VPCs support jumbo frames, so on cluster creation kops sets the weave MTU to 8912 bytes (9001 minus overhead).
AWS VPCs support jumbo frames, so on cluster creation kOps sets the weave MTU to 8912 bytes (9001 minus overhead).
```yaml
spec:
@ -64,7 +64,7 @@ Note that it is possible to break the cluster networking if flags are improperly
The Weave network encryption is configurable by creating a weave network secret password.
Weaveworks recommends choosing a secret with [at least 50 bits of entropy](https://www.weave.works/docs/net/latest/tasks/manage/security-untrusted-networks/).
If no password is supplied, kops will generate one at random.
If no password is supplied, kOps will generate one at random.
```sh
cat /dev/urandom | tr -dc A-Za-z0-9 | head -c9 > password

View File

@ -1,7 +1,7 @@
### **Node Authorization Service**
:warning: The node authorization service is deprecated.
As of Kubernetes 1.19 kops will, on AWS, ignore the `nodeAuthorization` field of the cluster spec and
As of Kubernetes 1.19 kOps will, on AWS, ignore the `nodeAuthorization` field of the cluster spec and
worker nodes will obtain client certificates for kubelet and other purposes through kops-controller.
The [node authorization service] is an experimental service which, in the absence of a kops-apiserver, provides the distribution of tokens to the worker nodes. Bootstrap tokens provide worker nodes with a short-lived credential to request a kubeconfig client certificate. A gist of the flow is:
@ -10,10 +10,10 @@ The [node authorization service] is an experimental service which in the absence
- the token is distributed to the node by _some_ means and then used as the bearer token of the initial request to the kubernetes api.
- the token itself is bound to the cluster role which grants permission to generate a CSR; an additional cluster role provides access for the controller to auto-approve these CSR requests as well.
- two certificates are generated by the kubelet using the bootstrap process, one for the kubelet api and the other a client certificate for the kubelet itself.
- the client certificate by default is added into the system:nodes rbac group _(note, if you are using PSP this is automatically bound by kops on your behalf)_.
- the client certificate by default is added into the system:nodes rbac group _(note, if you are using PSP this is automatically bound by kOps on your behalf)_.
- the kubelet at this point has a server certificate and the client api certificate and is good to go.
#### **Integration with Kops**
#### **Integration with kOps**
The node authorization service is run on the masters as a daemonset. By default its DNS name is _node-authorizer-internal.dns_zone_:10443, added via the same mechanism as the internal kube-apiserver, i.e. annotations on the kube-apiserver pods which are picked up by the dns-controller and added to the DNS zone.

View File

@ -81,7 +81,7 @@ able to reclaim memory (because it may not observe memory pressure right away,
since it polls `cAdvisor` to collect memory usage stats at a regular interval).
All the while, keep in mind that without `kube-reserved` or `system-reserved`
reservations set (which is the case for most clusters, e.g. [GKE][5], [Kops][6]), the
reservations set (which is the case for most clusters, e.g. [GKE][5], [kOps][6]), the
scheduler doesn't account for resources that non-pod components would require to
function properly because `Capacity` and `Allocatable` resources are more or
less equal.

View File

@ -1,7 +1,7 @@
# Kubernetes Addons and Addon Manager
## Addons
With kops you manage addons by using kubectl.
With kOps you manage addons by using kubectl.
(For a description of the addon-manager, please see [addon_management](#addon-management).)
@ -154,7 +154,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons
*This addon is deprecated. Please use [external-dns](https://github.com/kubernetes-sigs/external-dns) instead.*
Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
Please note that kOps installs a Route53 DNS controller automatically (it is required for cluster discovery).
The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
use one or the other.
[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)
@ -172,27 +172,27 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons
## Addon Management
kops incorporates management of some addons; we _have_ to manage some addons which are needed before
kOps incorporates management of some addons; we _have_ to manage some addons which are needed before
the kubernetes API is functional.
In addition, kops offers end-user management of addons via the `channels` tool (which is still experimental,
In addition, kOps offers end-user management of addons via the `channels` tool (which is still experimental,
but we are working on making it a recommended part of kubernetes addon management). We ship some
curated addons in the [addons directory](https://github.com/kubernetes/kops/tree/master/addons), more information in the [addons document](addons.md).
kops uses the `channels` tool for system addon management also. Because kops uses the same tool
kOps uses the `channels` tool for system addon management also. Because kOps uses the same tool
for *system* addon management as it does for *user* addon management, this means that
addons installed by kops as part of cluster bringup can be managed alongside additional addons.
(Though note that bootstrap addons are much more likely to be replaced during a kops upgrade).
addons installed by kOps as part of cluster bringup can be managed alongside additional addons.
(Though note that bootstrap addons are much more likely to be replaced during a kOps upgrade).
The general kops philosophy is to try to make the set of bootstrap addons minimal, and
The general kOps philosophy is to try to make the set of bootstrap addons minimal, and
to make installation of subsequent addons easy.
Thus, `kube-dns` and the networking overlay (if any) are the canonical bootstrap addons.
But addons such as the dashboard or the EFK stack are easily installed after kops bootstrap,
But addons such as the dashboard or the EFK stack are easily installed after kOps bootstrap,
with a `kubectl apply -f https://...` or with the channels tool.
In future, we may as a convenience make it easy to add optional addons to the kops manifest,
In future, we may as a convenience make it easy to add optional addons to the kOps manifest,
though this will just be a convenience wrapper around doing it manually.
### Update BootStrap Addons
@ -205,7 +205,7 @@ If you want to update the bootstrap addons, you can run the following command to
### Versioning
The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
of the various manifest versions that are available. In this way kops can manage updates
of the various manifest versions that are available. In this way kOps can manage updates
as new versions of the addon are released. For example,
the [dashboard addon](https://github.com/kubernetes/kops/blob/master/addons/kubernetes-dashboard/addon.yaml)
lists multiple versions.
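As a rough sketch, such a manifest-of-manifests might look like this (the addon name, version, and selector below are illustrative):

```yaml
kind: Addons
metadata:
  name: kubernetes-dashboard
spec:
  addons:
  - version: 1.8.3
    selector:
      k8s-addon: kubernetes-dashboard.addons.k8s.io
    manifest: v1.8.3.yaml
```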

View File

@ -41,7 +41,7 @@ dnsZone: k8s.example.com
awsRegion: eu-west-1
```
When multiple environment files are passed using `--values`, Kops performs a deep merge. For example, given the following two files:
When multiple environment files are passed using `--values`, kOps performs a deep merge. For example, given the following two files:
```yaml
# File values-a.yaml
instanceGroups:

View File

@ -16,7 +16,7 @@ Take a snapshot of your EBS volumes; export all your data from kubectl etc.**
Limitations:
* kops splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but you will lose your events history.
* kOps splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but you will lose your events history.
* Doubtless others not yet known - please open issues if you encounter them!
### Overview
@ -190,7 +190,7 @@ This method provides zero-downtime when migrating a cluster from `kube-up` to `k
Limitations:
- If you're using the default networking (`kubenet`), there is an account limit of 50 entries in a VPC's route table. If your cluster contains more than ~25 nodes, this strategy, as-is, will not work.
+ Shifting to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, and similar. See the [kops networking docs](../networking.md) for more information.
+ Shifting to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, and similar. See the [kOps networking docs](../networking.md) for more information.
+ One solution is to gradually shift traffic from one cluster to the other, scaling down the number of nodes on the old cluster, and scaling up the number of nodes on the new cluster.
### Steps

View File

@ -2,13 +2,13 @@
## etcd-manager
etcd-manager is a kubernetes-associated project that kops uses to manage
etcd-manager is a kubernetes-associated project that kOps uses to manage
etcd.
etcd-manager uses many of the same ideas as the existing etcd implementation
built into kops, but it addresses some limitations also:
built into kOps, but it addresses some limitations also:
* separate from kops - can be used by other projects
* separate from kOps - can be used by other projects
* allows etcd2 -> etcd3 upgrade (along with minor upgrades)
* allows cluster resizing (e.g. going from 1 to 3 nodes)
@ -16,7 +16,7 @@ When using kubernetes >= 1.12 etcd-manager will be used by default. See [../etcd
## Backups
Backups and restores of etcd on kops are covered in [etcd_backup_restore_encryption.md](etcd_backup_restore_encryption.md)
Backups and restores of etcd on kOps are covered in [etcd_backup_restore_encryption.md](etcd_backup_restore_encryption.md)
## Direct Data Access

View File

@ -8,7 +8,7 @@ can be found [here](https://kubernetes.io/docs/admin/etcd/) and
### Backup requirement
A Kubernetes cluster deployed with kops stores the etcd state in two different
A Kubernetes cluster deployed with kOps stores the etcd state in two different
AWS EBS volumes per master node. One volume is used to store the Kubernetes
main data, the other one for events. For a HA master with three nodes this will
result in six volumes for etcd data (one in each AZ). An EBS volume is designed
@ -20,7 +20,7 @@ of 0.1%-0.2% per year.
## Taking backups
Backups are done periodically and before cluster modifications using [etcd-manager](etcd_administration.md)
(introduced in kops 1.12). Backups for both the `main` and `events` etcd clusters
(introduced in kOps 1.12). Backups for both the `main` and `events` etcd clusters
are stored in object storage (like S3) together with the cluster configuration.
By default, backups are taken every 15 min. Hourly backups are kept for 1 week and

View File

@ -4,7 +4,7 @@
For testing purposes, kubernetes works just fine with a single master. However, when the master becomes unavailable, for example due to upgrade or instance failure, the kubernetes API will be unavailable. Pods and services that are running in the cluster continue to operate as long as they do not depend on interacting with the API, but operations such as adding nodes, scaling pods, or replacing terminated pods will not work. Running kubectl will also not work.
kops runs each master in a dedicated autoscaling group (ASG) and stores data on EBS volumes. That way, if a master node is terminated, the ASG will launch a new master instance with the master's volume. Because of the dedicated EBS volumes, each master is bound to a fixed Availability Zone (AZ). If the AZ becomes unavailable, the master instance in that AZ will also become unavailable.
kOps runs each master in a dedicated autoscaling group (ASG) and stores data on EBS volumes. That way, if a master node is terminated, the ASG will launch a new master instance with the master's volume. Because of the dedicated EBS volumes, each master is bound to a fixed Availability Zone (AZ). If the AZ becomes unavailable, the master instance in that AZ will also become unavailable.
For production use, you therefore want to run kubernetes in a HA setup with multiple masters. With multiple master nodes, you will be able both to do graceful (zero-downtime) upgrades and you will be able to survive AZ failures.
@ -19,7 +19,7 @@ Note that running clusters spanning several AZs is more expensive than running c
### Example 1: public topology
The simplest way to get started with a HA cluster is to run `kops create cluster` as shown below. The `--master-zones` flag lists the zones you want your masters
to run in. By default, kops will create one master per AZ. Since the kubernetes etcd cluster runs on the master nodes, you have to specify an odd number of zones in order to obtain quorum.
to run in. By default, kOps will create one master per AZ. Since the kubernetes etcd cluster runs on the master nodes, you have to specify an odd number of zones in order to obtain quorum.
```
kops create cluster \

View File

@ -1,6 +1,6 @@
# Images
As of Kubernetes 1.18 the default images used by _kops_ are the **[official Ubuntu 20.04](#ubuntu-2004-focal)** images.
As of Kubernetes 1.18 the default images used by kOps are the **[official Ubuntu 20.04](#ubuntu-2004-focal)** images.
You can choose a different image for an instance group by editing it with `kops edit ig nodes`. You should see an `image` field in one of the following formats:
@ -22,7 +22,7 @@ You can find the name for an image using:
## Security Updates
Automated security updates are handled by _kops_ for Debian, Flatcar and Ubuntu distros. This can be disabled by editing the cluster configuration:
Automated security updates are handled by kOps for Debian, Flatcar and Ubuntu distros. This can be disabled by editing the cluster configuration:
```yaml
spec:
@ -31,7 +31,7 @@ spec:
## Distros Support Matrix
The following table provides the support status for various distros with regards to _kops_ version:
The following table provides the support status for various distros with regards to kOps version:
| Distro | Experimental | Stable | Deprecated | Removed |
| ------------ | -----------: | -----: | ---------: | ------: |
@ -56,7 +56,7 @@ The following table provides the support status for various distros with regards
Amazon Linux 2 is based on kernel version **4.14**, which fixes some of the bugs present in RHEL/CentOS 7, so their effects are less visible, but it is still quite old.
For _kops_ versions 1.16 and 1.17, the only supported Docker version is `18.06.3`. Newer versions of Docker cannot be installed due to missing dependencies for `container-selinux`. This issue is fixed in _kops_ **1.18**.
For kOps versions 1.16 and 1.17, the only supported Docker version is `18.06.3`. Newer versions of Docker cannot be installed due to missing dependencies for `container-selinux`. This issue is fixed in kOps **1.18**.
Available images can be listed using:
@ -212,27 +212,27 @@ aws ec2 describe-images --region us-east-1 --output table \
### CoreOS
Support for CoreOS was removed in _kops_ 1.18.
Support for CoreOS was removed in kOps 1.18.
You should consider using [Flatcar](#flatcar) as a replacement.
### Debian 8 (Jessie)
Support for Debian 8 (Jessie) was removed in _kops_ 1.18.
Support for Debian 8 (Jessie) was removed in kOps 1.18.
### Kope.io
Support for _kope.io_ images is deprecated. These images were the default until Kubernetes 1.18, when they were replaced by the [official Ubuntu 20.04](#ubuntu-2004-focal) images.
The _kope.io_ images were based on [Debian 9 (Stretch)](#debian-9-stretch) and had all packages required by _kops_ pre-installed. Other than that, the changes to the official Debian images were [minimal](https://github.com/kubernetes-sigs/image-builder/blob/master/images/kube-deploy/imagebuilder/templates/1.18-stretch.yml#L174-L198).
The _kope.io_ images were based on [Debian 9 (Stretch)](#debian-9-stretch) and had all packages required by kOps pre-installed. Other than that, the changes to the official Debian images were [minimal](https://github.com/kubernetes-sigs/image-builder/blob/master/images/kube-deploy/imagebuilder/templates/1.18-stretch.yml#L174-L198).
### Ubuntu 16.04 (Xenial)
Support for Ubuntu 16.04 (Xenial) is deprecated and will be removed in _kops_ 1.20.
Support for Ubuntu 16.04 (Xenial) is deprecated and will be removed in kOps 1.20.
## Owner aliases
_kops_ supports owner aliases for the official accounts of supported distros:
kOps supports owner aliases for the official accounts of supported distros:
* `kope.io` => `383156758163`
* `amazon` => `137112412989`

View File

@ -1,7 +1,7 @@
# Rolling Updates
Upgrading and modifying a k8s cluster usually requires the replacement of cloud instances.
In order to avoid loss of service and other disruption, Kops replaces cloud instances
In order to avoid loss of service and other disruption, kOps replaces cloud instances
incrementally with a rolling update.
Rolling updates are performed using

View File

@ -1,6 +1,6 @@
# Updates and Upgrades
## Updating kops
## Updating kOps
### MacOS
@ -40,7 +40,7 @@ Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesV
The `kops upgrade` command also automates checking for and applying updates.
It is recommended to run the latest version of Kops to ensure compatibility with the target kubernetesVersion. When applying a Kubernetes minor version upgrade (e.g. `v1.5.3` to `v1.6.0`), you should confirm that the target kubernetesVersion is compatible with the [current Kops release](https://github.com/kubernetes/kops/releases).
It is recommended to run the latest version of kOps to ensure compatibility with the target kubernetesVersion. When applying a Kubernetes minor version upgrade (e.g. `v1.5.3` to `v1.6.0`), you should confirm that the target kubernetesVersion is compatible with the [current kOps release](https://github.com/kubernetes/kops/releases).
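As a minimal illustration, the upgrade target is simply the `kubernetesVersion` field in the cluster spec (the version shown is a placeholder):

```yaml
spec:
  kubernetesVersion: 1.6.0
```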
Note: if you want to upgrade from a `kube-up` installation, please see the instructions for [how to upgrade kubernetes installed with kube-up](cluster_upgrades_and_migrations.md).
@ -61,7 +61,7 @@ node restart), but currently you must:
* `kops update cluster $NAME` to preview, then `kops update cluster $NAME --yes`
* `kops rolling-update cluster $NAME` to preview, then `kops rolling-update cluster $NAME --yes`
Upgrade uses the latest Kubernetes version considered stable by kops, defined in `https://github.com/kubernetes/kops/blob/master/channels/stable`.
Upgrade uses the latest Kubernetes version considered stable by kOps, defined in `https://github.com/kubernetes/kops/blob/master/channels/stable`.
### Terraform Users

View File

@ -1,6 +1,6 @@
kops: Operate Kubernetes the Kubernetes Way
kOps: Operate Kubernetes the Kubernetes Way
kops (Kubernetes-Ops) is a set of tools for installing, operating and deleting Kubernetes clusters.
kOps (Kubernetes-Ops) is a set of tools for installing, operating and deleting Kubernetes clusters.
It follows the Kubernetes design philosophy: the user creates a Cluster configuration object in JSON/YAML,
and then controllers create the Cluster.
@ -8,7 +8,7 @@ and then controllers create the Cluster.
Each component (kubelet, kube-apiserver...) is explicitly configured: We reuse the k8s componentconfig types
where we can, and we create additional types for the configuration of additional components.
kops can:
kOps can:
* create a cluster
* upgrade a cluster
@ -18,8 +18,8 @@ kops can:
* delete a cluster
Some users will need or prefer to use tools like Terraform for cluster configuration,
so kops can also output the equivalent configuration for those tools also (currently just Terraform, others
planned). After creation with your preferred tool, you can still use the rest of the kops tooling to operate
so kOps can also output the equivalent configuration for those tools (currently just Terraform, others
planned). After creation with your preferred tool, you can still use the rest of the kOps tooling to operate
your cluster.
## Primary API types

View File

@ -1,30 +1,30 @@
** This file documents the new release process, as used from kops 1.19
onwards. For the process used for versions up to kops 1.18, please
** This file documents the new release process, as used from kOps 1.19
onwards. For the process used for versions up to kOps 1.18, please
see [the old release process](development/release.md)**
# Release Process
The kops project is released on an as-needed basis. The process is as follows:
The kOps project is released on an as-needed basis. The process is as follows:
1. An issue is opened proposing a new release, with a changelog since the last release
1. All [OWNERS](https://github.com/kubernetes/kops/blob/master/OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
1. The release issue is closed
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kops $VERSION is released`
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kOps $VERSION is released`
## Branches
We maintain a `release-1.17` branch for kops 1.17.X, `release-1.18` for kops 1.18.X
We maintain a `release-1.17` branch for kOps 1.17.X, `release-1.18` for kOps 1.18.X
etc.
`master` is where development happens. We create new branches from master as a
new kops version is released, or in preparation for a new release. As we are
new kOps version is released, or in preparation for a new release. As we are
preparing for a new kubernetes release, we will try to advance the master branch
to focus on the new functionality, and start cherry-picking back more selectively
to the release branches only as needed.
Generally we don't encourage users to run older kops versions, or older
branches, because newer versions of kops should remain compatible with older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.
Releases should be done from the `release-1.X` branch. The tags should be made
@ -35,11 +35,11 @@ the current `release-1.X` tag.
## New Kubernetes versions and release branches
Typically Kops alpha releases are created off the master branch and beta and stable releases are created off of release branches.
Typically kOps alpha releases are created off the master branch and beta and stable releases are created off of release branches.
In order to create a new release branch off of master prior to a beta release, perform the following steps:
1. Create a new periodic E2E prow job for the "next" kubernetes minor version.
* All Kops prow jobs are defined [here](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes/kops)
* All kOps prow jobs are defined [here](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes/kops)
2. Create a new presubmit E2E prow job for the new release branch.
3. Create a new milestone in the GitHub repo.
4. Update [prow's milestone_applier config](https://github.com/kubernetes/test-infra/blob/dc99617c881805981b85189da232d29747f87004/config/prow/plugins.yaml#L309-L313) to update master to use the new milestone and add an entry for the new branch that targets master's old milestone.
@ -183,7 +183,7 @@ Currently we send the image and non-image artifact promotion PRs separately.
```
git add -p
git commit -m "Promote kops $VERSION images"
git commit -m "Promote kOps $VERSION images"
git push ${USER}
hub pull-request
```
@ -207,7 +207,7 @@ Verify, then send a PR:
```
git add artifacts/manifests/k8s-staging-kops/${VERSION}.yaml
git commit -m "Promote kops $VERSION binary artifacts"
git commit -m "Promote kOps $VERSION binary artifacts"
git push ${USER}
hub pull-request
```
@ -279,7 +279,7 @@ chmod +x ko
./ko version
```
Also run through a kops create cluster flow, ideally verifying that
Also run through a `kops create cluster` flow, ideally verifying that
everything is pulling from the new locations.
## On github
@ -289,7 +289,7 @@ everything is pulling from the new locations.
* Add notes
* Publish it
## Release kops to homebrew
## Release kOps to homebrew
* Following the [documentation](homebrew.md) we must release a compatible homebrew formulae with the release.
* This should be done at the same time as the release, and we will iterate on how to improve timing of this.
@ -298,11 +298,11 @@ everything is pulling from the new locations.
Once we are satisfied the release is sound:
* Bump the kops recommended version in the alpha channel
* Bump the kOps recommended version in the alpha channel
Once we are satisfied the release is stable:
* Bump the kops recommended version in the stable channel
* Bump the kOps recommended version in the stable channel
## Update conformance results with CNCF

View File

@ -51,7 +51,7 @@ None known at this time
* Add SubnetType tags to run_in_existing_vpc docs [@tsupertramp](https://github.com/tsupertramp) [#5094](https://github.com/kubernetes/kops/pull/5094)
* Typo fix: actually->actually/overide->override/to to->to [@AdamDang](https://github.com/AdamDang) [#5099](https://github.com/kubernetes/kops/pull/5099) (fixed typo in message for verify - @chrisz100)
* Typo fix detaults->defaults [@AdamDang](https://github.com/AdamDang) [#5067](https://github.com/kubernetes/kops/pull/5067)
* Update upgrade_from_kops_1.6_to_1.7_calico_cidr_migration.md [@AdamDang](https://github.com/AdamDang) [#5107](https://github.com/kubernetes/kops/pull/5107)
* Update upgrade_from_kops_1.6_to_1.7_calico_cidr_migration.md [@AdamDang](https://github.com/AdamDang) [#5107](https://github.com/kubernetes/kops/pull/5107)
* Typo fix: healthly->healthy [@AdamDang](https://github.com/AdamDang) [#5125](https://github.com/kubernetes/kops/pull/5125)
* Remove custom Statement IDs from IAM Policy Statements [@KashifSaadat](https://github.com/KashifSaadat) [#4958](https://github.com/kubernetes/kops/pull/4958)
* Adds new kops logo [@iMartyn](https://github.com/iMartyn) [#5113](https://github.com/kubernetes/kops/pull/5113)
@ -62,7 +62,7 @@ None known at this time
* Added tls certificate and private key path flags to kubelet config [@chrisz100](https://github.com/chrisz100) [#5088](https://github.com/kubernetes/kops/pull/5088)
* kubelet: expose --experimental-allowed-unsafe-sysctls [@smcquay](https://github.com/smcquay) [#5104](https://github.com/kubernetes/kops/pull/5104)
* Update docker image versions [@justinsb](https://github.com/justinsb) [#5057](https://github.com/kubernetes/kops/pull/5057)
* CoreDNS in Kops as an addon [@rajansandeep](https://github.com/rajansandeep) [#4041](https://github.com/kubernetes/kops/pull/4041)
* CoreDNS in kOps as an addon [@rajansandeep](https://github.com/rajansandeep) [#4041](https://github.com/kubernetes/kops/pull/4041)
* Implement network task for AlibabaCloud [@LilyFaFa](https://github.com/LilyFaFa),[@xh4n3](https://github.com/xh4n3) [#4991](https://github.com/kubernetes/kops/pull/4991)
* Allow rolling-update to filter on roles [@justinsb](https://github.com/justinsb) [#5122](https://github.com/kubernetes/kops/pull/5122)
* Remove stub tests [@justinsb](https://github.com/justinsb) [#5117](https://github.com/kubernetes/kops/pull/5117)
@ -268,7 +268,7 @@ None known at this time
* Add autoscaling group ids to terraform module output [@kampka](https://github.com/kampka) [#5472](https://github.com/kubernetes/kops/pull/5472)
* Allow kubelet to bind the hosts primary IP [@rdrgmnzs](https://github.com/rdrgmnzs) [#5460](https://github.com/kubernetes/kops/pull/5460)
* ContainerRegistry remapping should be atomic [@kampka](https://github.com/kampka) [#5479](https://github.com/kubernetes/kops/pull/5479)
* [GPU] Updated Kops GPU Setup Hook [@dcwangmit01](https://github.com/dcwangmit01) [#4971](https://github.com/kubernetes/kops/pull/4971)
* [GPU] Updated kOps GPU Setup Hook [@dcwangmit01](https://github.com/dcwangmit01) [#4971](https://github.com/kubernetes/kops/pull/4971)
* Only use SSL for ELB if certificate configured [@justinsb](https://github.com/justinsb) [#5485](https://github.com/kubernetes/kops/pull/5485)
* Simplify logic around master rolling-update [@justinsb](https://github.com/justinsb) [#5488](https://github.com/kubernetes/kops/pull/5488)
* Update Issue templates and add PR template [@mikesplain](https://github.com/mikesplain) [#5487](https://github.com/kubernetes/kops/pull/5487)

View File

@ -106,7 +106,7 @@
* Add doc regarding upgrading to CoreDNS [@joshbranham](https://github.com/joshbranham) [#6344](https://github.com/kubernetes/kops/pull/6344)
* AWS: Enable ICMP Type 3 Code 4 for API server ELBs [@davidarcher](https://github.com/davidarcher) [#6297](https://github.com/kubernetes/kops/pull/6297)
* Additional Storage & Volume Mounting [@gambol99](https://github.com/gambol99) [#6066](https://github.com/kubernetes/kops/pull/6066)
* Kops for Openstack [@jrperritt](https://github.com/jrperritt),[@drekle](https://github.com/drekle),[@wozniakjan](https://github.com/wozniakjan),[@marsavela](https://github.com/marsavela) [#6351](https://github.com/kubernetes/kops/pull/6351)
* kOps for Openstack [@jrperritt](https://github.com/jrperritt),[@drekle](https://github.com/drekle),[@wozniakjan](https://github.com/wozniakjan),[@marsavela](https://github.com/marsavela) [#6351](https://github.com/kubernetes/kops/pull/6351)
* Update go version to 1.10.8 [@justinsb](https://github.com/justinsb) [#6401](https://github.com/kubernetes/kops/pull/6401)
* Suffix openstack subnet name with cluster name [@wozniakjan](https://github.com/wozniakjan) [#6380](https://github.com/kubernetes/kops/pull/6380)
* Update upgrade.md [@ms4720](https://github.com/ms4720) [#6396](https://github.com/kubernetes/kops/pull/6396)

View File

@ -35,7 +35,7 @@ is safer.
apiGroup will now be kops.k8s.io, not kops. If performing strict string
comparison you will need to update your expected values.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -111,7 +111,7 @@ is safer.
* Make gofmt fails find usage [@drekle](https://github.com/drekle) [#6954](https://github.com/kubernetes/kops/pull/6954)
* Update commitlog relnotes for 1.12.0 [@justinsb](https://github.com/justinsb) [#6981](https://github.com/kubernetes/kops/pull/6981)
* 1.12 highlight changelog [@granular-ryanbonham](https://github.com/granular-ryanbonham) [#6982](https://github.com/kubernetes/kops/pull/6982)
* Mention version of Kops that introduced new features [@rifelpet](https://github.com/rifelpet) [#6983](https://github.com/kubernetes/kops/pull/6983)
* Mention version of kOps that introduced new features [@rifelpet](https://github.com/rifelpet) [#6983](https://github.com/kubernetes/kops/pull/6983)
* Terraform: fix options field, should be spot_options [@kimxogus](https://github.com/kimxogus) [#6988](https://github.com/kubernetes/kops/pull/6988)
* Add shortNames and columns to InstanceGroup CRD [@justinsb](https://github.com/justinsb) [#6995](https://github.com/kubernetes/kops/pull/6995)
* Add script to verify CRD generation [@justinsb](https://github.com/justinsb) [#6996](https://github.com/kubernetes/kops/pull/6996)
@ -191,7 +191,7 @@ is safer.
* Use readinessProbe for weave-net instead of livenessProbe [@ReillyProcentive](https://github.com/ReillyProcentive) [#7102](https://github.com/kubernetes/kops/pull/7102)
* Add some permissions to cluster-autoscaler clusterrole [@Coolknight](https://github.com/Coolknight) [#7248](https://github.com/kubernetes/kops/pull/7248)
* Spotinst: Rolling update always reports NeedsUpdate [@liranp](https://github.com/liranp) [#7251](https://github.com/kubernetes/kops/pull/7251)
* Add documentation example for running Kops in a CI environment [@rifelpet](https://github.com/rifelpet) [#7256](https://github.com/kubernetes/kops/pull/7256)
* Add documentation example for running kOps in a CI environment [@rifelpet](https://github.com/rifelpet) [#7256](https://github.com/kubernetes/kops/pull/7256)
* Calico -> 3.7.4 for older versions [@justinsb](https://github.com/justinsb) [#7282](https://github.com/kubernetes/kops/pull/7282)
* [Issue-7148] Legacyetcd support for Digital Ocean [@srikiz](https://github.com/srikiz) [#7221](https://github.com/kubernetes/kops/pull/7221)
* Stop .gitignoring all files named go-bindata [@justinsb](https://github.com/justinsb) [#7288](https://github.com/kubernetes/kops/pull/7288)
@ -376,7 +376,7 @@ is safer.
* Fix issues with older versions of k8s for basic clusters [@hakman](https://github.com/hakman),[@rifelpet](https://github.com/rifelpet) [#8248](https://github.com/kubernetes/kops/pull/8248)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
## 1.15.1 to 1.15.2

View File

@ -28,11 +28,11 @@
# Required Actions
* If either a Kops 1.16 alpha release or a custom Kops build was used on a cluster,
* If either a kOps 1.16 alpha release or a custom kOps build was used on a cluster,
a kops-controller Deployment may have been created that should get deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.16.0-beta.1 or later.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.16.0-beta.1 or later.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -144,7 +144,7 @@
* Add a BAZEL_CONFIG Makefile arg to bazel commands [@fejta](https://github.com/fejta) [#7758](https://github.com/kubernetes/kops/pull/7758)
* Memberlist gossip implementation [@jacksontj](https://github.com/jacksontj) [#7521](https://github.com/kubernetes/kops/pull/7521)
* bazel: comment out shallow_since as fails to build with bazel 1.0 [@justinsb](https://github.com/justinsb) [#7771](https://github.com/kubernetes/kops/pull/7771)
* Kops controller support for OpenStack [@zetaab](https://github.com/zetaab) [#7692](https://github.com/kubernetes/kops/pull/7692)
* kOps controller support for OpenStack [@zetaab](https://github.com/zetaab) [#7692](https://github.com/kubernetes/kops/pull/7692)
* Upgrade Amazon VPC CNI plugin to 1.5.4 [@rifelpet](https://github.com/rifelpet) [#7398](https://github.com/kubernetes/kops/pull/7398)
* Add documentation for updating CRDs when making API changes [@rifelpet](https://github.com/rifelpet) [#7728](https://github.com/kubernetes/kops/pull/7728)
* Kubelet configuration: Maximum pods flag is miscalculated when using Amazon VPC CNI [@liranp](https://github.com/liranp) [#7539](https://github.com/kubernetes/kops/pull/7539)
@ -233,7 +233,7 @@
* Machine types updates [@mikesplain](https://github.com/mikesplain) [#7947](https://github.com/kubernetes/kops/pull/7947)
* fix 404 urls in docs [@tanjunchen](https://github.com/tanjunchen) [#7943](https://github.com/kubernetes/kops/pull/7943)
* Fix generation of documentation /sitemap.xml file [@aledbf](https://github.com/aledbf) [#7949](https://github.com/kubernetes/kops/pull/7949)
* Kops site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* kOps site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* Fix netlify mixed content [@mikesplain](https://github.com/mikesplain) [#7953](https://github.com/kubernetes/kops/pull/7953)
* Fix goimports errors [@rifelpet](https://github.com/rifelpet) [#7955](https://github.com/kubernetes/kops/pull/7955)
* Update Lyft CNI to v0.5.1 [@maruina](https://github.com/maruina) [#7402](https://github.com/kubernetes/kops/pull/7402)
@ -268,7 +268,7 @@
* Add Cilium.EnablePolicy back into templates [@olemarkus](https://github.com/olemarkus) [#8379](https://github.com/kubernetes/kops/pull/8379)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* EBS Root Volume Termination [@tioxy](https://github.com/tioxy) [#7865](https://github.com/kubernetes/kops/pull/7865)
* Announce impending removal of v1alpha1 API [@johngmyers](https://github.com/johngmyers) [#8064](https://github.com/kubernetes/kops/pull/8064)
* Add missing priorityClassName for critical pods [@johngmyers](https://github.com/johngmyers) [#8200](https://github.com/kubernetes/kops/pull/8200)

View File

@ -28,11 +28,11 @@
# Required Actions
* Terraform users on AWS may need to rename resources in their terraform state file in order to prepare for future Terraform 0.12 support.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In Kops, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In kOps, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
* The default route was named `aws_route.0-0-0-0--0` and will now be named `aws_route.route-0-0-0-0--0`.
* Additional CIDR blocks associated with a VPC were similarly named after the hyphenated CIDR block, with two hyphens replacing the `/`, for example `aws_vpc_ipv4_cidr_block_association.10-1-0-0--16`. These will now be prefixed with `cidr-`, for example `aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16` (see the sketch after this list for one way to perform the rename).
To prevent downtime, follow these steps with the new version of Kops:
To prevent downtime, follow these steps with the new version of kOps:
```
kops update cluster --target terraform ...
terraform plan
@ -45,7 +45,7 @@
terraform apply
```
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -57,9 +57,9 @@
PodPriority: "true"
```
* If either a Kops 1.17 alpha release or a custom Kops build was used on a cluster,
* If either a kOps 1.17 alpha release or a custom kOps build was used on a cluster,
a kops-controller Deployment may have been created that should get deleted because it has been replaced with a DaemonSet.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.17.0-alpha.2 or later.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.17.0-alpha.2 or later.
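For the Terraform resource renames described earlier in this list, one way to perform them without destroying resources is `terraform state mv`. A hedged sketch using the resource names given above (the exact steps elided by the diff hunks may differ):
```
terraform state mv aws_route.0-0-0-0--0 aws_route.route-0-0-0-0--0
terraform state mv aws_vpc_ipv4_cidr_block_association.10-1-0-0--16 aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16
```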
# Deprecations
@ -67,27 +67,27 @@
* The `kops/v1alpha1` API is deprecated and will be removed in kops 1.18. Users of `kops replace` will need to supply v1alpha2 resources.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of Kops.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of kOps.
* Support for Debian 8 (Jessie) has been deprecated and will be removed in future versions of Kops.
* Support for Debian 8 (Jessie) has been deprecated and will be removed in future versions of kOps.
* Support for CoreOS has been deprecated and will be removed in future versions of Kops. Those affected should consider using [Flatcar](../operations/images.md#flatcar) as a replacement.
* Support for CoreOS has been deprecated and will be removed in future versions of kOps. Those affected should consider using [Flatcar](../operations/images.md#flatcar) as a replacement.
* Support for the "Legacy" etcd provider has been deprecated. It will not be supported for Kubernetes 1.18 or later. To migrate to the default "Manager" etcd provider see the [etcd migration documentation](../etcd3-migration.md).
* The default StorageClass `gp2` prior to Kops 1.17.0 is no longer the default, replaced by StorageClass `kops-ssd-1-17`.
* The default StorageClass `gp2` prior to kOps 1.17.0 is no longer the default, replaced by StorageClass `kops-ssd-1-17`.
# Known Issues
* Kops 1.17.0-beta.1 included an update for AWS IAM Authenticator to 0.5.0.
* kOps 1.17.0-beta.1 included an update for AWS IAM Authenticator to 0.5.0.
This version fails to use the volume-mounted ConfigMap, causing API authentication issues for clients with aws-iam-authenticator credentials.
Any cluster with `spec.authentication.aws` defined according to the [docs](../authentication.md#aws-iam-authenticator) without overriding the `spec.authentication.aws.image` is affected.
The workaround is to specify the old 0.4.0 image with `spec.authentication.aws.image=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.4.0` (see the sketch after this list).
For the 1.17.0 release, this change was rolled back, and the AWS IAM Authenticator defaults to version 0.4.0.
* Kops 1.17.0 includes a new StorageClass `kops-ssd-1-17` which is set as the default via the annotation `"storageclass.beta.kubernetes.io/is-default-class":"true"`.
* kOps 1.17.0 includes a new StorageClass `kops-ssd-1-17` which is set as the default via the annotation `"storageclass.beta.kubernetes.io/is-default-class":"true"`.
If you have modified the previous `gp2` StorageClass, it could conflict with the defaulting behavior.
To resolve, patch the `gp2` StorageClass to have the annotation `"storageclass.beta.kubernetes.io/is-default-class":"false"`, which aligns with a patch to Kops 1.17.1 as well.
To resolve, patch the `gp2` StorageClass to have the annotation `"storageclass.beta.kubernetes.io/is-default-class":"false"`, which aligns with a patch to kOps 1.17.1 as well.
`kubectl patch storageclass.storage.k8s.io/gp2 --patch '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "false"}}}'`
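Returning to the AWS IAM Authenticator issue above, a hedged YAML sketch of the image override (the image value is taken from the text; the layout assumes the usual cluster spec structure):
```yaml
spec:
  authentication:
    aws:
      image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.4.0
```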
# Full change list since 1.16.0 release
@ -113,7 +113,7 @@
* Machine types updates [@mikesplain](https://github.com/mikesplain) [#7947](https://github.com/kubernetes/kops/pull/7947)
* fix 404 urls in docs [@tanjunchen](https://github.com/tanjunchen) [#7943](https://github.com/kubernetes/kops/pull/7943)
* Fix generation of documentation /sitemap.xml file [@aledbf](https://github.com/aledbf) [#7949](https://github.com/kubernetes/kops/pull/7949)
* Kops site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* kOps site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* Fix netlify mixed content [@mikesplain](https://github.com/mikesplain) [#7953](https://github.com/kubernetes/kops/pull/7953)
* Fix goimports errors [@rifelpet](https://github.com/rifelpet) [#7955](https://github.com/kubernetes/kops/pull/7955)
* Update Lyft CNI to v0.5.1 [@maruina](https://github.com/maruina) [#7402](https://github.com/kubernetes/kops/pull/7402)
@ -182,7 +182,7 @@
* Bump etcd-manager to 3.0.20200116 (#8310) [@mmerrill3](https://github.com/mmerrill3) [#8399](https://github.com/kubernetes/kops/pull/8399)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* EBS Root Volume Termination [@tioxy](https://github.com/tioxy) [#7865](https://github.com/kubernetes/kops/pull/7865)
* Alicloud: etcd-manager support [@bittopaz](https://github.com/bittopaz) [#8016](https://github.com/kubernetes/kops/pull/8016)
@ -236,7 +236,7 @@
* Update lyft CNI to 0.6.0 [@maruina](https://github.com/maruina) [#8757](https://github.com/kubernetes/kops/pull/8757)
* Fix Handling of LaunchTemplate Versions for MixedInstancePolicy [@granular-ryanbonham](https://github.com/granular-ryanbonham),[@KashifSaadat](https://github.com/KashifSaadat),[@qqshfox](https://github.com/qqshfox) [#8038](https://github.com/kubernetes/kops/pull/8038)
* Enable stamping on bazel image builds [@rifelpet](https://github.com/rifelpet) [#8835](https://github.com/kubernetes/kops/pull/8835)
* Add support for Docker 19.03.8 in Kops 1.17 [@hakman](https://github.com/hakman) [#8845](https://github.com/kubernetes/kops/pull/8845)
* Add support for Docker 19.03.8 in kOps 1.17 [@hakman](https://github.com/hakman) [#8845](https://github.com/kubernetes/kops/pull/8845)
* Remove support for Docker 1.11, 1.12 and 1.13 [@hakman](https://github.com/hakman) [#8855](https://github.com/kubernetes/kops/pull/8855)
* Fix kuberouter for k8s 1.16+ [@UnderMyBed](https://github.com/UnderMyBed),[@hakman](https://github.com/hakman) [#8697](https://github.com/kubernetes/kops/pull/8697)
* Fix tests for obsolete Docker versions in 1.17 [@hakman](https://github.com/hakman) [#8889](https://github.com/kubernetes/kops/pull/8889)

View File

@ -1,4 +1,4 @@
## Release notes for kops 1.18 series
## Release notes for kOps 1.18 series
# Significant changes
@ -18,7 +18,7 @@
* Rolling updates now support surging and parallelism within an instance group. For details see [the documentation](https://kops.sigs.k8s.io/operations/rolling-update/).
* Cilium CNI can now use AWS networking natively through the AWS ENI IPAM mode. Kops can also run a Kubernetes cluster entirely without kube-proxy using Cilium's BPF NodePort implementation.
* Cilium CNI can now use AWS networking natively through the AWS ENI IPAM mode. kOps can also run a Kubernetes cluster entirely without kube-proxy using Cilium's BPF NodePort implementation.
* Cilium CNI can now use a dedicated etcd cluster managed by etcd-manager for synchronizing agent state instead of CRDs.
@ -44,7 +44,7 @@
* The Docker `health-check` service has been disabled by default. It shouldn't be needed anymore, but it can still be enabled by setting `spec.docker.healthCheck: true`. It is recommended to also check [node-problem-detector](https://github.com/kubernetes/node-problem-detector) and [draino](https://github.com/planetlabs/draino) as replacements. See Required Actions below.
* Network and internet access for `docker run` containers has been disabled by default, to avoid any unwanted interaction between the Docker firewall rules and the firewall rules of netwok plugins. This was the default since the early days of Kops, but a race condition in the Docker startup sequence changed this behaviour in more recent years. To re-enable, set `spec.docker.ipTables: true` and `spec.docker.ipMasq: true`.
* Network and internet access for `docker run` containers has been disabled by default, to avoid any unwanted interaction between the Docker firewall rules and the firewall rules of network plugins. This was the default since the early days of kOps, but a race condition in the Docker startup sequence changed this behaviour in more recent years. To re-enable, set `spec.docker.ipTables: true` and `spec.docker.ipMasq: true`.
* Lyft CNI plugin default subnet tags changed from from `Type: pod` to `KubernetesCluster: myclustername.mydns.io`. Subnets intended for use by the plugin will need to be tagged with this new tag and [additional tag filters](https://github.com/lyft/cni-ipvlan-vpc-k8s#other-configuration-flags) may need to be added to the cluster spec in order to achieve the desired set of subnets.
@ -62,16 +62,16 @@
* The `kops.k8s.io/v1alpha1` API has been removed. Users of `kops replace` will need to supply v1alpha2 resources.
* Please see the notes in the 1.15 release about the apiGroup changing from kops to kops.k8s.io
* Please see the notes in the 1.15 release about the apiGroup changing from kops to kops.k8s.io
# Required Actions
* Terraform users on AWS may need to rename resources in their terraform state file in order to support Terraform 0.12.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In Kops, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In kOps, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
* The default route was named `aws_route.0-0-0-0--0` and will now be named `aws_route.route-0-0-0-0--0`.
* Additional CIDR blocks associated with a VPC were similarly named after the hyphenated CIDR block, with two hyphens replacing the `/`, for example `aws_vpc_ipv4_cidr_block_association.10-1-0-0--16`. These will now be prefixed with `cidr-`, for example `aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16`.
To prevent downtime, follow these steps with the new version of Kops:
To prevent downtime, follow these steps with the new version of kOps:
```
KOPS_FEATURE_FLAGS=-Terraform-0.12 kops update cluster --target terraform ...
# Use Terraform <0.12
@ -84,7 +84,7 @@
# Ensure these resources are no longer being destroyed and recreated
terraform apply
```
Kops will now output Terraform 0.12 syntax with the normal workflow:
kOps will now output Terraform 0.12 syntax with the normal workflow:
```
kops update cluster --target terraform ...
# Use Terraform 0.12. This plan should be a no-op
@ -100,7 +100,7 @@
healthCheck: true
```
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -112,22 +112,22 @@
PodPriority: "true"
```
* If a custom Kops build was used on a cluster, a kops-controller Deployment may have been created that should get deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.16.0-beta.1 or later.
* If a custom kOps build was used on a cluster, a kops-controller Deployment may have been created that should get deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.16.0-beta.1 or later.
# Known Issues
* AWS clusters with an ACM certificate attached to the API ELB (the cluster's `spec.api.loadBalancer.sslCertificate` is set) will need to reenable basic auth to use the kubeconfig context created by `kops export kubecfg`. Set `spec.kubeAPIServer.disableBasicAuth: false` before running `kops export kubecfg`. See [#9756](https://github.com/kubernetes/kops/issues/9756) for more information.
* AWS clusters with an ACM certificate attached to the API ELB (the cluster's `spec.api.loadBalancer.sslCertificate` is set) will need to re-enable basic auth to use the kubeconfig context created by `kops export kubecfg`. Set `spec.kubeAPIServer.disableBasicAuth: false` before running `kops export kubecfg`. See [#9756](https://github.com/kubernetes/kops/issues/9756) for more information.
# Deprecations
* Support for Kubernetes versions 1.9 and 1.10 are deprecated and will be removed in kops 1.19.
* Support for Kubernetes versions 1.9 and 1.10 are deprecated and will be removed in kOps 1.19.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of Kops.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of kOps.
* Support for the Romana networking provider is deprecated and will be removed in kops 1.19.
* Support for the Romana networking provider is deprecated and will be removed in kOps 1.19.
* Support for legacy IAM permissions is deprecated and will be removed in kops 1.19.
* Support for legacy IAM permissions is deprecated and will be removed in kOps 1.19.
# Full change list since 1.17.0 release
@ -222,7 +222,7 @@
* GCE: Fix Permission for the Storage Bucket [@mccare](https://github.com/mccare) [#8157](https://github.com/kubernetes/kops/pull/8157)
* pkg/instancegroups - fix static check [@johngmyers](https://github.com/johngmyers) [#8186](https://github.com/kubernetes/kops/pull/8186)
* pkg/resources/aws:simplify code and remove code [@Aresforchina](https://github.com/Aresforchina) [#8188](https://github.com/kubernetes/kops/pull/8188)
* Update links printed by Kops to use new docs site [@rifelpet](https://github.com/rifelpet) [#8190](https://github.com/kubernetes/kops/pull/8190)
* Update links printed by kOps to use new docs site [@rifelpet](https://github.com/rifelpet) [#8190](https://github.com/kubernetes/kops/pull/8190)
* dnsprovider/pkg/dnsprovider - fix static check [@hakman](https://github.com/hakman) [#8149](https://github.com/kubernetes/kops/pull/8149)
* fix staticcheck failures in pkg/resources [@Aresforchina](https://github.com/Aresforchina) [#8191](https://github.com/kubernetes/kops/pull/8191)
* Add corresponding unit test to the function in subnet.go. [@fenggw-fnst](https://github.com/fenggw-fnst) [#8195](https://github.com/kubernetes/kops/pull/8195)
@ -346,7 +346,7 @@
* Remove addons only applicable to unsupported versions of Kubernetes [@johngmyers](https://github.com/johngmyers) [#8318](https://github.com/kubernetes/kops/pull/8318)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Edit author name [@LinshanYu](https://github.com/LinshanYu) [#8374](https://github.com/kubernetes/kops/pull/8374)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* Support additional kube-scheduler config parameters via config file [@rralcala](https://github.com/rralcala) [#8407](https://github.com/kubernetes/kops/pull/8407)
* Option to increase concurrency of rolling update within instancegroup [@johngmyers](https://github.com/johngmyers) [#8271](https://github.com/kubernetes/kops/pull/8271)
* Fix template clusterName behavior [@lcrisci](https://github.com/lcrisci) [#7319](https://github.com/kubernetes/kops/pull/7319)
@ -577,7 +577,7 @@
* Update Calico and Canal to v3.13.2 [@hakman](https://github.com/hakman) [#8865](https://github.com/kubernetes/kops/pull/8865)
* GCE: Delete cluster will also delete the DNS entries created by kubernetes [@mccare](https://github.com/mccare),[@justinsb](https://github.com/justinsb) [#8250](https://github.com/kubernetes/kops/pull/8250)
* Add Terraform 0.12 support [@rifelpet](https://github.com/rifelpet) [#8825](https://github.com/kubernetes/kops/pull/8825)
* Don't compress bindata & allow Kops to be imported as a package. [@rdrgmnzs](https://github.com/rdrgmnzs),[@justinsb](https://github.com/justinsb) [#8584](https://github.com/kubernetes/kops/pull/8584)
* Don't compress bindata & allow kOps to be imported as a package. [@rdrgmnzs](https://github.com/rdrgmnzs),[@justinsb](https://github.com/justinsb) [#8584](https://github.com/kubernetes/kops/pull/8584)
* Validate cluster N times in rolling-update [@zetaab](https://github.com/zetaab) [#8868](https://github.com/kubernetes/kops/pull/8868)
* Update go.mod for k8s 1.17 [@justinsb](https://github.com/justinsb) [#8873](https://github.com/kubernetes/kops/pull/8873)
* pkg: add some unit tests [@q384566678](https://github.com/q384566678) [#8872](https://github.com/kubernetes/kops/pull/8872)

View File

@ -1,12 +1,12 @@
## Release notes for kops 1.19 series
## Release notes for kOps 1.19 series
(The kops 1.19 release has not been released yet; this is a document to gather the notes prior to the release).
(kOps 1.19 has not been released yet; this is a document to gather the notes prior to the release).
# Significant changes
## Changes to kubernetes config export
Kops will no longer automatically export the kubernetes config on `kops update cluster`. In order to export the config on cluster update, you need to either add the `--user <user>` to reference an existing user, or `--admin` to export the cluster admin user. If neither flag is passed, the kubernetes config will not be modified. This makes it easier to reuse user definitions across clusters should you, for example, use OIDC for authentication.
kOps will no longer automatically export the kubernetes config on `kops update cluster`. In order to export the config on cluster update, you need to either pass `--user <user>` to reference an existing user, or `--admin` to export the cluster admin user. If neither flag is passed, the kubernetes config will not be modified. This makes it easier to reuse user definitions across clusters should you, for example, use OIDC for authentication.
Similarly, `kops export kubecfg` will also require passing either the `--admin` or `--user` flag if the context does not already exist.
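For illustration, minimal sketches using the flags described above (the cluster and user names are placeholders):
```
# Export the cluster admin user's credentials while updating
kops update cluster --name my-cluster.example.com --yes --admin

# Or export a kubeconfig context for an existing user definition
kops export kubecfg --name my-cluster.example.com --user my-oidc-user
```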
@ -17,9 +17,9 @@ credentials may be specified as a value of the `--admin` flag. To get the previo
## OpenStack Cinder plugin
Kops will install the Cinder plugin for kops running kubernetes 1.16 or newer. If you already have this plugin installed you should remove it before upgrading.
kOps will install the Cinder plugin for clusters running Kubernetes 1.16 or newer. If you already have this plugin installed you should remove it before upgrading.
If you already have a default `StorageClass`, you should set `cloudConfig.Openstack.BlockStorage.CreateStorageClass: false` to prevent kops from installing one.
If you already have a default `StorageClass`, you should set `cloudConfig.Openstack.BlockStorage.CreateStorageClass: false` to prevent kOps from installing one.
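A hedged sketch of the corresponding cluster spec fragment (the field casing assumes the usual lowerCamelCase of the cluster spec YAML):
```yaml
spec:
  cloudConfig:
    openstack:
      blockStorage:
        # Skip creating a default StorageClass if you already manage one yourself
        createStorageClass: false
```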
## Other significant changes by kind
@ -27,7 +27,7 @@ If you already have a default `StorageClass`, you should set `cloudConfig.Openst
* New clusters will now have one nodes group per zone. The number of nodes now defaults to the number of zones.
* On AWS kops now defaults to using launch templates instead of launch configurations.
* On AWS kOps now defaults to using launch templates instead of launch configurations.
* There is now Alpha support for Hashicorp Vault as a store for secrets and keys. See the [Vault state store docs](/state/#vault-vault).
@ -38,7 +38,7 @@ The expiration times vary randomly so that nodes are likely to have their certs
### CLI
* The `kops update cluster` command will now refuse to run on a cluster that
has been updated by a newer version of kops unless it is given the `--allow-kops-downgrade` flag.
has been updated by a newer version of kOps unless it is given the `--allow-kops-downgrade` flag.
* New command for deleting a single instance: [kops delete instance](/docs/cli/kops_delete_instance/)
@ -68,7 +68,7 @@ has been updated by a newer version of kops unless it is given the `--allow-kops
* Support for the Romana networking provider has been removed.
* Support for legacy IAM permissions has been removed. This removal may be temporarily deferred to kops 1.20 by setting the `LegacyIAM` feature flag.
* Support for legacy IAM permissions has been removed. This removal may be temporarily deferred to kOps 1.20 by setting the `LegacyIAM` feature flag.
# Required Actions
@ -90,11 +90,11 @@ To prevent downtime, follow these steps with the new version of Kops:
# Deprecations
* Support for Kubernetes versions 1.11 and 1.12 are deprecated and will be removed in kops 1.20.
* Support for Kubernetes versions 1.11 and 1.12 are deprecated and will be removed in kOps 1.20.
* Support for Terraform version 0.11 has been deprecated and will be removed in kops 1.20.
* Support for Terraform version 0.11 has been deprecated and will be removed in kOps 1.20.
* Support for feature flag `Terraform-0.12` has been deprecated and will be removed in kops 1.20. All generated Terraform HCL2/JSON files will support versions `0.12.26+` and `0.13.0+`.
* Support for feature flag `Terraform-0.12` has been deprecated and will be removed in kOps 1.20. All generated Terraform HCL2/JSON files will support versions `0.12.26+` and `0.13.0+`.
* The [manifest based metrics server addon](https://github.com/kubernetes/kops/tree/master/addons/metrics-server) has been deprecated in favour of a configurable addon.
@ -225,7 +225,7 @@ To prevent downtime, follow these steps with the new version of Kops:
* Spotinst: Allow a user specifiable node draining timeout [@liranp](https://github.com/liranp) [#9221](https://github.com/kubernetes/kops/pull/9221)
* Validate IG RootVolumeType [@olemarkus](https://github.com/olemarkus) [#9265](https://github.com/kubernetes/kops/pull/9265)
* gce: log bucket-policy-only message at a level that always appears [@justinsb](https://github.com/justinsb) [#9276](https://github.com/kubernetes/kops/pull/9276)
* Prepare Kops for multi-architecture support [@hakman](https://github.com/hakman) [#9216](https://github.com/kubernetes/kops/pull/9216)
* Prepare kOps for multi-architecture support [@hakman](https://github.com/hakman) [#9216](https://github.com/kubernetes/kops/pull/9216)
* Ensure we have IAM bucket permissions to other S3 buckets [@justinsb](https://github.com/justinsb) [#9274](https://github.com/kubernetes/kops/pull/9274)
* Refactor cert issuance code [@johngmyers](https://github.com/johngmyers) [#9130](https://github.com/kubernetes/kops/pull/9130)
* Allow failure of the ARM64 job in TravisCI [@hakman](https://github.com/hakman) [#9279](https://github.com/kubernetes/kops/pull/9279)
@ -257,7 +257,7 @@ To prevent downtime, follow these steps with the new version of Kops:
* Docs helptext [@olemarkus](https://github.com/olemarkus) [#9333](https://github.com/kubernetes/kops/pull/9333)
* Use launch templates by default [@johngmyers](https://github.com/johngmyers) [#9289](https://github.com/kubernetes/kops/pull/9289)
* Refactor kubemanifest to be clearer [@justinsb](https://github.com/justinsb) [#9342](https://github.com/kubernetes/kops/pull/9342)
* Refactor BootstrapChannelBuilder to use a KopsModelContext [@justinsb](https://github.com/justinsb) [#9338](https://github.com/kubernetes/kops/pull/9338)
* Refactor BootstrapChannelBuilder to use a KopsModelContext [@justinsb](https://github.com/justinsb) [#9338](https://github.com/kubernetes/kops/pull/9338)
* Issue kubecfg and kops certs in nodeup [@johngmyers](https://github.com/johngmyers) [#9347](https://github.com/kubernetes/kops/pull/9347)
* Update release notes for Ubuntu 20.04 and CVEs [@hakman](https://github.com/hakman) [#9332](https://github.com/kubernetes/kops/pull/9332)
* Add nodelocal dns cache to release notes and add kops version to docs [@olemarkus](https://github.com/olemarkus) [#9351](https://github.com/kubernetes/kops/pull/9351)
@ -310,7 +310,7 @@ To prevent downtime, follow these steps with the new version of Kops:
* Typo and wording fix to getting_started/commands doc [@MoShitrit](https://github.com/MoShitrit) [#9417](https://github.com/kubernetes/kops/pull/9417)
* Alicloud: Refactor LoadBalancerWhiteList to LoadBalancerACL [@bittopaz](https://github.com/bittopaz) [#8304](https://github.com/kubernetes/kops/pull/8304)
* Remove PHONY declaration on non-phony targets [@johngmyers](https://github.com/johngmyers) [#9419](https://github.com/kubernetes/kops/pull/9419)
* Build and publish only Linux AMD64 Kops artifacts for CI [@hakman](https://github.com/hakman) [#9401](https://github.com/kubernetes/kops/pull/9401)
* Build and publish only Linux AMD64 kOps artifacts for CI [@hakman](https://github.com/hakman) [#9401](https://github.com/kubernetes/kops/pull/9401)
* Remove more sha1-generation code [@johngmyers](https://github.com/johngmyers) [#9423](https://github.com/kubernetes/kops/pull/9423)
* Fix: dns-controller: 3999 port address already in use [@vgunapati](https://github.com/vgunapati) [#9404](https://github.com/kubernetes/kops/pull/9404)
* Fix dns selectors for older k8s [@olemarkus](https://github.com/olemarkus) [#9431](https://github.com/kubernetes/kops/pull/9431)
@ -548,7 +548,7 @@ To prevent downtime, follow these steps with the new version of Kops:
* Release notes for 1.19.0-alpha.3 [@hakman](https://github.com/hakman) [#9805](https://github.com/kubernetes/kops/pull/9805)
* Stop trying to pull the Protokube image [@hakman](https://github.com/hakman) [#9809](https://github.com/kubernetes/kops/pull/9809)
* Add all images to GH release [@hakman](https://github.com/hakman) [#9811](https://github.com/kubernetes/kops/pull/9811)
* Refactor: KopsModelContext embeds IAMModelContext [@justinsb](https://github.com/justinsb) [#9814](https://github.com/kubernetes/kops/pull/9814)
* Refactor: KopsModelContext embeds IAMModelContext [@justinsb](https://github.com/justinsb) [#9814](https://github.com/kubernetes/kops/pull/9814)
* Adding docs on AWS Permissions Boundaries support [@victorfrancax1](https://github.com/victorfrancax1) [#9807](https://github.com/kubernetes/kops/pull/9807)
* Fix GCE cluster creation with private topology [@rifelpet](https://github.com/rifelpet) [#9815](https://github.com/kubernetes/kops/pull/9815)
* Support writing a full certificate chain [@justinsb](https://github.com/justinsb) [#9812](https://github.com/kubernetes/kops/pull/9812)

View File

@ -125,7 +125,7 @@
* Mark kops 1.7.0-beta.1 [@justinsb](https://github.com/justinsb) [#3005](https://github.com/kubernetes/kops/pull/3005)
* Add missing step to pull template file; correct kops option. [@j14s](https://github.com/j14s) [#3006](https://github.com/kubernetes/kops/pull/3006)
* Test kops submit-queue [@cjwagner](https://github.com/cjwagner) [#3012](https://github.com/kubernetes/kops/pull/3012)
* Kops apiserver support for openapi and generated API docs [@pwittrock](https://github.com/pwittrock) [#3001](https://github.com/kubernetes/kops/pull/3001)
* kOps apiserver support for openapi and generated API docs [@pwittrock](https://github.com/pwittrock) [#3001](https://github.com/kubernetes/kops/pull/3001)
* Fix for the instructions about using KOPS_FEATURE_FLAGS for drain and… [@FrederikNS](https://github.com/FrederikNS) [#2934](https://github.com/kubernetes/kops/pull/2934)
* populate cloud labels with cluster autoscaler tags [@sethpollack](https://github.com/sethpollack) [#3017](https://github.com/kubernetes/kops/pull/3017)
* Support for lifecycles [@justinsb](https://github.com/justinsb) [#2763](https://github.com/kubernetes/kops/pull/2763)

View File

@ -154,7 +154,7 @@ or specify a different network (current using `--vpc` flag)
* Fixing clusterautoscaler rbac [@BradErz](https://github.com/BradErz) [#3145](https://github.com/kubernetes/kops/pull/3145)
* Fix for Canal Taints and Tolerations [@prachetasp](https://github.com/prachetasp) [#3142](https://github.com/kubernetes/kops/pull/3142)
* Etcd TLS Options [@gambol99](https://github.com/gambol99) [#3114](https://github.com/kubernetes/kops/pull/3114)
* Kops Replace Command - create unprovisioned [@gambol99](https://github.com/gambol99) [#3089](https://github.com/kubernetes/kops/pull/3089)
* kOps Replace Command - create unprovisioned [@gambol99](https://github.com/gambol99) [#3089](https://github.com/kubernetes/kops/pull/3089)
* Add support for cluster using http forward proxy #2481 [@DerekV](https://github.com/DerekV) [#2777](https://github.com/kubernetes/kops/pull/2777)
* Fix Typo to improve GoReportCard [@asifdxtreme](https://github.com/asifdxtreme) [#3156](https://github.com/kubernetes/kops/pull/3156)
* Update alpha channel with update image & versions [@justinsb](https://github.com/justinsb) [#3103](https://github.com/kubernetes/kops/pull/3103)
@ -220,11 +220,11 @@ or specify a different network (current using `--vpc` flag)
* Explicit CreateCluster & UpdateCluster functions [@justinsb](https://github.com/justinsb) [#3240](https://github.com/kubernetes/kops/pull/3240)
* remove --cluster-cidr from kube-router's manifest. [@murali-reddy](https://github.com/murali-reddy) [#3263](https://github.com/kubernetes/kops/pull/3263)
* Replace deprecated aws session.New() with session.NewSession() [@alrs](https://github.com/alrs) [#3255](https://github.com/kubernetes/kops/pull/3255)
* Kops command fixes [@alrs](https://github.com/alrs) [#3277](https://github.com/kubernetes/kops/pull/3277)
* kOps command fixes [@alrs](https://github.com/alrs) [#3277](https://github.com/kubernetes/kops/pull/3277)
* Update go-ini dep to v1.28.2 [@justinsb](https://github.com/justinsb) [#3283](https://github.com/kubernetes/kops/pull/3283)
* Add go1.9 target to travis [@justinsb](https://github.com/justinsb) [#3279](https://github.com/kubernetes/kops/pull/3279)
* Refactor apiserver templates [@georgebuckerfield](https://github.com/georgebuckerfield) [#3284](https://github.com/kubernetes/kops/pull/3284)
* Kops Secrets on Nodes [@gambol99](https://github.com/gambol99) [#3270](https://github.com/kubernetes/kops/pull/3270)
* kOps Secrets on Nodes [@gambol99](https://github.com/gambol99) [#3270](https://github.com/kubernetes/kops/pull/3270)
* Add Initializers admission controller [@justinsb](https://github.com/justinsb) [#3289](https://github.com/kubernetes/kops/pull/3289)
* Limit the IAM EC2 policy for the master nodes [@KashifSaadat](https://github.com/KashifSaadat) [#3186](https://github.com/kubernetes/kops/pull/3186)
* Allow user defined endpoint to host action for Canal [@KashifSaadat](https://github.com/KashifSaadat) [#3272](https://github.com/kubernetes/kops/pull/3272)
@ -269,7 +269,7 @@ or specify a different network (current using `--vpc` flag)
* Correct typo in Hooks Spec examples [@KashifSaadat](https://github.com/KashifSaadat) [#3381](https://github.com/kubernetes/kops/pull/3381)
* Honor ServiceNodePortRange when opening NodePort access [@justinsb](https://github.com/justinsb) [#3379](https://github.com/kubernetes/kops/pull/3379)
* More Makefile improvements [@alrs](https://github.com/alrs) [#3380](https://github.com/kubernetes/kops/pull/3380)
* Revision to IAM Policies created by Kops [@chrislovecnm](https://github.com/chrislovecnm) [#3343](https://github.com/kubernetes/kops/pull/3343)
* Revision to IAM Policies created by kOps [@chrislovecnm](https://github.com/chrislovecnm) [#3343](https://github.com/kubernetes/kops/pull/3343)
* Add file assets to node user data scripts, fingerprint fileAssets and hooks content. [@KashifSaadat](https://github.com/KashifSaadat) [#3323](https://github.com/kubernetes/kops/pull/3323)
* Makefile remove redundant logic [@alrs](https://github.com/alrs) [#3390](https://github.com/kubernetes/kops/pull/3390)
* Makefile: build kops in dev-mode by default [@justinsb](https://github.com/justinsb) [#3402](https://github.com/kubernetes/kops/pull/3402)
@ -422,7 +422,7 @@ or specify a different network (current using `--vpc` flag)
* update kubernetes-dashboard image version to v1.7.1 [@tallaxes](https://github.com/tallaxes) [#3652](https://github.com/kubernetes/kops/pull/3652)
* Bump channels version of dashboard to 1.7.1 [@so0k](https://github.com/so0k) [#3681](https://github.com/kubernetes/kops/pull/3681)
* [AWS] Properly tag public and private subnets for ELB creation [@geojaz](https://github.com/geojaz) [#3682](https://github.com/kubernetes/kops/pull/3682)
* Kops Toolbox Template Missing Variables [@gambol99](https://github.com/gambol99) [#3680](https://github.com/kubernetes/kops/pull/3680)
* kOps Toolbox Template Missing Variables [@gambol99](https://github.com/gambol99) [#3680](https://github.com/kubernetes/kops/pull/3680)
* Delete firewall rules on GCE [@justinsb](https://github.com/justinsb) [#3684](https://github.com/kubernetes/kops/pull/3684)
* Fix typo in SessionAffinity terraform field [@justinsb](https://github.com/justinsb) [#3685](https://github.com/kubernetes/kops/pull/3685)
* Grant kubelets system:node role in 1.8 [@justinsb](https://github.com/justinsb) [#3683](https://github.com/kubernetes/kops/pull/3683)
@ -456,7 +456,7 @@ or specify a different network (current using `--vpc` flag)
* Refactor gce resources into pkg/resources/gce [@justinsb](https://github.com/justinsb) [#3721](https://github.com/kubernetes/kops/pull/3721)
* Add initial docs for how to rotate a CA keypair [@justinsb](https://github.com/justinsb) [#3727](https://github.com/kubernetes/kops/pull/3727)
* GCS: Use ACLs for GCE permissions [@justinsb](https://github.com/justinsb) [#3726](https://github.com/kubernetes/kops/pull/3726)
* Kops Template YAML Formatting [@gambol99](https://github.com/gambol99) [#3706](https://github.com/kubernetes/kops/pull/3706)
* kOps Template YAML Formatting [@gambol99](https://github.com/gambol99) [#3706](https://github.com/kubernetes/kops/pull/3706)
* Tolerate errors from Find for tasks with WarnIfInsufficientAccess [@justinsb](https://github.com/justinsb) [#3728](https://github.com/kubernetes/kops/pull/3728)
* GCE Dump: Include instance IPs [@justinsb](https://github.com/justinsb) [#3722](https://github.com/kubernetes/kops/pull/3722)
* Route53 based example [@tigerlinux](https://github.com/tigerlinux) [#3367](https://github.com/kubernetes/kops/pull/3367)
@ -542,7 +542,7 @@ or specify a different network (current using `--vpc` flag)
* Include encryptionConfig setting within userdata for masters. [@KashifSaadat](https://github.com/KashifSaadat) [#3874](https://github.com/kubernetes/kops/pull/3874)
* Add Example for instance group tagging [@sergeohl](https://github.com/sergeohl) [#3879](https://github.com/kubernetes/kops/pull/3879)
* README and issue template updates [@chrislovecnm](https://github.com/chrislovecnm) [#3818](https://github.com/kubernetes/kops/pull/3818)
* Kops Template Config Value [@gambol99](https://github.com/gambol99) [#3863](https://github.com/kubernetes/kops/pull/3863)
* kOps Template Config Value [@gambol99](https://github.com/gambol99) [#3863](https://github.com/kubernetes/kops/pull/3863)
* Fix spelling [@jonstacks](https://github.com/jonstacks) [#3864](https://github.com/kubernetes/kops/pull/3864)
* Improving UX for placeholder IP Address [@chrislovecnm](https://github.com/chrislovecnm) [#3709](https://github.com/kubernetes/kops/pull/3709)
* Bump all flannel versions to latest release - v0.9.1 [@tomdee](https://github.com/tomdee) [#3880](https://github.com/kubernetes/kops/pull/3880)
@ -581,7 +581,7 @@ or specify a different network (current using `--vpc` flag)
* Fix flannel version [@mikesplain](https://github.com/mikesplain) [#3953](https://github.com/kubernetes/kops/pull/3953)
* Fix flannel error on starting [@mikesplain](https://github.com/mikesplain) [#3956](https://github.com/kubernetes/kops/pull/3956)
* Fix brew docs typo [@mikesplain](https://github.com/mikesplain) [#3949](https://github.com/kubernetes/kops/pull/3949)
* kops not Kops [@chrislovecnm](https://github.com/chrislovecnm) [#3960](https://github.com/kubernetes/kops/pull/3960)
* kops not kOps [@chrislovecnm](https://github.com/chrislovecnm) [#3960](https://github.com/kubernetes/kops/pull/3960)
* openapi doc updates [@chrislovecnm](https://github.com/chrislovecnm) [#3948](https://github.com/kubernetes/kops/pull/3948)
* Add kubernetes-dashboard addon version constraint [@so0k](https://github.com/so0k) [#3959](https://github.com/kubernetes/kops/pull/3959)
* Initial support for nvme [@justinsb](https://github.com/justinsb) [#3969](https://github.com/kubernetes/kops/pull/3969)

View File

@ -134,7 +134,7 @@ None known at this time
* Update binary installation commands for macOS to use curl alone [@hopkinsth](https://github.com/hopkinsth) [#4260](https://github.com/kubernetes/kops/pull/4260)
* Slight changes to commands. [@darron](https://github.com/darron) [#4259](https://github.com/kubernetes/kops/pull/4259)
* Add SubnetType Tag to Subnets [@KashifSaadat](https://github.com/KashifSaadat) [#4198](https://github.com/kubernetes/kops/pull/4198)
* Kops Replace Force [@gambol99](https://github.com/gambol99) [#4275](https://github.com/kubernetes/kops/pull/4275)
* kOps Replace Force [@gambol99](https://github.com/gambol99) [#4275](https://github.com/kubernetes/kops/pull/4275)
* docs: upgrade.md: drop DrainAndValidateRollingUpdate note [@dkeitel](https://github.com/dkeitel) [#4282](https://github.com/kubernetes/kops/pull/4282)
* Bump alpha channel [@justinsb](https://github.com/justinsb) [#4285](https://github.com/kubernetes/kops/pull/4285)
* Validate IG MaxSize is not less than MinSize. [@mikesplain](https://github.com/mikesplain) [#4278](https://github.com/kubernetes/kops/pull/4278)

View File

@ -4,7 +4,7 @@
## Delete all secrets
Delete all secrets & keypairs that kops is holding:
Delete all secrets & keypairs that kOps is holding:
```shell
kops get secrets | grep ^Secret | awk '{print $2}' | xargs -I {} kops delete secret secret {}
@ -20,7 +20,7 @@ kops update cluster
kops update cluster --yes
```
Kops may fail to recreate all the keys on first try. If you get errors about ca key for 'ca' not being found, run `kops update cluster --yes` once more.
kOps may fail to recreate all the keys on the first try. If you get errors about the CA key for 'ca' not being found, run `kops update cluster --yes` once more.
## Force cluster to use new secrets

View File

@ -1,7 +1,7 @@
## Running in a shared VPC
When launching into a shared VPC, kops will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
or NAT Gateway you can tell kops to ignore egress. By default, kops creates a new subnet per zone and a new route table,
When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
or NAT Gateway you can tell kOps to ignore egress. By default, kOps creates a new subnet per zone and a new route table,
but you can instead use a shared subnet (see [below](#shared-subnets)).
1. Use `kops create cluster` with the `--vpc` argument for your existing VPC:
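A hedged sketch of such an invocation (assumes `KOPS_STATE_STORE` is already configured; the cluster name, zone, CIDR and VPC ID are placeholders):
```
# Note: --network-cidr is an assumption here; it should match the existing VPC's CIDR
kops create cluster \
  --name my-cluster.example.com \
  --zones us-east-1a \
  --network-cidr 10.100.0.0/16 \
  --vpc vpc-0123456789abcdef0
```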
@ -45,7 +45,7 @@ When launching into a shared VPC, kops will reuse the VPC and Internet Gateway.
Review the changes to make sure they are OK—the Kubernetes settings might
not be ones you want on a shared VPC (in which case, open an issue!)
**Note also the Kubernetes VPCs (currently) require `EnableDNSHostnames=true`. kops will detect the required change,
**Note also the Kubernetes VPCs (currently) require `EnableDNSHostnames=true`. kOps will detect the required change,
but refuse to make it automatically because it is a shared VPC. Please review the implications and make the change
to the VPC manually.**
@ -56,7 +56,7 @@ When launching into a shared VPC, kops will reuse the VPC and Internet Gateway.
```
This will add an additional tag to your AWS VPC resource. This tag
will be removed automatically if you delete your kops cluster.
will be removed automatically if you delete your kOps cluster.
```
"kubernetes.io/cluster/<cluster-name>" = "shared"
@ -139,7 +139,7 @@ spec:
### Subnet Tags
By default, kops will tag your existing subnets with the standard tags:
By default, kOps will tag your existing subnets with the standard tags:
Public/Utility Subnets:
```
@ -157,7 +157,7 @@ spec:
These tags are important; for example, your services will be unable to create public or private Elastic Load Balancers (ELBs) if the respective `elb` or `internal-elb` tags are missing.
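For reference, a hedged sketch of the ELB role tags referred to above (the full default tag set is elided by the diff):
```
kubernetes.io/role/elb = "1"            # public/utility subnets
kubernetes.io/role/internal-elb = "1"   # private subnets
```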
If you would like to manage these tags externally then specify `--disable-subnet-tags` during your cluster creation. This will prevent kops from tagging existing subnets and allow some custom control, such as separate subnets for internal ELBs.
If you would like to manage these tags externally then specify `--disable-subnet-tags` during your cluster creation. This will prevent kOps from tagging existing subnets and allow some custom control, such as separate subnets for internal ELBs.
### Shared NAT Egress
@ -191,17 +191,17 @@ spec:
Please note:
* You must specify pre-created subnets for either all of the subnets or none of them.
* kops won't alter your existing subnets. They must be correctly set up with route tables, etc. The
* kOps won't alter your existing subnets. They must be correctly set up with route tables, etc. The
Public or Utility subnets should have public IPs and an Internet Gateway configured as their default route
in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway
configured as their default route.
* kops won't create a route-table at all if it's not creating subnets.
* kOps won't create a route-table at all if it's not creating subnets.
* In the example above the first subnet is using a shared NAT Gateway while the
second one is using a shared NAT Instance
### Externally Managed Egress
If you are using an unsupported egress configuration in your VPC, kops can be told to ignore egress by using a configuration such as:
If you are using an unsupported egress configuration in your VPC, kOps can be told to ignore egress by using a configuration such as:
```yaml
spec:
@ -223,7 +223,7 @@ spec:
egress: External
```
This tells kops that egress is managed externally. This is preferable when using virtual private gateways
This tells kOps that egress is managed externally. This is preferable when using virtual private gateways
(currently unsupported) or using other configurations to handle egress routing.
### Proxy VPC Egress

View File

@ -38,7 +38,7 @@ spec:
```
## Workaround for changing secrets with type "Secret"
As it is currently not possible to modify or delete + create secrets of type "Secret" with the CLI you have to modify them directly in the kops s3 bucket.
As it is currently not possible to modify, or to delete and re-create, secrets of type "Secret" with the CLI, you have to modify them directly in the kOps S3 bucket.
They are stored under `/clustername/secrets/` and contain the secret as a base64-encoded string. To change the secret, base64-encode it with:
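For example, a hedged sketch (not necessarily the exact command from the elided docs):
```shell
# Base64-encode the new secret value before writing it back to the state store
echo -n 'my-new-secret-value' | base64
```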

View File

@ -22,7 +22,7 @@ To change the SSH public key on an existing cluster:
## Docker Configuration
If you are using a private registry such as quay.io, you may be familiar with the inconvenience of managing the `imagePullSecrets` for each namespace. It can also be a pain to use [Kops Hooks](cluster_spec.md#hooks) with private images. To configure docker on all nodes with access to one or more private registries:
If you are using a private registry such as quay.io, you may be familiar with the inconvenience of managing the `imagePullSecrets` for each namespace. It can also be a pain to use [kOps Hooks](cluster_spec.md#hooks) with private images. To configure docker on all nodes with access to one or more private registries:
* `kops create secret --name <clustername> dockerconfig -f ~/.docker/config.json`
* `kops rolling-update cluster --name <clustername> --yes` to immediately roll all the machines so they have the new key (optional)

View File

@ -1,18 +1,18 @@
# Security Groups
## Use existing AWS Security Groups
**Note: Use this at your own risk, when existing SGs are used Kops will NOT ensure they are properly configured.**
**Note: Use this at your own risk, when existing SGs are used kOps will NOT ensure they are properly configured.**
Rather than having Kops create and manage IAM Security Groups, it is possible to use an existing one. This is useful in organizations where security policies prevent tools from creating their own Security Groups.
Kops will still output any differences in the managed and your own Security Groups.
This is convenient for determining policy changes that need to be made when upgrading Kops.
Rather than having kOps create and manage AWS Security Groups, it is possible to use existing ones. This is useful in organizations where security policies prevent tools from creating their own Security Groups.
kOps will still output any differences in the managed and your own Security Groups.
This is convenient for determining policy changes that need to be made when upgrading kOps.
**Using Managed Security Groups will not output these differences; it is up to the user to track expected changes to policies.**
NOTE:
- *Currently Kops only supports using existing Security Groups for every instance group and Load Balancer in the Cluster, not a mix of existing and managed Security Groups.
- *Currently kOps only supports using existing Security Groups for every instance group and Load Balancer in the Cluster, not a mix of existing and managed Security Groups.
This is due to the lifecycle overrides being used to prevent creation of the Security Groups related resources.*
- *Kops will add necessary rules to the security group specified in `securityGroupOverride`.*
- *kOps will add necessary rules to the security group specified in `securityGroupOverride`.*
To do this first specify the Security Groups for the ELB (if you are using a LB) and Instance Groups
Example:
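A hedged YAML sketch of the overrides described above, assuming the `securityGroupOverride` field noted earlier (the security group IDs are placeholders):
```yaml
# Cluster spec: override the security group used by the API load balancer
spec:
  api:
    loadBalancer:
      securityGroupOverride: sg-0123456789abcdef0
---
# InstanceGroup spec: override the security group used by the instances
spec:
  securityGroupOverride: sg-0fedcba9876543210
```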

View File

@ -121,7 +121,7 @@ After about 5 minutes all three masters should have found each other. Run the fo
kops validate cluster --wait 10m
```
While rotating the original master is not strictly necessary, kops will say it needs updating because of the configuration change.
While rotating the original master is not strictly necessary, kOps will say it needs updating because of the configuration change.
```
kops rolling-update cluster --yes

View File

@ -1,13 +1,13 @@
# The State Store
kops has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
kOps has the notion of a 'state store': a location where we store the configuration of your cluster. State is stored
here not only when you first create a cluster, but also you can change the state and apply changes to a running cluster.
Eventually, kubernetes services will also pull from the state store, so that we don't need to marshal all our
configuration through a channel like user-data. (This is currently done for secrets and SSL keys, for example,
though we have to copy the data from the state store to a file where components like kubelet can read them).
The state store uses kops's VFS implementation, so can in theory be stored anywhere.
The state store uses kOps's VFS implementation, so it can in theory be stored anywhere.
As of now the following state stores are supported:
* Amazon AWS S3 (`s3://`)
* local filesystem (`file://`) (only for dry-run purposes, see [note](#local-filesystem-state-stores) below)
@ -51,7 +51,7 @@ There are a few ways to configure your state store. In priority order:
The local filesystem state store (`file://`) is **not** functional for running clusters. It is permitted so as to enable review workflows.
For example, in a review workflow, it can be desirable to check a set of untrusted changes before they are applied to real infrastructure. If submitted untrusted changes to configuration files are naively run by `kops replace`, then Kops would overwrite the state store used by production infrastructure with changes which have not yet been approved. This is dangerous.
For example, in a review workflow, it can be desirable to check a set of untrusted changes before they are applied to real infrastructure. If submitted untrusted changes to configuration files are naively run by `kops replace`, then kOps would overwrite the state store used by production infrastructure with changes which have not yet been approved. This is dangerous.
Instead, a review workflow may download the contents of the state bucket to a local directory (using `aws s3 sync` or similar), set the state store to the local directory (e.g. `--state file:///path/to/state/store`), and then run `kops replace` and `kops update` (but for a dry-run only - _not_ `kops update --yes`). This allows the review process to make changes to a local copy of the state bucket, and check those changes, without touching the production state bucket or production infrastructure.
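As a rough sketch of that workflow (the bucket, cluster name, and file paths below are placeholders):

```sh
# copy the production state store to a local directory
aws s3 sync s3://prod-kops-state-store /tmp/kops-state

# apply the proposed (untrusted) changes to the local copy only
kops replace -f proposed-cluster.yaml --state file:///tmp/kops-state

# dry-run against the local copy; note there is no --yes here,
# so neither the production bucket nor real infrastructure is touched
kops update cluster my.cluster.example.com --state file:///tmp/kops-state
```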
@ -107,7 +107,7 @@ Repeat for each cluster needing to be moved.
#### Cross Account State-store
Many enterprises prefer to run multiple AWS accounts. In these setups, having a shared cross-account S3 bucket for state may make inventory and management easier.
Consider the S3 bucket living in Account B and the kops cluster living in Account A. In order to achieve this, you first need to let Account A access the s3 bucket. This is done by adding the following _bucket policy_ on the S3 bucket:
Consider the S3 bucket living in Account B and the kOps cluster living in Account A. In order to achieve this, you first need to let Account A access the s3 bucket. This is done by adding the following _bucket policy_ on the S3 bucket:
```json
{
@ -134,7 +134,7 @@ Consider the S3 bucket living in Account B and the kops cluster living in Accoun
}
```
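The policy body above is elided by this diff. For illustration only (the account ID and bucket name are placeholders, and the real policy may scope the actions more tightly), a cross-account bucket policy granting Account A access looks roughly like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKopsAccountA",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-kops-state-store",
        "arn:aws:s3:::my-kops-state-store/*"
      ]
    }
  ]
}
```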
Kops will then use that bucket as if it was in the remote account, including creating appropriate IAM policies that limits nodes from doing bad things.
kOps will then use that bucket as if it were in the remote account, including creating appropriate IAM policies that limit nodes from doing bad things.
Note that any user/role with full S3 access will be able to delete any cluster from the state store, but will not be able to delete any instances or other resources outside of S3.
## Digital Ocean (do://)
@ -191,9 +191,9 @@ gcsClient, err := storage.New(httpClient)
## Vault (vault://)
{{ kops_feature_table(kops_added_ff='1.19') }}
Kops has support for using Vault as state store. It is currently an experimental feature and you have to enable the `VFSVaultSupport` feature flag to enable it.
kOps has support for using Vault as a state store. It is currently an experimental feature; you have to set the `VFSVaultSupport` feature flag to enable it.
The goal of the vault store is to be a safe storage for the kops keys and secrets store. It will not work to use this as a kops registry/config store. Among other things, etcd-manager is unable to read VFS control files from vault. Vault also cannot be used as backend for etcd backups.
The goal of the Vault store is to provide safe storage for the kOps keys and secrets stores. It will not work as a kOps registry/config store. Among other things, etcd-manager is unable to read VFS control files from Vault. Vault also cannot be used as a backend for etcd backups.
```sh
@ -205,7 +205,7 @@ The vault store uses IAM auth to authenticate against the vault server and expec
Instructions for configuring your vault server to accept IAM authentication are at https://learn.hashicorp.com/vault/identity-access-management/iam-authentication
To configure kops to use the Vault store, add this to the cluster spec:
To configure kOps to use the Vault store, add this to the cluster spec:
```yaml
spec:
@ -218,7 +218,7 @@ Each of the paths specified above can be configurable, but they must be unique a
After launching your cluster you need to add the cluster roles to Vault, binding them to the cluster's IAM identity and granting them access to the appropriate secrets and keys. The nodes will wait until they can authenticate before completing provisioning.
#### Vault policies
Note that contrary to the S3 state store, kops will not provision any policies for you. You have to provide roles for both operators and nodes.
Note that contrary to the S3 state store, kOps will not provision any policies for you. You have to provide roles for both operators and nodes.
Using the example paths above, a policy for the cluster nodes can be:
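The policy itself is elided in this diff. As a hedged sketch (the mount point and path below are hypothetical; substitute the secrets path you configured in the cluster spec), a node policy granting read access might look like:

```
path "kops/clusters/my.cluster.example.com/secrets/*" {
  capabilities = ["read", "list"]
}
```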

View File

@ -1,15 +1,15 @@
## Building Kubernetes clusters with Terraform
Kops can generate Terraform configurations, and then you can apply them using the `terraform plan` and `terraform apply` tools. This is very handy if you are already using Terraform, or if you want to check in the Terraform output into version control.
kOps can generate Terraform configurations, and then you can apply them using the `terraform plan` and `terraform apply` tools. This is very handy if you are already using Terraform, or if you want to check in the Terraform output into version control.
The gist of it is that, instead of letting kops apply the changes, you tell kops what you want, and then kops spits out what it wants done into a `.tf` file. **_You_** are then responsible for turning those plans into reality.
The gist of it is that, instead of letting kOps apply the changes, you tell kOps what you want, and then kOps spits out what it wants done into a `.tf` file. **_You_** are then responsible for turning those plans into reality.
The Terraform output should be reasonably stable (i.e. the text files should only change where something has actually changed - items should appear in the same order etc). This is extremely useful when using version control as you can diff your changes easily.
Note that if you modify the Terraform files that kops spits out, it will override your changes with the configuration state defined by its own configs. In other terms, kops's own state is the ultimate source of truth (as far as kops is concerned), and Terraform is a representation of that state for your convenience.
Note that if you modify the Terraform files that kOps spits out, it will override your changes with the configuration state defined by its own configs. In other words, kOps's own state is the ultimate source of truth (as far as kOps is concerned), and Terraform is a representation of that state for your convenience.
### Terraform Version Compatibility
| Kops Version | Terraform Version | Feature Flag Notes |
| kOps Version | Terraform Version | Feature Flag Notes |
|--------------|-------------------|--------------------|
| >= 1.19 | >= 0.12.26, >= 0.13 | HCL2 supported by default <br>`KOPS_FEATURE_FLAGS=Terraform-0.12` is now deprecated |
| >= 1.18 | >= 0.12 | HCL2 supported by default |
@ -49,9 +49,9 @@ $ kops create cluster \
--target=terraform
```
The above command will create kops state on S3 (defined in `--state`) and output a representation of your configuration into Terraform files. Thereafter you can preview your changes in `kubernetes.tf` and then use Terraform to create all the resources as shown below:
The above command will create kOps state on S3 (defined in `--state`) and output a representation of your configuration into Terraform files. Thereafter you can preview your changes in `kubernetes.tf` and then use Terraform to create all the resources as shown below:
Additional Terraform `.tf` files could be added at this stage to customize your deployment, but remember the kops state should continue to remain the ultimate source of truth for the Kubernetes cluster.
Additional Terraform `.tf` files could be added at this stage to customize your deployment, but remember the kOps state should continue to remain the ultimate source of truth for the Kubernetes cluster.
Initialize Terraform to set up the S3 backend and provider plugins.
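The exact commands are elided in this diff; in a typical flow (the path is illustrative) this amounts to:

```sh
cd /path/to/terraform/output  # the directory containing kubernetes.tf
terraform init                # sets up the S3 backend and downloads provider plugins
terraform plan                # preview the resources kOps asked Terraform to create
terraform apply               # create them
```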
@ -95,7 +95,7 @@ $ kops edit cluster \
# editor opens, make your changes ...
```
Then output your changes/edits to kops cluster state into the Terraform files. Run `kops update` with `--target` and `--out` parameters:
Then output your changes/edits to kOps cluster state into the Terraform files. Run `kops update` with `--target` and `--out` parameters:
```
$ kops update cluster \
@ -118,7 +118,7 @@ Keep in mind that some changes will require a `kops rolling-update` to be applie
#### Teardown the cluster
When you eventually `terraform destroy` the cluster, you should still run `kops delete cluster`, to remove the kops cluster specification and any dynamically created Kubernetes resources (ELBs or volumes). To do this, run:
When you eventually `terraform destroy` the cluster, you should still run `kops delete cluster`, to remove the kOps cluster specification and any dynamically created Kubernetes resources (ELBs or volumes). To do this, run:
```
$ terraform plan -destroy
@ -128,7 +128,7 @@ $ kops delete cluster --yes \
--state=s3://mycompany.kubernetes
```
Ps: You don't have to `kops delete cluster` if you just want to recreate from scratch. Deleting kops cluster state means that you've have to `kops create` again.
P.S.: You don't have to `kops delete cluster` if you just want to recreate from scratch. Deleting the kOps cluster state means that you'll have to `kops create` again.
### Caveats

View File

@ -1,12 +1,12 @@
# Network Topologies in Kops
# Network Topologies in kOps
Kops supports a number of pre defined network topologies. They are separated into commonly used scenarios, or topologies.
kOps supports a number of predefined network topologies. They are separated into commonly used scenarios, or topologies.
Each of the supported topologies is listed below, with an example of how to deploy it.
# AWS
Kops supports the following topologies on AWS
kOps supports the following topologies on AWS
| Topology | Value | Description |
| ----------------- |----------- | ----------------------------------------------------------------------------------------------------------- |
@ -47,7 +47,7 @@ More information about [networking options](networking.md) can be found in our d
## Changing Topology of the API server
To change the ELB that fronts the API server from Internet-facing to Internal-only, there are a few steps to accomplish.
The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kops recreate the ELB for us.
The AWS ELB does not support changing from internet-facing to internal. However, what we can do is have kOps recreate the ELB for us.
### Steps to change the ELB from Internet-Facing to Internal
- Edit the cluster: `kops edit cluster $NAME` (the relevant spec change is sketched after these steps)
@ -62,6 +62,6 @@ The AWS ELB does not support changing from internet facing to Internal. However
- Run the update command to check the config: `kops update cluster $NAME`
- BEFORE running the same command with the `--yes` option, go into the AWS console and DELETE the api ELB
- Now run: `kops update cluster $NAME --yes`
- Finally execute a rolling update so that the instances register with the new internal ELB, execute: `kops rolling-update cluster --cloudonly --force` command. We have to use the `--cloudonly` option because we deleted the api ELB so there is no way to talk to the cluster through the k8s api. The force option is there because kops / terraform doesn't know that we need to update the instances with the ELB so we have to force it.
- Finally, execute a rolling update so that the instances register with the new internal ELB: `kops rolling-update cluster --cloudonly --force`. We have to use the `--cloudonly` option because we deleted the api ELB, so there is no way to talk to the cluster through the k8s api. The force option is there because kOps / terraform doesn't know that we need to update the instances with the ELB, so we have to force it.
Once the rolling update has completed you have an internal only ELB that has the master k8s nodes registered with it.
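For reference, the cluster edit in the first step usually amounts to switching the load balancer type in the cluster spec, roughly like this sketch (assuming the standard `spec.api.loadBalancer` field):

```yaml
spec:
  api:
    loadBalancer:
      type: Internal   # previously: Public
```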

View File

@ -1,12 +1,12 @@
# Upgrading kubernetes
Upgrading kubernetes is very easy with kops, as long as you are using a compatible version of kops.
The kops `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kOps.
The kOps `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
are on a best-effort basis and will have little if any testing. kops `1.18` will not support the kubernetes
`1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kops `1.19` series release.
We aim to release the next major version of kops within a few weeks of the equivalent major release of kubernetes,
so kops `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
`1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kOps `1.19` series release.
We aim to release the next major version of kOps within a few weeks of the equivalent major release of kubernetes,
so kOps `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
(alpha or beta) is available at the kubernetes release, for early adopters.
Upgrading kubernetes is similar to changing the image on an InstanceGroup, except that the kubernetes version is

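The rest of this file is elided by the diff. As a rough sketch of the usual upgrade flow (standard kOps commands; `$NAME` is your cluster name):

```sh
kops edit cluster $NAME                  # set spec.kubernetesVersion to the target release
kops update cluster $NAME --yes          # apply the configuration change
kops rolling-update cluster $NAME --yes  # roll the instances onto the new version
```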
View File

@ -1,6 +1,6 @@
# Managing Instance Groups
kops has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
kOps has the concept of "instance groups", which are groups of similar machines. On AWS, they map to
an AutoScalingGroup.
By default, a cluster has:
@ -103,7 +103,7 @@ Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your edit
the image or the machineType, you could do that here as well. There are actually a lot more fields,
but most of them have their default values, so won't show up unless they are set. The general approach is the same though.
On saving you'll note that nothing happens. Although you've changed the model, you need to tell kops to
On saving you'll note that nothing happens. Although you've changed the model, you need to tell kOps to
apply your changes to the cloud.
We use the same `kops update cluster` command that we used when initially creating the cluster; when
@ -121,7 +121,7 @@ This is saying that we will alter the `TargetSize` property of the `InstanceGrou
That's what we want, so we `kops update cluster --yes`.
kops will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
kOps will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
which will then boot and join the cluster. Within a minute or so you should see the new node join:
```
@ -186,7 +186,7 @@ that the instances had not yet been reconfigured. There's a hint at the bottom:
Changes may require instances to restart: kops rolling-update cluster`
```
These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kops
These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kOps
can perform a rolling update to minimize disruption, but even so you might not want to perform the update right away;
you might want to make more changes or you might want to wait for off-peak hours. You might just want to wait for
the instances to terminate naturally - new instances will come up with the new configuration - though if you're not
@ -236,7 +236,7 @@ spec:
## Adding additional storage to the instance groups
As of Kops 1.12.0 you can add additional storage _(note, presently confined to AWS)_ via the instancegroup specification.
As of kOps 1.12.0 you can add additional storage _(note, presently confined to AWS)_ via the instancegroup specification.
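The example block that follows is truncated by the diff; a minimal sketch of the additional-storage syntax (device name, size, and type are illustrative; the field names follow the InstanceGroup `volumes` spec):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  volumes:
  - device: /dev/xvdd   # attached to each instance as an extra EBS volume
    encrypted: true
    size: 20            # size in GB
    type: gp2
```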
```YAML
---
@ -351,7 +351,7 @@ So the procedure is:
AWS permits the creation of mixed instance EC2 Autoscaling Groups using a [mixed instance policy](https://aws.amazon.com/blogs/aws/new-ec2-auto-scaling-groups-with-multiple-instance-types-purchase-options/), allowing users to define a target capacity and a mix of on-demand and spot instances while offloading the allocation strategy to AWS.
Support for mixed instance groups was added in Kops 1.12.0
Support for mixed instance groups was added in kOps 1.12.0.
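The example that follows is truncated by the diff; a hedged sketch of a mixed instance group (instance types and values are illustrative; the field names follow the InstanceGroup `mixedInstancesPolicy` spec):

```yaml
spec:
  machineType: m5.large
  maxSize: 10
  minSize: 2
  mixedInstancesPolicy:
    instances:        # instance types the Auto Scaling Group may choose from
    - m5.large
    - m5.xlarge
    - m4.large
    onDemandAboveBase: 50                       # percent of capacity above the base that must be on-demand
    spotAllocationStrategy: capacity-optimized
```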
```YAML
@ -519,7 +519,7 @@ spec:
If `openstack.kops.io/osVolumeSize` is not set, it will default to the minimum disk size specified by the image.
# Working with InstanceGroups
The kops InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
The kOps InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
can change the instance type you're using, the number of nodes you have, the OS image you're running - essentially
all the per-node configuration is in the InstanceGroup.
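In practice, working with InstanceGroups follows the same edit/update loop shown earlier; as a short recap (the cluster name is a placeholder):

```sh
kops get instancegroups --name my.cluster.example.com        # list the groups in the cluster
kops edit instancegroup nodes --name my.cluster.example.com  # change machineType, size, image, ...
kops update cluster --name my.cluster.example.com --yes      # apply the change to the cloud
```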

View File

@ -1,6 +1,6 @@
# Upgrading from kube-up to kops
# Upgrading from kube-up to kOps
Kops let you upgrade an existing kubernetes cluster installed using kube-up, to a cluster managed by
kOps lets you upgrade an existing kubernetes cluster installed using kube-up to a cluster managed by
kops.
** This is a slightly risky procedure, so we recommend backing up important data before proceeding.
@ -8,7 +8,7 @@ Take a snapshot of your EBS volumes; export all your data from kubectl etc. **
Limitations:
* kops splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but
* kOps splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but
you will lose your events history.
## Overview

View File

@ -1,12 +1,12 @@
## Getting Involved and Contributing
Are you interested in contributing to kops? We, the maintainers and community,
Are you interested in contributing to kOps? We, the maintainers and community,
would love your suggestions, contributions, and help! We have a quick-start
guide on [adding a feature](../development/adding_a_feature.md). Also, the
maintainers can be contacted at any time to learn more about how to get
involved.
In the interest of getting more newer folks involved with kops, we are starting to
In the interest of getting more new folks involved with kOps, we are starting to
tag issues with `good-starter-issue`. These are typically issues that have
smaller scope but are good ways to start to get acquainted with the codebase.
@ -39,7 +39,7 @@ https://go.k8s.io/bot-commands).
## Office Hours
Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.
For more information, checkout the [office hours page.](office_hours.md)
@ -56,18 +56,18 @@ If you think you have found a bug please follow the instructions below.
- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
- Set the `-v 10` command line option and save the log output. Please paste this into your issue.
- Note the version of kops you are running (from `kops version`), and the command line options you are using.
- Note the version of kOps you are running (from `kops version`), and the command line options you are using.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Feel free to reach out to the kops community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
- Feel free to reach out to the kOps community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
### Features
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help kops become even more awesome follow the steps below.
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help kOps become even more awesome, follow the steps below.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Clearly define the use case, using concrete examples. EG: I type `this` and kops does `that`.
- Clearly define the use case, using concrete examples. E.g.: I type `this` and kOps does `that`.
- Some of our larger features will require some design. If you would like to include a technical design for your feature, please include it in the issue.
- After the new feature is well understood and the design agreed upon, we can start coding the feature. We would love for you to code it, so please open up a **WIP** *(work in progress)* pull request, and happy coding.

Some files were not shown because too many files have changed in this diff.