Rename Kops to kOps in the docs

Ciprian Hacman 2020-10-21 08:10:17 +03:00
parent e4c9eca241
commit 6a4d86baf9
56 changed files with 166 additions and 166 deletions


@@ -9,13 +9,13 @@ If these certificates are not rotated prior to their expiration, Kubernetes apis
## How do I know if I'm affected?
Clusters are affected by this issue if they're using a version of etcd-manager < `3.0.20200428`.
-The etcd-manager version is set automatically based on the Kops version.
-These Kops versions are affected:
+The etcd-manager version is set automatically based on the kOps version.
+These kOps versions are affected:
-* Kops 1.10.0-alpha.1 through 1.15.2
-* Kops 1.16.0-alpha.1 through 1.16.1
-* Kops 1.17.0-alpha.1 through 1.17.0-beta.1
-* Kops 1.18.0-alpha.1 through 1.18.0-alpha.2
+* kOps 1.10.0-alpha.1 through 1.15.2
+* kOps 1.16.0-alpha.1 through 1.16.1
+* kOps 1.17.0-alpha.1 through 1.17.0-beta.1
+* kOps 1.18.0-alpha.1 through 1.18.0-alpha.2
The issue can be confirmed by checking for the existence of etcd-manager pods and observing their image tags:
@@ -34,9 +34,9 @@ Upgrade etcd-manager. etcd-manager version >= `3.0.20200428` manages certificate
We have two suggested workflows to upgrade etcd-manager in your cluster. While both workflows require a rolling-update of the master nodes, neither requires control-plane downtime (if the clusters have highly available masters).
-1. Upgrade to Kops 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3.
+1. Upgrade to kOps 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3.
This is the recommended approach.
-Follow the normal steps when upgrading Kops and confirm the etcd-manager image will be updated based on the output of `kops update cluster`.
+Follow the normal steps when upgrading kOps and confirm the etcd-manager image will be updated based on the output of `kops update cluster`.
```
kops update cluster --yes
kops rolling-update cluster --instance-group-roles=Master --cloudonly
```


@@ -1,6 +1,6 @@
# Authentication
-Kops has support for configuring authentication systems. This should not be used with kubernetes versions
+kOps has support for configuring authentication systems. This should not be used with kubernetes versions
before 1.8.5 because of a serious bug with apimachinery [#55022](https://github.com/kubernetes/kubernetes/issues/55022).
## kopeio authentication


@@ -2,7 +2,7 @@
## Getting Started
-Kops used to only support Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2 `gossip` has been added which make it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, it's officially supported to provision a fully-functional kubernetes cluster in AWS China Region [which doesn't have Route53 so far][1] since [1.7][2]. Should support both `cn-north-1` and `cn-northwest-1`, but only `cn-north-1` is tested.
+kOps used to only support Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2 `gossip` has been added, which makes it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, it has been officially supported since [1.7][2] to provision a fully-functional kubernetes cluster in the AWS China Region, [which doesn't have Route53 so far][1]. Both `cn-north-1` and `cn-northwest-1` should work, but only `cn-north-1` is tested.
Most of the following procedures to provision a cluster are the same as in [the guide to using kops in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.


@@ -1,4 +1,4 @@
-# Bastion in Kops
+# Bastion in kOps
A bastion provides an external-facing point of entry into a network containing private network instances. This host can provide a single point of fortification or audit and can be started and stopped to enable or disable inbound SSH communication from the Internet; some call the bastion a "jump server".
@@ -126,7 +126,7 @@ ssh admin@<master_ip>
### Changing your ELB idle timeout
-The bastion is accessed via an AWS ELB. The ELB is required to gain secure access into the private network and connect the user to the ASG that the bastion lives in. Kops will by default set the bastion ELB idle timeout to 5 minutes. This is important for SSH connections to the bastion that you plan to keep open.
+The bastion is accessed via an AWS ELB. The ELB is required to gain secure access into the private network and connect the user to the ASG that the bastion lives in. kOps will by default set the bastion ELB idle timeout to 5 minutes. This is important for SSH connections to the bastion that you plan to keep open.
You can increase the ELB idle timeout by editing the main cluster config `kops edit cluster $NAME` and modifying the following block
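A sketch of that block, assuming the bastion topology spec's `idleTimeoutSeconds` field (the hostname and timeout value here are arbitrary examples):

```yaml
spec:
  topology:
    bastion:
      bastionPublicName: bastion.mycluster.example.com  # example hostname
      idleTimeoutSeconds: 1200  # raise the ELB idle timeout from the 5-minute default
```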


@@ -50,7 +50,7 @@ spec:
You can use a valid SSL Certificate for your API Server Load Balancer. Currently, only AWS is supported.
-Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of Kops 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
+Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of kOps 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
```yaml
spec:
@@ -61,7 +61,7 @@ spec:
```
*Openstack only*
-As of Kops 1.12.0 it is possible to use the load balancer internally by setting the `useForInternalApi: true`.
+As of kOps 1.12.0 it is possible to use the load balancer internally by setting the `useForInternalApi: true`.
This will point both `masterPublicName` and `masterInternalName` to the load balancer. You can therefore set both of these to the same value in this configuration.
```yaml
@@ -84,7 +84,7 @@ spec:
### The default etcd configuration
-Kops will default to v3 using TLS by default. etcd provisioning and upgrades are handled by etcd-manager. By default, the spec looks like this:
+kOps will default to v3 using TLS by default. etcd provisioning and upgrades are handled by etcd-manager. By default, the spec looks like this:
```yaml
etcdClusters:
@@ -110,7 +110,7 @@ The etcd version used by kops follows the recommended etcd version for the given
By default, the volumes created for the etcd clusters are `gp2` and 20GB each. The volume size, type, and IOPS (for `io1`) can be configured via their parameters. Conversion between `gp2` and `io1` is not supported, nor are size changes.
-As of Kops 1.12.0 it is also possible to modify the requests for your etcd cluster members using the `cpuRequest` and `memoryRequest` parameters.
+As of kOps 1.12.0 it is also possible to modify the requests for your etcd cluster members using the `cpuRequest` and `memoryRequest` parameters.
```yaml
etcdClusters:
@@ -219,7 +219,7 @@ spec:
    zone: us-east-1a
```
-In the case that you don't use NAT gateways or internet gateways, Kops 1.12.0 introduced the "External" flag for egress to force kops to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kops, typically with an existing cluster.
+In the case that you don't use NAT gateways or internet gateways, kOps 1.12.0 introduced the "External" flag for egress to force kops to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kops, typically with an existing cluster.
```yaml
spec:
@@ -460,7 +460,7 @@ spec:
```
### Setting kubelet CPU management policies
-Kops 1.12.0 added support for enabling cpu management policies in kubernetes as per [cpu management doc](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies)
+kOps 1.12.0 added support for enabling cpu management policies in kubernetes as per [cpu management doc](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies)
we have to set the flag `--cpu-manager-policy` to the appropriate value on all the kubelets. This must be specified in the `kubelet` spec in our cluster.yml.
```yaml
@@ -489,7 +489,7 @@ spec:
### Configure a Flex Volume plugin directory
An optional flag can be provided within the KubeletSpec to set a volume plugin directory (must be accessible for read/write operations), which is additionally provided to the Controller Manager and mounted accordingly.
-Kops will set this for you based off the Operating System in use:
+kOps will set this for you based off the Operating System in use:
- ContainerOS: `/home/kubernetes/flexvolume/`
- Flatcar: `/var/lib/kubelet/volumeplugins/`
- Default (in-line with upstream k8s): `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`
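If a custom location is needed, it can be set explicitly. A minimal sketch, assuming the kubelet spec's `volumePluginDirectory` field (the path is only an example):

```yaml
spec:
  kubelet:
    # overrides the OS-based default chosen above
    volumePluginDirectory: /var/lib/kubelet/volumeplugins/
```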
@@ -630,7 +630,7 @@ spec:
    enableProfiling: false
```
-For more details on `horizontalPodAutoscaler` flags see the [official HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and the [Kops guides on how to set it up](horizontal_pod_autoscaling.md).
+For more details on `horizontalPodAutoscaler` flags see the [official HPA docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) and the [kOps guides on how to set it up](horizontal_pod_autoscaling.md).
## Cluster autoscaler
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.15') }}


@@ -1,17 +1,17 @@
# Continuous Integration
-Using Kops' declarative manifests it is possible to create and manage clusters entirely in a CI environment.
+Using kOps' declarative manifests it is possible to create and manage clusters entirely in a CI environment.
Rather than using `kops create cluster` and `kops edit cluster`, the cluster and instance group manifests can be stored in version control.
This allows cluster changes to be made through reviewable commits rather than on a local workstation.
This is ideal for larger teams in order to avoid possible collisions from simultaneous changes being made.
-It also provides an audit trail, consistent environment, and centralized view for any Kops commands being ran.
+It also provides an audit trail, consistent environment, and centralized view for any kOps commands being run.
-Running Kops in a CI environment can also be useful for upgrading Kops.
+Running kOps in a CI environment can also be useful for upgrading kOps.
Simply download a newer version in the CI environment and run a new pipeline.
This will allow any changes to be reviewed prior to being applied.
-This strategy can be extended to sequentially upgrade Kops on multiple clusters, allowing changes to be tested on non-production environments first.
+This strategy can be extended to sequentially upgrade kOps on multiple clusters, allowing changes to be tested on non-production environments first.
-This page provides examples for managing Kops clusters in CI environments.
+This page provides examples for managing kOps clusters in CI environments.
The [Manifest documentation](./manifests_and_customizing_via_api.md) describes how to create the YAML manifest files locally and includes high level examples of commands described below.
If you have a solution for a different CI platform or deployment strategy, feel free to open a Pull Request!
@@ -22,7 +22,7 @@ If you have a solution for a different CI platform or deployment strategy, feel
### Requirements
-* The GitLab runners that run the jobs need the appropriate permissions to invoke the Kops commands.
+* The GitLab runners that run the jobs need the appropriate permissions to invoke the kOps commands.
For clusters in AWS this means providing AWS IAM credentials either with IAM User Keys set as secret variables in the project, or having the runner run on an EC2 instance with an instance profile attached.


@@ -1,5 +1,5 @@
In order to develop inside a Docker container you must mount your local copy of
-the Kops repo into the container's `GOPATH`. For the official Golang Docker
+the kOps repo into the container's `GOPATH`. For the official Golang Docker
image this is simply a matter of running the following command:
```bash


@@ -1,6 +1,6 @@
# API machinery
-Kops uses the Kubernetes API machinery. It is well designed, and very powerful, but you have to
+kOps uses the Kubernetes API machinery. It is well designed, and very powerful, but you have to
jump through some hoops to use it.
Recommended reading: [kubernetes API convention doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) and [kubernetes API changes doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).


@@ -48,4 +48,4 @@ and then configure your IDE to connect its debugger to port 2345 on localhost.
- Make sure `$GOPATH` is set, and your [workspace](https://golang.org/doc/code.html#Workspaces) is configured.
- kops will not compile with symlinks in `$GOPATH`. See go issue [17451](https://github.com/golang/go/issues/17451) for more information.
- building kops requires go 1.15
-- Kops will only compile if the source is checked out in `$GOPATH/src/k8s.io/kops`. If you try to use `$GOPATH/src/github.com/kubernetes/kops` you will run into issues with package imports not working as expected.
+- kOps will only compile if the source is checked out in `$GOPATH/src/k8s.io/kops`. If you try to use `$GOPATH/src/github.com/kubernetes/kops` you will run into issues with package imports not working as expected.


@@ -1,4 +1,4 @@
-# Installing Kops via Hombrew
+# Installing kOps via Homebrew
Homebrew makes installing kops [very simple for macOS](../install.md).
```bash


@@ -35,15 +35,15 @@ the current `release-1.X` tag.
## New Kubernetes versions and release branches
-Typically Kops alpha releases are created off the master branch and beta and stable releases are created off of release branches.
+Typically kOps alpha releases are created off the master branch and beta and stable releases are created off of release branches.
In order to create a new release branch off of master prior to a beta release, perform the following steps:
1. Create a new presubmit E2E prow job for the new release branch [here](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/kops-presubmits.yaml).
2. Create a new milestone in the GitHub repo.
3. Update [prow's milestone_applier config](https://github.com/kubernetes/test-infra/blob/dc99617c881805981b85189da232d29747f87004/config/prow/plugins.yaml#L309-L313) to update master to use the new milestone and add an entry for the new branch that targets master's old milestone.
4. Create the new release branch in git and push it to the GitHub repo. This will trigger a warning in #kops-dev as well as trigger the postsubmit job that creates the `https://storage.googleapis.com/k8s-staging-kops/kops/releases/markers/release-1.XX/latest-ci.txt` version marker via cloudbuild.
-5. Update [build-pipeline.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-pipeline.py), incrementing `master_k8s_version` and the list of branches to reflect all actively maintained kops branches. Run `make` to regenerate the pipeline jobs.
-6. Update the [build-grid.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-grid.py), incrementing the single non-master `kops_version` list item and incrementing the `k8s_versions` values. Update the `ko` suffixes in the `skip_jobs` list to reflect the new kops release branch being tested. Run `make` to regenerate the grid jobs.
+5. Update [build-pipeline.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-pipeline.py), incrementing `master_k8s_version` and the list of branches to reflect all actively maintained kOps branches. Run `make` to regenerate the pipeline jobs.
+6. Update the [build-grid.py](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/kops/build-grid.py), incrementing the single non-master `kops_version` list item and incrementing the `k8s_versions` values. Update the `ko` suffixes in the `skip_jobs` list to reflect the new kOps release branch being tested. Run `make` to regenerate the grid jobs.
An example set of PRs are linked from [this](https://github.com/kubernetes/kops/issues/10079) issue.


@@ -1,10 +1,10 @@
# KOPS USE-CASE EXAMPLES AND LABORATORY EXERCISES.
-This section of our documentation contains typical use-cases for Kops. We'll cover here from the most basic things to very advanced use cases with a lot of technical detail. You can and will be able to reproduce all exercises (if you first read and understand what we did and why we did it) providing you have access to the proper resources.
+This section of our documentation contains typical use-cases for kOps. We'll cover everything from the most basic things to very advanced use cases with a lot of technical detail. You will be able to reproduce all exercises (if you first read and understand what we did and why we did it) provided you have access to the proper resources.
All exercises will need you to prepare your base environment (with kops and kubectl). You can see the ["basic requirements"](basic-requirements.md) document, which is a common set of procedures for all our exercises. Please note that all the exercises covered here are production-oriented.
-Our exercises are divided on "chapters". Each chapter covers a specific use-case for Kops:
+Our exercises are divided into "chapters". Each chapter covers a specific use-case for kOps:
- Chapter I: [USING KOPS WITH COREOS - A MULTI-MASTER/MULTI-NODE PRACTICAL EXAMPLE](coreos-kops-tests-multimaster.md).
- Chapter II: [USING KOPS WITH PRIVATE NETWORKING AND A BASTION HOST IN A HIGHLY-AVAILABLE SETUP](kops-tests-private-net-bastion-host.md).


@@ -1,4 +1,4 @@
-# Common Basic Requirements For Kops-Related Labs. Pre-Flight Check:
+# Common Basic Requirements For kOps-Related Labs. Pre-Flight Check:
Before rushing in to replicate any of the exercises, please ensure your basic environment is correctly set up. See the [KOPS AWS tutorial](../getting_started/aws.md) for more information.


@@ -312,7 +312,7 @@ I0906 09:42:29.215995 13538 executor.go:91] Tasks: 71 done / 75 total; 4 can r
I0906 09:42:30.073417 13538 executor.go:91] Tasks: 75 done / 75 total; 0 can run
I0906 09:42:30.073471 13538 dns.go:152] Pre-creating DNS records
I0906 09:42:32.403909 13538 update_cluster.go:247] Exporting kubecfg for cluster
-Kops has set your kubectl context to mycluster01.kopsclustertest.example.org
+kOps has set your kubectl context to mycluster01.kopsclustertest.example.org
Cluster is starting. It should be ready in a few minutes.


@@ -177,7 +177,7 @@ I0828 13:06:49.761535 16528 executor.go:91] Tasks: 100 done / 116 total; 9 can
I0828 13:06:50.897272 16528 executor.go:91] Tasks: 109 done / 116 total; 7 can run
I0828 13:06:51.516158 16528 executor.go:91] Tasks: 116 done / 116 total; 0 can run
I0828 13:06:51.944576 16528 update_cluster.go:247] Exporting kubecfg for cluster
-Kops has set your kubectl context to privatekopscluster.k8s.local
+kOps has set your kubectl context to privatekopscluster.k8s.local
Cluster changes have been applied to the cloud.


@@ -66,7 +66,7 @@ Required packages are also updated during bootstrapping if the value is not set.
## out
-`out` determines the directory into which Kops will write the target output for Terraform and CloudFormation. It defaults to `out/terraform` and `out/cloudformation` respectively.
+`out` determines the directory into which kOps will write the target output for Terraform and CloudFormation. It defaults to `out/terraform` and `out/cloudformation` respectively.
# API only Arguments


@@ -182,7 +182,7 @@ ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain.example.com --c
* Information on adding NS records with [Google Cloud
Platform](https://cloud.google.com/dns/update-name-servers)
-#### Using Public/Private DNS (Kops 1.5+)
+#### Using Public/Private DNS (kOps 1.5+)
By default the assumption is that NS records are publicly available. If you
require private DNS records you should modify the commands we run later in this
@@ -270,7 +270,7 @@ If the default encryption is not set or it cannot be checked, kops will resort t
It is possible to use a single S3 bucket for storing kops state for clusters
located in different accounts by using [cross-account bucket policies](http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html#access-policies-walkthrough-cross-account-permissions-acctA-tasks).
-Kops will be able to use buckets configured with cross-account policies by default.
+kOps will be able to use buckets configured with cross-account policies by default.
In this case you may want to override the object ACLs which kops places on the
state files, as default AWS ACLs will make it possible for an account that has
@@ -410,7 +410,7 @@ Now that you have a working _kops_ cluster, read through the [recommendations fo
## Feedback
-There's an incredible team behind Kops and we encourage you to reach out to the
+There's an incredible team behind kOps and we encourage you to reach out to the
community on the Kubernetes
Slack (http://slack.k8s.io/). Bring your
questions, comments, and requests and meet the people behind the project!
questions, comments, and requests and meet the people behind the project!


@@ -120,7 +120,7 @@ spec:
kops update cluster my-cluster.k8s.local --state ${KOPS_STATE_STORE} --yes
```
-Kops should create instances to all three zones, but provision volumes from the same zone.
+kOps should create instances in all three zones, but provision volumes from the same zone.
# Using external cloud controller manager
If you want to use an [External CCM](https://github.com/kubernetes/cloud-provider-openstack) in your installation, this section contains instructions on what you should do to get it up and running.
@@ -207,7 +207,7 @@ kops create cluster \
# Using with self-signed certificates in OpenStack
-Kops can be configured to use insecure mode towards OpenStack. However, this is not recommended as OpenStack cloudprovider in kubernetes does not support it.
+kOps can be configured to use insecure mode towards OpenStack. However, this is not recommended as the OpenStack cloud provider in kubernetes does not support it.
If you use the insecure flag in kops, it might be that the cluster does not work correctly.
```yaml


@@ -15,7 +15,7 @@ In order to use gossip-based DNS, configure the cluster domain name to end with
### Kubernetes API
-When using gossip mode, you have to expose the kubernetes API using a loadbalancer. Since there is no hosted zone for gossip-based clusters, you simply use the load balancer address directly. The user experience is identical to standard clusters. Kops will add the ELB DNS name to the kops-generated kubernetes configuration.
+When using gossip mode, you have to expose the kubernetes API using a loadbalancer. Since there is no hosted zone for gossip-based clusters, you simply use the load balancer address directly. The user experience is identical to standard clusters. kOps will add the ELB DNS name to the kops-generated kubernetes configuration.
### Bastion


@@ -10,14 +10,14 @@ be found in the `autoscaling/v1` API version. The alpha version, which includes
support for scaling on memory and custom metrics, can be found in
`autoscaling/v2beta1` (and `autoscaling/v2beta2` in 1.12 and later).
-Kops can assist in setting up HPA. Relevant reading you will need to go through:
+kOps can assist in setting up HPA. Relevant reading you will need to go through:
* [Extending the Kubernetes API with the aggregation layer][k8s-extend-api]
* [Configure The Aggregation Layer][k8s-aggregation-layer]
* [Horizontal Pod Autoscaling][k8s-hpa]
While the above links go into details on how Kubernetes needs to be configured
-to work with HPA, a lot of that work is already done for you by Kops.
+to work with HPA, a lot of that work is already done for you by kOps.
Specifically:
* [x] Enable the [Aggregation Layer][k8s-aggregation-layer] via the following


@@ -6,7 +6,7 @@ It is possible to launch a Kubernetes cluster from behind an http forward proxy
It is assumed the proxy already exists. If you want a private topology on AWS, for example, with a proxy instead of a NAT instance, you'll need to create the proxy yourself. See [Running in a shared VPC](run_in_existing_vpc.md).
-This configuration only manages proxy configurations for Kops and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.
+This configuration only manages proxy configurations for kOps and the Kubernetes cluster. We cannot handle proxy configuration for application containers and pods.
## Configuration


@@ -1,6 +1,6 @@
# IAM Roles
-By default Kops creates two IAM roles for the cluster: one for the masters, and one for the nodes.
+By default kOps creates two IAM roles for the cluster: one for the masters, and one for the nodes.
> Please note that currently all Pods running on your cluster have access to the instance IAM role.
> Consider using projects such as [kube2iam](https://github.com/jtblin/kube2iam) to prevent that.
@@ -12,7 +12,7 @@ An example of the new IAM policies can be found here:
- Master Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_master_strict.json
- Compute Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_node_strict.json
-On provisioning a new cluster with Kops v1.8.0 or above, by default you will be using the new stricter IAM policies. Upgrading an existing cluster will use the legacy IAM privileges to reduce risk of potential regression.
+On provisioning a new cluster with kOps v1.8.0 or above, by default you will be using the new stricter IAM policies. Upgrading an existing cluster will use the legacy IAM privileges to reduce risk of potential regression.
In order to update your cluster to use the strict IAM privileges, add the following within your Cluster Spec:
```yaml
@@ -59,15 +59,15 @@ The additional permissions are:
## Permissions Boundaries
{{ kops_feature_table(kops_added_default='1.19') }}
-AWS Permissions Boundaries enable you to use a policy (managed or custom) to set the maximum permissions that roles created by Kops will be able to grant to instances they're attached to. It can be useful to prevent possible privilege escalations.
+AWS Permissions Boundaries enable you to use a policy (managed or custom) to set the maximum permissions that roles created by kOps will be able to grant to instances they're attached to. It can be useful to prevent possible privilege escalations.
-To set a Permissions Boundary for Kops' roles, update your Cluster Spec with the following and then perform a cluster update:
+To set a Permissions Boundary for kOps' roles, update your Cluster Spec with the following and then perform a cluster update:
```yaml
iam:
permissionsBoundary: aws:arn:iam:123456789000:policy:test-boundary
```
-*NOTE: Currently, Kops only supports using a single Permissions Boundary for all roles it creates. In case you need to set per-role Permissions Boundaries, we recommend that you refer to this [section](#use-existing-aws-instance-profiles) below, and provide your own roles to Kops.*
+*NOTE: Currently, kOps only supports using a single Permissions Boundary for all roles it creates. In case you need to set per-role Permissions Boundaries, we recommend that you refer to this [section](#use-existing-aws-instance-profiles) below, and provide your own roles to kOps.*
## Adding External Policies
@@ -86,7 +86,7 @@ spec:
  - aws:arn:iam:123456789000:policy:test-policy
```
-External Policy attachments are treated declaritively. Any policies declared will be attached to the role, any policies not specified will be detached _after_ new policies are attached. This does not replace or affect built in Kops policies in any way.
+External Policy attachments are treated declaratively. Any policies declared will be attached to the role; any policies not specified will be detached _after_ new policies are attached. This does not replace or affect built-in kOps policies in any way.
It's important to note that externalPolicies will only handle the attachment and detachment of policies, not creation, modification, or deletion.
@@ -176,12 +176,12 @@ spec:
## Use existing AWS Instance Profiles
-Rather than having Kops create and manage IAM roles and instance profiles, it is possible to use an existing instance profile. This is useful in organizations where security policies prevent tools from creating their own IAM roles and policies.
-Kops will still output any differences in the IAM Inline Policy for each IAM Role.
-This is convenient for determining policy changes that need to be made when upgrading Kops.
+Rather than having kOps create and manage IAM roles and instance profiles, it is possible to use an existing instance profile. This is useful in organizations where security policies prevent tools from creating their own IAM roles and policies.
+kOps will still output any differences in the IAM Inline Policy for each IAM Role.
+This is convenient for determining policy changes that need to be made when upgrading kOps.
**Using IAM Managed Policies will not output these differences, it is up to the user to track expected changes to policies.**
-*NOTE: Currently Kops only supports using existing instance profiles for every instance group in the cluster, not a mix of existing and managed instance profiles.
+*NOTE: Currently kOps only supports using existing instance profiles for every instance group in the cluster, not a mix of existing and managed instance profiles.
This is due to the lifecycle overrides being used to prevent creation of the IAM-related resources.*
To do this, get a list of instance group names for the cluster:


@@ -88,7 +88,7 @@ spec:
## additionalUserData
-Kops utilizes cloud-init to initialize and setup a host at boot time. However in certain cases you may already be leveraging certain features of cloud-init in your infrastructure and would like to continue doing so. More information on cloud-init can be found [here](http://cloudinit.readthedocs.io/en/latest/)
+kOps utilizes cloud-init to initialize and set up a host at boot time. However, in certain cases you may already be leveraging certain features of cloud-init in your infrastructure and would like to continue doing so. More information on cloud-init can be found [here](http://cloudinit.readthedocs.io/en/latest/).
Additional user-data can be passed to the host provisioning by setting the `additionalUserData` field. A list of valid user-data content-types can be found [here](http://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive)
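A minimal sketch of such an instance group spec fragment, assuming the `additionalUserData` field takes a list of named cloud-init parts (the script name and content are hypothetical examples):

```yaml
spec:
  additionalUserData:
  - name: myscript.sh            # hypothetical part name
    type: text/x-shellscript     # one of cloud-init's user-data content-types
    content: |
      #!/bin/sh
      echo "hello from cloud-init" > /tmp/hello.txt
```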


@@ -1,3 +1,3 @@
-# Kops HTTP API Server (Deprecated)
+# kOps HTTP API Server (Deprecated)
The kops-server component has been deprecated in favor of CRDs.


@ -19,7 +19,7 @@ This document also applies to using the `kops` API to customize a Kubernetes clu
Because of the above statement `kops` includes an API which provides a feature for users to utilize YAML or JSON manifests for managing their `kops` created Kubernetes installations. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a `kops` Kubernetes instance with a manifest. All of these values are also usable via the interactive editor with `kops edit`.
> You can see all the options that are currently supported in Kops [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
> You can see all the options that are currently supported in kOps [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
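
A minimal illustrative manifest (names and values here are placeholders, not a complete working spec):

```yaml
# Illustrative only: a Cluster manifest fragment that could be applied
# with `kops replace -f cluster.yaml`.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  kubernetesVersion: 1.18.0
  networkCIDR: 172.20.0.0/16
```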
The following is a list of the benefits of using a file to manage instances.


@ -10,7 +10,7 @@ listed below, are available which implement and manage this abstraction.
## Supported networking options
The following table provides the support status for various networking providers with regards to Kops version:
The following table provides the support status for various networking providers with regards to kOps version:
| Network provider | Experimental | Stable | Deprecated | Removed |
| ------------ | -----------: | -----: | ---------: | ------: |
@ -29,7 +29,7 @@ The following table provides the support status for various networking providers
### Which networking provider should you use?
Kops maintainers have no bias over the CNI provider that you run, we only aim to be flexible and provide a working setup of the CNIs.
kOps maintainers have no bias over the CNI provider that you run, we only aim to be flexible and provide a working setup of the CNIs.
We do recommend something other than `kubenet` for production clusters due to `kubenet`'s limitations, as explained [below](#kubenet-default).
@ -80,7 +80,7 @@ Several CNI providers are currently built into kops:
* [Romana](networking/romana.md)
* [Weave](networking/weave.md)
Kops makes it easy for cluster operators to choose one of these options. The manifests for the providers
kOps makes it easy for cluster operators to choose one of these options. The manifests for the providers
are included with kops, and you simply use `--networking <provider-name>`. Replace the provider name
with the name listed in the provider's documentation (from the list above) when you run
`kops create cluster`. For instance, for a default Calico installation, execute the following:
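
A sketch of the command (the cluster name and other required flags, such as `--zones`, are omitted for brevity):

```shell
# Create a cluster using Calico as the CNI provider.
kops create cluster --networking calico
```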
@ -93,19 +93,19 @@ Later, when you run `kops get cluster -oyaml`, you will see the option you chose
### Advanced
Kops makes a best-effort attempt to expose as many configuration options as possible for the upstream CNI options that it supports within the Kops cluster spec. However, as upstream CNI options are always changing, not all options may be available, or you may wish to use a CNI option which Kops doesn't support. There may also be edge-cases to operating a given CNI that were not considered by the Kops maintainers. Allowing Kops to manage the CNI installation is sufficient for the vast majority of production clusters; however, if this is not true in your case, then Kops provides an escape-hatch that allows you to take greater control over the CNI installation.
kOps makes a best-effort attempt to expose as many configuration options as possible for the upstream CNI options that it supports within the kOps cluster spec. However, as upstream CNI options are always changing, not all options may be available, or you may wish to use a CNI option which kOps doesn't support. There may also be edge-cases to operating a given CNI that were not considered by the kOps maintainers. Allowing kOps to manage the CNI installation is sufficient for the vast majority of production clusters; however, if this is not true in your case, then kOps provides an escape-hatch that allows you to take greater control over the CNI installation.
When using the flag `--networking cni` on `kops create cluster` or `spec.networking: cni {}`, Kops will not install any CNI at all, but expect that you install it.
When using the flag `--networking cni` on `kops create cluster` or `spec.networking: cni {}`, kOps will not install any CNI at all, but will expect you to install one yourself.
If you try to create a new cluster in this mode, the master nodes will come up in `not ready` state. You will then be able to deploy any CNI DaemonSet by following [the vanilla kubernetes install instructions](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network). Once the CNI DaemonSet has been deployed, the master nodes should enter `ready` state and the remaining nodes should join the cluster shortly thereafter.
#### Important Caveats
For some of the CNI implementations, Kops does more than just launch a DaemonSet with the relevant CNI pod. For example, when installing Calico, Kops installs client certificates for Calico to enable mTLS for connections to etcd. If you were to simply replace `spec.networking`'s Calico options with `spec.networking: cni {}`, you would cause an outage.
For some of the CNI implementations, kOps does more than just launch a DaemonSet with the relevant CNI pod. For example, when installing Calico, kOps installs client certificates for Calico to enable mTLS for connections to etcd. If you were to simply replace `spec.networking`'s Calico options with `spec.networking: cni {}`, you would cause an outage.
If you do decide to take manual responsibility for maintaining the CNI, you should familiarize yourself with the parts of the Kops codebase which install your CNI ([example](https://github.com/kubernetes/kops/tree/master/nodeup/pkg/model/networking)) to ensure that you are replicating any additional actions which Kops was applying for your CNI option. You should closely follow your upstream CNI's releases and Kops's releases, to ensure that you can apply any updates or fixes issued by your upstream CNI or by the Kops maintainers.
If you do decide to take manual responsibility for maintaining the CNI, you should familiarize yourself with the parts of the kOps codebase which install your CNI ([example](https://github.com/kubernetes/kops/tree/master/nodeup/pkg/model/networking)) to ensure that you are replicating any additional actions which kOps was applying for your CNI option. You should closely follow your upstream CNI's releases and kOps's releases, to ensure that you can apply any updates or fixes issued by your upstream CNI or by the kOps maintainers.
Additionally, you should bear in mind that the Kops maintainers run e2e testing over the variety of supported CNI options that a Kops update must pass in order to be released. If you take over maintaining the CNI for your cluster, you should test potential Kops, Kubernetes, and CNI updates in a test cluster before updating.
Additionally, you should bear in mind that the kOps maintainers run e2e testing over the variety of supported CNI options that a kOps update must pass in order to be released. If you take over maintaining the CNI for your cluster, you should test potential kOps, Kubernetes, and CNI updates in a test cluster before updating.
## Validating CNI Installation
@ -125,7 +125,7 @@ for Kubernetes specific CNI challenges.
Switching from `kubenet` providers to a CNI provider is considered safe. Just update the config and roll the cluster.
It is also possible to switch between CNI providers, but this usually is a disruptive change. Kops will also not clean up any resources left behind by the previous CNI, _including_ the CNI daemonset.
It is also possible to switch between CNI providers, but this usually is a disruptive change. kOps will also not clean up any resources left behind by the previous CNI, _including_ the CNI daemonset.
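
The update-and-roll step above can be sketched as (flags may vary by kOps version):

```shell
# Edit spec.networking to the desired provider, then apply and roll:
kops edit cluster                  # change spec.networking
kops update cluster --yes          # push the new configuration
kops rolling-update cluster --yes  # replace instances so the change takes effect
```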
## Additional Reading


@ -84,7 +84,7 @@ spec:
### Configuring Calico to use Typha
As of Kops 1.12 Calico uses the kube-apiserver as its datastore. The default setup does not make use of [Typha](https://github.com/projectcalico/typha) - a component intended to lower the impact of Calico on the k8s APIServer which is recommended in [clusters over 50 nodes](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastoremore-than-50-nodes) and is strongly recommended in clusters of 100+ nodes.
As of kOps 1.12 Calico uses the kube-apiserver as its datastore. The default setup does not make use of [Typha](https://github.com/projectcalico/typha) - a component intended to lower the impact of Calico on the k8s APIServer which is recommended in [clusters over 50 nodes](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastoremore-than-50-nodes) and is strongly recommended in clusters of 100+ nodes.
It is possible to configure Calico to use Typha by editing a cluster and adding a `typhaReplicas` option to the Calico spec:
```yaml
networking:
  calico:
    typhaReplicas: 3
```

@ -105,7 +105,7 @@ kops rolling-update cluster --yes
This feature is in beta state as of kops 1.18.
As of Kops 1.18, you can have Cilium provision AWS managed adresses and attach them directly to Pods much like Lyft VPC and AWS VPC. See [the Cilium docs for more information](https://docs.cilium.io/en/v1.6/concepts/ipam/eni/)
As of kOps 1.18, you can have Cilium provision AWS-managed addresses and attach them directly to Pods, much like Lyft VPC and AWS VPC. See [the Cilium docs for more information](https://docs.cilium.io/en/v1.6/concepts/ipam/eni/)
When using ENI IPAM you need to disable masquerading in Cilium as well.
@ -123,14 +123,14 @@ Also note that this feature has only been tested on the default kops AMIs.
#### Enabling Encryption in Cilium
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.17') }}
As of Kops 1.19, it is possible to enable encryption for Cilium agent.
As of kOps 1.19, it is possible to enable encryption for Cilium agent.
In order to enable encryption, you must first generate the pre-shared key using this command:
```bash
cat <<EOF | kops create secret ciliumpassword -f -
keys: $(echo "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null| xxd -p -c 64)) 128")
EOF
```
The above command will create a dedicated secret for cilium and store it in the Kops secret store.
The above command will create a dedicated secret for cilium and store it in the kOps secret store.
Once the secret has been created, encryption can be enabled by setting `enableEncryption` option in `spec.networking.cilium` to `true`:
```yaml
networking:
  cilium:
    enableEncryption: true
```

@ -17,7 +17,7 @@ kops create cluster \
### Configuring Flannel iptables resync period
As of Kops 1.12.0, Flannel iptables resync option is configurable via editing a cluster and adding
As of kOps 1.12.0, Flannel iptables resync option is configurable via editing a cluster and adding
`iptablesResyncSeconds` option to spec:
```yaml
networking:
  flannel:
    iptablesResyncSeconds: 360
```

@ -13,7 +13,7 @@ The [node authorization service] is an experimental service which in the absence
- the client certificate by default is added into the system:nodes rbac group _(note, if you are using PSP this is automatically bound by kops on your behalf)_.
- the kubelet at this point has a server certificate and the client API certificate, and is good to go.
#### **Integration with Kops**
#### **Integration with kOps**
The node authorization service runs on the masters as a daemonset. By default its DNS name is _node-authorizer-internal.dns_zone_:10443, registered via the same mechanism as the internal kube-apiserver, i.e. annotations on the kube-apiserver pods that are picked up by the dns-controller and added to the DNS zone.


@ -81,7 +81,7 @@ able to reclaim memory (because it may not observe memory pressure right away,
since it polls `cAdvisor` to collect memory usage stats at a regular interval).
All the while, keep in mind that without `kube-reserved` nor `system-reserved`
reservations set (which is most clusters i.e. [GKE][5], [Kops][6]), the
reservations set (which is most clusters, e.g. [GKE][5], [kOps][6]), the
scheduler doesn't account for resources that non-pod components would require to
function properly because `Capacity` and `Allocatable` resources are more or
less equal.
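
The accounting described above can be sketched as follows (a simplification; the kubelet's real eviction logic is more involved):

```python
def allocatable(capacity, kube_reserved=0, system_reserved=0, eviction_threshold=0):
    """Node allocatable = capacity minus reservations and the eviction threshold.

    All quantities in MiB. With no kube-reserved or system-reserved
    reservations (the default described above), allocatable is essentially
    equal to capacity, so the scheduler sees the whole node as available
    to pods.
    """
    return capacity - kube_reserved - system_reserved - eviction_threshold

print(allocatable(8192))                 # no reservations: 8192
print(allocatable(8192, 256, 256, 100))  # with reservations: 7580
```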


@ -41,7 +41,7 @@ dnsZone: k8s.example.com
awsRegion: eu-west-1
```
When multiple environment files are passed using `--values` Kops performs a deep merge, for example given the following two files:
When multiple environment files are passed using `--values` kOps performs a deep merge, for example given the following two files:
```yaml
# File values-a.yaml
instanceGroups:


@ -1,7 +1,7 @@
# Rolling Updates
Upgrading and modifying a k8s cluster usually requires the replacement of cloud instances.
In order to avoid loss of service and other disruption, Kops replaces cloud instances
In order to avoid loss of service and other disruption, kOps replaces cloud instances
incrementally with a rolling update.
Rolling updates are performed using


@ -40,7 +40,7 @@ Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesV
The `kops upgrade` command also automates checking for and applying updates.
It is recommended to run the latest version of Kops to ensure compatibility with the target kubernetesVersion. When applying a Kubernetes minor version upgrade (e.g. `v1.5.3` to `v1.6.0`), you should confirm that the target kubernetesVersion is compatible with the [current Kops release](https://github.com/kubernetes/kops/releases).
It is recommended to run the latest version of kOps to ensure compatibility with the target kubernetesVersion. When applying a Kubernetes minor version upgrade (e.g. `v1.5.3` to `v1.6.0`), you should confirm that the target kubernetesVersion is compatible with the [current kOps release](https://github.com/kubernetes/kops/releases).
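
For reference, the field sits at the top level of the cluster spec (the version value here is illustrative):

```yaml
spec:
  kubernetesVersion: 1.17.0
```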
Note: if you want to upgrade from a `kube-up` installation, please see the instructions for [how to upgrade kubernetes installed with kube-up](cluster_upgrades_and_migrations.md).


@ -35,11 +35,11 @@ the current `release-1.X` tag.
## New Kubernetes versions and release branches
Typically Kops alpha releases are created off the master branch and beta and stable releases are created off of release branches.
Typically kOps alpha releases are created off the master branch and beta and stable releases are created off of release branches.
In order to create a new release branch off of master prior to a beta release, perform the following steps:
1. Create a new periodic E2E prow job for the "next" kubernetes minor version.
* All Kops prow jobs are defined [here](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes/kops)
* All kOps prow jobs are defined [here](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes/kops)
2. Create a new presubmit E2E prow job for the new release branch.
3. Create a new milestone in the GitHub repo.
4. Update [prow's milestone_applier config](https://github.com/kubernetes/test-infra/blob/dc99617c881805981b85189da232d29747f87004/config/prow/plugins.yaml#L309-L313) to update master to use the new milestone and add an entry for the new branch that targets master's old milestone.


@ -62,7 +62,7 @@ None known at this time
* Added tls certificate and private key path flags to kubelet config [@chrisz100](https://github.com/chrisz100) [#5088](https://github.com/kubernetes/kops/pull/5088)
* kubelet: expose --experimental-allowed-unsafe-sysctls [@smcquay](https://github.com/smcquay) [#5104](https://github.com/kubernetes/kops/pull/5104)
* Update docker image versions [@justinsb](https://github.com/justinsb) [#5057](https://github.com/kubernetes/kops/pull/5057)
* CoreDNS in Kops as an addon [@rajansandeep](https://github.com/rajansandeep) [#4041](https://github.com/kubernetes/kops/pull/4041)
* CoreDNS in kOps as an addon [@rajansandeep](https://github.com/rajansandeep) [#4041](https://github.com/kubernetes/kops/pull/4041)
* Implement network task for AlibabaCloud [@LilyFaFa](https://github.com/LilyFaFa),[@xh4n3](https://github.com/xh4n3) [#4991](https://github.com/kubernetes/kops/pull/4991)
* Allow rolling-update to filter on roles [@justinsb](https://github.com/justinsb) [#5122](https://github.com/kubernetes/kops/pull/5122)
* Remove stub tests [@justinsb](https://github.com/justinsb) [#5117](https://github.com/kubernetes/kops/pull/5117)
@ -268,7 +268,7 @@ None known at this time
* Add autoscaling group ids to terraform module output [@kampka](https://github.com/kampka) [#5472](https://github.com/kubernetes/kops/pull/5472)
* Allow kubelet to bind the hosts primary IP [@rdrgmnzs](https://github.com/rdrgmnzs) [#5460](https://github.com/kubernetes/kops/pull/5460)
* ContainerRegistry remapping should be atomic [@kampka](https://github.com/kampka) [#5479](https://github.com/kubernetes/kops/pull/5479)
* [GPU] Updated Kops GPU Setup Hook [@dcwangmit01](https://github.com/dcwangmit01) [#4971](https://github.com/kubernetes/kops/pull/4971)
* [GPU] Updated kOps GPU Setup Hook [@dcwangmit01](https://github.com/dcwangmit01) [#4971](https://github.com/kubernetes/kops/pull/4971)
* Only use SSL for ELB if certificate configured [@justinsb](https://github.com/justinsb) [#5485](https://github.com/kubernetes/kops/pull/5485)
* Simplify logic around master rolling-update [@justinsb](https://github.com/justinsb) [#5488](https://github.com/kubernetes/kops/pull/5488)
* Update Issue templates and add PR template [@mikesplain](https://github.com/mikesplain) [#5487](https://github.com/kubernetes/kops/pull/5487)


@ -106,7 +106,7 @@
* Add doc regarding upgrading to CoreDNS [@joshbranham](https://github.com/joshbranham) [#6344](https://github.com/kubernetes/kops/pull/6344)
* AWS: Enable ICMP Type 3 Code 4 for API server ELBs [@davidarcher](https://github.com/davidarcher) [#6297](https://github.com/kubernetes/kops/pull/6297)
* Additional Storage & Volume Mounting [@gambol99](https://github.com/gambol99) [#6066](https://github.com/kubernetes/kops/pull/6066)
* Kops for Openstack [@jrperritt](https://github.com/jrperritt),[@drekle](https://github.com/drekle),[@wozniakjan](https://github.com/wozniakjan),[@marsavela](https://github.com/marsavela) [#6351](https://github.com/kubernetes/kops/pull/6351)
* kOps for Openstack [@jrperritt](https://github.com/jrperritt),[@drekle](https://github.com/drekle),[@wozniakjan](https://github.com/wozniakjan),[@marsavela](https://github.com/marsavela) [#6351](https://github.com/kubernetes/kops/pull/6351)
* Update go version to 1.10.8 [@justinsb](https://github.com/justinsb) [#6401](https://github.com/kubernetes/kops/pull/6401)
* Suffix openstack subnet name with cluster name [@wozniakjan](https://github.com/wozniakjan) [#6380](https://github.com/kubernetes/kops/pull/6380)
* Update upgrade.md [@ms4720](https://github.com/ms4720) [#6396](https://github.com/kubernetes/kops/pull/6396)


@ -35,7 +35,7 @@ is safer.
apiGroup will now be kops.k8s.io, not kops. If performing strict string
comparison you will need to update your expected values.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -111,7 +111,7 @@ is safer.
* Make gofmt fails find usage [@drekle](https://github.com/drekle) [#6954](https://github.com/kubernetes/kops/pull/6954)
* Update commitlog relnotes for 1.12.0 [@justinsb](https://github.com/justinsb) [#6981](https://github.com/kubernetes/kops/pull/6981)
* 1.12 highlight changelog [@granular-ryanbonham](https://github.com/granular-ryanbonham) [#6982](https://github.com/kubernetes/kops/pull/6982)
* Mention version of Kops that introduced new features [@rifelpet](https://github.com/rifelpet) [#6983](https://github.com/kubernetes/kops/pull/6983)
* Mention version of kOps that introduced new features [@rifelpet](https://github.com/rifelpet) [#6983](https://github.com/kubernetes/kops/pull/6983)
* Terraform: fix options field, should be spot_options [@kimxogus](https://github.com/kimxogus) [#6988](https://github.com/kubernetes/kops/pull/6988)
* Add shortNames and columns to InstanceGroup CRD [@justinsb](https://github.com/justinsb) [#6995](https://github.com/kubernetes/kops/pull/6995)
* Add script to verify CRD generation [@justinsb](https://github.com/justinsb) [#6996](https://github.com/kubernetes/kops/pull/6996)
@ -191,7 +191,7 @@ is safer.
* Use readinessProbe for weave-net instead of livenessProbe [@ReillyProcentive](https://github.com/ReillyProcentive) [#7102](https://github.com/kubernetes/kops/pull/7102)
* Add some permissions to cluster-autoscaler clusterrole [@Coolknight](https://github.com/Coolknight) [#7248](https://github.com/kubernetes/kops/pull/7248)
* Spotinst: Rolling update always reports NeedsUpdate [@liranp](https://github.com/liranp) [#7251](https://github.com/kubernetes/kops/pull/7251)
* Add documentation example for running Kops in a CI environment [@rifelpet](https://github.com/rifelpet) [#7256](https://github.com/kubernetes/kops/pull/7256)
* Add documentation example for running kOps in a CI environment [@rifelpet](https://github.com/rifelpet) [#7256](https://github.com/kubernetes/kops/pull/7256)
* Calico -> 3.7.4 for older versions [@justinsb](https://github.com/justinsb) [#7282](https://github.com/kubernetes/kops/pull/7282)
* [Issue-7148] Legacyetcd support for Digital Ocean [@srikiz](https://github.com/srikiz) [#7221](https://github.com/kubernetes/kops/pull/7221)
* Stop .gitignoring all files named go-bindata [@justinsb](https://github.com/justinsb) [#7288](https://github.com/kubernetes/kops/pull/7288)
@ -376,7 +376,7 @@ is safer.
* Fix issues with older versions of k8s for basic clusters [@hakman](https://github.com/hakman),[@rifelpet](https://github.com/rifelpet) [#8248](https://github.com/kubernetes/kops/pull/8248)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
## 1.15.1 to 1.15.2


@ -28,11 +28,11 @@
# Required Actions
* If either a Kops 1.16 alpha release or a custom Kops build was used on a cluster,
* If either a kOps 1.16 alpha release or a custom kOps build was used on a cluster,
a kops-controller Deployment may have been created that should get deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.16.0-beta.1 or later.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.16.0-beta.1 or later.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -144,7 +144,7 @@
* Add a BAZEL_CONFIG Makefile arg to bazel commands [@fejta](https://github.com/fejta) [#7758](https://github.com/kubernetes/kops/pull/7758)
* Memberlist gossip implementation [@jacksontj](https://github.com/jacksontj) [#7521](https://github.com/kubernetes/kops/pull/7521)
* bazel: comment out shallow_since as fails to build with bazel 1.0 [@justinsb](https://github.com/justinsb) [#7771](https://github.com/kubernetes/kops/pull/7771)
* Kops controller support for OpenStack [@zetaab](https://github.com/zetaab) [#7692](https://github.com/kubernetes/kops/pull/7692)
* kOps controller support for OpenStack [@zetaab](https://github.com/zetaab) [#7692](https://github.com/kubernetes/kops/pull/7692)
* Upgrade Amazon VPC CNI plugin to 1.5.4 [@rifelpet](https://github.com/rifelpet) [#7398](https://github.com/kubernetes/kops/pull/7398)
* Add documentation for updating CRDs when making API changes [@rifelpet](https://github.com/rifelpet) [#7728](https://github.com/kubernetes/kops/pull/7728)
* Kubelet configuration: Maximum pods flag is miscalculated when using Amazon VPC CNI [@liranp](https://github.com/liranp) [#7539](https://github.com/kubernetes/kops/pull/7539)
@ -233,7 +233,7 @@
* Machine types updates [@mikesplain](https://github.com/mikesplain) [#7947](https://github.com/kubernetes/kops/pull/7947)
* fix 404 urls in docs [@tanjunchen](https://github.com/tanjunchen) [#7943](https://github.com/kubernetes/kops/pull/7943)
* Fix generation of documentation /sitemap.xml file [@aledbf](https://github.com/aledbf) [#7949](https://github.com/kubernetes/kops/pull/7949)
* Kops site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* kOps site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* Fix netlify mixed content [@mikesplain](https://github.com/mikesplain) [#7953](https://github.com/kubernetes/kops/pull/7953)
* Fix goimports errors [@rifelpet](https://github.com/rifelpet) [#7955](https://github.com/kubernetes/kops/pull/7955)
* Upate Lyft CNI to v0.5.1 [@maruina](https://github.com/maruina) [#7402](https://github.com/kubernetes/kops/pull/7402)
@ -268,7 +268,7 @@
* Add Cilium.EnablePolicy back into templates [@olemarkus](https://github.com/olemarkus) [#8379](https://github.com/kubernetes/kops/pull/8379)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Kops releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* EBS Root Volume Termination [@tioxy](https://github.com/tioxy) [#7865](https://github.com/kubernetes/kops/pull/7865)
* Announce impending removal of v1alpha1 API [@johngmyers](https://github.com/johngmyers) [#8064](https://github.com/kubernetes/kops/pull/8064)
* Add missing priorityClassName for critical pods [@johngmyers](https://github.com/johngmyers) [#8200](https://github.com/kubernetes/kops/pull/8200)


@ -28,11 +28,11 @@
# Required Actions
* Terraform users on AWS may need to rename resources in their terraform state file in order to prepare for future Terraform 0.12 support.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In Kops, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In kOps, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
* The default route was named `aws_route.0-0-0-0--0` and will now be named `aws_route.route-0-0-0-0--0`.
* Additional CIDR blocks associated with a VPC were similarly named the hyphenated CIDR block with two hyphens for the `/`, for example `aws_vpc_ipv4_cidr_block_association.10-1-0-0--16`. These will now be prefixed with `cidr-`, for example `aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16`.
To prevent downtime, follow these steps with the new version of Kops:
To prevent downtime, follow these steps with the new version of kOps:
```
kops update cluster --target terraform ...
terraform plan
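# Rename the affected resources in the state before applying; the two
# renames below follow the patterns described above and are illustrative --
# repeat the second for each additional VPC CIDR association:
terraform state mv aws_route.0-0-0-0--0 aws_route.route-0-0-0-0--0
terraform state mv aws_vpc_ipv4_cidr_block_association.10-1-0-0--16 aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16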
@ -45,7 +45,7 @@
terraform apply
```
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of Kops.
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
@ -57,9 +57,9 @@
PodPriority: "true"
```
* If either a Kops 1.17 alpha release or a custom Kops build was used on a cluster,
* If either a kOps 1.17 alpha release or a custom kOps build was used on a cluster,
a kops-controller Deployment may have been created that should get deleted because it has been replaced with a DaemonSet.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to Kops 1.17.0-alpha.2 or later.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.17.0-alpha.2 or later.
# Deprecations
@ -67,27 +67,27 @@
* The `kops/v1alpha1` API is deprecated and will be removed in kops 1.18. Users of `kops replace` will need to supply v1alpha2 resources.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of Kops.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of kOps.
* Support for Debian 8 (Jessie) has been deprecated and will be removed in future versions of Kops.
* Support for Debian 8 (Jessie) has been deprecated and will be removed in future versions of kOps.
* Support for CoreOS has been deprecated and will be removed in future versions of Kops. Those affected should consider using [Flatcar](../operations/images.md#flatcar) as a replacement.
* Support for CoreOS has been deprecated and will be removed in future versions of kOps. Those affected should consider using [Flatcar](../operations/images.md#flatcar) as a replacement.
* Support for the "Legacy" etcd provider has been deprecated. It will not be supported for Kubernetes 1.18 or later. To migrate to the default "Manager" etcd provider see the [etcd migration documentation](../etcd3-migration.md).
* The default StorageClass `gp2` prior to Kops 1.17.0 is no longer the default, replaced by StorageClass `kops-ssd-1-17`.
* The default StorageClass `gp2` prior to kOps 1.17.0 is no longer the default, replaced by StorageClass `kops-ssd-1-17`.
# Known Issues
* Kops 1.17.0-beta.1 included an update for AWS IAM Authenticator to 0.5.0.
* kOps 1.17.0-beta.1 included an update for AWS IAM Authenticator to 0.5.0.
This version fails to use the volume mounted ConfigMap causing API authentication issues for clients with aws-iam-authenticator credentials.
Any cluster with `spec.authentication.aws` defined according to the [docs](../authentication.md#aws-iam-authenticator) without overriding the `spec.authentication.aws.image` is affected.
The workaround is to specify the old 0.4.0 image with `spec.authentication.aws.image=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.4.0`.
For the 1.17.0 release, this change was rolled back, and the AWS IAM authenticator defaults to version 0.4.0.
* Kops 1.17.0 includes a new StorageClass `kops-ssd-1-17` which is set as the default via the annotation `"storageclass.beta.kubernetes.io/is-default-class":"true"`.
* kOps 1.17.0 includes a new StorageClass `kops-ssd-1-17` which is set as the default via the annotation `"storageclass.beta.kubernetes.io/is-default-class":"true"`.
If you have modified the previous `gp2` StorageClass, it could conflict with the defaulting behavior.
To resolve, patch the `gp2` StorageClass to have the annotation `"storageclass.beta.kubernetes.io/is-default-class":"false"`, which aligns with a patch to Kops 1.17.1 as well.
To resolve, patch the `gp2` StorageClass to have the annotation `"storageclass.beta.kubernetes.io/is-default-class":"false"`, which aligns with a patch to kOps 1.17.1 as well.
`kubectl patch storageclass.storage.k8s.io/gp2 --patch '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "false"}}}'`
# Full change list since 1.16.0 release
* Machine types updates [@mikesplain](https://github.com/mikesplain) [#7947](https://github.com/kubernetes/kops/pull/7947)
* fix 404 urls in docs [@tanjunchen](https://github.com/tanjunchen) [#7943](https://github.com/kubernetes/kops/pull/7943)
* Fix generation of documentation /sitemap.xml file [@aledbf](https://github.com/aledbf) [#7949](https://github.com/kubernetes/kops/pull/7949)
* kOps site link [@mikesplain](https://github.com/mikesplain) [#7950](https://github.com/kubernetes/kops/pull/7950)
* Fix netlify mixed content [@mikesplain](https://github.com/mikesplain) [#7953](https://github.com/kubernetes/kops/pull/7953)
* Fix goimports errors [@rifelpet](https://github.com/rifelpet) [#7955](https://github.com/kubernetes/kops/pull/7955)
* Update Lyft CNI to v0.5.1 [@maruina](https://github.com/maruina) [#7402](https://github.com/kubernetes/kops/pull/7402)
* Bump etcd-manager to 3.0.20200116 (#8310) [@mmerrill3](https://github.com/mmerrill3) [#8399](https://github.com/kubernetes/kops/pull/8399)
* CoreDNS default image bump to 1.6.6 to resolve CVE [@gjtempleton](https://github.com/gjtempleton) [#8333](https://github.com/kubernetes/kops/pull/8333)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* EBS Root Volume Termination [@tioxy](https://github.com/tioxy) [#7865](https://github.com/kubernetes/kops/pull/7865)
* Alicloud: etcd-manager support [@bittopaz](https://github.com/bittopaz) [#8016](https://github.com/kubernetes/kops/pull/8016)
* Update lyft CNI to 0.6.0 [@maruina](https://github.com/maruina) [#8757](https://github.com/kubernetes/kops/pull/8757)
* Fix Handling of LaunchTemplate Versions for MixedInstancePolicy [@granular-ryanbonham](https://github.com/granular-ryanbonham),[@KashifSaadat](https://github.com/KashifSaadat),[@qqshfox](https://github.com/qqshfox) [#8038](https://github.com/kubernetes/kops/pull/8038)
* Enable stamping on bazel image builds [@rifelpet](https://github.com/rifelpet) [#8835](https://github.com/kubernetes/kops/pull/8835)
* Add support for Docker 19.03.8 in kOps 1.17 [@hakman](https://github.com/hakman) [#8845](https://github.com/kubernetes/kops/pull/8845)
* Remove support for Docker 1.11, 1.12 and 1.13 [@hakman](https://github.com/hakman) [#8855](https://github.com/kubernetes/kops/pull/8855)
* Fix kuberouter for k8s 1.16+ [@UnderMyBed](https://github.com/UnderMyBed),[@hakman](https://github.com/hakman) [#8697](https://github.com/kubernetes/kops/pull/8697)
* Fix tests for obsolete Docker versions in 1.17 [@hakman](https://github.com/hakman) [#8889](https://github.com/kubernetes/kops/pull/8889)


* Rolling updates now support surging and parallelism within an instance group. For details see [the documentation](https://kops.sigs.k8s.io/operations/rolling-update/).
* Cilium CNI can now use AWS networking natively through the AWS ENI IPAM mode. kOps can also run a Kubernetes cluster entirely without kube-proxy using Cilium's BPF NodePort implementation.
* Cilium CNI can now use a dedicated etcd cluster managed by etcd-manager for synchronizing agent state instead of CRDs.
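As a rough sketch, the ENI IPAM and kube-proxy-free features above map to cluster-spec fields like these; the exact field names (`ipam`, `enableNodePort`) are assumptions based on the kOps networking docs, not taken from this page:

```
spec:
  kubeProxy:
    enabled: false         # assumed: disable kube-proxy in favor of Cilium
  networking:
    cilium:
      ipam: eni            # assumed field: AWS ENI IPAM mode
      enableNodePort: true # assumed field: Cilium BPF NodePort implementation
```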
* The Docker `health-check` service has been disabled by default. It shouldn't be needed anymore, but it can still be enabled by setting `spec.docker.healthCheck: true`. It is recommended to also check [node-problem-detector](https://github.com/kubernetes/node-problem-detector) and [draino](https://github.com/planetlabs/draino) as replacements. See Required Actions below.
* Network and internet access for `docker run` containers has been disabled by default, to avoid any unwanted interaction between the Docker firewall rules and the firewall rules of network plugins. This was the default since the early days of kOps, but a race condition in the Docker startup sequence changed this behaviour in more recent years. To re-enable, set `spec.docker.ipTables: true` and `spec.docker.ipMasq: true`.
* Lyft CNI plugin default subnet tags changed from `Type: pod` to `KubernetesCluster: myclustername.mydns.io`. Subnets intended for use by the plugin will need to be tagged with this new tag and [additional tag filters](https://github.com/lyft/cni-ipvlan-vpc-k8s#other-configuration-flags) may need to be added to the cluster spec in order to achieve the desired set of subnets.
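To restore container network access, the two Docker flags mentioned above sit together under `spec.docker`; a minimal fragment:

```
spec:
  docker:
    ipTables: true
    ipMasq: true
```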
# Required Actions
* Terraform users on AWS may need to rename resources in their terraform state file in order to support Terraform 0.12.
Terraform 0.12 [no longer supports resource names starting with digits](https://www.terraform.io/upgrade-guides/0-12.html#pre-upgrade-checklist). In kOps, both the default route and additional VPC CIDR associations are affected. See [#7957](https://github.com/kubernetes/kops/pull/7957) for more information.
* The default route was named `aws_route.0-0-0-0--0` and will now be named `aws_route.route-0-0-0-0--0`.
* Additional CIDR blocks associated with a VPC were similarly named the hyphenated CIDR block with two hyphens for the `/`, for example `aws_vpc_ipv4_cidr_block_association.10-1-0-0--16`. These will now be prefixed with `cidr-`, for example `aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16`.
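One way to perform these renames without destroying and recreating resources is `terraform state mv`, sketched here with the example addresses above (the CIDR-derived names will differ per cluster):

```
terraform state mv 'aws_route.0-0-0-0--0' 'aws_route.route-0-0-0-0--0'
terraform state mv 'aws_vpc_ipv4_cidr_block_association.10-1-0-0--16' \
  'aws_vpc_ipv4_cidr_block_association.cidr-10-1-0-0--16'
```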
To prevent downtime, follow these steps with the new version of kOps:
```
KOPS_FEATURE_FLAGS=-Terraform-0.12 kops update cluster --target terraform ...
# Use Terraform <0.12
# Ensure these resources are no longer being destroyed and recreated
terraform apply
```
kOps will now output Terraform 0.12 syntax with the normal workflow:
```
kops update cluster --target terraform ...
# Use Terraform 0.12. This plan should be a no-op
```

To keep the Docker health-check service enabled, the corresponding cluster-spec fragment is:

```
docker:
  healthCheck: true
```
* Kubernetes 1.9 users will need to enable the PodPriority feature gate. This is required for newer versions of kOps.
To enable the Pod priority feature, follow these steps:
```
PodPriority: "true"
```
* If a custom kOps build was used on a cluster, a kops-controller Deployment may have been created that should get deleted.
Run `kubectl -n kube-system delete deployment kops-controller` after upgrading to kOps 1.16.0-beta.1 or later.
# Known Issues
* Support for Kubernetes versions 1.9 and 1.10 is deprecated and will be removed in kops 1.19.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of kOps.
* Support for the Romana networking provider is deprecated and will be removed in kops 1.19.
* GCE: Fix Permission for the Storage Bucket [@mccare](https://github.com/mccare) [#8157](https://github.com/kubernetes/kops/pull/8157)
* pkg/instancegroups - fix static check [@johngmyers](https://github.com/johngmyers) [#8186](https://github.com/kubernetes/kops/pull/8186)
* pkg/resources/aws:simplify code and remove code [@Aresforchina](https://github.com/Aresforchina) [#8188](https://github.com/kubernetes/kops/pull/8188)
* Update links printed by kOps to use new docs site [@rifelpet](https://github.com/rifelpet) [#8190](https://github.com/kubernetes/kops/pull/8190)
* dnsprovider/pkg/dnsprovider - fix static check [@hakman](https://github.com/hakman) [#8149](https://github.com/kubernetes/kops/pull/8149)
* fix staticcheck failures in pkg/resources [@Aresforchina](https://github.com/Aresforchina) [#8191](https://github.com/kubernetes/kops/pull/8191)
* Add corresponding unit test to the function in subnet.go. [@fenggw-fnst](https://github.com/fenggw-fnst) [#8195](https://github.com/kubernetes/kops/pull/8195)
* Remove addons only applicable to unsupported versions of Kubernetes [@johngmyers](https://github.com/johngmyers) [#8318](https://github.com/kubernetes/kops/pull/8318)
* Don't load nonexistent calico-client cert when CNI is Cilium [@johngmyers](https://github.com/johngmyers) [#8338](https://github.com/kubernetes/kops/pull/8338)
* Edit author name [@LinshanYu](https://github.com/LinshanYu) [#8374](https://github.com/kubernetes/kops/pull/8374)
* kOps releases - prefix git tags with v [@rifelpet](https://github.com/rifelpet) [#8373](https://github.com/kubernetes/kops/pull/8373)
* Support additional kube-scheduler config parameters via config file [@rralcala](https://github.com/rralcala) [#8407](https://github.com/kubernetes/kops/pull/8407)
* Option to increase concurrency of rolling update within instancegroup [@johngmyers](https://github.com/johngmyers) [#8271](https://github.com/kubernetes/kops/pull/8271)
* Fix template clusterName behavior [@lcrisci](https://github.com/lcrisci) [#7319](https://github.com/kubernetes/kops/pull/7319)
* Update Calico and Canal to v3.13.2 [@hakman](https://github.com/hakman) [#8865](https://github.com/kubernetes/kops/pull/8865)
* GCE: Delete cluster will also delete the DNS entries created by kubernetes [@mccare](https://github.com/mccare),[@justinsb](https://github.com/justinsb) [#8250](https://github.com/kubernetes/kops/pull/8250)
* Add Terraform 0.12 support [@rifelpet](https://github.com/rifelpet) [#8825](https://github.com/kubernetes/kops/pull/8825)
* Don't compress bindata & allow kOps to be imported as a package. [@rdrgmnzs](https://github.com/rdrgmnzs),[@justinsb](https://github.com/justinsb) [#8584](https://github.com/kubernetes/kops/pull/8584)
* Validate cluster N times in rolling-update [@zetaab](https://github.com/zetaab) [#8868](https://github.com/kubernetes/kops/pull/8868)
* Update go.mod for k8s 1.17 [@justinsb](https://github.com/justinsb) [#8873](https://github.com/kubernetes/kops/pull/8873)
* pkg: add some unit tests [@q384566678](https://github.com/q384566678) [#8872](https://github.com/kubernetes/kops/pull/8872)


## Changes to kubernetes config export
kOps will no longer automatically export the kubernetes config on `kops update cluster`. In order to export the config on cluster update, you need to either add the `--user <user>` to reference an existing user, or `--admin` to export the cluster admin user. If neither flag is passed, the kubernetes config will not be modified. This makes it easier to reuse user definitions across clusters should you, for example, use OIDC for authentication.
Similarly, `kops export kubecfg` will also require passing either the `--admin` or `--user` flag if the context does not already exist.
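Sketched as commands (the user name is a placeholder):

```
# Export the cluster admin user while updating:
kops update cluster --yes --admin
# Or reference an existing kubeconfig user:
kops update cluster --yes --user myuser
# The same flags apply when re-exporting a kubeconfig:
kops export kubecfg --admin
```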
## OpenStack Cinder plugin
kOps will install the Cinder plugin for kops running kubernetes 1.16 or newer. If you already have this plugin installed you should remove it before upgrading.
If you already have a default `StorageClass`, you should set `cloudConfig.Openstack.BlockStorage.CreateStorageClass: false` to prevent kops from installing one.
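In cluster-spec YAML that setting would look roughly like this; the lowerCamelCase field casing is assumed from the kOps API conventions:

```
spec:
  cloudConfig:
    openstack:
      blockStorage:
        createStorageClass: false
```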
* Spotinst: Allow a user specifiable node draining timeout [@liranp](https://github.com/liranp) [#9221](https://github.com/kubernetes/kops/pull/9221)
* Validate IG RootVolumeType [@olemarkus](https://github.com/olemarkus) [#9265](https://github.com/kubernetes/kops/pull/9265)
* gce: log bucket-policy-only message at a level that always appears [@justinsb](https://github.com/justinsb) [#9276](https://github.com/kubernetes/kops/pull/9276)
* Prepare kOps for multi-architecture support [@hakman](https://github.com/hakman) [#9216](https://github.com/kubernetes/kops/pull/9216)
* Ensure we have IAM bucket permissions to other S3 buckets [@justinsb](https://github.com/justinsb) [#9274](https://github.com/kubernetes/kops/pull/9274)
* Refactor cert issuance code [@johngmyers](https://github.com/johngmyers) [#9130](https://github.com/kubernetes/kops/pull/9130)
* Allow failure of the ARM64 job in TravisCI [@hakman](https://github.com/hakman) [#9279](https://github.com/kubernetes/kops/pull/9279)
* Docs helptext [@olemarkus](https://github.com/olemarkus) [#9333](https://github.com/kubernetes/kops/pull/9333)
* Use launch templates by default [@johngmyers](https://github.com/johngmyers) [#9289](https://github.com/kubernetes/kops/pull/9289)
* Refactor kubemanifest to be clearer [@justinsb](https://github.com/justinsb) [#9342](https://github.com/kubernetes/kops/pull/9342)
* Refactor BootstrapChannelBuilder to use a KopsModelContext [@justinsb](https://github.com/justinsb) [#9338](https://github.com/kubernetes/kops/pull/9338)
* Issue kubecfg and kops certs in nodeup [@johngmyers](https://github.com/johngmyers) [#9347](https://github.com/kubernetes/kops/pull/9347)
* Update release notes for Ubuntu 20.04 and CVEs [@hakman](https://github.com/hakman) [#9332](https://github.com/kubernetes/kops/pull/9332)
* Add nodelocal dns cache to release notes and add kops version to docs [@olemarkus](https://github.com/olemarkus) [#9351](https://github.com/kubernetes/kops/pull/9351)
* Typo and wording fix to getting_started/commands doc [@MoShitrit](https://github.com/MoShitrit) [#9417](https://github.com/kubernetes/kops/pull/9417)
* Alicloud: Refactor LoadBalancerWhiteList to LoadBalancerACL [@bittopaz](https://github.com/bittopaz) [#8304](https://github.com/kubernetes/kops/pull/8304)
* Remove PHONY declaration on non-phony targets [@johngmyers](https://github.com/johngmyers) [#9419](https://github.com/kubernetes/kops/pull/9419)
* Build and publish only Linux AMD64 kOps artifacts for CI [@hakman](https://github.com/hakman) [#9401](https://github.com/kubernetes/kops/pull/9401)
* Remove more sha1-generation code [@johngmyers](https://github.com/johngmyers) [#9423](https://github.com/kubernetes/kops/pull/9423)
* Fix: dns-controller: 3999 port address already in use [@vgunapati](https://github.com/vgunapati) [#9404](https://github.com/kubernetes/kops/pull/9404)
* Fix dns selectors for older k8s [@olemarkus](https://github.com/olemarkus) [#9431](https://github.com/kubernetes/kops/pull/9431)
* Release notes for 1.19.0-alpha.3 [@hakman](https://github.com/hakman) [#9805](https://github.com/kubernetes/kops/pull/9805)
* Stop trying to pull the Protokube image [@hakman](https://github.com/hakman) [#9809](https://github.com/kubernetes/kops/pull/9809)
* Add all images to GH release [@hakman](https://github.com/hakman) [#9811](https://github.com/kubernetes/kops/pull/9811)
* Refactor: KopsModelContext embeds IAMModelContext [@justinsb](https://github.com/justinsb) [#9814](https://github.com/kubernetes/kops/pull/9814)
* Adding docs on AWS Permissions Boundaries support [@victorfrancax1](https://github.com/victorfrancax1) [#9807](https://github.com/kubernetes/kops/pull/9807)
* Fix GCE cluster creation with private topology [@rifelpet](https://github.com/rifelpet) [#9815](https://github.com/kubernetes/kops/pull/9815)
* Support writing a full certificate chain [@justinsb](https://github.com/justinsb) [#9812](https://github.com/kubernetes/kops/pull/9812)


* Mark kops 1.7.0-beta.1 [@justinsb](https://github.com/justinsb) [#3005](https://github.com/kubernetes/kops/pull/3005)
* Add missing step to pull template file; correct kops option. [@j14s](https://github.com/j14s) [#3006](https://github.com/kubernetes/kops/pull/3006)
* Test kops submit-queue [@cjwagner](https://github.com/cjwagner) [#3012](https://github.com/kubernetes/kops/pull/3012)
* kOps apiserver support for openapi and generated API docs [@pwittrock](https://github.com/pwittrock) [#3001](https://github.com/kubernetes/kops/pull/3001)
* Fix for the instructions about using KOPS_FEATURE_FLAGS for drain and… [@FrederikNS](https://github.com/FrederikNS) [#2934](https://github.com/kubernetes/kops/pull/2934)
* populate cloud labels with cluster autoscaler tags [@sethpollack](https://github.com/sethpollack) [#3017](https://github.com/kubernetes/kops/pull/3017)
* Support for lifecycles [@justinsb](https://github.com/justinsb) [#2763](https://github.com/kubernetes/kops/pull/2763)


* Fixing clusterautoscaler rbac [@BradErz](https://github.com/BradErz) [#3145](https://github.com/kubernetes/kops/pull/3145)
* Fix for Canal Taints and Tolerations [@prachetasp](https://github.com/prachetasp) [#3142](https://github.com/kubernetes/kops/pull/3142)
* Etcd TLS Options [@gambol99](https://github.com/gambol99) [#3114](https://github.com/kubernetes/kops/pull/3114)
* kOps Replace Command - create unprovisioned [@gambol99](https://github.com/gambol99) [#3089](https://github.com/kubernetes/kops/pull/3089)
* Add support for cluster using http forward proxy #2481 [@DerekV](https://github.com/DerekV) [#2777](https://github.com/kubernetes/kops/pull/2777)
* Fix Typo to improve GoReportCard [@asifdxtreme](https://github.com/asifdxtreme) [#3156](https://github.com/kubernetes/kops/pull/3156)
* Update alpha channel with update image & versions [@justinsb](https://github.com/justinsb) [#3103](https://github.com/kubernetes/kops/pull/3103)
* Explicit CreateCluster & UpdateCluster functions [@justinsb](https://github.com/justinsb) [#3240](https://github.com/kubernetes/kops/pull/3240)
* remove --cluster-cidr from kube-router's manifest. [@murali-reddy](https://github.com/murali-reddy) [#3263](https://github.com/kubernetes/kops/pull/3263)
* Replace deprecated aws session.New() with session.NewSession() [@alrs](https://github.com/alrs) [#3255](https://github.com/kubernetes/kops/pull/3255)
* kOps command fixes [@alrs](https://github.com/alrs) [#3277](https://github.com/kubernetes/kops/pull/3277)
* Update go-ini dep to v1.28.2 [@justinsb](https://github.com/justinsb) [#3283](https://github.com/kubernetes/kops/pull/3283)
* Add go1.9 target to travis [@justinsb](https://github.com/justinsb) [#3279](https://github.com/kubernetes/kops/pull/3279)
* Refactor apiserver templates [@georgebuckerfield](https://github.com/georgebuckerfield) [#3284](https://github.com/kubernetes/kops/pull/3284)
* kOps Secrets on Nodes [@gambol99](https://github.com/gambol99) [#3270](https://github.com/kubernetes/kops/pull/3270)
* Add Initializers admission controller [@justinsb](https://github.com/justinsb) [#3289](https://github.com/kubernetes/kops/pull/3289)
* Limit the IAM EC2 policy for the master nodes [@KashifSaadat](https://github.com/KashifSaadat) [#3186](https://github.com/kubernetes/kops/pull/3186)
* Allow user defined endpoint to host action for Canal [@KashifSaadat](https://github.com/KashifSaadat) [#3272](https://github.com/kubernetes/kops/pull/3272)
* Correct typo in Hooks Spec examples [@KashifSaadat](https://github.com/KashifSaadat) [#3381](https://github.com/kubernetes/kops/pull/3381)
* Honor ServiceNodePortRange when opening NodePort access [@justinsb](https://github.com/justinsb) [#3379](https://github.com/kubernetes/kops/pull/3379)
* More Makefile improvements [@alrs](https://github.com/alrs) [#3380](https://github.com/kubernetes/kops/pull/3380)
* Revision to IAM Policies created by kOps [@chrislovecnm](https://github.com/chrislovecnm) [#3343](https://github.com/kubernetes/kops/pull/3343)
* Add file assets to node user data scripts, fingerprint fileAssets and hooks content. [@KashifSaadat](https://github.com/KashifSaadat) [#3323](https://github.com/kubernetes/kops/pull/3323)
* Makefile remove redundant logic [@alrs](https://github.com/alrs) [#3390](https://github.com/kubernetes/kops/pull/3390)
* Makefile: build kops in dev-mode by default [@justinsb](https://github.com/justinsb) [#3402](https://github.com/kubernetes/kops/pull/3402)
* update kubernetes-dashboard image version to v1.7.1 [@tallaxes](https://github.com/tallaxes) [#3652](https://github.com/kubernetes/kops/pull/3652)
* Bump channels version of dashboard to 1.7.1 [@so0k](https://github.com/so0k) [#3681](https://github.com/kubernetes/kops/pull/3681)
* [AWS] Properly tag public and private subnets for ELB creation [@geojaz](https://github.com/geojaz) [#3682](https://github.com/kubernetes/kops/pull/3682)
* kOps Toolbox Template Missing Variables [@gambol99](https://github.com/gambol99) [#3680](https://github.com/kubernetes/kops/pull/3680)
* Delete firewall rules on GCE [@justinsb](https://github.com/justinsb) [#3684](https://github.com/kubernetes/kops/pull/3684)
* Fix typo in SessionAffinity terraform field [@justinsb](https://github.com/justinsb) [#3685](https://github.com/kubernetes/kops/pull/3685)
* Grant kubelets system:node role in 1.8 [@justinsb](https://github.com/justinsb) [#3683](https://github.com/kubernetes/kops/pull/3683)
* Refactor gce resources into pkg/resources/gce [@justinsb](https://github.com/justinsb) [#3721](https://github.com/kubernetes/kops/pull/3721)
* Add initial docs for how to rotate a CA keypair [@justinsb](https://github.com/justinsb) [#3727](https://github.com/kubernetes/kops/pull/3727)
* GCS: Use ACLs for GCE permissions [@justinsb](https://github.com/justinsb) [#3726](https://github.com/kubernetes/kops/pull/3726)
* kOps Template YAML Formatting [@gambol99](https://github.com/gambol99) [#3706](https://github.com/kubernetes/kops/pull/3706)
* Tolerate errors from Find for tasks with WarnIfInsufficientAccess [@justinsb](https://github.com/justinsb) [#3728](https://github.com/kubernetes/kops/pull/3728)
* GCE Dump: Include instance IPs [@justinsb](https://github.com/justinsb) [#3722](https://github.com/kubernetes/kops/pull/3722)
* Route53 based example [@tigerlinux](https://github.com/tigerlinux) [#3367](https://github.com/kubernetes/kops/pull/3367)
* Include encryptionConfig setting within userdata for masters. [@KashifSaadat](https://github.com/KashifSaadat) [#3874](https://github.com/kubernetes/kops/pull/3874)
* Add Example for instance group tagging [@sergeohl](https://github.com/sergeohl) [#3879](https://github.com/kubernetes/kops/pull/3879)
* README and issue template updates [@chrislovecnm](https://github.com/chrislovecnm) [#3818](https://github.com/kubernetes/kops/pull/3818)
* kOps Template Config Value [@gambol99](https://github.com/gambol99) [#3863](https://github.com/kubernetes/kops/pull/3863)
* Fix spelling [@jonstacks](https://github.com/jonstacks) [#3864](https://github.com/kubernetes/kops/pull/3864)
* Improving UX for placeholder IP Address [@chrislovecnm](https://github.com/chrislovecnm) [#3709](https://github.com/kubernetes/kops/pull/3709)
* Bump all flannel versions to latest release - v0.9.1 [@tomdee](https://github.com/tomdee) [#3880](https://github.com/kubernetes/kops/pull/3880)
* Fix flannel version [@mikesplain](https://github.com/mikesplain) [#3953](https://github.com/kubernetes/kops/pull/3953)
* Fix flannel error on starting [@mikesplain](https://github.com/mikesplain) [#3956](https://github.com/kubernetes/kops/pull/3956)
* Fix brew docs typo [@mikesplain](https://github.com/mikesplain) [#3949](https://github.com/kubernetes/kops/pull/3949)
* kops not kOps [@chrislovecnm](https://github.com/chrislovecnm) [#3960](https://github.com/kubernetes/kops/pull/3960)
* openapi doc updates [@chrislovecnm](https://github.com/chrislovecnm) [#3948](https://github.com/kubernetes/kops/pull/3948)
* Add kubernetes-dashboard addon version constraint [@so0k](https://github.com/so0k) [#3959](https://github.com/kubernetes/kops/pull/3959)
* Initial support for nvme [@justinsb](https://github.com/justinsb) [#3969](https://github.com/kubernetes/kops/pull/3969)


* Update binary installation commands for macOS to use curl alone [@hopkinsth](https://github.com/hopkinsth) [#4260](https://github.com/kubernetes/kops/pull/4260)
* Slight changes to commands. [@darron](https://github.com/darron) [#4259](https://github.com/kubernetes/kops/pull/4259)
* Add SubnetType Tag to Subnets [@KashifSaadat](https://github.com/KashifSaadat) [#4198](https://github.com/kubernetes/kops/pull/4198)
* kOps Replace Force [@gambol99](https://github.com/gambol99) [#4275](https://github.com/kubernetes/kops/pull/4275)
* docs: upgrade.md: drop DrainAndValidateRollingUpdate note [@dkeitel](https://github.com/dkeitel) [#4282](https://github.com/kubernetes/kops/pull/4282)
* Bump alpha channel [@justinsb](https://github.com/justinsb) [#4285](https://github.com/kubernetes/kops/pull/4285)
* Validate IG MaxSize is not less than MinSize. [@mikesplain](https://github.com/mikesplain) [#4278](https://github.com/kubernetes/kops/pull/4278)


```
kops update cluster
kops update cluster --yes
```
kOps may fail to recreate all the keys on first try. If you get errors about ca key for 'ca' not being found, run `kops update cluster --yes` once more.
## Force cluster to use new secrets


## Docker Configuration
If you are using a private registry such as quay.io, you may be familiar with the inconvenience of managing the `imagePullSecrets` for each namespace. It can also be a pain to use [kOps Hooks](cluster_spec.md#hooks) with private images. To configure docker on all nodes with access to one or more private registries:
* `kops create secret --name <clustername> dockerconfig -f ~/.docker/config.json`
* `kops rolling-update cluster --name <clustername> --yes` to immediately roll all the machines so they have the new key (optional)


# Security Groups
## Use existing AWS Security Groups
**Note: Use this at your own risk, when existing SGs are used Kops will NOT ensure they are properly configured.**
**Note: Use this at your own risk. When existing SGs are used, kOps will NOT ensure they are properly configured.**
Rather than having Kops create and manage IAM Security Groups, it is possible to use an existing one. This is useful in organizations where security policies prevent tools from creating their own Security Groups.
Kops will still output any differences in the managed and your own Security Groups.
This is convenient for determining policy changes that need to be made when upgrading Kops.
Rather than having kOps create and manage EC2 Security Groups, it is possible to use an existing one. This is useful in organizations where security policies prevent tools from creating their own Security Groups.
kOps will still output any differences in the managed and your own Security Groups.
This is convenient for determining policy changes that need to be made when upgrading kOps.
**Using Managed Security Groups will not output these differences, it is up to the user to track expected changes to policies.**
NOTE:
- *Currently Kops only supports using existing Security Groups for every instance group and Load Balancer in the Cluster, not a mix of existing and managed Security Groups.
- *Currently kOps only supports using existing Security Groups for every instance group and Load Balancer in the Cluster, not a mix of existing and managed Security Groups.
This is due to the lifecycle overrides being used to prevent creation of the Security Groups related resources.*
- *Kops will add necessary rules to the security group specified in `securityGroupOverride`.*
- *kOps will add necessary rules to the security group specified in `securityGroupOverride`.*
To do this, first specify the Security Groups for the ELB (if you are using an LB) and the Instance Groups.
Example:


@@ -51,7 +51,7 @@ There are a few ways to configure your state store. In priority order:
The local filesystem state store (`file://`) is **not** functional for running clusters. It is permitted so as to enable review workflows.
For example, in a review workflow, it can be desirable to check a set of untrusted changes before they are applied to real infrastructure. If submitted untrusted changes to configuration files are naively run by `kops replace`, then Kops would overwrite the state store used by production infrastructure with changes which have not yet been approved. This is dangerous.
For example, in a review workflow, it can be desirable to check a set of untrusted changes before they are applied to real infrastructure. If submitted untrusted changes to configuration files are naively run by `kops replace`, then kOps would overwrite the state store used by production infrastructure with changes which have not yet been approved. This is dangerous.
Instead, a review workflow may download the contents of the state bucket to a local directory (using `aws s3 sync` or similar), set the state store to the local directory (e.g. `--state file:///path/to/state/store`), and then run `kops replace` and `kops update` (but for a dry-run only - _not_ `kops update --yes`). This allows the review process to make changes to a local copy of the state bucket, and check those changes, without touching the production state bucket or production infrastructure.
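A minimal sketch of that review workflow follows. The bucket and file names are hypothetical, and the `RUN_REVIEW` guard is an illustrative addition so the commands only execute when you opt in (and have `aws` and `kops` installed):

```shell
# Review-workflow sketch: copy the production state store to a local
# directory, then dry-run against the copy instead of the real bucket.
# Set RUN_REVIEW=1 to actually execute the aws/kops commands.
STATE_BUCKET="s3://example-kops-state"     # assumption: your production bucket
LOCAL_STATE="$(mktemp -d)"

if [ "${RUN_REVIEW:-0}" = "1" ]; then
  aws s3 sync "$STATE_BUCKET" "$LOCAL_STATE"          # snapshot the state store
  kops replace -f untrusted-changes.yaml --state "file://$LOCAL_STATE"
  kops update cluster --state "file://$LOCAL_STATE"   # dry-run only: no --yes
else
  echo "local state copy would live in $LOCAL_STATE"
fi
```

Because `kops update` is run without `--yes`, the production state bucket and infrastructure are never touched.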
@@ -134,7 +134,7 @@ Consider the S3 bucket living in Account B and the kops cluster living in Account
}
```
Kops will then use that bucket as if it was in the remote account, including creating appropriate IAM policies that limits nodes from doing bad things.
kOps will then use that bucket as if it were in the remote account, including creating appropriate IAM policies that limit what nodes can do.
Note that any user/role with full S3 access will be able to delete any cluster from the state store, but may not delete any instances or other things outside of S3.
## Digital Ocean (do://)
@@ -191,7 +191,7 @@ gcsClient, err := storage.New(httpClient)
## Vault (vault://)
{{ kops_feature_table(kops_added_ff='1.19') }}
Kops has support for using Vault as state store. It is currently an experimental feature and you have to enable the `VFSVaultSupport` feature flag to enable it.
kOps has support for using Vault as state store. It is currently an experimental feature and you have to enable the `VFSVaultSupport` feature flag to enable it.
The goal of the Vault store is to provide safe storage for the kops keys and secrets. It will not work as a kops registry/config store. Among other things, etcd-manager is unable to read VFS control files from Vault. Vault also cannot be used as a backend for etcd backups.
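Enabling the flag is just an environment variable. The flag name is from the docs above; the `vault://` address in the comment is a hypothetical placeholder:

```shell
# Opt in to the experimental Vault state store via the feature flag.
export KOPS_FEATURE_FLAGS=VFSVaultSupport

# Hypothetical usage once the flag is set (vault address is a placeholder):
#   kops get clusters --state vault://vault.example.com/kops
echo "feature flags: $KOPS_FEATURE_FLAGS"
```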


@@ -1,6 +1,6 @@
## Building Kubernetes clusters with Terraform
Kops can generate Terraform configurations, and then you can apply them using the `terraform plan` and `terraform apply` tools. This is very handy if you are already using Terraform, or if you want to check in the Terraform output into version control.
kOps can generate Terraform configurations, and then you can apply them using the `terraform plan` and `terraform apply` tools. This is very handy if you are already using Terraform, or if you want to check in the Terraform output into version control.
The gist of it is that, instead of letting kops apply the changes, you tell kops what you want, and then kops spits out what it wants done into a `.tf` file. **_You_** are then responsible for turning those plans into reality.
@@ -9,7 +9,7 @@ The Terraform output should be reasonably stable (i.e. the text files should only
Note that if you modify the Terraform files that kops spits out, it will override your changes with the configuration state defined by its own configs. In other terms, kops's own state is the ultimate source of truth (as far as kops is concerned), and Terraform is a representation of that state for your convenience.
### Terraform Version Compatibility
| Kops Version | Terraform Version | Feature Flag Notes |
| kOps Version | Terraform Version | Feature Flag Notes |
|--------------|-------------------|--------------------|
| >= 1.19 | >= 0.12.26, >= 0.13 | HCL2 supported by default <br>`KOPS_FEATURE_FLAGS=Terraform-0.12` is now deprecated |
| >= 1.18 | >= 0.12 | HCL2 supported by default |
@@ -49,9 +49,9 @@ $ kops create cluster \
--target=terraform
```
The above command will create kops state on S3 (defined in `--state`) and output a representation of your configuration into Terraform files. Thereafter you can preview your changes in `kubernetes.tf` and then use Terraform to create all the resources as shown below:
The above command will create kOps state on S3 (defined in `--state`) and output a representation of your configuration into Terraform files. Thereafter you can preview your changes in `kubernetes.tf` and then use Terraform to create all the resources as shown below:
Additional Terraform `.tf` files could be added at this stage to customize your deployment, but remember the kops state should continue to remain the ultimate source of truth for the Kubernetes cluster.
Additional Terraform `.tf` files could be added at this stage to customize your deployment, but remember the kOps state should continue to remain the ultimate source of truth for the Kubernetes cluster.
Initialize Terraform to set up the S3 backend and provider plugins.
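The Terraform steps can be sketched as follows. The directory variable is an assumption (wherever `kubernetes.tf` was written, often an `out/terraform` target directory); the guard keeps the sketch inert until you point it at real output:

```shell
# After `kops update cluster --target=terraform`, apply the generated files.
# KOPS_TF_DIR is an assumption: set it to the directory holding kubernetes.tf.
TF_DIR="${KOPS_TF_DIR:-}"
if [ -n "$TF_DIR" ] && command -v terraform >/dev/null 2>&1; then
  cd "$TF_DIR"
  terraform init                 # set up the S3 backend and provider plugins
  terraform plan -out plan.out   # preview the kops-generated resources
  terraform apply plan.out       # create them
else
  echo "set KOPS_TF_DIR to the directory holding kubernetes.tf to run these steps"
fi
```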


@@ -1,12 +1,12 @@
# Network Topologies in Kops
# Network Topologies in kOps
Kops supports a number of pre defined network topologies. They are separated into commonly used scenarios, or topologies.
kOps supports a number of predefined network topologies, grouped into commonly used scenarios.
Each of the supported topologies are listed below, with an example on how to deploy them.
# AWS
Kops supports the following topologies on AWS
kOps supports the following topologies on AWS
| Topology | Value | Description |
| ----------------- |----------- | ----------------------------------------------------------------------------------------------------------- |


@@ -236,7 +236,7 @@ spec:
## Adding additional storage to the instance groups
As of Kops 1.12.0 you can add additional storage _(note, presently confined to AWS)_ via the instancegroup specification.
As of kOps 1.12.0 you can add additional storage _(note, presently confined to AWS)_ via the instancegroup specification.
```YAML
---
@@ -351,7 +351,7 @@ So the procedure is:
AWS permits the creation of mixed-instance EC2 Auto Scaling groups using a [mixed instance policy](https://aws.amazon.com/blogs/aws/new-ec2-auto-scaling-groups-with-multiple-instance-types-purchase-options/), allowing users to define a target capacity and a mix of on-demand and spot instances while offloading the allocation strategy to AWS.
Support for mixed instance groups was added in Kops 1.12.0
Support for mixed instance groups was added in kOps 1.12.0
```YAML


@@ -1,6 +1,6 @@
# Upgrading from kube-up to kops
Kops let you upgrade an existing kubernetes cluster installed using kube-up, to a cluster managed by
kOps lets you upgrade an existing Kubernetes cluster installed using kube-up to a cluster managed by
kops.
** This is a slightly risky procedure, so we recommend backing up important data before proceeding.


@@ -39,7 +39,7 @@ https://go.k8s.io/bot-commands).
## Office Hours
Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
For more information, checkout the [office hours page.](office_hours.md)


@@ -1,6 +1,6 @@
## Office Hours
Kops maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [12 noon (Eastern Time)/9 am (Pacific Time)](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12) during weeks with odd "numbers". To check this weeks' number, run: `date +%V`. If the response is odd, join us on Friday for office hours!
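The week-number check described above can be scripted directly:

```shell
# Is this an "odd" ISO week (an office-hours week)?
week=$(date +%V)
# 10# forces base-10 so leading zeros (e.g. "08") are not read as octal.
if [ $((10#$week % 2)) -eq 1 ]; then
  echo "week $week is odd: office hours this Friday"
else
  echo "week $week is even: no office hours this week"
fi
```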


@@ -1,11 +1,11 @@
# Kops Releases & Versioning
# kOps Releases & Versioning
Kops intends to be backward compatible. It is always recommended using the
kOps intends to be backward compatible. It is always recommended to use the
latest version of kops with whatever version of Kubernetes you are using. We suggest
kops users run one of the [3 minor versions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew) that Kubernetes supports; however, we
do our best to support previous releases for some period.
Kops does not, however, support Kubernetes releases that have either a greater major
kOps does not, however, support Kubernetes releases that have either a greater major
release number or greater minor release number than it.
(The numbers before the first and second dots are the major and minor release numbers, respectively.)
For example, kops 1.16.0 does not support Kubernetes 1.17.0, but does
@@ -28,11 +28,11 @@ Releases which are ~~crossed out~~ _should_ work, but we suggest they be upgraded
## Release Schedule
This project does not follow the Kubernetes release schedule. Kops aims to
This project does not follow the Kubernetes release schedule. kOps aims to
provide a reliable installation experience for Kubernetes, and typically
releases about a month after the corresponding Kubernetes release. This time
allows for the Kubernetes project to resolve any issues introduced by the new
version and ensures that we can support the latest features. Kops will release
version and ensures that we can support the latest features. kOps will release
alpha and beta pre-releases for people that are eager to try the latest
Kubernetes release. Please only use pre-GA kops releases in environments that
can tolerate the quirks of new releases, and please do report any issues