Rename kops to kOps in the docs

Ciprian Hacman 2020-10-21 08:30:56 +03:00
parent 6a4d86baf9
commit 61708eae6b
75 changed files with 297 additions and 297 deletions

@ -32,7 +32,7 @@ kubectl apply -f ${addon}
An enhanced script which also adds the IAM policies is included here [cluster-autoscaler.sh](cluster-autoscaler.sh)
Question: Which ASG group should be autoscaled?
Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kops instancesgroup, and update the cluster so the maxSize propagates to the ASG.
Answer: By default, kOps creates a "nodes" instancegroup and a corresponding ASG which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instance group, and update the cluster so the maxSize propagates to the ASG.
Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM Policy. Which IAM Role should the Policy be attached to?
Answer: kOps creates two Roles, nodes.$CLUSTER_NAME and masters.$CLUSTER_NAME. Currently the example scripts run the autoscaler process on the k8s master node, so the IAM Policy should be assigned to masters.$CLUSTER_NAME (substituting that variable for your actual cluster name).
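A rough sketch of attaching that policy (assuming the policy JSON from the cluster-autoscaler documentation is saved locally as `asg-policy.json`, a hypothetical filename):
```bash
# Attach the autoscaler policy inline to the masters role created by kOps.
# CLUSTER_NAME and the policy file name are illustrative; adjust to your environment.
export CLUSTER_NAME=mycluster.example.com
aws iam put-role-policy \
  --role-name "masters.${CLUSTER_NAME}" \
  --policy-name cluster-autoscaler \
  --policy-document file://asg-policy.json
```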

@ -111,7 +111,7 @@ kube-ingress-aws-controller, which we will use:
}
```
To apply the mentioned policy you have to add [additionalPolicies with kops](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md) for your cluster, so edit your cluster.
To apply the mentioned policy, you have to add [additionalPolicies with kOps](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md) to your cluster, so edit your cluster:
```
kops edit cluster $KOPS_CLUSTER_NAME

@ -1,6 +1,6 @@
# Prometheus Operator Addon
[Prometheus Operator](https://coreos.com/operators/prometheus) creates/configures/manages Prometheus clusters atop Kubernetes. This addon deploy prometheus-operator and [kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md) in a kops cluster.
[Prometheus Operator](https://coreos.com/operators/prometheus) creates/configures/manages Prometheus clusters atop Kubernetes. This addon deploys prometheus-operator and [kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md) in a kOps cluster.
## Prerequisites

@ -1,4 +1,4 @@
# Download kops config spec file
# Download kOps config spec file
kOps operates from a config spec file that is generated during the create phase. It is uploaded to the Amazon S3 bucket that is passed in during create.
@ -17,7 +17,7 @@ export NETWORKCIDR="10.240.0.0/16"
export MASTER_SIZE="m3.large"
export WORKER_SIZE="m4.large"
```
Next you call the kops command to create the cluster in your terminal:
Next you call the kOps command to create the cluster in your terminal:
```shell
kops create cluster $NAME \
@ -33,9 +33,9 @@ kops create cluster $NAME \
--ssh-public-key=$HOME/.ssh/lab_no_password.pub
```
## kops command
## kOps command
You can simply use the kops command `kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml`
You can simply use the kOps command `kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml`
Note: for the above command to work the cluster NAME and the KOPS_STATE_STORE will have to be exported in your environment.
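For example (the values below are illustrative; use your own cluster name and state store):
```bash
export NAME=mycluster01.example.com
export KOPS_STATE_STORE=s3://your-kops-state-store-bucket
kops get --name $NAME -o yaml > a_fun_name_you_will_remember.yml
```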

@ -1,4 +1,4 @@
## kops Advisories
## kOps Advisories
- [etcd-manager certificate expiration](etcd-manager-certificate-expiration.md)

@ -5,7 +5,7 @@ kube-dns component. This component is the default DNS component installed in
Kubernetes. The vulnerability may be externally exploitable. Links below exist
with the full detail of the CVE. This exploit is not a Kubernetes specific vulnerability but exists in dnsmasq.
## Current kops Status
## Current kOps Status
`kops` release 1.7.1 addresses this CVE. This version of `kops` will upgrade and
create clusters. `kops` 1.8.0.alpha.1 release does not contain the required
@ -43,15 +43,15 @@ successfully, but we cannot validate full production stability. Local testing
in a non-production environment is always recommended. We are not able to
quantify the risk of using a non-tested version.
## Fixed kops releases
## Fixed kOps releases
We are planning to release in 1.8.x kops releases. 1.7.1 release is released with
We are planning to include the fix in the 1.8.x kOps releases. The 1.7.1 release already ships with
the needed changes. If you are using the 1.8.x alpha releases, we recommend
applying the hotfixes.
### Fixed kops Version Matrix
### Fixed kOps Version Matrix
| kops Version | Fixed | Released | Will Fix | URL |
| kOps Version | Fixed | Released | Will Fix | URL |
|---|---|---|---|---|
| 1.7.1 | Y | Y | Not Applicable | [here](https://github.com/kubernetes/kops/releases/tag/1.7.1) |
| master | Y | N | Not Applicable | [here](https://github.com/kubernetes/kops) |
@ -59,12 +59,12 @@ applying the hotfixes.
| 1.8.0.alpha.1 | N | Y | N | Not Applicable |
| 1.7.0 | N | Y | N | Not Applicable |
## kops PR fixes
## kOps PR fixes
- Fixed by @mikesplain in [#3511](https://github.com/kubernetes/kops/pull/3511)
- Fixed by @mikesplain in [#3538](https://github.com/kubernetes/kops/pull/3538)
## kops Tracking Issue
## kOps Tracking Issue
- Filed by @chrislovecnm [#3512](https://github.com/kubernetes/kops/issues/3512)

@ -5,13 +5,13 @@ overwrite the host runc binary and consequently obtain host root access. For
more information, please see the [NIST advisory][NIST advisory]
or the [kubernetes advisory][kubernetes advisory].
For kops, kops releases 1.11.1 or later include workarounds, but note that the
For kOps, releases 1.11.1 or later include workarounds, but note that the
fixes depend on the version of kubernetes you are running. Because kubernetes
1.10 and 1.11 were only validated with Docker version 17.03.x (and earlier), and
because Docker has chosen not to offer fixes for 17.03 in OSS, there is no
patched version.
**You must update to kops 1.11.1 (or later) if you are running kubernetes <=
**You must update to kOps 1.11.1 (or later) if you are running kubernetes <=
1.11.x to get this fix**
However, there is [an alternative](https://seclists.org/oss-sec/2019/q1/134) to
@ -27,7 +27,7 @@ anyway) and pods that have explicitly been granted CAP_LINUX_IMMUTABLE in the
running kubernetes 1.11 (or earlier), you should consider one of the
alternative fixes listed below**
## Summary of fixes that ship with kops >= 1.11.1
## Summary of fixes that ship with kOps >= 1.11.1
* Kubernetes 1.11 (and earlier): we mark runc with the immutable attribute.
* Kubernetes 1.12 (and later): we install a version of docker that includes a
@ -36,13 +36,13 @@ anyway) and pods that have explicitly been granted CAP_LINUX_IMMUTABLE in the
## Alternative fixes for users of kubernetes 1.11 (or earlier)
* Anticipate upgrading to kubernetes 1.12 earlier than previously planned. We are
accelerating the kops 1.12 release to facilitate this.
accelerating the kOps 1.12 release to facilitate this.
* Consider replacing the docker version with 18.06.3 or later. Note that this
will "pin" your docker version and in future you will want to remove this to
get future docker upgrades. (Do not select docker 18.06.2 on Redhat/Centos,
that version was mistakenly packaged by Docker without including the fix)
* Consider replacing just runc - some third parties have backported the fix to
runc 17.03, and our wonderful community of kops users has shared their
runc 17.03, and our wonderful community of kOps users has shared their
approaches to patching runc, see
[here](https://github.com/kubernetes/kops/issues/6459) and
[here](https://github.com/kubernetes/kops/issues/6476#issuecomment-465861406).

@ -12,8 +12,8 @@
* All unpatched versions of linux are vulnerable when running on affected hardware, across all platforms (AWS, GCE, etc)
* Patches are included in Linux 4.4.110 for 4.4, 4.9.75 for 4.9, 4.14.12 for 4.14.
* kops can run an image of your choice, so we can only provide detailed advice for the default image.
* By default, kops runs an image that includes the 4.4 kernel. An updated image is available with the patched version (4.4.110). Users running the default image are strongly encouraged to upgrade.
* kOps can run an image of your choice, so we can only provide detailed advice for the default image.
* By default, kOps runs an image that includes the 4.4 kernel. An updated image is available with the patched version (4.4.110). Users running the default image are strongly encouraged to upgrade.
* If running another image please see your distro for updated images.
## CVEs
@ -56,7 +56,7 @@ other vendors for the appropriate AMI version.
### Update Process
For all examples please replace `$CLUSTER` with the appropriate kops cluster
For all examples please replace `$CLUSTER` with the appropriate kOps cluster
name.
#### List instance groups
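A typical invocation looks like this (sketch; drop `--state` if `KOPS_STATE_STORE` is already exported):
```bash
kops get instancegroups --name $CLUSTER --state $KOPS_STATE_STORE
```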

@ -1,14 +1,14 @@
# How to use kops in AWS China Region
# How to use kOps in AWS China Region
## Getting Started
kOps used to only support Google Cloud DNS and Amazon Route53 to provision a kubernetes cluster. But since 1.6.2 `gossip` has been added, which makes it possible to provision a cluster without one of those DNS providers. Thanks to `gossip`, it's officially supported to provision a fully-functional kubernetes cluster in AWS China Region [which doesn't have Route53 so far][1] since [1.7][2]. It should support both `cn-north-1` and `cn-northwest-1`, but only `cn-north-1` is tested.
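For example, a gossip-based cluster is simply one whose name ends in `.k8s.local`; a minimal sketch (names and flags are illustrative, and the China-specific image and networking settings are covered later in this guide):
```bash
kops create cluster \
  --name mycluster.k8s.local \
  --zones cn-north-1a \
  --state $KOPS_STATE_STORE
```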
Most of the following procedures to provision a cluster are the same with [the guide to use kops in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.
Most of the following procedures to provision a cluster are the same with [the guide to use kOps in AWS](getting_started/aws.md). The differences will be highlighted and the similar parts will be omitted.
*NOTE: THE FOLLOWING PROCEDURES ARE ONLY TESTED WITH KOPS 1.10.0, 1.10.1 AND KUBERNETES 1.9.11, 1.10.12*
### [Install kops](getting_started/aws.md#install-kops)
### [Install kOps](getting_started/aws.md#install-kops)
### [Install kubectl](getting_started/aws.md#install-kubectl)
@ -53,9 +53,9 @@ aws s3api create-bucket --bucket prefix-example-com-state-store --create-bucket-
First of all, we have to solve the slow and unstable connection to the internet outside China, or the following processes won't work. One way to do that is to build a NAT instance which can route the traffic via some reliable connection. The details won't be discussed here.
### Prepare kops ami
### Prepare kOps AMI
We have to build our own AMI because there is [no official kops ami in AWS China Regions][3]. There're two ways to accomplish so.
We have to build our own AMI because there is [no official kOps AMI in AWS China Regions][3]. There are two ways to accomplish this.
#### ImageBuilder **RECOMMENDED**
@ -93,7 +93,7 @@ ${GOPATH}/bin/imagebuilder --config aws-1.9-jessie.yaml --v=8 --publish=false --
#### Copy AMI from another region
Following [the comment][5] to copy the kops image from another region, e.g. `ap-southeast-1`.
Follow [the comment][5] to copy the kOps image from another region, e.g. `ap-southeast-1`.
#### Get the AMI id

@ -4,7 +4,7 @@ This is an overview of how a Kubernetes cluster comes up, when using kops.
## From spec to complete configuration
The kops tool itself takes the (minimal) spec of a cluster that the user specifies,
The kOps tool itself takes the (minimal) spec of a cluster that the user specifies,
and computes a complete configuration, setting defaults where values are not specified,
and deriving appropriate dependencies. The "complete" specification includes the set
of all flags that will be passed to all components. All decisions about how to install the
@ -71,7 +71,7 @@ APIServer also listens on the HTTPS port (443) on all interfaces. This is a sec
and requires valid authentication/authorization to use it. This is the endpoint that node kubelets
will reach, and also that end-users will reach.
kops uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
kOps uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
/etc/kubernetes/manifests) includes annotations that will cause the dns-controller to create the
records. It creates `api.internal.mycluster.com` for use inside the cluster (using InternalIP addresses),
and it creates `api.mycluster.com` for use outside the cluster (using ExternalIP addresses).
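Once those records exist, discovery can be checked with ordinary DNS tooling, e.g. (domain taken from the example above):
```bash
dig +short api.mycluster.com            # external record, ExternalIP addresses
dig +short api.internal.mycluster.com   # internal record, InternalIP addresses
```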
@ -81,7 +81,7 @@ kops uses DNS to allow nodes and end-users to discover the api-server. The apis
etcd is where we have put all of our synchronization logic, so it is more complicated than most other pieces,
and we must be really careful when bringing it up.
kops follows CoreOS's recommend procedure for [bring-up of etcd on clouds](https://github.com/coreos/etcd/issues/5418):
kOps follows CoreOS's recommended procedure for [bring-up of etcd on clouds](https://github.com/coreos/etcd/issues/5418):
* We have one EBS volume for each etcd cluster member (in different nodes)
* We attach the EBS volume to a master, and bring up etcd on that master

@ -2,7 +2,7 @@
The `Cluster` resource contains the specification of the cluster itself.
The complete list of keys can be found at the [Cluster](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec) reference page.
The complete list of keys can be found at the [Cluster](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec) reference page.
On this page, we will expand on the more important configuration keys.
@ -50,7 +50,7 @@ spec:
You can use a valid SSL Certificate for your API Server Load Balancer. Currently, only AWS is supported.
Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of kOps 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
Note that when using `sslCertificate`, client certificate authentication, such as with the credentials generated via `kops export kubecfg`, will not work through the load balancer. As of kOps 1.19, a `kubecfg` that bypasses the load balancer may be created with the `--internal` flag to `kops update cluster` or `kops export kubecfg`. Security groups may need to be opened to allow access from the clients to the master instances' port TCP/443, for example by using the `additionalSecurityGroups` field on the master instance groups.
```yaml
spec:
@ -106,7 +106,7 @@ etcdClusters:
name: events
```
The etcd version used by kops follows the recommended etcd version for the given kubernetes version. It is possible to override this by adding the `version` key to each of the etcd clusters.
The etcd version used by kOps follows the recommended etcd version for the given kubernetes version. It is possible to override this by adding the `version` key to each of the etcd clusters.
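If you do override it, the version can be set by editing the manifest or, equivalently, with `kops set cluster` (the version number below is purely illustrative):
```bash
# Pin both etcd clusters (main and events) to an explicit etcd version
kops set cluster cluster.spec.etcdClusters[*].version=3.4.13
```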
By default, the Volumes created for the etcd clusters are `gp2` and 20GB each. The volume size, type and Iops( for `io1`) can be configured via their parameters. Conversion between `gp2` and `io1` is not supported, nor are size changes.
@ -219,7 +219,7 @@ spec:
zone: us-east-1a
```
In the case that you don't use NAT gateways or internet gateways, kOps 1.12.0 introduced the "External" flag for egress to force kops to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kops, typically with an existing cluster.
In the case that you don't use NAT gateways or internet gateways, kOps 1.12.0 introduced the "External" flag for egress to force kOps to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kOps, typically with an existing cluster.
```yaml
spec:
@ -406,7 +406,7 @@ spec:
## externalDns
This block contains configuration options for your `external-DNS` provider.
The current external-DNS provider is the kops `dns-controller`, which can set up DNS records for Kubernetes resources.
The current external-DNS provider is the kOps `dns-controller`, which can set up DNS records for Kubernetes resources.
`dns-controller` is scheduled to be phased out and replaced with `external-dns`.
```yaml
@ -867,7 +867,7 @@ spec:
## containerd
It is possible to override the [containerd](https://github.com/containerd/containerd/blob/master/README.md) daemon options for all the nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ContainerdConfig) for the full list of options.
It is possible to override the [containerd](https://github.com/containerd/containerd/blob/master/README.md) daemon options for all the nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ContainerdConfig) for the full list of options.
```yaml
spec:
@ -879,7 +879,7 @@ spec:
## docker
It is possible to override Docker daemon options for all masters and nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#DockerConfig) for the full list of options.
It is possible to override Docker daemon options for all masters and nodes in the cluster. See the [API docs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#DockerConfig) for the full list of options.
### registryMirrors
@ -933,7 +933,7 @@ docker:
## sshKeyName
In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kops to create a new one.
In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kOps to create a new one.
Providing the name of a key already in AWS is an alternative to `--ssh-public-key`.
```yaml
@ -976,7 +976,7 @@ snip
## target
In some use-cases you may wish to augment the target output with extra options. `target` supports a minimal amount of options you can do this with. Currently only the terraform target supports this, but if other use cases present themselves, kops may eventually support more.
In some use cases you may wish to augment the target output with extra options. `target` supports a minimal set of options for this. Currently only the terraform target supports this, but if other use cases present themselves, kOps may eventually support more.
```yaml
spec:
@ -992,12 +992,12 @@ Assets define alternative locations from where to retrieve static files and cont
### containerRegistry
The container registry enables kops / kubernetes to pull containers from a managed registry.
The container registry enables kOps / kubernetes to pull containers from a managed registry.
This is useful when pulling containers from the internet is not an option, eg. because the
deployment is offline / internet restricted or because of special requirements that apply
for deployed artifacts, eg. auditing of containers.
For a use case example, see [How to use kops in AWS China Region](https://github.com/kubernetes/kops/blob/master/docs/aws-china.md)
For a use case example, see [How to use kOps in AWS China Region](https://github.com/kubernetes/kops/blob/master/docs/aws-china.md)
```yaml
spec:

@ -35,7 +35,7 @@ Validation is done in validation.go, and is fairly simple - we just add an error
```go
if v.Ipam != "" {
// "azure" not supported by kops
// "azure" not supported by kOps
allErrs = append(allErrs, IsValidValue(fldPath.Child("ipam"), &v.Ipam, []string{"crd", "eni"})...)
if v.Ipam == kops.CiliumIpamEni {
@ -246,7 +246,7 @@ kops create cluster <clustername> --zones us-east-1b
...
```
If you have changed the dns or kops controllers, you would want to test them as well. To do so, run the respective snippets below before creating the cluster.
If you have changed the dns or kOps controllers, you would want to test them as well. To do so, run the respective snippets below before creating the cluster.
For dns-controller:
@ -282,7 +282,7 @@ Users would simply `kops edit cluster`, and add a value like:
```
Then `kops update cluster --yes` would create the new NodeUpConfig, which is included in the instance startup script
and thus requires a new LaunchConfiguration, and thus a `kops rolling update`. We're working on changing settings
and thus requires a new LaunchConfiguration, and thus a `kops rolling-update`. We're working on changing settings
without requiring a reboot, but likely for this particular setting it isn't the sort of thing you need to change
very often.
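In practice that workflow is roughly (sketch):
```bash
kops edit cluster $NAME                    # add the new field to the cluster spec
kops update cluster $NAME --yes            # regenerate the launch configuration
kops rolling-update cluster $NAME --yes    # roll the instances so they pick up the change
```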

@ -5,7 +5,7 @@ jump through some hoops to use it.
Recommended reading: [kubernetes API convention doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md) and [kubernetes API changes doc](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).
The kops APIs live in [pkg/apis/kops](https://github.com/kubernetes/kops/tree/master/pkg/apis/kops), both in
The kOps APIs live in [pkg/apis/kops](https://github.com/kubernetes/kops/tree/master/pkg/apis/kops), both in
that directory directly (the unversioned API) and in the versioned subdirectory (`v1alpha2`).
## Updating the generated API code

@ -11,7 +11,7 @@ While bazel works well for small projects, building with kubernetes still has a
* We strip bazel files from external dependencies, so we don't confuse gazelle
## Bazel versions:
For building kops release branches 1.14 and older, you may need to run an older version of bazel such as `0.24.0`. kops 1.15 and newer should be able to use more recent versions of bazel due to deprecation fixes that have not be backported.
For building kOps release branches 1.14 and older, you may need to run an older version of bazel such as `0.24.0`. kOps 1.15 and newer should be able to use more recent versions of bazel due to deprecation fixes that have not been backported.
## How to run

@ -1,6 +1,6 @@
# Building from source
[Installation from a binary](../install.md) is recommended for normal kops operation. However, if you want
[Installation from a binary](../install.md) is recommended for normal kOps operation. However, if you want
to build from source, it is straightforward:
If you don't have a GOPATH:
@ -29,14 +29,14 @@ Cross compiling for things like `nodeup` are now done automatically via `make no
## Debugging
To enable interactive debugging, the kops binary needs to be specially compiled to include debugging symbols.
To enable interactive debugging, the kOps binary needs to be specially compiled to include debugging symbols.
Add `DEBUGGING=true` to the `make` invocation to set the compile flags appropriately.
For example, `DEBUGGING=true make` will produce a kops binary that can be interactively debugged.
For example, `DEBUGGING=true make` will produce a kOps binary that can be interactively debugged.
### Interactive debugging with Delve
[Delve](https://github.com/derekparker/delve) can be used to interactively debug the kops binary.
[Delve](https://github.com/derekparker/delve) can be used to interactively debug the kOps binary.
After installing Delve, you can use it directly, or run it in headless mode for use with an
Interactive Development Environment (IDE).
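A headless session might look like the following (the binary path is illustrative; point it at wherever your build placed the debug-enabled kops binary):
```bash
# Start Delve in headless mode so an IDE debugger can attach on localhost:2345
dlv --listen=:2345 --headless=true --api-version=2 exec ./kops -- get clusters
```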
@ -46,6 +46,6 @@ and then configure your IDE to connect its debugger to port 2345 on localhost.
## Troubleshooting
- Make sure `$GOPATH` is set, and your [workspace](https://golang.org/doc/code.html#Workspaces) is configured.
- kops will not compile with symlinks in `$GOPATH`. See issue go issue [17451](https://github.com/golang/go/issues/17451) for more information
- kOps will not compile with symlinks in `$GOPATH`. See Go issue [17451](https://github.com/golang/go/issues/17451) for more information
- building kOps requires Go 1.15
- kOps will only compile if the source is checked out in `$GOPATH/src/k8s.io/kops`. If you try to use `$GOPATH/src/github.com/kubernetes/kops` you will run into issues with package imports not working as expected.

@ -1,6 +1,6 @@
# Installing kOps via Homebrew
Homebrew makes installing kops [very simple for MacOS.](../install.md)
Homebrew makes installing kOps [very simple for macOS](../install.md).
```bash
brew update && brew install kops
```
@ -13,7 +13,7 @@ brew update && brew install kops --HEAD
Previously we could also ship development updates to homebrew but their [policy has changed.](https://github.com/Homebrew/brew/pull/5060#issuecomment-428149176)
Note: if you already have kops installed, you need to substitute `upgrade` for `install`.
Note: if you already have kOps installed, you need to substitute `upgrade` for `install`.
You can switch between installed releases with:
```bash
@ -21,9 +21,9 @@ brew switch kops 1.17.0
brew switch kops 1.18.0
```
# Releasing kops to Brew
# Releasing kOps to Brew
Submitting a new release of kops to Homebrew is very simple.
Submitting a new release of kOps to Homebrew is very simple.
### From a homebrew machine

@ -4,7 +4,7 @@
Everything in `kops` is currently driven by a command line interface. We use [cobra](https://github.com/spf13/cobra) to define all of our command line UX.
All of the CLI code for kops can be found in `/cmd/kops` [link](https://github.com/kubernetes/kops/tree/master/cmd/kops)
All of the CLI code for kOps can be found in `/cmd/kops` [link](https://github.com/kubernetes/kops/tree/master/cmd/kops)
For instance, if you are interested in finding the entry point to `kops create cluster` you would look in `/cmd/kops/create_cluster.go`. There you would find a function called `RunCreateCluster()`. That is the entry point of the command.
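For example, from a checkout of the repository you can locate such entry points with a quick search (illustrative, not a required step):
```bash
grep -Rn "func RunCreateCluster" cmd/kops/
```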
@ -38,7 +38,7 @@ The `kops` API is a definition of struct members in Go code found [here](https:/
The base level struct of the API is `api.Cluster{}` which is defined [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/cluster.go#L40). The top level struct contains meta information about the object, such as the kind and version; the main data for the cluster itself can be found in `cluster.Spec`.
It is important to note that the API members are a representation of a Kubernetes cluster. These values are stored in the `kops` **STATE STORE** mentioned above for later use. By design kops does not store information about the state of the cloud in the state store, if it can infer it from looking at the actual state of the cloud.
It is important to note that the API members are a representation of a Kubernetes cluster. These values are stored in the `kops` **STATE STORE** mentioned above for later use. By design, kOps does not store information about the state of the cloud in the state store if it can infer it from looking at the actual state of the cloud.
More information on the API can be found [here](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md).

@ -1,20 +1,20 @@
** This file documents the release process used through kops 1.18.
** This file documents the release process used through kOps 1.18.
For the new process that will be used for 1.19, please see
[the new release process](../release-process.md)**
# Release Process
The kops project is released on an as-needed basis. The process is as follows:
The kOps project is released on an as-needed basis. The process is as follows:
1. An issue is filed proposing a new release, with a changelog since the last release
1. All [OWNERS](https://github.com/kubernetes/kops/blob/master/OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
1. The release issue is closed
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kops $VERSION is released`
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kOps $VERSION is released`
## Branches
We maintain a `release-1.16` branch for kops 1.16.X, `release-1.17` for kops 1.17.X
We maintain a `release-1.16` branch for kOps 1.16.X, `release-1.17` for kOps 1.17.X
etc.
`master` is where development happens. We create new branches from master as a
@ -24,7 +24,7 @@ to focus on the new functionality, and start cherry-picking back more selectivel
to the release branches only as needed.
Generally we don't encourage users to run older kops versions, or older
branches, because newer versions of kops should remain compatible with older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.
Releases should be done from the `release-1.X` branch. The tags should be made
@ -167,7 +167,7 @@ k8s-container-image-promoter --snapshot gcr.io/k8s-staging-kops --snapshot-tag $
cd ~/k8s/src/k8s.io/k8s.io
git add k8s.gcr.io/images/k8s-staging-kops/images.yaml
git add artifacts/manifests/k8s-staging-kops/${VERSION}.yaml
git commit -m "Promote artifacts for kops ${VERSION}"
git commit -m "Promote artifacts for kOps ${VERSION}"
git push ${USER}
hub pull-request
```
@ -195,7 +195,7 @@ relnotes -config .shipbot.yaml < /tmp/prs >> docs/releases/${DOC}-NOTES.md
* Add notes
* Publish it
## Release kops to homebrew
## Release kOps to homebrew
* Following the [documentation](homebrew.md) we must release a compatible homebrew formulae with the release.
* This should be done at the same time as the release, and we will iterate on how to improve timing of this.
@ -204,11 +204,11 @@ relnotes -config .shipbot.yaml < /tmp/prs >> docs/releases/${DOC}-NOTES.md
Once we are satisfied the release is sound:
* Bump the kops recommended version in the alpha channel
* Bump the kOps recommended version in the alpha channel
Once we are satisfied the release is stable:
* Bump the kops recommended version in the stable channel
* Bump the kOps recommended version in the stable channel
## Update conformance results with CNCF

@ -33,7 +33,7 @@ Following the examples below, kubetest will download artifacts such as a given K
### Running against an existing cluster
You can run something like the following to have `kubetest` re-use an existing cluster.
This assumes you have already built the kops binary from source. The exact path to the `kops` binary used in the `--kops` flag may differ.
This assumes you have already built the kOps binary from source. The exact path to the `kops` binary used in the `--kops` flag may differ.
```
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort" --provider=aws --deployment=kops --cluster=my.testcluster.com --kops-state=${KOPS_STATE_STORE} --kops ${GOPATH}/bin/kops --extract=ci/latest
@ -49,6 +49,6 @@ By adding the `--up` flag, `kubetest` will spin up a new cluster. In most cases,
GINKGO_PARALLEL=y kubetest --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort" --provider=aws --check-version-skew=false --deployment=kops --kops-state=${KOPS_STATE_STORE} --kops ${GOPATH}/bin/kops --kops-args="--network-cidr=192.168.1.0/24" --cluster=my.testcluster.com --up --kops-ssh-key ~/.ssh/id_rsa --kops-admin-access=0.0.0.0/0
```
If you want to run the tests against your development version of kops, you need to upload the binaries and set the environment variables as described in [Adding a new feature](adding_a_feature.md).
If you want to run the tests against your development version of kOps, you need to upload the binaries and set the environment variables as described in [Adding a new feature](adding_a_feature.md).
Since we assume you are using this cluster for testing, we leave the cluster running after the tests have finished so that you can inspect the nodes if anything unexpected happens. If you do not need this, you can add the `--down` flag. Otherwise, just delete the cluster as any other cluster: `kops delete cluster my.testcluster.com --yes`

@ -1,6 +1,6 @@
# Vendoring Go dependencies
kops uses [go mod](https://github.com/golang/go/wiki/Modules) to manage
kOps uses [go mod](https://github.com/golang/go/wiki/Modules) to manage
vendored dependencies.
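The usual `go mod` workflow applies; a sketch (the repository may wrap these steps in make targets, so check the Makefile first):
```bash
go mod tidy     # reconcile module requirements with the source
go mod vendor   # refresh the vendor/ directory
```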
## Prerequisites

@ -4,7 +4,7 @@
Kubernetes has moved from etcd2 to etcd3, which is an upgrade that involves Kubernetes API Server
downtime. Technically there is no usable upgrade path from etcd2 to etcd3 that
supports HA scenarios, but kops has enabled it using etcd-manager.
supports HA scenarios, but kOps has enabled it using etcd-manager.
Nonetheless, this remains a *higher-risk upgrade* than most other kubernetes
upgrades - you are strongly recommended to plan accordingly: back up critical
@ -29,7 +29,7 @@ bottom of this doc that outlines how to do that.
## Default upgrades
When upgrading to kubernetes 1.12 with kops 1.12, by default:
When upgrading to kubernetes 1.12 with kOps 1.12, by default:
* Calico and Cilium will be updated to a configuration that uses CRDs
* We will automatically start using etcd-manager
@ -73,20 +73,20 @@ If you would like to upgrade more gradually, we offer the following strategies
to spread out the disruption over several steps. Note that they likely involve
more disruption and are not necessarily lower risk.
### Adopt etcd-manager with kops 1.11 / kubernetes 1.11
### Adopt etcd-manager with kOps 1.11 / kubernetes 1.11
If you don't already have TLS enabled with etcd, you can adopt etcd-manager before
kops 1.12 & kubernetes 1.12 by running:
kOps 1.12 & kubernetes 1.12 by running:
```bash
kops set cluster cluster.spec.etcdClusters[*].provider=manager
```
Then you can proceed to update to kops 1.12 & kubernetes 1.12, as this becomes the default.
Then you can proceed to update to kOps 1.12 & kubernetes 1.12, as this becomes the default.
### Delay adopting etcd-manager with kops 1.12
### Delay adopting etcd-manager with kOps 1.12
To delay adopting etcd-manager with kops 1.12, specify the provider as type `legacy`:
To delay adopting etcd-manager with kOps 1.12, specify the provider as type `legacy`:
```bash
kops set cluster cluster.spec.etcdClusters[*].provider=legacy
@ -94,9 +94,9 @@ kops set cluster cluster.spec.etcdClusters[*].provider=legacy
To remove, `kops edit` your cluster and delete the `provider: Legacy` lines from both etcdCluster blocks.
### Delay adopting etcd3 with kops 1.12
### Delay adopting etcd3 with kOps 1.12
To delay adopting etcd3 with kops 1.12, specify the etcd version as 2.2.1
To delay adopting etcd3 with kOps 1.12, specify the etcd version as 2.2.1
```bash
kops set cluster cluster.spec.etcdClusters[*].version=2.2.1

@ -4,7 +4,7 @@ Before rushing-in to replicate any of the exercises, please ensure your basic en
Basic requirements:
- Configured AWS cli (aws account set-up with proper permissions/roles needed for kops). Depending on your distro, you can set-up directly from packages, or if you want the most updated version, use `pip` (python package manager) to install by running `pip install awscli` command from your local terminal. Your choice!
- Configured AWS CLI (an AWS account set up with the proper permissions/roles needed for kOps). Depending on your distro, you can install it directly from packages, or, if you want the most up-to-date version, use `pip` (the Python package manager) by running `pip install awscli` from your local terminal. Your choice!
- Local ssh key ready on `~/.ssh/id_rsa` / `id_rsa.pub`. You can generate it using `ssh-keygen` command if you don't have one already: `ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""`.
- AWS Region set.
- Throughout most of the exercises, we'll deploy our clusters in us-east-1 region (AZs: us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e and us-east-1f).

@ -196,7 +196,7 @@ If you don't want KOPS to auto-select the instance type, you can use the followi
But, before doing that, always ensure the instance types are available on your desired AZ.
NOTE: More arguments and kops commands are described [here](../cli/kops.md).
NOTE: More arguments and kOps commands are described [here](../cli/kops.md).
Let's continue exploring our cluster, but now with "kubectl":

@ -279,7 +279,7 @@ ${NAME}
A few things to note here:
- The environment variable ${NAME} was previously exported with our cluster name: mycluster01.kopsclustertest.example.org.
- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- "--cloud=aws": As kOps grows and begins to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: Amazon Web Services (aws).
- For true HA at the master level, we need to pick a region with at least 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e.
- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce that we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters).
- We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes.
@ -787,11 +787,11 @@ You can see how your cluster scaled up to 3 nodes.
**SCALING RECOMMENDATIONS:**
- Always think ahead. If you want to ensure to have the capability to scale-up to all available zones in the region, ensure to add them to the "--zones=" argument when using the "kops create cluster" command. Example: --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e. That will make things simpler later.
- For the masters, always consider "odd" numbers starting from 3. Like many other cluster, odd numbers starting from "3" are the proper way to create a fully redundant multi-master solution. In the specific case of "kops", you add masters by adding zones to the "--master-zones" argument on "kops create command".
- For the masters, always consider "odd" numbers starting from 3. Like many other clusters, odd numbers starting from "3" are the proper way to create a fully redundant multi-master solution. In the specific case of "kOps", you add masters by adding zones to the "--master-zones" argument of the "kops create cluster" command.
## DELETING OUR CLUSTER AND CHECKING OUR DNS SUBDOMAIN:
If we don't need our cluster anymore, let's use a kops command in order to delete it:
If we don't need our cluster anymore, let's use a kOps command in order to delete it:
```bash
kops delete cluster ${NAME} --yes

@ -71,13 +71,13 @@ ${NAME}
A few things to note here:
- The environment variable ${NAME} was previously exported with our cluster name: privatekopscluster.k8s.local.
- "--cloud=aws": As kops grows and begin to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: amazon web services (aws).
- "--cloud=aws": As kOps grows and begins to support more clouds, we need to tell the command to use the specific cloud we want for our deployment. In this case: Amazon Web Services (aws).
- For true HA (high availability) at the master level, we need to pick a region with 3 availability zones. For this practical exercise, we are using "us-east-1" AWS region which contains 5 availability zones (az's for short): us-east-1a, us-east-1b, us-east-1c, us-east-1d and us-east-1e. We used "us-east-1a,us-east-1b,us-east-1c" for our masters.
- The "--master-zones=us-east-1a,us-east-1b,us-east-1c" KOPS argument will actually enforce we want 3 masters here. "--node-count=2" only applies to the worker nodes (not the masters). Again, real "HA" on Kubernetes control plane requires 3 masters.
- The "--topology private" argument will ensure that all our instances will have private IP's and no public IP's from amazon.
- We are including the arguments "--node-size" and "master-size" to specify the "instance types" for both our masters and worker nodes.
- Because we are just doing a simple LAB, we are using "t3.micro" machines. Please DON'T USE t3.micro on real production systems. Start with "t3.medium" as a minimum realistic/workable machine type.
- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](../networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kOps which networking subsystem to use. More information about kOps-supported networking models can be obtained from the [kOps Kubernetes Networking Documentation](../networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
**NOTE**: You can add the "--bastion" argument here if you are not using "gossip dns" and create the bastion from the start, but if you are using "gossip-dns" this will make this cluster fail (this is a bug we are correcting now). For the moment, don't use "--bastion" when using gossip DNS. We'll show you how to get around this by first creating the private cluster, then creating the bastion instance group once the cluster is running.
@ -114,7 +114,7 @@ ip-172-20-74-55.ec2.internal master True
Your cluster privatekopscluster.k8s.local is ready
```
The ELB created by kops will expose the Kubernetes API trough "https" (configured on our ~/.kube/config file):
The ELB created by kOps will expose the Kubernetes API through "https" (configured in our ~/.kube/config file):
```bash
grep server ~/.kube/config
@ -138,7 +138,7 @@ kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --
**Explanation of this command:**
- This command will add to our cluster definition a new instance group called "bastions" with the "Bastion" role on the aws subnet "utility-us-east-1a". Note that the "Bastion" role needs the first letter to be a capital (Bastion=ok, bastion=not ok).
- The subnet "utility-us-east-1a" was created when we created our cluster the first time. KOPS add the "utility-" prefix to all subnets created on all specified AZ's. In other words, if we instructed kops to deploy our instances on us-east-1a, use-east-1b and use-east-1c, kops will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". Because we need to tell kops where to deploy our bastion (or bastions), we need to specify the subnet.
- The subnet "utility-us-east-1a" was created when we created our cluster the first time. kOps adds the "utility-" prefix to all subnets created in all specified AZs. In other words, if we instructed kOps to deploy our instances on us-east-1a, us-east-1b and us-east-1c, kOps will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". Because we need to tell kOps where to deploy our bastion (or bastions), we need to specify the subnet.
You'll see the following output in your editor, where you can change your bastion group size and add more networks.
@ -185,7 +185,7 @@ Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
```
This is "kops" creating the instance group with your bastion instance. Let's validate our cluster:
This is "kOps" creating the instance group with your bastion instance. Let's validate our cluster:
```bash
kops validate cluster
@ -216,13 +216,13 @@ Our bastion instance group is there. Also, kops created an ELB for our "bastions
```bash
aws elb --output=table describe-load-balancers|grep DNSName.\*bastion|awk '{print $4}'
bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
```
For this LAB, the "ELB" FQDN is "bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com" We can "ssh" to it:
For this LAB, the "ELB" FQDN is "bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com". We can "ssh" to it:
```bash
ssh -i ~/.ssh/id_rsa ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
ssh -i ~/.ssh/id_rsa ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
@ -250,7 +250,7 @@ Identity added: /home/kops/.ssh/id_rsa (/home/kops/.ssh/id_rsa)
Then, ssh to your bastion ELB FQDN
```bash
ssh -A ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
ssh -A ubuntu@bastion-privatekopscluste-bgl0hp-1327959377.us-east-1.elb.amazonaws.com
```
Or if you want to automate it:
@ -351,11 +351,11 @@ kops update cluster ${NAME} --yes
W0828 15:22:46.461033 5852 executor.go:109] error running task "LoadBalancer/bastion.privatekopscluster.k8s.local" (1m5s remaining to succeed): subnet changes on LoadBalancer not yet implemented: actual=[subnet-c029639a] -> expected=[subnet-23f8a90f subnet-4a24ef2e subnet-c029639a]
```
This happens because the original ELB created by "kops" only contained the subnet "utility-us-east-1a" and it can't add the additional subnets. In order to fix this, go to your AWS console and add the remaining subnets in your ELB. Then the recurring error will disappear and your bastion layer will be fully redundant.
This happens because the original ELB created by "kOps" only contained the subnet "utility-us-east-1a" and it can't add the additional subnets. In order to fix this, go to your AWS console and add the remaining subnets in your ELB. Then the recurring error will disappear and your bastion layer will be fully redundant.
**NOTE:** Always think ahead: If you are creating a fully redundant cluster (with fully redundant bastions), always configure the redundancy from the beginning.
When you are finished playing with kops, then destroy/delete your cluster:
When you are finished playing with kOps, then destroy/delete your cluster:
Finally, let's destroy our cluster:

@ -31,10 +31,10 @@ Suppose you are creating a cluster named "dev.kubernetes.example.com`:
* You can specify a `--dns-zone=example.com` (you can have subdomains in a hosted zone)
* You could also use `--dns-zone=kubernetes.example.com`
You do have to set up the DNS nameservers so your hosted zone resolves. kops used to create the hosted
You do have to set up the DNS nameservers so your hosted zone resolves. kOps used to create the hosted
zone for you, but now (as you have to set up the nameservers anyway), there doesn't seem much reason to do so!
If you don't specify a dns-zone, kops will list all your hosted zones, and choose the longest that
If you don't specify a dns-zone, kOps will list all your hosted zones, and choose the longest that
is a suffix of your cluster name. So for `dev.kubernetes.example.com`, if you have `kubernetes.example.com`,
`example.com` and `somethingelse.example.com`, it would choose `kubernetes.example.com`. `example.com` matches
but is shorter; `somethingelse.example.com` is not a suffix-match.
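If you prefer not to rely on the longest-suffix match, you can always pass the zone explicitly (names taken from the example above; the remaining flags are illustrative):
```bash
kops create cluster dev.kubernetes.example.com \
  --dns-zone kubernetes.example.com \
  --zones us-east-1a \
  --state $KOPS_STATE_STORE
```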

@ -1,6 +1,6 @@
# Getting Started with kops on AWS
# Getting Started with kOps on AWS
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md).
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md).
## Setup your environment
@ -31,7 +31,7 @@ IAMFullAccess
AmazonVPCFullAccess
```
You can create the kops IAM user from the command line using the following:
You can create the kOps IAM user from the command line using the following:
```bash
aws iam create-group --group-name kops
@ -255,24 +255,24 @@ Information regarding cluster state store location must be set when using `kops`
### Using S3 default bucket encryption
`kops` supports [default bucket encryption](https://aws.amazon.com/de/blogs/aws/new-amazon-s3-encryption-security-features/) to encrypt its state in an S3 bucket. This way, the default server side encryption set for your bucket will be used for the kops state too. You may want to use this AWS feature, e.g., for easily encrypting every written object by default or when you need to use specific encryption keys (KMS, CMK) for compliance reasons.
`kops` supports [default bucket encryption](https://aws.amazon.com/de/blogs/aws/new-amazon-s3-encryption-security-features/) to encrypt its state in an S3 bucket. This way, the default server side encryption set for your bucket will be used for the kOps state too. You may want to use this AWS feature, e.g., for easily encrypting every written object by default or when you need to use specific encryption keys (KMS, CMK) for compliance reasons.
If your S3 bucket has a default encryption set up, kops will use it:
If your S3 bucket has a default encryption set up, kOps will use it:
```bash
aws s3api put-bucket-encryption --bucket prefix-example-com-state-store --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```
If the default encryption is not set or it cannot be checked, kops will resort to using server-side AES256 bucket encryption with [Amazon S3-Managed Encryption Keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html).
If the default encryption is not set or it cannot be checked, kOps will resort to using server-side AES256 bucket encryption with [Amazon S3-Managed Encryption Keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html).
### Sharing an S3 bucket across multiple accounts
It is possible to use a single S3 bucket for storing kops state for clusters
It is possible to use a single S3 bucket for storing kOps state for clusters
located in different accounts by using [cross-account bucket policies](http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html#access-policies-walkthrough-cross-account-permissions-acctA-tasks).
kOps will be able to use buckets configured with cross-account policies by default.
In this case you may want to override the object ACLs which kops places on the
In this case you may want to override the object ACLs which kOps places on the
state files, as default AWS ACLs will make it possible for an account that has
delegated access to write files that the bucket owner cannot read.
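One way to do that, assuming a kOps version that honors the `KOPS_STATE_S3_ACL` environment variable, is to have kOps write state objects with an ACL the bucket owner can read:
```bash
# Assumption: KOPS_STATE_S3_ACL is honored by your kOps version when writing state files
export KOPS_STATE_S3_ACL=bucket-owner-full-control
```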

@ -1,6 +1,6 @@
# Commands & Arguments
This page lists the most common kops commands.
Please refer to the kops [cli reference](../cli/kops.md) for full documentation.
This page lists the most common kOps commands.
Please refer to the kOps [cli reference](../cli/kops.md) for full documentation.
## `kops create`
@ -8,7 +8,7 @@ Please refer to the kops [cli reference](../cli/kops.md) for full documentation.
### `kops create -f <cluster spec>`
`kops create -f <cluster spec>` will register a cluster using a kops spec yaml file. After the cluster has been registered you need to run `kops update cluster --yes` to create the cloud resources.
`kops create -f <cluster spec>` will register a cluster using a kOps spec yaml file. After the cluster has been registered you need to run `kops update cluster --yes` to create the cloud resources.
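For example (the file name and cluster name are illustrative):
```bash
kops create -f my-cluster.yaml                     # register the spec in the state store
kops update cluster my-cluster.example.com --yes   # then create the cloud resources
```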
### `kops create cluster`
@ -24,7 +24,7 @@ the output matches your expectations, you can apply the changes by adding `--yes
## `kops rolling-update cluster`
`kops update cluster <clustername>` updates a kubernetes cluster to match the cloud and kops specifications.
`kops rolling-update cluster <clustername>` updates a kubernetes cluster to match the cloud and kOps specifications.
As a precaution, it is safer to run it in 'preview' mode first using `kops rolling-update cluster --name <name>`, and once confirmed
the output matches your expectations, you can apply the changes by adding `--yes` to the command - `kops rolling-update cluster --name <name> --yes`.
@ -43,7 +43,7 @@ the output matches your expectations, you can perform the actual deletion by add
## `kops toolbox template`
`kops toolbox template` lets you generate a kops spec using `go` templates. This is very handy if you want to consistently manage multiple clusters.
`kops toolbox template` lets you generate a kOps spec using `go` templates. This is very handy if you want to consistently manage multiple clusters.
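A sketch of typical usage (file names are illustrative; see `kops toolbox template --help` for the full flag set):
```bash
kops toolbox template \
  --template cluster.tmpl.yaml \
  --values values.yaml > cluster.yaml
kops create -f cluster.yaml
```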
## `kops version`

@ -1,6 +1,6 @@
# Getting Started with kops on DigitalOcean
# Getting Started with kOps on DigitalOcean
**WARNING**: digitalocean support on kops is currently **alpha** meaning it is in the early stages of development and subject to change, please use with caution.
**WARNING**: digitalocean support on kOps is currently **alpha** meaning it is in the early stages of development and subject to change, please use with caution.
## Requirements
@ -31,7 +31,7 @@ export KOPS_FEATURE_FLAGS="AlphaAllowDO"
## Creating a Single Master Cluster
In the following examples, `example.com` should be replaced with the DigitalOcean domain you created when going through the [Requirements](#requirements).
Note that you kops will only be able to successfully provision clusters in regions that support block storage (AMS3, BLR1, FRA1, LON1, NYC1, NYC3, SFO2, SGP1 and TOR1).
Note that kOps will only be able to successfully provision clusters in regions that support block storage (AMS3, BLR1, FRA1, LON1, NYC1, NYC3, SFO2, SGP1 and TOR1).
```bash
# debian (the default) + flannel overlay cluster in tor1
@ -65,7 +65,7 @@ kops delete cluster dev5.k8s.local --yes
## Features Still in Development
kops for DigitalOcean currently does not support these features:
kOps for DigitalOcean currently does not support these features:
* rolling update for instance groups

@ -1,6 +1,6 @@
# Getting Started with kops on GCE
# Getting Started with kOps on GCE
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md), and installed
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md), and installed
the [gcloud tools](https://cloud.google.com/sdk/downloads).
You'll need a Google Cloud account, and make sure that gcloud is logged in to your account using `gcloud init`.
@ -13,7 +13,7 @@ You'll also need to [configure default credentials](https://developers.google.co
# Creating a state store
kops needs a state store, to hold the configuration for your clusters. The simplest configuration
kOps needs a state store, to hold the configuration for your clusters. The simplest configuration
for Google Cloud is to store it in a Google Cloud Storage bucket in the same account, so that's how we'll
start.
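A minimal sketch (the bucket name is illustrative and must be globally unique):
```bash
gsutil mb gs://kubernetes-clusters-example-com/
export KOPS_STATE_STORE=gs://kubernetes-clusters-example-com
```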
@ -30,7 +30,7 @@ You can also put this in your `~/.bashrc` or similar.
# Creating our first cluster
`kops create cluster` creates the Cluster object and InstanceGroup object you'll be working with in kops:
`kops create cluster` creates the Cluster object and InstanceGroup object you'll be working with in kOps:
PROJECT=`gcloud config get-value project`
@ -38,7 +38,7 @@ You can also put this in your `~/.bashrc` or similar.
kops create cluster simple.k8s.local --zones us-central1-a --state ${KOPS_STATE_STORE}/ --project=${PROJECT}
You can now list the Cluster objects in your kops state store (the GCS bucket
You can now list the Cluster objects in your kOps state store (the GCS bucket
we created).
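The listing itself is just (the `--state` flag is optional if `KOPS_STATE_STORE` is exported):
```bash
kops get clusters --state ${KOPS_STATE_STORE}/
```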
@ -53,7 +53,7 @@ we created).
This shows that you have one Cluster object configured, named `simple.k8s.local`. The cluster holds the cluster-wide configuration for
a kubernetes cluster - things like the kubernetes version, and the authorization policy in use.
The `kops` tool should feel a lot like `kubectl` - kops uses the same API machinery as kubernetes,
The `kops` tool should feel a lot like `kubectl` - kOps uses the same API machinery as kubernetes,
so it should behave similarly, although now you are managing kubernetes clusters, instead of managing
objects on a kubernetes cluster.
@ -118,12 +118,12 @@ Similarly, you can also see your InstanceGroups using:
<!-- TODO: Fix subnets vs regions -->
InstanceGroups are the other main kops object - an InstanceGroup manages a set of cloud instances,
InstanceGroups are the other main kOps object - an InstanceGroup manages a set of cloud instances,
which then are registered in kubernetes as Nodes. You have multiple InstanceGroups for different types
of instances / Nodes - in our simple example we have one for our master (which only has a single member),
and one for our nodes (and we have two nodes configured).
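For example, a sketch of inspecting them (cluster name as above):

```bash
kops get instancegroups --name simple.k8s.local --state ${KOPS_STATE_STORE}/
```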
We'll see a lot more of Cluster objects and InstanceGroups as we use kops to reconfigure clusters. But let's get
We'll see a lot more of Cluster objects and InstanceGroups as we use kOps to reconfigure clusters. But let's get
on with our first cluster.
# Creating a cluster
@ -133,7 +133,7 @@ but didn't actually create any instances or other cloud objects in GCE. To do t
`kops update cluster`.
`kops update cluster` without `--yes` will show you a preview of all the changes will be made;
it is very useful to see what kops is about to do, before actually making the changes.
it is very useful to see what kOps is about to do, before actually making the changes.
Run `kops update cluster simple.k8s.local` and peruse the changes.
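A sketch of the preview-then-apply flow:

```bash
kops update cluster simple.k8s.local --state ${KOPS_STATE_STORE}/        # preview only
kops update cluster simple.k8s.local --state ${KOPS_STATE_STORE}/ --yes  # apply the changes
```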
@ -143,7 +143,7 @@ We're now finally ready to create the object: `kops update cluster simple.k8s.lo
<!-- TODO: We don't need this on GCE; remove SSH key requirement -->
Your cluster is created in the background - kops actually creates GCE Managed Instance Groups
Your cluster is created in the background - kOps actually creates GCE Managed Instance Groups
that run the instances; this ensures that even if instances are terminated, they will automatically
be relaunched by GCE and your cluster will self-heal.
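One way to watch for the cluster coming up (a sketch; the `--wait` flag is available in recent kOps releases):

```bash
kops validate cluster --name simple.k8s.local --state ${KOPS_STATE_STORE}/ --wait 10m
kubectl get nodes
```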
@ -152,7 +152,7 @@ After a few minutes, you should be able to do `kubectl get nodes` and your first
# Enjoy
At this point you have a kubernetes cluster - the core commands to do so are as simple as `kops create cluster`
and `kops update cluster`. There's a lot more power in kops, and even more power in kubernetes itself, so we've
and `kops update cluster`. There's a lot more power in kOps, and even more power in kubernetes itself, so we've
put a few jumping off places here. But when you're done, don't forget to [delete your cluster](#deleting-the-cluster).
* [Manipulate InstanceGroups](../tutorial/working-with-instancegroups.md) to add more nodes, change image

View File

@ -1,7 +1,7 @@
# kubectl cluster admin configuration
When you run `kops update cluster` during cluster creation, you automatically get a kubectl configuration for accessing the cluster. This configuration gives you full admin access to the cluster.
If you want to create this configuration on other machine, you can run the following as long as you have access to the kops state store.
If you want to create this configuration on another machine, you can run the following as long as you have access to the kOps state store.
To create the kubecfg configuration settings for use with kubectl:
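A sketch of the command (the cluster name and state store bucket are placeholders):

```bash
export KOPS_STATE_STORE=s3://your-state-store-bucket
kops export kubecfg ${CLUSTER_NAME}
```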

View File

@ -1,7 +1,7 @@
# Getting Started with kops on OpenStack
# Getting Started with kOps on OpenStack
OpenStack support on kops is currently **beta**, which means that OpenStack support is in good shape and could be used for production. However, it is not as rigorously tested as the stable cloud providers and there are some features not supported. In particular, kops tries to support a wide variety of OpenStack setups and not all of them are equally well tested.
OpenStack support on kOps is currently **beta**, which means that OpenStack support is in good shape and could be used for production. However, it is not as rigorously tested as the stable cloud providers and there are some features not supported. In particular, kOps tries to support a wide variety of OpenStack setups and not all of them are equally well tested.
## OpenStack requirements
@ -12,7 +12,7 @@ In order to deploy a kops-managed cluster on OpenStack, you need the following O
* Glance (image)
* Cinder (block storage)
In addition, kops can make use of the following services:
In addition, kOps can make use of the following services:
* Swift (object store)
* Designate (dns)
@ -33,7 +33,7 @@ We recommend using [Application Credentials](https://docs.openstack.org/keystone
## Environment Variables
kops stores its configuration in a state store. Before creating a cluster, we need to export the path to the state store:
kOps stores its configuration in a state store. Before creating a cluster, we need to export the path to the state store:
```bash
export KOPS_STATE_STORE=swift://<bucket-name> # where <bucket-name> is the name of the Swift container to use for kops state
@ -83,9 +83,9 @@ kops delete cluster my-cluster.k8s.local --yes
## Compute and volume zone names does not match
Some of the openstack users do not have compute zones named exactly the same than volume zones. Good example is that there are several compute zones for instance `zone-1`, `zone-2` and `zone-3`. Then there is only one volumezone which is usually called `nova`. By default this is problem in kops, because kops assumes that if you are deploying things to `zone-1` there should be compute and volume zone called `zone-1`.
Some OpenStack deployments do not have compute zones named exactly the same as their volume zones. A typical example is having several compute zones such as `zone-1`, `zone-2` and `zone-3`, while there is only one volume zone, usually called `nova`. By default this is a problem for kOps, because kOps assumes that if you are deploying things to `zone-1` there should be a compute and a volume zone called `zone-1`.
However, you can still get kops working in your openstack by doing following:
However, you can still get kOps working in your OpenStack environment by doing the following:
Create cluster using your compute zones:
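A hedged sketch of such a command (zone names are illustrative, and flags such as image or external network are omitted):

```bash
kops create cluster \
  --cloud openstack \
  --zones zone-1,zone-2,zone-3 \
  --networking calico \
  mycluster.k8s.local
```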
@ -159,9 +159,9 @@ In clusters without loadbalancer, the address of a single random master will be
# Using existing OpenStack network
You can have kops reuse existing network components instead of provisioning one per cluster. As OpenStack support is still beta, we recommend you take extra care when deleting clusters and ensure that kops do not try to remove any resources not belonging to the cluster.
You can have kOps reuse existing network components instead of provisioning one per cluster. As OpenStack support is still beta, we recommend you take extra care when deleting clusters and ensure that kOps do not try to remove any resources not belonging to the cluster.
## Let kops provision new subnets within an existing network
## Let kOps provision new subnets within an existing network
Use an existing network by using `--network <network id>`.
@ -175,7 +175,7 @@ spec:
## Use existing networks
Instead of kops creating new subnets for the cluster, you can reuse an existing subnet.
Instead of kOps creating new subnets for the cluster, you can reuse an existing subnet.
When you create a new cluster, you can specify subnets using the `--subnets` and `--utility-subnets` flags.
@ -208,7 +208,7 @@ kops create cluster \
# Using with self-signed certificates in OpenStack
kOps can be configured to use insecure mode towards OpenStack. However, this is not recommended, as the OpenStack cloud provider in Kubernetes does not support it.
If you use insecure flag in kops it might be that the cluster does not work correctly.
If you use the insecure flag in kOps, the cluster might not work correctly.
```yaml
spec:

View File

@ -16,7 +16,7 @@ Read through the [networking page](../networking.md) and choose a stable CNI.
## Private topology
By default kops will create clusters using public topology, where all nodes and the Kubernetes API are exposed on public Internet.
By default kOps will create clusters using public topology, where all nodes and the Kubernetes API are exposed on the public Internet.
Read through the [topology page](../topology.md) to understand the options you have running nodes in internal IP addresses and using a [bastion](../bastion.md) for SSH access.
@ -24,7 +24,7 @@ Read through the [topology page](../topology.md) to understand the options you h
The `kops` command allows you to configure some aspects of your cluster, but for almost any production cluster, you will want to change settings that are not accessible through the CLI. The cluster spec can be exported as a YAML file and checked into version control.
Read through the [cluster spec page](../cluster_spec.md) and familiarize yourself with the key options that kops offers.
Read through the [cluster spec page](../cluster_spec.md) and familiarize yourself with the key options that kOps offers.
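A sketch of the export-and-apply loop mentioned above (the cluster name is a placeholder):

```bash
kops get cluster ${CLUSTER_NAME} -o yaml > ${CLUSTER_NAME}.yaml
# review the file, check it into version control, edit as needed, then:
kops replace -f ${CLUSTER_NAME}.yaml
kops update cluster ${CLUSTER_NAME} --yes
```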
## Templating

View File

@ -1,4 +1,4 @@
# Getting Started with kops on Spot Ocean
# Getting Started with kOps on Spot Ocean
[Ocean](https://spot.io/products/ocean/) by [Spot](https://spot.io/) simplifies infrastructure management for Kubernetes. With robust, container-driven infrastructure auto-scaling and intelligent right-sizing for container resource requirements, operations can literally "set and forget" the underlying cluster.
@ -20,7 +20,7 @@ Ocean not only intelligently leverages Spot Instances and reserved capacity to r
## Prerequisites
Make sure you have [installed kops](../install.md) and [installed kubectl](../install.md#installing-other-dependencies).
Make sure you have [installed kOps](../install.md) and [installed kubectl](../install.md#installing-other-dependencies).
## Setup your environment

View File

@ -40,7 +40,7 @@ Adding ECR permissions will extend the IAM policy documents as below:
The additional permissions are:
```json
{
"Sid": "kopsK8sECR",
"Sid": "kOpsK8sECR",
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
@ -92,7 +92,7 @@ It's important to note that externalPolicies will only handle the attachment and
## Adding Additional Policies
Sometimes you may need to extend the kops IAM roles to add additional policies. You can do this
Sometimes you may need to extend the kOps IAM roles to add additional policies. You can do this
through the `additionalPolicies` spec field. For instance, let's say you want
to add DynamoDB and Elasticsearch permissions to your nodes.
@ -151,7 +151,7 @@ Now you can run a cluster update to have the changes take effect:
kops update cluster ${CLUSTER_NAME} --yes
```
You can have an additional policy for each kops role (node, master, bastion). For instance, if you wanted to apply one set of additional permissions to the master instances, and another to the nodes, you could do the following:
You can have an additional policy for each kOps role (node, master, bastion). For instance, if you wanted to apply one set of additional permissions to the master instances, and another to the nodes, you could do the following:
```yaml
spec:

View File

@ -4,18 +4,18 @@
<hr>
</div>
# kops - Kubernetes Operations
# kOps - Kubernetes Operations
[GoDoc]: https://pkg.go.dev/k8s.io/kops
[GoDoc Widget]: https://godoc.org/k8s.io/kops?status.svg
[GoDoc]: https://pkg.go.dev/k8s.io/kops
[GoDoc Widget]: https://godoc.org/k8s.io/kops?status.svg
The easiest way to get a production grade Kubernetes cluster up and running.
## 2020-05-06 etcd-manager Certificate Expiration Advisory
kops versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kops to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./advisories/etcd-manager-certificate-expiration.md) for the full details.
kOps versions released today contain a **critical fix** to etcd-manager: 1 year after creation (or first adopting etcd-manager), clusters will stop responding due to expiration of a TLS certificate. Upgrading kOps to 1.15.3, 1.16.2, 1.17.0-beta.2, or 1.18.0-alpha.3 is highly recommended. Please see the [advisory](./advisories/etcd-manager-certificate-expiration.md) for the full details.
## What is kops?
## What is kOps?
We like to think of it as `kubectl` for clusters.

View File

@ -1,4 +1,4 @@
# Installing kops (Binaries)
# Installing kOps (Binaries)
## MacOS

View File

@ -1,6 +1,6 @@
# Labels
There are two main types of labels that kops can create:
There are two main types of labels that kOps can create:
* `cloudLabels` become tags in AWS on the instances
* `nodeLabels` become labels on the k8s Node objects
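A minimal sketch of setting both kinds on an instance group (the label keys and values are illustrative):

```bash
kops edit ig nodes --name ${CLUSTER_NAME}
# then, under spec:, add for example:
#   cloudLabels:
#     team: platform        # becomes an AWS tag on the instances
#   nodeLabels:
#     workload: general     # becomes a label on the Kubernetes Node objects
```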

View File

@ -1,10 +1,10 @@
# Using A Manifest to Manage kops Clusters
# Using A Manifest to Manage kOps Clusters
This document also applies to using the `kops` API to customize a Kubernetes cluster with or without using YAML or JSON.
## Table of Contents
* [Using A Manifest to Manage kops Clusters](#using-a-manifest-to-manage-kops-clusters)
* [Using A Manifest to Manage kOps Clusters](#using-a-manifest-to-manage-kops-clusters)
* [Background](#background)
* [Exporting a Cluster](#exporting-a-cluster)
* [YAML Examples](#yaml-examples)
@ -19,7 +19,7 @@ This document also applies to using the `kops` API to customize a Kubernetes clu
Because of the above statement, `kops` includes an API which allows users to use YAML or JSON manifests to manage their `kops`-created Kubernetes installations. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a `kops` Kubernetes instance with a manifest. All of these values are also usable via the interactive editor with `kops edit`.
> You can see all the options that are currently supported in kOps [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
> You can see all the options that are currently supported in kOps [here](https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go) or [more prettily here](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec)
The following is a list of the benefits of using a file to manage instances.
@ -30,7 +30,7 @@ The following is a list of the benefits of using a file to manage instances.
## Exporting a Cluster
At this time you must run `kops create cluster` and then export the YAML from the state store. We plan in the future to have the capability to generate kops YAML via the command line. The following is an example of creating a cluster and exporting the YAML.
At this time you must run `kops create cluster` and then export the YAML from the state store. We plan in the future to have the capability to generate kOps YAML via the command line. The following is an example of creating a cluster and exporting the YAML.
```shell
export NAME=k8s.example.com
@ -290,7 +290,7 @@ spec:
api:
```
Full documentation is accessible via [godoc](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec).
Full documentation is accessible via [godoc](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#ClusterSpec).
The `ClusterSpec` allows a user to configure values such as the Docker log driver, the Kubernetes API server log level, the VPC to reuse (`NetworkID`), and the Kubernetes version.
@ -321,7 +321,7 @@ metadata:
spec:
```
Full documentation is accessible via [godocs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec).
Full documentation is accessible via [godocs](https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#InstanceGroupSpec).
Instance Groups map to Auto Scaling Groups in AWS, and Instance Groups in GCE. They are an API level description of a group of compute instances used as Masters or Nodes.
@ -329,10 +329,10 @@ More documentation is available in the [Instance Group](instance_groups.md) docu
## Closing Thoughts
Using YAML or JSON-based configuration for building and managing kops clusters is powerful, but use this strategy with caution.
Using YAML or JSON-based configuration for building and managing kOps clusters is powerful, but use this strategy with caution.
- If you do not need to define or customize a value, let kops set that value. Setting too many values prevents kops from doing its job in setting up the cluster and you may end up with strange bugs.
- If you end up with strange bugs, try letting kops do more.
- If you do not need to define or customize a value, let kOps set that value. Setting too many values prevents kOps from doing its job in setting up the cluster and you may end up with strange bugs.
- If you end up with strange bugs, try letting kOps do more.
- Be cautious, take care, and test outside of production!
If you need to run a custom version of Kubernetes Controller Manager, set `kubeControllerManager.image` and update your cluster. This is the beauty of using a manifest for your cluster!

View File

@ -1,4 +1,4 @@
# kops & MFA
# kOps & MFA
You can secure `kops` with MFA by creating an AWS role & policy that requires MFA to access the `KOPS_STATE_STORE` bucket. Unfortunately the Go AWS SDK does not transparently support assuming roles with required MFA. This may change in a future version. `kops` plans to support this behavior eventually. You can track progress in this [Github issue](https://github.com/kubernetes/kops/issues/226). If you'd like to use MFA with `kops`, you'll need a workaround until then.

View File

@ -39,7 +39,7 @@ You can specify the network provider via the `--networking` command line switch.
### Kubenet (default)
Kubernetes Operations (kops) uses `kubenet` networking by default. This sets up networking on AWS using VPC
Kubernetes Operations (kOps) uses `kubenet` networking by default. This sets up networking on AWS using VPC
networking, where the master allocates a /24 CIDR to each Node, drawing from the Node network.
Using `kubenet` mode, routes for each node are then configured in the AWS VPC routing tables.
@ -68,7 +68,7 @@ For more on the `kubenet` networking provider, please see the [`kubenet` section
and libraries for writing plugins to configure network interfaces in Linux containers. Kubernetes
has built in support for CNI networking components.
Several CNI providers are currently built into kops:
Several CNI providers are currently built into kOps:
* [AWS VPC](networking/aws-vpc.md)
* [Calico](networking/calico.md)
@ -81,7 +81,7 @@ Several CNI providers are currently built into kops:
* [Weave](networking/weave.md)
kOps makes it easy for cluster operators to choose one of these options. The manifests for the providers
are included with kops, and you simply use `--networking <provider-name>`. Replace the provider name
are included with kOps, and you simply use `--networking <provider-name>`. Replace the provider name
with the name listed in the provider's documentation (from the list above) when you run
`kops cluster create`. For instance, for a default Calico installation, execute the following:

View File

@ -11,7 +11,7 @@ To use Amazon VPC, specify the following in the cluster spec:
amazonvpc: {}
```
in the cluster spec file or pass the `--networking amazonvpc` option on the command line to kops:
in the cluster spec file or pass the `--networking amazonvpc` option on the command line to kOps:
```sh
export ZONES=<mylistofzones>
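# A hedged sketch of the corresponding create command; the cluster name is a placeholder.
kops create cluster --zones ${ZONES} --networking amazonvpc --yes myawsvpccluster.example.com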

View File

@ -56,7 +56,7 @@ To enable this mode in a cluster, add the following to the cluster spec:
crossSubnet: true
```
In the case of AWS, EC2 instances have source/destination checks enabled by default.
When you enable cross-subnet mode in kops 1.19+, it is equivalent to:
When you enable cross-subnet mode in kOps 1.19+, it is equivalent to:
```yaml
networking:
calico:
@ -64,14 +64,14 @@ When you enable cross-subnet mode in kops 1.19+, it is equivalent to:
IPIPMode: CrossSubnet
```
An IAM policy will be added to all nodes to allow Calico to execute `ec2:DescribeInstances` and `ec2:ModifyNetworkInterfaceAttribute`, as required when [awsSrcDstCheck](https://docs.projectcalico.org/reference/resources/felixconfig#spec) is set.
For older versions of kops, an addon controller ([k8s-ec2-srcdst](https://github.com/ottoyiu/k8s-ec2-srcdst))
For older versions of kOps, an addon controller ([k8s-ec2-srcdst](https://github.com/ottoyiu/k8s-ec2-srcdst))
will be deployed as a Pod (which will be scheduled on one of the masters) to facilitate the disabling of said source/destination address checks.
Only the control plane nodes have an IAM policy to allow k8s-ec2-srcdst to execute `ec2:ModifyInstanceAttribute`.
### Configuring Calico MTU
The Calico MTU is configurable by editing the cluster and setting `mtu` option in the calico configuration.
AWS VPCs support jumbo frames, so on cluster creation kops sets the calico MTU to 8912 bytes (9001 minus overhead).
AWS VPCs support jumbo frames, so on cluster creation kOps sets the calico MTU to 8912 bytes (9001 minus overhead).
For more details on Calico MTU please see the [Calico Docs](https://docs.projectcalico.org/networking/mtu#determine-mtu-size).
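A sketch of what that looks like in practice (the value shown is just the AWS default mentioned above):

```bash
kops edit cluster ${CLUSTER_NAME}
# then, under spec.networking, set for example:
#   calico:
#     mtu: 8912
```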
@ -113,7 +113,7 @@ For more details on enabling the eBPF dataplane please refer the [Calico Docs](h
### Configuring WireGuard
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.16') }}
Calico supports WireGuard to encrypt pod-to-pod traffic. If you enable this options, WireGuard encryption is automatically enabled for all nodes. At the moment, kops installs WireGuard automatically only when the host OS is *Ubuntu*. For other OSes, WireGuard has to be part of the base image or installed via a hook.
Calico supports WireGuard to encrypt pod-to-pod traffic. If you enable this option, WireGuard encryption is automatically enabled for all nodes. At the moment, kOps installs WireGuard automatically only when the host OS is *Ubuntu*. For other OSes, WireGuard has to be part of the base image or installed via a hook.
For more details of Calico WireGuard please refer to the [Calico Docs](https://docs.projectcalico.org/security/encrypt-cluster-pod-traffic).
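A hedged sketch of enabling it; the `wireguardEnabled` field name is assumed from the kOps Calico options:

```bash
kops edit cluster ${CLUSTER_NAME}
# under spec.networking:
#   calico:
#     wireguardEnabled: true
```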
@ -142,8 +142,8 @@ For more general information on options available with Calico see the official [
### New nodes are taking minutes for syncing ip routes and new pods on them can't reach kubedns
This is caused by nodes in the Calico etcd nodestore no longer existing. Due to the ephemeral nature of AWS EC2 instances, new nodes are brought up with different hostnames, and nodes that are taken offline remain in the Calico nodestore. This is unlike most datacentre deployments where the hostnames are mostly static in a cluster. Read more about this issue at https://github.com/kubernetes/kops/issues/3224
This has been solved in kops 1.9.0, when creating a new cluster no action is needed, but if the cluster was created with a prior kops version the following actions should be taken:
This has been solved in kOps 1.9.0. When creating a new cluster no action is needed, but if the cluster was created with a prior kOps version the following actions should be taken:
* Use kops to update the cluster ```kops update cluster <name> --yes``` and wait for calico-kube-controllers deployment and calico-node daemonset pods to be updated
* Use kOps to update the cluster ```kops update cluster <name> --yes``` and wait for calico-kube-controllers deployment and calico-node daemonset pods to be updated
* Decommission all invalid nodes, [see here](https://docs.projectcalico.org/v2.6/usage/decommissioning-a-node)
* All nodes that are deleted from the cluster after this actions should be cleaned from calico's etcd storage and the delay programming routes should be solved.

View File

@ -27,9 +27,9 @@ kops create cluster \
### Using etcd for agent state sync
This feature is in beta state as of kops 1.18.
This feature is in beta state as of kOps 1.18.
By default, Cilium will use CRDs for synchronizing agent state. This can cause performance problems on larger clusters. As of kops 1.18, kops can manage an etcd cluster using etcd-manager dedicated for cilium agent state sync. The [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-external-etcd/) contains recommendations for when this must be enabled.
By default, Cilium will use CRDs for synchronizing agent state. This can cause performance problems on larger clusters. As of kOps 1.18, kOps can manage an etcd cluster using etcd-manager dedicated for cilium agent state sync. The [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-external-etcd/) contain recommendations for when this must be enabled.
For new clusters you can use the `cilium-etcd` networking provider:
@ -75,7 +75,7 @@ Then enable etcd as kvstore:
### Enabling BPF NodePort
As of kops 1.19, BPF NodePort is enabled by default for new clusters if the kubernetes version is 1.12 or newer. It can be safely enabled as of kops 1.18.
As of kOps 1.19, BPF NodePort is enabled by default for new clusters if the kubernetes version is 1.12 or newer. It can be safely enabled as of kOps 1.18.
In this mode, the cluster is fully functional without kube-proxy, with Cilium replacing kube-proxy's NodePort implementation using BPF.
Read more about this in the [Cilium docs](https://docs.cilium.io/en/stable/gettingstarted/nodeport/)
@ -103,7 +103,7 @@ kops rolling-update cluster --yes
### Enabling Cilium ENI IPAM
This feature is in beta state as of kops 1.18.
This feature is in beta state as of kOps 1.18.
As of kOps 1.18, you can have Cilium provision AWS managed addresses and attach them directly to Pods much like Lyft VPC and AWS VPC. See [the Cilium docs for more information](https://docs.cilium.io/en/v1.6/concepts/ipam/eni/)
@ -118,7 +118,7 @@ When using ENI IPAM you need to disable masquerading in Cilium as well.
Note that since Cilium Operator is the entity that interacts with the EC2 API to provision and attach ENIs, we force it to run on the master nodes when this IPAM is used.
Also note that this feature has only been tested on the default kops AMIs.
Also note that this feature has only been tested on the default kOps AMIs.
#### Enabling Encryption in Cilium
{{ kops_feature_table(kops_added_default='1.19', k8s_min='1.17') }}

View File

@ -13,7 +13,7 @@ To use the Lyft CNI, specify the following in the cluster spec.
lyftvpc: {}
```
in the cluster spec file or pass the `--networking lyftvpc` option on the command line to kops:
in the cluster spec file or pass the `--networking lyftvpc` option on the command line to kOps:
```console
$ export ZONES=mylistofzones

View File

@ -1,6 +1,6 @@
# Romana
Support for Romana is deprecated as of kops 1.18 and removed in kops 1.19.
Support for Romana is deprecated as of kOps 1.18 and removed in kOps 1.19.
## Installing

View File

@ -23,7 +23,7 @@ kops create cluster \
### Configuring Weave MTU
The Weave MTU is configurable by editing the cluster and setting `mtu` option in the weave configuration.
AWS VPCs support jumbo frames, so on cluster creation kops sets the weave MTU to 8912 bytes (9001 minus overhead).
AWS VPCs support jumbo frames, so on cluster creation kOps sets the weave MTU to 8912 bytes (9001 minus overhead).
```yaml
spec:
@ -64,7 +64,7 @@ Note that it is possible to break the cluster networking if flags are improperly
The Weave network encryption is configurable by creating a weave network secret password.
Weaveworks recommends choosing a secret with [at least 50 bits of entropy](https://www.weave.works/docs/net/latest/tasks/manage/security-untrusted-networks/).
If no password is supplied, kops will generate one at random.
If no password is supplied, kOps will generate one at random.
```sh
cat /dev/urandom | tr -dc A-Za-z0-9 | head -c9 > password
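# Hedged: register the generated password as the weave secret (command form assumed from the kOps docs)
kops create secret weavepassword -f password --name ${CLUSTER_NAME}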

View File

@ -1,7 +1,7 @@
### **Node Authorization Service**
:warning: The node authorization service is deprecated.
As of Kubernetes 1.19 kops will, on AWS, ignore the `nodeAuthorization` field of the cluster spec and
As of Kubernetes 1.19 kOps will, on AWS, ignore the `nodeAuthorization` field of the cluster spec and
worker nodes will obtain client certificates for kubelet and other purposes through kops-controller.
The [node authorization service] is an experimental service which, in the absence of a kops-apiserver, provides the distribution of tokens to the worker nodes. Bootstrap tokens provide worker nodes with a short-lived credential used to request a kubeconfig client certificate. The gist of the flow is:
@ -10,7 +10,7 @@ The [node authorization service] is an experimental service which in the absence
- the token is distributed to the node by _some_ means and then used as the bearer token of the initial request to the kubernetes api.
- the token itself is bound to the cluster role which grants permission to generate a CSR; an additional cluster role allows the controller to auto-approve these CSR requests as well.
- two certificates are generated by the kubelet using bootstrap process, one for the kubelet api and the other a client certificate to the kubelet itself.
- the client certificate by default is added into the system:nodes rbac group _(note, if you are using PSP this is automatically bound by kops on your behalf)_.
- the client certificate by default is added into the system:nodes rbac group _(note, if you are using PSP this is automatically bound by kOps on your behalf)_.
- the kubelet at this point has a server certificate and the client api certificate and good to go.
#### **Integration with kOps**

View File

@ -1,7 +1,7 @@
# Kubernetes Addons and Addon Manager
## Addons
With kops you manage addons by using kubectl.
With kOps you manage addons by using kubectl.
(For a description of the addon-manager, please see [addon_management](#addon-management).)
@ -154,7 +154,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons
*This addon is deprecated. Please use [external-dns](https://github.com/kubernetes-sigs/external-dns) instead.*
Please note that kops installs a Route53 DNS controller automatically (it is required for cluster discovery).
Please note that kOps installs a Route53 DNS controller automatically (it is required for cluster discovery).
The functionality of the route53-mapper overlaps with the dns-controller, but some users will prefer to
use one or the other.
[README for the included dns-controller](https://github.com/kubernetes/kops/blob/master/dns-controller/README.md)
@ -172,27 +172,27 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons
## Addon Management
kops incorporates management of some addons; we _have_ to manage some addons which are needed before
kOps incorporates management of some addons; we _have_ to manage some addons which are needed before
the kubernetes API is functional.
In addition, kops offers end-user management of addons via the `channels` tool (which is still experimental,
In addition, kOps offers end-user management of addons via the `channels` tool (which is still experimental,
but we are working on making it a recommended part of kubernetes addon management). We ship some
curated addons in the [addons directory](https://github.com/kubernetes/kops/tree/master/addons), more information in the [addons document](addons.md).
kops uses the `channels` tool for system addon management also. Because kops uses the same tool
kOps uses the `channels` tool for system addon management also. Because kOps uses the same tool
for *system* addon management as it does for *user* addon management,
addons installed by kops as part of cluster bringup can be managed alongside additional addons.
(Though note that bootstrap addons are much more likely to be replaced during a kops upgrade).
addons installed by kOps as part of cluster bringup can be managed alongside additional addons.
(Though note that bootstrap addons are much more likely to be replaced during a kOps upgrade).
The general kops philosophy is to try to make the set of bootstrap addons minimal, and
The general kOps philosophy is to try to make the set of bootstrap addons minimal, and
to make installation of subsequent addons easy.
Thus, `kube-dns` and the networking overlay (if any) are the canonical bootstrap addons.
But addons such as the dashboard or the EFK stack are easily installed after kops bootstrap,
But addons such as the dashboard or the EFK stack are easily installed after kOps bootstrap,
with a `kubectl apply -f https://...` or with the channels tool.
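For example, a sketch with the channels tool (the invocation form is assumed from the tool's usage; the URL is the dashboard addon referenced below):

```bash
channels apply channel https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/addon.yaml --yes
```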
In future, we may as a convenience make it easy to add optional addons to the kops manifest,
In the future, we may, as a convenience, make it easy to add optional addons to the kOps manifest,
though this will just be a convenience wrapper around doing it manually.
### Update BootStrap Addons
@ -205,7 +205,7 @@ If you want to update the bootstrap addons, you can run the following command to
### Versioning
The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
of the various manifest versions that are available. In this way kops can manage updates
of the various manifest versions that are available. In this way kOps can manage updates
as new versions of the addon are released. For example,
the [dashboard addon](https://github.com/kubernetes/kops/blob/master/addons/kubernetes-dashboard/addon.yaml)
lists multiple versions.

View File

@ -16,7 +16,7 @@ Take a snapshot of your EBS volumes; export all your data from kubectl etc.**
Limitations:
* kops splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but you will lose your events history.
* kOps splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but you will lose your events history.
* Doubtless others not yet known - please open issues if you encounter them!
### Overview
@ -190,7 +190,7 @@ This method provides zero-downtime when migrating a cluster from `kube-up` to `k
Limitations:
- If you're using the default networking (`kubenet`), there is an account limit of 50 entries in a VPC's route table. If your cluster contains more than ~25 nodes, this strategy, as-is, will not work.
+ Shifting to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, and similar. See the [kops networking docs](../networking.md) for more information.
+ Shifting to a CNI-compatible overlay network like `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, and similar. See the [kOps networking docs](../networking.md) for more information.
+ One solution is to gradually shift traffic from one cluster to the other, scaling down the number of nodes on the old cluster, and scaling up the number of nodes on the new cluster.
### Steps

View File

@ -2,13 +2,13 @@
## etcd-manager
etcd-manager is a kubernetes-associated project that kops uses to manage
etcd-manager is a kubernetes-associated project that kOps uses to manage
etcd.
etcd-manager uses many of the same ideas as the existing etcd implementation
built into kops, but it addresses some limitations also:
built into kOps, but it also addresses some of its limitations:
* separate from kops - can be used by other projects
* separate from kOps - can be used by other projects
* allows etcd2 -> etcd3 upgrade (along with minor upgrades)
* allows cluster resizing (e.g. going from 1 to 3 nodes)
@ -16,7 +16,7 @@ When using kubernetes >= 1.12 etcd-manager will be used by default. See [../etcd
## Backups
Backups and restores of etcd on kops are covered in [etcd_backup_restore_encryption.md](etcd_backup_restore_encryption.md)
Backups and restores of etcd on kOps are covered in [etcd_backup_restore_encryption.md](etcd_backup_restore_encryption.md)
## Direct Data Access

View File

@ -8,7 +8,7 @@ can be found [here](https://kubernetes.io/docs/admin/etcd/) and
### Backup requirement
A Kubernetes cluster deployed with kops stores the etcd state in two different
A Kubernetes cluster deployed with kOps stores the etcd state in two different
AWS EBS volumes per master node. One volume is used to store the Kubernetes
main data, the other one for events. For a HA master with three nodes this will
result in six volumes for etcd data (one in each AZ). An EBS volume is designed
@ -20,7 +20,7 @@ of 0.1%-0.2% per year.
## Taking backups
Backups are done periodically and before cluster modifications using [etcd-manager](etcd_administration.md)
(introduced in kops 1.12). Backups for both the `main` and `events` etcd clusters
(introduced in kOps 1.12). Backups for both the `main` and `events` etcd clusters
are stored in object storage (like S3) together with the cluster configuration.
By default, backups are taken every 15 min. Hourly backups are kept for 1 week and

View File

@ -4,7 +4,7 @@
For testing purposes, kubernetes works just fine with a single master. However, when the master becomes unavailable, for example due to upgrade or instance failure, the kubernetes API will be unavailable. Pods and services that are running in the cluster continue to operate as long as they do not depend on interacting with the API, but operations such as adding nodes, scaling pods, replacing terminated pods will not work. Running kubectl will also not work.
kops runs each master in a dedicated autoscaling groups (ASG) and stores data on EBS volumes. That way, if a master node is terminated the ASG will launch a new master instance with the master's volume. Because of the dedicated EBS volumes, each master is bound to a fixed Availability Zone (AZ). If the AZ becomes unavailable, the master instance in that AZ will also become unavailable.
kOps runs each master in a dedicated autoscaling group (ASG) and stores data on EBS volumes. That way, if a master node is terminated the ASG will launch a new master instance with the master's volume. Because of the dedicated EBS volumes, each master is bound to a fixed Availability Zone (AZ). If the AZ becomes unavailable, the master instance in that AZ will also become unavailable.
For production use, you therefore want to run kubernetes in a HA setup with multiple masters. With multiple master nodes, you will be able both to do graceful (zero-downtime) upgrades and you will be able to survive AZ failures.
@ -19,7 +19,7 @@ Note that running clusters spanning several AZs is more expensive than running c
### Example 1: public topology
The simplest way to get started with a HA cluster is to run `kops create cluster` as shown below. The `--master-zones` flag lists the zones you want your masters
to run in. By default, kops will create one master per AZ. Since the kubernetes etcd cluster runs on the master nodes, you have to specify an odd number of zones in order to obtain quorum.
to run in. By default, kOps will create one master per AZ. Since the kubernetes etcd cluster runs on the master nodes, you have to specify an odd number of zones in order to obtain quorum.
```
kops create cluster \
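  --node-count 3 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --master-zones us-west-2a,us-west-2b,us-west-2c \
  ${NAME}
# (a hedged sketch: the flags and zone names above are illustrative, not from this document)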

View File

@ -1,6 +1,6 @@
# Updates and Upgrades
## Updating kops
## Updating kOps
### MacOS
@ -61,7 +61,7 @@ node restart), but currently you must:
* `kops update cluster $NAME` to preview, then `kops update cluster $NAME --yes`
* `kops rolling-update cluster $NAME` to preview, then `kops rolling-update cluster $NAME --yes`
Upgrade uses the latest Kubernetes version considered stable by kops, defined in `https://github.com/kubernetes/kops/blob/master/channels/stable`.
Upgrade uses the latest Kubernetes version considered stable by kOps, defined in `https://github.com/kubernetes/kops/blob/master/channels/stable`.
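A sketch of the full flow, using `$NAME` as elsewhere in these docs:

```bash
kops upgrade cluster $NAME                # preview the recommended kubernetes version
kops upgrade cluster $NAME --yes          # apply it to the cluster spec
kops update cluster $NAME --yes           # push the new configuration
kops rolling-update cluster $NAME --yes   # restart instances onto the new version
```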
### Terraform Users

View File

@ -1,6 +1,6 @@
kops: Operate Kubernetes the Kubernetes Way
kOps: Operate Kubernetes the Kubernetes Way
kops (Kubernetes-Ops) is a set of tools for installing, operating and deleting Kubernetes clusters.
kOps (Kubernetes-Ops) is a set of tools for installing, operating and deleting Kubernetes clusters.
It follows the Kubernetes design philosophy: the user creates a Cluster configuration object in JSON/YAML,
and then controllers create the Cluster.
@ -8,7 +8,7 @@ and then controllers create the Cluster.
Each component (kubelet, kube-apiserver...) is explicitly configured: We reuse the k8s componentconfig types
where we can, and we create additional types for the configuration of additional components.
kops can:
kOps can:
* create a cluster
* upgrade a cluster
@ -18,8 +18,8 @@ kops can:
* delete a cluster
Some users will need or prefer to use tools like Terraform for cluster configuration,
so kops can also output the equivalent configuration for those tools also (currently just Terraform, others
planned). After creation with your preferred tool, you can still use the rest of the kops tooling to operate
so kOps can also output the equivalent configuration for those tools (currently just Terraform, with others
planned). After creation with your preferred tool, you can still use the rest of the kOps tooling to operate
your cluster.
## Primary API types

View File

@ -1,30 +1,30 @@
** This file documents the new release process, as used from kops 1.19
onwards. For the process used for versions up to kops 1.18, please
** This file documents the new release process, as used from kOps 1.19
onwards. For the process used for versions up to kOps 1.18, please
see [the old release process](development/release.md)**
# Release Process
The kops project is released on an as-needed basis. The process is as follows:
The kOps project is released on an as-needed basis. The process is as follows:
1. An issue is opened proposing a new release, with a changelog since the last release
1. All [OWNERS](https://github.com/kubernetes/kops/blob/master/OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
1. The release issue is closed
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kops $VERSION is released`
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kOps $VERSION is released`
## Branches
We maintain a `release-1.17` branch for kops 1.17.X, `release-1.18` for kops 1.18.X
We maintain a `release-1.17` branch for kOps 1.17.X, `release-1.18` for kOps 1.18.X
etc.
`master` is where development happens. We create new branches from master as a
new kops version is released, or in preparation for a new release. As we are
new kOps version is released, or in preparation for a new release. As we are
preparing for a new kubernetes release, we will try to advance the master branch
to focus on the new functionality, and start cherry-picking back more selectively
to the release branches only as needed.
Generally we don't encourage users to run older kops versions, or older
branches, because newer versions of kops should remain compatible with older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.
Releases should be done from the `release-1.X` branch. The tags should be made
@ -183,7 +183,7 @@ Currently we send the image and non-image artifact promotion PRs separately.
```
git add -p
git commit -m "Promote kops $VERSION images"
git commit -m "Promote kOps $VERSION images"
git push ${USER}
hub pull-request
```
@ -207,7 +207,7 @@ Verify, then send a PR:
```
git add artifacts/manifests/k8s-staging-kops/${VERSION}.yaml
git commit -m "Promote kops $VERSION binary artifacts"
git commit -m "Promote kOps $VERSION binary artifacts"
git push ${USER}
hub pull-request
```
@ -279,7 +279,7 @@ chmod +x ko
./ko version
```
Also run through a kops create cluster flow, ideally verifying that
Also run through a `kops create cluster` flow, ideally verifying that
everything is pulling from the new locations.
## On github
@ -289,7 +289,7 @@ everything is pulling from the new locations.
* Add notes
* Publish it
## Release kops to homebrew
## Release kOps to homebrew
* Following the [documentation](homebrew.md) we must release a compatible homebrew formulae with the release.
* This should be done at the same time as the release, and we will iterate on how to improve timing of this.
@ -298,11 +298,11 @@ everything is pulling from the new locations.
Once we are satisfied the release is sound:
* Bump the kops recommended version in the alpha channel
* Bump the kOps recommended version in the alpha channel
Once we are satisfied the release is stable:
* Bump the kops recommended version in the stable channel
* Bump the kOps recommended version in the stable channel
## Update conformance results with CNCF

View File

@ -1,4 +1,4 @@
## Release notes for kops 1.18 series
## Release notes for kOps 1.18 series
# Significant changes
@ -62,7 +62,7 @@
* The `kops.k8s.io/v1alpha1` API has been removed. Users of `kops replace` will need to supply v1alpha2 resources.
* Please see the notes in the 1.15 release about the apiGroup changing from kops to kops.k8s.io
* Please see the notes in the 1.15 release about the apiGroup changing from `kops` to `kops.k8s.io`
# Required Actions
@ -117,17 +117,17 @@
# Known Issues
* AWS clusters with an ACM certificate attached to the API ELB (the cluster's `spec.api.loadBalancer.sslCertificate` is set) will need to reenable basic auth to use the kubeconfig context created by `kops export kubecfg`. Set `spec.kubeAPIServer.disableBasicAuth: false` before running `kops export kubecfg`. See [#9756](https://github.com/kubernetes/kops/issues/9756) for more information.
* AWS clusters with an ACM certificate attached to the API ELB (the cluster's `spec.api.loadBalancer.sslCertificate` is set) will need to reenable basic auth to use the kubeconfig context created by `kops export kubecfg`. Set `spec.kubeAPIServer.disableBasicAuth: false` before running `kops export kubecfg`. See [#9756](https://github.com/kubernetes/kops/issues/9756) for more information.
# Deprecations
* Support for Kubernetes versions 1.9 and 1.10 are deprecated and will be removed in kops 1.19.
* Support for Kubernetes versions 1.9 and 1.10 is deprecated and will be removed in kOps 1.19.
* Support for Ubuntu 16.04 (Xenial) has been deprecated and will be removed in future versions of kOps.
* Support for the Romana networking provider is deprecated and will be removed in kops 1.19.
* Support for the Romana networking provider is deprecated and will be removed in kOps 1.19.
* Support for legacy IAM permissions is deprecated and will be removed in kops 1.19.
* Support for legacy IAM permissions is deprecated and will be removed in kOps 1.19.
# Full change list since 1.17.0 release

View File

@ -1,6 +1,6 @@
## Release notes for kops 1.19 series
## Release notes for kOps 1.19 series
(The kops 1.19 release has not been released yet; this is a document to gather the notes prior to the release).
(kOps 1.19 has not been released yet; this is a document to gather the notes prior to the release).
# Significant changes
@ -17,9 +17,9 @@ credentials may be specified as a value of the `--admin` flag. To get the previo
## OpenStack Cinder plugin
kOps will install the Cinder plugin for kops running kubernetes 1.16 or newer. If you already have this plugin installed you should remove it before upgrading.
kOps will install the Cinder plugin for clusters running Kubernetes 1.16 or newer. If you already have this plugin installed you should remove it before upgrading.
If you already have a default `StorageClass`, you should set `cloudConfig.Openstack.BlockStorage.CreateStorageClass: false` to prevent kops from installing one.
If you already have a default `StorageClass`, you should set `cloudConfig.Openstack.BlockStorage.CreateStorageClass: false` to prevent kOps from installing one.
## Other significant changes by kind
@ -27,7 +27,7 @@ If you already have a default `StorageClass`, you should set `cloudConfig.Openst
* New clusters will now have one nodes group per zone. The number of nodes now defaults to the number of zones.
* On AWS kops now defaults to using launch templates instead of launch configurations.
* On AWS kOps now defaults to using launch templates instead of launch configurations.
* There is now Alpha support for Hashicorp Vault as a store for secrets and keys. See the [Vault state store docs](/state/#vault-vault).
@ -38,7 +38,7 @@ The expiration times vary randomly so that nodes are likely to have their certs
### CLI
* The `kops update cluster` command will now refuse to run on a cluster that
has been updated by a newer version of kops unless it is given the `--allow-kops-downgrade` flag.
has been updated by a newer version of kOps unless it is given the `--allow-kops-downgrade` flag.
* New command for deleting a single instance: [kops delete instance](/docs/cli/kops_delete_instance/)
@ -68,7 +68,7 @@ has been updated by a newer version of kops unless it is given the `--allow-kops
* Support for the Romana networking provider has been removed.
* Support for legacy IAM permissions has been removed. This removal may be temporarily deferred to kops 1.20 by setting the `LegacyIAM` feature flag.
* Support for legacy IAM permissions has been removed. This removal may be temporarily deferred to kOps 1.20 by setting the `LegacyIAM` feature flag.
# Required Actions
@ -90,11 +90,11 @@ To prevent downtime, follow these steps with the new version of Kops:
# Deprecations
* Support for Kubernetes versions 1.11 and 1.12 are deprecated and will be removed in kops 1.20.
* Support for Kubernetes versions 1.11 and 1.12 is deprecated and will be removed in kOps 1.20.
* Support for Terraform version 0.11 has been deprecated and will be removed in kops 1.20.
* Support for Terraform version 0.11 has been deprecated and will be removed in kOps 1.20.
* Support for feature flag `Terraform-0.12` has been deprecated and will be removed in kops 1.20. All generated Terraform HCL2/JSON files will support versions `0.12.26+` and `0.13.0+`.
* Support for feature flag `Terraform-0.12` has been deprecated and will be removed in kOps 1.20. All generated Terraform HCL2/JSON files will support versions `0.12.26+` and `0.13.0+`.
* The [manifest based metrics server addon](https://github.com/kubernetes/kops/tree/master/addons/metrics-server) has been deprecated in favour of a configurable addon.

View File

@ -4,7 +4,7 @@
## Delete all secrets
Delete all secrets & keypairs that kops is holding:
Delete all secrets & keypairs that kOps is holding:
```shell
kops get secrets | grep ^Secret | awk '{print $2}' | xargs -I {} kops delete secret secret {}
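# Hedged: the keypair counterpart follows the same pattern (the `Keypair` type name is assumed)
kops get secrets | grep ^Keypair | awk '{print $2}' | xargs -I {} kops delete secret keypair {}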

View File

@ -1,7 +1,7 @@
## Running in a shared VPC
When launching into a shared VPC, kops will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
or NAT Gateway you can tell kops to ignore egress. By default, kops creates a new subnet per zone and a new route table,
When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
or NAT Gateway you can tell kOps to ignore egress. By default, kOps creates a new subnet per zone and a new route table,
but you can instead use a shared subnet (see [below](#shared-subnets)).
1. Use `kops create cluster` with the `--vpc` argument for your existing VPC:
@ -45,7 +45,7 @@ When launching into a shared VPC, kops will reuse the VPC and Internet Gateway.
Review the changes to make sure they are OK—the Kubernetes settings might
not be ones you want on a shared VPC (in which case, open an issue!)
**Note also the Kubernetes VPCs (currently) require `EnableDNSHostnames=true`. kops will detect the required change,
**Note also that Kubernetes VPCs (currently) require `EnableDNSHostnames=true`. kOps will detect the required change,
but refuse to make it automatically because it is a shared VPC. Please review the implications and make the change
to the VPC manually.**
@ -56,7 +56,7 @@ When launching into a shared VPC, kops will reuse the VPC and Internet Gateway.
```
This will add an additional tag to your AWS VPC resource. This tag
will be removed automatically if you delete your kops cluster.
will be removed automatically if you delete your kOps cluster.
```
"kubernetes.io/cluster/<cluster-name>" = "shared"
@ -139,7 +139,7 @@ spec:
### Subnet Tags
By default, kops will tag your existing subnets with the standard tags:
By default, kOps will tag your existing subnets with the standard tags:
Public/Utility Subnets:
```
@ -157,7 +157,7 @@ spec:
These tags are important; for example, your services will be unable to create public or private Elastic Load Balancers (ELBs) if the respective `elb` or `internal-elb` tags are missing.
If you would like to manage these tags externally then specify `--disable-subnet-tags` during your cluster creation. This will prevent kops from tagging existing subnets and allow some custom control, such as separate subnets for internal ELBs.
If you would like to manage these tags externally then specify `--disable-subnet-tags` during your cluster creation. This will prevent kOps from tagging existing subnets and allow some custom control, such as separate subnets for internal ELBs.
### Shared NAT Egress
@ -191,17 +191,17 @@ spec:
Please note:
* You must specify pre-created subnets for either all of the subnets or none of them.
* kops won't alter your existing subnets. They must be correctly set up with route tables, etc. The
* kOps won't alter your existing subnets. They must be correctly set up with route tables, etc. The
Public or Utility subnets should have public IPs and an Internet Gateway configured as their default route
in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway
configured as their default route.
* kops won't create a route-table at all if it's not creating subnets.
* kOps won't create a route-table at all if it's not creating subnets.
* In the example above the first subnet is using a shared NAT Gateway while the
second one is using a shared NAT Instance
### Externally Managed Egress
If you are using an unsupported egress configuration in your VPC, kops can be told to ignore egress by using a configuration such as:
If you are using an unsupported egress configuration in your VPC, kOps can be told to ignore egress by using a configuration such as:
```yaml
spec:
@ -223,7 +223,7 @@ spec:
egress: External
```
This tells kops that egress is managed externally. This is preferable when using virtual private gateways
This tells kOps that egress is managed externally. This is preferable when using virtual private gateways
(currently unsupported) or using other configurations to handle egress routing.
### Proxy VPC Egress

View File

@ -38,7 +38,7 @@ spec:
```
## Workaround for changing secrets with type "Secret"
As it is currently not possible to modify or delete + create secrets of type "Secret" with the CLI you have to modify them directly in the kops s3 bucket.
As it is currently not possible to modify or delete + create secrets of type "Secret" with the CLI, you have to modify them directly in the kOps S3 bucket.
They are stored under /clustername/secrets/ and contain the secret as a base64-encoded string. To change the secret, base64 encode it with:

View File

@ -121,7 +121,7 @@ After about 5 minutes all three masters should have found each other. Run the fo
kops validate cluster --wait 10m
```
While rotating the original master is not strictly necessary, kops will say it needs updating because of the configuration change.
While rotating the original master is not strictly necessary, kOps will say it needs updating because of the configuration change.
```
kops rolling-update cluster --yes

View File

@ -1,13 +1,13 @@
# The State Store
kops has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
kOps has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
here not only when you first create a cluster, but you can also change the state and apply changes to a running cluster.
Eventually, kubernetes services will also pull from the state store, so that we don't need to marshal all our
configuration through a channel like user-data. (This is currently done for secrets and SSL keys, for example,
though we have to copy the data from the state store to a file where components like kubelet can read them).
The state store uses kops's VFS implementation, so can in theory be stored anywhere.
The state store uses kOps's VFS implementation, so it can in theory be stored anywhere.
As of now the following state stores are supported:
* Amazon AWS S3 (`s3://`)
* local filesystem (`file://`) (only for dry-run purposes, see [note](#local-filesystem-state-stores) below)
@ -107,7 +107,7 @@ Repeat for each cluster needing to be moved.
#### Cross Account State-store
Many enterprises prefer to run many AWS accounts. In these setups, having a shared cross-account S3 bucket for state may make inventory and management easier.
Consider the S3 bucket living in Account B and the kops cluster living in Account A. In order to achieve this, you first need to let Account A access the s3 bucket. This is done by adding the following _bucket policy_ on the S3 bucket:
Consider the S3 bucket living in Account B and the kOps cluster living in Account A. In order to achieve this, you first need to let Account A access the s3 bucket. This is done by adding the following _bucket policy_ on the S3 bucket:
```json
{
@ -193,7 +193,7 @@ gcsClient, err := storage.New(httpClient)
kOps has support for using Vault as a state store. It is currently an experimental feature, and you have to enable the `VFSVaultSupport` feature flag to use it.
The goal of the vault store is to be a safe storage for the kops keys and secrets store. It will not work to use this as a kops registry/config store. Among other things, etcd-manager is unable to read VFS control files from vault. Vault also cannot be used as backend for etcd backups.
The goal of the vault store is to be a safe storage for the kOps keys and secrets store. It will not work to use this as a kOps registry/config store. Among other things, etcd-manager is unable to read VFS control files from vault. Vault also cannot be used as backend for etcd backups.
```sh
@ -205,7 +205,7 @@ The vault store uses IAM auth to authenticate against the vault server and expec
Instructions for configuring your vault server to accept IAM authentication are at https://learn.hashicorp.com/vault/identity-access-management/iam-authentication
To configure kops to use the Vault store, add this to the cluster spec:
To configure kOps to use the Vault store, add this to the cluster spec:
```yaml
spec:
@ -218,7 +218,7 @@ Each of the paths specified above can be configurable, but they must be unique a
After launching your cluster you need to add the cluster roles to Vault, binding them to the cluster's IAM identity and granting them access to the appropriate secrets and keys. The nodes will wait until they can authenticate before completing provisioning.
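
As a rough sketch of the cluster spec addition referred to above (the field names, Vault address, and paths here are assumptions; check the state store documentation for the authoritative form):

```yaml
spec:
  secretStore: vault://vault.example.com/kops/my-cluster.example.com/secrets
  keyStore: vault://vault.example.com/kops/my-cluster.example.com/keys
```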
#### Vault policies
Note that contrary to the S3 state store, kops will not provision any policies for you. You have to provide roles for both operators and nodes.
Note that contrary to the S3 state store, kOps will not provision any policies for you. You have to provide roles for both operators and nodes.
Using the example paths above, a policy for the cluster nodes can be:

View File

@ -2,11 +2,11 @@
kOps can generate Terraform configurations, and then you can apply them using the `terraform plan` and `terraform apply` tools. This is very handy if you are already using Terraform, or if you want to check in the Terraform output into version control.
The gist of it is that, instead of letting kops apply the changes, you tell kops what you want, and then kops spits out what it wants done into a `.tf` file. **_You_** are then responsible for turning those plans into reality.
The gist of it is that, instead of letting kOps apply the changes, you tell kOps what you want, and then kOps spits out what it wants done into a `.tf` file. **_You_** are then responsible for turning those plans into reality.
The Terraform output should be reasonably stable (i.e. the text files should only change where something has actually changed - items should appear in the same order etc). This is extremely useful when using version control as you can diff your changes easily.
Note that if you modify the Terraform files that kops spits out, it will override your changes with the configuration state defined by its own configs. In other terms, kops's own state is the ultimate source of truth (as far as kops is concerned), and Terraform is a representation of that state for your convenience.
Note that if you modify the Terraform files that kOps spits out, it will override your changes with the configuration state defined by its own configs. In other terms, kOps's own state is the ultimate source of truth (as far as kOps is concerned), and Terraform is a representation of that state for your convenience.
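
A sketch of that workflow (the cluster name, state bucket, and output directory are placeholders):

```sh
# have kOps write Terraform configuration instead of applying changes itself
kops update cluster \
  --name my-cluster.example.com \
  --state s3://my-kops-state-store \
  --target terraform \
  --out ./kops-terraform

# then turn the generated plan into reality yourself
cd ./kops-terraform
terraform init
terraform plan
terraform apply
```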
### Terraform Version Compatibility
| kOps Version | Terraform Version | Feature Flag Notes |
@ -95,7 +95,7 @@ $ kops edit cluster \
# editor opens, make your changes ...
```
Then output your changes/edits to kops cluster state into the Terraform files. Run `kops update` with `--target` and `--out` parameters:
Then output your changes/edits to kOps cluster state into the Terraform files. Run `kops update` with `--target` and `--out` parameters:
```
$ kops update cluster \
@ -118,7 +118,7 @@ Keep in mind that some changes will require a `kops rolling-update` to be applie
#### Teardown the cluster
When you eventually `terraform destroy` the cluster, you should still run `kops delete cluster`, to remove the kops cluster specification and any dynamically created Kubernetes resources (ELBs or volumes). To do this, run:
When you eventually `terraform destroy` the cluster, you should still run `kops delete cluster`, to remove the kOps cluster specification and any dynamically created Kubernetes resources (ELBs or volumes). To do this, run:
```
$ terraform plan -destroy
@ -128,7 +128,7 @@ $ kops delete cluster --yes \
--state=s3://mycompany.kubernetes
```
Ps: You don't have to `kops delete cluster` if you just want to recreate from scratch. Deleting kops cluster state means that you've have to `kops create` again.
PS: You don't have to `kops delete cluster` if you just want to recreate from scratch. Deleting kOps cluster state means that you'll have to `kops create` again.
### Caveats

View File

@ -47,7 +47,7 @@ More information about [networking options](networking.md) can be found in our d
## Changing Topology of the API server
To change the ELB that fronts the API server from internet-facing to internal-only, there are a few steps to accomplish.
The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kops recreate the ELB for us.
The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kOps recreate the ELB for us.
### Steps to change the ELB from Internet-Facing to Internal
- Edit the cluster: `kops edit cluster $NAME`
@ -62,6 +62,6 @@ The AWS ELB does not support changing from internet facing to Internal. However
- Run the update command to check the config: `kops update cluster $NAME`
- BEFORE running the same command with the `--yes` option, go into the AWS console and DELETE the api ELB
- Now run: `kops update cluster $NAME --yes`
- Finally execute a rolling update so that the instances register with the new internal ELB, execute: `kops rolling-update cluster --cloudonly --force` command. We have to use the `--cloudonly` option because we deleted the api ELB so there is no way to talk to the cluster through the k8s api. The force option is there because kops / terraform doesn't know that we need to update the instances with the ELB so we have to force it.
- Finally execute a rolling update so that the instances register with the new internal ELB, execute: `kops rolling-update cluster --cloudonly --force` command. We have to use the `--cloudonly` option because we deleted the api ELB so there is no way to talk to the cluster through the k8s api. The force option is there because kOps / terraform doesn't know that we need to update the instances with the ELB so we have to force it.
Once the rolling update has completed, you have an internal-only ELB that has the master k8s nodes registered with it.
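
For reference, the spec change made in the edit step above typically looks like this (a minimal sketch, with surrounding fields omitted):

```yaml
spec:
  api:
    loadBalancer:
      type: Internal
```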

View File

@ -1,12 +1,12 @@
# Upgrading kubernetes
Upgrading kubernetes is very easy with kops, as long as you are using a compatible version of kops.
The kops `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kops.
The kOps `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
are on a best-effort basis and will have little if any testing. kops `1.18` will not support the kubernetes
`1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kops `1.19` series release.
We aim to release the next major version of kops within a few weeks of the equivalent major release of kubernetes,
so kops `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
`1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kOps `1.19` series release.
We aim to release the next major version of kOps within a few weeks of the equivalent major release of kubernetes,
so kOps `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
(alpha or beta) is available at the kubernetes release, for early adopters.
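
A sketch of the usual upgrade flow with a compatible kOps version (the cluster name and state store are assumed to be set in your environment):

```sh
# adopt the recommended kubernetes version in the cluster spec
kops upgrade cluster --yes

# push the new configuration to the cloud, then roll the instances onto it
kops update cluster --yes
kops rolling-update cluster --yes
```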
Upgrading kubernetes is similar to changing the image on an InstanceGroup, except that the kubernetes version is

View File

@ -1,6 +1,6 @@
# Managing Instance Groups
kops has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
an AutoScalingGroup.
By default, a cluster has:
@ -103,7 +103,7 @@ Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your edit
the image or the machineType, you could do that here as well. There are actually a lot more fields,
but most of them have their default values, so won't show up unless they are set. The general approach is the same though.
On saving you'll note that nothing happens. Although you've changed the model, you need to tell kops to
On saving you'll note that nothing happens. Although you've changed the model, you need to tell kOps to
apply your changes to the cloud.
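
As a sketch, the corresponding commands look like this (assuming the default `nodes` instance group, with the cluster name and state store set in your environment):

```sh
# open the instance group spec in your editor
kops edit ig nodes

# preview the change, then apply it to the cloud
kops update cluster
kops update cluster --yes
```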
We use the same `kops update cluster` command that we used when initially creating the cluster; when
@ -121,7 +121,7 @@ This is saying that we will alter the `TargetSize` property of the `InstanceGrou
That's what we want, so we run `kops update cluster --yes`.
kops will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
kOps will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
which will then boot and join the cluster. Within a minute or so you should see the new node join:
```
@ -186,7 +186,7 @@ that the instances had not yet been reconfigured. There's a hint at the bottom:
Changes may require instances to restart: kops rolling-update cluster
```
These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kops
These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kOps
can perform a rolling update to minimize disruption, but even so you might not want to perform the update right away;
you might want to make more changes or you might want to wait for off-peak hours. You might just want to wait for
the instances to terminate naturally - new instances will come up with the new configuration - though if you're not
@ -519,7 +519,7 @@ spec:
If `openstack.kops.io/osVolumeSize` is not set, it will default to the minimum disk specified by the image.
# Working with InstanceGroups
The kops InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
The kOps InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
can change the instance type you're using, the number of nodes you have, the OS image you're running - essentially
all the per-node configuration is in the InstanceGroup.

View File

@ -1,4 +1,4 @@
# Upgrading from kube-up to kops
# Upgrading from kube-up to kOps
kOps lets you upgrade an existing kubernetes cluster installed using kube-up to a cluster managed by
kops.
@ -8,7 +8,7 @@ Take a snapshot of your EBS volumes; export all your data from kubectl etc. **
Limitations:
* kops splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but
* kOps splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but
you will lose your events history.
## Overview

View File

@ -1,12 +1,12 @@
## Getting Involved and Contributing
Are you interested in contributing to kops? We, the maintainers and community,
Are you interested in contributing to kOps? We, the maintainers and community,
would love your suggestions, contributions, and help! We have a quick-start
guide on [adding a feature](../development/adding_a_feature.md). Also, the
maintainers can be contacted at any time to learn more about how to get
involved.
In the interest of getting more newer folks involved with kops, we are starting to
In the interest of getting more new folks involved with kOps, we are starting to
tag issues with `good-starter-issue`. These are typically issues that have
smaller scope but are good ways to start to get acquainted with the codebase.
@ -56,18 +56,18 @@ If you think you have found a bug please follow the instructions below.
- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate.
- Set `-v 10` command line option and save the log output. Please paste this into your issue.
- Note the version of kops you are running (from `kops version`), and the command line options you are using.
- Note the version of kOps you are running (from `kops version`), and the command line options you are using.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Feel free to reach out to the kops community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
- Feel free to reach out to the kOps community on [kubernetes slack](https://github.com/kubernetes/community/blob/master/communication.md#social-media).
### Features
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help kops become even more awesome follow the steps below.
We also use the issue tracker to track features. If you have an idea for a feature, or think you can help kOps become even more awesome, follow the steps below.
- Open a [new issue](https://github.com/kubernetes/kops/issues/new).
- Remember users might be searching for your issue in the future, so please give it a meaningful title to help others.
- Clearly define the use case, using concrete examples. EG: I type `this` and kops does `that`.
- Clearly define the use case, using concrete examples. EG: I type `this` and kOps does `that`.
- Some of our larger features will require some design. If you would like to include a technical design for your feature please include it in the issue.
- After the new feature is well understood and the design agreed upon, we can start coding the feature. We would love for you to code it. So please open up a **WIP** *(work in progress)* pull request, and happy coding.

View File

@ -14,13 +14,13 @@ Our office hours call is recorded, but the tone tends to be casual. First-timers
- Members planning for what we want to get done for the next release
- Strategizing for larger initiatives, such as those that involve more than one sig or potentially more moving pieces
- Help wanted requests
- Demonstrations of cool stuff. PoCs. Fresh ideas. Show us how you use kops to go beyond the norm- help us define the future!
- Demonstrations of cool stuff. PoCs. Fresh ideas. Show us how you use kOps to go beyond the norm - help us define the future!
Office hours are designed for ALL of those contributing to kops or the community. Contributions are not limited to those who commit source code. There are so many important ways to be involved:
Office hours are designed for ALL of those contributing to kOps or the community. Contributions are not limited to those who commit source code. There are so many important ways to be involved:
- helping in the slack channels
- triaging/writing issues
- thinking about the topics raised at office hours and forming and advocating for your good ideas
- testing pre-(and official) releases
Although not exhaustive, the above activities are extremely important to our continued success and are all worth contributions. If you want to talk about kops and you have doubt, just come.
Although not exhaustive, the above activities are extremely important to our continued success and are all worthwhile contributions. If you want to talk about kOps and you have doubts, just come.

View File

@ -1,14 +1,14 @@
# kOps Releases & Versioning
kOps intends to be backward compatible. It is always recommended to use the
latest version of kops with whatever version of Kubernetes you are using. We suggest
kops users run one of the [3 minor versions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew) Kubernetes is supporting however we
latest version of kOps with whatever version of Kubernetes you are using. We suggest
kOps users run one of the [3 minor versions](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew) Kubernetes is supporting; however, we
do our best to support previous releases for some period.
kOps does not, however, support Kubernetes releases that have either a greater major
release number or greater minor release number than it.
(The numbers before the first and second dots are the major and minor release numbers, respectively.)
For example, kops 1.16.0 does not support Kubernetes 1.17.0, but does
For example, kOps 1.16.0 does not support Kubernetes 1.17.0, but does
support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.
## Compatibility Matrix
@ -22,7 +22,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.
| ~~1.14.x~~ | ✔ | ⚫ | ⚫ | ⚫ | ⚫ |
Use the latest version of kops for all releases of Kubernetes, with the caveat
Use the latest version of kOps for all releases of Kubernetes, with the caveat
that higher versions of Kubernetes are not _officially_ supported by kops.
Releases which are ~~crossed out~~ _should_ work, but we suggest they be upgraded soon.
@ -34,6 +34,6 @@ releases about a month after the corresponding Kubernetes release. This time
allows for the Kubernetes project to resolve any issues introduced by the new
version and ensures that we can support the latest features. kOps will release
alpha and beta pre-releases for people that are eager to try the latest
Kubernetes release. Please only use pre-GA kops releases in environments that
Kubernetes release. Please only use pre-GA kOps releases in environments that
can tolerate the quirks of new releases, and please do report any issues
encountered.

View File

@ -1,6 +1,6 @@
## Examples of how to embed kops / use the kops API
## Examples of how to embed kOps / use the kOps API
The kops API is still a work in progress, but this is where we will put examples of how it can be used.
The kOps API is still a work in progress, but this is where we will put examples of how it can be used.
```
make examples

View File

@ -4,6 +4,6 @@ The source code within this directory is provided under the [Apache License, ver
Note that the NVIDIA software installed by this hook's container may be subject to [NVIDIA's own license terms](http://www.nvidia.com/content/DriverDownload-March2009/licence.php?lang=us).
This is an experimental hook for installing the nvidia drivers as part of the kops boot process.
This is an experimental hook for installing the nvidia drivers as part of the kOps boot process.
Please see the [GPU docs](/docs/gpu.md) for more details on how to use this.

View File

@ -52,7 +52,7 @@ images, it has only been tested with the following:
### Test Matrix
This kops hook was developed against the following version combinations.
This kOps hook was developed against the following version combinations.
| Kops Version | Kubernetes Version | GPU Mode | OS Image |
| ------------- | ------------------ | ------------ | -------- |

View File

@ -1,5 +1,5 @@
## Prepull images
This docker image can be used as a [kops hook](/docs/hooks.md) to pre-pull docker images.
This docker image can be used as a [kOps hook](/docs/hooks.md) to pre-pull docker images.
The images to be pulled can be passed as arguments. (TODO: Add example)
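
Purely as an illustrative sketch (the hook image name, the image list, and the exact hook fields here are assumptions; see the hooks documentation for the authoritative form), such a hook could be wired into an instance group roughly like this:

```yaml
spec:
  hooks:
  - name: prepull.service
    before:
    - kubelet.service
    execContainer:
      image: example/prepull:latest   # hypothetical prepull image
      command:                        # images to pre-pull, passed as arguments
      - nginx:1.19
      - redis:6
```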