mirror of https://github.com/kubernetes/kops.git
Update kops as kOps and remove extra spaces from .md files
- Updated kops as kOps in .md files.
- Removed extra spaces from .md files.
parent 20cb30828b
commit 3033caa5e7
@@ -14,8 +14,7 @@ Instructions for reporting a vulnerability can be found on the
## Supported Versions

Information about supported kOps versions and the Kubernetes versions they support can be found on the
-[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the
-[Kubernetes version and version skew support policy] page on the Kubernetes website.
+[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website.

[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
@@ -11,7 +11,7 @@ complete lifecycle of Ambassador in your cluster. It also automates many of the
Ambassador. Once installed, the Operator will automatically complete rapid installations and seamless upgrades to new
versions of Ambassador.

-This addon deploys Ambassador Operator which installs Ambassador in a kops cluster.
+This addon deploys Ambassador Operator which installs Ambassador in a kOps cluster.

##### Note:
The operator requires widely scoped permissions in order to install and manage Ambassador's lifecycle. Both, the
@@ -32,7 +32,7 @@ kubectl apply -f ${addon}
An enhanced script which also adds the IAM policies is included here [cluster-autoscaler.sh](cluster-autoscaler.sh)

Question: Which ASG group should be autoscaled?
-Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.
+Answer: By default, kOps creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.

Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM Policy. Which IAM Role should the Policy be attached to?
Answer: Kops creates two Roles, nodes.$CLUSTER_NAME and masters.$CLUSTER_NAME. Currently the example scripts run the autoscaler process on the k8s master node, so the IAM Policy should be assigned to masters.$CLUSTER_NAME (substituting that variable for your actual cluster name).
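As a rough sketch of the answer above, creating a new instance group and propagating its maxSize to the ASG might look like this (cluster name, group name, and state store are placeholders, not part of the commit):

```bash
# Placeholder names; adjust to your cluster and state store.
export KOPS_STATE_STORE=s3://my-kops-state
export CLUSTER_NAME=mycluster.example.com

kops create ig newgroupname --name ${CLUSTER_NAME}   # define the new instance group
kops edit ig newgroupname --name ${CLUSTER_NAME}     # set minSize/maxSize in the spec
kops update cluster ${CLUSTER_NAME} --yes            # propagate maxSize to the ASG
```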
@@ -1,4 +1,4 @@
-# Deploying Citrix Ingress Controller through KOPS
+# Deploying Citrix Ingress Controller through kOps

This guide explains how to deploy [Citrix Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) through KOPS addon.
@@ -1,8 +1,8 @@
# Download kOps config spec file

-KOPS operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.
+kOps operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.

-If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kops create the cluster for you , `kops create -f spec_file` in a completely unattended manner.
+If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kOps create the cluster for you, `kops create -f spec_file` in a completely unattended manner.

Let us say you create your cluster with the following configuration options:
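The workflow described in that doc can be sketched roughly as follows (bucket and cluster names are placeholders; exact flags may vary between kOps versions):

```bash
# Placeholder names; the state store is the S3 bucket passed in during create.
export KOPS_STATE_STORE=s3://my-kops-state

kops get cluster mycluster.example.com -o yaml > spec_file   # download the config spec
kops create -f spec_file                                     # recreate the cluster objects from the spec
kops update cluster mycluster.example.com --yes              # build the cluster unattended
```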
@@ -1,6 +1,6 @@
## Kubernetes Bootstrap

-This is an overview of how a Kubernetes cluster comes up, when using kops.
+This is an overview of how a Kubernetes cluster comes up, when using kOps.

## From spec to complete configuration
@@ -31,7 +31,7 @@ In addition, nodeup installs:

## /etc/kubernetes/manifests

-kubelet starts pods as controlled by the files in /etc/kubernetes/manifests These files are created
+kubelet starts pods as controlled by the files in /etc/kubernetes/manifests. These files are created
by nodeup and protokube (ideally all by protokube, but currently split between the two).

These pods are declared using the standard k8s manifests, just as if they were stored in the API.
@@ -7,7 +7,7 @@ Output shell completion code for the given shell (bash or zsh).

### Synopsis

-Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of kops commands. This can be done by sourcing it from the .bash_profile.
+Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of kOps commands. This can be done by sourcing it from the .bash_profile.

Note: this requires the bash-completion framework, which is not installed by default on Mac. Once installed, bash_completion must be evaluated. This can be done by adding the following line to the .bash_profile
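A quick illustration of the synopsis above, assuming the bash-completion framework is already installed:

```bash
# Load completion for the current shell session.
source <(kops completion bash)

# Or make it permanent by adding it to your profile.
echo 'source <(kops completion bash)' >> ~/.bash_profile
```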
@@ -22,8 +22,7 @@ spec:
```


-When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type`
-field should be `Public` or `Internal`.
+When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type` field should be `Public` or `Internal`.

Also, you can add precreated additional security groups to the load balancer by setting `additionalSecurityGroups`.
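A hedged sketch of what that configuration change can look like in practice (cluster name and security group ID are placeholders; verify the exact field names against the kOps API docs):

```bash
# Edit the cluster spec and adjust the API load balancer settings.
kops edit cluster mycluster.example.com
# Under spec.api.loadBalancer, something like:
#   type: Internal                  # or Public
#   additionalSecurityGroups:
#   - sg-0123456789abcdef0          # precreated security group
kops update cluster mycluster.example.com --yes
```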
@@ -144,12 +144,10 @@ which would end up in a drop-in file on nodes of the instance group in question.

## mixedInstancesPolicy (AWS Only)

-A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to
-select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.
+A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.

Instance groups with a mixedInstancesPolicy can be generated with the `kops toolbox instance-selector` command.
-The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types
-that match your criteria.
+The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types that match your criteria.

```bash
kops toolbox instance-selector --vcpus 4 --flexible --usage-class spot --instance-group-name spotgroup
@@ -187,7 +185,7 @@ spec:

### Instances

-Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group
+Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group.

### onDemandAllocationStrategy
@@ -20,7 +20,7 @@ Backups and restores of etcd on kOps are covered in [etcd_backup_restore_encrypt

## Direct Data Access

-It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kops.
+It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kOps.

1\. Connect to an etcd-manager pod
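Step 1 of that procedure usually amounts to something like the following sketch (pod names vary per cluster and kOps version, so these are placeholders rather than the exact commands from the linked doc):

```bash
# Find the etcd-manager pod for the main etcd cluster.
kubectl -n kube-system get pods | grep etcd-manager

# Open a shell inside it (substitute the real pod name).
kubectl -n kube-system exec -it etcd-manager-main-<control-plane-node> -- sh
```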
@@ -36,7 +36,7 @@ You can also rerun [these steps](../development/building.md) if previously built

## Upgrading Kubernetes

-Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.
+Upgrading Kubernetes is easy with kOps. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.

The `kops upgrade` command also automates checking for and applying updates.
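In practice the flow described above looks roughly like this (cluster name is a placeholder):

```bash
kops upgrade cluster mycluster.example.com --yes         # adopt the recommended kubernetesVersion
# or: kops edit cluster mycluster.example.com            # set spec.kubernetesVersion by hand
kops update cluster mycluster.example.com --yes          # apply the updated configuration
kops rolling-update cluster mycluster.example.com --yes  # roll the nodes onto the new version
```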
@@ -40,8 +40,7 @@ The API objects are currently stored in an abstraction called a ["state store"](
Configuration of a kubernetes cluster is actually relatively complicated: there are a lot of options, and many combinations
must be configured consistently with each other.

-Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values
-that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).
+Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).

Because more values are inferred than with simpler k8s objects, we record the user-created spec separately from the
complete inferred specification. This means we can keep track of which values were actually set by the user, vs just being
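A minimal sketch of that inference in action (names, zone, and bucket are placeholders; flags may vary by kOps version):

```bash
# Create a cluster from a minimal set of values; the rest of the spec is inferred.
kops create cluster --name=mycluster.example.com --zones=us-east-1a --state=s3://my-kops-state

kops get cluster mycluster.example.com -o yaml          # the user-created spec
kops get cluster mycluster.example.com -o yaml --full   # the complete, inferred specification
```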
@@ -23,7 +23,7 @@ preparing for a new kubernetes release, we will try to advance the master branch
to focus on the new functionality, and start cherry-picking back more selectively
to the release branches only as needed.

-Generally we don't encourage users to run older kops versions, or older
+Generally we don't encourage users to run older kOps versions, or older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.
@@ -118,8 +118,7 @@ git fetch origin # sync back up

## Wait for CI job to complete

-The staging CI job should now see the tag, and build it (from the
-trusted prow cluster, using Google Cloud Build).
+The staging CI job should now see the tag, and build it (from the trusted prow cluster, using Google Cloud Build).

The job is here: https://testgrid.k8s.io/sig-cluster-lifecycle-kops#kops-postsubmit-push-to-staging
@@ -1,6 +1,6 @@
-## Release notes for kops 1.20 series
+## Release notes for kOps 1.20 series

-(The kops 1.20 release has not been released yet; this is a document to gather the notes prior to the release).
+(The kOps 1.20 release has not been released yet; this is a document to gather the notes prior to the release).

# Significant changes
@@ -16,7 +16,7 @@

# Deprecations

-* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kops 1.21.
+* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kOps 1.21.

* The [manifest based metrics server addon](https://github.com/kubernetes/kops/tree/master/addons/metrics-server) has been deprecated in favour of a configurable addon.
@@ -1,8 +1,6 @@
## Running in a shared VPC

-When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
-or NAT Gateway you can tell kOps to ignore egress. By default, kops creates a new subnet per zone and a new route table,
-but you can instead use a shared subnet (see [below](#shared-subnets)).
+When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway or NAT Gateway you can tell kOps to ignore egress. By default, kOps creates a new subnet per zone and a new route table, but you can instead use a shared subnet (see [below](#shared-subnets)).

1. Use `kops create cluster` with the `--vpc` argument for your existing VPC:
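That step typically looks something like the following sketch (VPC ID, CIDR, and names are placeholders, not values from the commit):

```bash
# Launch into an existing, shared VPC instead of creating a new one.
kops create cluster \
  --name=mycluster.example.com \
  --zones=us-east-1a \
  --vpc=vpc-0123456789abcdef0 \
  --network-cidr=172.20.0.0/16 \
  --state=s3://my-kops-state
```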
@@ -161,7 +159,7 @@ spec:

### Shared NAT Egress

-On AWS in private [topology](topology.md), kops creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.
+On AWS in private [topology](topology.md), kOps creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.

If you don't want to use NAT Gateways but have setup [EC2 NAT Instances](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) in your VPC that you can share, it's possible to specify the IDs of said instances and have `kops`/`kubernetes` use them.
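Specifying an existing NGW or NAT instance is done on the private subnet in the cluster spec; a hedged sketch with placeholder names and IDs (check the shared-VPC docs for the exact field):

```bash
kops edit cluster mycluster.example.com
# In spec.subnets, point the private subnet at the existing gateway, e.g.:
#   - name: us-east-1a
#     type: Private
#     zone: us-east-1a
#     egress: nat-0123456789abcdef0   # or i-0123456789abcdef0 for a NAT instance
kops update cluster mycluster.example.com --yes
```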
@@ -85,7 +85,7 @@ Wait for the cluster to initialize. If all goes well, you should have a working

#### Editing the cluster

-It's possible to use Terraform to make changes to your infrastructure as defined by kops. In the example below we'd like to change some cluster configs:
+It's possible to use Terraform to make changes to your infrastructure as defined by kOps. In the example below we'd like to change some cluster configs:

```
$ kops edit cluster \
@@ -1,6 +1,6 @@
# Upgrading kubernetes

-Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kops.
+Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kOps.
The kOps `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
@@ -1,7 +1,7 @@
# Managinging Instance Groups

kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
-an AutoScalingGroup.
+an Auto Scaling group.

By default, a cluster has:
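To see which instance groups a cluster has and inspect one of them (cluster name is a placeholder):

```bash
kops get instancegroups --name mycluster.example.com   # list the cluster's instance groups
kops edit ig nodes --name mycluster.example.com        # inspect or change the default "nodes" group
```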
@@ -1,7 +1,7 @@
# Upgrading from kube-up to kOps

kOps let you upgrade an existing kubernetes cluster installed using kube-up, to a cluster managed by
-kops.
+kOps.

** This is a slightly risky procedure, so we recommend backing up important data before proceeding.
Take a snapshot of your EBS volumes; export all your data from kubectl etc. **
@@ -1,6 +1,6 @@
## Office Hours

-kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
+kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.

Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [12 noon (Eastern Time)/9 am (Pacific Time)](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12) during weeks with odd "numbers". To check this weeks' number, run: `date +%V`. If the response is odd, join us on Friday for office hours!
@@ -13,7 +13,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.

## Compatibility Matrix

-| kops version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
+| kOps version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
|---------------|------------|------------|------------|------------|------------|
| 1.18.0 | ✔ | ✔ | ✔ | ✔ | ✔ |
| 1.17.x | ✔ | ✔ | ✔ | ✔ | ⚫ |
@@ -23,7 +23,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.


Use the latest version of kOps for all releases of Kubernetes, with the caveat
-that higher versions of Kubernetes are not _officially_ supported by kops.
+that higher versions of Kubernetes are not _officially_ supported by kOps.
Releases which are ~~crossed out~~ _should_ work, but we suggest they be upgraded soon.

## Release Schedule
@@ -1,3 +1,3 @@
This directory contains docs that add contextual help to error messages.

-The links are baked into kops, and thus we cannot rename or move these files (at least not quickly).
+The links are baked into kOps, and thus we cannot rename or move these files (at least not quickly).
@@ -3,7 +3,7 @@
Kops has established a deprecation policy for Kubernetes version support.
Kops will remove support for Kubernetes versions as follows:

-| kops version | Removes support for Kubernetes version |
+| kOps version | Removes support for Kubernetes version |
|--------------|----------------------------------------|
| 1.18 | 1.8 and below |
| 1.19 | 1.9 and 1.10 |
@@ -1,5 +1,5 @@
# Kops Upgrade Recommended

-You are running a version of kops that we recommend upgrading.
+You are running a version of kOps that we recommend upgrading.

The latest releases are available from [Github Releases](https://github.com/kubernetes/kops/releases)