mirror of https://github.com/kubernetes/kops.git
Update kops as kOps and remove extra spaces from .md files
- Updated kops as kOps in .md files.
- Remove extra spaces from .md files
parent 20cb30828b
commit 3033caa5e7
@@ -14,8 +14,7 @@ Instructions for reporting a vulnerability can be found on the
 ## Supported Versions
 Information about supported kOps versions and the Kubernetes versions they support can be found on the
-[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the
-[Kubernetes version and version skew support policy] page on the Kubernetes website.
+[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website.
 [kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
 [kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
@@ -11,7 +11,7 @@ complete lifecycle of Ambassador in your cluster. It also automates many of the
 Ambassador. Once installed, the Operator will automatically complete rapid installations and seamless upgrades to new
 versions of Ambassador.
-This addon deploys Ambassador Operator which installs Ambassador in a kops cluster.
+This addon deploys Ambassador Operator which installs Ambassador in a kOps cluster.
 ##### Note:
 The operator requires widely scoped permissions in order to install and manage Ambassador's lifecycle. Both, the
@@ -32,7 +32,7 @@ kubectl apply -f ${addon}
 An enhanced script which also adds the IAM policies is included here [cluster-autoscaler.sh](cluster-autoscaler.sh)
 Question: Which ASG group should be autoscaled?
-Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.
+Answer: By default, kOps creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.
 Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM Policy. Which IAM Role should the Policy be attached to?
 Answer: Kops creates two Roles, nodes.$CLUSTER_NAME and masters.$CLUSTER_NAME. Currently the example scripts run the autoscaler process on the k8s master node, so the IAM Policy should be assigned to masters.$CLUSTER_NAME (substituting that variable for your actual cluster name).
@@ -1,4 +1,4 @@
-# Deploying Citrix Ingress Controller through KOPS
+# Deploying Citrix Ingress Controller through kOps
 This guide explains how to deploy [Citrix Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) through KOPS addon.
@@ -1,8 +1,8 @@
 # Download kOps config spec file
-KOPS operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.
+kOps operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.
-If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kops create the cluster for you , `kops create -f spec_file` in a completely unattended manner.
+If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kOps create the cluster for you, `kops create -f spec_file` in a completely unattended manner.
 Let us say you create your cluster with the following configuration options:
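As an illustration of the workflow the documentation above describes, a minimal sketch of exporting a cluster spec and recreating the cluster from it; the cluster name and state store below are placeholders, not values from this commit:

```shell
# Assumes an existing cluster named example.k8s.local and an S3 state store (both placeholders).
export KOPS_STATE_STORE=s3://example-state-store
kops get cluster example.k8s.local -o yaml > cluster-spec.yaml

# Later, recreate the cluster unattended from the saved spec.
# (Depending on your setup you may also need to re-create the SSH key secret first.)
kops create -f cluster-spec.yaml
kops update cluster example.k8s.local --yes
```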
@@ -43,7 +43,7 @@ For more information on how to use and modify the configurations see [here](../m
 ## Managing instance groups
-You can also manage instance groups in separate YAML files as well. The command `kops get --name $NAME -o yaml > $NAME.yml` exports the entire cluster. An option is to have a YAML file for the cluster, and individual YAML files for the instance groups. This allows you to do stuff like:
+You can also manage instance groups in separate YAML files as well. The command `kops get --name $NAME -o yaml > $NAME.yml` exports the entire cluster. An option is to have a YAML file for the cluster, and individual YAML files for the instance groups. This allows you to do stuff like:
 ```shell
 if ! kops get cluster --name "$NAME"; then
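The shell snippet above is truncated by the hunk; a sketch of the create-or-update pattern it implies, assuming `$NAME` is the cluster name and `$NAME.yml` holds the exported YAML:

```shell
if ! kops get cluster --name "$NAME"; then
  # First run: register the cluster and instance groups from the YAML file.
  kops create -f "$NAME.yml"
else
  # Subsequent runs: replace the stored spec with the file's contents.
  kops replace -f "$NAME.yml"
fi
kops update cluster --name "$NAME" --yes
```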
@@ -1,12 +1,12 @@
 # Authentication
-kOps has support for configuring authentication systems. This should not be used with kubernetes versions
+kOps has support for configuring authentication systems. This should not be used with kubernetes versions
 before 1.8.5 because of a serious bug with apimachinery [#55022](https://github.com/kubernetes/kubernetes/issues/55022).
 ## kopeio authentication
 If you want to experiment with kopeio authentication, you can use
-`--authentication kopeio`. However please be aware that kopeio authentication
+`--authentication kopeio`. However please be aware that kopeio authentication
 has not yet been formally released, and thus there is not a lot of upstream
 documentation.
@@ -1,13 +1,13 @@
 ## Kubernetes Bootstrap
-This is an overview of how a Kubernetes cluster comes up, when using kops.
+This is an overview of how a Kubernetes cluster comes up, when using kOps.
 ## From spec to complete configuration
 The kOps tool itself takes the (minimal) spec of a cluster that the user specifies,
 and computes a complete configuration, setting defaults where values are not specified,
-and deriving appropriate dependencies. The "complete" specification includes the set
-of all flags that will be passed to all components. All decisions about how to install the
+and deriving appropriate dependencies. The "complete" specification includes the set
+of all flags that will be passed to all components. All decisions about how to install the
 cluster are made at this stage, and thus every decision can in theory be changed if the user
 specifies a value in the spec.
@@ -22,7 +22,7 @@ On both AWS & GCE, everything (nodes & masters) runs in an ASG/MIG; this means t
 nodeup is the component that installs packages and sets up the OS, sufficiently for
 Kubelet. The core requirements are:
-* Docker must be installed. nodeup will install Docker 1.13.1, the version of Docker tested with Kubernetes 1.8
+* Docker must be installed. nodeup will install Docker 1.13.1, the version of Docker tested with Kubernetes 1.8
 * Kubelet, which is installed a systemd service
 In addition, nodeup installs:
@@ -31,7 +31,7 @@ In addition, nodeup installs:
 ## /etc/kubernetes/manifests
-kubelet starts pods as controlled by the files in /etc/kubernetes/manifests These files are created
+kubelet starts pods as controlled by the files in /etc/kubernetes/manifests. These files are created
 by nodeup and protokube (ideally all by protokube, but currently split between the two).
 These pods are declared using the standard k8s manifests, just as if they were stored in the API.
@@ -59,19 +59,19 @@ doesn't fit into `additionalUserData` or `hooks`.
 Kubelet starts up, starts (and restarts) all the containers in /etc/kubernetes/manifests.
 It also tries to contact the API server (which the master kubelet will itself eventually start),
-register the node. Once a node is registered, kube-controller-manager will allocate it a PodCIDR,
-which is an allocation of the k8s-network IP range. kube-controller-manager updates the node, setting
-the PodCIDR field. Once kubelet sees this allocation, it will set up the
-local bridge with this CIDR, which allows docker to start. Before this happens, only pods
+register the node. Once a node is registered, kube-controller-manager will allocate it a PodCIDR,
+which is an allocation of the k8s-network IP range. kube-controller-manager updates the node, setting
+the PodCIDR field. Once kubelet sees this allocation, it will set up the
+local bridge with this CIDR, which allows docker to start. Before this happens, only pods
 that have hostNetwork will work - so all the "core" containers run with hostNetwork=true.
 ## api-server bringup
-APIServer also listens on the HTTPS port (443) on all interfaces. This is a secured endpoint,
-and requires valid authentication/authorization to use it. This is the endpoint that node kubelets
+APIServer also listens on the HTTPS port (443) on all interfaces. This is a secured endpoint,
+and requires valid authentication/authorization to use it. This is the endpoint that node kubelets
 will reach, and also that end-users will reach.
-kOps uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
+kOps uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
 /etc/kubernetes/manifests) includes annotations that will cause the dns-controller to create the
 records. It creates `api.internal.mycluster.com` for use inside the cluster (using InternalIP addresses),
 and it creates `api.mycluster.com` for use outside the cluster (using ExternalIP addresses).
@@ -89,7 +89,7 @@ kOps follows CoreOS's recommend procedure for [bring-up of etcd on clouds](https
 * We set up etcd with a static cluster, with those DNS names
 Because the data is persistent and the cluster membership is also a static set of DNS names, this
-means we don't need to manage etcd directly. We just try to make sure that some master always have
+means we don't need to manage etcd directly. We just try to make sure that some master always have
 each volume mounted with etcd running and DNS set correctly. That is the job of protokube.
 Protokube:
@@ -107,8 +107,8 @@ Most of this has focused on things that happen on the master, but the node bring
 * nodeup installs docker & kubelet
 * in /etc/kubernetes/manifests, we have kube-proxy
-So kubelet will start up, as will kube-proxy. It will try to reach the api-server on the internal DNS name,
-and once the master is up it will succeed. Then:
+So kubelet will start up, as will kube-proxy. It will try to reach the api-server on the internal DNS name,
+and once the master is up it will succeed. Then:
 * kubelet creates a Node object representing itself
 * kube-controller-manager sees the node creation and assigns it a PodCIDR
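A quick way to observe the PodCIDR assignment described above is to query the node objects directly; this is plain kubectl, not part of the change:

```shell
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```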
@@ -1,6 +1,6 @@
 ## Changing a cluster configuration
-(This procedure is currently unnecessarily convoluted. Expect it to get streamlined!)
+(This procedure is currently unnecessarily convoluted. Expect it to get streamlined!)
 * Edit the cluster spec: `kops edit cluster ${NAME}`
@@ -7,7 +7,7 @@ Output shell completion code for the given shell (bash or zsh).
 ### Synopsis
-Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of kops commands. This can be done by sourcing it from the .bash_profile.
+Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of kOps commands. This can be done by sourcing it from the .bash_profile.
 Note: this requires the bash-completion framework, which is not installed by default on Mac. Once installed, bash_completion must be evaluated. This can be done by adding the following line to the .bash_profile
@@ -113,7 +113,7 @@ kops create cluster [flags]
 --out string Path to write any local output
 -o, --output string Output format. One of json|yaml. Used with the --dry-run flag.
 --project string Project to use (must be set on GCE)
---ssh-access strings Restrict SSH access to this CIDR. If not set, access will not be restricted by IP. (default [0.0.0.0/0])
+--ssh-access strings Restrict SSH access to this CIDR. If not set, access will not be restricted by IP. (default [0.0.0.0/0])
 --ssh-public-key string SSH public key to use (defaults to ~/.ssh/id_rsa.pub on AWS)
 --subnets strings Set to use shared subnets
 --target string Valid targets: direct, terraform, cloudformation. Set this flag to terraform if you want kops to generate terraform (default "direct")
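For context on the `--ssh-access` flag touched above, a sketch of a create invocation that restricts SSH to one CIDR; the cluster name, zone, and CIDR are placeholders:

```shell
# Assumes KOPS_STATE_STORE is already exported; all values are illustrative.
kops create cluster \
  --name example.k8s.local \
  --zones us-east-1a \
  --ssh-access 203.0.113.0/24 \
  --ssh-public-key ~/.ssh/id_rsa.pub
```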
@@ -7,7 +7,7 @@ Delete instancegroup
 ### Synopsis
-Delete an instancegroup configuration. kops has the concept of "instance groups", which are a group of similar virtual machines. On AWS, they map to an AutoScalingGroup. An ig work either as a Kubernetes master or a node.
+Delete an instancegroup configuration. kops has the concept of "instance groups", which are a group of similar virtual machines. On AWS, they map to an AutoScalingGroup. An ig work either as a Kubernetes master or a node.
 ```
 kops delete instancegroup [flags]
@@ -22,8 +22,7 @@ spec:
 ```
-When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type`
-field should be `Public` or `Internal`.
+When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type` field should be `Public` or `Internal`.
 Also, you can add precreated additional security groups to the load balancer by setting `additionalSecurityGroups`.
@@ -37,7 +36,7 @@ spec:
 - sg-xxxxxxxx
 ```
-Additionally, you can increase idle timeout of the load balancer by setting its `idleTimeoutSeconds`. The default idle timeout is 5 minutes, with a maximum of 3600 seconds (60 minutes) being allowed by AWS. Note this value is ignored for load balancer Class `Network`.
+Additionally, you can increase idle timeout of the load balancer by setting its `idleTimeoutSeconds`. The default idle timeout is 5 minutes, with a maximum of 3600 seconds (60 minutes) being allowed by AWS. Note this value is ignored for load balancer Class `Network`.
 For more information see [configuring idle timeouts](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
 ```yaml
@@ -84,7 +83,7 @@ spec:
 {{ kops_feature_table(kops_added_default='1.19') }}
-You can choose to have a Network Load Balancer instead of a Classic Load Balancer. The `class` field should be either `Network` or `Classic` (default).
+You can choose to have a Network Load Balancer instead of a Classic Load Balancer. The `class` field should be either `Network` or `Classic` (default).
 **Note**: changing the class of load balancer in an existing cluster is a disruptive operation. Until the masters have gone through a rolling update, new connections to the apiserver will fail due to the old master's TLS certificates containing the old load balancer's IP address.
 ```yaml
@@ -307,7 +306,7 @@ spec:
 **Note**: The auditPolicyFile is needed. If the flag is omitted, no events are logged.
-You could use the [fileAssets](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#fileassets) feature to push an advanced audit policy file on the master nodes.
+You could use the [fileAssets](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#fileassets) feature to push an advanced audit policy file on the master nodes.
 Example policy file can be found [here](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/audit/audit-policy.yaml)
@@ -2,11 +2,11 @@
 HTTP Forward Proxy Support
 ==========================
-It is possible to launch a Kubernetes cluster from behind an http forward proxy ("corporate proxy"). To do so, you will need to configure the `egressProxy` for the cluster.
+It is possible to launch a Kubernetes cluster from behind an http forward proxy ("corporate proxy"). To do so, you will need to configure the `egressProxy` for the cluster.
-It is assumed the proxy is already existing. If you want a private topology on AWS, for example, with a proxy instead of a NAT instance, you'll need to create the proxy yourself. See [Running in a shared VPC](run_in_existing_vpc.md).
+It is assumed the proxy is already existing. If you want a private topology on AWS, for example, with a proxy instead of a NAT instance, you'll need to create the proxy yourself. See [Running in a shared VPC](run_in_existing_vpc.md).
-This configuration only manages proxy configurations for kOps and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.
+This configuration only manages proxy configurations for kOps and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.
 ## Configuration
@@ -24,7 +24,7 @@ Currently we assume the same configuration for http and https traffic.
 ## Proxy Excludes
-Most clients will blindly try to use the proxy to make all calls, even to localhost and the local subnet, unless configured otherwise. Some basic exclusions necessary for successful launch and operation are added for you at initial cluster creation. If you wish to add additional exclusions, add or edit `egressProxy.excludes` with a comma separated list of hostnames. Matching is based on suffix, ie, `corp.local` will match `images.corp.local`, and `.corp.local` will match `corp.local` and `images.corp.local`, following typical `no_proxy` environment variable conventions.
+Most clients will blindly try to use the proxy to make all calls, even to localhost and the local subnet, unless configured otherwise. Some basic exclusions necessary for successful launch and operation are added for you at initial cluster creation. If you wish to add additional exclusions, add or edit `egressProxy.excludes` with a comma separated list of hostnames. Matching is based on suffix, ie, `corp.local` will match `images.corp.local`, and `.corp.local` will match `corp.local` and `images.corp.local`, following typical `no_proxy` environment variable conventions.
 ``` yaml
 spec:
@@ -37,4 +37,4 @@ spec:
 ## AWS VPC Endpoints and S3 access
-If you are hosting on AWS have configured VPC "Endpoints" for S3 or other services, you may want to add these to the `spec.egressProxy.excludes`. Keep in mind that the S3 bucket must be in the same region as the VPC for it to be accessible via the endpoint.
+If you are hosting on AWS have configured VPC "Endpoints" for S3 or other services, you may want to add these to the `spec.egressProxy.excludes`. Keep in mind that the S3 bucket must be in the same region as the VPC for it to be accessible via the endpoint.
@@ -36,7 +36,7 @@ You can also [install from source](development/building.md).
 ## kubectl
-`kubectl` is the CLI tool to manage and operate Kubernetes clusters. You can install it as follows.
+`kubectl` is the CLI tool to manage and operate Kubernetes clusters. You can install it as follows.
 ### MacOS
@@ -25,7 +25,7 @@ Autoscaling groups automatically include multiple [scaling processes](https://do
 that keep our ASGs healthy. In some cases, you may want to disable certain scaling activities.
 An example of this is if you are running multiple AZs in an ASG while using a Kubernetes Autoscaler.
-The autoscaler will remove specific instances that are not being used. In some cases, the `AZRebalance` process
+The autoscaler will remove specific instances that are not being used. In some cases, the `AZRebalance` process
 will rescale the ASG without warning.
 ```YAML
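The YAML example that follows this text in the original file is cut off by the hunk; a rough sketch of the kind of configuration it refers to, assuming the relevant instance group field is `suspendProcesses`:

```YAML
# Sketch only: suspend the AZRebalance scaling process for this instance group.
spec:
  suspendProcesses:
  - AZRebalance
```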
@@ -144,12 +144,10 @@ which would end up in a drop-in file on nodes of the instance group in question.
 ## mixedInstancesPolicy (AWS Only)
-A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to
-select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.
+A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.
 Instance groups with a mixedInstancesPolicy can be generated with the `kops toolbox instance-selector` command.
-The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types
-that match your criteria.
+The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types that match your criteria.
 ```bash
 kops toolbox instance-selector --vcpus 4 --flexible --usage-class spot --instance-group-name spotgroup
@@ -187,7 +185,7 @@ spec:
 ### Instances
-Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group
+Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group.
 ### onDemandAllocationStrategy
@@ -308,7 +308,7 @@ Thus a manifest will actually look like this:
 Note that the two addons have the same version, but a different `kubernetesVersion` selector.
 But they have different `id` values; addons with matching semvers but different `id`s will
-be upgraded. (We will never downgrade to an older semver though, regardless of `id`)
+be upgraded. (We will never downgrade to an older semver though, regardless of `id`)
 So now in the above scenario after the downgrade to 1.5, although the semver is the same,
 the id will not match, and the `pre-k8s-16` will be installed. (And when we upgrade back
@@ -52,7 +52,7 @@ kops get cluster ${OLD_NAME} -oyaml
 ## Move resources to a new cluster
-The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things this step does:
+The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things this step does:
 * It resizes existing autoscaling groups to size 0
 * It will stop the existing master
@@ -20,7 +20,7 @@ Backups and restores of etcd on kOps are covered in [etcd_backup_restore_encrypt
 ## Direct Data Access
-It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kops.
+It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kOps.
 1\. Connect to an etcd-manager pod
@@ -36,7 +36,7 @@ You can also rerun [these steps](../development/building.md) if previously built
 ## Upgrading Kubernetes
-Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.
+Upgrading Kubernetes is easy with kOps. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.
 The `kops upgrade` command also automates checking for and applying updates.
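A compact sketch of the upgrade flow described above; the cluster name is a placeholder:

```shell
kops edit cluster example.k8s.local            # bump spec.kubernetesVersion
kops update cluster example.k8s.local --yes    # apply the new configuration
kops rolling-update cluster example.k8s.local --yes   # roll instances onto the new version
```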
@@ -19,7 +19,7 @@ kOps can:
 Some users will need or prefer to use tools like Terraform for cluster configuration,
 so kOps can also output the equivalent configuration for those tools also (currently just Terraform, others
-planned). After creation with your preferred tool, you can still use the rest of the kOps tooling to operate
+planned). After creation with your preferred tool, you can still use the rest of the kOps tooling to operate
 your cluster.
 ## Primary API types
@@ -29,7 +29,7 @@ There are two primary types:
 * Cluster represents the overall cluster configuration (such as the version of kubernetes we are running), and contains default values for the individual nodes.
 * InstanceGroup is a group of instances with similar configuration that are managed together.
-Typically this is a group of Nodes or a single master instance. On AWS, it is currently implemented by an AutoScalingGroup.
+Typically this is a group of Nodes or a single master instance. On AWS, it is currently implemented by an AutoScalingGroup.
 ## State Store
@@ -40,15 +40,14 @@ The API objects are currently stored in an abstraction called a ["state store"](
 Configuration of a kubernetes cluster is actually relatively complicated: there are a lot of options, and many combinations
 must be configured consistently with each other.
-Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values
-that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).
+Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).
 Because more values are inferred than with simpler k8s objects, we record the user-created spec separately from the
-complete inferred specification. This means we can keep track of which values were actually set by the user, vs just being
+complete inferred specification. This means we can keep track of which values were actually set by the user, vs just being
 default values; this lets us avoid some of the problems e.g. with ClusterIP on a Service.
 We aim to remove any computation logic from the downstream pieces (i.e. nodeup & protokube); this means there is a
-single source of truth and it is practical to implement alternatives to nodeup & protokube. For example, components
+single source of truth and it is practical to implement alternatives to nodeup & protokube. For example, components
 such as kubelet might read their configuration directly from the state store in future, eliminating the need to
 have a management process that copies values around.
@@ -23,7 +23,7 @@ preparing for a new kubernetes release, we will try to advance the master branch
 to focus on the new functionality, and start cherry-picking back more selectively
 to the release branches only as needed.
-Generally we don't encourage users to run older kops versions, or older
+Generally we don't encourage users to run older kOps versions, or older
 branches, because newer versions of kOps should remain compatible with older
 versions of Kubernetes.
@@ -118,8 +118,7 @@ git fetch origin # sync back up
 ## Wait for CI job to complete
-The staging CI job should now see the tag, and build it (from the
-trusted prow cluster, using Google Cloud Build).
+The staging CI job should now see the tag, and build it (from the trusted prow cluster, using Google Cloud Build).
 The job is here: https://testgrid.k8s.io/sig-cluster-lifecycle-kops#kops-postsubmit-push-to-staging
@@ -1,6 +1,6 @@
-## Release notes for kops 1.20 series
+## Release notes for kOps 1.20 series
-(The kops 1.20 release has not been released yet; this is a document to gather the notes prior to the release).
+(The kOps 1.20 release has not been released yet; this is a document to gather the notes prior to the release).
 # Significant changes
@@ -16,7 +16,7 @@
 # Deprecations
-* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kops 1.21.
+* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kOps 1.21.
 * The [manifest based metrics server addon](https://github.com/kubernetes/kops/tree/master/addons/metrics-server) has been deprecated in favour of a configurable addon.
@@ -1,8 +1,6 @@
 ## Running in a shared VPC
-When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
-or NAT Gateway you can tell kOps to ignore egress. By default, kops creates a new subnet per zone and a new route table,
-but you can instead use a shared subnet (see [below](#shared-subnets)).
+When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway or NAT Gateway you can tell kOps to ignore egress. By default, kOps creates a new subnet per zone and a new route table, but you can instead use a shared subnet (see [below](#shared-subnets)).
 1. Use `kops create cluster` with the `--vpc` argument for your existing VPC:
@@ -161,7 +159,7 @@ spec:
 ### Shared NAT Egress
-On AWS in private [topology](topology.md), kops creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.
+On AWS in private [topology](topology.md), kOps creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.
 If you don't want to use NAT Gateways but have setup [EC2 NAT Instances](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) in your VPC that you can share, it's possible to specify the IDs of said instances and have `kops`/`kubernetes` use them.
@@ -191,9 +189,9 @@ spec:
 Please note:
 * You must specify pre-created subnets for either all of the subnets or none of them.
-* kOps won't alter your existing subnets. They must be correctly set up with route tables, etc. The
+* kOps won't alter your existing subnets. They must be correctly set up with route tables, etc. The
 Public or Utility subnets should have public IPs and an Internet Gateway configured as their default route
-in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway
+in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway
 configured as their default route.
 * kOps won't create a route-table at all if it's not creating subnets.
 * In the example above the first subnet is using a shared NAT Gateway while the
@@ -1,10 +1,10 @@
 # The State Store
-kOps has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
+kOps has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
 here not only when you first create a cluster, but also you can change the state and apply changes to a running cluster.
 Eventually, kubernetes services will also pull from the state store, so that we don't need to marshal all our
-configuration through a channel like user-data. (This is currently done for secrets and SSL keys, for example,
+configuration through a channel like user-data. (This is currently done for secrets and SSL keys, for example,
 though we have to copy the data from the state store to a file where components like kubelet can read them).
 The state store uses kOps's VFS implementation, so can in theory be stored anywhere.
@@ -22,7 +22,7 @@ The state store is just files; you can copy the files down and put them into git
 ## {statestore}/config
-One of the most important files in the state store is the top-level config file. This file stores the main
+One of the most important files in the state store is the top-level config file. This file stores the main
 configuration for your cluster (instance types, zones, etc)\
 When you run `kops create cluster`, we create a state store entry for you based on the command line options you specify.
@@ -39,7 +39,7 @@ reconfiguring your cluster - for example just `kops create cluster` after a dry-
 ## State store configuration
-There are a few ways to configure your state store. In priority order:
+There are a few ways to configure your state store. In priority order:
 + command line argument `--state s3://yourstatestore`
 + environment variable `export KOPS_STATE_STORE=s3://yourstatestore`
@@ -85,7 +85,7 @@ Wait for the cluster to initialize. If all goes well, you should have a working
 #### Editing the cluster
-It's possible to use Terraform to make changes to your infrastructure as defined by kops. In the example below we'd like to change some cluster configs:
+It's possible to use Terraform to make changes to your infrastructure as defined by kOps. In the example below we'd like to change some cluster configs:
 ```
 $ kops edit cluster \
@@ -47,7 +47,7 @@ More information about [networking options](networking.md) can be found in our d
 ## Changing Topology of the API server
 To change the ELB that fronts the API server from Internet facing to Internal only there are a few steps to accomplish
-The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kOps recreate the ELB for us.
+The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kOps recreate the ELB for us.
 ### Steps to change the ELB from Internet-Facing to Internal
 - Edit the cluster: `kops edit cluster $NAME`
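For reference, the spec fragment these steps end up changing is the API load balancer type (the same `type: Public`/`Internal` field discussed earlier in this document); a sketch with unrelated fields omitted:

```yaml
spec:
  api:
    loadBalancer:
      type: Internal
```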
@@ -1,12 +1,12 @@
 # Upgrading kubernetes
-Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kops.
+Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kOps.
 The kOps `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
-as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
-are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
+as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
+are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
 `1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kOps `1.19` series release.
 We aim to release the next major version of kOps within a few weeks of the equivalent major release of kubernetes,
-so kOps `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
+so kOps `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
 (alpha or beta) is available at the kubernetes release, for early adopters.
 Upgrading kubernetes is similar to changing the image on an InstanceGroup, except that the kubernetes version is
@@ -1,14 +1,14 @@
 # Managinging Instance Groups
-kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
-an AutoScalingGroup.
+kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
+an Auto Scaling group.
 By default, a cluster has:
 * An instance group called `nodes` spanning all the zones; these instances are your workers.
 * One instance group for each master zone, called `master-<zone>` (e.g. `master-us-east-1c`). These normally have
-minimum size and maximum size = 1, so they will run a single instance. We do this so that the cloud will
-always relaunch masters, even if everything is terminated at once. We have an instance group per zone
+minimum size and maximum size = 1, so they will run a single instance. We do this so that the cloud will
+always relaunch masters, even if everything is terminated at once. We have an instance group per zone
 because we need to force the cloud to run an instance in every zone, so we can mount the master volumes - we
 cannot do that across zones.
@@ -37,8 +37,8 @@ You can also use the `kops get ig` alias.
 ## Change the instance type in an instance group
-First you edit the instance group spec, using `kops edit ig nodes`. Change the machine type to `t2.large`,
-for example. Now if you `kops get ig`, you will see the large instance size. Note though that these changes
+First you edit the instance group spec, using `kops edit ig nodes`. Change the machine type to `t2.large`,
+for example. Now if you `kops get ig`, you will see the large instance size. Note though that these changes
 have not yet been applied (this may change soon though!).
 To preview the change:
@@ -76,7 +76,7 @@ master-us-central1-a Master n1-standard-1 1 1 us-central1
 nodes Node n1-standard-2 2 2 us-central1
 ```
-Let's change the number of nodes to 3. We'll edit the InstanceGroup configuration using `kops edit` (which
+Let's change the number of nodes to 3. We'll edit the InstanceGroup configuration using `kops edit` (which
 should be very familiar to you if you've used `kubectl edit`). `kops edit ig nodes` will open
 the InstanceGroup in your editor, looking a bit like this:
@@ -99,11 +99,11 @@ spec:
 - us-central1-a
 ```
-Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your editor. If you wanted to change
-the image or the machineType, you could do that here as well. There are actually a lot more fields,
-but most of them have their default values, so won't show up unless they are set. The general approach is the same though.
+Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your editor. If you wanted to change
+the image or the machineType, you could do that here as well. There are actually a lot more fields,
+but most of them have their default values, so won't show up unless they are set. The general approach is the same though.
-On saving you'll note that nothing happens. Although you've changed the model, you need to tell kOps to
+On saving you'll note that nothing happens. Although you've changed the model, you need to tell kOps to
 apply your changes to the cloud.
 We use the same `kops update cluster` command that we used when initially creating the cluster; when
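After the edit described above, the relevant part of the InstanceGroup would look roughly like this (a sketch; unrelated fields omitted and values taken from the surrounding GCE example):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: n1-standard-2
  maxSize: 3
  minSize: 3
```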
@@ -122,7 +122,7 @@ This is saying that we will alter the `TargetSize` property of the `InstanceGrou
 That's what we want, so we `kops update cluster --yes`.
 kOps will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
-which will then boot and join the cluster. Within a minute or so you should see the new node join:
+which will then boot and join the cluster. Within a minute or so you should see the new node join:
 ```
 > kubectl get nodes
@@ -138,7 +138,7 @@ nodes-z2cz Ready 1s v1.7.2
 ## Changing the image
-That was a fairly simple change, because we didn't have to reboot the nodes. Most changes though do
+That was a fairly simple change, because we didn't have to reboot the nodes. Most changes though do
 require rolling your instances - this is actually a deliberate design decision, in that we are aiming
 for immutable nodes. An example is changing your image. We're using `cos-stable`, which is Google's
 Container OS. Let's try Debian Stretch instead.
@@ -180,15 +180,15 @@ Will modify resources:
 Note that the `BootDiskImage` is indeed set to the debian 9 image you requested.
 `kops update cluster --yes` will now apply the change, but if you were to run `kubectl get nodes` you would see
-that the instances had not yet been reconfigured. There's a hint at the bottom:
+that the instances had not yet been reconfigured. There's a hint at the bottom:
 ```
 Changes may require instances to restart: kops rolling-update cluster`
 ```
-These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kOps
+These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kOps
 can perform a rolling update to minimize disruption, but even so you might not want to perform the update right away;
-you might want to make more changes or you might want to wait for off-peak hours. You might just want to wait for
+you might want to make more changes or you might want to wait for off-peak hours. You might just want to wait for
 the instances to terminate naturally - new instances will come up with the new configuration - though if you're not
 using preemptible/spot instances you might be waiting for a long time.
@@ -333,7 +333,7 @@ $ df -h | grep nvme[12]
 ## Creating a new instance group
-Suppose you want to add a new group of nodes, perhaps with a different instance type. You do this using `kops create ig <InstanceGroupName> --subnet <zone(s)>`. Currently the
+Suppose you want to add a new group of nodes, perhaps with a different instance type. You do this using `kops create ig <InstanceGroupName> --subnet <zone(s)>`. Currently the
 `--subnet` flag is required, and it receives the zone(s) of the subnet(s) in which the instance group will be. The command opens an editor with a skeleton configuration, allowing you to edit it before creation.
 So the procedure is:
@@ -519,7 +519,7 @@ spec:
 If `openstack.kops.io/osVolumeSize` is not set it will default to the minimum disk specified by the image.
 # Working with InstanceGroups
-The kOps InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
+The kOps InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
 can change the instance type you're using, the number of nodes you have, the OS image you're running - essentially
 all the per-node configuration is in the InstanceGroup.
@@ -1,7 +1,7 @@
 # Upgrading from kube-up to kOps
 kOps let you upgrade an existing kubernetes cluster installed using kube-up, to a cluster managed by
-kops.
+kOps.
 ** This is a slightly risky procedure, so we recommend backing up important data before proceeding.
 Take a snapshot of your EBS volumes; export all your data from kubectl etc. **
@@ -28,7 +28,7 @@ configuration.
 Make sure you have set `export KOPS_STATE_STORE=s3://<mybucket>`
-Then import the cluster; setting `--name` and `--region` to match the old cluster. If you're not sure
+Then import the cluster; setting `--name` and `--region` to match the old cluster. If you're not sure
 of the old cluster name, you can find it by looking at the `KubernetesCluster` tag on your AWS resources.
 ```
@@ -39,7 +39,7 @@ kops import cluster --region ${REGION} --name ${OLD_NAME}
 ## Verify the cluster configuration
-Now have a look at the cluster configuration, to make sure it looks right. If it doesn't, please
+Now have a look at the cluster configuration, to make sure it looks right. If it doesn't, please
 open an issue.
 ```
@@ -48,7 +48,7 @@ kops get cluster ${OLD_NAME} -oyaml
 ## Move resources to a new cluster
-The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things
+The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things
 this step does:
 * It resizes existing autoscaling groups to size 0
@@ -1,6 +1,6 @@
 ## Office Hours
-kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
+kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.
 Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [12 noon (Eastern Time)/9 am (Pacific Time)](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12) during weeks with odd "numbers". To check this weeks' number, run: `date +%V`. If the response is odd, join us on Friday for office hours!
@@ -13,7 +13,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.
 ## Compatibility Matrix
-| kops version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
+| kOps version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
 |---------------|------------|------------|------------|------------|------------|
 | 1.18.0 | ✔ | ✔ | ✔ | ✔ | ✔ |
 | 1.17.x | ✔ | ✔ | ✔ | ✔ | ⚫ |
@@ -23,7 +23,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.
 Use the latest version of kOps for all releases of Kubernetes, with the caveat
-that higher versions of Kubernetes are not _officially_ supported by kops.
+that higher versions of Kubernetes are not _officially_ supported by kOps.
 Releases which are ~~crossed out~~ _should_ work, but we suggest they be upgraded soon.
 ## Release Schedule
@@ -1,3 +1,3 @@
 This directory contains docs that add contextual help to error messages.
-The links are baked into kops, and thus we cannot rename or move these files (at least not quickly).
+The links are baked into kOps, and thus we cannot rename or move these files (at least not quickly).
@@ -3,7 +3,7 @@
 Kops has established a deprecation policy for Kubernetes version support.
 Kops will remove support for Kubernetes versions as follows:
-| kops version | Removes support for Kubernetes version |
+| kOps version | Removes support for Kubernetes version |
 |--------------|----------------------------------------|
 | 1.18 | 1.8 and below |
 | 1.19 | 1.9 and 1.10 |
@@ -1,5 +1,5 @@
 # Kops Upgrade Recommended
-You are running a version of kops that we recommend upgrading.
+You are running a version of kOps that we recommend upgrading.
 The latest releases are available from [Github Releases](https://github.com/kubernetes/kops/releases)