mirror of https://github.com/kubernetes/kops.git

Docs: More List fixes

parent dec416ffb0
commit 4b41eb2435
@@ -94,6 +94,7 @@ running the updated image.

`kops rolling-update cluster --name $CLUSTER --yes`
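
A minimal usage sketch, assuming you want to preview which nodes need replacement before applying; the dry-run step is a workflow assumption, not part of the excerpt above:

```
kops rolling-update cluster --name $CLUSTER          # dry run: lists nodes that need replacement
kops rolling-update cluster --name $CLUSTER --yes    # performs the rolling replacement
```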

## Resources / Notes

- https://aws.amazon.com/de/security/security-bulletins/AWS-2018-013/
- https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html
- https://coreos.com/blog/container-linux-meltdown-patch
@@ -98,6 +98,7 @@ means we don't need to manage etcd directly. We just try to make sure that some

each volume mounted with etcd running and DNS set correctly. That is the job of protokube.

Protokube:

* discovers EBS volumes that hold etcd data (using tags)
* tries to safe_format_and_mount them
* if successful in mounting the volume, it will write a manifest for etcd into /etc/kubernetes/manifests
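
A rough sketch of how you might observe this on a master node; the mount point and volume names below are assumptions, not exact kops output:

```
# run on a master node
lsblk                            # the tagged EBS volume appears as an attached block device
mount | grep master-vol          # mounted by protokube after safe_format_and_mount
ls /etc/kubernetes/manifests/    # kubelet picks up the generated etcd manifest from here
```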
@@ -5,6 +5,7 @@ the requirements, upgrade process, and configuration to install

Calico Version 3.

## Requirements

- The main requirement for Calico Version 3 is the etcd v3 API, available
with etcd server version 3.
- Another requirement is for the Kubernetes version to be a minimum of v1.7.0.
@@ -18,6 +19,7 @@ to remain on Calico V2 or update to etcdv3.

## Configuration of a new cluster

To ensure a new cluster will have Calico Version 3 installed, the following
two configuration options should be set:

- `spec.etcdClusters.etcdMembers[0].Version` (Main cluster) should be
set to a version of etcd greater than 3.x, or the default version
needs to be greater than 3.x.
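
For illustration, a cluster-spec fragment along these lines pins the main etcd cluster to a 3.x release; the exact field placement is an assumption here and can differ between kops versions:

```
etcdClusters:
- name: main
  version: 3.2.24          # illustrative 3.x release; any etcd 3.x satisfies the requirement
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
```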
@@ -46,6 +48,7 @@ Assuming your cluster meets the requirements it is possible to upgrade

your Calico Kops cluster.

A few notes about the upgrade:

- During the first portion of the migration, while the calico-kube-controllers
pod is running its Init, no new policies will be applied, though already
applied policies will remain active.
@@ -55,6 +55,7 @@ test-infra/jobs/ci-kubernetes-e2e-kops-aws.sh |& tee /tmp/testlog

```

This:

* Brings up a cluster using the latest `kops` build from `master` (see below for how to use your current build)
* Runs the default series of tests (which the Kubernetes team is [also
running here](https://k8s-testgrid.appspot.com/google-aws#kops-aws)) (see below for how to override the test list)
@@ -24,9 +24,9 @@ found in the source code. Follow these steps to run the update process:

2. Run `make gomod` to start the update process. If this step is
successful, the imported dependency will be added to the `vendor`
subdirectory.
-1. Commit any changes, including changes to the `vendor` directory,
+3. Commit any changes, including changes to the `vendor` directory,
`go.mod`, and `go.sum`.
-1. Open a pull request with these changes separately from other work
+4. Open a pull request with these changes separately from other work
so that it is easier to review.
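
A hedged illustration of the full flow; the module path and version below are placeholders, not a real dependency:

```
# after adding the new import to the code, one possible sequence is:
go get github.com/example/dependency@v1.2.3
make gomod                      # updates go.mod, go.sum, and vendor/
git add go.mod go.sum vendor/
git commit -m "Vendor github.com/example/dependency v1.2.3"
```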

## Updating a dependency in the vendor directory (e.g. aws-sdk-go)

@@ -36,5 +36,5 @@ so that it is easier to review.

3. Review the changes to ensure that they are as intended / trustworthy.
4. Commit any changes, including changes to the `vendor` directory,
`go.mod` and `go.sum`.
-1. Open a pull request with these changes separately from other work so that it
+5. Open a pull request with these changes separately from other work so that it
is easier to review. Please include any significant changes you observed.
@@ -17,6 +17,7 @@ tooling into composable tooling that can be upgraded (or even used) separately.

## Limitations

The current approach for managing etcd makes certain tasks hard:

* upgrades/downgrades between etcd versions
* resizing the cluster
@@ -787,6 +787,7 @@ Your cluster mycluster01.kopsclustertest.example.org is ready

You can see how your cluster scaled up to 3 nodes.

**SCALING RECOMMENDATIONS:**

- Always think ahead. If you want the ability to scale up into all available zones in the region, add them to the "--zones=" argument when using the "kops create cluster" command. Example: --zones=us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e. That will make things simpler later.
- For the masters, always use "odd" numbers starting from 3. As with many other clustered systems, odd numbers starting from 3 are the proper way to create a fully redundant multi-master solution. In the specific case of kops, you add masters by adding zones to the "--master-zones" argument of the "kops create cluster" command.
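
A hedged example of what that looks like at cluster-creation time; the cluster name, state bucket, and zone list are placeholders:

```
kops create cluster \
  --name mycluster01.kopsclustertest.example.org \
  --state s3://my-kops-state-bucket \
  --zones us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e \
  --master-zones us-east-1a,us-east-1b,us-east-1c \
  --node-count 3
```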
@@ -136,6 +136,7 @@ kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --

```

**Explanation of this command:**

- This command adds to our cluster definition a new instance group called "bastions" with the "Bastion" role on the AWS subnet "utility-us-east-1a". Note that the "Bastion" role needs the first letter to be a capital (Bastion=ok, bastion=not ok).
- The subnet "utility-us-east-1a" was created when we created our cluster the first time. kops adds the "utility-" prefix to all subnets created in the specified AZs. In other words, if we instructed kops to deploy our instances on us-east-1a, us-east-1b and us-east-1c, kops will create the subnets "utility-us-east-1a", "utility-us-east-1b" and "utility-us-east-1c". Because we need to tell kops where to deploy our bastion (or bastions), we need to specify the subnet.
@@ -307,7 +308,7 @@ ip-172-20-74-55.ec2.internal master True

Your cluster privatekopscluster.k8s.local is ready
```

-## MAKING THE BASTION LAYER "HIGHLY AVAILABLE".
+## MAKING THE BASTION LAYER "HIGHLY AVAILABLE"

If for any reason any "legendary monster from the comics" decides to destroy the Amazon AZ that contains our bastion, we'll basically be unable to access our instances. Let's add some H.A. to our bastion layer and force Amazon to deploy additional bastion instances in other availability zones.
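
One possible way to do that, sketched here as an assumption rather than this guide's exact steps, is to edit the bastion instance group and spread it across several utility subnets:

```
# hypothetical spec fragment after "kops edit ig bastions"
spec:
  role: Bastion
  minSize: 3
  maxSize: 3
  subnets:
  - utility-us-east-1a
  - utility-us-east-1b
  - utility-us-east-1c
```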
@@ -48,6 +48,7 @@ Examples:

`cloud-labels` specifies tags for instance groups in AWS. The supported format is a CSV list of key=value pairs.
Keys and values must not contain embedded commas but they may contain equals signs ('=') as long as the field is
quoted:

* `--cloud-labels "Project=\"Name=Foo Customer=Acme\",Owner=Jane Doe"` will be parsed as {Project:"Name=Foo Customer=Acme",
Owner: "Jane Doe"}
@@ -53,6 +53,7 @@ kops delete cluster my-cluster.example.com --yes

## Features Still in Development

kops for DigitalOcean currently does not support these features:

* multi master kubernetes clusters
* rolling update for instance groups
* multi-region clusters
@@ -61,6 +61,7 @@ kops delete cluster my-cluster.k8s.local --yes

```

#### Optional flags

* `--os-kubelet-ignore-az=true` Nova and Cinder have different availability zones; more information in the [Kubernetes docs](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#block-storage)
* `--os-octavia=true` Use the Octavia load balancer API instead of the old LBaaS v2 API.
* `--os-dns-servers=8.8.8.8,8.8.4.4` Defines the DNS servers to be used in your cluster if your OpenStack setup does not have a working DNS setup by default
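
A hedged example of a create-cluster invocation combining these flags; the cluster name and zone are placeholders:

```
kops create cluster my-cluster.k8s.local \
  --cloud openstack \
  --zones nova \
  --os-kubelet-ignore-az=true \
  --os-octavia=true \
  --os-dns-servers=8.8.8.8,8.8.4.4
```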
@@ -8,6 +8,7 @@ By default Kops creates two IAM roles for the cluster: one for the masters, and

Work has been done on scoping permissions to the minimum required for a functional Kubernetes Cluster, resulting in a fully revised set of IAM policies for both master & compute nodes.

An example of the new IAM policies can be found here:

- Master Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_master_strict.json
- Compute Nodes: https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_node_strict.json
@@ -19,7 +19,7 @@ An example:

```
...
-spec:
+spec:
  cloudLabels:
    team: me
    project: ion
@@ -30,7 +30,7 @@ spec:

```
...
-spec:
+spec:
  cloudLabels:
    team: me
    project: ion
@@ -50,7 +50,7 @@ An example:

```
...
-spec:
+spec:
  nodeLabels:
    spot: "false"
...
@@ -217,6 +217,7 @@ semver. The channels tool keeps track of the current version installed (current

of an annotation on the `kube-system` namespace).

The channels tool updates the installed version when any of the following conditions apply:

* The version declared in the addon manifest is greater than the currently installed version.
* The version numbers match, but the ids are different.
* The version number and ids match, but the hash of the addon's manifest has changed since it was installed.
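
For reference, a sketch of where `version` and `id` live in an addon manifest; the names and paths below are placeholders, not a real addon:

```
kind: Addons
metadata:
  name: example
spec:
  addons:
  - name: example.addons.k8s.io
    version: 1.0.0
    id: k8s-1.8
    manifest: example/v1.0.0.yaml
```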
@@ -15,6 +15,7 @@ At some point you will almost definitely want to upgrade the Kubernetes version

Take a snapshot of your EBS volumes; export all your data from kubectl etc.**

Limitations:

* kops splits etcd onto two volumes now: `main` and `events`. We will keep the `main` data, but you will lose your events history.
* Doubtless others not yet known - please open issues if you encounter them!
@@ -187,6 +188,7 @@ You will also need to release the old ElasticIP manually.

This method provides zero downtime when migrating a cluster from `kube-up` to `kops`. It does so by creating a logically separate `kops`-managed cluster in the existing `kube-up` VPC and then swapping the DNS entries (or your reverse proxy's upstream) to point to the new cluster's services.

Limitations:

- If you're using the default networking (`kubenet`), there is an account limit of 50 entries in a VPC's route table. If your cluster contains more than ~25 nodes, this strategy, as-is, will not work.
  + One workaround is to shift to a CNI-compatible overlay network such as `weave`, `kopeio-vxlan` (`kopeio`), `calico`, `canal`, `romana`, or similar. See the [kops networking docs](../networking.md) for more information.
  + One solution is to gradually shift traffic from one cluster to the other, scaling down the number of nodes on the old cluster and scaling up the number of nodes on the new cluster.
@@ -9,6 +9,7 @@ Each component (kubelet, kube-apiserver...) is explicitly configured: We reuse t

where we can, and we create additional types for the configuration of additional components.

kops can:

* create a cluster
* upgrade a cluster
* reconfigure the components
@@ -32,7 +33,7 @@ There are two primary types:

## State Store

-The API objects are currently stored in an abstraction called a "state store". [state.md](/docs/state.md) has more detail.
+The API objects are currently stored in an abstraction called a ["state store"](/state.md) has more detail.

## Configuration inference
@@ -15,10 +15,10 @@

#### 3. Release kops to github

-* Darwin binary (generated)
-* Linux binary (generated)
-* Source code (zip)
-* Source code (tar.gz)
+* Darwin binary (generated)
+* Linux binary (generated)
+* Source code (zip)
+* Source code (tar.gz)

#### 4. Release kops to homebrew
@@ -1,13 +1,15 @@

*Please see [1.6-NOTES.md](1.6-NOTES.md) for known issues*

Features:

* `kops get` can now output a complete cluster spec (thanks @geojaz)
* `kops create` can set master/node volume size (thanks @matthew-marchetti)
* Add ability to set cross-subnet mode in Calico (thanks @ottoyiu)
* Make Weave MTU configurable and configure jumbo frame support for new clusters on AWS (thanks @jordanjennings)
* Initial support for external-dns project (thanks @sethpollack)

Fixes:

* Fix calico bootstrapping problems (thanks @ottoyiu, @ozdanborne)
* Update to latest release of calico (thanks @mad01)
* Update canal manifests for 1.6 & RBAC (thanks @heschlie)
@@ -41,6 +41,3 @@ kubectl delete pods -lk8s-app=kube-dns --namespace=kube-system

kubectl delete pods -lk8s-app=kube-dns-autoscaler --namespace=kube-system
pkill -f kube-controller-manager
```
@@ -206,7 +206,7 @@ Please note:

* kops won't create a route-table at all if we're not creating subnets.
* In the example above the first subnet is using a shared NAT Gateway while the
second one is using a shared NAT Instance.

### Externally Managed Egress

If you are using an unsupported egress configuration in your VPC, _kops_ can be told to ignore egress by using a configuration like:
@@ -3,7 +3,7 @@

This document describes how to go from a single-master cluster (created by kops)
to a multi-master cluster. If you are using etcd-manager, you only need to perform some of the migration steps.

-# etcd-manager
+# etcd-manager

If you are using etcd-manager, just perform the steps in this section. etcd-manager is the default for kops 1.12. etcd-manager makes the upgrade to multi-master much smoother.
@@ -16,12 +16,12 @@ The list references steps of the next section. To upgrade from a single master t

- add the masters to your etcd cluster definition (both in the sections named main and events)
- Skip Steps 3 and 4
- Now you are ready to update the AWS configuration:
-- `kops update cluster your-cluster-name`
+- `kops update cluster your-cluster-name`
- AWS will launch two new masters; they will be discovered and then configured by etcd-manager
- check with `kubectl get nodes` to see everything is ready
- Cleanup (Step 5) to do a rolling restart of all masters (just in case)
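
A hedged sketch of that update/verify cycle; `your-cluster-name` is the placeholder used above:

```
kops update cluster your-cluster-name          # preview the changes
kops update cluster your-cluster-name --yes    # apply: AWS launches the new masters
kubectl get nodes                              # confirm the new masters register as Ready
```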

-# Etcd without etcd-manager
+# Etcd without etcd-manager

## 0 - Warnings
@@ -11,6 +11,7 @@ The field in the clusterSpec, `.NonMasqueradeCIDR`, captures the IP

range of the cluster.

Within this IP range, smaller IP ranges are then carved out for:

* Service IPs - as defined by `.serviceClusterIPRange`
* Pod IPs - as defined by `.kubeControllerManager.clusterCIDR`
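
For a concrete picture, an illustrative cluster-spec fragment; the CIDR values mirror common kops defaults but are assumptions here, not requirements:

```
nonMasqueradeCIDR: 100.64.0.0/10      # the whole cluster IP range
serviceClusterIPRange: 100.64.0.0/13  # carved out for Service IPs
kubeControllerManager:
  clusterCIDR: 100.96.0.0/11          # carved out for Pod IPs
```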
@@ -14,8 +14,8 @@ P2: Rarely occurring issues and features that will bring vSphere support closer

**Notes:**

-* Effort estimation includes the fix for an issue or implementation of a feature, testing, and generating a PR.
-* There are a few issues related to startup and the base image. If we can resolve the "Use PhotonOS for vSphere node template" issue first and replace cloud-init with guestinfo, those issues **might** get resolved automatically. But further investigation is needed, and fixed issues will need verification and testing.
+* Effort estimation includes the fix for an issue or implementation of a feature, testing, and generating a PR.
+* There are a few issues related to startup and the base image. If we can resolve the "Use PhotonOS for vSphere node template" issue first and replace cloud-init with guestinfo, those issues **might** get resolved automatically. But further investigation is needed, and fixed issues will need verification and testing.

|Priority|Task|Type (bug, feature, test)|Effort estimate (in days)|Remarks|
|--- |--- |--- |--- |--- |
@@ -47,10 +47,10 @@ List of all kops commands and how they behave for vSphere cloud provider, as of

# Column explanation

-* Command, option and usage example are self-explanatory.
-* vSphere support: whether or not the command is supported for vSphere cloud provider (Yes/No), followed by current status of that command and explanation of any failures.
-* Graceful termination needed: If the command will not be supported, does it need additional code to fail gracefully for vSphere provider?
-* Remark: Miscellaneous comments about the command.
+* Command, option and usage example are self-explanatory.
+* vSphere support: whether or not the command is supported for vSphere cloud provider (Yes/No), followed by current status of that command and explanation of any failures.
+* Graceful termination needed: If the command will not be supported, does it need additional code to fail gracefully for vSphere provider?
+* Remark: Miscellaneous comments about the command.

|Command|Option|Usage example|vSphere support|Graceful termination needed (if not fixed)|Remark|
|--- |--- |--- |--- |--- |--- |
@@ -25,13 +25,13 @@ __Issues__

- Help read and triage issues, assist when possible.
- Point out issues that are duplicates, out of date, etc.
-- Even if you don't have tagging permissions, make a note and tag maintainers (`/close`, `/dupe #127`).
+- Even if you don't have tagging permissions, make a note and tag maintainers (`/close`, `/dupe #127`).

__Pull Requests__

- Read and review the code. Leave comments, questions, and critiques (`/lgtm`).
- Download, compile, and run the code and make sure the tests pass (`make test`).
-- Also verify that the new feature seems sane, follows best architectural patterns, and includes tests.
+- Also verify that the new feature seems sane, follows best architectural patterns, and includes tests.

This repository uses the Kubernetes bots. See a full list of the commands [here](
https://go.k8s.io/bot-commands).