mirror of https://github.com/kubernetes/kops.git
Merge pull request #7907 from johngmyers/fix-doc
Change doc cross-references from absolute to relative links
commit f803a2e093

@@ -119,7 +119,7 @@ data:
### Creating a new cluster with IAM Authenticator on.
-* Create a cluster following the [AWS getting started guide](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
+* Create a cluster following the [AWS getting started guide](getting_started/aws.md)
* When you reach the "Customize Cluster Configuration" section of the guide, modify the cluster spec and add the Authentication and Authorization configs to the YAML config (see the sketch after this list).
* Continue following the cluster creation guide to build the cluster.
* :warning: When the cluster first comes up, the aws-iam-authenticator pods will be in a bad state.
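In practice, the "Customize Cluster Configuration" step amounts to adding two short blocks to the cluster spec. A minimal sketch (the field layout follows the kops cluster spec; `${NAME}` is the cluster name from your environment):

```bash
# Open the cluster spec for editing; kops applies the change on save.
kops edit cluster ${NAME}

# Inside the editor, add these two blocks under `spec:`
# (sketch only; verify against the authenticator docs for your kops version):
#
#   authentication:
#     aws: {}
#   authorization:
#     rbac: {}
```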

@@ -260,7 +260,7 @@ By enabling this feature you are instructing two things:
- master nodes will bypass the bootstrap token but they _will_ build kubeconfigs with unique usernames in the system:nodes group _(this ensures the master nodes conform to the node authorization mode https://kubernetes.io/docs/reference/access-authn-authz/node/)_
- secondly the nodes will be configured to use a bootstrap token located by default at `/var/lib/kubelet/bootstrap-kubeconfig` _(though this can be overridden in the kubelet spec)_. The nodes will wait until a bootstrap file is created and, once it is available, attempt to provision the node.
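Because kops only consumes the bootstrap file here, whatever provisions the nodes must write a standard kubeconfig containing a bootstrap token to that path. A hedged sketch of such a file (the API server address, CA path, and token value are all placeholders, not values kops generates for you):

```bash
# Hypothetical provisioning step run on a worker node by a third-party process.
# The token is a standard Kubernetes bootstrap token: <token-id>.<token-secret>.
cat <<'EOF' > /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://api.internal.mycluster.example.com   # placeholder
    certificate-authority: /srv/kubernetes/ca.crt        # placeholder
contexts:
- name: kubelet-bootstrap
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: kubelet-bootstrap
users:
- name: kubelet-bootstrap
  user:
    token: abcdef.0123456789abcdef                       # placeholder
EOF
```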
-**Note** enabling bootstrap tokens does not provision bootstrap tokens for the worker nodes. Under this configuration it is assumed a third-party process is provisioning the tokens on behalf of the worker nodes. For the full setup please read [Node Authorizer Service](https://github.com/kubernetes/kops/blob/master/docs/node_authorization.md)
+**Note** enabling bootstrap tokens does not provision bootstrap tokens for the worker nodes. Under this configuration it is assumed a third-party process is provisioning the tokens on behalf of the worker nodes. For the full setup please read [Node Authorizer Service](node_authorization.md)
#### Max Requests Inflight

@@ -164,7 +164,7 @@ Values:
* unset means to use the default policy, which is currently to apply OS security updates unless they require a reboot
```
-Additionally, consider adding documentation of your new feature to the docs in [/docs](/). If your feature touches configuration options in `config` or `cluster.spec`, document them in [cluster_spec.md](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md).
+Additionally, consider adding documentation of your new feature to the docs in [/docs](/). If your feature touches configuration options in `config` or `cluster.spec`, document them in [cluster_spec.md](../cluster_spec.md).
## Testing

@@ -47,7 +47,7 @@ export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state
Some things to note from here:
- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name is "coreosbasedkopscluster.k8s.local".
-- Because we'll use gossip DNS instead of a valid DNS domain on AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered on our AWS tutorials). You can see more about this on our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
+- Because we'll use gossip DNS instead of a valid DNS domain on AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered on our AWS tutorials). You can see more about this on our [Getting Started Doc.](../getting_started/aws.md)
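Putting those notes together, the working environment for this exercise can be prepared up front (values taken from the snippets above):

```bash
# Must end in ".k8s.local" so kops falls back to gossip DNS instead of Route53.
export NAME=coreosbasedkopscluster.k8s.local

# S3 bucket backing the kops state store (created ahead of time).
export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state
```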
## COREOS IMAGE INFORMATION:

@@ -48,7 +48,7 @@ export KOPS_STATE_STORE=s3://my-kops-s3-bucket-for-cluster-state
Some things to note from here:
- "NAME" will be an environment variable that we'll use from now in order to refer to our cluster name. For this practical exercise, our cluster name is "privatekopscluster.k8s.local".
-- Because we'll use gossip DNS instead of a valid DNS domain on AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered on our AWS tutorials). You can see more about this on our [Getting Started Doc.](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md)
+- Because we'll use gossip DNS instead of a valid DNS domain on AWS ROUTE53 service, our cluster name needs to include the string **".k8s.local"** at the end (this is covered on our AWS tutorials). You can see more about this on our [Getting Started Doc.](../getting_started/aws.md)
## KOPS PRIVATE CLUSTER CREATION:

@@ -77,7 +77,7 @@ A few things to note here:
- The "--topology private" argument will ensure that all our instances will have private IP's and no public IP's from amazon.
- We are including the arguments "--node-size" and "--master-size" to specify the "instance types" for both our masters and worker nodes.
- Because we are just doing a simple lab, we are using "t2.micro" machines. Please DON'T USE t2.micro on real production systems. Start with "t2.medium" as a minimum realistic/workable machine type.
-- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](https://github.com/kubernetes/kops/blob/master/docs/networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
+- And finally, the "--networking kopeio-vxlan" argument. With the private networking model, we need to tell kops which networking subsystem to use. More information about kops supported networking models can be obtained from the [KOPS Kubernetes Networking Documentation](../networking.md). For this exercise we'll use "kopeio-vxlan" (or "kopeio" for short).
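Assembled into one command, the arguments discussed above look roughly like the following sketch (the `--zones` value is an illustrative assumption; the rest are the flags named in this list):

```bash
kops create cluster \
  --topology private \
  --networking kopeio-vxlan \
  --master-size t2.micro \
  --node-size t2.micro \
  --zones us-east-1a \
  ${NAME}
```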
**NOTE**: You can add the "--bastion" argument here if you are not using "gossip dns" and want to create the bastion from the start, but if you are using "gossip-dns" this will make the cluster fail (this is a bug we are correcting now). For the moment don't use "--bastion" when using gossip DNS. We'll show you how to get around this by first creating the private cluster, then creating the bastion instance group once the cluster is running.

@@ -62,7 +62,7 @@ Note that keys and values are strings, so you need quotes around values that YAML
To apply changes, you'll need to do a `kops update cluster` and then likely a `kops rolling-update cluster`
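That pair of commands typically looks like this (`--yes` makes each command apply its changes rather than just previewing them):

```bash
kops update cluster ${NAME} --yes
kops rolling-update cluster ${NAME} --yes
```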
-For AWS, if `kops rolling-update cluster --instance-group nodes` returns "No rolling-update required." the [kops rolling-update cluster](https://github.com/kubernetes/kops/blob/8bc48ef10a44a3e481b604f5dbb663420c68dcab/docs/cli/kops_rolling-update_cluster.md) `--force` flag can be used to force a rolling update, even when no changes are identified.
+For AWS, if `kops rolling-update cluster --instance-group nodes` returns "No rolling-update required." the [kops rolling-update cluster](cli/kops_rolling-update_cluster.md) `--force` flag can be used to force a rolling update, even when no changes are identified.
Example:
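A minimal sketch combining the flags named above (without `--yes` the command only previews; adding it performs the update):

```bash
kops rolling-update cluster --instance-group nodes --force --yes
```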

@@ -120,7 +120,7 @@ curl -s https://coreos.com/dist/aws/aws-stable.json | jq -r '.["us-east-1"].hvm'
* You can specify the name using the `coreos.com` owner alias, for example `coreos.com/CoreOS-stable-1409.8.0-hvm` or leave it at `595879546273/CoreOS-stable-1409.8.0-hvm` if you prefer to do so.
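For instance, the image can be handed to kops at cluster-creation time; a sketch (only the `--image` value comes from the discussion above, the remaining flags are assumptions for a runnable command):

```bash
kops create cluster \
  --image 595879546273/CoreOS-stable-1409.8.0-hvm \
  --zones us-east-1a \
  ${NAME}
```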
-As part of our documentation, you will find a practical exercise using CoreOS with KOPS. See the file ["coreos-kops-tests-multimaster.md"](https://github.com/kubernetes/kops/blob/master/docs/examples/coreos-kops-tests-multimaster.md) in the "examples" directory. This exercise covers not only using kops with CoreOS, but also a practical view of KOPS with a multi-master kubernetes setup.
+As part of our documentation, you will find a practical exercise using CoreOS with KOPS. See the file ["coreos-kops-tests-multimaster.md"](../examples/coreos-kops-tests-multimaster.md) in the "examples" directory. This exercise covers not only using kops with CoreOS, but also a practical view of KOPS with a multi-master kubernetes setup.
> Note: SSH username for CoreOS-based instances will be `core`

@@ -5,18 +5,18 @@
* kops 1.12 enables etcd-manager by default. For kubernetes 1.12 (and later) we
default to etcd3. We also enable TLS for etcd communications when using
etcd-manager. **The upgrade is therefore disruptive to the masters.** More information is in the [etcd migration
-documentation](https://github.com/kubernetes/kops/blob/master/docs/etcd3-migration.md).
+documentation](../etcd3-migration.md).
This documentation is useful even if you are already using etcd3 with TLS.
* Components are no longer allowed to interact with etcd directly. Calico will
be switched to use CRDs instead of talking directly to etcd. This is a disruptive
upgrade, please read the calico notes in the [etcd migration
-documentation](https://github.com/kubernetes/kops/blob/master/docs/etcd3-migration.md)
+documentation](../etcd3-migration.md)
# Required Actions
* Please back-up important data before upgrading, as the [etcd2 to etcd3
-migration](https://github.com/kubernetes/kops/blob/master/docs/etcd3-migration.md)
+migration](../etcd3-migration.md)
is higher risk than most upgrades. **The upgrade is disruptive to the masters, see notes above.**
* Note that **the upgrade for Calico users is disruptive**, because it requires
switching from direct-etcd-storage to CRD backed storage.

@@ -30,7 +30,7 @@ Please follow all the backup steps before attempting it. Please read the
[etcd admin guide](https://github.com/coreos/etcd/blob/v2.2.1/Documentation/admin_guide.md)
before proceeding.
-We can migrate from a single-master cluster to a multi-master cluster, but this is a complicated operation. It is easier to create a multi-master cluster using Kops (described [here](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)). If possible, try to plan this at time of cluster creation.
+We can migrate from a single-master cluster to a multi-master cluster, but this is a complicated operation. It is easier to create a multi-master cluster using Kops (described [here](operations/high_availability.md)). If possible, try to plan this at time of cluster creation.
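For reference, creating a multi-master cluster up front is a single command; a sketch (zone names are placeholders; `--master-zones` controls how many masters are created and where):

```bash
kops create cluster \
  --master-zones us-east-1a,us-east-1b,us-east-1c \
  --zones us-east-1a,us-east-1b,us-east-1c \
  ${NAME}
```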
During this procedure, you will experience **downtime** on the API server, but
not on the end user services. During this downtime, existing pods will continue

@@ -66,7 +66,7 @@ export DNSCONTROLLER_IMAGE=cnastorage/dns-controller
## Setting up cluster state storage
Kops requires the state of clusters to be stored in a storage service. AWS S3 is the default option.
-More about using AWS S3 for the cluster state store can be found at "Cluster State storage" on this [page](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md).
+More about using AWS S3 for the cluster state store can be found at "Cluster State storage" on this [page](getting_started/aws.md).
Users can also set up their own S3 server and use the following instructions to use user-defined S3-compatible applications for cluster state storage.
This is recommended if you don't have an AWS account or you don't want to store the state of your clusters on public cloud storage.
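As a rough sketch of what pointing kops at a self-hosted, S3-compatible endpoint can look like (endpoint, credentials, and bucket name are all placeholders; check the surrounding instructions for the exact variables your kops version honors):

```bash
# Self-hosted S3-compatible endpoint (e.g. a minio server).
export S3_ENDPOINT=https://s3.example.internal:9000
export S3_ACCESS_KEY_ID=my-access-key        # placeholder credential
export S3_SECRET_ACCESS_KEY=my-secret-key    # placeholder credential

# State store bucket on that endpoint.
export KOPS_STATE_STORE=s3://my-kops-state-bucket
```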