mirror of https://github.com/kubernetes/kops.git

Merge pull request #9308 from rifelpet/docs-cleanup
Docs - add syntax highlighting + markdown cleanup

Commit 99adb5690b
@@ -6,7 +6,7 @@ If you download the config spec file on a running cluster that is configured the
 Let us say you create your cluster with the following configuration options:

-```
+```shell
 export KOPS_STATE_STORE=s3://k8s-us-west
 export CLOUD=aws
 export ZONE="us-west-1a"

@@ -19,7 +19,7 @@ export WORKER_SIZE="m4.large"
 ```

 Next you call the kops command to create the cluster in your terminal:

-```
+```shell
 kops create cluster $NAME \
   --cloud=$CLOUD \
   --zones="$ZONE" \
@@ -12,14 +12,14 @@ documentation.
 Alternatively, you can add this block to your cluster:

-```
+```yaml
 authentication:
   kopeio: {}
 ```

 For example:

-```
+```yaml
 apiVersion: kops.k8s.io/v1alpha2
 kind: Cluster
 metadata:
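The example in the hunk above is cut off at `metadata:`. For orientation, a minimal sketch of where the `authentication` block sits in a complete cluster spec might look like the following (the cluster name is illustrative, not taken from this diff):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: cluster.example.com   # illustrative name
spec:
  authentication:
    kopeio: {}                # enables the kopeio authentication hook, as described above
```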
@@ -36,14 +36,14 @@ spec:
 To turn on AWS IAM Authenticator, you'll need to add the stanza below
 to your cluster configuration.

-```
+```yaml
 authentication:
   aws: {}
 ```

 For example:

-```
+```yaml
 apiVersion: kops.k8s.io/v1alpha2
 kind: Cluster
 metadata:
@@ -60,7 +60,7 @@ For more details on AWS IAM authenticator please visit [kubernetes-sigs/aws-iam-
 Example config:

-```
+```yaml
 ---
 apiVersion: v1
 kind: ConfigMap
@@ -27,7 +27,7 @@ Default output format [None]:
 And export it correctly.

-```console
+```shell
 export AWS_REGION=$(aws configure get region)
 ```

@@ -43,7 +43,7 @@ Thanks to `gossip`, this section can be skipped safely as well.
 Since we are provisioning a cluster in AWS China Region, we need to create a dedicated S3 bucket in AWS China Region.

-```console
+```shell
 aws s3api create-bucket --bucket prefix-example-com-state-store --create-bucket-configuration LocationConstraint=$AWS_REGION
 ```
@@ -63,7 +63,7 @@ First, launch an instance in a private subnet which accesses the internet fast a
 Because the instance is launched in a private subnet, we need to ensure it can be reached on its private IP via a VPN or a bastion.

-```console
+```shell
 SUBNET_ID=<subnet id> # a private subnet
 SECURITY_GROUP_ID=<security group id>
 KEY_NAME=<key pair name on aws>

@@ -75,7 +75,7 @@ aws ec2 create-tags --resources ${INSTANCE_ID} --tags Key=k8s.io/role/imagebuild
 Now follow the documentation of [ImageBuilder][4] in `kube-deploy` to build the image.

-```console
+```shell
 go get k8s.io/kube-deploy/imagebuilder
 cd ${GOPATH}/src/k8s.io/kube-deploy/imagebuilder
@@ -103,7 +103,7 @@ No matter how to build the AMI, we get an AMI finally, e.g. `k8s-1.9-debian-jess
 Set up a few environment variables.

-```console
+```shell
 export NAME=example.k8s.local
 export KOPS_STATE_STORE=s3://prefix-example-com-state-store
 ```

@@ -112,13 +112,13 @@ export KOPS_STATE_STORE=s3://prefix-example-com-state-store
 We will need to note which availability zones are available to us. AWS China (Beijing) Region only has two availability zones. Like other regions with fewer than three AZs, it has [the same problem][6]: there is no true HA support with only two AZs. You can [add more master nodes](#add-more-master-nodes) to improve reliability within one AZ.

-```console
+```shell
 aws ec2 describe-availability-zones
 ```

 Below is a `create cluster` command which will create a complete internal cluster [in an existing VPC](run_in_existing_vpc.md). The command below will generate a cluster configuration, but not start building it. Make sure that you have generated an SSH key pair before creating the cluster.

-```console
+```shell
 VPC_ID=<vpc id>
 VPC_NETWORK_CIDR=<vpc network cidr> # e.g. 172.30.0.0/16
 AMI=<owner id/ami name> # e.g. 123456890/k8s-1.9-debian-jessie-amd64-hvm-ebs-2018-07-18

@@ -139,7 +139,7 @@ kops create cluster \
 Now that we have a cluster configuration, we adjust the subnet config to reuse [shared subnets](run_in_existing_vpc.md#shared-subnets) by editing the cluster description.

-```console
+```shell
 kops edit cluster $NAME
 ```
@@ -183,14 +183,14 @@ Please note that this mirror *MIGHT BE* not suitable for some cases. It's can be
 To achieve this, we can add more parameters to `kops create cluster`.

-```console
+```shell
 --master-zones ${AWS_REGION}a --master-count 3 \
 --zones ${AWS_REGION}a --node-count 2 \
 ```

 #### In two AZs

-```console
+```shell
 --master-zones ${AWS_REGION}a,${AWS_REGION}b --master-count 3 \
 --zones ${AWS_REGION}a,${AWS_REGION}b --node-count 2 \
 ```

@@ -201,7 +201,7 @@ To achieve this, we can add more parameters to `kops create cluster`.
 Here is a naive, incomplete attempt to provision a cluster in a way that minimizes the need for internet access, because even with a proxy or VPN it is still slow and always much more expensive than downloading from S3.

-```console
+```shell
 ## Setup vars

 KUBERNETES_VERSION=$(curl -fsSL --retry 5 "https://dl.k8s.io/release/stable.txt")

@@ -276,7 +276,7 @@ aws configure set default.s3.multipart_threshold $AWS_S3_DEFAULT_MULTIPART_THRES
 When creating the cluster, add these parameters to the command line.

-```console
+```shell
 --kubernetes-version https://s3.cn-north-1.amazonaws.com.cn/$ASSET_BUCKET/kubernetes/release/$KUBERNETES_VERSION
 ```
@@ -200,7 +200,7 @@ The resource identifier (ID) of something in your existing VPC that you would li
 This feature was originally envisioned to allow re-use of NAT gateways. In that case, the usage is as follows: although NAT gateways are "public"-facing resources, in the Cluster spec you must specify them in the private subnet section. One way to think about this is that you are specifying "egress", which is the default route out from this private subnet.

-```
+```yaml
 spec:
   subnets:
   - cidr: 10.20.64.0/21
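The subnet snippet above stops at the CIDR line. A hedged sketch of what a full private-subnet entry with `egress` pointing at an existing NAT gateway might look like (the subnet name, zone, and gateway ID are illustrative assumptions):

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a                  # illustrative subnet name
    type: Private
    zone: us-east-1a                  # illustrative availability zone
    egress: nat-0123456789abcdef0     # ID of the pre-existing NAT gateway to reuse
```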
@@ -217,7 +217,7 @@ spec:
 If you don't use NAT gateways or internet gateways, kops 1.12.0 introduced the "External" flag for egress, which forces kops to ignore egress for the subnet. This can be useful when other tools, such as virtual private gateways, are used to manage egress for the subnet. Please note that your cluster may need access to the internet upon creation, so egress must be available when initializing a cluster. This is intended for use when egress is managed external to kops, typically with an existing cluster.

-```
+```yaml
 spec:
   subnets:
   - cidr: 10.20.64.0/21

@@ -230,7 +230,7 @@ spec:
 ### publicIP
 The IP of an existing EIP that you would like to attach to the NAT gateway.

-```
+```yaml
 spec:
   subnets:
   - cidr: 10.20.64.0/21
@@ -431,7 +431,7 @@ Will result in the flag `--resolv-conf=` being built.
 To disable CPU CFS quota enforcement for containers that specify CPU limits (default true) we have to set the flag `--cpu-cfs-quota` to `false`
 on all the kubelets. We can specify that in the `kubelet` spec in our cluster.yml.

-```
+```yaml
 spec:
   kubelet:
     cpuCFSQuota: false

@@ -440,7 +440,7 @@ spec:
 ### Configure CPU CFS Period
 Configure CPU CFS quota period value (cpu.cfs_period_us). Example:

-```
+```yaml
 spec:
   kubelet:
     cpuCFSQuotaPeriod: "100ms"
@@ -450,7 +450,7 @@ spec:
 To use custom metrics in kubernetes as per the [custom metrics doc](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics),
 we have to set the flag `--enable-custom-metrics` to `true` on all the kubelets. We can specify that in the `kubelet` spec in our cluster.yml.

-```
+```yaml
 spec:
   kubelet:
     enableCustomMetrics: true

@@ -460,7 +460,7 @@ spec:
 Kops 1.12.0 added support for enabling CPU management policies in kubernetes as per the [cpu management doc](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies).
 We have to set the flag `--cpu-manager-policy` to the appropriate value on all the kubelets. This must be specified in the `kubelet` spec in our cluster.yml.

-```
+```yaml
 spec:
   kubelet:
     cpuManagerPolicy: static
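Taken together, the kubelet settings touched by the hunks above all live under a single `kubelet` block in the cluster spec. A combined sketch (the values are the ones shown above; grouping them like this is purely illustrative):

```yaml
spec:
  kubelet:
    cpuCFSQuota: false          # disables CPU CFS quota enforcement (--cpu-cfs-quota=false)
    cpuCFSQuotaPeriod: "100ms"  # CPU CFS quota period (cpu.cfs_period_us)
    enableCustomMetrics: true   # passes --enable-custom-metrics=true
    cpuManagerPolicy: static    # passes --cpu-manager-policy=static
```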
@@ -688,7 +688,7 @@ Hooks allow for the execution of an action before the installation of Kubernetes
 When creating a systemd unit hook using the `manifest` field, the hook system will construct a systemd unit file for you. It creates the `[Unit]` section, adding an automated description and setting `Before` and `Requires` values based on the `before` and `requires` fields. The value of the `manifest` field is used as the `[Service]` section of the unit file. To override this behavior, and instead specify the entire unit file yourself, you may specify `useRawManifest: true`. In this case, the contents of the `manifest` field will be used as a systemd unit, unmodified. The `before` and `requires` fields may not be used together with `useRawManifest`.

-```
+```yaml
 spec:
   # many sections removed
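The hook example above is truncated after `# many sections removed`. As a hedged sketch of the behaviour described in the paragraph above, the `manifest` field becomes the `[Service]` section of a generated unit, while `before` feeds the `[Unit]` section (the unit name and command are illustrative, not from this diff):

```yaml
spec:
  # many sections removed
  hooks:
  - name: disable-transparent-hugepages.service   # illustrative unit name
    before:
    - kubelet.service                             # generated [Unit] section gets Before=kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
```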
@@ -749,7 +749,7 @@ spec:
 Install Ceph

-```
+```yaml
 spec:
   # many sections removed
   hooks:

@@ -763,7 +763,7 @@ spec:
 Install cachefilesd

-```
+```yaml
 spec:
   # many sections removed
   hooks:
@@ -804,6 +804,7 @@ spec:
 ### disableSecurityGroupIngress
 If you are using aws as `cloudProvider`, you can disable the authorization of the ELB security group to the Kubernetes Nodes security group; in other words, kops will not add the security group rule.
 This can be useful to avoid the AWS limit of 50 rules per security group.
+
 ```yaml
 spec:
   cloudConfig:
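The snippet above ends at `cloudConfig:`. A minimal sketch of the completed block, assuming the field follows the heading's camelCase name:

```yaml
spec:
  cloudConfig:
    disableSecurityGroupIngress: true
```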
@@ -6,7 +6,7 @@ To add an option for Cilium to use the ENI IPAM mode.
 We want to make this an option, so we need to add a field to CiliumNetworkingSpec:

-```
+```go
 // Ipam specifies the IP address allocation mode to use.
 // Possible values are "crd" and "eni".
 // "eni" will use AWS native networking for pods. Eni requires masquerade to be set to false.

@@ -33,7 +33,7 @@ We should add some validation that the value entered is valid. We only accept `
 Validation is done in validation.go, and is fairly simple - we just add an error to a slice if something is not valid:

-```
+```go
 if v.Ipam != "" {
     // "azure" not supported by kops
     allErrs = append(allErrs, IsValidValue(fldPath.Child("ipam"), &v.Ipam, []string{"crd", "eni"})...)

@@ -56,7 +56,7 @@ one file per range of Kubernetes versions. These files are referenced by upup/pk
 First we add to the `cilium-config` ConfigMap:

-```
+```go
 {{ with .Ipam }}
 ipam: {{ . }}
 {{ if eq . "eni" }}

@@ -69,7 +69,7 @@ First we add to the `cilium-config` ConfigMap:
 Then we conditionally move cilium-operator to masters:

-```
+```go
 {{ if eq .Ipam "eni" }}
 nodeSelector:
   node-role.kubernetes.io/master: ""

@@ -95,7 +95,7 @@ passes the local IP address to `kubelet` when the `UsesSecondaryIP()` receiver o
 So we modify `UsesSecondaryIP()` to also return `true` when Cilium is in ENI mode:

-```
+```go
 return (c.Cluster.Spec.Networking.CNI != nil && c.Cluster.Spec.Networking.CNI.UsesSecondaryIP) || c.Cluster.Spec.Networking.AmazonVPC != nil || c.Cluster.Spec.Networking.LyftVPC != nil ||
     (c.Cluster.Spec.Networking.Cilium != nil && c.Cluster.Spec.Networking.Cilium.Ipam == kops.CiliumIpamEni)
 ```
@@ -105,13 +105,13 @@ return (c.Cluster.Spec.Networking.CNI != nil && c.Cluster.Spec.Networking.CNI.Us
 When Cilium is in ENI mode, `cilium-operator` on the master nodes needs additional IAM permissions. The masters' IAM permissions
 are built by `BuildAWSPolicyMaster()` in pkg/model/iam/iam_builder.go:

-```
+```go
 if b.Cluster.Spec.Networking != nil && b.Cluster.Spec.Networking.Cilium != nil && b.Cluster.Spec.Networking.Cilium.Ipam == kops.CiliumIpamEni {
     addCiliumEniPermissions(p, resource, b.Cluster.Spec.IAM.Legacy)
 }
 ```

-```
+```go
 func addCiliumEniPermissions(p *Policy, resource stringorslice.StringOrSlice, legacyIAM bool) {
     if legacyIAM {
         // Legacy IAM provides ec2:*, so no additional permissions required

@@ -146,7 +146,7 @@ Prior to testing this for real, it can be handy to write a few unit tests.
 We should test that validation works as we expect (in validation_test.go):

-```
+```go
 func Test_Validate_Cilium(t *testing.T) {
     grid := []struct {
         Cilium kops.CiliumNetworkingSpec
@@ -273,7 +273,7 @@ export KOPSCONTROLLER_IMAGE=${DOCKER_IMAGE_PREFIX}kops-controller:${KOPS_VERSION
 ## Using the feature

 Users would simply `kops edit cluster`, and add a value like:

-```
+```yaml
 spec:
   networking:
     cilium:
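The user-facing snippet above is truncated at `cilium:`. Based on the `Ipam` field added earlier in this document, a hedged sketch of what a user would end up with might be:

```yaml
spec:
  networking:
    cilium:
      ipam: eni   # the new field; per the comment above, ENI mode also requires masquerade to be disabled
```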
@@ -35,8 +35,9 @@ Sources:
 ### Check admission plugins

 Sources:
+
 * https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use
-* https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/config-default.sh)
+* https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/config-default.sh

 ### Check for new deprecated flags
@@ -1,9 +1,7 @@
 # Vendoring Go dependencies

-kops uses [dep](https://github.com/golang/dep) to manage vendored
-dependencies in most versions leading up to kops 1.14.
 kops uses [go mod](https://github.com/golang/go/wiki/Modules) to manage
-vendored dependencies in versions 1.15 and newer.
+vendored dependencies.

 ## Prerequisites

@@ -11,8 +9,7 @@ The following software must be installed prior to running the
 update commands:

 * [bazel](https://github.com/bazelbuild/bazel)
-* [dep](https://github.com/golang/dep) for kops 1.14 or older
-* [go mod](https://github.com/golang/go/wiki/Modules) for kops 1.15 and newer branches (including master)
+* [go mod](https://github.com/golang/go/wiki/Modules)
 * [hg](https://www.mercurial-scm.org/wiki/Download)

 ## Adding a dependency to the vendor directory
@@ -111,7 +111,7 @@ aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.
 Note: The NS values here are for the **SUBDOMAIN**

-```
+```json
 {
   "Comment": "Create a subdomain NS record in the parent domain",
   "Changes": [
@@ -5,7 +5,7 @@
 ## OSX From Homebrew

-```console
+```shell
 brew update && brew install kops
 ```

@@ -14,7 +14,7 @@ The `kops` binary is also available via our [releases](https://github.com/kubern
 ## Linux

-```console
+```shell
 curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
 chmod +x kops-linux-amd64
 sudo mv kops-linux-amd64 /usr/local/bin/kops
@@ -22,7 +22,7 @@ iam:
 Following this, run a cluster update to have the changes take effect:

-```
+```shell
 kops update cluster ${CLUSTER_NAME} --yes
 ```

@@ -86,7 +86,7 @@ to add DynamoDB and Elasticsearch permissions to your nodes.
 Edit your cluster via `kops edit cluster ${CLUSTER_NAME}` and add the following to the spec:

-```
+```yaml
 spec:
   additionalPolicies:
     node: |
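The hunk above cuts off right after `node: |`. For orientation, a hedged sketch of a complete `additionalPolicies` block granting the DynamoDB and Elasticsearch access mentioned above (the policy statement itself is illustrative):

```yaml
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["dynamodb:*", "es:*"],
          "Resource": ["*"]
        }
      ]
```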
@@ -106,7 +106,7 @@ spec:
 After you're finished editing, your cluster spec should look something like this:

-```
+```yaml
 metadata:
   creationTimestamp: "2016-06-27T14:23:34Z"
   name: ${CLUSTER_NAME}

@@ -136,13 +136,13 @@ spec:
 Now you can run a cluster update to have the changes take effect:

-```
+```shell
 kops update cluster ${CLUSTER_NAME} --yes
 ```

 You can have an additional policy for each kops role (node, master, bastion). For instance, if you wanted to apply one set of additional permissions to the master instances, and another to the nodes, you could do the following:

-```
+```yaml
 spec:
   additionalPolicies:
     node: |

@@ -175,13 +175,13 @@ This is due to the lifecycle overrides being used to prevent creation of the IAM
 To do this, get a list of instance group names for the cluster:

-```
+```shell
 kops get ig --name ${CLUSTER_NAME}
 ```

 And update every instance group's spec with the desired instance profile ARNs:

-```
+```shell
 kops edit ig --name ${CLUSTER_NAME} ${INSTANCE_GROUP_NAME}
 ```

@@ -195,7 +195,7 @@ spec:
 Now run a cluster update to create the new launch configuration, using [lifecycle overrides](./cli/kops_update_cluster.md#options) to prevent IAM-related resources from being created:

-```
+```shell
 kops update cluster ${CLUSTER_NAME} --yes --lifecycle-overrides IAMRole=ExistsAndWarnIfChanges,IAMRolePolicy=ExistsAndWarnIfChanges,IAMInstanceProfileRole=ExistsAndWarnIfChanges
 ```

@@ -203,6 +203,6 @@ kops update cluster ${CLUSTER_NAME} --yes --lifecycle-overrides IAMRole=ExistsAn
 Finally, perform a rolling update in order to replace EC2 instances in the ASG with the new launch configuration:

-```
+```shell
 kops rolling-update cluster ${CLUSTER_NAME} --yes
 ```
@@ -145,7 +145,7 @@ which would end up in a drop-in file on nodes of the instance group in question.
 ### Example

-```
+```yaml
 apiVersion: kops.k8s.io/v1alpha2
 kind: InstanceGroup
 metadata:
@@ -17,7 +17,7 @@ An example:
 `kops edit ig nodes`

-```
+```yaml
 ...
 spec:
   cloudLabels:

@@ -28,7 +28,7 @@ spec:
 `kops edit cluster`

-```
+```yaml
 ...
 spec:
   cloudLabels:

@@ -48,7 +48,7 @@ An example:
 `kops edit ig nodes`

-```
+```yaml
 ...
 spec:
   nodeLabels:
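The label examples above are all truncated at the key name. A hedged sketch of how `cloudLabels` and `nodeLabels` might look side by side in an instance group spec (keys and values are illustrative):

```yaml
spec:
  cloudLabels:
    team: me          # becomes an AWS tag on the instances and ASG
    project: ion
  nodeLabels:
    spot: "false"     # becomes a Kubernetes label on the node object
```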
@@ -62,7 +62,7 @@ Download the script as `kops-mfa`, make it executable, put it on `$PATH`, set th
 ## The Workaround #2
 Use [awsudo](https://github.com/makethunder/awsudo) to generate temporary credentials. This is similar to the previous workaround, but shorter:

-```
+```shell
 pip install awsudo
 env $(awsudo ${AWS_PROFILE} | grep AWS | xargs) kops ...
 ```
@@ -8,7 +8,7 @@ Read the [prerequisites](https://github.com/lyft/cni-ipvlan-vpc-k8s#prerequisite
 To use the Lyft CNI, specify the following in the cluster spec.

-```
+```yaml
 networking:
   lyftvpc: {}
 ```

@@ -33,7 +33,7 @@ $ kops create cluster \
 You can specify which subnets to use for allocating Pod IPs by specifying

-```
+```yaml
 networking:
   lyftvpc:
     subnetTags:
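The `subnetTags` snippet above is truncated. A hedged sketch of a completed block, where the tag key and value used to select the pod subnets are illustrative assumptions rather than values from this diff:

```yaml
networking:
  lyftvpc:
    subnetTags:
      KubernetesCluster: my-cluster.example.com   # only subnets carrying this tag are used for pod IPs
```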
@@ -35,7 +35,7 @@ Assuming all the conditions are met a secret token is generated and returned to
 Enabling the node authorization service works as follows; first you must enable the feature flag, as node authorization is still experimental: `export KOPS_FEATURE_FLAGS=EnableNodeAuthorization`

-```
+```yaml
 # in the cluster spec
 nodeAuthorization:
   # enable the service under the node authorization section, please review the settings in the components.go
@@ -100,7 +100,7 @@ It's necessary to add your own RBAC permission to the dashboard. Please read the
 Below you see an example giving **cluster-admin access** to the dashboard.

-```
+```yaml
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
|
|||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
```
|
||||
```
|
||||
|
||||
### Monitoring with Heapster - Standalone
|
||||
|
||||
|
|
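The ClusterRoleBinding in the two hunks above is shown only at its head and tail. A hedged sketch of the full object granting the dashboard service account cluster-admin (the binding name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```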
@@ -199,7 +199,7 @@ lists multiple versions.
 For example, a typical addons declaration might look like this:

-```
+```yaml
 - version: 1.4.0
   selector:
     k8s-addon: kubernetes-dashboard.addons.k8s.io
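The addons declaration above stops at the selector. A hedged sketch of one complete entry; the `manifest` file name is an assumption, not taken from this diff:

```yaml
- version: 1.4.0
  selector:
    k8s-addon: kubernetes-dashboard.addons.k8s.io
  manifest: v1.4.0.yaml   # path to the manifest applied for this version
```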
@@ -243,7 +243,7 @@ to beta. As such it is easier to have two separate manifests.
 For example:

-```
+```yaml
 - version: 1.5.0
   selector:
     k8s-addon: kube-dashboard.addons.k8s.io

@@ -278,7 +278,7 @@ We need a way to break the ties between the semvers, and thus we introduce the `
 Thus a manifest will actually look like this:

-```
+```yaml
 - version: 1.6.0
   selector:
     k8s-addon: kube-dns.addons.k8s.io
@@ -70,7 +70,7 @@ ssh admin@<IP-of-master-node>
 in whatever manner you prefer. Here is one example.

-```
+```bash
 cd /usr/local
 sudo wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz
 sudo tar -xvf go1.13.3.linux-amd64.tar.gz

@@ -85,7 +85,7 @@ which go
 3\. Install etcdhelper

-```
+```bash
 mkdir -p ~/go/src/github.com/
 cd ~/go/src/github.com/
 git clone https://github.com/openshift/origin openshift
|||
|
|
@ -141,7 +141,7 @@ Edit your cluster to add `encryptedVolume: true` to each etcd volume:
|
|||
|
||||
`kops edit cluster ${CLUSTER_NAME}`
|
||||
|
||||
```
|
||||
```yaml
|
||||
...
|
||||
etcdClusters:
|
||||
- etcdMembers:
|
||||
|
|
@ -171,7 +171,7 @@ Edit your cluster to add `encryptedVolume: true` to each etcd volume:
|
|||
|
||||
`kops edit cluster ${CLUSTER_NAME}`
|
||||
|
||||
```
|
||||
```yaml
|
||||
...
|
||||
etcdClusters:
|
||||
- etcdMembers:
|
||||
|
|
|
|||
|
|
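Both hunks above are truncated at `- etcdMembers:`. A hedged sketch of an etcd cluster entry with `encryptedVolume: true` set on a member (the instance group and member names are illustrative):

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a   # illustrative instance group name
    name: a
    encryptedVolume: true
  name: main
```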
@@ -3,7 +3,7 @@ Some services, such as Istio and Envoy's Secret Discovery Service (SDS), take ad
 1. In order to enable this feature for Kubernetes 1.12+, add the following config to your cluster spec:

-```
+```yaml
 kubeAPIServer:
   apiAudiences:
   - api
@@ -6,7 +6,7 @@
 Delete all secrets & keypairs that kops is holding:

-```
+```shell
 kops get secrets | grep ^Secret | awk '{print $2}' | xargs -I {} kops delete secret secret {}

 kops get secrets | grep ^Keypair | awk '{print $2}' | xargs -I {} kops delete secret keypair {}

@@ -59,7 +59,7 @@ Now the service account tokens will need to be regenerated inside the cluster:
 Then `ssh admin@${IP}` and run this to delete all the service account tokens:

-```
+```shell
 # Delete all service account tokens in all namespaces
 NS=`kubectl get namespaces -o 'jsonpath={.items[*].metadata.name}'`
 for i in ${NS}; do kubectl get secrets --namespace=${i} --no-headers | grep "kubernetes.io/service-account-token" | awk '{print $1}' | xargs -I {} kubectl delete secret --namespace=$i {}; done

@@ -120,7 +120,7 @@ spec:
 2. Then `kops edit cluster ${CLUSTER_NAME}` will show you something like:

-```
+```yaml
 metadata:
   creationTimestamp: "2016-06-27T14:23:34Z"
   name: ${CLUSTER_NAME}

@@ -139,7 +139,7 @@ spec:
 3. Once you're happy, you can create the cluster using:

-```
+```shell
 kops update cluster ${CLUSTER_NAME} --yes
 ```
@@ -27,7 +27,7 @@ example:
 Note: it is currently not possible to delete secrets from the keystore that have the type "Secret"

 ### adding ssh credential from spec file

-```bash
+```yaml
 apiVersion: kops.k8s.io/v1alpha2
 kind: SSHCredential
 metadata:
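The SSHCredential spec above is cut off at `metadata:`. A hedged sketch of a full spec file; the cluster label and public key are placeholders, not values from this diff:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: SSHCredential
metadata:
  labels:
    kops.k8s.io/cluster: cluster.example.com   # illustrative cluster name
spec:
  publicKey: "ssh-rsa AAAAB3NzaC1yc2E... user@host"   # placeholder key material
```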
@@ -45,7 +45,7 @@ spec:
 Now run a cluster update to create the new launch configuration, using [lifecycle overrides](./cli/kops_update_cluster.md#options) to prevent Security Group resources from being created:

-```
+```shell
 kops update cluster ${CLUSTER_NAME} --yes --lifecycle-overrides SecurityGroup=ExistsAndWarnIfChanges,SecurityGroupRule=ExistsAndWarnIfChanges
 ```

@@ -53,6 +53,6 @@ kops update cluster ${CLUSTER_NAME} --yes --lifecycle-overrides SecurityGroup=Ex
 Then perform a rolling update in order to replace EC2 instances in the ASG with the new launch configuration:

-```
+```shell
 kops rolling-update cluster ${CLUSTER_NAME} --yes
 ```
@@ -108,7 +108,7 @@ Repeat for each cluster needing to be moved.
 Many enterprises prefer to run many AWS accounts. In these setups, having a shared cross-account S3 bucket for state may make inventory and management easier.
 Consider the S3 bucket living in Account B and the kops cluster living in Account A. In order to achieve this, you first need to let Account A access the S3 bucket. This is done by adding the following _bucket policy_ on the S3 bucket:

-```
+```json
 {
   "Id": "123",
   "Version": "2012-10-17",
@@ -16,7 +16,7 @@ Ps: Steps below assume a recent version of Terraform. There's a workaround for a
 You could keep your Terraform state locally, but we **strongly recommend** saving it on S3 with versioning turned on for that bucket. Configure a remote S3 store with a setting like the one below:

-```
+```terraform
 terraform {
   backend "s3" {
     bucket = "mybucket"
@@ -52,7 +52,7 @@ The AWS ELB does not support changing from internet facing to Internal. However
 ### Steps to change the ELB from Internet-Facing to Internal
 - Edit the cluster: `kops edit cluster $NAME`
 - Change the api load balancer type from Public to Internal; it should look like this when done:

-```
+```yaml
 spec:
   api:
     loadBalancer:
@@ -60,7 +60,7 @@ The AWS ELB does not support changing from internet facing to Internal. However
 ```
 - Quit the edit
 - Run the update command to check the config: `kops update cluster $NAME`
-- BEFORE DOING the same command with the `--yes` option go into the AWS console and DELETE the api ELB!!!!!!
+- BEFORE DOING the same command with the `--yes` option go into the AWS console and DELETE the api ELB
 - Now run: `kops update cluster $NAME --yes`
 - Finally, execute a rolling update so that the instances register with the new internal ELB: run `kops rolling-update cluster --cloudonly --force`. We have to use the `--cloudonly` option because we deleted the api ELB, so there is no way to talk to the cluster through the k8s API. The `--force` option is there because kops / terraform doesn't know that we need to update the instances with the ELB, so we have to force it.
 Once the rolling update has completed, you have an internal-only ELB with the master k8s nodes registered with it.
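The load balancer block being edited in the earlier hunk is truncated at `loadBalancer:`. A minimal sketch of how it should look once the type has been flipped to Internal:

```yaml
spec:
  api:
    loadBalancer:
      type: Internal
```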
@@ -44,7 +44,7 @@ open an issue.
 ```
 kops get cluster ${OLD_NAME} -oyaml
-````
+```

 ## Move resources to a new cluster
@@ -2,7 +2,7 @@ site_name: Kubernetes Operations - kops
 # strict: true
 repo_name: 'kubernetes/kops'
 repo_url: 'https://github.com/kubernetes/kops'
-site_url: 'https://kubernetes-kops.netlify.com'
+site_url: 'https://kops.sigs.k8s.io'
 markdown_extensions:
   - admonition
   - codehilite