mirror of https://github.com/kubernetes/kops.git
Merge pull request #10235 from axpraka/update-kops-as-kOps
Update kops as kOps and remove extra spaces from .md files
commit 9e14b29867
@@ -14,8 +14,7 @@ Instructions for reporting a vulnerability can be found on the

 ## Supported Versions

 Information about supported kOps versions and the Kubernetes versions they support can be found on the
-[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the
-[Kubernetes version and version skew support policy] page on the Kubernetes website.
+[Releases and versioning](https://kops.sigs.k8s.io/welcome/releases/) page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website.

 [kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
 [kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
@@ -11,7 +11,7 @@ complete lifecycle of Ambassador in your cluster. It also automates many of the
 Ambassador. Once installed, the Operator will automatically complete rapid installations and seamless upgrades to new
 versions of Ambassador.

-This addon deploys Ambassador Operator which installs Ambassador in a kops cluster.
+This addon deploys Ambassador Operator which installs Ambassador in a kOps cluster.

 ##### Note:
 The operator requires widely scoped permissions in order to install and manage Ambassador's lifecycle. Both, the
@@ -32,7 +32,7 @@ kubectl apply -f ${addon}
 An enhanced script which also adds the IAM policies is included here [cluster-autoscaler.sh](cluster-autoscaler.sh)

 Question: Which ASG group should be autoscaled?
-Answer: By default, kops creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.
+Answer: By default, kOps creates a "nodes" instancegroup and a corresponding ASG group which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instancegroup "kops create ig _newgroupname_", and configure that instead. Set the maxSize of the kOps instancesgroup, and update the cluster so the maxSize propagates to the ASG.

 Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM Policy. Which IAM Role should the Policy be attached to?
 Answer: Kops creates two Roles, nodes.$CLUSTER_NAME and masters.$CLUSTER_NAME. Currently the example scripts run the autoscaler process on the k8s master node, so the IAM Policy should be assigned to masters.$CLUSTER_NAME (substituting that variable for your actual cluster name).
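A minimal shell sketch of the answer above, assuming a hypothetical group name `newgroup` and the cluster name in `$CLUSTER_NAME` (commands follow the standard kops CLI, not this diff):

```bash
# Create a dedicated instance group for the autoscaler to manage
kops create ig newgroup --name $CLUSTER_NAME

# Raise maxSize in the editor so the ASG has room to scale out
kops edit ig newgroup --name $CLUSTER_NAME

# Apply the change so maxSize propagates to the AutoScalingGroup
kops update cluster $CLUSTER_NAME --yes
```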
@@ -1,4 +1,4 @@
-# Deploying Citrix Ingress Controller through KOPS
+# Deploying Citrix Ingress Controller through kOps

 This guide explains how to deploy [Citrix Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) through KOPS addon.
@@ -59,11 +59,11 @@ var (
 completion_long = templates.LongDesc(i18n.T(`
 Output shell completion code for the specified shell (bash or zsh).
 The shell code must be evaluated to provide interactive
 completion of kops commands. This can be done by sourcing it from
 the .bash_profile.

 Note: this requires the bash-completion framework, which is not installed
 by default on Mac. Once installed, bash_completion must be evaluated. This can be done by adding the
 following line to the .bash_profile
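As a quick usage sketch of the help text above (standard bash syntax, not part of this diff):

```bash
# Load kops completion into the current shell
source <(kops completion bash)

# Or make it permanent by adding the same line to ~/.bash_profile
echo 'source <(kops completion bash)' >> ~/.bash_profile
```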
@@ -203,7 +203,7 @@ func NewCmdCreateCluster(f *util.Factory, out io.Writer) *cobra.Command {
 }

 cmd.Flags().BoolVarP(&options.Yes, "yes", "y", options.Yes, "Specify --yes to immediately create the cluster")
-cmd.Flags().StringVar(&options.Target, "target", options.Target, fmt.Sprintf("Valid targets: %s, %s, %s. Set this flag to %s if you want kops to generate terraform", cloudup.TargetDirect, cloudup.TargetTerraform, cloudup.TargetCloudformation, cloudup.TargetTerraform))
+cmd.Flags().StringVar(&options.Target, "target", options.Target, fmt.Sprintf("Valid targets: %s, %s, %s. Set this flag to %s if you want kOps to generate terraform", cloudup.TargetDirect, cloudup.TargetTerraform, cloudup.TargetCloudformation, cloudup.TargetTerraform))

 // Configuration / state location
 if featureflag.EnableSeparateConfigBase.Enabled() {
@@ -88,7 +88,7 @@ func NewCmdCreateSecretCiliumEncryptionConfig(f *util.Factory, out io.Writer) *c
 }

 cmd.Flags().StringVarP(&options.CiliumPasswordFilePath, "", "f", "", "Path to the cilium encryption config file")
-cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kops secret if it already exists")
+cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kOps secret if it already exists")

 return cmd
 }

@@ -88,7 +88,7 @@ func NewCmdCreateSecretDockerConfig(f *util.Factory, out io.Writer) *cobra.Comma
 }

 cmd.Flags().StringVarP(&options.DockerConfigPath, "", "f", "", "Path to docker config JSON file")
-cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kops secret if it already exists")
+cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kOps secret if it already exists")

 return cmd
 }

@@ -88,7 +88,7 @@ func NewCmdCreateSecretEncryptionConfig(f *util.Factory, out io.Writer) *cobra.C
 }

 cmd.Flags().StringVarP(&options.EncryptionConfigPath, "", "f", "", "Path to encryption config yaml file")
-cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kops secret if it already exists")
+cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kOps secret if it already exists")

 return cmd
 }
@@ -35,7 +35,7 @@ var (
 Create a new weave encryption secret, and store it in the state store.
 Used to weave networking to use encrypted communication between nodes.

-If no password is provided, kops will generate one at random.
+If no password is provided, kOps will generate one at random.

 WARNING: cannot be enabled on a running cluster without downtime.`))

@@ -89,7 +89,7 @@ func NewCmdCreateSecretWeaveEncryptionConfig(f *util.Factory, out io.Writer) *co
 }

 cmd.Flags().StringVarP(&options.WeavePasswordFilePath, "", "f", "", "Path to the weave password file (optional)")
-cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kops secret if it already exists")
+cmd.Flags().BoolVar(&options.Force, "force", options.Force, "Force replace the kOps secret if it already exists")

 return cmd
 }
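A hedged usage sketch of the flags registered above (cluster name and password path are placeholders):

```bash
# Let kops generate a random weave password and store it in the state store
kops create secret weavepassword --name $CLUSTER_NAME

# Or supply your own password file, replacing an existing secret
kops create secret weavepassword -f /path/to/weave-password --force --name $CLUSTER_NAME
```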
@@ -44,7 +44,7 @@ type deleteInstanceOptions struct {
 Yes bool
 CloudOnly bool

-// The following two variables are when kops is validating a cluster
+// The following two variables are when kOps is validating a cluster
 // between detach and deletion.

 // FailOnDrainError fail deletion if drain errors.
@@ -35,7 +35,7 @@ import (

 var (
 deleteIgLong = templates.LongDesc(i18n.T(`
-Delete an instancegroup configuration. kops has the concept of "instance groups",
+Delete an instancegroup configuration. kOps has the concept of "instance groups",
 which are a group of similar virtual machines. On AWS, they map to an
 AutoScalingGroup. An ig work either as a Kubernetes master or a node.`))
@@ -30,7 +30,7 @@ var (
 This command changes the desired configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".
 `))
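A short sketch of the workflow that help text describes (cluster name is a placeholder):

```bash
# Choose the editor kops edit should open
export EDITOR=vim

# Change the desired configuration in the registry
kops edit cluster $CLUSTER_NAME

# Apply the change to the cloud resources
kops update cluster $CLUSTER_NAME --yes
```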
@@ -51,7 +51,7 @@ var (
 This command changes the desired cluster configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".`))

@@ -45,7 +45,7 @@ var (
 This command changes the instancegroup desired configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".`))
@@ -61,7 +61,7 @@ type ExportKubecfgOptions struct {
 user string
 internal bool

-// UseKopsAuthenticationPlugin controls whether we should use the kops auth helper instead of a static credential
+// UseKopsAuthenticationPlugin controls whether we should use the kOps auth helper instead of a static credential
 UseKopsAuthenticationPlugin bool
 }

@@ -83,12 +83,12 @@ func NewCmdExportKubecfg(f *util.Factory, out io.Writer) *cobra.Command {
 }

 cmd.Flags().StringVar(&options.KubeConfigPath, "kubeconfig", options.KubeConfigPath, "the location of the kubeconfig file to create.")
-cmd.Flags().BoolVar(&options.all, "all", options.all, "export all clusters from the kops state store")
+cmd.Flags().BoolVar(&options.all, "all", options.all, "export all clusters from the kOps state store")
 cmd.Flags().DurationVar(&options.admin, "admin", options.admin, "export a cluster admin user credential with the given lifetime and add it to the cluster context")
 cmd.Flags().Lookup("admin").NoOptDefVal = kubeconfig.DefaultKubecfgAdminLifetime.String()
 cmd.Flags().StringVar(&options.user, "user", options.user, "add an existing user to the cluster context")
 cmd.Flags().BoolVar(&options.internal, "internal", options.internal, "use the cluster's internal DNS name")
-cmd.Flags().BoolVar(&options.UseKopsAuthenticationPlugin, "auth-plugin", options.UseKopsAuthenticationPlugin, "use the kops authentication plugin")
+cmd.Flags().BoolVar(&options.UseKopsAuthenticationPlugin, "auth-plugin", options.UseKopsAuthenticationPlugin, "use the kOps authentication plugin")

 return cmd
 }
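A usage sketch built only from the flags registered above (cluster name is a placeholder):

```bash
# Export an admin credential with the default lifetime into your kubeconfig
kops export kubecfg $CLUSTER_NAME --admin

# Export every cluster in the state store, using internal DNS names
kops export kubecfg --all --internal
```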
@@ -67,11 +67,11 @@ var (
 # Preview a rolling-update.
 kops rolling-update cluster

-# Roll the currently selected kops cluster with defaults.
+# Roll the currently selected kOps cluster with defaults.
 # Nodes will be drained and the cluster will be validated between node replacement.
 kops rolling-update cluster --yes

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not fail if the cluster does not validate,
 # wait 8 min to create new node, and wait at least
 # 8 min to validate the cluster.

@@ -80,7 +80,7 @@ var (
 --master-interval=8m \
 --node-interval=8m

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not validate the cluster because of the cloudonly flag.
 # Force the entire cluster to roll, even if rolling update
 # reports that the cluster does not need to be rolled.

@@ -88,7 +88,7 @@ var (
 --cloudonly \
 --force

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # only roll the node instancegroup,
 # use the new drain and validate functionality.
 kops rolling-update cluster k8s-cluster.example.com --yes \

@@ -106,7 +106,7 @@ type RollingUpdateOptions struct {
 Force bool
 CloudOnly bool

-// The following two variables are when kops is validating a cluster
+// The following two variables are when kOps is validating a cluster
 // during a rolling update.

 // FailOnDrainError fail rolling-update if drain errors.
@@ -52,17 +52,17 @@ const (

 var (
 rootLong = templates.LongDesc(i18n.T(`
-kops is Kubernetes ops.
+kOps is Kubernetes Operations.

-kops is the easiest way to get a production grade Kubernetes cluster up and running.
+kOps is the easiest way to get a production grade Kubernetes cluster up and running.
 We like to think of it as kubectl for clusters.

-kops helps you create, destroy, upgrade and maintain production-grade, highly available,
+kOps helps you create, destroy, upgrade and maintain production-grade, highly available,
 Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently
 officially supported, with GCE and OpenStack in beta support.
 `))

-rootShort = i18n.T(`kops is Kubernetes ops.`)
+rootShort = i18n.T(`kOps is Kubernetes Operations.`)
 )

 type Factory interface {
@@ -33,7 +33,7 @@ import (

 var (
 toolboxConvertImportedLong = templates.LongDesc(i18n.T(`
-Convert an imported cluster into a kops cluster.`))
+Convert an imported cluster into a kOps cluster.`))

 toolboxConvertImportedExample = templates.Examples(i18n.T(`

@@ -45,7 +45,7 @@ var (
 --newname k8s-cluster.example.com
 `))

-toolboxConvertImportedShort = i18n.T(`Convert an imported cluster into a kops cluster.`)
+toolboxConvertImportedShort = i18n.T(`Convert an imported cluster into a kOps cluster.`)
 )

 type ToolboxConvertImportedOptions struct {
@@ -125,7 +125,7 @@ func NewCmdUpdateCluster(f *util.Factory, out io.Writer) *cobra.Command {
 cmd.Flags().Lookup("admin").NoOptDefVal = kubeconfig.DefaultKubecfgAdminLifetime.String()
 cmd.Flags().StringVar(&options.user, "user", options.user, "Existing user to add to the cluster context. Implies --create-kube-config")
 cmd.Flags().BoolVar(&options.internal, "internal", options.internal, "Use the cluster's internal DNS name. Implies --create-kube-config")
-cmd.Flags().BoolVar(&options.AllowKopsDowngrade, "allow-kops-downgrade", options.AllowKopsDowngrade, "Allow an older version of kops to update the cluster than last used")
+cmd.Flags().BoolVar(&options.AllowKopsDowngrade, "allow-kops-downgrade", options.AllowKopsDowngrade, "Allow an older version of kOps to update the cluster than last used")
 cmd.Flags().StringVar(&options.Phase, "phase", options.Phase, "Subset of tasks to run: "+strings.Join(cloudup.Phases.List(), ", "))
 cmd.Flags().StringSliceVar(&options.LifecycleOverrides, "lifecycle-overrides", options.LifecycleOverrides, "comma separated list of phase overrides, example: SecurityGroups=Ignore,InternetGateway=ExistsAndWarnIfChanges")
 viper.BindPFlag("lifecycle-overrides", cmd.Flags().Lookup("lifecycle-overrides"))
@@ -37,7 +37,7 @@ var (
 # Kops will try for 10 minutes to validate the cluster 3 times.
 kops validate cluster --wait 10m --count 3`))

-validateShort = i18n.T(`Validate a kops cluster.`)
+validateShort = i18n.T(`Validate a kOps cluster.`)
 )

 func NewCmdValidate(f *util.Factory, out io.Writer) *cobra.Command {
@@ -28,12 +28,12 @@ import (

 var (
 versionLong = templates.LongDesc(i18n.T(`
-Print the kops version and git SHA.`))
+Print the kOps version and git SHA.`))

 versionExample = templates.Examples(i18n.T(`
 kops version`))

-versionShort = i18n.T(`Print the kops version information.`)
+versionShort = i18n.T(`Print the kOps version information.`)
 )

 // NewCmdVersion builds a cobra command for the kops version command

@@ -54,7 +54,7 @@ func NewCmdVersion(f *util.Factory, out io.Writer) *cobra.Command {
 }
 }

-cmd.Flags().BoolVar(&options.Short, "short", options.Short, "only print the main kops version, useful for scripting")
+cmd.Flags().BoolVar(&options.Short, "short", options.Short, "only print the main kOps version, useful for scripting")

 return cmd
 }
@@ -1,8 +1,8 @@
 # Download kOps config spec file

-KOPS operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.
+kOps operates off of a config spec file that is generated during the create phase. It is uploaded to the amazon s3 bucket that is passed in during create.

-If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kops create the cluster for you , `kops create -f spec_file` in a completely unattended manner.
+If you download the config spec file on a running cluster that is configured the way you like it, you can just pass that config spec file in to the create command and have kOps create the cluster for you, `kops create -f spec_file` in a completely unattended manner.

 Let us say you create your cluster with the following configuration options:
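A minimal sketch of that download-and-recreate flow, assuming the cluster name is in `$NAME` and `KOPS_STATE_STORE` points at the S3 bucket (the ssh secret step assumes an RSA key at the default path):

```bash
# Save the config spec of a running cluster
kops get cluster $NAME -o yaml > $NAME.yaml

# Later, register the cluster from the saved spec and build it unattended
kops create -f $NAME.yaml
kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster $NAME --yes
```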
@@ -43,7 +43,7 @@ For more information on how to use and modify the configurations see [here](../m

 ## Managing instance groups

 You can also manage instance groups in separate YAML files as well. The command `kops get --name $NAME -o yaml > $NAME.yml` exports the entire cluster. An option is to have a YAML file for the cluster, and individual YAML files for the instance groups. This allows you to do stuff like:

 ```shell
 if ! kops get cluster --name "$NAME"; then
@@ -1,7 +1,7 @@
 # Architecture: kops-controller

-kops-controller runs as a DaemonSet on the master node(s). It is a kubebuilder
-controller that performs runtime reconciliation for kops.
+kops-controller runs as a DaemonSet on the master node(s). It is a kubebuilder
+controller that performs runtime reconciliation for kOps.

 Controllers in kops-controller:
@@ -1,12 +1,12 @@
 # Authentication

 kOps has support for configuring authentication systems. This should not be used with kubernetes versions
 before 1.8.5 because of a serious bug with apimachinery [#55022](https://github.com/kubernetes/kubernetes/issues/55022).

 ## kopeio authentication

 If you want to experiment with kopeio authentication, you can use
 `--authentication kopeio`. However please be aware that kopeio authentication
 has not yet been formally released, and thus there is not a lot of upstream
 documentation.
@@ -1,13 +1,13 @@
 ## Kubernetes Bootstrap

-This is an overview of how a Kubernetes cluster comes up, when using kops.
+This is an overview of how a Kubernetes cluster comes up, when using kOps.

 ## From spec to complete configuration

 The kOps tool itself takes the (minimal) spec of a cluster that the user specifies,
 and computes a complete configuration, setting defaults where values are not specified,
 and deriving appropriate dependencies. The "complete" specification includes the set
 of all flags that will be passed to all components. All decisions about how to install the
 cluster are made at this stage, and thus every decision can in theory be changed if the user
 specifies a value in the spec.
@@ -22,7 +22,7 @@ On both AWS & GCE, everything (nodes & masters) runs in an ASG/MIG; this means t
 nodeup is the component that installs packages and sets up the OS, sufficiently for
 Kubelet. The core requirements are:

 * Docker must be installed. nodeup will install Docker 1.13.1, the version of Docker tested with Kubernetes 1.8
 * Kubelet, which is installed a systemd service

 In addition, nodeup installs:

@@ -31,7 +31,7 @@ In addition, nodeup installs:

 ## /etc/kubernetes/manifests

-kubelet starts pods as controlled by the files in /etc/kubernetes/manifests These files are created
+kubelet starts pods as controlled by the files in /etc/kubernetes/manifests. These files are created
 by nodeup and protokube (ideally all by protokube, but currently split between the two).

 These pods are declared using the standard k8s manifests, just as if they were stored in the API.

@@ -59,19 +59,19 @@ doesn't fit into `additionalUserData` or `hooks`.
 Kubelet starts up, starts (and restarts) all the containers in /etc/kubernetes/manifests.

 It also tries to contact the API server (which the master kubelet will itself eventually start),
 register the node. Once a node is registered, kube-controller-manager will allocate it a PodCIDR,
 which is an allocation of the k8s-network IP range. kube-controller-manager updates the node, setting
 the PodCIDR field. Once kubelet sees this allocation, it will set up the
 local bridge with this CIDR, which allows docker to start. Before this happens, only pods
 that have hostNetwork will work - so all the "core" containers run with hostNetwork=true.

 ## api-server bringup

 APIServer also listens on the HTTPS port (443) on all interfaces. This is a secured endpoint,
 and requires valid authentication/authorization to use it. This is the endpoint that node kubelets
 will reach, and also that end-users will reach.

 kOps uses DNS to allow nodes and end-users to discover the api-server. The apiserver pod manifest (in
 /etc/kubernetes/manifests) includes annotations that will cause the dns-controller to create the
 records. It creates `api.internal.mycluster.com` for use inside the cluster (using InternalIP addresses),
 and it creates `api.mycluster.com` for use outside the cluster (using ExternalIP addresses).

@@ -89,7 +89,7 @@ kOps follows CoreOS's recommend procedure for [bring-up of etcd on clouds](https
 * We set up etcd with a static cluster, with those DNS names

 Because the data is persistent and the cluster membership is also a static set of DNS names, this
 means we don't need to manage etcd directly. We just try to make sure that some master always have
 each volume mounted with etcd running and DNS set correctly. That is the job of protokube.

 Protokube:

@@ -107,8 +107,8 @@ Most of this has focused on things that happen on the master, but the node bring
 * nodeup installs docker & kubelet
 * in /etc/kubernetes/manifests, we have kube-proxy

 So kubelet will start up, as will kube-proxy. It will try to reach the api-server on the internal DNS name,
 and once the master is up it will succeed. Then:

 * kubelet creates a Node object representing itself
 * kube-controller-manager sees the node creation and assigns it a PodCIDR
@@ -1,6 +1,6 @@
 ## Changing a cluster configuration

 (This procedure is currently unnecessarily convoluted. Expect it to get streamlined!)

 * Edit the cluster spec: `kops edit cluster ${NAME}`
@@ -3,15 +3,15 @@

 ## kops

-kops is Kubernetes ops.
+kOps is Kubernetes Operations.

 ### Synopsis

-kops is Kubernetes ops.
+kOps is Kubernetes Operations.

-kops is the easiest way to get a production grade Kubernetes cluster up and running. We like to think of it as kubectl for clusters.
+kOps is the easiest way to get a production grade Kubernetes cluster up and running. We like to think of it as kubectl for clusters.

-kops helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and OpenStack in beta support.
+kOps helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and OpenStack in beta support.

 ### Options
@@ -50,6 +50,6 @@ kops is Kubernetes ops.
 * [kops toolbox](kops_toolbox.md) - Misc infrequently used commands.
 * [kops update](kops_update.md) - Update a cluster.
 * [kops upgrade](kops_upgrade.md) - Upgrade a kubernetes cluster.
-* [kops validate](kops_validate.md) - Validate a kops cluster.
-* [kops version](kops_version.md) - Print the kops version information.
+* [kops validate](kops_validate.md) - Validate a kOps cluster.
+* [kops version](kops_version.md) - Print the kOps version information.
@@ -7,9 +7,9 @@ Output shell completion code for the given shell (bash or zsh).

 ### Synopsis

 Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide interactive completion of kops commands. This can be done by sourcing it from the .bash_profile.

 Note: this requires the bash-completion framework, which is not installed by default on Mac. Once installed, bash_completion must be evaluated. This can be done by adding the following line to the .bash_profile

 Note for zsh users: zsh completions are only supported in versions of zsh >= 5.2

@@ -68,5 +68,5 @@ kops completion [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
@@ -75,7 +75,7 @@ kops create -f FILENAME [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops create cluster](kops_create_cluster.md) - Create a Kubernetes cluster.
 * [kops create instancegroup](kops_create_instancegroup.md) - Create an instancegroup.
 * [kops create secret](kops_create_secret.md) - Create a secret.
@@ -116,7 +116,7 @@ kops create cluster [flags]
 --ssh-access strings Restrict SSH access to this CIDR. If not set, access will not be restricted by IP. (default [0.0.0.0/0])
 --ssh-public-key string SSH public key to use (defaults to ~/.ssh/id_rsa.pub on AWS)
 --subnets strings Set to use shared subnets
---target string Valid targets: direct, terraform, cloudformation. Set this flag to terraform if you want kops to generate terraform (default "direct")
+--target string Valid targets: direct, terraform, cloudformation. Set this flag to terraform if you want kOps to generate terraform (default "direct")
 -t, --topology string Controls network topology for the cluster: public|private. (default "public")
 --utility-subnets strings Set to use shared utility subnets
 --vpc string Set to use a shared VPC
@@ -32,7 +32,7 @@ kops create secret ciliumpassword [flags]

 ```
 -f, -- string Path to the cilium encryption config file
---force Force replace the kops secret if it already exists
+--force Force replace the kOps secret if it already exists
 -h, --help help for ciliumpassword
 ```

@@ -31,7 +31,7 @@ kops create secret dockerconfig [flags]

 ```
 -f, -- string Path to docker config JSON file
---force Force replace the kops secret if it already exists
+--force Force replace the kOps secret if it already exists
 -h, --help help for dockerconfig
 ```

@@ -31,7 +31,7 @@ kops create secret encryptionconfig [flags]

 ```
 -f, -- string Path to encryption config yaml file
---force Force replace the kops secret if it already exists
+--force Force replace the kOps secret if it already exists
 -h, --help help for encryptionconfig
 ```
@@ -9,7 +9,7 @@ Create a weave encryption config.

 Create a new weave encryption secret, and store it in the state store. Used to weave networking to use encrypted communication between nodes.

-If no password is provided, kops will generate one at random.
+If no password is provided, kOps will generate one at random.

 WARNING: cannot be enabled on a running cluster without downtime.

@@ -38,7 +38,7 @@ kops create secret weavepassword [flags]

 ```
 -f, -- string Path to the weave password file (optional)
---force Force replace the kops secret if it already exists
+--force Force replace the kOps secret if it already exists
 -h, --help help for weavepassword
 ```
@@ -63,7 +63,7 @@ kops delete -f FILENAME [--yes] [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops delete cluster](kops_delete_cluster.md) - Delete a cluster.
 * [kops delete instance](kops_delete_instance.md) - Delete an instance
 * [kops delete instancegroup](kops_delete_instancegroup.md) - Delete instancegroup
@@ -7,7 +7,7 @@ Delete instancegroup

 ### Synopsis

-Delete an instancegroup configuration. kops has the concept of "instance groups", which are a group of similar virtual machines. On AWS, they map to an AutoScalingGroup. An ig work either as a Kubernetes master or a node.
+Delete an instancegroup configuration. kOps has the concept of "instance groups", which are a group of similar virtual machines. On AWS, they map to an AutoScalingGroup. An ig work either as a Kubernetes master or a node.

 ```
 kops delete instancegroup [flags]
@@ -43,6 +43,6 @@ Get additional information about cloud and cluster resources.

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops describe secrets](kops_describe_secrets.md) - Describe a cluster secret
@@ -10,7 +10,7 @@ Edit clusters and other resources.
 Edit a resource configuration. This command changes the desired configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".

@@ -53,7 +53,7 @@ Edit a resource configuration. This command changes the desired configuration in

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops edit cluster](kops_edit_cluster.md) - Edit cluster.
 * [kops edit instancegroup](kops_edit_instancegroup.md) - Edit instancegroup.
@@ -12,7 +12,7 @@ Edit a cluster configuration.

 This command changes the desired cluster configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".

@@ -12,7 +12,7 @@ Edit a cluster configuration.

 This command changes the instancegroup desired configuration in the registry.

 To set your preferred editor, you can define the EDITOR environment variable.
-When you have done this, kops will use the editor that you have set.
+When you have done this, kOps will use the editor that you have set.

 kops edit does not update the cloud resources, to apply the changes use "kops update cluster".
@@ -44,6 +44,6 @@ Export configurations from a cluster.

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops export kubecfg](kops_export_kubecfg.md) - Export kubecfg.
@@ -30,8 +30,8 @@ kops export kubecfg CLUSTERNAME [flags]

 ```
 --admin duration[=18h0m0s] export a cluster admin user credential with the given lifetime and add it to the cluster context
---all export all clusters from the kops state store
---auth-plugin use the kops authentication plugin
+--all export all clusters from the kOps state store
+--auth-plugin use the kOps authentication plugin
 -h, --help help for kubecfg
 --internal use the cluster's internal DNS name
 --kubeconfig string the location of the kubeconfig file to create.
@@ -68,7 +68,7 @@ kops get [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops get clusters](kops_get_clusters.md) - Get one or many clusters.
 * [kops get instancegroups](kops_get_instancegroups.md) - Get one or many instancegroups
 * [kops get instances](kops_get_instances.md) - Display cluster instances.
@@ -45,6 +45,6 @@ Imports a kubernetes cluster created by kube-up.sh into a state store. This com

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops import cluster](kops_import_cluster.md) - Import a cluster.
@@ -56,5 +56,5 @@ kops replace -f FILENAME [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
@@ -31,11 +31,11 @@ Note: terraform users will need to run all of the following commands from the sa
 # Preview a rolling-update.
 kops rolling-update cluster

-# Roll the currently selected kops cluster with defaults.
+# Roll the currently selected kOps cluster with defaults.
 # Nodes will be drained and the cluster will be validated between node replacement.
 kops rolling-update cluster --yes

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not fail if the cluster does not validate,
 # wait 8 min to create new node, and wait at least
 # 8 min to validate the cluster.

@@ -44,7 +44,7 @@ Note: terraform users will need to run all of the following commands from the sa
 --master-interval=8m \
 --node-interval=8m

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not validate the cluster because of the cloudonly flag.
 # Force the entire cluster to roll, even if rolling update
 # reports that the cluster does not need to be rolled.

@@ -52,7 +52,7 @@ Note: terraform users will need to run all of the following commands from the sa
 --cloudonly \
 --force

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # only roll the node instancegroup,
 # use the new drain and validate functionality.
 kops rolling-update cluster k8s-cluster.example.com --yes \

@@ -89,6 +89,6 @@ Note: terraform users will need to run all of the following commands from the sa

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops rolling-update cluster](kops_rolling-update_cluster.md) - Rolling update a cluster.
@@ -35,11 +35,11 @@ kops rolling-update cluster [flags]
 # Preview a rolling-update.
 kops rolling-update cluster

-# Roll the currently selected kops cluster with defaults.
+# Roll the currently selected kOps cluster with defaults.
 # Nodes will be drained and the cluster will be validated between node replacement.
 kops rolling-update cluster --yes

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not fail if the cluster does not validate,
 # wait 8 min to create new node, and wait at least
 # 8 min to validate the cluster.

@@ -48,7 +48,7 @@ kops rolling-update cluster [flags]
 --master-interval=8m \
 --node-interval=8m

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # do not validate the cluster because of the cloudonly flag.
 # Force the entire cluster to roll, even if rolling update
 # reports that the cluster does not need to be rolled.

@@ -56,7 +56,7 @@ kops rolling-update cluster [flags]
 --cloudonly \
 --force

-# Roll the k8s-cluster.example.com kops cluster,
+# Roll the k8s-cluster.example.com kOps cluster,
 # only roll the node instancegroup,
 # use the new drain and validate functionality.
 kops rolling-update cluster k8s-cluster.example.com --yes \
@@ -46,6 +46,6 @@ Set a configuration field.

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops set cluster](kops_set_cluster.md) - Set cluster fields.
@@ -44,8 +44,8 @@ Misc infrequently used commands.

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
-* [kops toolbox convert-imported](kops_toolbox_convert-imported.md) - Convert an imported cluster into a kops cluster.
+* [kops](kops.md) - kOps is Kubernetes Operations.
+* [kops toolbox convert-imported](kops_toolbox_convert-imported.md) - Convert an imported cluster into a kOps cluster.
 * [kops toolbox dump](kops_toolbox_dump.md) - Dump cluster information
 * [kops toolbox instance-selector](kops_toolbox_instance-selector.md) - Generate on-demand or spot instance-group specs by providing resource specs like vcpus and memory.
 * [kops toolbox template](kops_toolbox_template.md) - Generate cluster.yaml from template
@@ -3,11 +3,11 @@

 ## kops toolbox convert-imported

-Convert an imported cluster into a kops cluster.
+Convert an imported cluster into a kOps cluster.

 ### Synopsis

-Convert an imported cluster into a kops cluster.
+Convert an imported cluster into a kOps cluster.

 ```
 kops toolbox convert-imported [flags]
@@ -44,6 +44,6 @@ Creates or updates cloud resources to match cluster desired configuration.

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops update cluster](kops_update_cluster.md) - Update a cluster.
@@ -26,7 +26,7 @@ kops update cluster [flags]

 ```
 --admin duration[=18h0m0s] Also export a cluster admin user credential with the specified lifetime and add it to the cluster context
---allow-kops-downgrade Allow an older version of kops to update the cluster than last used
+--allow-kops-downgrade Allow an older version of kOps to update the cluster than last used
 --create-kube-config Will control automatically creating the kube config file on your local filesystem (default true)
 -h, --help help for cluster
 --internal Use the cluster's internal DNS name. Implies --create-kube-config
@@ -44,6 +44,6 @@ Automates checking for and applying Kubernetes updates. This upgrades a cluster

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
 * [kops upgrade cluster](kops_upgrade_cluster.md) - Upgrade a kubernetes cluster.
@@ -3,7 +3,7 @@

 ## kops validate

-Validate a kops cluster.
+Validate a kOps cluster.

 ### Synopsis

@@ -45,6 +45,6 @@ This command validates a cluster. See: kops validate cluster -h

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
-* [kops validate cluster](kops_validate_cluster.md) - Validate a kops cluster.
+* [kops](kops.md) - kOps is Kubernetes Operations.
+* [kops validate cluster](kops_validate_cluster.md) - Validate a kOps cluster.
@@ -3,7 +3,7 @@

 ## kops validate cluster

-Validate a kops cluster.
+Validate a kOps cluster.

 ### Synopsis

@@ -58,5 +58,5 @@ kops validate cluster [flags]

 ### SEE ALSO

-* [kops validate](kops_validate.md) - Validate a kops cluster.
+* [kops validate](kops_validate.md) - Validate a kOps cluster.
@@ -3,11 +3,11 @@

 ## kops version

-Print the kops version information.
+Print the kOps version information.

 ### Synopsis

-Print the kops version and git SHA.
+Print the kOps version and git SHA.

 ```
 kops version [flags]

@@ -23,7 +23,7 @@ kops version [flags]

 ```
 -h, --help help for version
---short only print the main kops version, useful for scripting
+--short only print the main kOps version, useful for scripting
 ```

 ### Options inherited from parent commands

@@ -48,5 +48,5 @@ kops version [flags]

 ### SEE ALSO

-* [kops](kops.md) - kops is Kubernetes ops.
+* [kops](kops.md) - kOps is Kubernetes Operations.
@@ -22,8 +22,7 @@ spec:
 ```

-When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type`
-field should be `Public` or `Internal`.
+When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The `type` field should be `Public` or `Internal`.

 Also, you can add precreated additional security groups to the load balancer by setting `additionalSecurityGroups`.

@@ -37,7 +36,7 @@ spec:
 - sg-xxxxxxxx
 ```

 Additionally, you can increase idle timeout of the load balancer by setting its `idleTimeoutSeconds`. The default idle timeout is 5 minutes, with a maximum of 3600 seconds (60 minutes) being allowed by AWS. Note this value is ignored for load balancer Class `Network`.
 For more information see [configuring idle timeouts](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).

 ```yaml

@@ -84,7 +83,7 @@ spec:

 {{ kops_feature_table(kops_added_default='1.19') }}

 You can choose to have a Network Load Balancer instead of a Classic Load Balancer. The `class` field should be either `Network` or `Classic` (default).

 **Note**: changing the class of load balancer in an existing cluster is a disruptive operation. Until the masters have gone through a rolling update, new connections to the apiserver will fail due to the old master's TLS certificates containing the old load balancer's IP address.
 ```yaml

@@ -307,7 +306,7 @@ spec:

 **Note**: The auditPolicyFile is needed. If the flag is omitted, no events are logged.

 You could use the [fileAssets](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#fileassets) feature to push an advanced audit policy file on the master nodes.

 Example policy file can be found [here](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/audit/audit-policy.yaml)
@@ -2,11 +2,11 @@
 HTTP Forward Proxy Support
 ==========================

 It is possible to launch a Kubernetes cluster from behind an http forward proxy ("corporate proxy"). To do so, you will need to configure the `egressProxy` for the cluster.

 It is assumed the proxy is already existing. If you want a private topology on AWS, for example, with a proxy instead of a NAT instance, you'll need to create the proxy yourself. See [Running in a shared VPC](run_in_existing_vpc.md).

 This configuration only manages proxy configurations for kOps and the Kubernetes cluster. We can not handle proxy configuration for application containers and pods.

 ## Configuration

@@ -24,7 +24,7 @@ Currently we assume the same configuration for http and https traffic.

 ## Proxy Excludes

 Most clients will blindly try to use the proxy to make all calls, even to localhost and the local subnet, unless configured otherwise. Some basic exclusions necessary for successful launch and operation are added for you at initial cluster creation. If you wish to add additional exclusions, add or edit `egressProxy.excludes` with a comma separated list of hostnames. Matching is based on suffix, ie, `corp.local` will match `images.corp.local`, and `.corp.local` will match `corp.local` and `images.corp.local`, following typical `no_proxy` environment variable conventions.

 ``` yaml
 spec:

@@ -37,4 +37,4 @@ spec:

 ## AWS VPC Endpoints and S3 access

 If you are hosting on AWS have configured VPC "Endpoints" for S3 or other services, you may want to add these to the `spec.egressProxy.excludes`. Keep in mind that the S3 bucket must be in the same region as the VPC for it to be accessible via the endpoint.
@@ -36,7 +36,7 @@ You can also [install from source](development/building.md).

 ## kubectl

 `kubectl` is the CLI tool to manage and operate Kubernetes clusters. You can install it as follows.

 ### MacOS
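The macOS steps themselves fall outside this hunk; as a hedged sketch, a typical install looks like either of these (the Homebrew formula and download URL are the upstream defaults, not taken from this diff):

```bash
# Via Homebrew
brew install kubectl

# Or download the binary from the official Kubernetes release site
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```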
@@ -25,7 +25,7 @@ Autoscaling groups automatically include multiple [scaling processes](https://do
 that keep our ASGs healthy. In some cases, you may want to disable certain scaling activities.

 An example of this is if you are running multiple AZs in an ASG while using a Kubernetes Autoscaler.
 The autoscaler will remove specific instances that are not being used. In some cases, the `AZRebalance` process
 will rescale the ASG without warning.

 ```YAML
@@ -144,12 +144,10 @@ which would end up in a drop-in file on nodes of the instance group in question.

 ## mixedInstancesPolicy (AWS Only)

-A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to
-select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.
+A Mixed Instances Policy utilizing EC2 Spot and the `capacity-optimized` allocation strategy allows an EC2 Autoscaling Group to select the instance types with the highest capacity. This reduces the chance of a spot interruption on your instance group.

 Instance groups with a mixedInstancesPolicy can be generated with the `kops toolbox instance-selector` command.
-The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types
-that match your criteria.
+The instance-selector accepts user supplied resource parameters like vcpus, memory, and much more to dynamically select instance types that match your criteria.

 ```bash
 kops toolbox instance-selector --vcpus 4 --flexible --usage-class spot --instance-group-name spotgroup

@@ -187,7 +185,7 @@ spec:

 ### Instances

-Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group
+Instances is a list of instance types which we are willing to run in the EC2 Auto Scaling group.

 ### onDemandAllocationStrategy
@@ -308,7 +308,7 @@ Thus a manifest will actually look like this:
 Note that the two addons have the same version, but a different `kubernetesVersion` selector.
 But they have different `id` values; addons with matching semvers but different `id`s will
 be upgraded. (We will never downgrade to an older semver though, regardless of `id`)

 So now in the above scenario after the downgrade to 1.5, although the semver is the same,
 the id will not match, and the `pre-k8s-16` will be installed. (And when we upgrade back
@@ -52,7 +52,7 @@ kops get cluster ${OLD_NAME} -oyaml

 ## Move resources to a new cluster

 The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things this step does:

 * It resizes existing autoscaling groups to size 0
 * It will stop the existing master
@ -20,7 +20,7 @@ Backups and restores of etcd on kOps are covered in [etcd_backup_restore_encrypt
|
|||
|
||||
## Direct Data Access
|
||||
|
||||
It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kops.
|
||||
It's not typically necessary to view or manipulate the data inside of etcd directly with etcdctl, because all operations usually go through kubectl commands. However, it can be informative during troubleshooting, or just to understand kubernetes better. Here are the steps to accomplish that on kOps.
|
||||
|
||||
1\. Connect to an etcd-manager pod
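A sketch of that first step (the exact pod name depends on your cluster; the placeholder below is an assumption):

```bash
# Find the etcd-manager pod for the main etcd cluster running on a control-plane node
kubectl -n kube-system get pods | grep etcd-manager-main
# Open a shell inside it (replace the pod name with the one from the previous command)
kubectl -n kube-system exec -it etcd-manager-main-<control-plane-node> -- sh
```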

@ -36,7 +36,7 @@ You can also rerun [these steps](../development/building.md) if previously built

## Upgrading Kubernetes

Upgrading Kubernetes is easy with kops. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.
Upgrading Kubernetes is easy with kOps. The cluster spec contains a `kubernetesVersion`, so you can simply edit it with `kops edit`, and apply the updated configuration to your cluster.

The `kops upgrade` command also automates checking for and applying updates.
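A sketch of the two approaches (a rolling update is still needed after `kops upgrade` as well):

```bash
# Option 1: pick the version yourself
kops edit cluster                    # set spec.kubernetesVersion to the desired release
kops update cluster --yes            # apply the new configuration
kops rolling-update cluster --yes    # replace instances so they run the new version

# Option 2: let kOps choose the recommended version for you
kops upgrade cluster --yes           # then run the update and rolling update as above
```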

@ -19,7 +19,7 @@ kOps can:

Some users will need or prefer to use tools like Terraform for cluster configuration,
so kOps can also output the equivalent configuration for those tools (currently just Terraform, others
planned). After creation with your preferred tool, you can still use the rest of the kOps tooling to operate
your cluster.
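For example, a sketch of generating Terraform output instead of applying changes directly (the cluster name and output directory below are placeholders):

```bash
# Emit Terraform configuration rather than applying changes to the cloud
kops update cluster example.k8s.local --target=terraform --out=./kops-terraform
cd ./kops-terraform
terraform init && terraform plan
```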

## Primary API types

@ -29,7 +29,7 @@ There are two primary types:

* Cluster represents the overall cluster configuration (such as the version of kubernetes we are running), and contains default values for the individual nodes.

* InstanceGroup is a group of instances with similar configuration that are managed together.
Typically this is a group of Nodes or a single master instance. On AWS, it is currently implemented by an AutoScalingGroup.
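Both objects can be inspected with the kOps CLI; a minimal sketch (the cluster name is a placeholder):

```bash
kops get cluster example.k8s.local -o yaml            # the Cluster object
kops get ig --name example.k8s.local                  # list InstanceGroups
kops get ig nodes --name example.k8s.local -o yaml    # a single InstanceGroup
```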

## State Store

@ -40,15 +40,14 @@ The API objects are currently stored in an abstraction called a ["state store"](

Configuration of a kubernetes cluster is actually relatively complicated: there are a lot of options, and many combinations
must be configured consistently with each other.

Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values
that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).
Similar to the way creating a Kubernetes object populates other spec values, the `kops create cluster` command will infer other values that are not set, so that you can specify a minimal set of values (but if you don't want to override the default value, you simply specify the fields!).
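As an illustration, a minimal invocation and a way to see what was inferred (the name, zone, and bucket below are placeholders):

```bash
kops create cluster \
  --name=example.k8s.local \
  --zones=us-east-1a \
  --state=s3://example-state-store
# Compare the few flags you passed with the full, inferred spec:
kops get cluster example.k8s.local --state=s3://example-state-store -o yaml
```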

Because more values are inferred than with simpler k8s objects, we record the user-created spec separately from the
complete inferred specification. This means we can keep track of which values were actually set by the user, vs just being
default values; this lets us avoid some of the problems e.g. with ClusterIP on a Service.

We aim to remove any computation logic from the downstream pieces (i.e. nodeup & protokube); this means there is a
single source of truth and it is practical to implement alternatives to nodeup & protokube. For example, components
such as kubelet might read their configuration directly from the state store in future, eliminating the need to
have a management process that copies values around.

@ -23,7 +23,7 @@ preparing for a new kubernetes release, we will try to advance the master branch

to focus on the new functionality, and start cherry-picking back more selectively
to the release branches only as needed.

Generally we don't encourage users to run older kops versions, or older
Generally we don't encourage users to run older kOps versions, or older
branches, because newer versions of kOps should remain compatible with older
versions of Kubernetes.

@ -118,8 +118,7 @@ git fetch origin # sync back up

## Wait for CI job to complete

The staging CI job should now see the tag, and build it (from the
trusted prow cluster, using Google Cloud Build).
The staging CI job should now see the tag, and build it (from the trusted prow cluster, using Google Cloud Build).

The job is here: https://testgrid.k8s.io/sig-cluster-lifecycle-kops#kops-postsubmit-push-to-staging

@ -1,6 +1,6 @@

## Release notes for kops 1.20 series
## Release notes for kOps 1.20 series

(The kops 1.20 release has not been released yet; this is a document to gather the notes prior to the release).
(The kOps 1.20 release has not been released yet; this is a document to gather the notes prior to the release).

# Significant changes

@ -16,7 +16,7 @@

# Deprecations

* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kops 1.21.
* Support for Kubernetes versions 1.13 and 1.14 are deprecated and will be removed in kOps 1.21.

* The [manifest based metrics server addon](https://github.com/kubernetes/kops/tree/master/addons/metrics-server) has been deprecated in favour of a configurable addon.

@ -1,8 +1,6 @@

## Running in a shared VPC

When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway
or NAT Gateway you can tell kOps to ignore egress. By default, kops creates a new subnet per zone and a new route table,
but you can instead use a shared subnet (see [below](#shared-subnets)).
When launching into a shared VPC, kOps will reuse the VPC and Internet Gateway. If you are not using an Internet Gateway or NAT Gateway you can tell kOps to ignore egress. By default, kOps creates a new subnet per zone and a new route table, but you can instead use a shared subnet (see [below](#shared-subnets)).

1. Use `kops create cluster` with the `--vpc` argument for your existing VPC:
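A sketch of that command (the VPC ID, cluster name, and zone below are placeholders):

```bash
export VPC_ID=vpc-0123456789abcdef0   # placeholder: your existing VPC
kops create cluster \
  --zones=us-east-1a \
  --vpc=${VPC_ID} \
  example.k8s.local
# you may also need --network-cidr to match the VPC's CIDR
```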

@ -161,7 +159,7 @@ spec:

### Shared NAT Egress

On AWS in private [topology](topology.md), kops creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.
On AWS in private [topology](topology.md), kOps creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that `kops` deploys private resources to, it is possible to specify the ID and have `kops`/`kubernetes` use it.

If you don't want to use NAT Gateways but have setup [EC2 NAT Instances](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) in your VPC that you can share, it's possible to specify the IDs of said instances and have `kops`/`kubernetes` use them.
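A sketch of what specifying an existing NGW might look like, assuming the `egress` field on a subnet in the cluster spec (the IDs and CIDRs are placeholders; check the shared-VPC docs for your kOps version):

```bash
kops edit cluster
# In the editor, point a private subnet at the existing NAT gateway (assumed field):
#   subnets:
#   - name: us-east-1a
#     type: Private
#     zone: us-east-1a
#     cidr: 10.20.64.0/21
#     egress: nat-0123456789abcdef0
kops update cluster --yes
```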

@ -191,9 +189,9 @@ spec:

Please note:

* You must specify pre-created subnets for either all of the subnets or none of them.
* kOps won't alter your existing subnets. They must be correctly set up with route tables, etc. The
Public or Utility subnets should have public IPs and an Internet Gateway configured as their default route
in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway
configured as their default route.
* kOps won't create a route-table at all if it's not creating subnets.
* In the example above the first subnet is using a shared NAT Gateway while the

@ -1,10 +1,10 @@

# The State Store

kOps has the notion of a 'state store'; a location where we store the configuration of your cluster. State is stored
here not only when you first create a cluster, but also you can change the state and apply changes to a running cluster.

Eventually, kubernetes services will also pull from the state store, so that we don't need to marshal all our
configuration through a channel like user-data. (This is currently done for secrets and SSL keys, for example,
though we have to copy the data from the state store to a file where components like kubelet can read them).

The state store uses kOps's VFS implementation, so can in theory be stored anywhere.

@ -22,7 +22,7 @@ The state store is just files; you can copy the files down and put them into git

## {statestore}/config

One of the most important files in the state store is the top-level config file. This file stores the main
configuration for your cluster (instance types, zones, etc).

When you run `kops create cluster`, we create a state store entry for you based on the command line options you specify.
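Because the state store is just files, you can look at that entry directly; a sketch assuming an S3 state store (the bucket and cluster name are placeholders):

```bash
aws s3 ls s3://example-state-store/example.k8s.local/            # list the cluster's entries
aws s3 cp s3://example-state-store/example.k8s.local/config -    # print the top-level config
```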

@ -39,7 +39,7 @@ reconfiguring your cluster - for example just `kops create cluster` after a dry-

## State store configuration

There are a few ways to configure your state store. In priority order:

+ command line argument `--state s3://yourstatestore`
+ environment variable `export KOPS_STATE_STORE=s3://yourstatestore`

@ -85,7 +85,7 @@ Wait for the cluster to initialize. If all goes well, you should have a working

#### Editing the cluster

It's possible to use Terraform to make changes to your infrastructure as defined by kops. In the example below we'd like to change some cluster configs:
It's possible to use Terraform to make changes to your infrastructure as defined by kOps. In the example below we'd like to change some cluster configs:

```
$ kops edit cluster \

@ -47,7 +47,7 @@ More information about [networking options](networking.md) can be found in our d

## Changing Topology of the API server

To change the ELB that fronts the API server from internet-facing to internal-only, there are a few steps to accomplish.

The AWS ELB does not support changing from internet facing to Internal. However what we can do is have kOps recreate the ELB for us.

### Steps to change the ELB from Internet-Facing to Internal

- Edit the cluster: `kops edit cluster $NAME`

@ -1,12 +1,12 @@

# Upgrading kubernetes

Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kops.
Upgrading kubernetes is very easy with kOps, as long as you are using a compatible version of kOps.
The kOps `1.18.x` series (for example) supports the kubernetes 1.16, 1.17 and 1.18 series,
as per the kubernetes deprecation policy. Older versions of kubernetes will likely still work, but these
are on a best-effort basis and will have little if any testing. kOps `1.18` will not support the kubernetes
`1.19` series, and for full support of kubernetes `1.19` it is best to wait for the kOps `1.19` series release.
We aim to release the next major version of kOps within a few weeks of the equivalent major release of kubernetes,
so kOps `1.19.0` will be released within a few weeks of kubernetes `1.19.0`. We try to ensure that a 1.19 pre-release
(alpha or beta) is available at the kubernetes release, for early adopters.

Upgrading kubernetes is similar to changing the image on an InstanceGroup, except that the kubernetes version is

@ -1,14 +1,14 @@

# Managing Instance Groups

kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
an AutoScalingGroup.
kOps has the concept of "instance groups", which are a group of similar machines. On AWS, they map to
an Auto Scaling group.

By default, a cluster has:

* An instance group called `nodes` spanning all the zones; these instances are your workers.
* One instance group for each master zone, called `master-<zone>` (e.g. `master-us-east-1c`). These normally have
minimum size and maximum size = 1, so they will run a single instance. We do this so that the cloud will
always relaunch masters, even if everything is terminated at once. We have an instance group per zone
because we need to force the cloud to run an instance in every zone, so we can mount the master volumes - we
cannot do that across zones.

@ -37,8 +37,8 @@ You can also use the `kops get ig` alias.

## Change the instance type in an instance group

First you edit the instance group spec, using `kops edit ig nodes`. Change the machine type to `t2.large`,
for example. Now if you `kops get ig`, you will see the large instance size. Note though that these changes
have not yet been applied (this may change soon though!).

To preview the change:
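A sketch of the preview-and-apply flow (output omitted):

```bash
kops update cluster             # dry run: shows the InstanceGroup/launch configuration changes
kops update cluster --yes       # apply the change
kops rolling-update cluster     # see which instances still need to be replaced
```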

@ -76,7 +76,7 @@ master-us-central1-a Master n1-standard-1 1 1 us-central1

nodes Node n1-standard-2 2 2 us-central1
```

Let's change the number of nodes to 3. We'll edit the InstanceGroup configuration using `kops edit` (which
should be very familiar to you if you've used `kubectl edit`). `kops edit ig nodes` will open
the InstanceGroup in your editor, looking a bit like this:

@ -99,11 +99,11 @@ spec:

- us-central1-a
```

Edit `minSize` and `maxSize`, changing both from 2 to 3, save and exit your editor. If you wanted to change
the image or the machineType, you could do that here as well. There are actually a lot more fields,
but most of them have their default values, so won't show up unless they are set. The general approach is the same though.

On saving you'll note that nothing happens. Although you've changed the model, you need to tell kOps to
apply your changes to the cloud.

We use the same `kops update cluster` command that we used when initially creating the cluster; when

@ -122,7 +122,7 @@ This is saying that we will alter the `TargetSize` property of the `InstanceGrou

That's what we want, so we `kops update cluster --yes`.

kOps will resize the GCE managed instance group from 2 to 3, which will create a new GCE instance,
which will then boot and join the cluster. Within a minute or so you should see the new node join:

```
> kubectl get nodes

@ -138,7 +138,7 @@ nodes-z2cz Ready 1s v1.7.2

## Changing the image

That was a fairly simple change, because we didn't have to reboot the nodes. Most changes though do
require rolling your instances - this is actually a deliberate design decision, in that we are aiming
for immutable nodes. An example is changing your image. We're using `cos-stable`, which is Google's
Container OS. Let's try Debian Stretch instead.
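A sketch of that edit (the image value below is a placeholder; use an image name valid for your project and region):

```bash
kops edit ig nodes
# In the editor, change the image on the InstanceGroup spec, e.g.:
#   spec:
#     image: <your-debian-9-image>   # placeholder
kops update cluster   # preview; add --yes to apply
```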

@ -180,15 +180,15 @@ Will modify resources:

Note that the `BootDiskImage` is indeed set to the debian 9 image you requested.

`kops update cluster --yes` will now apply the change, but if you were to run `kubectl get nodes` you would see
that the instances had not yet been reconfigured. There's a hint at the bottom:

```
Changes may require instances to restart: kops rolling-update cluster
```

These changes require your instances to restart (we'll remove the COS images and replace them with Debian images). kOps
can perform a rolling update to minimize disruption, but even so you might not want to perform the update right away;
you might want to make more changes or you might want to wait for off-peak hours. You might just want to wait for
the instances to terminate naturally - new instances will come up with the new configuration - though if you're not
using preemptible/spot instances you might be waiting for a long time.
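When you are ready, a sketch of the rolling update:

```bash
kops rolling-update cluster          # preview which nodes are out of date
kops rolling-update cluster --yes    # drain and replace them
```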

@ -333,7 +333,7 @@ $ df -h | grep nvme[12]

## Creating a new instance group

Suppose you want to add a new group of nodes, perhaps with a different instance type. You do this using `kops create ig <InstanceGroupName> --subnet <zone(s)>`. Currently the
`--subnet` flag is required, and it receives the zone(s) of the subnet(s) in which the instance group will be. The command opens an editor with a skeleton configuration, allowing you to edit it before creation.

So the procedure is:
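A sketch of those steps (the instance group name and zone are placeholders):

```bash
kops create ig highmem-nodes --subnet us-east-1a   # opens an editor with a skeleton spec
kops update cluster             # preview the new Auto Scaling group
kops update cluster --yes       # create it
```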

@ -519,7 +519,7 @@ spec:

If `openstack.kops.io/osVolumeSize` is not set it will default to the minimum disk specified by the image.

# Working with InstanceGroups

The kOps InstanceGroup is a declarative model of a group of nodes. By modifying the object, you
can change the instance type you're using, the number of nodes you have, the OS image you're running - essentially
all the per-node configuration is in the InstanceGroup.

@ -1,7 +1,7 @@

# Upgrading from kube-up to kOps

kOps lets you upgrade an existing kubernetes cluster installed using kube-up, to a cluster managed by
kops.
kOps.

** This is a slightly risky procedure, so we recommend backing up important data before proceeding.
Take a snapshot of your EBS volumes; export all your data from kubectl etc. **

@ -28,7 +28,7 @@ configuration.

Make sure you have set `export KOPS_STATE_STORE=s3://<mybucket>`

Then import the cluster; setting `--name` and `--region` to match the old cluster. If you're not sure
of the old cluster name, you can find it by looking at the `KubernetesCluster` tag on your AWS resources.

```

@ -39,7 +39,7 @@ kops import cluster --region ${REGION} --name ${OLD_NAME}

## Verify the cluster configuration

Now have a look at the cluster configuration, to make sure it looks right. If it doesn't, please
open an issue.

```

@ -48,7 +48,7 @@ kops get cluster ${OLD_NAME} -oyaml

## Move resources to a new cluster

The upgrade moves some resources so they will be adopted by the new cluster. There are a number of things
this step does:

* It resizes existing autoscaling groups to size 0

@ -39,7 +39,7 @@ https://go.k8s.io/bot-commands).

## Office Hours

kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.

For more information, check out the [office hours page](office_hours.md).

@ -1,6 +1,6 @@

## Office Hours

kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kops. This session is open to both developers and users.
kOps maintainers set aside one hour every other week for **public** office hours. This time is used to gather with community members interested in kOps. This session is open to both developers and users.

Office hours are hosted on a [zoom video chat](https://zoom.us/j/97072789944?pwd=VVlUR3dhN2h5TEFQZHZTVVd4SnJUdz09) on Fridays at [12 noon (Eastern Time)/9 am (Pacific Time)](http://www.worldtimebuddy.com/?pl=1&lid=100,5,8,12) during weeks with odd "numbers". To check this week's number, run: `date +%V`. If the response is odd, join us on Friday for office hours!

@ -13,7 +13,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.

## Compatibility Matrix

| kops version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
| kOps version | k8s 1.14.x | k8s 1.15.x | k8s 1.16.x | k8s 1.17.x | k8s 1.18.x |
|---------------|------------|------------|------------|------------|------------|
| 1.18.0 | ✔ | ✔ | ✔ | ✔ | ✔ |
| 1.17.x | ✔ | ✔ | ✔ | ✔ | ⚫ |

@ -23,7 +23,7 @@ support Kubernetes 1.16.5, 1.15.2, and several previous Kubernetes versions.

Use the latest version of kOps for all releases of Kubernetes, with the caveat
that higher versions of Kubernetes are not _officially_ supported by kops.
that higher versions of Kubernetes are not _officially_ supported by kOps.
Releases which are ~~crossed out~~ _should_ work, but we suggest they be upgraded soon.

## Release Schedule

@ -1,3 +1,3 @@

This directory contains docs that add contextual help to error messages.

The links are baked into kops, and thus we cannot rename or move these files (at least not quickly).
The links are baked into kOps, and thus we cannot rename or move these files (at least not quickly).

@ -3,7 +3,7 @@

Kops has established a deprecation policy for Kubernetes version support.
Kops will remove support for Kubernetes versions as follows:

| kops version | Removes support for Kubernetes version |
| kOps version | Removes support for Kubernetes version |
|--------------|----------------------------------------|
| 1.18 | 1.8 and below |
| 1.19 | 1.9 and 1.10 |

@ -1,5 +1,5 @@

# Kops Upgrade Recommended

You are running a version of kops that we recommend upgrading.
You are running a version of kOps that we recommend upgrading.

The latest releases are available from [Github Releases](https://github.com/kubernetes/kops/releases)