Kill all federation v1 related tasks pages. (#17949)

This commit is contained in:
Jacky Wu 2020-02-20 05:03:46 +08:00 committed by vineeth
parent f368463034
commit 9db5a04590
17 changed files with 0 additions and 2736 deletions

View File

@ -1,5 +0,0 @@
---
title: "Federation"
weight: 120
---

View File

@ -1,5 +0,0 @@
---
title: "Administer Federation Control Plane"
weight: 160
---

View File

@ -1,119 +0,0 @@
---
title: Federated Cluster
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use the Cluster API resource in a Federation control plane.
Unlike other Kubernetes resources, such as Deployments, Services, and ConfigMaps,
clusters exist only in the federation context, so requests for them must be submitted to the
federation API server.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic [working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general.
{{% /capture %}}
{{% capture steps %}}
## Listing Clusters
To list the clusters available in your federation, you can use [kubectl](/docs/user-guide/kubectl/) by
running:
``` shell
kubectl --context=federation get clusters
```
The `--context=federation` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster. If you submit it to a Kubernetes cluster instead, you will receive the error
`the server doesn't have a resource type "clusters"`.
If you passed the correct Federation context but received the message
`No resources found.`, it means that you haven't
added any clusters to the Federation yet.
## Creating a Federated Cluster
Creating a `cluster` resource in federation means joining a cluster to the federation. To do so, you can use
`kubefed join`. You need to give the new cluster a name and specify the context
that corresponds to the cluster hosting the federation. The following example command adds
the cluster `gondor` to the federation running on host cluster `rivendell`:
``` shell
kubefed join gondor --host-cluster-context=rivendell
```
You can find more details on how to do that in the respective section in the
[kubefed guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/#adding-a-cluster-to-a-federation).
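After joining, the new cluster should appear when you list the clusters again:
``` shell
# Confirm that gondor is now registered with the federation
kubectl --context=federation get clusters
```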
## Deleting a Federated Cluster
Conversely to creating a cluster, deleting a cluster means unjoining it from the
federation. This can be done with the `kubefed unjoin` command. To remove the `gondor` cluster, run:
``` shell
kubefed unjoin gondor --host-cluster-context=rivendell
```
You can find more details about `kubefed unjoin` in the
[kubefed guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/#removing-a-cluster-from-a-federation).
## Labeling Clusters
You can label clusters the same way as any other Kubernetes object, which can help with grouping clusters and can also be leveraged by the ClusterSelector.
``` shell
kubectl --context=rivendell label cluster gondor key1=value1 key2=value2
```
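To confirm that the labels were applied, list the clusters together with their labels, using the same context as the labeling command above:
``` shell
kubectl --context=rivendell get clusters --show-labels
```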
## ClusterSelector Annotation
You can use a (deprecated) annotation for directing objects across the federated clusters: `federation.alpha.kubernetes.io/cluster-selector`. The *ClusterSelector* is conceptually similar to `nodeSelector`, but instead of selecting against labels on nodes, it selects against labels on federated clusters.
The annotation value must be JSON formatted and must be parsable into the [ClusterSelector API type](/docs/reference/federation/v1beta1/definitions/#_v1beta1_clusterselector). For example: `[{"key": "load", "operator": "Lt", "values": ["10"]}]`. Content that doesn't parse correctly results in an error and prevents distribution of the object to any federated clusters. Objects of type ConfigMap, Secret, DaemonSet, Service, and Ingress are included in the alpha implementation.
Here is an example ClusterSelector annotation, which will only select clusters WITH the label `pci=true` and WITHOUT the label `environment=test`:
``` yaml
metadata:
  annotations:
    federation.alpha.kubernetes.io/cluster-selector: '[{"key": "pci", "operator":
      "In", "values": ["true"]}, {"key": "environment", "operator": "NotIn", "values":
      ["test"]}]'
```
The *key* is matched against label names on the federated clusters.
The *values* are matched against the label values on the federated clusters.
The possible *operators* are: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
The *values* field is expected to be empty when `Exists` or `DoesNotExist` is specified and may include more than one string when `In` or `NotIn` are used.
Currently, only integers are supported with `Gt` or `Lt`.
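As a sketch, you could attach the `load` selector shown earlier to an existing federated object with `kubectl annotate`; the ConfigMap name `myconfigmap` here is purely illustrative:
``` shell
# Only clusters whose "load" label value parses to an integer less than 10 receive the object
kubectl --context=federation annotate configmap myconfigmap \
    federation.alpha.kubernetes.io/cluster-selector='[{"key": "load", "operator": "Lt", "values": ["10"]}]'
```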
## Clusters API reference
The full clusters API reference is currently in `federation/v1beta1` and more details can be found in the
[Federation API reference page](/docs/reference/federation/).
{{% /capture %}}

View File

@ -1,89 +0,0 @@
---
title: Federated ConfigMap
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use ConfigMaps in a Federation control plane.
Federated ConfigMaps are very similar to the traditional [Kubernetes
ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a Federated ConfigMap
The API for Federated ConfigMap is 100% compatible with the
API for traditional Kubernetes ConfigMap. You can create a ConfigMap by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f myconfigmap.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
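If you don't already have a `myconfigmap.yaml`, a minimal sketch can be created inline instead; the data key and value here are purely illustrative:
``` shell
cat <<EOF | kubectl --context=federation-cluster create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  example.key: example-value
EOF
```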
Once a Federated ConfigMap is created, the federation control plane will create
a matching ConfigMap in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get configmap myconfigmap
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
These ConfigMaps in underlying clusters will match the Federated ConfigMap.
## Updating a Federated ConfigMap
You can update a Federated ConfigMap as you would update a Kubernetes
ConfigMap; however, for a Federated ConfigMap, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated ConfigMap is
updated, it updates the corresponding ConfigMaps in all underlying clusters to
match it.
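For example, one way to update a single key is with `kubectl patch`; the key and value shown are illustrative:
``` shell
kubectl --context=federation-cluster patch configmap myconfigmap \
    -p '{"data":{"example.key":"new-value"}}'
```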
## Deleting a Federated ConfigMap
You can delete a Federated ConfigMap as you would delete a Kubernetes
ConfigMap; however, for a Federated ConfigMap, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete configmap myconfigmap
```
{{< note >}}
Deleting a Federated ConfigMap does not delete the corresponding ConfigMaps from underlying clusters. You must delete the underlying ConfigMaps manually.
{{< /note >}}
{{% /capture %}}

View File

@ -1,83 +0,0 @@
---
title: Federated DaemonSet
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use DaemonSets in a federation control plane.
DaemonSets in the federation control plane ("Federated Daemonsets" in
this guide) are very similar to the traditional Kubernetes
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [DaemonSets](/docs/concepts/workloads/controllers/daemonset/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a Federated Daemonset
The API for Federated Daemonset is 100% compatible with the
API for traditional Kubernetes DaemonSet. You can create a DaemonSet by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f mydaemonset.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
Once a Federated Daemonset is created, the federation control plane will create
a matching DaemonSet in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get daemonset mydaemonset
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
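To check every member cluster at once, you can loop over your client contexts; the context names below are illustrative:
``` shell
for ctx in gce-asia-east1a gce-us-central1b; do
  # Each member cluster should report a DaemonSet matching the federated one
  kubectl --context="$ctx" get daemonset mydaemonset
done
```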
## Updating a Federated Daemonset
You can update a Federated Daemonset as you would update a Kubernetes
DaemonSet; however, for a Federated Daemonset, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated Daemonset is
updated, it updates the corresponding DaemonSets in all underlying clusters to
match it.
## Deleting a Federated Daemonset
You can delete a Federated Daemonset as you would delete a Kubernetes
DaemonSet; however, for a Federated Daemonset, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete daemonset mydaemonset
```
{{% /capture %}}

View File

@ -1,111 +0,0 @@
---
title: Federated Deployment
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use Deployments in the Federation control plane.
Deployments in the federation control plane (referred to as "Federated Deployments" in
this guide) are very similar to the traditional [Kubernetes
Deployment](/docs/concepts/workloads/controllers/deployment/) and provide the same functionality.
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.
{{< feature-state for_k8s_version="1.5" state="alpha" >}}
Some features
(such as full rollout compatibility) are still in development.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Deployments](/docs/concepts/workloads/controllers/deployment/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a Federated Deployment
The API for Federated Deployment is compatible with the
API for traditional Kubernetes Deployment. You can create a Deployment by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f mydeployment.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
Once a Federated Deployment is created, the federation control plane will create
a Deployment in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get deployment mydep
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
These Deployments in underlying clusters will match the federation Deployment
_except_ in the number of replicas and revision-related annotations.
The federation control plane ensures that the
sum of the replicas across all underlying clusters matches the desired number of replicas in the
Federated Deployment.
### Spreading Replicas in Underlying Clusters
By default, replicas are spread equally in all the underlying clusters. For example:
if you have 3 registered clusters and you create a Federated Deployment with
`spec.replicas = 9`, then each Deployment in the 3 clusters will have
`spec.replicas=3`.
To modify the number of replicas in each cluster, you can specify
[FederatedReplicaSetPreference](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)
as an annotation with key `federation.kubernetes.io/deployment-preferences`
on Federated Deployment.
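For illustration, the annotation could be attached with `kubectl annotate`; the field names follow the `FederatedReplicaSetPreference` type linked above, and the cluster name `gondor` and the values are assumptions:
``` shell
kubectl --context=federation-cluster annotate deployment mydep \
    federation.kubernetes.io/deployment-preferences='{"rebalance": true, "clusters": {"gondor": {"weight": 200}}}'
```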
## Updating a Federated Deployment
You can update a Federated Deployment as you would update a Kubernetes
Deployment; however, for a Federated Deployment, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the Federated Deployment is
updated, it updates the corresponding Deployments in all underlying clusters to
match it. If the rolling update strategy is chosen, each underlying
cluster performs the rolling update independently, and `maxSurge` and `maxUnavailable`
apply only to individual clusters. This behavior may change in the future.
If your update includes a change in number of replicas, the federation
control plane will change the number of replicas in underlying clusters to
ensure that their sum remains equal to the number of desired replicas in
Federated Deployment.
## Deleting a Federated Deployment
You can delete a Federated Deployment as you would delete a Kubernetes
Deployment; however, for a Federated Deployment, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete deployment mydep
```
{{% /capture %}}

View File

@ -1,50 +0,0 @@
---
title: Federated Events
content_template: templates/concept
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use events in the federation control plane to help with debugging.
{{% /capture %}}
{{% capture body %}}
## Prerequisites
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/concepts/cluster-administration/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general.
## View federation events
Events in the federation control plane (referred to as "federation events" in
this guide) are very similar to traditional Kubernetes
Events and provide the same functionality.
Federation events are stored only in the federation control plane and are not passed on to the underlying Kubernetes clusters.
Federation controllers create events as they process API resources, to surface their
state to the user.
You can get all events from the federation apiserver by running:
```shell
kubectl --context=federation-cluster get events
```
The standard kubectl get, update, delete commands will all work.
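For example, to review events in the order they were created, or to watch new ones as they arrive:
```shell
kubectl --context=federation-cluster get events --sort-by=.metadata.creationTimestamp
kubectl --context=federation-cluster get events --watch
```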
{{% /capture %}}

View File

@ -1,184 +0,0 @@
---
title: Federated Horizontal Pod Autoscalers (HPA)
content_template: templates/task
---
{{% capture overview %}}
{{< feature-state state="alpha" >}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use federated horizontal pod autoscalers (HPAs) in the federation control plane.
HPAs in the federation control plane are similar to the traditional [Kubernetes
HPAs](/docs/tasks/run-application/horizontal-pod-autoscale/), and provide the same functionality.
Creating an HPA targeting a federated object in the federation control plane ensures that the
desired number of replicas of the target object are scaled across the registered clusters,
instead of a single cluster. Also, the control plane keeps monitoring the status of each
individual HPA in the federated clusters and ensures the workload replicas move where they are
needed most by manipulating the min and max limits of the HPA objects in the federated clusters.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [HPAs](/docs/tasks/run-application/horizontal-pod-autoscale/) in particular.
The federated HPA is an alpha feature. The API is not enabled by default on the
federated API server. To use this feature, the user or the admin deploying the federation control
plane needs to run the federated API server with option `--runtime-config=api/all=true` to
enable all APIs, including alpha APIs. Additionally, the federated HPA only works
when used with CPU utilization metrics.
{{% /capture %}}
{{% capture steps %}}
## Creating a federated HPA
The API for federated HPAs is 100% compatible with the
API for traditional Kubernetes HPA. You can create an HPA by sending
a request to the federation API server.
You can do that with [kubectl](/docs/user-guide/kubectl/) by running:
```shell
cat <<EOF | kubectl --context=federation-cluster create -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF
```
The `--context=federation-cluster` flag tells `kubectl` to submit the
request to the federation API server instead of sending it to a Kubernetes
cluster.
Once a federated HPA is created, the federation control plane partitions and
creates the HPA in all underlying Kubernetes clusters. As of Kubernetes V1.7,
[cluster selectors](/docs/tasks/administer-federation/cluster/#clusterselector-annotation)
can also be used to restrict any federated object, including the HPAs in a subset
of clusters.
You can verify the creation by checking each of the underlying clusters. For example, with a context named `gce-asia-east1a`
configured in your client for your cluster in that zone:
```shell
kubectl --context=gce-asia-east1a get HPA php-apache
```
The HPAs in the underlying clusters match the federated HPA
except in their min and max replicas. The federation control plane ensures that the sum of the max replicas across clusters matches the
max replicas specified on the federated HPA object, and that the sum of the min replicas is greater
than or equal to the minimum specified on the federated HPA object.
{{< note >}}
A particular cluster cannot have a minimum replica sum of 0.
{{< /note >}}
### Spreading HPA min and max replicas in underlying clusters
By default, the max replicas are first spread equally across all the underlying clusters, and the min replicas are then distributed to those clusters that received a max value. This means
that each cluster gets an HPA if the specified max replicas are greater than
the total number of clusters participating in this federation, and some clusters are
skipped if the specified max replicas are less than the total number of clusters participating
in the federation.
For example: if you have 3 registered clusters and you create a federated HPA with
`spec.maxReplicas = 9`, and `spec.minReplicas = 2`, then each HPA in the 3 clusters
will get `spec.maxReplicas=3` and `spec.minReplicas = 1`.
Currently, only the default distribution is available on the federated HPA; in the
future, user preferences could also be specified to control and/or restrict this
distribution.
## Updating a federated HPA
You can update a federated HPA as you would update a Kubernetes
HPA; however, for a federated HPA, you must send the request to
the federation API server instead of sending it to a specific Kubernetes cluster.
The Federation control plane ensures that whenever the federated HPA is
updated, it updates the corresponding HPA in all underlying clusters to
match it.
If your update includes a change in the number of replicas, the federation
control plane changes the number of replicas in the underlying clusters so that
the sums of the max and min replicas continue to satisfy the constraints described
in the previous section.
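For example, a minimal sketch of such an update, raising the federated maximum with `kubectl patch`:
```shell
# The control plane re-partitions the per-cluster max replicas after this change
kubectl --context=federation-cluster patch hpa php-apache \
    -p '{"spec":{"maxReplicas": 15}}'
```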
## Deleting a federated HPA
You can delete a federated HPA as you would delete a Kubernetes
HPA; however, for a federated HPA, you must send the request to
the federation API server instead of to a specific Kubernetes cluster.
{{< note >}}
For the federated resource to be deleted from all underlying clusters, [cascading deletion](/docs/concepts/cluster-administration/federation/#cascading-deletion) should be used.
{{< /note >}}
For example, you can do that using `kubectl` by running:
```shell
kubectl --context=federation-cluster delete HPA php-apache
```
## Alternative ways to use federated HPA
For a federation user, interacting with the federation control plane (or simply "the federation")
is almost identical to interacting with a normal Kubernetes cluster, albeit
with a limited set of federated APIs. Because both Deployments and
HorizontalPodAutoscalers are federated, `kubectl` commands like `kubectl run`
and `kubectl autoscale` work against the federation. Given this, the mechanism described in the
[horizontal pod autoscaler walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
also works when used with federation.
Take care, however, that when
[generating load on a target deployment](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#step-three-increase-load),
you do so against a specific federated cluster (or multiple clusters), not against the federation itself.
## Conclusion
The purpose of the federated HPA is to ensure that workload replicas move to the cluster(s) where
they are needed most, in other words where the load is beyond the expected threshold.
The federated HPA feature achieves this by manipulating the min and max replicas on the
HPAs it creates in the federated clusters. It does not directly monitor the target
object metrics from the federated clusters. It relies on the in-cluster HPA
controllers to monitor the metrics and update the relevant fields. The in-cluster HPA
controller monitors the target pod metrics and updates fields such as desired
replicas (after metrics-based calculations) and current replicas (by observing the
current status of the in-cluster pods). The federated HPA controller, on the other hand,
monitors only the cluster-specific HPA object fields and updates the min and
max replica fields of those in-cluster HPA objects whose replicas are at their thresholds.
For example, if a cluster has both desired replicas and current replicas equal to the max replicas,
and the averaged current CPU utilization is still higher than the target CPU utilization (all of which
are fields on the local HPA object), then the target app in this cluster
needs more replicas, and scaling is currently restricted by the max replicas set on this local
HPA object. In such a scenario, the federated HPA controller scans all clusters and tries to
find clusters which do not have such a condition (meaning the desired replicas are less
than the max, and the current averaged CPU utilization is lower than the threshold). If it finds such
a cluster, it reduces the max replicas on the HPA in that cluster and increases the max replicas
on the HPA in the cluster which needed the replicas.
There are many other similar conditions which the federated HPA controller checks, moving the max
replicas and min replicas around the local HPAs in the federated clusters to eventually ensure that
the replicas move to (or remain in) the cluster(s) which need them.
For more information, see ["federated HPA design proposal"](https://github.com/kubernetes/community/pull/593).
{{% /capture %}}

View File

@ -1,311 +0,0 @@
---
title: Federated Ingress
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This page explains how to use Kubernetes Federated Ingress to deploy
a common HTTP(S) virtual IP load balancer across a federated service running in
multiple Kubernetes clusters. As of Kubernetes v1.4, clusters hosted in Google
Cloud (Google Kubernetes Engine, GCE, or both) are supported. This makes it
easy to deploy a service that reliably serves HTTP(S) traffic
originating from web clients around the globe on a single, static IP
address. Low network latency, high fault tolerance and easy administration are
ensured through intelligent request routing and automatic replica
relocation (using [Federated ReplicaSets](/docs/tasks/administer-federation/replicaset/)).
Clients are automatically routed, via the shortest network path, to
the cluster closest to them with available capacity (despite the fact
that all clients use exactly the same static IP address). The load balancer
automatically checks the health of the pods comprising the service,
and avoids sending requests to unresponsive or slow pods (or entire
unresponsive clusters).
Federated Ingress is released as an alpha feature, and supports Google Cloud Platform (Google Kubernetes Engine,
GCE and hybrid scenarios involving both) in Kubernetes v1.4. Work is under way to support other cloud
providers such as AWS, and other hybrid cloud scenarios (e.g. services
spanning private on-premises as well as public cloud Kubernetes
clusters).
You create Federated Ingresses in much the same way as traditional
[Kubernetes Ingresses](/docs/concepts/services-networking/ingress/): by making an API
call which specifies the desired properties of your logical ingress point. In the
case of Federated Ingress, this API call is directed to the
Federation API endpoint, rather than a Kubernetes cluster API
endpoint. The API for Federated Ingress is 100% compatible with the
API for traditional Kubernetes Ingresses.
Once created, the Federated Ingress automatically:
* Creates matching Kubernetes Ingress objects in every cluster underlying your Cluster Federation
* Ensures that all of these in-cluster ingress objects share the same
logical global L7 (that is, HTTP(S)) load balancer and IP address
* Monitors the health and capacity of the service shards (that is, your pods) behind this ingress in each cluster
* Ensures that all client connections are routed to an appropriate healthy backend service endpoint at all times, even in the event of pod, cluster, availability zone or regional outages
Note that in the case of Google Cloud, the logical L7 load balancer is
not a single physical device (which would present both a single point
of failure, and a single global network routing choke point), but
rather a
[truly global, highly available load balancing managed service](https://cloud.google.com/load-balancing/),
globally reachable via a single, static IP address.
Clients inside your federated Kubernetes clusters (Pods) will be
automatically routed to the cluster-local shard of the Federated Service
backing the Ingress in their cluster if it exists and is healthy, or the closest healthy shard in a
different cluster if it does not. Note that this involves a network
trip to the HTTP(S) load balancer, which resides outside your local
Kubernetes cluster but inside the same GCP region.
{{% /capture %}}
{{% capture prerequisites %}}
This document assumes that you have a running Kubernetes Cluster
Federation installation. If not, then see the
[federation admin guide](/docs/tasks/federation/set-up-cluster-federation-kubefed/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general, and [Ingress](/docs/concepts/services-networking/ingress/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a federated ingress
You can create a federated ingress in any of the usual ways, for example, using kubectl:
``` shell
kubectl --context=federation-cluster create -f myingress.yaml
```
For example ingress YAML configurations, see the [Ingress User Guide](/docs/concepts/services-networking/ingress/).
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation API endpoint, with the appropriate
credentials. If you have not yet configured such a context, see the
[federation admin guide](/docs/admin/federation/) or one of the
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
to find out how to do so.
The Federated Ingress automatically creates
and maintains matching Kubernetes ingresses in all of the clusters
underlying your federation. These cluster-specific ingresses (and
their associated ingress controllers) configure and manage the load
balancing and health checking infrastructure that ensures that traffic
is load balanced to each cluster appropriately.
You can verify this by checking in each of the underlying clusters. For example:
``` shell
kubectl --context=gce-asia-east1a get ingress myingress
NAME        HOSTS     ADDRESS         PORTS     AGE
myingress   *         130.211.5.194   80, 443   1m
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone. The name and
namespace of the underlying ingress automatically matches those of
the Federated Ingress that you created above (and if you happen to
have had ingresses of the same name and namespace already existing in
any of those clusters, they will be automatically adopted by the
Federation and updated to conform with the specification of your
Federated Ingress. Either way, the end result will be the same).
The status of your Federated Ingress automatically reflects the
real-time status of the underlying Kubernetes ingresses. For example:
``` shell
kubectl --context=federation-cluster describe ingress myingress
Name:             myingress
Namespace:        default
Address:          130.211.5.194
TLS:
  tls-secret terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     echoheaders-https:80 (10.152.1.3:8080,10.152.2.4:8080)
Annotations:
  https-target-proxy:       k8s-tps-default-myingress--ff1107f83ed600c0
  target-proxy:             k8s-tp-default-myingress--ff1107f83ed600c0
  url-map:                  k8s-um-default-myingress--ff1107f83ed600c0
  backends:                 {"k8s-be-30301--ff1107f83ed600c0":"Unknown"}
  forwarding-rule:          k8s-fw-default-myingress--ff1107f83ed600c0
  https-forwarding-rule:    k8s-fws-default-myingress--ff1107f83ed600c0
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Type     Reason   Message
  ---------   --------   -----   ----                        -------------   ------   ------   -------
  3m          3m         1       {loadbalancer-controller }                  Normal   ADD      default/myingress
  2m          2m         1       {loadbalancer-controller }                  Normal   CREATE   ip: 130.211.5.194
```
Note that:
* The address of your Federated Ingress
corresponds with the address of all of the
underlying Kubernetes ingresses (once these have been allocated - this
may take up to a few minutes).
* You have not yet provisioned any backend Pods to receive
the network traffic directed to this ingress (that is, 'Service
Endpoints' behind the service backing the Ingress), so the Federated Ingress does not yet consider these to
be healthy shards and will not direct traffic to any of these clusters.
* The federation control system
automatically reconfigures the load balancer controllers in all of the
clusters in your federation to make them consistent, and allows
them to share global load balancers. But this reconfiguration can
only complete successfully if there are no pre-existing Ingresses in
those clusters (this is a safety feature to prevent accidental
breakage of existing ingresses). So, to ensure that your federated
ingresses function correctly, either start with new, empty clusters, or make
sure that you delete (and recreate if necessary) all pre-existing
Ingresses in the clusters comprising your federation.
## Adding backend services and pods
To render the underlying ingress shards healthy, you need to add
backend Pods behind the service upon which the Ingress is based. There are several ways to achieve this, but
the easiest is to create a Federated Service and
Federated ReplicaSet. To
create appropriately labelled pods and services in all the underlying clusters of
your federation:
``` shell
kubectl --context=federation-cluster create -f services/nginx.yaml
```
``` shell
kubectl --context=federation-cluster create -f myreplicaset.yaml
```
Note that in order for your federated ingress to work correctly on
Google Cloud, the node ports of all of the underlying cluster-local
services need to be identical. If you're using a federated service
this is easy to do. Simply pick a node port that is not already
being used in any of your clusters, and add that to the spec of your
federated service. If you do not specify a node port for your
federated service, each cluster will choose its own node port for
its cluster-local shard of the service, and these will probably end
up being different, which is not what you want.
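For illustration, a `services/nginx.yaml` along these lines would pin the node port; it is shown here created inline, and port `30036` is an arbitrary choice, so substitute any port that is free in all of your clusters:
``` shell
cat <<EOF | kubectl --context=federation-cluster create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30036
EOF
```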
You can verify this by checking in each of the underlying clusters. For example:
``` shell
kubectl --context=gce-asia-east1a get services nginx
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)   AGE
nginx     ClusterIP   10.63.250.98   104.199.136.89   80/TCP    9m
```
## Hybrid cloud capabilities
Federations of Kubernetes Clusters can include clusters running in
different cloud providers (for example, Google Cloud, AWS), and on-premises
(for example, on OpenStack). However, in Kubernetes v1.4, Federated Ingress is only
supported across Google Cloud clusters.
## Discovering a federated ingress
Ingress objects (in both plain Kubernetes clusters, and in federations
of clusters) expose one or more IP addresses (via
the `status.loadBalancer.ingress` field) that remain static for the lifetime
of the Ingress object (in future, automatically managed DNS names
might also be added). All clients (whether internal to your cluster,
or on the external network or internet) should connect to one of these IP
or DNS addresses. All client requests are automatically
routed, via the shortest network path, to a healthy pod in the
closest cluster to the origin of the request. So for example, HTTP(S)
requests from internet
users in Europe will be routed directly to the closest cluster in
Europe that has available capacity. If there are no such clusters in
Europe, the request will be routed to the next closest cluster
(typically in the U.S.).
## Handling failures of backend pods and whole clusters
Ingresses are backed by Services, which are typically (but not always)
backed by one or more ReplicaSets. For Federated Ingresses, it is
common practice to use the federated variants of Services and
ReplicaSets for this purpose.
In particular, Federated ReplicaSets ensure that the desired number of
pods are kept running in each cluster, even in the event of node
failures. In the event of entire cluster or availability zone
failures, Federated ReplicaSets automatically place additional
replicas in the other available clusters in the federation to accommodate the
traffic which was previously being served by the now unavailable
cluster. While the Federated ReplicaSet ensures that sufficient replicas are
kept running, the Federated Ingress ensures that user traffic is
automatically redirected away from the failed cluster to other
available clusters.
## Troubleshooting
#### I cannot connect to my cluster federation API.
Check that your:
1. Client (typically `kubectl`) is correctly configured (including API endpoints and login credentials).
2. Cluster Federation API server is running and network-reachable.
See the [federation admin guide](/docs/admin/federation/) to learn
how to bring up a cluster federation correctly (or have your cluster administrator do this for you), and how to correctly configure your client.
#### I can create a Federated Ingress/service/replicaset successfully against the cluster federation API, but no matching ingresses/services/replicasets are created in my underlying clusters.
Check that:
1. Your clusters are correctly registered in the Cluster Federation API. (`kubectl describe clusters`)
2. Your clusters are all 'Active'. This means that the cluster
Federation system was able to connect and authenticate against the
clusters' endpoints. If not, consult the event logs of the federation-controller-manager pod to ascertain what the failure might be (`kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -o name)`).
3. That the login credentials provided to the Cluster Federation API
for the clusters have the correct authorization and quota to create
ingresses/services/replicasets in the relevant namespace in the
clusters. Again you should see associated error messages providing
more detail in the above event log file if this is not the case.
4. Whether any other error is preventing the service creation
operation from succeeding (look for `ingress-controller`,
`service-controller`, or `replicaset-controller`
errors in the output of `kubectl logs federation-controller-manager --namespace federation`).
#### I can create a federated ingress successfully, but request load is not correctly distributed across the underlying clusters.
Check that:
1. The services underlying your federated ingress in each cluster have
identical node ports. See [above](#creating_a_federated_ingress) for further explanation.
2. The load balancer controllers in each of your clusters are of the
correct type ("GLBC") and have been correctly reconfigured by the
federation control plane to share a global GCE load balancer (this
should happen automatically). If they are of the correct type, and
have been correctly reconfigured, the UID data item in the GLBC
configmap in each cluster will be identical across all clusters.
See
[the GLBC docs](https://github.com/kubernetes/ingress/blob/7dcb4ae17d5def23d3e9c878f3146ac6df61b09d/controllers/gce/README.md)
for further details.
If this is not the case, check the logs of your federation
controller manager to determine why this automated reconfiguration
might be failing.
3. No ingresses have been manually created in any of your clusters before the above
reconfiguration of the load balancer controller completed
successfully. Ingresses created before the reconfiguration of
your GLBC will interfere with the behavior of your federated
ingresses created after the reconfiguration (see
[the GLBC docs](https://github.com/kubernetes/ingress/blob/7dcb4ae17d5def23d3e9c878f3146ac6df61b09d/controllers/gce/README.md)
for further information). To remedy this,
delete any ingresses created before the cluster joined the
federation (and had its GLBC reconfigured), and recreate them if
necessary.
{{% /capture %}}
{{% capture whatsnext %}}
* If you need assistance, use one of the [support channels](/docs/tasks/debug-application-cluster/troubleshooting/) to seek assistance.
* For details about use cases that motivated this work, see
[Federation proposal](https://git.k8s.io/community/contributors/design-proposals/multicluster/federation.md).
{{% /capture %}}

View File

@ -1,109 +0,0 @@
---
title: Federated Jobs
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use jobs in the federation control plane.
Jobs in the federation control plane (referred to as "federated jobs" in
this guide) are similar to the traditional [Kubernetes
jobs](/docs/concepts/workloads/controllers/job/), and provide the same functionality.
Creating jobs in the federation control plane ensures that the desired number of
parallelism and completions exist across the registered clusters.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a federated job
The API for federated jobs is fully compatible with the
API for traditional Kubernetes jobs. You can create a job by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f myjob.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the federation API server instead of sending it to a Kubernetes
cluster.
Once a federated job is created, the federation control plane creates
a job in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get job myjob
```
The previous example assumes that you have a context named `gce-asia-east1a`
configured in your client for your cluster in that zone.
The jobs in the underlying clusters match the federated job
except in their parallelism and completions. The federation control plane ensures that the
sums of the parallelism and completions across clusters match the desired parallelism and completions of the
federated job.
### Spreading job tasks in underlying clusters
By default, parallelism and completions are spread equally in all underlying clusters. For example:
if you have 3 registered clusters and you create a federated job with
`spec.parallelism = 9` and `spec.completions = 18`, then each job in the 3 clusters has
`spec.parallelism = 3` and `spec.completions = 6`.
To modify the parallelism and completions in each cluster, you can specify
[ReplicaAllocationPreferences](https://github.com/kubernetes/federation/blob/{{< param "githubbranch" >}}/apis/federation/types.go)
as an annotation with key `federation.kubernetes.io/job-preferences`
on the federated job.
## Updating a federated job
You can update a federated job as you would update a Kubernetes
job; however, for a federated job, you must send the request to
the federation API server instead of sending it to a specific Kubernetes cluster.
The federation control plane ensures that whenever the federated job is
updated, it updates the corresponding job in all underlying clusters to
match it.
If your update includes a change in parallelism or completions, the federation
control plane changes the parallelism and completions in the underlying clusters to
ensure that their sums remain equal to the desired parallelism and completions of the
federated job.
## Deleting a federated job
You can delete a federated job as you would delete a Kubernetes
job; however, for a federated job, you must send the request to
the federation API server instead of sending it to a specific Kubernetes cluster.
For example, with kubectl:
```shell
kubectl --context=federation-cluster delete job myjob
```
{{< note >}}
Deleting a federated job will not delete the
corresponding jobs from underlying clusters.
You must delete the underlying jobs manually.
{{< /note >}}
{{% /capture %}}

View File

@ -1,92 +0,0 @@
---
title: Federated Namespaces
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use Namespaces in the Federation control plane.
Namespaces in the federation control plane (referred to as "federated Namespaces" in
this guide) are very similar to traditional [Kubernetes
Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You are also expected to have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a Federated Namespace
The API for Federated Namespaces is 100% compatible with the
API for traditional Kubernetes Namespaces. You can create a Namespace by sending
a request to the federation apiserver.
You can do that using kubectl by running:
``` shell
kubectl --context=federation-cluster create -f myns.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
Once a federated Namespace is created, the federation control plane will create
a matching Namespace in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get namespaces myns
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone. The name and
spec of the underlying Namespace will match those of
the Federated Namespace that you created above.
## Updating a Federated Namespace
You can update a federated Namespace as you would update a Kubernetes
Namespace; however, you must send the request to the federation apiserver instead of sending it
to a specific Kubernetes cluster.
The federation control plane ensures that whenever the federated Namespace is
updated, it updates the corresponding Namespaces in all underlying clusters to
match it.
## Deleting a Federated Namespace
You can delete a federated Namespace as you would delete a Kubernetes
Namespace; however, you must send the request to the federation apiserver instead of sending it
to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete ns myns
```
As in Kubernetes, deleting a federated Namespace will delete all resources in that
Namespace from the federation control plane.
{{< note >}}
At this point, deleting a federated Namespace will not delete the corresponding Namespace, or resources in those Namespaces, from underlying clusters. Users must delete them manually. We intend to fix this in the future.
{{< /note >}}
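A sketch of that manual cleanup, run once per member cluster; the context name is taken from the example above:
```shell
kubectl --context=gce-asia-east1a delete namespace myns
```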
{{% /capture %}}

View File

@ -1,132 +0,0 @@
---
title: Federated ReplicaSets
content_template: templates/task
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use ReplicaSets in the Federation control plane.
ReplicaSets in the federation control plane (referred to as "federated ReplicaSets" in
this guide) are very similar to the traditional [Kubernetes
ReplicaSets](/docs/concepts/workloads/controllers/replicaset/), and provide the same functionality.
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.
{{% /capture %}}
{{% capture prerequisites %}}
* {{< include "federated-task-tutorial-prereqs.md" >}}
* You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) in particular.
{{% /capture %}}
{{% capture steps %}}
## Creating a Federated ReplicaSet
The API for Federated ReplicaSet is 100% compatible with the
API for traditional Kubernetes ReplicaSet. You can create a ReplicaSet by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f myrs.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
Once a federated ReplicaSet is created, the federation control plane will create
a ReplicaSet in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get rs myrs
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
The ReplicaSets in the underlying clusters will match the federation ReplicaSet
except in the number of replicas. The federation control plane ensures that the
sum of the replicas in all clusters matches the desired number of replicas in the
federation ReplicaSet.
### Spreading Replicas in Underlying Clusters
By default, replicas are spread equally in all the underlying clusters. For example:
if you have 3 registered clusters and you create a federated ReplicaSet with
`spec.replicas = 9`, then each ReplicaSet in the 3 clusters will have
`spec.replicas=3`.
To modify the number of replicas in each cluster, you can add an annotation with
key `federation.kubernetes.io/replica-set-preferences` to the federated ReplicaSet.
The value of the annotation is serialized JSON containing the fields shown in
the following example:
```
{
  "rebalance": true,
  "clusters": {
    "foo": {
      "minReplicas": 10,
      "maxReplicas": 50,
      "weight": 100
    },
    "bar": {
      "minReplicas": 10,
      "maxReplicas": 100,
      "weight": 200
    }
  }
}
```
The `rebalance` boolean field specifies whether replicas already scheduled and running
may be moved in order to match current state to the specified preferences.
The `clusters` object field contains a map where users can specify the constraints
for replica placement across the clusters (`foo` and `bar` in the example).
For each cluster, you can specify the minimum number of replicas that should be
assigned to it (default is zero), the maximum number of replicas the cluster can
accept (default is unbounded), and a weight expressing the relative preference for
placing additional replicas in that cluster.
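For illustration, the preferences above could be attached with `kubectl annotate`; in practice you can also set the annotation directly in the ReplicaSet manifest:
``` shell
kubectl --context=federation-cluster annotate rs myrs \
    federation.kubernetes.io/replica-set-preferences='{"rebalance": true, "clusters": {"foo": {"minReplicas": 10, "maxReplicas": 50, "weight": 100}, "bar": {"minReplicas": 10, "maxReplicas": 100, "weight": 200}}}'
```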
## Updating a Federated ReplicaSet
You can update a federated ReplicaSet as you would update a Kubernetes
ReplicaSet; however, for a federated ReplicaSet, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The Federation control plane ensures that whenever the federated ReplicaSet is
updated, it updates the corresponding ReplicaSet in all underlying clusters to
match it.
If your update includes a change in number of replicas, the federation
control plane will change the number of replicas in underlying clusters to
ensure that their sum remains equal to the number of desired replicas in
federated ReplicaSet.
## Deleting a Federated ReplicaSet
You can delete a federated ReplicaSet as you would delete a Kubernetes
ReplicaSet; however, for a federated ReplicaSet, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete rs myrs
```
{{< note >}}
At this point, deleting a federated ReplicaSet will not delete the corresponding ReplicaSets from underlying clusters. You must delete the underlying ReplicaSets manually. We intend to fix this in the future.
{{< /note >}}
{{% /capture %}}

View File

@ -1,93 +0,0 @@
---
title: Federated Secrets
content_template: templates/concept
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use secrets in the Federation control plane.
Secrets in the federation control plane (referred to as "federated secrets" in
this guide) are very similar to traditional [Kubernetes
Secrets](/docs/concepts/configuration/secret/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.
{{% /capture %}}
{{% capture body %}}
## Prerequisites
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/admin/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general and [Secrets](/docs/concepts/configuration/secret/) in particular.
## Creating a Federated Secret
The API for Federated Secret is 100% compatible with the
API for traditional Kubernetes Secret. You can create a secret by sending
a request to the federation apiserver.
You can do that using [kubectl](/docs/user-guide/kubectl/) by running:
``` shell
kubectl --context=federation-cluster create -f mysecret.yaml
```
The `--context=federation-cluster` flag tells kubectl to submit the
request to the Federation apiserver instead of sending it to a Kubernetes
cluster.
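Alternatively, if you don't have a `mysecret.yaml` handy, `kubectl create secret` can build one for you; the key and value are illustrative, and kubectl handles the base64 encoding:
``` shell
kubectl --context=federation-cluster create secret generic mysecret \
    --from-literal=password=illustrative-value
```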
Once a federated secret is created, the federation control plane will create
a matching secret in all underlying Kubernetes clusters.
You can verify this by checking each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get secret mysecret
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone.
These secrets in underlying clusters will match the federated secret.
## Updating a Federated Secret
You can update a federated secret as you would update a Kubernetes
secret; however, for a federated secret, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
The Federation control plane ensures that whenever the federated secret is
updated, it updates the corresponding secrets in all underlying clusters to
match it.
## Deleting a Federated Secret
You can delete a federated secret as you would delete a Kubernetes
secret; however, for a federated secret, you must send the request to
the federation apiserver instead of sending it to a specific Kubernetes cluster.
For example, you can do that using kubectl by running:
```shell
kubectl --context=federation-cluster delete secret mysecret
```
{{< note >}}
At this point, deleting a federated secret will not delete the corresponding secrets from underlying clusters. You must delete the underlying secrets manually. We intend to fix this in the future.
{{< /note >}}
{{% /capture %}}

View File

@ -1,416 +0,0 @@
---
title: Cross-cluster Service Discovery using Federated Services
reviewers:
- bprashanth
- quinton-hoole
content_template: templates/task
weight: 140
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This guide explains how to use Kubernetes Federated Services to deploy
a common Service across multiple Kubernetes clusters. This makes it
easy to achieve cross-cluster service discovery and availability zone
fault tolerance for your Kubernetes applications.
Federated Services are created in much the same way as traditional
[Kubernetes Services](/docs/concepts/services-networking/service/) by making an API
call which specifies the desired properties of your service. In the
case of Federated Services, this API call is directed to the
Federation API endpoint, rather than a Kubernetes cluster API
endpoint. The API for Federated Services is 100% compatible with the
API for traditional Kubernetes Services.
Once created, the Federated Service automatically:
1. Creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
2. Monitors the health of those service "shards" (and the clusters in which they reside), and
3. Manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
availability zone or regional outages.
Clients inside your federated Kubernetes clusters (that is Pods) will
automatically find the local shard of the Federated Service in their
cluster if it exists and is healthy, or the closest healthy shard in a
different cluster if it does not.
{{% /capture %}}
{{< toc >}}
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{% /capture %}}
{{% capture steps %}}
## Prerequisites
This guide assumes that you have a running Kubernetes Cluster
Federation installation. If not, then head over to the
[federation admin guide](/docs/admin/federation/) to learn how to
bring up a cluster federation (or have your cluster administrator do
this for you). Other tutorials, for example
[this one](https://github.com/kelseyhightower/kubernetes-cluster-federation)
by Kelsey Hightower, are also available to help you.
You should also have a basic
[working knowledge of Kubernetes](/docs/tutorials/kubernetes-basics/) in
general, and [Services](/docs/concepts/services-networking/service/) in particular.
## Hybrid cloud capabilities
Federations of Kubernetes Clusters can include clusters running in
different cloud providers (such as Google Cloud or AWS), and on-premises
(such as on OpenStack). Simply create all of the clusters that you
require, in the appropriate cloud providers and/or locations, and
register each cluster's API endpoint and credentials with your
Federation API Server (See the
[federation admin guide](/docs/admin/federation/) for details).
Thereafter, your applications and services can span different clusters
and cloud providers as described in more detail below.
## Creating a federated service
This is done in the usual way, for example:
``` shell
kubectl --context=federation-cluster create -f services/nginx.yaml
```
The '--context=federation-cluster' flag tells kubectl to submit the
request to the Federation API endpoint, with the appropriate
credentials. If you have not yet configured such a context, visit the
[federation admin guide](/docs/admin/federation/) or one of the
[administration tutorials](https://github.com/kelseyhightower/kubernetes-cluster-federation)
to find out how to do so.
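The file `services/nginx.yaml` itself is not reproduced in this guide; a minimal manifest consistent with the `run=nginx` backend Pods and the `describe` output shown later would look roughly like this (illustrative only):
```shell
# Hypothetical services/nginx.yaml, matching the run=nginx Pods created later in this guide.
cat <<EOF > services/nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
EOF
```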
As described above, the Federated Service will automatically create
and maintain matching Kubernetes services in all of the clusters
underlying your federation.
You can verify this by checking in each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get services nginx
```
```
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)   AGE
nginx     ClusterIP   10.63.250.98   104.199.136.89   80/TCP    9m
```
The above assumes that you have a context named 'gce-asia-east1a'
configured in your client for your cluster in that zone. The name and
namespace of the underlying services will automatically match those of
the Federated Service that you created above (and if you happen to
have had services of the same name and namespace already existing in
any of those clusters, they will be automatically adopted by the
Federation and updated to conform with the specification of your
Federated Service - either way, the end result will be the same).
The status of your Federated Service will automatically reflect the
real-time status of the underlying Kubernetes services, for example:
``` shell
kubectl --context=federation-cluster describe services nginx
```
```
Name: nginx
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: LoadBalancer
IP: 10.63.250.98
LoadBalancer Ingress: 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89, ...
Port: http 80/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
```
{{< note >}}
The 'LoadBalancer Ingress' addresses of your Federated Service
correspond with the 'LoadBalancer Ingress' addresses of all of the
underlying Kubernetes services (once these have been allocated - this
may take a few seconds). For inter-cluster and inter-cloud-provider
networking between service shards to work correctly, your services
need to have an externally visible IP address. [Service Type:
LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
is typically used for this, although other options
(for example [External IPs](/docs/concepts/services-networking/service/#external-ips)) exist.
{{< /note >}}
Note also that we have not yet provisioned any backend Pods to receive
the network traffic directed to these addresses (that is 'Service
Endpoints'), so the Federated Service does not yet consider these to
be healthy service shards, and has accordingly not yet added their
addresses to the DNS records for this Federated Service (more on this
aspect later).
## Adding backend pods
To render the underlying service shards healthy, we need to add
backend Pods behind them. This is currently done directly against the
API endpoints of the underlying clusters (although in the future the
Federation server will be able to do all this for you with a single
command, to save you the trouble). For example, to create backend Pods
in 13 underlying clusters:
``` shell
for CLUSTER in asia-east1-c asia-east1-a asia-east1-b \
europe-west1-d europe-west1-c europe-west1-b \
us-central1-f us-central1-a us-central1-b us-central1-c \
us-east1-d us-east1-c us-east1-b
do
kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
done
```
Note that `kubectl run` automatically adds the `run=nginx` labels required to associate the backend pods with their services.
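Once the backend Pods are running, you can spot-check in any one cluster that the local service shard has picked up endpoints, for example using one of the contexts from the loop above:
```shell
# The ENDPOINTS column should list the backend Pod IP(s) once the Pod is ready.
kubectl --context=us-central1-a get endpoints nginx
```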
## Verifying public DNS records
Once the above Pods have successfully started and have begun listening
for connections, Kubernetes will report them as healthy endpoints of
the service in that cluster (through automatic health checks). The Cluster
Federation will in turn consider each of these
service 'shards' to be healthy, and put them into service by
automatically configuring corresponding public DNS records. You can
use your preferred interface to your configured DNS provider to verify
this. For example, if your Federation is configured to use Google
Cloud DNS, and a managed DNS domain 'example.com':
``` shell
gcloud dns managed-zones describe example-dot-com
```
```
creationTime: '2016-06-26T18:18:39.229Z'
description: Example domain for Kubernetes Cluster Federation
dnsName: example.com.
id: '3229332181334243121'
kind: dns#managedZone
name: example-dot-com
nameServers:
- ns-cloud-a1.googledomains.com.
- ns-cloud-a2.googledomains.com.
- ns-cloud-a3.googledomains.com.
- ns-cloud-a4.googledomains.com.
```
```shell
gcloud dns record-sets list --zone example-dot-com
```
```
NAME TYPE TTL DATA
example.com. NS 21600 ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
example.com. SOA 21600 ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
nginx.mynamespace.myfederation.svc.example.com. A 180 104.197.246.190, 130.211.57.243, 104.196.14.231, 104.199.136.89,...
nginx.mynamespace.myfederation.svc.us-central1-a.example.com. A 180 104.197.247.191
nginx.mynamespace.myfederation.svc.us-central1-b.example.com. A 180 104.197.244.180
nginx.mynamespace.myfederation.svc.us-central1-c.example.com. A 180 104.197.245.170
nginx.mynamespace.myfederation.svc.us-central1-f.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.us-central1.example.com.
nginx.mynamespace.myfederation.svc.us-central1.example.com. A 180 104.197.247.191, 104.197.244.180, 104.197.245.170
nginx.mynamespace.myfederation.svc.asia-east1-a.example.com. A 180 130.211.57.243
nginx.mynamespace.myfederation.svc.asia-east1-b.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.asia-east1.example.com.
nginx.mynamespace.myfederation.svc.asia-east1-c.example.com. A 180 130.211.56.221
nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.211.57.243, 130.211.56.221
nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com.
nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
... etc.
```
{{< note >}}
If your Federation is configured to use AWS Route53, you can use one of the equivalent AWS tools, for example:
``` shell
aws route53 list-hosted-zones
```
and
``` shell
aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
```
{{< /note >}}
Whatever DNS provider you use, any DNS query tool (for example 'dig'
or 'nslookup') will of course also allow you to see the records
created by the Federation for you. Note that you should either point
these tools directly at your DNS provider (such as `dig
@ns-cloud-e1.googledomains.com...`) or expect delays in the order of
your configured TTL (180 seconds, by default) before seeing updates,
due to caching by intermediate DNS servers.
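For example, a direct query against one of the zone's nameservers for the global record shown above might look like this:
```shell
# Ask the authoritative nameserver directly, bypassing intermediate caches.
dig @ns-cloud-e1.googledomains.com nginx.mynamespace.myfederation.svc.example.com. A +noall +answer
```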
### Some notes about the above example
1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
3. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (that is Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
4. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses (' nginx.mynamespace.myfederation.svc.example.com.').
The above set of DNS records is automatically kept in sync with the
current state of health of all service shards globally by the
Federated Service system. DNS resolver libraries (which are invoked by
all clients) automatically traverse the hierarchy of 'CNAME' and 'A'
records to return the correct set of healthy IP addresses. Clients can
then select any one of the returned addresses to initiate a network
connection (and fail over automatically to one of the other equivalent
addresses if required).
## Discovering a federated service
### From pods inside your federated clusters
By default, Kubernetes clusters come pre-configured with a
cluster-local DNS server ('KubeDNS'), as well as an intelligently
constructed DNS search path which together ensure that DNS queries
like "myservice", "myservice.mynamespace",
"bobsservice.othernamespace" etc issued by your software running
inside Pods are automatically expanded and resolved correctly to the
appropriate service IP of services running in the local cluster.
With the introduction of Federated Services and Cross-Cluster Service
Discovery, this concept is extended to cover Kubernetes services
running in any other cluster across your Cluster Federation, globally.
To take advantage of this extended range, you use a slightly different
DNS name of the form ```"<servicename>.<namespace>.<federationname>"```
to resolve Federated Services. For example, you might use
`myservice.mynamespace.myfederation`. Using a different DNS name also
avoids having your existing applications accidentally traverse
cross-zone or cross-region networks, and incur possibly unwanted
network charges or latency, without explicitly opting in to this
behavior.
So, using our NGINX example service above, and the Federated Service
DNS name form just described, let's consider an example: A Pod in a
cluster in the `us-central1-f` availability zone needs to contact our
NGINX service. Rather than use the service's traditional cluster-local
DNS name (`"nginx.mynamespace"`, which is automatically expanded
to `"nginx.mynamespace.svc.cluster.local"`) it can now use the
service's Federated DNS name, which is
`"nginx.mynamespace.myfederation"`. This will be automatically
expanded and resolved to the closest healthy shard of my NGINX
service, wherever in the world that may be. If a healthy shard exists
in the local cluster, that service's cluster-local (typically
10.x.y.z) IP address will be returned (by the cluster-local KubeDNS).
This is almost exactly equivalent to non-federated service resolution
(almost because KubeDNS actually returns both a CNAME and an A record
for local federated services, but applications will be oblivious
to this minor technical difference).
But if the service does not exist in the local cluster (or it exists
but has no healthy backend pods), the DNS query is automatically
expanded to ```"nginx.mynamespace.myfederation.svc.us-central1-f.example.com"```
(that is, logically "find the external IP of one of the shards closest to
my availability zone"). This expansion is performed automatically by
KubeDNS, which returns the associated CNAME record. This results in
automatic traversal of the hierarchy of DNS records in the above
example, and ends up at one of the external IPs of the Federated
Service in the local us-central1 region (that is 104.197.247.191,
104.197.244.180 or 104.197.245.170).
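To see this resolution in action from inside one of your clusters, you could run a throwaway Pod and look the name up (the Pod name and image here are only an example):
```shell
# Start a temporary Pod in the us-central1-f cluster and resolve the federated name.
kubectl --context=us-central1-f run -i -t dnstest --image=busybox --restart=Never --rm -- \
  nslookup nginx.mynamespace.myfederation
```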
It is of course possible to explicitly target service shards in
availability zones and regions other than the ones local to a Pod by
specifying the appropriate DNS names explicitly, and not relying on
automatic DNS expansion. For example,
"nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
resolve to all of the currently healthy service shards in Europe, even
if the Pod issuing the lookup is located in the U.S., and irrespective
of whether or not there are healthy shards of the service in the U.S.
This is useful for remote monitoring and other similar applications.
### From other clients outside your federated clusters
Much of the above discussion applies equally to external clients,
except that the automatic DNS expansion described is no longer
possible. So external clients need to specify one of the fully
qualified DNS names of the Federated Service, be that a zonal,
regional or global name. For convenience reasons, it is often a good
idea to manually configure additional static CNAME records in your
service, for example:
``` shell
eu.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.europe-west1.example.com.
us.nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.us-central1.example.com.
nginx.acme.com CNAME nginx.mynamespace.myfederation.svc.example.com.
```
That way your clients can always use the short form on the left, and
always be automatically routed to the closest healthy shard on their
home continent. All of the required failover is handled for you
automatically by Kubernetes Cluster Federation. Future releases will
improve upon this even further.
## Handling failures of backend pods and whole clusters
Standard Kubernetes service cluster IPs already ensure that
non-responsive individual Pod endpoints are automatically taken out of
service with low latency (a few seconds). In addition, as alluded to
above, the Kubernetes Cluster Federation system automatically monitors
the health of clusters and the endpoints behind all of the shards of
your Federated Service, taking shards in and out of service as
required (for example, when all of the endpoints behind a service, or perhaps
the entire cluster or availability zone go down, or conversely recover
from an outage). Due to the latency inherent in DNS caching (the cache
timeout, or TTL for Federated Service DNS records is configured to 3
minutes, by default, but can be adjusted), it may take up to that long
for all clients to completely fail over to an alternative cluster in
the case of catastrophic failure. However, given the number of
discrete IP addresses which can be returned for each regional service
endpoint (such as us-central1 above, which has three alternatives),
many clients will fail over automatically to one of the alternative
IPs in less time than that, given appropriate configuration.
{{% /capture %}}
{{% capture discussion %}}
## Troubleshooting
### I cannot connect to my cluster federation API
Check that:
1. Your client (typically kubectl) is correctly configured (including API endpoints and login credentials).
2. Your Cluster Federation API server is running and network-reachable.
See the [federation admin guide](/docs/admin/federation/) to learn
how to bring up a cluster federation correctly (or have your cluster administrator do this for you), and how to correctly configure your client.
### I can create a federated service successfully against the cluster federation API, but no matching services are created in my underlying clusters
Check that:
1. Your clusters are correctly registered in the Cluster Federation API (`kubectl describe clusters`).
2. Your clusters are all 'Active'. This means that the cluster Federation system was able to connect and authenticate against the clusters' endpoints. If not, consult the logs of the federation-controller-manager pod to ascertain what the failure might be.
```
kubectl --namespace=federation logs $(kubectl get pods --namespace=federation -l module=federation-controller-manager -o name)
```
3. The login credentials provided to the Cluster Federation API for the clusters have the correct authorization and quota to create services in the relevant namespace in the clusters. Again, you should see associated error messages providing more detail in the above log file if this is not the case.
4. No other error is preventing the service creation operation from succeeding (look for `service-controller` errors in the output of `kubectl logs federation-controller-manager --namespace federation`).
### I can create a federated service successfully, but no matching DNS records are created in my DNS provider
Check that:
1. Your federation name, DNS provider, and DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`).
3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing).
### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names
Check that:
1. The DNS registrar that manages your federation DNS domain has been correctly configured to point to your configured DNS provider's nameservers. See for example [Google Domains Documentation](https://support.google.com/domains/answer/3290309?hl=en&ref_topic=3251230) and [Google Cloud DNS Documentation](https://cloud.google.com/dns/update-name-servers), or equivalent guidance from your domain registrar and DNS provider.
### This troubleshooting guide did not help me solve my problem
1. Please use one of our [support channels](/docs/tasks/debug-application-cluster/troubleshooting/) to seek assistance.
## For more information
* [Federation proposal](https://git.k8s.io/community/contributors/design-proposals/multicluster/federation.md) details use cases that motivated this work.
{{% /capture %}}


@ -1,564 +0,0 @@
---
title: Set up Cluster Federation with Kubefed
reviewers:
- madhusudancs
content_template: templates/task
weight: 125
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
Kubernetes version 1.5 and above includes a new command line tool called
[`kubefed`](/docs/admin/kubefed/) to help you administrate your federated
clusters. `kubefed` helps you to deploy a new Kubernetes cluster federation
control plane, and to add clusters to or remove clusters from an existing
federation control plane.
This guide explains how to administer a Kubernetes Cluster Federation
using `kubefed`.
{{< note >}}
`kubefed` is a beta feature in Kubernetes 1.6.
{{< /note >}}
{{% /capture %}}
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{% /capture %}}
{{% capture steps %}}
## Prerequisites
This guide assumes that you have a running Kubernetes cluster. Please
see one of the [getting started](/docs/setup/) guides
for installation instructions for your platform.
## Getting `kubefed`
Download the client tarball corresponding to the release you need and
extract the binaries from the tarball:
{{< note >}}
Until Kubernetes version `1.8.x` the federation project was
maintained as part of the [core kubernetes repo](https://github.com/kubernetes/kubernetes).
Between Kubernetes releases `1.8` and `1.9`, the federation project moved into
a separate [federation repo](https://github.com/kubernetes/federation), where it is
now maintained. Consequently, the federation release information is available on the
[release page](https://github.com/kubernetes/federation/releases).
{{< /note >}}
### For Kubernetes versions 1.8.x and earlier:
```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/${RELEASE-VERSION}/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
```
{{< note >}}
The `RELEASE-VERSION` variable should either be set to or replaced with the actual version needed.
{{< /note >}}
Copy the extracted binary to one of the directories in your `$PATH`
and set the executable permission on the binary.
```shell
sudo cp kubernetes/client/bin/kubefed /usr/local/bin
sudo chmod +x /usr/local/bin/kubefed
```
### For Kubernetes versions 1.9.x and above:
```shell
curl -LO https://storage.cloud.google.com/kubernetes-federation-release/release/${RELEASE-VERSION}/federation-client-linux-amd64.tar.gz
tar -xzvf federation-client-linux-amd64.tar.gz
```
{{< note >}}
The `RELEASE-VERSION` variable should be replaced with one of the release versions available at [federation release page](https://github.com/kubernetes/federation/releases).
{{< /note >}}
Copy the extracted binary to one of the directories in your `$PATH`
and set the executable permission on the binary.
```shell
sudo cp federation/client/bin/kubefed /usr/local/bin
sudo chmod +x /usr/local/bin/kubefed
```
### Install kubectl
You can install a matching version of kubectl using the instructions on
the [kubectl install page](/docs/tasks/tools/install-kubectl/).
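Once installed, you can confirm that the client is on your `PATH` by running:
```shell
kubectl version --client
```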
## Choosing a host cluster
You'll need to choose one of your Kubernetes clusters to be the
*host cluster*. The host cluster hosts the components that make up
your federation control plane. Ensure that you have a `kubeconfig`
entry in your local `kubeconfig` that corresponds to the host cluster.
You can verify that you have the required `kubeconfig` entry by
running:
```shell
kubectl config get-contexts
```
The output should contain an entry corresponding to your host cluster,
similar to the following:
```
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1
```
You'll need to provide the `kubeconfig` context (called name in the
entry above) for your host cluster when you deploy your federation
control plane.
## Deploying a federation control plane
To deploy a federation control plane on your host cluster, run the
[`kubefed init`](/docs/admin/kubefed_init/) command. When you use
`kubefed init`, you must provide the following:
* Federation name
* `--host-cluster-context`, the `kubeconfig` context for the host cluster
* `--dns-provider`, one of `google-clouddns`, `aws-route53`, or `coredns`
* `--dns-zone-name`, a domain name suffix for your federated services
If your host cluster is running in a non-cloud environment or an
environment that doesn't support common cloud primitives such as
load balancers, you might need additional flags. Please see the
[on-premises host clusters](#on-premises-host-clusters) section below.
The following example command deploys a federation control plane with
the name `fellowship`, a host cluster context `rivendell`, and the
domain suffix `example.com.`:
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com."
```
The domain suffix specified in `--dns-zone-name` must be an existing
domain that you control, and that is programmable by your DNS provider.
It must also end with a trailing dot.
Once the federation control plane is initialized, query the namespaces:
```shell
kubectl get namespace --context=fellowship
```
If you do not see the `default` namespace listed (this is due to a
[bug](https://github.com/kubernetes/kubernetes/issues/33292)), create it
yourself with the following command:
```shell
kubectl create namespace default --context=fellowship
```
The machines in your host cluster must have the appropriate permissions
to program the DNS service that you are using. For example, if your
cluster is running on Google Compute Engine, you must enable the
Google Cloud DNS API for your project.
The machines in Google Kubernetes Engine clusters are created
without the Google Cloud DNS API scope by default. If you want to use a
Google Kubernetes Engine cluster as a Federation host, you must create it using the `gcloud`
command with the appropriate value in the `--scopes` field. You cannot
modify a Google Kubernetes Engine cluster directly to add this scope, but you can create a
new node pool for your cluster and delete the old one.
{{< note >}}
This will cause pods in the cluster to be rescheduled.
{{< /note >}}
To add the new node pool, run:
```shell
scopes="$(gcloud container node-pools describe --cluster=gke-cluster default-pool --format='value[delimiter=","](config.oauthScopes)')"
gcloud container node-pools create new-np \
--cluster=gke-cluster \
--scopes="${scopes},https://www.googleapis.com/auth/ndev.clouddns.readwrite"
```
To delete the old node pool, run:
```shell
gcloud container node-pools delete default-pool --cluster gke-cluster
```
`kubefed init` sets up the federation control plane in the host
cluster and also adds an entry for the federation API server in your
local kubeconfig.
{{< note >}}
In the beta release of Kubernetes 1.6, `kubefed init` does not automatically set the current context to the
newly deployed federation. You can set the current context manually by running:
```shell
kubectl config use-context fellowship
```
where `fellowship` is the name of your federation.
{{< /note >}}
### Basic and token authentication support
`kubefed init` by default only generates TLS certificates and keys
to authenticate with the federation API server and writes them to
your local kubeconfig file. If you wish to enable basic authentication
or token authentication for debugging purposes, you can enable them by
passing the `--apiserver-enable-basic-auth` flag or the
`--apiserver-enable-token-auth` flag.
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com." \
--apiserver-enable-basic-auth=true \
--apiserver-enable-token-auth=true
```
### Passing command line arguments to federation components
`kubefed init` bootstraps a federation control plane with default
arguments for the federation API server and federation controller manager.
Some of these arguments are derived from `kubefed init`'s flags.
However, you can override these command line arguments by passing
them via the appropriate override flags.
You can override the federation API server arguments by passing them
to `--apiserver-arg-overrides` and override the federation controller
manager arguments by passing them to
`--controllermanager-arg-overrides`.
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com." \
--apiserver-arg-overrides="--anonymous-auth=false,--v=4" \
--controllermanager-arg-overrides="--controllers=services=false"
```
### Configuring a DNS provider
The Federated service controller programs a DNS provider to expose
federated services via DNS names. Certain cloud providers
automatically provide the configuration required to program the
DNS provider if the host cluster's cloud provider is the same as the DNS
provider. In all other cases, you have to provide the DNS provider
configuration to your federation controller manager, which will in turn
be passed to the federated service controller. You can provide this
configuration to federation controller manager by storing it in a file
and passing the file's local filesystem path to `kubefed init`'s
`--dns-provider-config` flag. For example, save the config below in
`$HOME/coredns-provider.conf`.
```ini
[Global]
etcd-endpoints = http://etcd-cluster.ns:2379
zones = example.com.
```
And then pass this file to `kubefed init`:
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="coredns" \
--dns-zone-name="example.com." \
--dns-provider-config="$HOME/coredns-provider.conf"
```
### On-premises host clusters
#### API server service type
`kubefed init` exposes the federation API server as a Kubernetes
[service](/docs/concepts/services-networking/service/) on the host cluster. By default,
this service is exposed as a
[load balanced service](/docs/concepts/services-networking/service/#loadbalancer).
Most on-premises and bare-metal environments, as well as some cloud
environments, lack support for load balanced services. `kubefed init`
allows exposing the federation API server as a
[`NodePort` service](/docs/concepts/services-networking/service/#nodeport) on
such environments. This can be accomplished by passing
the `--api-server-service-type=NodePort` flag. You can also specify
the preferred address to advertise the federation API server by
passing the `--api-server-advertise-address=<IP-address>`
flag. Otherwise, one of the host cluster's node addresses is chosen as
the default.
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com." \
--api-server-service-type="NodePort" \
--api-server-advertise-address="10.0.10.20"
```
#### Provisioning storage for etcd
The federation control plane stores its state in
[`etcd`](https://coreos.com/etcd/docs/latest/).
[`etcd`](https://coreos.com/etcd/docs/latest/) data must be stored in
a persistent storage volume to ensure correct operation across
federation control plane restarts. On host clusters that support
[dynamic provisioning of storage volumes](/docs/concepts/storage/persistent-volumes/#dynamic),
`kubefed init` dynamically provisions a
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes)
and binds it to a
[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
to store [`etcd`](https://coreos.com/etcd/docs/latest/) data. If your
host cluster doesn't support dynamic provisioning, you can also
statically provision a
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes).
`kubefed init` creates a
[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
that has the following configuration:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.alpha.kubernetes.io/storage-class: "yes"
labels:
app: federated-cluster
name: fellowship-federation-apiserver-etcd-claim
namespace: federation-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
To statically provision a
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes),
you must ensure that the
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/#persistent-volumes)
that you create has the matching storage class, access mode and
at least as much capacity as the requested
[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
Alternatively, you can disable persistent storage completely
by passing `--etcd-persistent-storage=false` to `kubefed init`.
However, we do not recommend this, because your federation control
plane cannot survive restarts in this mode.
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="google-clouddns" \
--dns-zone-name="example.com." \
--etcd-persistent-storage=false
```
`kubefed init` still doesn't support attaching an existing
[`PersistentVolumeClaim`](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
to the federation control plane that it bootstraps. We are planning to
support this in a future version of `kubefed`.
#### CoreDNS support
Federated services now support [CoreDNS](https://coredns.io/) as one
of the DNS providers. If you are running your clusters and federation
in an environment that does not have access to cloud-based DNS
providers, then you can run your own [CoreDNS](https://coredns.io/)
instance and publish the federated service DNS names to that server.
You can configure your federation to use
[CoreDNS](https://coredns.io/), by passing appropriate values to
`kubefed init`'s `--dns-provider` and `--dns-provider-config` flags.
```shell
kubefed init fellowship \
--host-cluster-context=rivendell \
--dns-provider="coredns" \
--dns-zone-name="example.com." \
--dns-provider-config="$HOME/coredns-provider.conf"
```
For more information see
[Setting up CoreDNS as DNS provider for Cluster Federation](/docs/tasks/federation/set-up-coredns-provider-federation/).
#### AWS Route53 support
It is possible to utilize AWS Route53 as a cloud DNS provider when the
federation controller-manager is run on-premises. The controller-manager
Deployment must be configured with AWS credentials since it cannot implicitly
gather them from a VM running on AWS.
Currently, `kubefed init` does not read AWS Route53 credentials from the
`--dns-provider-config` flag, so a patch must be applied.
Specify AWS Route53 as your DNS provider when initializing your on-premises
federation controller-manager by passing the flag `--dns-provider="aws-route53"`
to `kubefed init`.
Create a patch file with your AWS credentials:
```yaml
spec:
template:
spec:
containers:
- name: controller-manager
env:
- name: AWS_ACCESS_KEY_ID
value: "ABCDEFG1234567890"
- name: AWS_SECRET_ACCESS_KEY
value: "ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"
```
Patch the Deployment:
```shell
kubectl -n federation-system patch deployment controller-manager --patch "$(cat <patch-file-name>.yml)"
```
Where `<patch-file-name>` is the name of the file you created above.
## Adding a cluster to a federation
After you've deployed a federation control plane, you'll need to make that control plane aware of the clusters it should manage.
To join clusters into the federation:
1. Change the context:
```shell
kubectl config use-context fellowship
```
1. If you are using a managed cluster service, allow the service to access the cluster. To do this, create a `clusterrolebinding` for the account associated with your cluster service:
```shell
kubectl create clusterrolebinding <your_user>-cluster-admin-binding --clusterrole=cluster-admin --user=<your_user>@example.org --context=<joining_cluster_context>
```
1. Join the cluster to the federation, using `kubefed join`, and make sure you provide the following:
* The name of the cluster that you are joining to the federation
* `--host-cluster-context`, the kubeconfig context for the host cluster
For example, this command adds the cluster `gondor` to the federation running on host cluster `rivendell`:
```shell
kubefed join gondor --host-cluster-context=rivendell
```
A new context has now been added to your kubeconfig named `fellowship` (after the name of your federation).
{{< note >}}
The name that you provide to the `join` command is used as the joining cluster's identity in federation. This name should adhere to the rules described in the [identifiers doc](/docs/concepts/overview/working-with-objects/names/). If the context
corresponding to your joining cluster conforms to these rules, you can use the same name in the join command. Otherwise, you must choose a different name for your cluster's identity.
{{< /note >}}
### Naming rules and customization
The cluster name you supply to `kubefed join` must be a valid
[RFC 1035](https://www.ietf.org/rfc/rfc1035.txt) label; the naming rules
are enumerated in the [Identifiers doc](/docs/concepts/overview/working-with-objects/names/).
Furthermore, the federation control plane requires credentials of the
joined clusters to operate on them. These credentials are obtained
from the local kubeconfig. `kubefed join` uses the cluster name
specified as the argument to look for the cluster's context in the
local kubeconfig. If it fails to find a matching context, it exits
with an error.
This might cause issues in cases where context names for each cluster
in the federation don't follow
[RFC 1035](https://www.ietf.org/rfc/rfc1035.txt) label naming rules.
In such cases, you can specify a cluster name that conforms to the
[RFC 1035](https://www.ietf.org/rfc/rfc1035.txt) label naming rules
and specify the cluster context using the `--cluster-context` flag.
For example, if context of the cluster you are joining is
`gondor_needs-no_king`, then you can join the cluster by running:
```shell
kubefed join gondor --host-cluster-context=rivendell --cluster-context=gondor_needs-no_king
```
#### Secret name
Cluster credentials required by the federation control plane as
described above are stored as a secret in the host cluster. The name
of the secret is also derived from the cluster name.
However, the name of a secret object in Kubernetes should conform
to the DNS subdomain name specification described in
[RFC 1123](https://tools.ietf.org/html/rfc1123). If this isn't the
case, you can pass the secret name to `kubefed join` using the
`--secret-name` flag. For example, if the cluster name is `noldor` and
the secret name is `11kingdom`, you can join the cluster by
running:
```shell
kubefed join noldor --host-cluster-context=rivendell --secret-name=11kingdom
```
{{< note >}}
If your cluster name does not conform to the DNS subdomain name specification, all you need to do is supply the secret name using the `--secret-name` flag. `kubefed join` automatically creates the secret for you.
{{< /note >}}
### `kube-dns` configuration
`kube-dns` configuration must be updated in each joining cluster to
enable federated service discovery. If the joining Kubernetes cluster
is version 1.5 or newer and your `kubefed` is version 1.6 or newer,
then this configuration is automatically managed for you when the
clusters are joined or unjoined using `kubefed join` or `unjoin`
commands.
In all other cases, you must update `kube-dns` configuration manually
as described in the
[Updating KubeDNS section of the admin guide](/docs/admin/federation/).
## Removing a cluster from a federation
To remove a cluster from a federation, run the [`kubefed unjoin`](/docs/reference/setup-tools/kubefed/kubefed_unjoin/)
command with the cluster name and the federation's
`--host-cluster-context`:
```shell
kubefed unjoin gondor --host-cluster-context=rivendell
```
## Turning down the federation control plane
Proper cleanup of the federation control plane is not fully implemented in
this beta release of `kubefed`. However, for the time being, deleting
the federation system namespace should remove all the resources except
the persistent storage volume dynamically provisioned for the
federation control plane's etcd. You can delete the federation
namespace by running the following command:
```shell
kubectl delete ns federation-system --context=rivendell
```
{{< note >}}
`rivendell` is the host cluster name. Replace that name with the appropriate name in your configuration.
{{< /note >}}
{{% /capture %}}


@ -1,152 +0,0 @@
---
title: Set up CoreDNS as DNS provider for Cluster Federation
content_template: templates/tutorial
weight: 130
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This page shows how to configure and deploy CoreDNS to be used as the
DNS provider for Cluster Federation.
{{% /capture %}}
{{% capture objectives %}}
* Configure and deploy CoreDNS server
* Bring up federation with CoreDNS as dns provider
* Setup CoreDNS server in nameserver lookup chain
{{% /capture %}}
{{% capture prerequisites %}}
* You need to have a running Kubernetes cluster (which is
referred to as the host cluster). Please see one of the
[getting started](/docs/setup/) guides for
installation instructions for your platform.
* Support for `LoadBalancer` services in member clusters of federation is
mandatory to enable `CoreDNS` for service discovery across federated clusters.
{{% /capture %}}
{{% capture lessoncontent %}}
## Deploying CoreDNS and etcd charts
CoreDNS can be deployed in various configurations. The configuration
explained below is a reference that can be tweaked to suit the needs of
the platform and the cluster federation.
To deploy CoreDNS, we shall make use of helm charts. CoreDNS will be
deployed with [etcd](https://coreos.com/etcd) as the backend, and etcd
should be pre-installed; etcd can also be deployed using helm charts.
Shown below are the instructions to deploy etcd.
```shell
helm install --namespace my-namespace --name etcd-operator stable/etcd-operator
helm upgrade --namespace my-namespace --set cluster.enabled=true etcd-operator stable/etcd-operator
```
{{< note >}}
The default etcd deployment configuration can be overridden to suit the host cluster.
{{< /note >}}
After deployment succeeds, etcd can be accessed with the
[http://etcd-cluster.my-namespace:2379](http://etcd-cluster.my-namespace:2379) endpoint within the host cluster.
The CoreDNS default configuration should be customized to suit the federation.
Shown below is a `Values.yaml` file, which overrides the default
configuration parameters of the CoreDNS chart.
```yaml
isClusterService: false
serviceType: "LoadBalancer"
plugins:
kubernetes:
enabled: false
etcd:
enabled: true
zones:
- "example.com."
endpoint: "http://etcd-cluster.my-namespace:2379"
```
The above configuration file needs some explanation:
- `isClusterService` specifies whether CoreDNS should be deployed as a
cluster-service, which is the default. You need to set it to false, so
that CoreDNS is deployed as a Kubernetes application service.
- `serviceType` specifies the type of Kubernetes service to be created
for CoreDNS. You need to choose either "LoadBalancer" or "NodePort" to
make the CoreDNS service accessible outside the Kubernetes cluster.
- Disable `plugins.kubernetes`, which is enabled by default by
setting `plugins.kubernetes.enabled` to false.
- Enable `plugins.etcd` by setting `plugins.etcd.enabled` to
true.
- Configure the DNS zone (federation domain) for which CoreDNS is
authoritative by setting `plugins.etcd.zones` as shown above.
- Configure the etcd endpoint which was deployed earlier by setting
`plugins.etcd.endpoint`
Now deploy CoreDNS by running:
```shell
helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns
```
Verify that both etcd and CoreDNS pods are running as expected.
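A quick way to check is to list the Pods in the namespace used for both releases:
```shell
# Both the etcd pods and the CoreDNS pod should show a Running status.
kubectl get pods --namespace my-namespace
```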
## Deploying Federation with CoreDNS as DNS provider
The Federation control plane can be deployed using `kubefed init`. CoreDNS
can be chosen as the DNS provider by specifying two additional parameters:
```
--dns-provider=coredns
--dns-provider-config=coredns-provider.conf
```
`coredns-provider.conf` has the following format:
```ini
[Global]
etcd-endpoints = http://etcd-cluster.my-namespace:2379
zones = example.com.
coredns-endpoints = <coredns-server-ip>:<port>
```
- `etcd-endpoints` is the endpoint to access etcd.
- `zones` is the federation domain for which CoreDNS is authoritative and is the same as the `--dns-zone-name` flag of `kubefed init`.
- `coredns-endpoints` is the endpoint to access the CoreDNS server. This is an optional parameter introduced from v1.7 onwards.
{{< note >}}
`plugins.etcd.zones` in the CoreDNS configuration and the `--dns-zone-name` flag to `kubefed init` should match.
{{< /note >}}
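Putting these together, a complete `kubefed init` invocation might look like the following (the federation name `fellowship` and host cluster context `rivendell` are placeholders; substitute your own):
```shell
kubefed init fellowship \
    --host-cluster-context=rivendell \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="coredns-provider.conf"
```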
## Setup CoreDNS server in nameserver resolv.conf chain
{{< note >}}
The following section applies only to versions prior to v1.7
and will be automatically taken care of if the `coredns-endpoints`
parameter is configured in `coredns-provider.conf` as described in
section above.
{{< /note >}}
Once the federation control plane is deployed and federated clusters
are joined to the federation, you need to add the CoreDNS server to the
pods' nameserver resolv.conf chain in all the federated clusters, because this
self-hosted CoreDNS server is not discoverable publicly. This can be
achieved by adding the following line to the `dnsmasq` container's args in
the `kube-dns` deployment.
```
--server=/example.com./<CoreDNS endpoint>
```
Replace `example.com` above with your federation domain.
Now the federated cluster is ready for cross-cluster service discovery!
{{% /capture %}}


@ -1,221 +0,0 @@
---
title: Set up placement policies in Federation
content_template: templates/task
weight: 135
---
{{% capture overview %}}
{{< deprecationfilewarning >}}
{{< include "federation-deprecation-warning-note.md" >}}
{{< /deprecationfilewarning >}}
This page shows how to enforce policy-based placement decisions over Federated
resources using an external policy engine.
{{% /capture %}}
{{% capture prerequisites %}}
You need to have a running Kubernetes cluster (which is referred to as the
host cluster). Please see one of the [getting started](/docs/setup/)
guides for installation instructions for your platform.
{{% /capture %}}
{{% capture steps %}}
## Deploying Federation and configuring an external policy engine
The Federation control plane can be deployed using `kubefed init`.
After deploying the Federation control plane, you must configure an Admission
Controller in the Federation API server that enforces placement decisions
received from the external policy engine.
```shell
kubectl apply -f scheduling-policy-admission.yaml
```
Shown below is an example ConfigMap for the Admission Controller:
{{< codenew file="federation/scheduling-policy-admission.yaml" >}}
The ConfigMap contains three files:
* `config.yml` specifies the location of the `SchedulingPolicy` Admission
Controller config file.
* `scheduling-policy-config.yml` specifies the location of the kubeconfig file
required to contact the external policy engine. This file can also include a
`retryBackoff` value that controls the initial retry backoff delay in
milliseconds.
* `opa-kubeconfig` is a standard kubeconfig containing the URL and credentials
needed to contact the external policy engine.
Edit the Federation API server deployment to enable the `SchedulingPolicy`
Admission Controller.
```shell
kubectl -n federation-system edit deployment federation-apiserver
```
Update the Federation API server command line arguments to enable the Admission
Controller and mount the ConfigMap into the container. If there's an existing
`--enable-admission-plugins` flag, append `,SchedulingPolicy` instead of adding
another line.
```
--enable-admission-plugins=SchedulingPolicy
--admission-control-config-file=/etc/kubernetes/admission/config.yml
```
Add the following volume to the Federation API server pod:
```yaml
- name: admission-config
  configMap:
    name: admission
```
Add the following volume mount to the Federation API server `apiserver` container:
```yaml
volumeMounts:
- name: admission-config
  mountPath: /etc/kubernetes/admission
```
## Deploying an external policy engine
The [Open Policy Agent (OPA)](http://openpolicyagent.org) is an open source,
general-purpose policy engine that you can use to enforce policy-based placement
decisions in the Federation control plane.
Create a Service in the host cluster to contact the external policy engine:
```shell
kubectl apply -f policy-engine-service.yaml
```
Shown below is an example Service for OPA.
{{< codenew file="federation/policy-engine-service.yaml" >}}
Create a Deployment in the host cluster with the Federation control plane:
```shell
kubectl apply -f policy-engine-deployment.yaml
```
Shown below is an example Deployment for OPA.
{{< codenew file="federation/policy-engine-deployment.yaml" >}}
## Configuring placement policies via ConfigMaps
The external policy engine will discover placement policies created in the
`kube-federation-scheduling-policy` namespace in the Federation API server.
Create the namespace if it does not already exist:
```shell
kubectl --context=federation create namespace kube-federation-scheduling-policy
```
Configure a sample policy to test the external policy engine:
```
# OPA supports a high-level declarative language named Rego for authoring and
# enforcing policies. For more information on Rego, visit
# http://openpolicyagent.org.
# Rego policies are namespaced by the "package" directive.
package kubernetes.placement
# Imports provide aliases for data inside the policy engine. In this case, the
# policy simply refers to "clusters" below.
import data.kubernetes.clusters
# The "annotations" rule generates a JSON object containing the key
# "federation.kubernetes.io/replica-set-preferences" mapped to <preferences>.
# The preferences values is generated dynamically by OPA when it evaluates the
# rule.
#
# The SchedulingPolicy Admission Controller running inside the Federation API
# server will merge these annotations into incoming Federated resources. By
# setting replica-set-preferences, we can control the placement of Federated
# ReplicaSets.
#
# Rules are defined to generate JSON values (booleans, strings, objects, etc.)
# When OPA evaluates a rule, it generates a value IF all of the expressions in
# the body evaluate successfully. All rules can be understood intuitively as
# <head> if <body> where <body> is true if <expr-1> AND <expr-2> AND ...
# <expr-N> is true (for some set of data.)
annotations["federation.kubernetes.io/replica-set-preferences"] = preferences {
input.kind = "ReplicaSet"
value = {"clusters": cluster_map, "rebalance": true}
json.marshal(value, preferences)
}
# This "annotations" rule generates a value for the "federation.alpha.kubernetes.io/cluster-selector"
# annotation.
#
# In English, the policy asserts that resources in the "production" namespace
# that are not annotated with "criticality=low" MUST be placed on clusters
# labelled with "on-premises=true".
annotations["federation.alpha.kubernetes.io/cluster-selector"] = selector {
input.metadata.namespace = "production"
not input.metadata.annotations.criticality = "low"
json.marshal([{
"operator": "=",
"key": "on-premises",
"values": "[true]",
}], selector)
}
# Generates a set of cluster names that satisfy the incoming Federated
# ReplicaSet's requirements. In this case, just PCI compliance.
replica_set_clusters[cluster_name] {
clusters[cluster_name]
not insufficient_pci[cluster_name]
}
# Generates a set of clusters that must not be used for Federated ReplicaSets
# that request PCI compliance.
insufficient_pci[cluster_name] {
clusters[cluster_name]
input.metadata.annotations["requires-pci"] = "true"
not pci_clusters[cluster_name]
}
# Generates a set of clusters that are PCI certified. In this case, we assume
# clusters are annotated to indicate if they have passed PCI compliance audits.
pci_clusters[cluster_name] {
clusters[cluster_name].metadata.annotations["pci-certified"] = "true"
}
# Helper rule to generate a mapping of desired clusters to weights. In this
# case, weights are static.
cluster_map[cluster_name] = {"weight": 1} {
replica_set_clusters[cluster_name]
}
```
Shown below is the command to create the sample policy:
```shell
kubectl --context=federation -n kube-federation-scheduling-policy create configmap scheduling-policy --from-file=policy.rego
```
This sample policy illustrates a few key ideas:
* Placement policies can refer to any field in Federated resources.
* Placement policies can leverage external context (for example, Cluster
metadata) to make decisions.
* Administrative policy can be managed centrally.
* Policies can define simple interfaces (such as the `requires-pci` annotation) to
avoid duplicating logic in manifests.
## Testing placement policies
Annotate one of the clusters to indicate that it is PCI certified.
```shell
kubectl --context=federation annotate clusters cluster-name-1 pci-certified=true
```
Deploy a Federated ReplicaSet to test the placement policy.
{{< codenew file="federation/replicaset-example-policy.yaml" >}}
Shown below is the command to deploy a ReplicaSet that *does* match the policy.
```shell
kubectl --context=federation create -f replicaset-example-policy.yaml
```
Inspect the ReplicaSet to confirm the appropriate annotations have been applied:
```shell
kubectl --context=federation get rs nginx-pci -o jsonpath='{.metadata.annotations}'
```
{{% /capture %}}