examples/staging/sharing-clusters
Marek Siarkowicz e4cc298ab6 Import kubernetes updates (#210)
* Admin Can Specify in Which GCE Availability Zone(s) a PV Shall Be Created

An admin wants to specify in which GCE availability zone(s) users may create persistent volumes using dynamic provisioning.

To support this, the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically provisioned PVs for PVCs that use the StorageClass are created in one of the configured zones.

* Admin Can Specify in Which AWS Availability Zone(s) a PV Shall Be Created

An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.

To support this, the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically provisioned PVs for PVCs that use the StorageClass are created in one of the configured zones.

* move hardPodAffinitySymmetricWeight to scheduler policy config

* Added Bind method to Scheduler Extender

- only one extender can support the bind method
- if an extender supports bind, scheduler delegates the pod binding to the extender

* examples/podsecuritypolicy/rbac: allow use of projected volumes in the restricted PSP.

* fix typo

* SPBM policy ID support in vsphere cloud provider

* fix the invalid link

* Fix typo: DeamonSet -> DaemonSet

* Update GlusterFS examples readme.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>

* fix some typos in examples/volumes

* Fix spelling in examples/spark

* Correct spelling in quobyte

* Support custom domains in the cockroachdb example's init container

This switches from using v0.1 of the peer-finder image to a version that
includes https://github.com/kubernetes/contrib/pull/2013

While I'm here, switch the version of cockroachdb from 1.0 to 1.0.1

* Update docs/ URLs to point to proper locations

* Adds --insecure to cockroachdb client command

Cockroach errors out when using said command:

```shell
▶  kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public
Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
Waiting for pod default/cockroach-client to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
                                                      Error attaching, falling back to logs: unable to upgrade connection: container cockroach-client not found in pod cockroach-client_default
Error: problem using security settings, did you mean to use --insecure?: problem with CA certificate: not found
Failed running "sql"
Waiting for pod default/cockroach-client to terminate, status is Running
pod "cockroach-client" deleted
```

This PR updates the README.md to include --insecure in the client command

* Add StorageOS volume plugin

* examples/volumes/flexvolume/nfs: check for jq and simplify quoting.

* Remove broken getvolumename and pass PV or volume name to attach call

* Remove controller node plugin driver dependency for non-attachable flex volume drivers (Ex: NFS).

* Add `imageFeatures` parameter for RBD volume plugin, which is used to
customize RBD image format 2 features.
Update RBD docs in examples/persistent-volume-provisioning/README.md.

* Only the `layering` RBD image format 2 feature is supported for now.

* Formatted Dockerfile to be cleaner and more precise

* Update docs for user-guide

* Make the Quota creation optional

* Remove duplicated line from ceph-secret-admin.yaml

* Update CockroachDB tag to v1.0.3

* Correct the comment in PSP examples.

* Update wordpress to 4.8.0

* Cassandra example, use nodetool drain in preStop

* Add termination gracePeriod

* Use buildozer to remove deprecated automanaged tags

* Use buildozer to delete licenses() rules except under third_party/

* NR Infrastructure agent example daemonset

Copy of previous newrelic example, then modified to use the new agent
"newrelic-infra" instead of "nrsysmond".

Also maps the host node's root filesystem into /host in the container (read-only,
but this still exposes underlying node info to the container).

Updates to README

* Reduce one-time URL redirection

* update to rbac v1 in yaml file

* Replicate the persistent volume label admission plugin in a controller in
the cloud-controller-manager

* update related files

* Parameterize stickyMaxAgeMinutes for service in API

* Update example to CockroachDB v1.0.5

* Remove storage-class annotations in examples

* PodSecurityPolicy.allowedCapabilities: add support for using * to allow requesting any capabilities.

Also modify "privileged" PSP to use it and allow privileged users to use
any capabilities.

* Add examples pods to demonstrate CPU manager.

* Tag broken examples test as manual

* bazel: use autogenerated all-srcs rules instead of manually-curated sources rules

* Update CockroachDB tag to v1.1.0

* update BUILD files

* pkg/api/legacyscheme: fixup imports

* Update bazel

* [examples.storage/minio] update deploy config version

* Volunteer to help review examples

I would like to do some code review for examples of how to run real applications with Kubernetes.

* examples/podsecuritypolicy/rbac: fix names in comments and sync with examples repository.

* Update storageclass version to v1 in examples

* pkg/apis/core: mechanical import fixes in dependencies

* Use k8s.gcr.io vanity domain for container images

* Update generated files

* gcloud docker now auths k8s.gcr.io by default

* Add scheduler optimization options: short-circuit all predicates if one predicate fails

* Revert k8s.gcr.io vanity domain

This reverts commit eba5b6092afcae27a7c925afea76b85d903e87a9.

Fixes https://github.com/kubernetes/kubernetes/issues/57526

* Autogenerate BUILD files

* Move scheduler code out of plugin directory.

This moves plugin/pkg/scheduler to pkg/scheduler and
plugin/cmd/kube-scheduler to cmd/kube-scheduler.

Bulk of the work was done with gomvpkg, except for kube-scheduler main
package.

* Fix scheduler refs in BUILD files.

Update references to moved scheduler code.

* Switch to k8s.gcr.io vanity domain

This is the 2nd attempt.  The previous was reverted while we figured out
the regional mirrors (oops).

New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest.  To publish
an image, push k8s-staging.gcr.io and it will be synced to the regionals
automatically (similar to today).  For now the staging is an alias to
gcr.io/google_containers (the legacy URL).

When we move off of google-owned projects (working on it), then we just
do a one-time sync, and change the google-internal config, and nobody
outside should notice.

We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it.  Nice and
visible, easy to keep track of.

* Remove apiVersion from scheduler extender example configuration

* Update examples to use PSPs from the policy API group.

* fix all the typos across the project

* Autogenerated: hack/update-bazel.sh

* Modify PodSecurityPolicy admission plugin to additionally allow authorizing via "use" verb in policy API group.

* fix todo: add validate method for &schedulerapi.Policy

* examples/podsecuritypolicy: add owners.

* Adding dummy and dummy-attachable example Flexvolume drivers; adding DaemonSet deployment example

* Fix relative links in README
Files in this directory: README.md, make_secret.go

README.md

Sharing Clusters

This example demonstrates how to access one Kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private IP range per network (e.g. GCE, GKE, AWS).

Setup

Create a cluster in the US (you don't need to do this if you already have a running Kubernetes cluster):

$ cluster/kube-up.sh
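
If you want to control where this first cluster lands, the same environment variables used for the Europe cluster below work here as well; for example (us-central1-b is just an illustrative zone):

$ KUBE_GCE_ZONE=us-central1-b cluster/kube-up.sh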

Before creating our second cluster, let's have a look at the kubectl config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_us>
...
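
The snippet above is an abridged view of the client configuration; you can print it yourself at any time (credentials are shown as REDACTED) with:

$ kubectl config view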

Now spin up the second cluster in Europe

$ ./cluster/kube-up.sh
$ KUBE_GCE_ZONE=europe-west1-b KUBE_GCE_INSTANCE_PREFIX=eu ./cluster/kube-up.sh

Your kubectl config should contain both clusters:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.25.221
  name: <clustername_eu>
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_eu>
...

And kubectl get nodes should agree:

$ kubectl get nodes
NAME             LABELS                                  STATUS
eu-node-0n61     kubernetes.io/hostname=eu-node-0n61     Ready
eu-node-79ua     kubernetes.io/hostname=eu-node-79ua     Ready
eu-node-7wz7     kubernetes.io/hostname=eu-node-7wz7     Ready
eu-node-loh2     kubernetes.io/hostname=eu-node-loh2     Ready

$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME                     LABELS                                                            STATUS
kubernetes-node-5jtd     kubernetes.io/hostname=kubernetes-node-5jtd                       Ready
kubernetes-node-lqfc     kubernetes.io/hostname=kubernetes-node-lqfc                       Ready
kubernetes-node-sjra     kubernetes.io/hostname=kubernetes-node-sjra                       Ready
kubernetes-node-wul8     kubernetes.io/hostname=kubernetes-node-wul8                       Ready

Testing reachability

For this test to work we'll need to create a service in Europe:

$ kubectl config use-context <clustername_eu>
$ kubectl create -f /tmp/secret.json
$ kubectl create -f examples/https-nginx/nginx-app.yaml
$ kubectl exec -it my-nginx-luiln -- sh -c 'echo "Europe nginx" >> /usr/share/nginx/html/index.html'
$ kubectl get ep
NAME         ENDPOINTS
kubernetes   10.240.249.92:443
nginxsvc     10.244.0.4:80,10.244.0.4:443
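
Note that the nginx pod name used in the exec step above (my-nginx-luiln) is generated, so it will differ in your cluster; list the pods in the EU context to find yours:

$ kubectl get pods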

Just to test reachability, we'll try hitting the Europe nginx from our initial US central cluster. Create a basic curl pod in the US cluster:

apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
      - sleep
      - "360000000"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
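
Save the manifest above under any name you like (curlpod.yaml here is just an example) and create the pod in the US cluster:

$ kubectl config use-context <clustername_us>
$ kubectl create -f curlpod.yaml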

And test that you can actually reach the test nginx service across continents:

$ kubectl config use-context <clustername_us>
$ kubectl exec -it curlpod -- /bin/sh
[ root@curlpod:/ ]$ curl http://10.244.0.4:80
Europe nginx

Granting access to the remote cluster

We will grant the US cluster access to the Europe cluster. Basically we're going to set up a secret that allows kubectl to function in a pod running in the US cluster, just like it did on our local machine in the previous step. First create a secret with the contents of the current .kube/config:

$ kubectl config use-context <clustername_eu>
$ go run ./make_secret.go --kubeconfig=$HOME/.kube/config > /tmp/secret.json
$ kubectl config use-context <clustername_us>
$ kubectl create -f /tmp/secret.json
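
make_secret.go simply wraps the current kubeconfig in a Secret manifest. Judging from the pod spec below, the Secret is named kubeconfig and the file sits under the key config (so it appears at /.kube/config when mounted); under that assumption, a roughly equivalent sketch using only kubectl would be:

$ kubectl config use-context <clustername_eu>
$ cp $HOME/.kube/config /tmp/eu-kubeconfig   # snapshot while current-context still points at the EU cluster
$ kubectl config use-context <clustername_us>
$ kubectl create secret generic kubeconfig --from-file=config=/tmp/eu-kubeconfig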

In the US cluster, create a kubectl pod that uses the secret.

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubectl-tester"
  },
  "spec": {
    "volumes": [
       {
            "name": "secret-volume",
            "secret": {
                "secretName": "kubeconfig"
            }
        }
    ],
    "containers": [
      {
        "name": "kubectl",
        "image": "bprashanth/kubectl:0.0",
        "imagePullPolicy": "Always",
        "env": [
            {
                "name": "KUBECONFIG",
                "value": "/.kube/config"
            }
        ],
        "args": [
          "proxy", "-p", "8001"
        ],
        "volumeMounts": [
          {
              "name": "secret-volume",
               "mountPath": "/.kube"
          }
        ]
      }
    ]
  }
}

And check that you can access the remote cluster:

$ kubectl config use-context <clustername_us>
$ kubectl exec -it kubectl-tester -- bash

kubectl-tester $ kubectl get nodes
NAME             LABELS                                  STATUS
eu-node-0n61     kubernetes.io/hostname=eu-node-0n61     Ready
eu-node-79ua     kubernetes.io/hostname=eu-node-79ua     Ready
eu-node-7wz7     kubernetes.io/hostname=eu-node-7wz7     Ready
eu-node-loh2     kubernetes.io/hostname=eu-node-loh2     Ready
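
Because the container also runs kubectl proxy -p 8001, anything else running inside that pod can reach the EU API server over plain HTTP on localhost; for example, from the same shell (assuming curl is available in the image):

kubectl-tester $ curl http://localhost:8001/api/v1/nodes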

For a more advanced example of sharing clusters, see the service-loadbalancer.
