Clean up broken analytics links

parent 1b9f904307
commit e6f04c498b
@@ -28,7 +28,3 @@ expand this capability.
 This in affect makes every node a seed provider, which is not a recommended best practice. This increases maintenance and reduces gossip performance.


-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -265,7 +265,3 @@ redis-replica
 Tip: To turn down your Kubernetes cluster, follow the corresponding instructions in the version of the
 [Getting Started Guides](https://kubernetes.io/docs/getting-started-guides/) that you previously used to create your cluster.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -137,8 +137,3 @@ Each example must be well-structured and documented to ensure clarity and usabil
 potential incompatibility with current Kubernetes versions.
 * Community Maintenance: SIG Apps will act as the overall steward, and individual example
   maintainers (original authors or new volunteers) are crucial for the health of the repository.
-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -45,7 +45,3 @@ This should now
 3. Mount it on the kubelet's host machine
 4. Spin up a container with this volume mounted to the path specified in the pod definition

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -180,7 +180,3 @@ kubectl create -f examples/staging/cluster-dns/dns-frontend-pod.yaml
 kubectl logs dns-frontend
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -119,7 +119,3 @@ we can clean up everything that we created in one quick command using a selector
 kubectl delete statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=cockroachdb
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -189,6 +189,3 @@ You should see something similar to the following:
 }
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -190,7 +190,3 @@ You should see something similar to the following:
 }
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -127,7 +127,3 @@ Error: <*>lookup elasticsearch-logging: no such host
 </body></html>
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -124,6 +124,3 @@ $ curl https://104.198.1.26:30028 -k

 For more information on how to run this in a kubernetes cluster, please see the [user-guide](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/).

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -128,7 +128,3 @@ kubectl delete -f examples/javaee/mysql-service.yaml
 kubectl delete -f examples/javaee/wildfly-rc.yaml
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -177,8 +177,3 @@ All resources created in this application can be deleted:
 $ kubectl delete -f examples/javaweb-tomcat-sidecar/javaweb-2.yaml
 ```

-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -1,7 +1,3 @@

 This file has moved to: http://kubernetes.io/docs/user-guide/jobs/

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -1,7 +1,3 @@

 This file has moved to: http://kubernetes.io/docs/user-guide/jobs/

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -1,7 +1,3 @@

 This file has moved to: http://kubernetes.io/docs/user-guide/jobs/

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -209,7 +209,3 @@ container section:
 ],
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -8,7 +8,3 @@ To build and push the base meteor-kubernetes image:
 docker build -t chees/meteor-kubernetes .
 docker push chees/meteor-kubernetes

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -141,6 +141,3 @@ The daemonset instructs Kubernetes to spawn pods on each node, mapping /dev/, /r

 It's a bit cludgy to define the environment variables like we do here in these config files. There is [another issue](https://github.com/kubernetes/kubernetes/issues/4710) to discuss adding mapping secrets to environment variables in Kubernetes. (Personally I don't like that method and prefer to use the config secrets.)

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -155,6 +155,3 @@ When the New Relic agent starts, `NRSYSMOND_hostname` is set using the output of

 It's a bit cludgy to define the environment variables like we do here in these config files. There is [another issue](https://github.com/kubernetes/kubernetes/issues/4710) to discuss adding mapping secrets to environment variables in Kubernetes.

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -277,6 +277,3 @@ With Google Cloud Platform, get the IP address of all load balancers with the fo
 gcloud compute forwarding-rules list
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -205,7 +205,3 @@ Clean up your cluster from resources created with this example:
 $ ${OPENSHIFT_EXAMPLE}/cleanup.sh
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -544,6 +544,3 @@ $ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
 /dev/rbd1 2.9G 4.5M 2.8G 1% /var/lib/www/html
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -210,7 +210,3 @@ To turn down a Kubernetes cluster:
 $ cluster/kube-down.sh
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -243,6 +243,3 @@ $ kubectl get pod nginx -o yaml | egrep "psp|privileged"
 privileged: true
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -280,6 +280,3 @@ You see that our BestEffort pod goes in a restart cycle, but the pods with great
 As you can see, we rely on the Kernel to react to system OOM events. Depending on how your host operating
 system was configured, and which process the Kernel ultimately decides to kill on your Node, you may experience unstable results. In addition, during an OOM event, while the kernel is cleaning up processes, the system may experience significant periods of slow down or appear unresponsive. As a result, while the system allows you to overcommit on memory, we recommend to not induce a Kernel sys OOM.

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -188,7 +188,3 @@ kubectl delete deployment selenium-python
 kubectl delete svc selenium-hub
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -181,7 +181,3 @@ eu-node-loh2 kubernetes.io/hostname=eu-node-loh2 Ready

 For a more advanced example of sharing clusters, see the [service-loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer/README.md)

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -68,7 +68,3 @@ kubectl delete deployment my-nginx
 Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-2)
 is given in a different document.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -371,6 +371,3 @@ Then visit [http://localhost:8080/](http://localhost:8080/).
 top right as well), the `port-forward` probably failed and needs to be
 restarted. See #12179.

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -118,6 +118,3 @@ While still in the container, you can see the output of your Spark Job in the Di
 root@spark-master-controller-c1sqd:/# ls -l /mnt/glusterfs/output
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -230,7 +230,3 @@ kubectl scale deployment hazelcast --replicas 4

 See [here](https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java)

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -130,8 +130,3 @@ At this point, there is a working cluster that can begin being used via the pxc-

 This setup certainly can become more fluid and dynamic. One idea is to perhaps use an etcd container to store information about node state. Originally, there was a read-only kubernetes API available to each container but that has since been removed. Also, Kelsey Hightower is working on moving the functionality of confd to Kubernetes. This could replace the shell duct tape that builds the cluster configuration file for the image.

-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -128,7 +128,3 @@ kubectl scale rc redis-sentinel --replicas=3
 kubectl delete pods redis-master
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -124,7 +124,3 @@ the generated pods which is using `nodeSelector` to force k8s to schedule contai

 * see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -107,7 +107,3 @@ in Vitess. Each page number is assigned to one of the shards using a

 You may also want to remove any firewall rules you created.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -166,7 +166,3 @@ Make sure the Nimbus Pod is running.

 ```kubectl create -f storm-worker-controller.yaml```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -20,8 +20,3 @@ https://github.com/draios/sysdig-cloud-scripts/tree/master/agent_deploy/kubernet
 Please see the Sysdig Cloud support site for the latest documentation:
 http://support.sysdigcloud.com/hc/en-us/sections/200959909

-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -31,7 +31,3 @@ You should now be able to query your web server:
 $ Hello World
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -18,6 +18,3 @@ Launch the Pod:
 # kubectl create -f examples/staging/volumes/azure_disk/azure.yaml
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -57,6 +57,3 @@ The same mechanism can also be used to mount the Azure File Storage using a Pers

 Correspondingly, you then mount the volume inside pods using the normal `persistentVolumeClaim` reference. This mechanism is used in the sample pod YAML [azure-2.yaml](azure-2.yaml).

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -21,7 +21,3 @@ You should now be able to query your web server:
 $ Hello World
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -86,7 +86,3 @@ $ systemctl enable --now multipathd.service
 inaccessible block devices as they will be claimed by multipath and
 exposed as a device in /dev/mapper/*.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -110,6 +110,3 @@ Read more about the [Flocker Cluster Architecture](https://docs.clusterhq.com/en

 To see a demo example of using Kubernetes and Flocker, visit [Flocker's blog post on High Availability with Kubernetes and Flocker](https://clusterhq.com/2015/12/22/ha-demo-kubernetes-flocker/)

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -170,9 +170,3 @@ Thu Oct 22 19:28:55 UTC 2015
 nfs-busybox-w3s4t
 ```

-
-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -6,8 +6,3 @@ only NFSv3 is opened in this container.

 Available as `gcr.io/google-samples/nfs-server`

-
-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -364,7 +364,3 @@ create Portworx volumes out of band and they will be created automatically.
 pvpod 1/1 Running 0 48m
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -93,6 +93,3 @@ $ docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt"}}{{ .
 /var/lib/kubelet/plugins/kubernetes.io~quobyte/root#root@testVolume
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -243,6 +243,4 @@ NAME READY STATUS RESTARTS AGE
 pod-0 1/1 Running 0 23m
 pod-sio-small 1/1 Running 0 5s
 ```
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
+
@@ -675,6 +675,3 @@ vSphere volumes can be consumed by Stateful Sets.

 [Download example](simple-statefulset.yaml)

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -38,7 +38,3 @@ Here are the commands:

 If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -114,6 +114,3 @@ $ kubectl exec glusterfs -- mount | grep gluster

 You may also run `docker ps` on the host to see the actual container.

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -130,7 +130,3 @@ Run *docker inspect* and verify the container mounted the host directory into th
 /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw
 ```

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -59,7 +59,3 @@ On the Kubernetes host, I got these in mount output

 If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.

-
-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->
@@ -470,6 +470,3 @@ StorageOS supports the following storage class parameters:
 test-storageos-redis-sc-pvc 1/1 Running 0 44s
 ```

-<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
-[]()
-<!-- END MUNGE: GENERATED_ANALYTICS -->