diff --git a/docs/concepts/cluster-administration/manage-deployment.md b/docs/concepts/cluster-administration/manage-deployment.md index 2f97b153d9..4a94607125 100644 --- a/docs/concepts/cluster-administration/manage-deployment.md +++ b/docs/concepts/cluster-administration/manage-deployment.md @@ -137,7 +137,7 @@ If you're interested in learning more about `kubectl`, go ahead and read [kubect The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another. -For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels: +For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels: ```yaml labels: diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index febbc6726f..3690f36ea6 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -19,11 +19,11 @@ This is a living document. If you think of something that is not on this list bu - Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly. -- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax. +- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax. Note also that many `kubectl` commands can be called on a directory, so you can also call `kubectl create` on a directory of config files. See below for more details. -- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce error. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. See the [guestbook app's](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/frontend-deployment.yaml) of this. +- Don't specify default values unnecessarily, in order to simplify and minimize configs, and to reduce error. For example, omit the selector and labels in a `ReplicationController` if you want them to be the same as the labels in its `podTemplate`, since those fields are populated from the `podTemplate` labels by default. 
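To make the label scheme and the defaulting behavior described above concrete, here is a minimal sketch (the name and image below are illustrative, not taken from the guestbook files): a `ReplicationController` that carries semantic labels only on its pod template and omits `spec.selector`, letting the selector default to those template labels.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend                 # illustrative name
spec:
  replicas: 3
  # selector omitted on purpose: it defaults to the pod template labels below
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # illustrative image; any web server works here
        ports:
        - containerPort: 80
```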
See the [guestbook app's](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) .yaml files for some [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/frontend-deployment.yaml) of this. - Put an object description in an annotation to allow better introspection. @@ -58,7 +58,7 @@ This is a living document. If you think of something that is not on this list bu ## Using Labels -- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) app for an example of this approach. +- Define and use [labels](/docs/user-guide/labels/) that identify __semantic attributes__ of your application or deployment. For example, instead of attaching a label to a set of pods to explicitly represent some service (For example, `service: myservice`), or explicitly representing the replication controller managing the pods (for example, `controller: mycontroller`), attach labels that identify semantic attributes, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. This will let you select the object groups appropriate to the context— for example, a service for all "tier: frontend" pods, or all "test" phase components of app "myapp". See the [guestbook](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) app for an example of this approach. A service can be made to span multiple deployments, such as is done across [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/), by simply omitting release-specific labels from its selector, rather than updating a service's selector to match the replication controller's selector fully. diff --git a/docs/concepts/services-networking/connect-applications-service.md b/docs/concepts/services-networking/connect-applications-service.md index 3572e9e9f2..3209c50afb 100644 --- a/docs/concepts/services-networking/connect-applications-service.md +++ b/docs/concepts/services-networking/connect-applications-service.md @@ -169,7 +169,7 @@ Till now we have only accessed the nginx server from within the cluster. 
Before * An nginx server configured to use the certificates * A [secret](/docs/user-guide/secrets) that makes the certificates accessible to pods -You can acquire all these from the [nginx https example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/), in short: +You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/https-nginx/), in short: ```shell $ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json @@ -188,7 +188,7 @@ Now modify your nginx replicas to start an https server using the certificate in Noteworthy points about the nginx-secure-app manifest: - It contains both Deployment and Service specification in the same file. -- The [nginx server](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports. +- The [nginx server](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/https-nginx/default.conf) serves http traffic on port 80 and https traffic on 443, and nginx Service exposes both ports. - Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is setup *before* the nginx server is started. ```shell diff --git a/docs/concepts/services-networking/dns-pod-service.md b/docs/concepts/services-networking/dns-pod-service.md index 7f024f523a..bf3779981a 100644 --- a/docs/concepts/services-networking/dns-pod-service.md +++ b/docs/concepts/services-networking/dns-pod-service.md @@ -345,7 +345,7 @@ kube-dns 10.180.3.17:53,10.180.3.17:53 1h If you do not see the endpoints, see endpoints section in the [debugging services documentation](/docs/tasks/debug-application-cluster/debug-service/). -For additional Kubernetes DNS examples, see the [cluster-dns examples](https://git.k8s.io/kubernetes/examples/cluster-dns) in the Kubernetes GitHub repository. +For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) in the Kubernetes GitHub repository. ## Kubernetes Federation (Multiple Zone support) diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index fae77cfdfe..f20004aff7 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -536,7 +536,7 @@ parameters: ``` $ kubectl create secret generic heketi-secret --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' --namespace=default ``` - Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://git.k8s.io/kubernetes/examples/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml). + Example of a secret can be found in [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml). * `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids, for ex: "8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter. * `gidMin`, `gidMax` : The minimum and maximum value of GID range for the storage class. A unique value (GID) in this range ( gidMin-gidMax ) will be used for dynamically provisioned volumes. These are optional values. 
If not specified, the volume will be provisioned with a value between 2000-2147483647 which are defaults for gidMin and gidMax respectively. @@ -631,7 +631,7 @@ parameters: vSphere Infrastructure(VI) administrator can specify storage requirements for applications in terms of storage capabilities while creating a storage class inside Kubernetes. Please note that while creating a StorageClass, administrator should specify storage capability names used in the table above as these names might differ from the ones used by VSAN. For example - Number of disk stripes per object is referred to as stripeWidth in VSAN documentation however vSphere Cloud Provider uses a friendly name diskStripes. -You can see [vSphere example](https://git.k8s.io/kubernetes/examples/volumes/vsphere) for more details. +You can see [vSphere example](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere) for more details. #### Ceph RBD diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 28cb140e48..868b5ad613 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -300,7 +300,7 @@ writers simultaneously. **Important:** You must have your own NFS server running with the share exported before you can use it. {: .caution} -See the [NFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/nfs) for more details. +See the [NFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/nfs) for more details. ### iscsi @@ -319,7 +319,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. -See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/iscsi) for more details. +See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/iscsi) for more details. ### fc (fibre channel) @@ -331,7 +331,7 @@ targetWWNs expect that those WWNs are from multi-path connections. **Important:** You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them. {: .caution} -See the [FC example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/fibre_channel) for more details. +See the [FC example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/fibre_channel) for more details. ### flocker @@ -347,7 +347,7 @@ can be "handed off" between pods as required. **Important:** You must have your own Flocker installation running before you can use it. {: .caution} -See the [Flocker example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flocker) for more details. +See the [Flocker example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flocker) for more details. ### glusterfs @@ -362,7 +362,7 @@ simultaneously. **Important:** You must have your own GlusterFS installation running before you can use it. {: .caution} -See the [GlusterFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/glusterfs) for more details. +See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/glusterfs) for more details. 
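For orientation, a pod consuming one of these shared filesystems looks roughly like the sketch below, which assumes a GlusterFS setup exposed through an `Endpoints` object named `glusterfs-cluster` and an existing Gluster volume named `kube_vol` (both names are assumptions; see the linked example for the full walkthrough):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-client             # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/glusterfs      # where the share appears inside the container
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the Gluster servers (assumed to exist)
      path: kube_vol                 # name of an existing GlusterFS volume (assumed)
      readOnly: true
```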
### rbd @@ -382,7 +382,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. -See the [RBD example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/rbd) for more details. +See the [RBD example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/rbd) for more details. ### cephfs @@ -396,7 +396,7 @@ writers simultaneously. **Important:** You must have your own Ceph server running with the share exported before you can use it. {: .caution} -See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details. +See the [CephFS example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/cephfs/) for more details. ### gitRepo @@ -555,20 +555,20 @@ A `FlexVolume` enables users to mount vendor volumes into a pod. It expects vend drivers are installed in the volume plugin path on each kubelet node. This is an alpha feature and may change in future. -More details are in [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/flexvolume/README.md). +More details are in [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/flexvolume/README.md). ### AzureFileVolume A `AzureFileVolume` is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod. -More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_file/README.md). +More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_file/README.md). ### AzureDiskVolume A `AzureDiskVolume` is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/) into a Pod. -More details can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/azure_disk/README.md). +More details can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/azure_disk/README.md). ### vsphereVolume @@ -626,7 +626,7 @@ spec: volumePath: "[DatastoreName] volumes/myDisk" fsType: ext4 ``` -More examples can be found [here](https://git.k8s.io/kubernetes/examples/volumes/vsphere). +More examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere). ### Quobyte @@ -636,7 +636,7 @@ A `Quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume t **Important:** You must have your own Quobyte setup running with the volumes created before you can use it. {: .caution} -See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/quobyte) for more details. +See the [Quobyte example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/quobyte) for more details. ### PortworxVolume A `PortworxVolume` is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a @@ -669,7 +669,7 @@ spec: **Important:** Make sure you have an existing PortworxVolume with name `pxvol` before using it in the pod. 
{: .caution} -More details and examples can be found [here](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/portworx/README.md). +More details and examples can be found [here](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/portworx/README.md). ### ScaleIO ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable @@ -705,7 +705,7 @@ spec: fsType: xfs ``` -For further detail, please the see the [ScaleIO examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/scaleio). +For further detail, please see the [ScaleIO examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/scaleio). ### StorageOS A `storageos` volume allows an existing [StorageOS](https://www.storageos.com) volume to be mounted into your pod. @@ -747,7 +747,7 @@ spec: fsType: ext4 ``` -For more information including Dynamic Provisioning and Persistent Volume Claims, please see the [StorageOS examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/storageos). +For more information including Dynamic Provisioning and Persistent Volume Claims, please see the [StorageOS examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/volumes/storageos). ### local diff --git a/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 23f604abab..1c5ceed3df 100644 --- a/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -366,7 +366,7 @@ of custom controller for those pods. This allows the most flexibility, but may complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn -starts a Spark master controller (see [spark example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/spark/README.md)), runs a spark +starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/spark/README.md)), runs a Spark driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job diff --git a/docs/concepts/workloads/controllers/petset.md b/docs/concepts/workloads/controllers/petset.md index 87c301259a..42c90cfd96 100644 --- a/docs/concepts/workloads/controllers/petset.md +++ b/docs/concepts/workloads/controllers/petset.md @@ -39,7 +39,7 @@ This doc assumes familiarity with the following Kubernetes concepts: * [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) * [Headless Services](/docs/user-guide/services/#headless-services) * [Persistent Volumes](/docs/concepts/storage/volumes/) -* [Persistent Volume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) +* [Persistent Volume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) You need a working Kubernetes cluster at version >= 1.3, with a healthy DNS [cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) at version >= 15. You cannot use PetSet on a hosted Kubernetes provider that has disabled `alpha` resources.
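As a rough sketch of what the provisioning prerequisite means in practice: with a persistent volume provisioner running, a claim that names a storage class is all a pet needs in order to get storage. The class name `fast` below is an assumption, and the beta annotation shown here is the spelling used before the `storageClassName` field existed; newer clusters use that field instead.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pet-0                                     # illustrative claim name
  annotations:
    volume.beta.kubernetes.io/storage-class: "fast"   # assumed StorageClass name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                                    # size is illustrative
```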
@@ -95,7 +95,7 @@ Before you start deploying applications as PetSets, there are a few limitations * PetSet is an *alpha* resource, not available in any Kubernetes release prior to 1.3. * As with all alpha/beta resources, it can be disabled through the `--runtime-config` option passed to the apiserver, and in fact most likely will be disabled on hosted offerings of Kubernetes. * The only updatable field on a PetSet is `replicas`. -* The storage for a given pet must either be provisioned by a [persistent volume provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that persistent volume provisioning is also currently in alpha. +* The storage for a given pet must either be provisioned by a [persistent volume provisioner](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. Note that persistent volume provisioning is also currently in alpha. * Deleting and/or scaling a PetSet down will *not* delete the volumes associated with the PetSet. This is done to ensure safety first, your data is more valuable than an auto purge of all related PetSet resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**. * All PetSets currently require a "governing service", or a Service responsible for the network identity of the pets. The user is responsible for this Service. * Updating an existing PetSet is currently a manual process, meaning you either need to deploy a new PetSet with the new image version, or orphan Pets one by one, update their image, and join them back to the cluster. diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index d52a8c1e1f..52168247c7 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -42,7 +42,7 @@ provides a set of stateless replicas. Controllers such as * StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5. * As with all alpha/beta resources, you can disable StatefulSet through the `--runtime-config` option passed to the apiserver. -* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. +* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. * Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources. * StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service. 
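Since the governing Service is left to you, here is a minimal sketch of what it looks like; the name, labels and port are placeholders, and the StatefulSet would reference this Service through its `serviceName` field:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                  # placeholder; a StatefulSet points at this via spec.serviceName
  labels:
    app: nginx
spec:
  clusterIP: None              # headless: no virtual IP, DNS resolves directly to the Pods
  selector:
    app: nginx                 # must match the Pod labels of the StatefulSet
  ports:
  - port: 80
    name: web
```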
diff --git a/docs/getting-started-guides/aws.md b/docs/getting-started-guides/aws.md index 033345b7b5..f723837295 100644 --- a/docs/getting-started-guides/aws.md +++ b/docs/getting-started-guides/aws.md @@ -142,9 +142,9 @@ For more information, please read [kubeconfig files](/docs/concepts/cluster-admi See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. -The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) +The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) ## Scaling the cluster diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md index c60839bf59..a471d2db8d 100644 --- a/docs/getting-started-guides/coreos/bare_metal_offline.md +++ b/docs/getting-started-guides/coreos/bare_metal_offline.md @@ -653,7 +653,7 @@ Now that the CoreOS with Kubernetes installed is up and running lets spin up som See [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). ## Helping commands for debugging diff --git a/docs/getting-started-guides/dcos.md b/docs/getting-started-guides/dcos.md index 8955334e6d..07aa56c002 100644 --- a/docs/getting-started-guides/dcos.md +++ b/docs/getting-started-guides/dcos.md @@ -33,7 +33,7 @@ Explore the following resources for more information about Kubernetes, Kubernete - [DCOS Documentation](https://docs.mesosphere.com/) - [Managing DCOS Services](https://docs.mesosphere.com/services/kubernetes/) -- [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) +- [Kubernetes Examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) - [Kubernetes on Mesos Documentation](https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/README.md) - [Kubernetes on Mesos Release Notes](https://github.com/mesosphere/kubernetes-mesos/releases) - [Kubernetes on DCOS Package Source](https://github.com/mesosphere/kubernetes-mesos) @@ -110,7 +110,7 @@ $ dcos kubectl get pods --namespace=kube-system Names and ages may vary. -Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/README.md) or the [Kubernetes User Guide](/docs/user-guide/). +Now that Kubernetes is installed on DCOS, you may wish to explore the [Kubernetes Examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/README.md) or the [Kubernetes User Guide](/docs/user-guide/). 
## Uninstall diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md index a152d0af53..31d95c45f8 100644 --- a/docs/getting-started-guides/gce.md +++ b/docs/getting-started-guides/gce.md @@ -135,7 +135,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh Then, see [a simple nginx example](/docs/user-guide/simple-nginx) to try out your new cluster. -For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). The [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) is a good "getting started" walkthrough. +For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) is a good "getting started" walkthrough. ### Tearing down the cluster diff --git a/docs/getting-started-guides/mesos-docker.md b/docs/getting-started-guides/mesos-docker.md index a6540bebdb..05a26dac05 100644 --- a/docs/getting-started-guides/mesos-docker.md +++ b/docs/getting-started-guides/mesos-docker.md @@ -216,7 +216,7 @@ sudo route -n add -net 172.17.0.0 $(docker-machine ip kube-dev) To learn more about Pods, Volumes, Labels, Services, and Replication Controllers, start with the [Kubernetes Tutorials](/docs/tutorials/). - To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) + To skip to a more advanced example, see the [Guestbook Example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) 1. Destroy cluster diff --git a/docs/getting-started-guides/mesos/index.md b/docs/getting-started-guides/mesos/index.md index d09b280779..26cc6c248a 100644 --- a/docs/getting-started-guides/mesos/index.md +++ b/docs/getting-started-guides/mesos/index.md @@ -333,7 +333,7 @@ Future work will add instructions to this guide to enable support for Kubernetes [6]: http://mesos.apache.org/ [7]: https://github.com/kubernetes-incubator/kube-mesos-framework/blob/master/docs/issues.md [8]: https://github.com/mesosphere/kubernetes-mesos/issues -[9]: https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples +[9]: https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/ [10]: http://open.mesosphere.com/getting-started/cloud/google/mesosphere/#vpn-setup [11]: https://git.k8s.io/kubernetes/cluster/addons/dns/README.md#kube-dns [12]: https://git.k8s.io/kubernetes/cluster/addons/dns/kubedns-controller.yaml.in diff --git a/docs/getting-started-guides/openstack-heat.md b/docs/getting-started-guides/openstack-heat.md index c405b7eddd..70f20a89f9 100644 --- a/docs/getting-started-guides/openstack-heat.md +++ b/docs/getting-started-guides/openstack-heat.md @@ -167,7 +167,7 @@ Once the nginx pod is running, use the port-forward command to set up a proxy fr You should now see nginx on [http://localhost:8888](). -For more complex examples please see the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). +For more complex examples please see the [examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). 
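The "simple nginx" walkthrough referenced by these guides amounts to running an ordinary Deployment; a minimal sketch is below (names are placeholders, and this is not the literal content of that page). Once it is running, you can expose it with a Service or port-forward to it as shown above.

```yaml
apiVersion: apps/v1beta1       # or whichever Deployment API version your cluster serves
kind: Deployment
metadata:
  name: my-nginx               # placeholder name
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```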
## Administering your cluster with Openstack diff --git a/docs/getting-started-guides/rkt/index.md b/docs/getting-started-guides/rkt/index.md index d95e073f77..bf048ebaf2 100644 --- a/docs/getting-started-guides/rkt/index.md +++ b/docs/getting-started-guides/rkt/index.md @@ -151,7 +151,7 @@ The `kube-up` script is not yet supported on AWS. Instead, we recommend followin ### Deploy apps to the cluster -After creating the cluster, you can start deploying applications. For an introductory example, [deploy a simple nginx web server](/docs/user-guide/simple-nginx). Note that this example did not have to be modified for use with a "rktnetes" cluster. More examples can be found in the [Kubernetes examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/). +After creating the cluster, you can start deploying applications. For an introductory example, [deploy a simple nginx web server](/docs/user-guide/simple-nginx). Note that this example did not have to be modified for use with a "rktnetes" cluster. More examples can be found in the [Kubernetes examples directory](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/). ## Modular isolation with interchangeable stage1 images diff --git a/docs/getting-started-guides/ubuntu/manual.md b/docs/getting-started-guides/ubuntu/manual.md index e45fc69a33..7957e10bef 100644 --- a/docs/getting-started-guides/ubuntu/manual.md +++ b/docs/getting-started-guides/ubuntu/manual.md @@ -170,7 +170,7 @@ NAME STATUS AGE VERSION 10.10.103.250 Ready 3d v1.6.0+fff5156 ``` -Also you can run Kubernetes [guest-example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) to build a redis backend cluster. +You can also run the Kubernetes [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) to build a Redis-backed cluster. ### Deploy addons diff --git a/docs/getting-started-guides/vsphere.md b/docs/getting-started-guides/vsphere.md index 44f7b1287e..cc62a8e215 100644 --- a/docs/getting-started-guides/vsphere.md +++ b/docs/getting-started-guides/vsphere.md @@ -32,7 +32,7 @@ For more detail visit [vSphere Storage for Kubernetes Documentation](https://vmw Documentation for how to use vSphere managed storage can be found in the [persistent volumes user guide](/docs/concepts/storage/persistent-volumes/#vsphere) and the [volumes user guide](/docs/concepts/storage/volumes/#vspherevolume). -Examples can be found [here](https://git.k8s.io/kubernetes/examples/volumes/vsphere). +Examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere). #### Enable vSphere Cloud Provider diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md index a623ad36bf..9d218e816f 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -23,7 +23,7 @@ Check the location and credentials that kubectl knows about with this command: $ kubectl config view ``` -Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using +Many of the [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) provide an introduction to using kubectl and complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index).
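The output of `kubectl config view` is a kubeconfig document shaped roughly like the sketch below; the cluster, user and context names, the server address, and the placeholder credentials are all illustrative:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                     # illustrative names throughout
  cluster:
    server: https://1.2.3.4            # your apiserver address
    certificate-authority-data: REDACTED
users:
- name: my-user
  user:
    token: REDACTED                    # placeholder; real credentials may be a token or client certificate
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
```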
### Directly accessing the REST API @@ -172,7 +172,7 @@ From within a pod the recommended ways to connect to API are: process within a container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it. See this [example of using kubectl proxy - in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/). + in a pod](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/kubectl-container/). - use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions. They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go) diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md index 995e8ad69d..f1fd4ea5c1 100644 --- a/docs/tasks/administer-cluster/access-cluster-api.md +++ b/docs/tasks/administer-cluster/access-cluster-api.md @@ -31,7 +31,7 @@ Check the location and credentials that kubectl knows about with this command: $ kubectl config view ``` -Many of the [examples](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/) provide an introduction to using +Many of the [examples](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/) provide an introduction to using kubectl. Complete documentation is found in the [kubectl manual](/docs/user-guide/kubectl/index). ### Directly accessing the REST API @@ -194,7 +194,7 @@ From within a pod the recommended ways to connect to API are: process within a container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it. See this [example of using kubectl proxy - in a pod](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/kubectl-container/). + in a pod](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/kubectl-container/). - use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions. They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go) diff --git a/docs/tasks/job/fine-parallel-processing-work-queue/index.md b/docs/tasks/job/fine-parallel-processing-work-queue/index.md index 1b4ff01698..887f5e6dd2 100644 --- a/docs/tasks/job/fine-parallel-processing-work-queue/index.md +++ b/docs/tasks/job/fine-parallel-processing-work-queue/index.md @@ -31,7 +31,7 @@ Here is an overview of the steps in this example: ## Starting Redis For this example, for simplicity, we will start a single instance of Redis. -See the [Redis Example](https://git.k8s.io/kubernetes/examples/guestbook) for an example +See the [Redis Example](https://github.com/kubernetes/examples/tree/master/guestbook) for an example of deploying Redis scalably and redundantly. Start a temporary Pod running Redis and a service so we can find it. diff --git a/docs/tutorials/stateful-application/basic-stateful-set.md b/docs/tutorials/stateful-application/basic-stateful-set.md index 30c13a4f8e..32420ddf8b 100644 --- a/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/docs/tutorials/stateful-application/basic-stateful-set.md @@ -23,7 +23,7 @@ following Kubernetes concepts. 
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/volumes/) -* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/) +* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) * [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) * [kubectl CLI](/docs/user-guide/kubectl) diff --git a/docs/tutorials/stateful-application/zookeeper.md b/docs/tutorials/stateful-application/zookeeper.md index 1ef52a7a23..09159d471d 100644 --- a/docs/tutorials/stateful-application/zookeeper.md +++ b/docs/tutorials/stateful-application/zookeeper.md @@ -26,7 +26,7 @@ Kubernetes concepts. * [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/volumes/) -* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/) +* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/staging/persistent-volume-provisioning/) * [ConfigMaps](/docs/tasks/configure-pod-container/configmap/) * [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) * [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget) diff --git a/docs/user-guide/walkthrough/index.md b/docs/user-guide/walkthrough/index.md index cfd8d1e2e6..ae5a938ec9 100644 --- a/docs/user-guide/walkthrough/index.md +++ b/docs/user-guide/walkthrough/index.md @@ -162,4 +162,4 @@ Finally, we have also introduced an environment variable to the `git-monitor` co ## What's Next? Continue on to [Kubernetes 201](/docs/user-guide/walkthrough/k8s201) or -for a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/) +for a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/) diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index 594cf05c3c..f5f42d7120 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -225,4 +225,4 @@ For more information about health checking, see [Container Probes](/docs/user-gu ## What's Next? -For a complete application see the [guestbook example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/guestbook/). +For a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{page.githubbranch}}/guestbook/).
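Returning to the health-checking pointer above: a container probe is declared inline in the pod spec. The sketch below is illustrative (name, path and timings are placeholders) and shows an HTTP liveness probe of the kind covered in Kubernetes 201:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /                # probe this HTTP path on the container
        port: 80
      initialDelaySeconds: 15  # give the server time to start before probing
      timeoutSeconds: 1
```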