Update User Guide and Admin links to point to new resources. (#3438)

* Update links to outdated user-guide and admin docs

* Add script for updating outdated links.

* Update regex to include init-containers file.

* Pull upstream, rewrite links in and to namespaces walkthrough.
This commit is contained in:
PaulJuliusMartinez 2017-04-19 10:56:47 -07:00 committed by Andrew Chen
parent 1592494620
commit 7f0294c579
89 changed files with 304 additions and 219 deletions

View File

@ -8,7 +8,7 @@ title: Controlling Access to the Kubernetes API
Users [access the API](/docs/user-guide/accessing-the-cluster) using `kubectl`,
client libraries, or by making REST requests. Both human users and
-[Kubernetes service accounts](/docs/user-guide/service-accounts/) can be
+[Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) can be
authorized for API access.
When a request reaches the API, it goes through several stages, illustrated in the
following diagram:

View File

@ -205,7 +205,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
-See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/admin/resourcequota/) for more details.
+See the [resourceQuota design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_resource_quota.md) and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.
It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
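To make the constraint concrete, here is a minimal sketch of the kind of `ResourceQuota` object this plug-in enforces; the name, namespace, and limit values are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # hypothetical name
  namespace: my-namespace    # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU requests allowed in the namespace
    requests.memory: 8Gi     # total memory requests allowed in the namespace
    pods: "10"               # maximum number of pods in the namespace
```

With this plug-in enabled, a pod creation that would push the namespace past any of these totals is rejected at admission time.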
@ -218,7 +218,7 @@ your Kubernetes deployment, you MUST use this plug-in to enforce those constrain
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the `default` namespace.
-See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](/docs/admin/limitrange/) for more details.
+See the [limitRange design doc](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/admission_control_limit_range.md) and the [example of Limit Range](/docs/tasks/configure-pod-container/limit-range/) for more details.
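As a hedged sketch, a `LimitRange` that reproduces the 0.1 CPU default request described above might look like this (the object name is hypothetical):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults         # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m              # 0.1 CPU, applied to containers that specify no request
```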
### InitialResources (experimental)

View File

@ -67,7 +67,7 @@ DELETE | delete (for individual resources), deletecollection (for collections
Some components perform authorization checks for additional permissions using specialized verbs. For example:
-* [PodSecurityPolicy](/docs/user-guide/pod-security-policy/) checks for authorization of the `use` verb on `podsecuritypolicies` resources in the `extensions` API group.
+* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) checks for authorization of the `use` verb on `podsecuritypolicies` resources in the `extensions` API group.
* [RBAC](/docs/admin/authorization/rbac/#privilege-escalation-prevention-and-bootstrapping) checks for authorization
of the `bind` verb on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.
* [Authentication](/docs/admin/authentication/) layer checks for authorization of the `impersonate` verb on `users`, `groups`, and `userextras` in the `authentication.k8s.io` API group, and the `serviceaccounts` in the core API group.
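For illustration, granting the `use` verb described above could be sketched as an RBAC rule like the following; the role name and the `restricted` policy name are hypothetical:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1   # RBAC API version of this docs era
metadata:
  name: psp-user             # hypothetical role name
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]                 # hypothetical PodSecurityPolicy name
  verbs: ["use"]
```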

View File

@ -247,7 +247,7 @@ Group information in Kubernetes is currently provided by the Authenticator
modules. Groups, like users, are represented as strings, and that string
has no format requirements, other than that the prefix `system:` is reserved.
-[Service Accounts](/docs/user-guide/service-accounts/) have usernames with the `system:serviceaccount:` prefix and belong
+[Service Accounts](/docs/tasks/configure-pod-container/configure-service-account/) have usernames with the `system:serviceaccount:` prefix and belong
to groups with the `system:serviceaccounts` prefix.
#### Role Binding Examples
@ -668,7 +668,7 @@ In order from most secure to least secure, the approaches are:
--namespace=my-namespace
```
-Many [add-ons](/docs/admin/addons/) currently run as the "default" service account in the "kube-system" namespace.
+Many [add-ons](/docs/concepts/cluster-administration/addons/) currently run as the "default" service account in the "kube-system" namespace.
To allow those add-ons to run with super-user access, grant cluster-admin permissions to the "default" service account in the "kube-system" namespace.
NOTE: Enabling this means the "kube-system" namespace contains secrets that grant super-user access to the API.
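A sketch of the binding that paragraph describes, granting cluster-admin to the "default" service account in "kube-system" (the binding name is arbitrary):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: add-on-cluster-admin   # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # built-in super-user role
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```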

View File

@ -17,7 +17,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) is an overlay network provider that can be used with Kubernetes.
-* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/user-guide/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
+* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/networkpolicies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
## Visualization & Control

View File

@ -83,7 +83,7 @@ In other environments you may need to configure the machine yourself and tell th
If you are using GCE or GKE, you can configure your cluster so that it is automatically rescaled based on
pod needs.
-As described in [Compute Resource](/docs/user-guide/compute-resources/), users can reserve how much CPU and memory is allocated to pods.
+As described in [Compute Resource](/docs/concepts/configuration/manage-compute-resources-container/), users can reserve how much CPU and memory is allocated to pods.
This information is used by the Kubernetes scheduler to find a place to run the pod. If there is
no node that has enough free capacity (or doesn't match other pod requirements) then the pod has
to wait until some pods are terminated or a new node is added.
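As a minimal illustration of the reservations the scheduler consults, a pod spec might declare requests and limits like this (names and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # illustrative image
    resources:
      requests:
        cpu: 500m             # the scheduler only places the pod on a node with this much free CPU
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```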

View File

@ -26,12 +26,12 @@ by Kelsey Hightower, are also available to help you.
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
-general, and [Services](/docs/user-guide/services/) in particular.
+general, and [Services](/docs/concepts/services-networking/service/) in particular.
## Overview
Federated Services are created in much the same way as traditional
-[Kubernetes Services](/docs/user-guide/services/) by making an API
+[Kubernetes Services](/docs/concepts/services-networking/service/) by making an API
call which specifies the desired properties of your service. In the
case of Federated Services, this API call is directed to the
Federation API endpoint, rather than a Kubernetes cluster API

View File

@ -72,4 +72,4 @@ Including:
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
-See [Configuring Out Of Resource Handling](/docs/admin/out-of-resource/) for more details.
+See [Configuring Out Of Resource Handling](/docs/concepts/cluster-administration/out-of-resource/) for more details.

View File

@ -195,10 +195,10 @@ to significant resource consumption. Moreover, you won't be able to access
those logs using `kubectl logs` command, because they are not controlled
by the kubelet.
-As an example, you could use [Stackdriver](/docs/user-guide/logging/stackdriver/),
+As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that
you can use to implement this approach. The first file contains
-a [ConfigMap](/docs/user-guide/configmap/) to configure fluentd.
+a [ConfigMap](/docs/tasks/configure-pod-container/configmap/) to configure fluentd.
{% include code.html language="yaml" file="fluentd-sidecar-config.yaml" ghlink="/docs/concepts/cluster-administration/fluentd-sidecar-config.yaml" %}

View File

@ -273,7 +273,7 @@ metadata:
...
```
-For more information, please see [annotations](/docs/user-guide/annotations/) and [kubectl annotate](/docs/user-guide/kubectl/v1.6/#annotate) document.
+For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/user-guide/kubectl/v1.6/#annotate) document.
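For example, an annotation can be set declaratively in the manifest; the pod name and annotation key below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod                      # hypothetical name
  annotations:
    description: "owned by the web team"   # hypothetical key/value; any string is allowed
```

The rough imperative equivalent would be `kubectl annotate pod annotated-pod description='owned by the web team'`.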
## Scaling your application
@ -430,9 +430,9 @@ To update to version 1.9.1, simply change `.spec.template.spec.containers[0].ima
$ kubectl edit deployment/my-nginx
```
-That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/user-guide/deployments/).
+That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/).
## What's next?
-- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/user-guide/introspection-and-debugging/)
+- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/tasks/debug-application-cluster/debug-application-introspection/)
- [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/)

View File

@ -8,10 +8,10 @@ Kubernetes approaches networking somewhat differently than Docker does by
default. There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications: this is solved by
-   [pods](/docs/user-guide/pods/) and `localhost` communications.
+   [pods](/docs/concepts/workloads/pods/pod/) and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
-3. Pod-to-Service communications: this is covered by [services](/docs/user-guide/services/).
-4. External-to-Service communications: this is covered by [services](/docs/user-guide/services/).
+3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
+4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
* TOC
{:toc}
@ -202,7 +202,7 @@ Calico can also be run in policy enforcement mode in conjunction with other netw
### Romana
-[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/user-guide/networkpolicies/) to provide isolation across network namespaces.
+[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/networkpolicies/) to provide isolation across network namespaces.
### Weave Net from Weaveworks

View File

@ -67,7 +67,7 @@ of configurations are not currently supported by the kubelet. For example, it is
*not OK* to store volumes and logs in a dedicated `filesystem`.
In future releases, the `kubelet` will deprecate the existing [garbage
-collection](/docs/admin/garbage-collection/) support in favor of eviction in
+collection](/docs/concepts/cluster-administration/kubelet-garbage-collection/) support in favor of eviction in
response to disk pressure.
### Eviction Thresholds

View File

@ -4,7 +4,7 @@ assignees:
title: Static Pods
---
-**If you are running clustered Kubernetes and are using static pods to run a pod on every node, you should probably be using a [DaemonSet](/docs/admin/daemons/)!**
+**If you are running clustered Kubernetes and are using static pods to run a pod on every node, you should probably be using a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)!**
*Static pods* are managed directly by the kubelet daemon on a specific node, without the API server observing them. They are not associated with any replication controller; the kubelet daemon itself watches each static pod and restarts it when it crashes. There is no health checking, though. Static pods are always bound to one kubelet daemon and always run on the same node as it.
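A minimal sketch of such a static pod manifest follows; the pod name is hypothetical, and `/etc/kubernetes/manifests` is a conventional value for the kubelet's manifest directory that may differ in your cluster:

```yaml
# Saved as e.g. /etc/kubernetes/manifests/static-web.yaml on the node;
# the kubelet picks it up directly, with no API server involvement.
apiVersion: v1
kind: Pod
metadata:
  name: static-web            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx              # illustrative image
    ports:
    - containerPort: 80
```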

View File

@ -118,4 +118,4 @@ spec:
**Note**: a pod with the _unsafe_ sysctls specified above will fail to launch on
any node which has not enabled those two _unsafe_ sysctls explicitly. As with
_node-level_ sysctls it is recommended to use [_taints and toleration_
-feature](/docs/user-guide/kubectl/v1.6/#taint) or [labels on nodes](/docs/user-guide/node-selection/) to schedule those pods onto the right nodes.
+feature](/docs/user-guide/kubectl/v1.6/#taint) or [labels on nodes](/docs/concepts/configuration/assign-pod-node/) to schedule those pods onto the right nodes.
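A hedged sketch of such scheduling constraints; the node label, taint key, and values below are entirely hypothetical and must match whatever labels/taints you applied to the sysctl-enabled nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-pod                       # hypothetical name
spec:
  nodeSelector:
    sysctls: enabled                     # hypothetical node label
  tolerations:
  - key: "sysctls"                       # hypothetical taint key on the sysctl-enabled nodes
    operator: "Equal"
    value: "unsafe"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx                         # illustrative image
```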

View File

@ -6,7 +6,7 @@ assignees:
title: Assigning Pods to Nodes
---
-You can constrain a [pod](/docs/user-guide/pods/) to only be able to run on particular [nodes](/docs/admin/node/) or to prefer to
+You can constrain a [pod](/docs/concepts/workloads/pods/pod/) to only be able to run on particular [nodes](/docs/concepts/nodes/node/) or to prefer to
run on particular nodes. There are several ways to do this, and they all use
[label selectors](/docs/user-guide/labels/) to make the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
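When a constraint is needed, the simplest mechanism, `nodeSelector`, can be sketched as follows (`disktype: ssd` is a hypothetical label you would first apply to the target nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd             # hypothetical node label; pod only schedules onto matching nodes
  containers:
  - name: nginx
    image: nginx
```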

View File

@ -242,7 +242,7 @@ system daemons use a portion of the available resources. The `allocatable` field
gives the amount of resources that are available to Pods. For more information, see
[Node Allocatable Resources](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md).
-The [resource quota](/docs/admin/resourcequota/) feature can be configured
+The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.

View File

@ -39,8 +39,8 @@ This document is meant to highlight and consolidate in one place configuration b
## Services
-- It's typically best to create a [service](/docs/user-guide/services/) before corresponding [replication
-  controllers](/docs/user-guide/replication-controller/), so that the scheduler can spread the pods comprising the
+- It's typically best to create a [service](/docs/concepts/services-networking/service/) before corresponding [replication
+  controllers](/docs/concepts/workloads/controllers/replicationcontroller/), so that the scheduler can spread the pods comprising the
service. You can also create a replication controller without specifying replicas (this will set
replicas=1), create a service, then scale up the replication controller. This can be useful in
ensuring that one replica works before creating lots of them.
@ -50,8 +50,8 @@ This document is meant to highlight and consolidate in one place configuration b
number of places that pod can be scheduled, due to port conflicts— you can only schedule as many
such Pods as there are nodes in your Kubernetes cluster.
-  If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/user-guide/connecting-to-applications-proxy/) or [kubectl port-forward](/docs/user-guide/connecting-to-applications-port-forward/).
-  You can use a [Service](/docs/user-guide/services/) object for external service access.
+  If you only need access to the port for debugging purposes, you can use the [kubectl proxy and apiserver proxy](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) or [kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
+  You can use a [Service](/docs/concepts/services-networking/service/) object for external service access.
If you do need to expose a pod's port on the host machine, consider using a [NodePort](/docs/user-guide/services/#type-nodeport) service before resorting to `hostPort`.
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
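A minimal sketch of such a NodePort Service; the names, labels, and port numbers are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app               # hypothetical pod label the service targets
  ports:
  - port: 80                  # cluster-internal service port
    targetPort: 8080          # port the container listens on
    nodePort: 30080           # optional; must fall in the node-port range (default 30000-32767)
```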
@ -78,7 +78,7 @@ This document is meant to highlight and consolidate in one place configuration b
version-agnostic controller names. See the [documentation](/docs/tasks/run-application/rolling-update-replication-controller/) on
the rolling-update command for more detail.
-Note that the [Deployment](/docs/user-guide/deployments/) object obviates the need to manage replication
+Note that the [Deployment](/docs/concepts/workloads/controllers/deployment/) object obviates the need to manage replication
controller 'version names'. A desired state of an object is described by a Deployment, and if
changes to that spec are _applied_, the deployment controller changes the actual state to the
desired state at a controlled rate. (Deployment objects are currently part of the [`extensions`

View File

@ -87,7 +87,7 @@ See [here](http://releases.k8s.io/HEAD/cluster/addons) for more details.
#### DNS
While the other addons are not strictly required, all Kubernetes
-clusters should have [cluster DNS](/docs/admin/dns/), as many examples rely on it.
+clusters should have [cluster DNS](/docs/concepts/services-networking/dns-pod-service/), as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your
environment, which serves DNS records for Kubernetes services.

View File

@ -66,18 +66,18 @@ At a minimum, Kubernetes can schedule and run application containers on clusters
Kubernetes satisfies a number of common needs of applications running in production, such as:
-* [co-locating helper processes](/docs/user-guide/pods/), facilitating composite applications and preserving the one-application-per-container model,
+* [co-locating helper processes](/docs/concepts/workloads/pods/pod/), facilitating composite applications and preserving the one-application-per-container model,
* [mounting storage systems](/docs/concepts/storage/volumes/),
-* [distributing secrets](/docs/user-guide/secrets/),
+* [distributing secrets](/docs/concepts/configuration/secret/),
* [application health checking](/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks),
-* [replicating application instances](/docs/user-guide/replication-controller/),
-* [horizontal auto-scaling](/docs/user-guide/horizontal-pod-autoscaling/),
-* [naming and discovery](/docs/user-guide/connecting-applications/),
-* [load balancing](/docs/user-guide/services/),
+* [replicating application instances](/docs/concepts/workloads/controllers/replicationcontroller/),
+* [horizontal auto-scaling](/docs/tasks/run-application/horizontal-pod-autoscale/),
+* [naming and discovery](/docs/concepts/services-networking/connect-applications-service/),
+* [load balancing](/docs/concepts/services-networking/service/),
* [rolling updates](/docs/tasks/run-application/rolling-update-replication-controller/),
-* [resource monitoring](/docs/user-guide/monitoring/),
-* [log access and ingestion](/docs/user-guide/logging/overview/),
-* [support for introspection and debugging](/docs/user-guide/introspection-and-debugging/), and
+* [resource monitoring](/docs/concepts/cluster-administration/resource-usage-monitoring/),
+* [log access and ingestion](/docs/concepts/clusters/logging/),
+* [support for introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/), and
* [identity and authorization](/docs/admin/authorization/).
This provides the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and facilitates portability across infrastructure providers.
@ -88,7 +88,7 @@ For more details, see the [user guide](/docs/user-guide/).
Even though Kubernetes provides a lot of functionality, there are always new scenarios that would benefit from new features. Application-specific workflows can be streamlined to accelerate developer velocity. Ad hoc orchestration that is acceptable initially often requires robust automation at scale. This is why Kubernetes was also designed to serve as a platform for building an ecosystem of components and tools to make it easier to deploy, scale, and manage applications.
-[Labels](/docs/user-guide/labels/) empower users to organize their resources however they please. [Annotations](/docs/user-guide/annotations/) enable users to decorate resources with custom information to facilitate their workflows and provide an easy way for management tools to checkpoint state.
+[Labels](/docs/user-guide/labels/) empower users to organize their resources however they please. [Annotations](/docs/concepts/overview/working-with-objects/annotations/) enable users to decorate resources with custom information to facilitate their workflows and provide an easy way for management tools to checkpoint state.
Additionally, the [Kubernetes control plane](/docs/admin/cluster-components) is built upon the same [APIs](/docs/api/) that are available to developers and users. Users can write their own controllers, [schedulers](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/scheduler.md), etc., if they choose, with [their own APIs](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/extending-api.md) that can be targeted by a general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).

View File

@ -157,7 +157,7 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon
#### Resources that support set-based requirements
-Newer resources, such as [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/), [`Deployment`](/docs/user-guide/deployments/), [`Replica Set`](/docs/user-guide/replicasets/), and [`Daemon Set`](/docs/admin/daemons/), support _set-based_ requirements as well.
+Newer resources, such as [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/), [`Deployment`](/docs/concepts/workloads/controllers/deployment/), [`Replica Set`](/docs/concepts/workloads/controllers/replicaset/), and [`Daemon Set`](/docs/concepts/workloads/controllers/daemonset/), support _set-based_ requirements as well.
```yaml
selector:

View File

@ -18,7 +18,7 @@ need the features they provide.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
-Namespaces are a way to divide cluster resources between multiple uses (via [resource quota](/docs/admin/resourcequota/)).
+Namespaces are a way to divide cluster resources between multiple uses (via [resource quota](/docs/concepts/policy/resource-quotas/)).
In future versions of Kubernetes, objects in the same namespace will have the same
access control policies by default.

View File

@ -88,9 +88,9 @@ spec:
{% capture whatsnext %}
-* [Security Context](/docs/user-guide/security-context/)
+* [Security Context](/docs/concepts/policy/security-context/)
-* [Pod Security Policy](/docs/user-guide/pod-security-policy/)
+* [Pod Security Policy](/docs/concepts/policy/pod-security-policy/)
* [SecurityContext](/docs/resources-reference/v1.6/#securitycontext-v1-core)

View File

@ -26,7 +26,7 @@ Resource quotas work like this:
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the LimitRange admission controller to force defaults for pods that make no compute resource requirements.
-  See the [walkthrough](/docs/admin/resourcequota/walkthrough/) for an example to avoid this problem.
+  See the [walkthrough](/docs/tasks/configure-pod-container/apply-resource-quota-limit/) for an example to avoid this problem.
Examples of policies that could be created using namespaces and quotas are:
@ -233,7 +233,7 @@ restrictions around nodes: pods from several namespaces may run on the same node
## Example
-See a [detailed example for how to use resource quota](/docs/admin/resourcequota/walkthrough/).
+See a [detailed example for how to use resource quota](/docs/tasks/configure-pod-container/apply-resource-quota-limit/).
## Read More

View File

@ -304,7 +304,7 @@ kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
...
```
-If you have created the service or in the case it should be created by default but it does not appear, see this [debugging services page](/docs/user-guide/debugging-services/) for more information.
+If you have created the service or in the case it should be created by default but it does not appear, see this [debugging services page](/docs/tasks/debug-application-cluster/debug-service/) for more information.
#### Are DNS endpoints exposed?
@ -320,7 +320,7 @@ NAME ENDPOINTS AGE
kube-dns 10.180.3.17:53,10.180.3.17:53 1h
```
-If you do not see the endpoints, see endpoints section in the [debugging services documentation](/docs/user-guide/debugging-services/).
+If you do not see the endpoints, see endpoints section in the [debugging services documentation](/docs/tasks/debug-application-cluster/debug-service/).
For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/kubernetes/tree/master/examples/cluster-dns) in the Kubernetes GitHub repository.

View File

@ -14,8 +14,8 @@ Throughout this doc you will see a few terms that are sometimes used interchange
* Node: A single virtual or physical machine in a Kubernetes cluster.
* Cluster: A group of nodes, firewalled from the internet, that are the primary compute resources managed by Kubernetes.
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloudprovider or a physical piece of hardware.
-* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/admin/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](/docs/admin/ovs-networking/).
-* Service: A Kubernetes [Service](/docs/user-guide/services/) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
+* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the [Kubernetes networking model](/docs/concepts/cluster-administration/networking/). Examples of a Cluster network include Overlays such as [flannel](https://github.com/coreos/flannel#flannel) or SDNs such as [OVS](/docs/admin/ovs-networking/).
+* Service: A Kubernetes [Service](/docs/concepts/services-networking/service/) that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
## What is Ingress?
@ -273,7 +273,7 @@ You can achieve the same by invoking `kubectl replace -f` on a modified Ingress
## Failing across availability zones
-Techniques for spreading traffic across failure domains differs between cloud providers. Please check the documentation of the relevant Ingress controller for details. Please refer to the federation [doc](/docs/user-guide/federation/) for details on deploying Ingress in a federated cluster.
+Techniques for spreading traffic across failure domains differs between cloud providers. Please check the documentation of the relevant Ingress controller for details. Please refer to the federation [doc](/docs/concepts/cluster-administration/federation.md) for details on deploying Ingress in a federated cluster.
## Future Work

View File

@ -424,7 +424,7 @@ A `persistentVolumeClaim` volume is used to mount a
way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
-See the [PersistentVolumes example](/docs/user-guide/persistent-volumes/) for more
+See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more
details.
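A hedged sketch of mounting a claim into a pod; the pod and claim names are hypothetical, and the claim must already exist:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # illustrative image
    volumeMounts:
    - mountPath: /data        # where the claimed storage appears in the container
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: my-claim     # hypothetical, previously created PersistentVolumeClaim
```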
### downwardAPI
@ -432,7 +432,7 @@ details.
A `downwardAPI` volume is used to make downward API data available to applications.
It mounts a directory and writes the requested data in plain text files.
-See the [`downwardAPI` volume example](/docs/user-guide/downward-api/volume/) for more details.
+See the [`downwardAPI` volume example](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) for more details.
### FlexVolume

View File

@ -30,7 +30,7 @@ different flags and/or different memory and cpu requests for different hardware
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/),
-[configuring containers](/docs/user-guide/configuring-containers/), and [working with resources](/docs/user-guide/working-with-resources/) documents.
+[configuring containers](/docs/user-guide/configuring-containers/), and [working with resources](/docs/concepts/tools/kubectl/object-management-overview/) documents.
A DaemonSet also needs a [`.spec`](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status) section.
@ -55,7 +55,7 @@ a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) or other new re
The `spec.selector` is an object consisting of two fields:
-* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/docs/user-guide/replication-controller/)
+* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/)
* `matchExpressions` - allows you to build more sophisticated selectors by specifying a key, a list of values, and an operator that relates the key and values.
@ -74,7 +74,7 @@ a node for testing.
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
-selector](/docs/user-guide/node-selection/). Likewise if you specify a `.spec.template.spec.affinity`
+selector](/docs/concepts/configuration/assign-pod-node/). Likewise if you specify a `.spec.template.spec.affinity`
then DaemonSet controller will create pods on nodes which match that [node affinity](/docs/concepts/configuration/assign-pod-node/).
If you do not specify either, then the DaemonSet controller will create pods on all nodes.
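For instance, a hedged fragment (the `disktype: ssd` node label is an assumption) restricting a DaemonSet to matching nodes might look like:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd    # only nodes carrying this label get a pod
```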
@ -156,7 +156,7 @@ use a DaemonSet rather than creating individual pods.
### Static Pods
It is possible to create pods by writing a file to a certain directory watched by Kubelet. These
are called [static pods](/docs/admin/static-pods/).
are called [static pods](/docs/concepts/cluster-administration/static-pod/).
Unlike DaemonSet, static pods cannot be managed with kubectl
or other Kubernetes API clients. Static pods do not depend on the apiserver, making them useful
in cluster bootstrapping cases. Also, static pods may be deprecated in the future.


@ -9,7 +9,7 @@ title: Deployments
## What is a Deployment?
A _Deployment_ provides declarative updates for [Pods](/docs/user-guide/pods/) and [Replica Sets](/docs/user-guide/replicasets/) (the next-generation Replication Controller).
A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and [Replica Sets](/docs/concepts/workloads/controllers/replicaset/) (the next-generation Replication Controller).
You only need to describe the desired state in a Deployment object, and the Deployment
controller will change the actual state to the desired state at a controlled rate for you.
You can define Deployments to create new resources, or replace existing ones
@ -684,7 +684,7 @@ the same schema as a [Pod](/docs/user-guide/pods), except it is nested and does
In addition to required fields for a Pod, a pod template in a Deployment must specify appropriate
labels (i.e. don't overlap with other controllers, see [selector](#selector)) and an appropriate restart policy.
Only a [`.spec.template.spec.restartPolicy`](/docs/user-guide/pod-states/) equal to `Always` is allowed, which is the default
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/) equal to `Always` is allowed, which is the default
if not specified.
### Replicas


@ -39,7 +39,7 @@ __Prerequisites__
This doc assumes familiarity with the following Kubernetes concepts:
* [Pods](/docs/user-guide/pods/single-container/)
* [Cluster DNS](/docs/admin/dns/)
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [Persistent Volumes](/docs/concepts/storage/volumes/)
* [Persistent Volume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/README.md)
@ -230,7 +230,7 @@ web-1
A pet can piece together its own identity:
1. Use the [downward api](/docs/user-guide/downward-api/) to find its pod name
1. Use the [downward api](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/) to find its pod name
2. Run `hostname` to find its DNS name
3. Run `mount` or `df` to find its volumes (usually this is unnecessary)


@ -13,7 +13,7 @@ title: Replica Sets
ReplicaSet is the next-generation Replication Controller. The only difference
between a _ReplicaSet_ and a
[_Replication Controller_](/docs/user-guide/replication-controller/) right now is
[_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is
the selector support. ReplicaSet supports the new set-based selector requirements
as described in the [labels user guide](/docs/user-guide/labels/#label-selectors)
whereas a Replication Controller only supports equality-based selector requirements.
@ -28,7 +28,7 @@ imperative whereas Deployments are declarative, so we recommend using Deployment
through the [`rollout`](/docs/user-guide/kubectl/v1.6/#rollout) command.
While ReplicaSets can be used independently, today it's mainly used by
[Deployments](/docs/user-guide/deployments/) as a mechanism to orchestrate pod
[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod
creation, deletion and updates. When you use Deployments you don't have to worry
about managing the ReplicaSets that they create. Deployments own and manage
their ReplicaSets.
@ -79,7 +79,7 @@ frontend-qhloh 1/1 Running 0 1m
## ReplicaSet as an Horizontal Pod Autoscaler target
A ReplicaSet can also be a target for
[Horizontal Pod Autoscalers (HPA)](/docs/user-guide/horizontal-pod-autoscaling/),
[Horizontal Pod Autoscalers (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/),
i.e. a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.
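A hedged sketch of such an HPA manifest (the ReplicaSet name `frontend` and the thresholds are illustrative, not necessarily the exact example referenced above):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler          # illustrative name
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend               # the ReplicaSet to scale
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```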


@ -85,7 +85,7 @@ specifies an expression that just gets the name from each pod in the returned li
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [here](/docs/user-guide/simple-yaml/),
[here](/docs/user-guide/configuring-containers/), and [here](/docs/user-guide/working-with-resources/).
[here](/docs/user-guide/configuring-containers/), and [here](/docs/concepts/tools/kubectl/object-management-overview/).
A ReplicationController also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
@ -94,13 +94,13 @@ A ReplicationController also needs a [`.spec` section](https://github.com/kubern
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](#pod-template). It has exactly
the same schema as a [pod](/docs/user-guide/pods/), except it is nested and does not have an `apiVersion` or
the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or
`kind`.
In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
labels (i.e. don't overlap with other controllers, see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`.spec.template.spec.restartPolicy`](/docs/user-guide/pod-states/) equal to `Always` is allowed, which is the default
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/) equal to `Always` is allowed, which is the default
if not specified.
For local container restarts, ReplicationControllers delegate to an agent on the node,
@ -229,14 +229,14 @@ object](/docs/api-reference/v1.6/#replicationcontroller-v1-core).
### ReplicaSet
[`ReplicaSet`](/docs/user-guide/replicasets/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement).
It's mainly used by [`Deployment`](/docs/user-guide/deployments/) as a mechanism to orchestrate pod creation, deletion and updates.
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/user-guide/labels/#set-based-requirement).
It's mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
### Deployment (Recommended)
[`Deployment`](/docs/user-guide/deployments/) is a higher-level API object that updates its underlying Replica Sets and their Pods
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
@ -251,7 +251,7 @@ Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead o
### DaemonSet
Use a [`DaemonSet`](/docs/admin/daemons/) instead of a ReplicationController for pods that provide a
Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.


@ -39,8 +39,8 @@ In the above, stable is synonymous with persistence across Pod (re)schedulings.
If an application doesn't require any stable identifiers or ordered deployment,
deletion, or scaling, you should deploy your application with a controller that
provides a set of stateless replicas. Controllers such as
[Deployment](/docs/user-guide/deployments/) or
[ReplicaSet](/docs/user-guide/replicasets/) may be better suited to your stateless needs.
[Deployment](/docs/concepts/workloads/controllers/deployment/) or
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) may be better suited to your stateless needs.
## Limitations
* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.


@ -158,18 +158,18 @@ duration (determined by the master) will expire and be automatically destroyed.
Three types of controllers are available:
- Use a [Job](/docs/user-guide/jobs/) for Pods that are expected to terminate,
- Use a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) for Pods that are expected to terminate,
for example, batch computations. Jobs are appropriate only for Pods with
`restartPolicy` equal to OnFailure or Never.
- Use a [ReplicationController](/docs/user-guide/replication-controller/),
[ReplicaSet](/docs/user-guide/replicasets/), or
[Deployment](/docs/user-guide/deployments/)
- Use a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/),
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/), or
[Deployment](/docs/concepts/workloads/controllers/deployment/)
for Pods that are not expected to terminate, for example, web servers.
ReplicationControllers are appropriate only for Pods with a `restartPolicy` of
Always.
- Use a [DaemonSet](/docs/admin/daemons/) for Pods that need to run one per
- Use a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for Pods that need to run one per
machine, because they provide a machine-specific system service.
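The Job case can be sketched as follows. This is a hedged illustration (the image and command are arbitrary) of a Pod that is expected to terminate, with a compliant `restartPolicy`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                       # illustrative name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never       # Jobs allow only OnFailure or Never
```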
All three types of controllers contain a PodTemplate. It


@ -75,8 +75,8 @@ In general, Controllers use a Pod Template that you provide to create the Pods f
## Pod Templates
Pod templates are pod specifications which are included in other objects, such as
[Replication Controllers](/docs/user-guide/replication-controller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
[DaemonSets](/docs/admin/daemons/). Controllers use Pod Templates to make actual pods.
[Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). Controllers use Pod Templates to make actual pods.
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive.


@ -44,13 +44,13 @@ a group of Docker containers with shared namespaces and shared
Like individual application containers, pods are considered to be relatively
ephemeral (rather than durable) entities. As discussed in [life of a
pod](/docs/user-guide/pod-states/), pods are created, assigned a unique ID (UID), and
pod](/docs/concepts/workloads/pods/pod-lifecycle/), pods are created, assigned a unique ID (UID), and
scheduled to nodes where they remain until termination (according to restart
policy) or deletion. If a node dies, the pods scheduled to that node are
scheduled for deletion, after a timeout period. A given pod (as defined by a UID) is not
"rescheduled" to a new node; instead, it can be replaced by an identical pod,
with even the same name if desired, but with a new UID (see [replication
controller](/docs/user-guide/replication-controller/) for more details). (In the future, a
controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details). (In the future, a
higher-level API may support pod migration.)
When something is said to have the same lifetime as a pod, such as a volume,
@ -86,7 +86,7 @@ Each pod has an IP address in a flat shared networking space that has full
communication with other physical computers and pods across the network.
The hostname is set to the pod's Name for the application containers within the
pod. [More details on networking](/docs/admin/networking/).
pod. [More details on networking](/docs/concepts/cluster-administration/networking/).
In addition to defining the application containers that run in the pod, the pod
specifies a set of shared storage volumes. Volumes enable data to survive
@ -137,7 +137,7 @@ simplified management.
Pods aren't intended to be treated as durable entities. They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [Deployments](/docs/user-guide/deployments/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [Deployments](/docs/concepts/workloads/controllers/deployment/)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.
The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
@ -150,7 +150,7 @@ Pod is exposed as a primitive in order to facilitate:
* clean composition of Kubelet-level functionality with cluster-level functionality &mdash; Kubelet is effectively the "pod controller"
* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949)
There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/user-guide/petset/). For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/).
There is new first-class support for stateful pods with the [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/) controller (currently in beta). The feature was alpha in 1.4 and was called [PetSet](/docs/concepts/workloads/controllers/petset/). For prior versions of Kubernetes, best practice for having stateful pods is to create a replication controller with `replicas` equal to `1` and a corresponding service, see [this MySQL deployment example](/docs/tutorials/stateful-application/run-stateful-application/).
## Termination of Pods


@ -269,7 +269,7 @@ distributed with OSX.
### Accessing the cluster with a browser
We install two UIs on Kubernetes. The original KubeUI and [the newer kube
dashboard](/docs/user-guide/ui/). When you create a cluster, the script should output URLs for these
dashboard](/docs/tasks/web-ui-dashboard/). When you create a cluster, the script should output URLs for these
interfaces like this:
KubeUI is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kube-ui/proxy```
@ -323,7 +323,7 @@ Then, you can access urls like ```http://127.0.0.1:8001/api/v1/namespaces/kube-s
These are the known items that don't work on CenturyLink cloud but do work on other cloud providers:
- At this time, there is no support for services of type [LoadBalancer](/docs/user-guide/load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.
- At this time, there is no support for services of type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016.
- At this time, there is no support for persistent storage volumes provided by
CenturyLink Cloud. However, customers can bring their own persistent storage


@ -17,7 +17,7 @@ title: Fedora (Single Node)
This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/admin/networking/) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](/docs/concepts/cluster-administration/networking/) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and Kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.


@ -145,7 +145,7 @@ for production clusters!
### Explore other add-ons
See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization &amp; control of your Kubernetes cluster.
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization &amp; control of your Kubernetes cluster.
## What's next


@ -21,7 +21,7 @@ This document describes how to run Kubernetes using [rkt](https://github.com/cor
### Kubernetes CNI networking
You can configure Kubernetes pod networking with the usual Container Network Interface (CNI) [network plugins](/docs/admin/network-plugins/) by setting the kubelet's `--network-plugin` and `--network-plugin-dir` options appropriately. Configured in this fashion, the rkt container engine will be unaware of network details, and expects to connect pods to the provided subnet.
You can configure Kubernetes pod networking with the usual Container Network Interface (CNI) [network plugins](/docs/concepts/cluster-administration/network-plugins/) by setting the kubelet's `--network-plugin` and `--network-plugin-dir` options appropriately. Configured in this fashion, the rkt container engine will be unaware of network details, and expects to connect pods to the provided subnet.
#### kubenet: Google Compute Engine (GCE) network


@ -132,7 +132,7 @@ Also, you need to pick a static IP for master node.
Kubernetes enables the definition of fine-grained network policy between Pods using the [NetworkPolicy](/docs/concepts/services-networking/networkpolicies/) resource.
Not all networking providers support the Kubernetes NetworkPolicy API, see [Using Network Policy](/docs/getting-started-guides/network-policy/walkthrough/) for more information.
Not all networking providers support the Kubernetes NetworkPolicy API, see [Using Network Policy](/docs/tasks/configure-pod-container/declare-network-policy/) for more information.
### Cluster Naming
@ -822,7 +822,7 @@ Notes for setting up each cluster service are given below:
* Cluster DNS:
* required for many Kubernetes examples
* [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/)
* [Admin Guide](/docs/admin/dns/)
* [Admin Guide](/docs/concepts/services-networking/dns-pod-service/)
* Cluster-level Logging
* [Cluster-level Logging Overview](/docs/user-guide/logging/overview)
* [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch)


@ -21,7 +21,7 @@ In the reference section, you can find reference documentation for Kubernetes AP
## Glossary
Explore the glossary of essential Kubernetes concepts. Some good starting points are the entries for [Pods](/docs/user-guide/pods/), [Nodes](/docs/admin/node/), [Services](/docs/user-guide/services/), and [ReplicaSets](/docs/user-guide/replicasets/).
Explore the glossary of essential Kubernetes concepts. Some good starting points are the entries for [Pods](/docs/concepts/workloads/pods/pod/), [Nodes](/docs/concepts/nodes/node/), [Services](/docs/concepts/services-networking/service/), and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
## Design Docs


@ -111,7 +111,7 @@ to create a Service.
{% capture whatsnext %}
Learn more about
[connecting applications with services](/docs/user-guide/connecting-applications/).
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
{% endcapture %}
{% include templates/tutorial.md %}


@ -84,7 +84,7 @@ for details about addon manager and how to disable individual addons.
{% endcapture %}
{% capture whatsnext %}
* Learn more about [StorageClasses](/docs/user-guide/persistent-volumes/).
* Learn more about [StorageClasses](/docs/concepts/storage/persistent-volumes/).
{% endcapture %}
{% include templates/task.md %}


@ -67,7 +67,7 @@ the corresponding `PersistentVolume` is not deleted. Instead, it is moved to
{% endcapture %}
{% capture whatsnext %}
* Learn more about [PersistentVolumes](/docs/user-guide/persistent-volumes/).
* Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/).
* Learn more about [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims).
### Reference


@ -11,7 +11,7 @@ Kubernetes cluster.
* {% include task-tutorial-prereqs.md %}
* Make sure the [DNS feature](/docs/admin/dns/) itself is enabled.
* Make sure the [DNS feature](/docs/concepts/services-networking/dns-pod-service/) itself is enabled.
* Kubernetes version 1.4.0 or later is recommended.
@ -225,7 +225,7 @@ a future development. The current implementation, which uses the number of nodes
and cores in the cluster, is limited.
Support for custom metrics, similar to that provided by
[Horizontal Pod Autoscaling](/docs/user-guide/horizontal-pod-autoscaling/),
[Horizontal Pod Autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale/),
is under consideration as a future development.
{% endcapture %}


@ -9,7 +9,7 @@ Kubernetes _namespaces_ help different projects, teams, or customers to share a
It does this by providing the following:
1. A scope for [Names](/docs/user-guide/identifiers/).
1. A scope for [Names](/docs/concepts/overview/working-with-objects/names/).
2. A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
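A namespace itself is a simple object; a hedged sketch (the name `development` is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development    # optional label, useful for selectors and policy
```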
@ -21,7 +21,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu
This example assumes the following:
1. You have an [existing Kubernetes cluster](/docs/getting-started-guides/).
2. You have a basic understanding of Kubernetes _[Pods](/docs/user-guide/pods/)_, _[Services](/docs/user-guide/services/)_, and _[Deployments](/docs/user-guide/deployments/)_.
2. You have a basic understanding of Kubernetes _[Pods](/docs/concepts/workloads/pods/pod/)_, _[Services](/docs/concepts/services-networking/service/)_, and _[Deployments](/docs/concepts/workloads/controllers/deployment/)_.
### Step One: Understand the default namespace


@ -66,7 +66,7 @@ project](/docs/admin/salt).
## Multi-tenant support
* **Resource Quota** ([resourcequota](/docs/admin/resourcequota/))
* **Resource Quota** ([resourcequota](/docs/concepts/policy/resource-quotas/))
## Security
@ -87,6 +87,6 @@ project](/docs/admin/salt).
* **Audit** [audit](/docs/admin/audit)
* **Securing the kubelet**
* [Master-Node communication](/docs/admin/master-node-communication/)
* [Master-Node communication](/docs/concepts/cluster-administration/master-node-communication/)
* [TLS bootstrapping](/docs/admin/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)


@ -14,7 +14,7 @@ disruption SLOs you have specified using PodDisruptionBudget.
This task assumes that you have met the following prerequisites:
* You are using Kubernetes release >= 1.5.
* You have created [PodDisruptionBudget(s)](/docs/admin/disruptions/) to express the
* You have created [PodDisruptionBudget(s)](/docs/tasks/configure-pod-container/configure-pod-disruption-budget/) to express the
application-level disruption SLOs you want the system to enforce.
{% endcapture %}


@ -20,12 +20,12 @@ might also help you create a Federated Kubernetes cluster.
You should also have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
general and [ConfigMaps](/docs/user-guide/configmap/) in particular.
general and [ConfigMaps](/docs/tasks/configure-pod-container/configmap/) in particular.
## Overview
Federated ConfigMaps are very similar to the traditional [Kubernetes
ConfigMaps](/docs/user-guide/configmap/) and provide the same functionality.
ConfigMaps](/docs/tasks/configure-pod-container/configmap/) and provide the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.


@ -40,12 +40,12 @@ by Kelsey Hightower, are also available to help you.
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
general, and [Ingress](/docs/user-guide/ingress/) in particular.
general, and [Ingress](/docs/concepts/services-networking/ingress/) in particular.
## Overview
Federated Ingresses are created in much the same way as traditional
[Kubernetes Ingresses](/docs/user-guide/ingress/): by making an API
[Kubernetes Ingresses](/docs/concepts/services-networking/ingress/): by making an API
call which specifies the desired properties of your logical ingress point. In the
case of Federated Ingress, this API call is directed to the
Federation API endpoint, rather than a Kubernetes cluster API
@ -87,7 +87,7 @@ You can create a federated ingress in any of the usual ways, for example using k
``` shell
kubectl --context=federation-cluster create -f myingress.yaml
```
For example ingress YAML configurations, see the [Ingress User Guide](/docs/user-guide/ingress/).
For example ingress YAML configurations, see the [Ingress User Guide](/docs/concepts/services-networking/ingress/).
The '--context=federation-cluster' flag tells kubectl to submit the
request to the Federation API endpoint, with the appropriate
credentials. If you have not yet configured such a context, visit the


@ -19,13 +19,13 @@ by Kelsey Hightower, are also available to help you.
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
general and [Namespaces](/docs/user-guide/namespaces/) in particular.
general and [Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) in particular.
## Overview
Namespaces in federation control plane (referred to as "federated namespaces" in
this guide) are very similar to the traditional [Kubernetes
Namespaces](/docs/user-guide/namespaces/), providing the same functionality.
Namespaces](/docs/concepts/overview/working-with-objects/namespaces/), providing the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.


@ -19,13 +19,13 @@ by Kelsey Hightower, are also available to help you.
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
general and [ReplicaSets](/docs/user-guide/replicasets/) in particular.
general and [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) in particular.
## Overview
Replica Sets in federation control plane (referred to as "federated replica sets" in
this guide) are very similar to the traditional [Kubernetes
ReplicaSets](/docs/user-guide/replicasets/), and provide the same functionality.
ReplicaSets](/docs/concepts/workloads/controllers/replicaset/), and provide the same functionality.
Creating them in the federation control plane ensures that the desired number of
replicas exist across the registered clusters.


@ -19,13 +19,13 @@ by Kelsey Hightower, are also available to help you.
You are also expected to have a basic
[working knowledge of Kubernetes](/docs/getting-started-guides/) in
general and [Secrets](/docs/user-guide/secrets/) in particular.
general and [Secrets](/docs/concepts/configuration/secret/) in particular.
## Overview
Secrets in federation control plane (referred to as "federated secrets" in
this guide) are very similar to the traditional [Kubernetes
Secrets](/docs/user-guide/secrets/), providing the same functionality.
Secrets](/docs/concepts/configuration/secret/), providing the same functionality.
Creating them in the federation control plane ensures that they are synchronized
across all the clusters in federation.


@ -102,16 +102,16 @@ Default limits are applied according to a limit range for the default
to see the default limits.
For information about why you would want to specify limits, see
[Setting Pod CPU and Memory Limits](/docs/admin/limitrange/).
[Setting Pod CPU and Memory Limits](/docs/tasks/configure-pod-container/limit-range/).
For information about what happens if you don't specify CPU and RAM requests, see
[Resource Requests and Limits of Pod and Container](/docs/user-guide/compute-resources/).
[Resource Requests and Limits of Pod and Container](/docs/concepts/configuration/manage-compute-resources-container/).
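As a hedged sketch of where such defaults come from (the values and name are illustrative), a `LimitRange` in a namespace might declare:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits         # illustrative name
spec:
  limits:
  - type: Container
    default:                   # default limits, applied when a container specifies none
      cpu: 500m
      memory: 512Mi
    defaultRequest:            # default requests, applied when a container specifies none
      cpu: 250m
      memory: 256Mi
```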
{% endcapture %}
{% capture whatsnext %}
* Learn more about [managing compute resources](/docs/user-guide/compute-resources/).
* Learn more about [managing compute resources](/docs/concepts/configuration/manage-compute-resources-container/).
* See [ResourceRequirements](/docs/api-reference/v1.6/#resourcerequirements-v1-core).
{% endcapture %}


@ -80,7 +80,7 @@ unless the Pod's grace period expires. For more details, see
{% capture whatsnext %}
* Learn more about [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
* Learn more about the [lifecycle of a Pod](/docs/user-guide/pod-states/).
* Learn more about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
### Reference


@ -17,7 +17,7 @@ coarse-grained information like entire config files or JSON blobs.
The ConfigMap API resource holds key-value pairs of configuration data that can be consumed in pods
or used to store configuration data for system components such as controllers. ConfigMap is similar
to [Secrets](/docs/user-guide/secrets/), but designed to more conveniently support working with strings that do not
to [Secrets](/docs/concepts/configuration/secret/), but designed to more conveniently support working with strings that do not
contain sensitive information.
Note: ConfigMaps are not intended to act as a replacement for a properties file. ConfigMaps are intended to act as a reference to multiple properties files. You can think of them as a way to represent something similar to the /etc directory, and the files within, on a Linux computer. One example of this model is creating Kubernetes Volumes from ConfigMaps, where each data item in the ConfigMap becomes a new file.
@ -462,7 +462,7 @@ very
#### Projecting keys to specific paths and file permissions
You can project keys to specific paths and specific permissions on a per-file
basis. The [Secrets](/docs/user-guide/secrets/) user guide explains the syntax.
basis. The [Secrets](/docs/concepts/configuration/secret/) user guide explains the syntax.
#### Optional ConfigMap via volume plugin


@ -28,7 +28,7 @@ do not already have a single-node cluster, you can create one by using
[Minikube](/docs/getting-started-guides/minikube).
* Familiarize yourself with the material in
[Persistent Volumes](/docs/user-guide/persistent-volumes/).
[Persistent Volumes](/docs/concepts/storage/persistent-volumes/).
{% endcapture %}
@ -196,7 +196,7 @@ PersistentVolume are not present on the Pod resource itself.
{% capture whatsnext %}
* Learn more about [PersistentVolumes](/docs/user-guide/persistent-volumes/).
* Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/persistent-storage.md).
### Reference


@ -8,9 +8,9 @@ Kubernetes can be used to declare network policies which govern how Pods can com
In this article, we assume a Kubernetes cluster has been created with network policy support. There are a number of network providers that support NetworkPolicy including:
* [Calico](/docs/getting-started-guides/network-policy/calico/)
* [Romana](/docs/getting-started-guides/network-policy/romana/)
* [Weave Net](/docs/getting-started-guides/network-policy/weave/)
* [Calico](/docs/tasks/configure-pod-container/calico-network-policy/)
* [Romana](/docs/tasks/configure-pod-container/romana-network-policy/)
* [Weave Net](/docs/tasks/configure-pod-container/weave-network-policy/)
Add-ons are sorted alphabetically - the ordering does not imply any preferential status.


@ -74,9 +74,9 @@ you can define arguments by using environment variables:
This means you can define an argument for a Pod using any of
the techniques available for defining environment variables, including
[ConfigMaps](/docs/user-guide/configmap/)
[ConfigMaps](/docs/tasks/configure-pod-container/configmap/)
and
[Secrets](/docs/user-guide/secrets/).
[Secrets](/docs/concepts/configuration/secret/).
NOTE: The environment variable appears in parentheses, `"$(VAR)"`. This is
required for the variable to be expanded in the `command` or `args` field.
@ -96,7 +96,7 @@ script. To run your command in a shell, wrap it like this:
* Learn more about [containers and commands](/docs/user-guide/containers/).
* Learn more about [configuring containers](/docs/user-guide/configuring-containers/).
* Learn more about [running commands in a container](/docs/user-guide/getting-into-containers/).
* Learn more about [running commands in a container](/docs/tasks/kubectl/get-shell-running-container/).
* See [Container](/docs/api-reference/v1.6/#container-v1-core).
{% endcapture %}


@ -67,7 +67,7 @@ Pod:
{% capture whatsnext %}
* Learn more about [environment variables](/docs/user-guide/environment-guide/).
* Learn more about [environment variables](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/).
* Learn about [using secrets as environment variables](/docs/user-guide/secrets/#using-secrets-as-environment-variables).
* See [EnvVarSource](/docs/api-reference/v1.6/#envvarsource-v1-core).


@ -160,7 +160,7 @@ Here is a configuration file you can use to create a Pod:
{% capture whatsnext %}
* Learn more about [Secrets](/docs/user-guide/secrets/).
* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn about [Volumes](/docs/concepts/storage/volumes/).
### Reference


@ -210,7 +210,7 @@ Downward API defaults to the node allocatable value for CPU and memory.
You can project keys to specific paths and specific permissions on a per-file
basis. For more information, see
[Secrets](/docs/user-guide/secrets/).
[Secrets](/docs/concepts/configuration/secret/).
## Motivation for the Downward API


@ -26,11 +26,11 @@ may be too small to be useful, but big enough for the waste to be costly over th
the cluster operator may want to set limits that a pod must consume at least 20% of the memory and CPU of their
average node size in order to provide for more uniform scheduling and limit waste.
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/admin/namespaces/walkthrough/) to control
This example demonstrates how limits can be applied to a Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces-walkthrough/) to control
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.
See [LimitRange design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/user-guide/compute-resources/)
See [LimitRange design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](/docs/concepts/configuration/manage-compute-resources-container/)
## Step 0: Prerequisites


@ -120,7 +120,7 @@ Create a Pod that uses your Secret, and verify that the Pod is running:
{% capture whatsnext %}
* Learn more about [Secrets](/docs/user-guide/secrets/).
* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn more about
[using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* See [kubectl create secret docker-registry](/docs/user-guide/kubectl/v1.6/#-em-secret-docker-registry-em-).


@ -23,8 +23,8 @@ Init Containers.
{% capture prerequisites %}
* You should be familiar with the basics of
[Init Containers](/docs/user-guide/pods/init-container/).
* You should have a [Pod](/docs/user-guide/pods/) you want to debug that uses
[Init Containers](/docs/concepts/abstractions/init-containers/).
* You should have a [Pod](/docs/concepts/workloads/pods/pod/) you want to debug that uses
Init Containers. The example command lines below refer to the Pod as
`<pod-name>` and the Init Containers as `<init-container-1>` and
`<init-container-2>`.


@ -48,7 +48,7 @@ case you can try several things:
kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'
kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'
The [resource quota](/docs/admin/resourcequota/)
The [resource quota](/docs/concepts/policy/resource-quotas/)
feature can be configured to limit the total amount of
resources that can be consumed. If used in conjunction with namespaces, it can
prevent one team from hogging all the resources.


@ -68,7 +68,7 @@ monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
The `fluentd-elasticsearch` pods gather logs from each node and send them to
the `elasticsearch-logging` pods, which are part of a
[service](/docs/user-guide/services/) named `elasticsearch-logging`. These
[service](/docs/concepts/services-networking/service/) named `elasticsearch-logging`. These
Elasticsearch pods store the logs and expose them via a REST API.
The `kibana-logging` pod provides a web UI for reading the logs stored in
Elasticsearch, and is part of a service named `kibana-logging`.


@ -10,7 +10,7 @@ title: Monitoring Node Health
## Node Problem Detector
*Node problem detector* is a [DaemonSet](/docs/admin/daemons/) monitoring the
*Node problem detector* is a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) monitoring the
node health. It collects node problems from various daemons and reports them
to the apiserver as [NodeCondition](/docs/admin/node/#node-condition) and
[Event](/docs/api-reference/v1.6/#event-v1-core).
@ -120,7 +120,7 @@ Just create `node-problem-detector.yaml`, and put it under the addon pods direct
The [default configuration](https://github.com/kubernetes/node-problem-detector/tree/v0.1/config)
is embedded when building the docker image of node problem detector.
However, you can use [ConfigMap](/docs/user-guide/configmap/) to overwrite it
However, you can use [ConfigMap](/docs/tasks/configure-pod-container/configmap/) to overwrite it
following the steps:
* **Step 1:** Change the config files in `config/`.


@ -11,7 +11,7 @@ Horizontal Pod Autoscaling automatically scales the number of pods
in a replication controller, deployment or replica set based on observed CPU utilization
(or, with alpha support, on some other, application-provided metrics).
This document walks you through an example of enabling Horizontal Pod Autoscaling for the php-apache server. For more information on how Horizontal Pod Autoscaling behaves, see the [Horizontal Pod Autoscaling user guide](/docs/user-guide/horizontal-pod-autoscaling/).
This document walks you through an example of enabling Horizontal Pod Autoscaling for the php-apache server. For more information on how Horizontal Pod Autoscaling behaves, see the [Horizontal Pod Autoscaling user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).
## Prerequisites


@ -140,4 +140,4 @@ available at [the k8s.io/metrics repository](https://github.com/kubernetes/metri
* Design documentation: [Horizontal Pod Autoscaling](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/user-guide/kubectl/v1.6/#autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/user-guide/horizontal-pod-autoscaling/walkthrough/).
* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).


@ -12,7 +12,7 @@ title: Rolling Update Replication Controller
To update a service without an outage, `kubectl` supports what is called ['rolling update'](/docs/user-guide/kubectl/v1.6/#rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/docs/design/simple-rolling-update.md) and the [example of rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) for more information.
Note that `kubectl rolling-update` only supports Replication Controllers. However, if you deploy applications with Replication Controllers,
consider switching them to [Deployments](/docs/user-guide/deployments/). A Deployment is a higher-level controller that automates rolling updates
consider switching them to [Deployments](/docs/concepts/workloads/controllers/deployment/). A Deployment is a higher-level controller that automates rolling updates
of applications declaratively, and therefore is recommended. If you still want to keep your Replication Controllers and use `kubectl rolling-update`, keep reading:
A rolling update applies changes to the configuration of pods being managed by
@ -48,7 +48,7 @@ The configuration file must:
* Use the same `metadata.namespace`.
Replication controller configuration files are described in
[Creating Replication Controllers](/docs/user-guide/replication-controller/operations/).
[Creating Replication Controllers](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
### Examples


@ -48,7 +48,7 @@ If the username and password are configured but unknown to you, then use `kubect
## Welcome view
When you access Dashboard on an empty cluster, you'll see the welcome page. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/admin/namespaces/) of your cluster, for example the Dashboard itself.
When you access Dashboard on an empty cluster, you'll see the welcome page. This page contains a link to this document as well as a button to deploy your first application. In addition, you can view which system applications are running by default in the `kube-system` [namespace](/docs/tasks/administer-cluster/namespaces/) of your cluster, for example the Dashboard itself.
![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png)
@ -66,7 +66,7 @@ The deploy wizard expects that you provide the following information:
- **App name** (mandatory): Name for your application. A [label](/docs/user-guide/labels/) with the name will be added to the Deployment and Service, if any, that will be deployed.
The application name must be unique within the selected Kubernetes [namespace](/docs/admin/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
The application name must be unique within the selected Kubernetes [namespace](/docs/tasks/administer-cluster/namespaces/). It must start with a lowercase character, and end with a lowercase character or a number, and contain only lowercase letters, numbers and dashes (-). It is limited to 24 characters. Leading and trailing spaces are ignored.
- **Container image** (mandatory): The URL of a public Docker [container image](/docs/concepts/containers/images/) on any registry, or a private image (commonly hosted on the Google Container Registry or Docker Hub). The container image specification must end with a colon.
@ -74,7 +74,7 @@ The deploy wizard expects that you provide the following information:
A [Deployment](/docs/concepts/workloads/controllers/deployment/) will be created to maintain the desired number of Pods across your cluster.
- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](http://kubernetes.io/docs/user-guide/services/) onto an external, maybe public IP address outside of your cluster (external Service). For external Services, you may need to open up one or more ports to do so. Find more details [here](/docs/user-guide/services-firewalls/).
- **Service** (optional): For some parts of your application (e.g. frontends) you may want to expose a [Service](http://kubernetes.io/docs/user-guide/services/) onto an external, maybe public IP address outside of your cluster (external Service). For external Services, you may need to open up one or more ports to do so. Find more details [here](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/).
Other Services that are only visible from inside the cluster are called internal Services.
@ -82,7 +82,7 @@ The deploy wizard expects that you provide the following information:
If needed, you can expand the **Advanced options** section where you can specify more settings:
- **Description**: The text you enter here will be added as an [annotation](/docs/user-guide/annotations/) to the Deployment and displayed in the application's details.
- **Description**: The text you enter here will be added as an [annotation](/docs/concepts/overview/working-with-objects/annotations/) to the Deployment and displayed in the application's details.
- **Labels**: Default [labels](/docs/user-guide/labels/) to be used for your application are application name and version. You can specify additional labels to be applied to the Deployment, Service (if any), and Pods, such as release, environment, tier, partition, and release track.
@ -95,20 +95,20 @@ environment=pod
track=stable
```
- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/admin/namespaces/). They let you partition resources into logically named groups.
- **Namespace**: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called [namespaces](/docs/tasks/administer-cluster/namespaces/). They let you partition resources into logically named groups.
Dashboard offers all available namespaces in a dropdown list, and allows you to create a new namespace. The namespace name may contain a maximum of 63 alphanumeric characters and dashes (-) but can not contain capital letters.
Namespace names should not consist of only numbers. If the name is set as a number, such as 10, the pod will be put in the default namespace.
In case the creation of the namespace is successful, it is selected by default. If the creation fails, the first namespace is selected.
- **Image Pull Secret**: In case the specified Docker container image is private, it may require [pull secret](/docs/user-guide/secrets/) credentials.
- **Image Pull Secret**: In case the specified Docker container image is private, it may require [pull secret](/docs/concepts/configuration/secret/) credentials.
Dashboard offers all available secrets in a dropdown list, and allows you to create a new secret. The secret name must follow the DNS domain name syntax, e.g. `new.image-pull.secret`. The content of a secret must be base64-encoded and specified in a [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) file. The secret name may consist of a maximum of 253 characters.
In case the creation of the image pull secret is successful, it is selected by default. If the creation fails, no secret is applied.
- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/admin/limitrange/) for the container. By default, Pods run with unbounded CPU and memory limits.
- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/tasks/configure-pod-container/limit-range/) for the container. By default, Pods run with unbounded CPU and memory limits.
- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.


@ -20,7 +20,7 @@ Kubernetes contains the following built-in tools:
##### Kubefed
[`kubefed`](/docs/admin/federation/kubefed/) is the command line tool
[`kubefed`](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) is the command line tool
to help you administrate your federated clusters.
##### Minikube
@ -31,7 +31,7 @@ development and testing purposes.
##### Dashboard
[Dashboard](/docs/user-guide/ui/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
[Dashboard](/docs/tasks/web-ui-dashboard/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources itself.
#### Third-Party Tools


@ -52,9 +52,9 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
Now that we have our scheduler in a container image, we can just create a pod
config for it and run it in our Kubernetes cluster. But instead of creating a pod
directly in the cluster, let's use a [Deployment](/docs/user-guide/deployments/)
for this example. A [Deployment](/docs/user-guide/deployments/) manages a
[Replica Set](/docs/user-guide/replicasets/) which in turn manages the pods,
directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/)
for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a
[Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods,
thereby making the scheduler resilient to failures. Here is the deployment
config. Save it as `my-scheduler.yaml`:


@ -26,7 +26,7 @@ frontend and backend are connected using a Kubernetes Service object.
* {% include task-tutorial-prereqs.md %}
* This tutorial uses
[Services with external load balancers](/docs/user-guide/load-balancer/), which
[Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/), which
require a supported environment. If your environment does not
support this, you can use a Service of type
[NodePort](/docs/user-guide/services/#type-nodeport) instead.
@ -132,7 +132,7 @@ service "frontend" created
**Note**: The nginx configuration is baked into the
[container image](/docs/tutorials/connecting-apps/frontend/Dockerfile).
A better way to do this would be to use a
[ConfigMap](/docs/user-guide/configmap/), so
[ConfigMap](/docs/tasks/configure-pod-container/configmap/), so
that you can change the configuration more easily.
### Interact with the frontend Service
@ -179,8 +179,8 @@ The output shows the message generated by the backend:
{% capture whatsnext %}
* Learn more about [Services](/docs/user-guide/services/)
* Learn more about [ConfigMaps](/docs/user-guide/configmap/)
* Learn more about [Services](/docs/concepts/services-networking/service/)
* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configmap/)
{% endcapture %}


@ -253,7 +253,7 @@ kubefed init fellowship \
#### API server service type
`kubefed init` exposes the federation API server as a Kubernetes
[service](/docs/user-guide/services/) on the host cluster. By default,
[service](/docs/concepts/services-networking/service/) on the host cluster. By default,
this service is exposed as a
[load balanced service](/docs/user-guide/services/#type-loadbalancer).
Most on-premises and bare-metal environments, and some cloud


@ -334,7 +334,7 @@ $ kubectl delete deployment source-ip-app
{% endcapture %}
{% capture whatsnext %}
* Learn more about [connecting applications via services](/docs/user-guide/connecting-applications/)
* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
* Learn more about [loadbalancing](/docs/user-guide/load-balancer)
{% endcapture %}


@ -22,7 +22,7 @@ Before you begin this tutorial, you should familiarize yourself with the
following Kubernetes concepts.
* [Pods](/docs/user-guide/pods/single-container/)
* [Cluster DNS](/docs/admin/dns/)
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [PersistentVolumes](/docs/concepts/storage/volumes/)
* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)


@ -28,11 +28,11 @@ on general patterns for running stateful applications in Kubernetes.
* {% include task-tutorial-prereqs.md %}
* {% include default-storage-class-prereqs.md %}
* This tutorial assumes you are familiar with
[PersistentVolumes](/docs/user-guide/persistent-volumes/)
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
and [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/),
as well as other core concepts like [Pods](/docs/user-guide/pods/),
[Services](/docs/user-guide/services/), and
[ConfigMaps](/docs/user-guide/configmap/).
as well as other core concepts like [Pods](/docs/concepts/workloads/pods/pod/),
[Services](/docs/concepts/services-networking/service/), and
[ConfigMaps](/docs/tasks/configure-pod-container/configmap/).
* Some familiarity with MySQL helps, but this tutorial aims to present
general patterns that should be useful for other systems.


@ -55,7 +55,7 @@ that points to the Compute Engine disk above:
Notice that the `pdName: mysql-disk` line matches the name of the disk
in the Compute Engine environment. See the
[Persistent Volumes](/docs/user-guide/persistent-volumes/)
[Persistent Volumes](/docs/concepts/storage/persistent-volumes/)
for details on writing a PersistentVolume configuration file for other
environments.
@ -78,7 +78,7 @@ satisfied by any volume that meets the requirements, in this case, the
volume created above.
Note: The password is defined in the config yaml, and this is insecure. See
[Kubernetes Secrets](/docs/user-guide/secrets/)
[Kubernetes Secrets](/docs/concepts/configuration/secret/)
for a secure solution.
{% include code.html language="yaml" file="mysql-deployment.yaml" ghlink="/docs/tutorials/stateful-application/mysql-deployment.yaml" %}
@ -180,7 +180,7 @@ specific to stateful apps:
* Don't scale the app. This setup is for single-instance apps
only. The underlying PersistentVolume can only be mounted to one
Pod. For clustered stateful apps, see the
[StatefulSet documentation](/docs/user-guide/petset/).
[StatefulSet documentation](/docs/concepts/workloads/controllers/petset/).
* Use `strategy:` `type: Recreate` in the Deployment configuration
YAML file. This instructs Kubernetes to _not_ use rolling
updates. Rolling updates will not work, as you cannot have more than
@ -208,13 +208,13 @@ gcloud compute disks delete mysql-disk
{% capture whatsnext %}
* Learn more about [Deployment objects](/docs/user-guide/deployments/).
* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](/docs/user-guide/deploying-applications/)
* [kubectl run documentation](/docs/user-guide/kubectl/v1.6/#run)
* [Volumes](/docs/concepts/storage/volumes/) and [Persistent Volumes](/docs/user-guide/persistent-volumes/)
* [Volumes](/docs/concepts/storage/volumes/) and [Persistent Volumes](/docs/concepts/storage/persistent-volumes/)
{% endcapture %}


@ -23,11 +23,11 @@ Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
* [Pods](/docs/user-guide/pods/single-container/)
* [Cluster DNS](/docs/admin/dns/)
* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/)
* [Headless Services](/docs/user-guide/services/#headless-services)
* [PersistentVolumes](/docs/concepts/storage/volumes/)
* [PersistentVolume Provisioning](http://releases.k8s.io/{{page.githubbranch}}/examples/persistent-volume-provisioning/)
* [ConfigMaps](/docs/user-guide/configmap/)
* [ConfigMaps](/docs/tasks/configure-pod-container/configmap/)
* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/)
* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget)
* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)
@ -90,7 +90,7 @@ safely discarded.
The manifest below contains a
[Headless Service](/docs/user-guide/services/#headless-services),
a [ConfigMap](/docs/user-guide/configmap/),
a [ConfigMap](/docs/tasks/configure-pod-container/configmap/),
a [PodDisruptionBudget](/docs/admin/disruptions/#specifying-a-poddisruptionbudget),
and a [StatefulSet](/docs/concepts/abstractions/controllers/statefulsets/).
@ -209,7 +209,7 @@ zk-1.zk-headless.default.svc.cluster.local
zk-2.zk-headless.default.svc.cluster.local
```
The A records in [Kubernetes DNS](/docs/admin/dns/) resolve the FQDNs to the Pods' IP addresses.
The A records in [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) resolve the FQDNs to the Pods' IP addresses.
If the Pods are rescheduled, the A records will be updated with the Pods' new IP
addresses, but the A record's names will not change.
@ -726,7 +726,7 @@ container to rotate and ship your logs.
The best practices with respect to allowing an application to run as a privileged
user inside of a container are a matter of debate. If your organization requires
that applications be run as a non-privileged user you can use a
[SecurityContext](/docs/user-guide/security-context/) to control the user that
[SecurityContext](/docs/concepts/policy/security-context/) to control the user that
the entry point runs as.
The `zk` StatefulSet's Pod `template` contains a SecurityContext.


@ -39,11 +39,11 @@ provides load balancing for an application that has two running instances.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
The preceding command creates a
[Deployment](/docs/user-guide/deployments/)
[Deployment](/docs/concepts/workloads/controllers/deployment/)
object and an associated
[ReplicaSet](/docs/user-guide/replicasets/)
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
object. The ReplicaSet has two
[Pods](/docs/user-guide/pods/),
[Pods](/docs/concepts/workloads/pods/pod/),
each of which runs the Hello World application.
1. Display information about the Deployment:
@ -140,7 +140,7 @@ the Hello World application, enter this command:
{% capture whatsnext %}
Learn more about
[connecting applications with services](/docs/user-guide/connecting-applications/).
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
{% endcapture %}
{% include templates/tutorial.md %}


@ -16,7 +16,7 @@ external IP address.
* Use a cloud provider like Google Container Engine or Amazon Web Services to
create a Kubernetes cluster. This tutorial creates an
[external load balancer](/docs/user-guide/load-balancer/),
[external load balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/),
which requires a cloud provider.
* Configure `kubectl` to communicate with your Kubernetes API server. For
@ -43,11 +43,11 @@ external IP address.
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
The preceding command creates a
[Deployment](/docs/user-guide/deployments/)
[Deployment](/docs/concepts/workloads/controllers/deployment/)
object and an associated
[ReplicaSet](/docs/user-guide/replicasets/)
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
object. The ReplicaSet has five
[Pods](/docs/user-guide/pods/),
[Pods](/docs/concepts/workloads/pods/pod/),
each of which runs the Hello World application.
1. Display information about the Deployment:
@ -146,7 +146,7 @@ the Hello World application, enter this command:
{% capture whatsnext %}
Learn more about
[connecting applications with services](/docs/user-guide/connecting-applications/).
[connecting applications with services](/docs/concepts/services-networking/connect-applications-service/).
{% endcapture %}
{% include templates/tutorial.md %}


@ -161,7 +161,7 @@ Now the Minikube VM can run the image you built.
## Create a Deployment
A Kubernetes [*Pod*](/docs/user-guide/pods/) is a group of one or more Containers,
A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more Containers,
tied together for the purposes of administration and networking. The Pod in this
tutorial has only one Container. A Kubernetes
[*Deployment*](/docs/user-guide/deployments) checks on the health of your
@ -225,7 +225,7 @@ For more information about `kubectl`commands, see the
By default, the Pod is only accessible by its internal IP address within the
Kubernetes cluster. To make the `hello-node` Container accessible from outside the
Kubernetes virtual network, you have to expose the Pod as a
Kubernetes [*Service*](/docs/user-guide/services/).
Kubernetes [*Service*](/docs/concepts/services-networking/service/).
From your development machine, you can expose the Pod to the public internet
using the `kubectl expose` command:
@ -315,9 +315,9 @@ minikube stop
{% capture whatsnext %}
* Learn more about [Deployment objects](/docs/user-guide/deployments/).
* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](/docs/user-guide/deploying-applications/).
* Learn more about [Service objects](/docs/user-guide/services/).
* Learn more about [Service objects](/docs/concepts/services-networking/service/).
{% endcapture %}

View File

@ -133,7 +133,7 @@ Delete the deployment by name:
{% capture whatsnext %}
* Learn more about [Deployment objects](/docs/user-guide/deployments/).
* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](/docs/user-guide/deploying-applications/)

View File

@ -0,0 +1,85 @@
import subprocess
import re
# Finds the documents to rewrite by searching for files that include
# user-guide-content-moved.md, then opens those files and parses the content
# after the include line to figure out where each document has moved to.
# Returns a list of ('old/path', 'new/path') tuples.
def find_documents_to_rewrite():
cmd = "ag --markdown -Q -l \"{% include user-guide-content-moved.md %}\""
moved_docs = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().splitlines()
rewrites = []
for doc in moved_docs:
location = doc_location(doc)
destinations = get_desinations_for_doc(doc)
if len(destinations) == 0:
print("Unable to get possible destinations for %s" % doc)
elif len(destinations) > 1:
print("%s has multiple potential destinations. Not rewriting links." % doc)
else:
# print("%s --> %s" % (location, destinations[0]))
rewrites.append((location, destinations[0]))
return rewrites
# Returns the location of the documentation as we will refer to it in the markdown.
# /docs/path/to/foo/index.md are available at /docs/path/to/foo/
# /docs/path/to/foo/bar.md are available at /docs/path/to/foo/bar/
def doc_location(filename):
if filename.endswith('/index.md'):
return "/docs/" + filename[:-9] + "/"
else:
return "/docs/" + filename[:-3] + "/"
REDIRECT_REGEX = re.compile(r"^.*\[(.*)\]\((.*)\)$")
def get_desinations_for_doc(filename):
destination_paths = []
with open(filename) as f:
lines = [line.rstrip('\n').rstrip('\r') for line in f.readlines()]
# Remove empty lines (wrap in list() so .index() works on Python 3 too).
lines = list(filter(bool, lines))
content_moved_index = lines.index("{% include user-guide-content-moved.md %}")
# Get everything after that line.
destinations = lines[content_moved_index + 1:]
for destination in destinations:
result = REDIRECT_REGEX.match(destination)
if not result:
return []
doc_title = result.group(1) # Unused, can print it out for more info.
new_path = result.group(2)
destination_paths.append(new_path)
return destination_paths
# Given a list of (old/path, new/path) tuples, executes a sed command across
# all markdown files to replace links to (/docs/path/to/old/doc/) with
# (/docs/path/to/new/doc/).
def rewrite_documents(rewrites):
cmd = r"find . -name '*.md' -type f -exec sed -i.bak 's@(%s)@(%s)@g' '{}' \;"
for original, new in rewrites:
print("%s --> %s" % (original, new))
original = original.replace('-', r'\-')
new = new.replace('-', r'\-')
#print(cmd % (original, new))
subprocess.call(cmd % (original, new), shell=True)
# sed -i.bak writes a .bak backup alongside every edited file (BSD sed
# requires a backup suffix with -i), so we delete the backups afterwards.
def remove_sed_backups():
cmd = "find . -name '*.bak' -delete"
subprocess.call(cmd, shell=True)
def main():
rewrites = find_documents_to_rewrite()
rewrite_documents(rewrites)
remove_sed_backups()
if __name__ == "__main__":
main()
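As a minimal, hypothetical illustration (not part of the commit itself), the substitution the script delegates to sed amounts to replacing an old doc path with its new path wherever it appears inside a markdown link target. The paths below are example values:

```python
# Example of the link rewrite the script performs via sed: swap the old
# parenthesized link target for the new one, leaving the link text alone.
old = "/docs/user-guide/pods/"
new = "/docs/concepts/workloads/pods/pod/"
text = "See [pods](/docs/user-guide/pods/) for more details."
rewritten = text.replace("(%s)" % old, "(%s)" % new)
print(rewritten)
# → See [pods](/docs/concepts/workloads/pods/pod/) for more details.
```

Wrapping the paths in parentheses keeps the replacement anchored to markdown link targets, so prose that merely mentions a path is left untouched.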

View File

@ -9,7 +9,7 @@ title: User Guide
The Kubernetes **Guides** can help you work with various aspects of the Kubernetes system.
* The Kubernetes [User Guide](#user-guide-internal) can help you run programs and services on an existing Kubernetes cluster.
* The [Cluster Admin Guide](/docs/admin/) can help you set up and administrate your own Kubernetes cluster.
* The [Cluster Admin Guide](/docs/tasks/administer-cluster/overview/) can help you set up and administrate your own Kubernetes cluster.
* The [Developer Guide] can help you either write code to directly access the Kubernetes API, or to contribute directly to the Kubernetes project.
## <a name="user-guide-internal"></a>Kubernetes User Guide
@ -19,28 +19,28 @@ The following topics in the Kubernetes User Guide can help you run applications
1. [Quick start: launch and expose an application](/docs/user-guide/quick-start/)
1. [Configuring and launching containers: configuring common container parameters](/docs/user-guide/configuring-containers/)
1. [Deploying continuously running applications](/docs/user-guide/deploying-applications/)
1. [Connecting applications: exposing applications to clients and users](/docs/user-guide/connecting-applications/)
1. [Connecting applications: exposing applications to clients and users](/docs/concepts/services-networking/connect-applications-service/)
1. [Working with containers in production](/docs/user-guide/production-pods/)
1. [Managing deployments](/docs/concepts/cluster-administration/manage-deployment/)
1. [Application introspection and debugging](/docs/user-guide/introspection-and-debugging/)
1. [Using the Kubernetes web user interface](/docs/user-guide/ui/)
1. [Logging](/docs/user-guide/logging/overview/)
1. [Monitoring](/docs/user-guide/monitoring/)
1. [Getting into containers via `exec`](/docs/user-guide/getting-into-containers/)
1. [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy/)
1. [Connecting to containers via port forwarding](/docs/user-guide/connecting-to-applications-port-forward/)
1. [Application introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/)
1. [Using the Kubernetes web user interface](/docs/tasks/web-ui-dashboard/)
1. [Logging](/docs/concepts/clusters/logging/)
1. [Monitoring](/docs/concepts/cluster-administration/resource-usage-monitoring/)
1. [Getting into containers via `exec`](/docs/tasks/kubectl/get-shell-running-container/)
1. [Connecting to containers via proxies](/docs/tasks/access-kubernetes-api/http-proxy-access-api/)
1. [Connecting to containers via port forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
Before running examples in the user guides, please ensure you have completed [installing kubectl](/docs/tasks/kubectl/install/).
## Kubernetes Concepts
[**Cluster**](/docs/admin/)
[**Cluster**](/docs/tasks/administer-cluster/overview/)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.
[**Node**](/docs/admin/node/)
[**Node**](/docs/concepts/nodes/node/)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
[**Pod**](/docs/user-guide/pods/)
[**Pod**](/docs/concepts/workloads/pods/pod/)
: A pod is a co-located group of containers and volumes.
[**Label**](/docs/user-guide/labels/)
@ -49,44 +49,44 @@ Before running examples in the user guides, please ensure you have completed [in
[**Selector**](/docs/user-guide/labels/#label-selectors)
: A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.
[**Replication Controller**](/docs/user-guide/replication-controller/)
[**Replication Controller**](/docs/concepts/workloads/controllers/replicationcontroller/)
: A replication controller ensures that a specified number of pod replicas are running at any one time. It both allows for easy scaling of replicated systems and handles re-creation of a pod when the machine it is on reboots or otherwise fails.
[**Service**](/docs/user-guide/services/)
[**Service**](/docs/concepts/services-networking/service/)
: A service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name.
[**Volume**](/docs/concepts/storage/volumes/)
: A volume is a directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes volumes build upon [Docker Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/), adding provisioning of the volume directory and/or device.
[**Secret**](/docs/user-guide/secrets/)
[**Secret**](/docs/concepts/configuration/secret/)
: A secret stores sensitive data, such as authentication tokens, which can be made available to containers upon request.
[**Name**](/docs/user-guide/identifiers/)
[**Name**](/docs/concepts/overview/working-with-objects/names/)
: A user- or client-provided name for a resource.
[**Namespace**](/docs/user-guide/namespaces/)
[**Namespace**](/docs/concepts/overview/working-with-objects/namespaces/)
: A namespace is like a prefix to the name of a resource. Namespaces help different projects, teams, or customers to share a cluster, such as by preventing name collisions between unrelated teams.
[**Annotation**](/docs/user-guide/annotations/)
[**Annotation**](/docs/concepts/overview/working-with-objects/annotations/)
: A key/value pair that can hold larger (compared to a label), and possibly not human-readable, data, intended to store non-identifying auxiliary data, especially data manipulated by tools and system extensions. Efficient filtering by annotation values is not supported.
## Further reading
API resources
* [Working with resources](/docs/user-guide/working-with-resources/)
* [Working with resources](/docs/concepts/tools/kubectl/object-management-overview/)
Pods and containers
* [Pod lifecycle and restart policies](/docs/user-guide/pod-states/)
* [Pod lifecycle and restart policies](/docs/concepts/workloads/pods/pod-lifecycle/)
* [Lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/)
* [Compute resources, such as cpu and memory](/docs/user-guide/compute-resources/)
* [Compute resources, such as cpu and memory](/docs/concepts/configuration/manage-compute-resources-container/)
* [Specifying commands and requesting capabilities](/docs/user-guide/containers/)
* [Downward API: accessing system configuration from a pod](/docs/user-guide/downward-api/)
* [Downward API: accessing system configuration from a pod](/docs/tasks/configure-pod-container/downward-api-volume-expose-pod-information/)
* [Images and registries](/docs/concepts/containers/images/)
* [Migrating from docker-cli to kubectl](/docs/user-guide/docker-cli-to-kubectl/)
* [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/)
* [Assign pods to selected nodes](/docs/user-guide/node-selection/)
* [Assign pods to selected nodes](/docs/concepts/configuration/assign-pod-node/)
* [Perform a rolling update on a running group of pods](/docs/tasks/run-application/rolling-update-replication-controller/)
[Developer Guide]: https://github.com/kubernetes/community/blob/master/contributors/devel/README.md

View File

@ -27,7 +27,7 @@ If you haven't installed and configured kubectl, finish [installing kubectl](/do
In Kubernetes, a group of one or more containers is called a _pod_. Containers in a pod are deployed together, and are started, stopped, and replicated as a group.
See [pods](/docs/user-guide/pods/) for more details.
See [pods](/docs/concepts/workloads/pods/pod/) for more details.
#### Pod Definition
@ -55,7 +55,7 @@ List all pods:
$ kubectl get pods
```
On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/getting-into-containers/) for details.
On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](/docs/tasks/kubectl/get-shell-running-container/) for details.
Provided the pod IP is accessible, you should be able to access its http endpoint with wget on port 80:

View File

@ -106,7 +106,7 @@ Delete the Deployment by name:
kubectl delete deployment nginx-deployment
```
For more information, such as how to rollback Deployment changes to a previous version, see [_Deployments_](/docs/user-guide/deployments/).
For more information, such as how to rollback Deployment changes to a previous version, see [_Deployments_](/docs/concepts/workloads/controllers/deployment/).
## Services
@ -156,7 +156,7 @@ kubectl delete service nginx-service
When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some Pod that is a member of the set identified by the label selector in the Service.
For more information, see [Services](/docs/user-guide/services/).
For more information, see [Services](/docs/concepts/services-networking/service/).
## Health Checking