Merge pull request #20924 from celestehorgan/fix-broken-links

Fix broken links identified by linkchecker
Authored by Kubernetes Prow Robot on 2020-05-14 07:22:22 -07:00; committed by GitHub.
commit f1929c87ea
16 changed files with 58 additions and 59 deletions


@@ -204,7 +204,7 @@ to balance progress between request flows.
 The queuing configuration allows tuning the fair queuing algorithm for a
 priority level. Details of the algorithm can be read in the [enhancement
-proposal](#what-s-next), but in short:
+proposal](#whats-next), but in short:
 * Increasing `queues` reduces the rate of collisions between different flows, at
   the cost of increased memory usage. A value of 1 here effectively disables the
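
For context on the fields being tuned here (an editorial sketch, not part of this diff): a minimal PriorityLevelConfiguration showing where `queues` sits, assuming the `flowcontrol.apiserver.k8s.io/v1alpha1` API group of this era; the object name and numbers are hypothetical.

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1  # assumed API version for this release timeframe
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level   # hypothetical name
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10
    limitResponse:
      type: Queue
      queuing:
        queues: 64            # more queues = fewer collisions between flows, more memory
        handSize: 6
        queueLengthLimit: 50
```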


@@ -106,7 +106,7 @@ as well as keeping the existing service in good shape.
 ## Writing your own Operator {#writing-operator}
 If there isn't an Operator in the ecosystem that implements the behavior you
-want, you can code your own. In [What's next](#what-s-next) you'll find a few
+want, you can code your own. In [What's next](#whats-next) you'll find a few
 links to libraries and tools you can use to write your own cloud native
 Operator.


@@ -157,13 +157,13 @@ the three things:
 1. **wait** (with a timeout) \
    If a Permit plugin returns "wait", then the Pod is kept in an internal "waiting"
    Pods list, and the binding cycle of this Pod starts but directly blocks until it
-   gets [approved](#frameworkhandle). If a timeout occurs, **wait** becomes **deny**
+   gets approved. If a timeout occurs, **wait** becomes **deny**
    and the Pod is returned to the scheduling queue, triggering [Unreserve](#unreserve)
    plugins.
 {{< note >}}
 While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
 plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
 is approved, it is sent to the [PreBind](#pre-bind) phase.
 {{< /note >}}


@@ -171,7 +171,7 @@ following pod-specific DNS policies. These policies are specified in the
 - "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
   environment. All DNS settings are supposed to be provided using the
   `dnsConfig` field in the Pod Spec.
-  See [Pod's DNS config](#pod-s-dns-config) subsection below.
+  See [Pod's DNS config](#pod-dns-config) subsection below.
 {{< note >}}
 "Default" is not the default DNS policy. If `dnsPolicy` is not
@@ -201,7 +201,7 @@ spec:
   dnsPolicy: ClusterFirstWithHostNet
 ```
-### Pod's DNS Config
+### Pod's DNS Config {#pod-dns-config}
 Pod's DNS Config allows users more control on the DNS settings for a Pod.
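
To illustrate the `None` policy and the `dnsConfig` field that this anchor now points at (an editorial sketch, not part of this diff; the name, addresses, and search domain are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example          # hypothetical name
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"          # ignore DNS settings from the Kubernetes environment
  dnsConfig:                 # supply all DNS settings explicitly instead
    nameservers:
      - 1.2.3.4
    searches:
      - ns1.svc.cluster-domain.example
    options:
      - name: ndots
        value: "2"
```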
@@ -270,5 +270,3 @@ For guidance on administering DNS configurations, check
 [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)
 {{% /capture %}}


@@ -241,7 +241,7 @@ myapp-pod 1/1 Running 0 9m
 ```
 This simple example should provide some inspiration for you to create your own
-init containers. [What's next](#what-s-next) contains a link to a more detailed example.
+init containers. [What's next](#whats-next) contains a link to a more detailed example.
 ## Detailed behavior
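
As a reminder of the pattern the passage refers to, here is a minimal init-container sketch in the spirit of the page's `myapp-pod` example (editorial, not part of this diff; the wait loop and service name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
    - name: init-myservice   # must exit successfully before app containers start
      image: busybox:1.28
      command: ['sh', '-c', 'until nslookup myservice; do echo waiting; sleep 2; done']
  containers:
    - name: myapp-container
      image: busybox:1.28
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```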


@@ -73,8 +73,7 @@ true:
 [Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is
 the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow
 enables chatbot-style commands to handle GitHub actions across the Kubernetes
-organization, like [adding and removing
-labels](#add-and-remove-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
+organization, like [adding and removing labels](#adding-and-removing-issue-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/<command-name>` format.
 The most common prow commands reviewers and approvers use are:


@@ -28,7 +28,7 @@ For information how to create a cluster with kubeadm once you have performed thi
 * 2 GB or more of RAM per machine (any less will leave little room for your apps)
 * 2 CPUs or more
 * Full network connectivity between all machines in the cluster (public or private network is fine)
-* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
+* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
 * Certain ports are open on your machines. See [here](#check-required-ports) for more details.
 * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
@@ -36,7 +36,7 @@ For information how to create a cluster with kubeadm once you have performed thi
 {{% capture steps %}}
-## Verify the MAC address and product_uuid are unique for every node
+## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}
 * You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`
 * The product_uuid can be checked by using the command `sudo cat /sys/class/dmi/id/product_uuid`


@@ -21,7 +21,7 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in
 * openSUSE Leap 15
 * continuous integration tests
-To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops).
+To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
 {{% /capture %}}


@@ -277,7 +277,7 @@ This shows the proxy-verb URL for accessing each service.
 For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached
 at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` if suitable credentials are passed. Logging can also be reached through a kubectl proxy, for example at:
 `http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`.
-(See [above](#accessing-the-cluster-api) for how to pass credentials or use kubectl proxy.)
+(See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/) for how to pass credentials or use kubectl proxy.)
 #### Manually constructing apiserver proxy URLs


@@ -362,7 +362,7 @@ Structural schemas are a requirement for `apiextensions.k8s.io/v1`, and disables
 * [Webhook Conversion](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#webhook-conversion)
 * [Pruning](#preserving-unknown-fields)
-### Pruning versus preserving unknown fields
+### Pruning versus preserving unknown fields {#preserving-unknown-fields}
 {{< feature-state state="stable" for_k8s_version="v1.16" >}}
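
For readers landing on the retargeted anchor: pruning can be selectively disabled per subtree with `x-kubernetes-preserve-unknown-fields` (an editorial sketch with a hypothetical CRD, not part of this diff):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # hypothetical CRD
spec:
  group: stable.example.com
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              # opt this subtree out of pruning; unknown fields are preserved
              x-kubernetes-preserve-unknown-fields: true
```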


@@ -287,7 +287,7 @@ by eye).
 If an error occurs, the kubelet reports it in the `Node.Status.Config.Error`
 structure. Possible errors are listed in
-[Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages).
+[Understanding Node.Status.Config.Error messages](#understanding-node-config-status-errors).
 You can search for the identical text in the kubelet log for additional details
 and context about the error.
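
For orientation (editorial, not part of this diff): the error surfaces under `status.config` in the Node object, e.g. in abbreviated `kubectl get node <node-name> -o yaml` output; the message shown here is hypothetical:

```yaml
# Abbreviated, hypothetical Node status
status:
  config:
    error: "failed to load config, see Kubelet log for details"
```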
@@ -355,7 +355,7 @@ metadata and checkpoints. The structure of the kubelet's checkpointing directory
 | - ...
 ```
-## Understanding Node.Status.Config.Error messages
+## Understanding Node.Status.Config.Error messages {#understanding-node-config-status-errors}
 The following table describes error messages that can occur
 when using Dynamic Kubelet Config. You can search for the identical text


@@ -173,7 +173,7 @@ For example: in Centos, you can do this using the tuned toolset.
 Memory pressure at the node level leads to System OOMs which affects the entire
 node and all pods running on it. Nodes can go offline temporarily until memory
 has been reclaimed. To avoid (or reduce the probability of) system OOMs kubelet
-provides [`Out of Resource`](./out-of-resource.md) management. Evictions are
+provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are
 supported for `memory` and `ephemeral-storage` only. By reserving some memory via
 `--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory
 availability on the node drops below the reserved value. Hypothetically, if
@@ -190,7 +190,7 @@ The scheduler treats `Allocatable` as the available `capacity` for pods.
 `kubelet` enforce `Allocatable` across pods by default. Enforcement is performed
 by evicting pods whenever the overall usage across all pods exceeds
 `Allocatable`. More details on eviction policy can be found
-[here](./out-of-resource.md#eviction-policy). This enforcement is controlled by
+[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by
 specifying `pods` value to the kubelet flag `--enforce-node-allocatable`.
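
For context on the flags being relinked (an editorial sketch, not part of this diff): the same settings expressed in a KubeletConfiguration file, with hypothetical reservation values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enforceNodeAllocatable:
  - pods                     # evict pods when combined usage exceeds Allocatable
evictionHard:
  memory.available: "500Mi"  # hypothetical hard-eviction threshold
systemReserved:
  memory: "1Gi"              # hypothetical reservation for system daemons
kubeReserved:
  memory: "1Gi"              # hypothetical reservation for Kubernetes daemons
```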


@@ -16,6 +16,10 @@ This page shows how to use the `runAsUserName` setting for Pods and containers t
 You need to have a Kubernetes cluster and the kubectl command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes where pods with containers running Windows workloads will get scheduled.
+{{% /capture %}}
+{{% capture steps %}}
 ## Set the Username for a Pod
 To specify the username with which to execute the Pod's container processes, include the `securityContext` field ([PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core)) in the Pod specification, and within it, the `windowsOptions` ([WindowsSecurityContextOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#windowssecuritycontextoptions-v1-core)) field containing the `runAsUserName` field.
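
Putting those fields together (an editorial sketch, not part of this diff; the Pod name, image, and username are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-pod-demo   # hypothetical name
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # applies to every container in the Pod
  containers:
    - name: run-as-username-demo
      image: mcr.microsoft.com/windows/servercore:ltsc2019
      command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
```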


@@ -9,7 +9,7 @@ toc_hide: true
 {{% capture overview %}}
 {{< note >}}
-Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document.
+Be sure to also [create an entry in the table of contents](/docs/contribute/style/write-new-topic/#placing-your-topic-in-the-table-of-contents) for your new document.
 {{< /note >}}
 This page shows how to ...
@@ -29,7 +29,7 @@ This page shows how to ...
 ## Doing ...
 1. Do this.
-1. Do this next. Possibly read this [related explanation](...).
+1. Do this next. Possibly read this [related explanation](#).
 {{% /capture %}}
@@ -50,5 +50,3 @@ Here's an interesting thing to know about the steps you just did.
 * See [Using Page Templates - Task template](/docs/home/contribute/page-templates/#task_template) for how to use this template.
 {{% /capture %}}


@@ -11,9 +11,9 @@ card:
 ---
 {{% capture overview %}}
-This tutorial builds upon the [PHP Guestbook with Redis](../guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:
+This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:
-* A running instance of the [PHP Guestbook with Redis tutorial](../guestbook)
+* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
 * Elasticsearch and Kibana
 * Filebeat
 * Metricbeat
@@ -36,7 +36,7 @@ This tutorial builds upon the [PHP Guestbook with Redis](../guestbook) tutorial.
 Additionally you need:
-* A running deployment of the [PHP Guestbook with Redis](../guestbook) tutorial.
+* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.
 * A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co), run the [download files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html) on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).
@@ -45,7 +45,7 @@ Additionally you need:
 {{% capture lessoncontent %}}
 ## Start up the PHP Guestbook with Redis
-This tutorial builds on the [PHP Guestbook with Redis](../guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.
+This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.
 ## Add a Cluster role binding
 Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
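
The kind of binding the tutorial asks for might look like this (an editorial sketch, not part of this diff; the binding name and subject are hypothetical, and `cluster-admin` is deliberately broad):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: beats-cluster-admin-binding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin                 # broad; scope down for production use
subjects:
  - kind: ServiceAccount
    name: default                     # hypothetical subject
    namespace: kube-system
```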