Merge pull request #22981 from shuuji3/en/replace-special-quote-with-normal-ones

Replace special quote characters with normal ones
Kubernetes Prow Robot 2020-08-26 14:55:02 -07:00 committed by GitHub
commit 70b75e16f0
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
41 changed files with 100 additions and 100 deletions

View File

@ -140,7 +140,7 @@ the {{< glossary_tooltip term_id="kube-controller-manager" >}}. These
built-in controllers provide important core behaviors.
The Deployment controller and Job controller are examples of controllers that
come as part of Kubernetes itself (“built-in” controllers).
come as part of Kubernetes itself ("built-in" controllers).
Kubernetes lets you run a resilient control plane, so that if any of the built-in
controllers were to fail, another part of the control plane will take over the work.

View File

@ -39,7 +39,7 @@ Before choosing a guide, here are some considerations:
## Managing a cluster
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).

View File

@ -359,7 +359,7 @@ The only component that considers both QoS and Pod priority is
[kubelet out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
The kubelet ranks Pods for eviction first by whether or not their usage of the
starved resource exceeds requests, then by Priority, and then by the consumption
of the starved compute resource relative to the Pods’ scheduling requests.
of the starved compute resource relative to the Pods' scheduling requests.
See
[evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
for more details.

View File

@ -914,7 +914,7 @@ Create the Secret:
kubectl apply -f mysecret.yaml
```
Use `envFrom` to define all of the Secret’s data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
Use `envFrom` to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
```yaml
apiVersion: v1
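# --- The manifest is truncated by the diff view; the remainder would be along
# --- these lines (Pod name and image below are assumptions, not from the PR).
kind: Pod
metadata:
  name: env-from-secret
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - secretRef:
        name: mysecret   # each key in mysecret becomes an environment variable
```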

View File

@ -46,7 +46,7 @@ list of devices it manages, and the kubelet is then in charge of advertising tho
resources to the API server as part of the kubelet node status update.
For example, after a device plugin registers `hardware-vendor.example/foo` with the kubelet
and reports two healthy devices on a node, the node status is updated
to advertise that the node has 2 “Foo” devices installed and available.
to advertise that the node has 2 "Foo" devices installed and available.
Then, users can request devices in a
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
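A minimal sketch of such a request follows (the Pod name and image are assumptions; the extended resource name comes from the example above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  containers:
  - name: demo
    image: k8s.gcr.io/pause:3.1          # placeholder image, assumed
    resources:
      limits:
        hardware-vendor.example/foo: 1   # request one of the advertised "Foo" devices
```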

View File

@ -59,17 +59,17 @@ That's how Kubernetes comes to the rescue! Kubernetes provides you with a framew
Kubernetes provides you with:
* **Service discovery and load balancing**
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
* **Storage orchestration**
Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
* **Automated rollouts and rollbacks**
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
* **Automatic bin packing**
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
* **Secret and configuration management**
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
* **Secret and configuration management**
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
## What Kubernetes is not
@ -84,7 +84,7 @@ Kubernetes:
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

View File

@ -69,7 +69,7 @@ If the prefix is omitted, the annotation Key is presumed to be private to the us
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
For example, here’s the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
For example, here's the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
```yaml
@ -85,7 +85,7 @@ spec:
image: nginx:1.14.2
ports:
- containerPort: 80
```
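Pieced together, a complete manifest along those lines might look like this (a sketch; only the annotation key and value come from the text above, the rest is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```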

View File

@ -54,7 +54,7 @@ The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core com
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
For example, here’s the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
```yaml
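# --- The manifest is truncated by the diff view; a sketch consistent with the
# --- text above follows (Pod name, image, and port are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```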

View File

@ -54,7 +54,7 @@ Some resource types require their names to be able to be safely encoded as a
path segment. In other words, the name may not be "." or ".." and the name may
not contain "/" or "%".
Here’s an example manifest for a Pod named `nginx-demo`.
Here's an example manifest for a Pod named `nginx-demo`.
```yaml
apiVersion: v1
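# --- The manifest is truncated by the diff view; the remainder would be along
# --- these lines (container image and port are assumptions; the name is from the text).
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```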

View File

@ -62,7 +62,7 @@ tolerations:
effect: "NoSchedule"
```
Here’s an example of a pod that uses tolerations:
Here's an example of a pod that uses tolerations:
{{< codenew file="pods/pod-with-toleration.yaml" >}}

View File

@ -317,6 +317,6 @@ restrict privileged permissions is lessened when the workload is isolated from t
kernel. This allows for workloads requiring heightened permissions to still be isolated.
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single policy is recommended for all sandboxed workloads.

View File

@ -15,7 +15,7 @@ weight: 30
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically.
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically.
Coordinating port allocations across multiple developers or teams that provide containers is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.

View File

@ -72,12 +72,12 @@ In general a pod has the following DNS resolution:
`pod-ip-address.my-namespace.pod.cluster-domain.example`.
For example, if a pod in the `default` namespace has the IP address 172.17.0.3,
and the domain name for your cluster is `cluster.local`, then the Pod has a DNS name:
`172-17-0-3.default.pod.cluster.local`.
Any pods created by a Deployment or DaemonSet exposed by a Service have the
following DNS resolution available:
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
@ -209,7 +209,7 @@ following pod-specific DNS policies. These policies are specified in the
{{< note >}}
"Default" is not the default DNS policy. If `dnsPolicy` is not
explicitly specified, then “ClusterFirst” is used.
explicitly specified, then "ClusterFirst" is used.
{{< /note >}}
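To make the note concrete, here is a sketch of a Pod that sets `dnsPolicy` explicitly (the name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-demo
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ["sleep", "3600"]
  dnsPolicy: "Default"   # without this field, "ClusterFirst" is used
```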

View File

@ -94,7 +94,7 @@ __egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Ea
So, the example NetworkPolicy:
1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated)
2. (Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
2. (Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from:
* any pod in the "default" namespace with the label "role=frontend"
* any pod in a namespace with the label "project=myproject"
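Expressed as a manifest, rule 2 (the ingress side) might look roughly like this (a sketch; the policy name is an assumption and the egress rules are omitted):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db             # selects the "role=db" Pods being isolated
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend   # "role=frontend" Pods in the "default" namespace
    - namespaceSelector:
        matchLabels:
          project: myproject
    ports:
    - protocol: TCP
      port: 6379
```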

View File

@ -33,8 +33,8 @@ Each Pod gets its own IP address, however in a Deployment, the set of Pods
running in one moment in time could be different from
the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them “backends”) provides
functionality to other Pods (call them “frontends”) inside your cluster,
This leads to a problem: if some set of Pods (call them "backends") provides
functionality to other Pods (call them "frontends") inside your cluster,
how do the frontends find out and keep track of which IP address to connect
to, so that the frontend can use the backend part of the workload?
@ -91,7 +91,7 @@ spec:
targetPort: 9376
```
This specification creates a new Service object named “my-service”, which
This specification creates a new Service object named "my-service", which
targets TCP port 9376 on any Pod with the `app=MyApp` label.
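For reference, a complete manifest consistent with that description would be along these lines (a sketch; the Service `port` value is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80          # port the Service exposes (assumed value)
    targetPort: 9376  # port on the backing Pods (from the text)
```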
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
@ -100,7 +100,7 @@ which is used by the Service proxies
The controller for the Service selector continuously scans for Pods that
match its selector, and then POSTs any updates to an Endpoint object
also named “my-service”.
also named "my-service".
{{< note >}}
A Service can map _any_ incoming `port` to a `targetPort`. By default and
@ -316,7 +316,7 @@ falls back to running in iptables proxy mode.
![Services overview diagram for IPVS proxy](/images/docs/services-ipvs-overview.svg)
In these proxy models, the traffic bound for the Service’s IP:Port is
In these proxy models, the traffic bound for the Service's IP:Port is
proxied to an appropriate backend without the clients knowing anything
about Kubernetes or Services or Pods.
@ -444,7 +444,7 @@ You can find more information about `ExternalName` resolution in
## Headless Services
Sometimes you don't need load-balancing and a single Service IP. In
this case, you can create what are termed “headless” Services, by explicitly
this case, you can create what are termed "headless" Services, by explicitly
specifying `"None"` for the cluster IP (`.spec.clusterIP`).
You can use a headless Service to interface with other service discovery mechanisms,
@ -682,7 +682,7 @@ metadata:
```yaml
[...]
metadata:
annotations:
service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
[...]
```
@ -691,7 +691,7 @@ metadata:
```yaml
[...]
metadata:
annotations:
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
[...]
```
@ -950,25 +950,25 @@ There are other annotations for managing Cloud Load Balancers on TKE as shown be
# ID of an existing load balancer
service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
# Custom parameters for the load balancer (LB), does not support modification of LB type yet
service.kubernetes.io/service.extensiveParameters: ""
# Custom parameters for the LB listener
service.kubernetes.io/service.listenerParameters: ""
# Specifies the type of Load balancer;
# valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
service.kubernetes.io/loadbalance-type: xxxxx
# Specifies the public network bandwidth billing method;
# valid values: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth).
service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
# Specifies the bandwidth value (value range: [1,2000] Mbps).
service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
# When this annotation is set, the loadbalancers will only register nodes
# with pod running on it, otherwise all nodes will be registered.
service.kubernetes.io/local-svc-only-bind-node-with-pod: true
```
@ -1117,7 +1117,7 @@ connections on it.
When a client connects to the Service's virtual IP address, the iptables
rule kicks in, and redirects the packets to the proxy's own port.
The “Service proxy” chooses a backend, and starts proxying traffic from the client to the backend.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware

View File

@ -33,7 +33,7 @@ from the API group `storage.k8s.io`. A cluster administrator can define as many
that provisioner when provisioning.
A cluster administrator can define and expose multiple flavors of storage (from
the same or different storage systems) within a cluster, each with a custom set
of parameters. This design also ensures that end users don’t have to worry
of parameters. This design also ensures that end users don't have to worry
about the complexity and nuances of how storage is provisioned, but still
have the ability to select from multiple storage options.
@ -85,8 +85,8 @@ is deprecated since v1.6. Users now can and should instead use the
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
To select the “fast” storage class, for example, a user would create the
following `PersistentVolumeClaim`:
To select the "fast" storage class, for example, a user would create the
following PersistentVolumeClaim:
```yaml
apiVersion: v1
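# --- The claim is truncated by the diff view; a sketch of the remainder follows
# --- (claim name and requested size are assumptions).
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast   # selects the "fast" StorageClass from the text
  resources:
    requests:
      storage: 30Gi
```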

View File

@ -296,7 +296,7 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
Once the original is deleted, you can create a new ReplicaSet to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods.
However, it will not make any effort to make existing Pods match a new, different pod template.
To update Pods to a new spec in a controlled way, use a
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly.
### Isolating Pods from a ReplicaSet
@ -341,7 +341,7 @@ kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
them and their Pods via declarative, server-side rolling updates.
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that
creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that
they create. Deployments own and manage their ReplicaSets.
As such, it is recommended to use Deployments when you want ReplicaSets.

View File

@ -254,8 +254,8 @@ API object can be found at:
### ReplicaSet
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
It's mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
### Deployment (Recommended)

View File

@ -96,7 +96,7 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:&lt;any value&gt;" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can’t satisfy the constraint.
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:&lt;any value&gt;" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint.
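The referenced `one-constraint.yaml` is along these lines (a sketch; the Pod name, image, and the `foo: bar` labels are assumptions consistent with the discussion below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```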
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
@ -114,7 +114,7 @@ You can tweak the Pod spec to meet various kinds of requirements:
- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed onto "zoneA" as well.
- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4".
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it’s preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.)
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it's preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.)
### Example: Multiple TopologySpreadConstraints
@ -163,7 +163,7 @@ There are some implicit conventions worth noting here:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
- Be aware of what will happen if the incoming Pod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels.
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels.
- If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.

View File

@ -184,8 +184,8 @@ Begin and end meetings on time.
### Recording meetings on Zoom
When you’re ready to start the recording, click Record to Cloud.
When you're ready to start the recording, click Record to Cloud.
When you’re ready to stop recording, click Stop.
When you're ready to stop recording, click Stop.
The video uploads automatically to YouTube.

View File

@ -127,7 +127,7 @@ Monitor your cherry-pick pull request until it is merged into the release branch
{{< note >}}
Proposing a cherry pick requires that you have permission to set a label and a
milestone in your pull request. If you don’t have those permissions, you will
milestone in your pull request. If you don't have those permissions, you will
need to work with someone who can set the label and milestone for you.
{{< /note >}}

View File

@ -20,7 +20,7 @@ Each day in a week-long shift as PR Wrangler:
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides.
- Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can.
- Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md).
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven’t signed the CLA to do so.
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven't signed the CLA to do so.
- Provide feedback on changes and ask for technical reviews from members of other SIGs.
- Provide inline suggestions on the PR for the proposed content changes.
- If you need to verify content, comment on the PR and request more details.

View File

@ -447,7 +447,7 @@ Use three hyphens (`---`) to create a horizontal rule. Use horizontal rules for
{{< table caption = "Do and Don't - Links" >}}
Do | Don't
:--| :-----
Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as “click here”. For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as "click here". For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}`
{{< /table >}}

View File

@ -53,7 +53,7 @@ cards:
button_path: /docs/reference
- name: contribute
title: Contribute to the docs
description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time.
description: Anyone can contribute, whether you're new to the project or you've been around a long time.
button: Contribute to the docs
button_path: /docs/contribute
- name: release-notes
@ -64,4 +64,4 @@ cards:
- name: about
title: About the documentation
description: This website contains documentation for the current and previous 4 versions of Kubernetes.
---

View File

@ -30,10 +30,10 @@ In this regard, _Kubernetes does not have objects which represent normal user
accounts._ Normal users cannot be added to a cluster through an API call.
Even though a normal user cannot be added via an API call, any user that
presents a valid certificate signed by the cluster’s certificate authority
presents a valid certificate signed by the cluster's certificate authority
(CA) is considered authenticated. In this configuration, Kubernetes determines
the username from the common name field in the subject of the cert (e.g.,
“/CN=bob”). From there, the role based access control (RBAC) sub-system would
the username from the common name field in the 'subject' of the cert (e.g.,
"/CN=bob"). From there, the role based access control (RBAC) sub-system would
determine whether the user is authorized to perform a specific operation on a
resource. For more details, refer to the normal users topic in
[certificate request](/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user)
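As a sketch of that flow, a client key and certificate signing request whose Common Name carries the intended username can be produced with standard OpenSSL commands (the username and group below are examples, not from the text):

```shell
# Generate a private key and a CSR; once the certificate is signed by the
# cluster CA, Kubernetes reads the username from /CN= and groups from /O=.
openssl genrsa -out bob.key 2048
openssl req -new -key bob.key -out bob.csr -subj "/CN=bob/O=app-team"
```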
@ -329,7 +329,7 @@ Kubernetes does not provide an OpenID Connect Identity Provider.
You can use an existing public OpenID Connect Identity Provider (such as Google, or
[others](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers)).
Or, you can run your own Identity Provider, such as CoreOS [dex](https://github.com/coreos/dex),
[Keycloak](https://github.com/keycloak/keycloak),
CloudFoundry [UAA](https://github.com/cloudfoundry/uaa), or
Tremolo Security's [OpenUnison](https://github.com/tremolosecurity/openunison).

View File

@ -15,7 +15,7 @@ A piece of code that intercepts requests to the Kubernetes API server prior to p
<!--more-->
Admission controllers are configurable for the Kubernetes API server and may be “validating”, “mutating”, or
Admission controllers are configurable for the Kubernetes API server and may be "validating", "mutating", or
both. Any admission controller may reject the request. Mutating controllers may modify the objects they admit;
validating controllers may not.

View File

@ -6,13 +6,13 @@ full_link: /docs/reference/generated/kubelet
short_description: >
An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
aka:
tags:
- fundamental
- core-object
---
An agent that runs on each {{< glossary_tooltip text="node" term_id="node" >}} in the cluster. It makes sure that {{< glossary_tooltip text="containers" term_id="container" >}} are running in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
<!--more-->
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.

View File

@ -6,14 +6,14 @@ full_link: /docs/concepts/architecture/nodes/
short_description: >
A node is a worker machine in Kubernetes.
aka:
tags:
- fundamental
---
A node is a worker machine in Kubernetes.
<!--more-->
A worker node may be a VM or physical machine, depending on the cluster. It has local daemons or services necessary to run {{< glossary_tooltip text="Pods" term_id="pod" >}} and is managed by the control plane. The daemons on a node include {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}, and a container runtime implementing the {{< glossary_tooltip text="CRI" term_id="cri" >}} such as {{< glossary_tooltip term_id="docker" >}}.
In early Kubernetes versions, Nodes were called “Minions”.
In early Kubernetes versions, Nodes were called "Minions".

View File

@ -23,7 +23,7 @@ You can also subscribe to an RSS feed of the above using [this link](https://gro
## Report a Vulnerability
We’re extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.
We're extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.
To make a report, submit your vulnerability to the [Kubernetes bug bounty program](https://hackerone.com/kubernetes). This allows triage and handling of the vulnerability with standardized response times.

View File

@ -8,7 +8,7 @@ card:
weight: 40
---
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters.
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice "fast paths" for creating Kubernetes clusters.
kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, like the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is not in scope.

View File

@ -55,7 +55,7 @@ This brief demo guides you on how to start, use, and delete Minikube locally. Fo
2. Now, you can interact with your cluster using kubectl. For more information, see [Interacting with Your Cluster](#interacting-with-your-cluster).
Let’s create a Kubernetes Deployment using an existing image named `echoserver`, which is a simple HTTP server and expose it on port 8080 using `--port`.
Let's create a Kubernetes Deployment using an existing image named `echoserver`, which is a simple HTTP server and expose it on port 8080 using `--port`.
```shell
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
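# Expose the Deployment on port 8080, the step the text above describes.
# (--type=NodePort is an assumption suited to a local Minikube cluster.)
kubectl expose deployment hello-minikube --type=NodePort --port=8080
```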

View File

@ -81,7 +81,7 @@ apt-get update && apt-get install -y \
```
```shell
# Add Docker’s official GPG key:
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
```
@ -381,7 +381,7 @@ apt-get update && apt-get install -y apt-transport-https ca-certificates curl so
```
```shell
## Add Docker’s official GPG key
## Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
```

View File

@ -259,10 +259,10 @@ Cluster DNS (CoreDNS) will not start up before a network is installed.**
- Take care that your Pod network must not overlap with any of the host
networks: you are likely to see problems if there is any overlap.
(If you find a collision between your network plugin’s preferred Pod
(If you find a collision between your network plugin's preferred Pod
network and some of your host networks, you should think of a suitable
CIDR block to use instead, then use that during `kubeadm init` with
`--pod-network-cidr` and as a replacement in your network plugin’s YAML).
`--pod-network-cidr` and as a replacement in your network plugin's YAML).
- By default, `kubeadm` sets up your cluster to use and enforce use of
[RBAC](/docs/reference/access-authn-authz/rbac/) (role based access

View File

@ -150,7 +150,7 @@ so that you can change the configuration more easily.
## Interact with the frontend Service
Once you’ve created a Service of type LoadBalancer, you can use this
Once you've created a Service of type LoadBalancer, you can use this
command to find the external IP:
```shell

View File

@ -191,7 +191,7 @@ affect the HTTP liveness probe.
A third type of liveness probe uses a TCP socket. With this configuration, the
kubelet will attempt to open a socket to your container on the specified port.
If it can establish a connection, the container is considered healthy, if it
can’t it is considered a failure.
can't it is considered a failure.
{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
@ -284,7 +284,7 @@ Sometimes, applications are temporarily unable to serve traffic.
For example, an application might need to load large data or configuration
files during startup, or depend on external services after startup.
In such cases, you don't want to kill the application,
but you don’t want to send it requests either. Kubernetes provides
but you don't want to send it requests either. Kubernetes provides
readiness probes to detect and mitigate these situations. A pod with containers
reporting that they are not ready does not receive traffic through Kubernetes
Services.
@ -348,7 +348,7 @@ set "Host" in httpHeaders instead.
in the range 1 to 65535.
For an HTTP probe, the kubelet sends an HTTP request to the specified path and
port to perform the check. The kubelet sends the probe to the pod’s IP address,
port to perform the check. The kubelet sends the probe to the pod's IP address,
unless the address is overridden by the optional `host` field in `httpGet`. If
`scheme` field is set to `HTTPS`, the kubelet sends an HTTPS request skipping the
certificate verification. In most scenarios, you do not want to set the `host` field.

View File

@ -260,8 +260,8 @@ metadata:
```
When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID
is applied to all containers in the Pod in the same way that GIDs specified in the
Pod’s security context are. Every GID, whether it originates from a PersistentVolume
annotation or the Pod’s specification, is applied to the first process run in
Pod's security context are. Every GID, whether it originates from a PersistentVolume
annotation or the Pod's specification, is applied to the first process run in
each container.
{{< note >}}

View File

@ -365,7 +365,7 @@ on Linux, Chocolatey on Windows, and Homebrew on macOS. Any package
manager will be suitable if it can place new executables placed somewhere
in the user's `PATH`.
As a plugin author, if you pick this option then you also have the burden
of updating your kubectl plugin’s distribution package across multiple
of updating your kubectl plugin's distribution package across multiple
platforms for each release.
### Source code {#distributing-source-code}

View File

@ -223,7 +223,7 @@ This functionality is available in Kubernetes v1.6 and later.
kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
```
* Use envFrom to define all of the Secret’s data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
* Use envFrom to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
{{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}

View File

@ -200,30 +200,30 @@ The following information is available to containers through environment
variables and `downwardAPI` volumes:
* Information available via `fieldRef`:
* `metadata.name` - the pod’s name
* `metadata.namespace` - the pod’s namespace
* `metadata.uid` - the pod’s UID, available since v1.8.0-alpha.2
* `metadata.labels['<KEY>']` - the value of the pod’s label `<KEY>` (for example, `metadata.labels['mylabel']`); available in Kubernetes 1.9+
* `metadata.annotations['<KEY>']` - the value of the pod’s annotation `<KEY>` (for example, `metadata.annotations['myannotation']`); available in Kubernetes 1.9+
* `metadata.name` - the pod's name
* `metadata.namespace` - the pod's namespace
* `metadata.uid` - the pod's UID, available since v1.8.0-alpha.2
* `metadata.labels['<KEY>']` - the value of the pod's label `<KEY>` (for example, `metadata.labels['mylabel']`); available in Kubernetes 1.9+
* `metadata.annotations['<KEY>']` - the value of the pod's annotation `<KEY>` (for example, `metadata.annotations['myannotation']`); available in Kubernetes 1.9+
* Information available via `resourceFieldRef`:
* A Container’s CPU limit
* A Container’s CPU request
* A Container’s memory limit
* A Container’s memory request
* A Container’s ephemeral-storage limit, available since v1.8.0-beta.0
* A Container’s ephemeral-storage request, available since v1.8.0-beta.0
* A Container's CPU limit
* A Container's CPU request
* A Container's memory limit
* A Container's memory request
* A Container's ephemeral-storage limit, available since v1.8.0-beta.0
* A Container's ephemeral-storage request, available since v1.8.0-beta.0
In addition, the following information is available through
`downwardAPI` volume `fieldRef`:
* `metadata.labels` - all of the pod’s labels, formatted as `label-key="escaped-label-value"` with one label per line
* `metadata.annotations` - all of the pod’s annotations, formatted as `annotation-key="escaped-annotation-value"` with one annotation per line
* `metadata.labels` - all of the pod's labels, formatted as `label-key="escaped-label-value"` with one label per line
* `metadata.annotations` - all of the pod's annotations, formatted as `annotation-key="escaped-annotation-value"` with one annotation per line
The following information is available through environment variables:
* `status.podIP` - the pod’s IP address
* `spec.serviceAccountName` - the pod’s service account name, available since v1.4.0-alpha.3
* `spec.nodeName` - the node’s name, available since v1.4.0-alpha.3
* `status.podIP` - the pod's IP address
* `spec.serviceAccountName` - the pod's service account name, available since v1.4.0-alpha.3
* `spec.nodeName` - the node's name, available since v1.4.0-alpha.3
* `status.hostIP` - the node's IP, available since v1.7.0-alpha.1
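A sketch of how a few of these fields are wired into a container as environment variables (the variable and container names are assumptions):

```yaml
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name          # the pod's name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace     # the pod's namespace
- name: MY_CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app
      resource: limits.cpu              # the container's CPU limit
```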
{{< note >}}

View File

@ -90,7 +90,7 @@ Before using `--driver=none`, consult [this documentation](https://minikube.sigs
Minikube also supports a `vm-driver=podman` similar to the Docker driver. Podman run as superuser privilege (root user) is the best way to ensure that your containers have full access to any feature available on your system.
{{< caution >}}
The `podman` driver requires running the containers as root because regular user accounts don’t have full access to all operating system features that their containers might need to run.
The `podman` driver requires running the containers as root because regular user accounts don't have full access to all operating system features that their containers might need to run.
{{< /caution >}}
### Install Minikube using a package

View File

@ -243,7 +243,7 @@ This setting is for your safety because your data is more valuable than automati
{{< warning >}}
Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes
to also be deleted. Never assume you’ll be able to access data if its volume claims are deleted.
to also be deleted. Never assume you'll be able to access data if its volume claims are deleted.
{{< /warning >}}
1. Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet: