Merge pull request #28883 from chrisnegus/merged-main-dev-1.22
Merged main dev-1.22 to keep in sync
Commit 11a343a2c5
@@ -11,7 +11,7 @@
For overall help on editing and submitting pull requests, visit:
https://kubernetes.io/docs/contribute/start/#improve-existing-content

Use the default base branch, “master”, if you're documenting existing
Use the default base branch, “main”, if you're documenting existing
features in the English localization.

If you're working on a different localization (not English), see
@@ -235,10 +235,12 @@ aliases:
- parispittman
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
sig-release-leads:
- cpanato # SIG Technical Lead
- hasheddan # SIG Technical Lead
- jeremyrickard # SIG Technical Lead
- justaugustus # SIG Chair
- LappleApple # SIG Program Manager
- puerco # SIG Technical Lead
- saschagrunert # SIG Chair
release-engineering-approvers:
- cpanato # Release Manager
@@ -4,7 +4,7 @@
# The Kubernetes documentation
-->

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

<!--
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
@@ -1,6 +1,6 @@
# The Kubernetes documentation

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
@@ -0,0 +1,61 @@
---
layout: blog
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
slug: <seo-friendly-version-of-title-separated-by-dashes>
---

**Author:** <your name> (<your organization name>), <another author's name> (<their organization>)

<!--
Instructions:
- Replace these instructions and the following text with your content.
- Replace `<angle bracket placeholders>` with actual values. For example, you would update `date: <yyyy>-<mm>-<dd>` to look something like `date: 2021-10-21`.
- For convenience, use third-party tools to author and collaborate on your content.
- To save time and effort in reviews, check your content's spelling, grammar, and style before contributing.
- Feel free to ask for assistance in the Kubernetes Slack channel, [#sig-docs-blog](https://kubernetes.slack.com/archives/CJDHVD54J).
-->

Replace this first line of your content with one to three sentences that summarize the blog post.

## This is a section heading

To help the reader, organize your content into sections that contain about three to six paragraphs.

If you're documenting commands, separate the commands from the outputs, like this:

1. Verify that the Secret exists by running the following command:

   ```shell
   kubectl get secrets
   ```

   The response should be like this:

   ```shell
   NAME                    TYPE                                  DATA   AGE
   mysql-pass-c57bb4t7mf   Opaque                                1      9s
   ```

You're free to create any sections you like. Below are a few common patterns we see at the end of blog posts.

## What’s next?

This optional section describes the future of the thing you've just described in the post.

## How can I learn more?

This optional section provides links to more information. Please avoid promoting or over-representing your organization.

## How do I get involved?

An optional section that links to resources for readers to get involved, and acknowledgments of individual contributors, such as:

* [The name of a channel on Slack, #a-channel](https://<a-workspace>.slack.com/messages/<a-channel>)

* [A link to a "contribute" page with more information](<https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact>).

* Acknowledgements and thanks to the contributors, such as <person's name> ([<github id>](https://github.com/<github id>)), who did X, Y, and Z.

* If you're interested in getting involved with the design and development of <project>, join the [<name of the SIG>](https://github.com/project/community/tree/master/<sig-group>). We’re rapidly growing and always welcome new contributors.
@@ -7,6 +7,7 @@ timeout: 1200s
options:
substitution_option: ALLOW_LOOSE
steps:
# It's fine to bump the tag to a recent version, as needed
- name: "gcr.io/k8s-testimages/gcb-docker-gcloud:v20190906-745fed4"
entrypoint: make
env:
@@ -48,7 +48,7 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">Revisit KubeCon EU 2021</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -140,7 +140,7 @@ The local persistent volume beta feature is not complete by far. Some notable en

## Complementary features

[Pod priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.
[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.

[Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
@@ -94,7 +94,7 @@ JOSH BERKUS: That goes into release notes. I mean, keep in mind that one of the

However, stuff happens, and we do occasionally have to do those. And so far, our main way to identify that to people actually is in the release notes. If you look at [the current release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#no-really-you-must-do-this-before-you-upgrade), there are actually two things in there right now that are sort of breaking changes.

One of them is the bit with [priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...
One of them is the bit with [priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...

TIM PEPPER: The [JSON capitalization case sensitivity](https://github.com/kubernetes/kubernetes/issues/64612).
@@ -166,7 +166,7 @@ Some critical state is held outside etcd. Certificates, container images, and ot
* Cloud provider specific account and configuration data

## Considerations for your production workloads
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/scheduling-eviction/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.

For stateful services, external attached volume mounts are the standard Kubernetes recommendation for a non-clustered service (e.g., a typical SQL database). At this time Kubernetes managed snapshots of these external volumes is in the category of a [roadmap feature request](https://docs.google.com/presentation/d/1dgxfnroRAu0aF67s-_bmeWpkM1h2LCxe6lB1l1oS0EQ/edit#slide=id.g3ca07c98c2_0_47), likely to align with the Container Storage Interface (CSI) integration. Thus performing backups of such a service would involve application specific, in-pod activity that is beyond the scope of this document. While awaiting better Kubernetes support for a snapshot and backup workflow, running your database service in a VM rather than a container, and exposing it to your Kubernetes workload may be worth considering.
@@ -8,7 +8,7 @@ date: 2019-04-16

Kubernetes is well-known for running scalable workloads. It scales your workloads based on their resource usage. When a workload is scaled up, more instances of the application get created. When the application is critical for your product, you want to make sure that these new instances are scheduled even when your cluster is under resource pressure. One obvious solution to this problem is to over-provision your cluster resources to have some amount of slack resources available for scale-up situations. This approach often works, but costs more as you would have to pay for the resources that are idle most of the time.

[Pod priority and preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads.
[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads.

## Guaranteed scheduling with controlled cost
@@ -55,7 +55,7 @@ The team has made progress in the last few months that is well worth celebrating

- The K8s-Infrastructure Working Group released an automated billing report that they start every meeting off by reviewing as a group.
- DNS for k8s.io and kubernetes.io are also fully [community-owned](https://groups.google.com/g/kubernetes-dev/c/LZTYJorGh7c/m/u-ydk-yNEgAJ), with community members able to [file issues](https://github.com/kubernetes/k8s.io/issues/new?assignees=&labels=wg%2Fk8s-infra&template=dns-request.md&title=DNS+REQUEST%3A+%3Cyour-dns-record%3E) to manage records.
- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use.
- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use.
- The Kubernetes [publishing-bot](https://github.com/kubernetes/publishing-bot) responsible for keeping k8s.io/kubernetes/staging repositories published to their own top-level repos (For example: [kubernetes/api](https://github.com/kubernetes/api)) runs on a community-owned cluster.
- The gcsweb.k8s.io service used to provide anonymous access to GCS buckets for kubernetes artifacts runs on a community-owned cluster.
- There is also an automated process of promoting all our container images. This includes a fully documented infrastructure, managed by the Kubernetes community, with automated processes for provisioning permissions.
@@ -186,7 +186,7 @@ metadata:

### Role Oriented Design

When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow Gateway API to adapt to different organizational models and implementations well into the future.
When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API is not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow the Gateway API to adapt to different organizational models and implementations well into the future.

### Try it out and get involved
@@ -0,0 +1,49 @@
---
layout: blog
title: "Announcing Kubernetes Community Group Annual Reports"
description: >
  Introducing brand new Kubernetes Community Group Annual Reports for
  Special Interest Groups and Working Groups.
date: 2021-06-28T10:00:00-08:00
slug: Announcing-Kubernetes-Community-Group-Annual-Reports
---

**Authors:** Divya Mohan

{{< figure src="k8s_annual_report_2020.svg" alt="Community annual report 2020" link="https://www.cncf.io/reports/kubernetes-community-annual-report-2020/" >}}

Given the growth and scale of the Kubernetes project, the existing reporting mechanisms were proving to be inadequate and challenging.
Kubernetes is a large open source project. With over 100000 commits just to the main k/kubernetes repository, hundreds of other code
repositories in the project, and thousands of contributors, there's a lot going on. In fact, there are 37 contributor groups at the time of
writing. We also value all forms of contribution and not just code changes.

With that context in mind, the challenge of reporting on all this activity was a call to action for exploring better options. Therefore
inspired by the Apache Software Foundation’s [open guide to PMC Reporting](https://www.apache.org/foundation/board/reporting) and the
[CNCF project Annual Reporting](https://www.cncf.io/cncf-annual-report-2020/), the Kubernetes project is proud to announce the
**Kubernetes Community Group Annual Reports for Special Interest Groups (SIGs) and Working Groups (WGs)**. In its flagship edition,
the [2020 Summary report](https://www.cncf.io/reports/kubernetes-community-annual-report-2020/) focuses on bettering the
Kubernetes ecosystem by assessing and promoting the healthiness of the groups within the upstream community.

Previously, the mechanisms for the Kubernetes project overall to report on groups and their activities were
[devstats](https://k8s.devstats.cncf.io/), GitHub data, and issues, used to measure the healthiness of a given UG/WG/SIG/Committee. As a
project spanning several diverse communities, it was essential to have something that captured the human side of things. With 50,000+
contributors, it’s easy to assume that the project has enough help and this report surfaces more information than /help-wanted and
/good-first-issue for end users. This is how we sustain the project. Paraphrasing one of the Steering Committee members,
[Paris Pittman](https://github.com/parispittman), “There was a requirement for tighter feedback loops - ones that involved more than just
GitHub data and issues. Given that Kubernetes, as a project, has grown in scale and number of contributors over the years, we have
outgrown the existing reporting mechanisms."

The existing communication channels between the Steering committee members and the folks leading the groups and committees were also required
to be made as open and as bi-directional as possible. Towards achieving this very purpose, every group and committee has been assigned a
liaison from among the steering committee members for kick off, help, or guidance needed throughout the process. According to
[Davanum Srinivas a.k.a. dims](https://github.com/dims), “... That was one of the main motivations behind this report. People (leading the
groups/committees) know that they can reach out to us and there’s a vehicle for them to reach out to us… This is our way of setting up a
two-way feedback for them." The progress on these action items would be updated and tracked on the monthly Steering Committee meetings
ensuring that this is not a one-off activity. Quoting [Nikhita Raghunath](https://github.com/nikhita), one of the Steering Committee members,
“... Once we have a base, the liaisons will work with these groups to ensure that the problems are resolved. When we have a report next year,
we’ll have a look at the progress made and how we could still do better. But the idea is definitely to not stop at the report.”

With this report, we hope to empower our end user communities with information that they can use to identify ways in which they can support
the project as well as a sneak peek into the roadmap for upcoming features. As a community, we thrive on feedback and would love to hear your
views about the report. You can get in touch with the [Steering Committee](https://github.com/kubernetes/steering#contact) via
[Slack](https://kubernetes.slack.com/messages/steering-committee) or via the [mailing list](steering@kubernetes.io).
File diff suppressed because one or more lines are too long
(Image file added; size after: 1.2 MiB)
@@ -83,8 +83,11 @@ As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).

When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure.
The kubelet sends this information to the CRI container runtime and the runtime writes the container logs to the given location.
The two kubelet parameters [`containerLogMaxSize` and `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
in [kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/)
can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.

When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
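As a minimal sketch (not taken from this change), a kubelet config file that sets those two parameters might look like the following; the values shown are illustrative, not recommendations:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate each container log file once it reaches 10 MiB,
# and keep at most 5 rotated files per container.
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
```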
@@ -1235,10 +1235,6 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
- A user who can create a Pod that uses a secret can also see the value of that secret. Even
if the API server policy does not allow that user to read the Secret, the user could
run a Pod which exposes the secret.
- Currently, anyone with root permission on any node can read _any_ secret from the API server,
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.

## {{% heading "whatsnext" %}}
@@ -77,6 +77,20 @@ the pull policy of any object after its initial creation.

When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.

### ImagePullBackOff

When a kubelet starts creating containers for a Pod using a container runtime,
it might be possible that the container is in [Waiting](/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting)
state because of `ImagePullBackOff`.

The status `ImagePullBackOff` means that a container could not start because Kubernetes
could not pull a container image (for reasons such as invalid image name, or pulling
from a private registry without `imagePullSecret`). The `BackOff` part indicates
that Kubernetes will keep trying to pull the image, with an increasing back-off delay.

Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
which is 300 seconds (5 minutes).

## Multi-architecture images with image indexes

As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
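For illustration, a hedged Pod sketch that pulls from a private registry; the image and Secret names are placeholders, and the `regcred` Secret is assumed to have been created beforehand as a docker-registry Secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo                    # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0  # placeholder private image
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
  - name: regcred                              # assumed pre-created docker-registry Secret
```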
@@ -51,8 +51,7 @@ Some of the things that you can use an operator to automate include:
* choosing a leader for a distributed application without an internal
member election process

What might an Operator look like in more detail? Here's an example in more
detail:
What might an Operator look like in more detail? Here's an example:

1. A custom resource named SampleDB, that you can configure into the cluster.
2. A Deployment that makes sure a Pod is running that contains the
@@ -124,6 +123,7 @@ Operator.
## {{% heading "whatsnext" %}}

* Read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md).
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
* [Publish](https://operatorhub.io/) your operator for other people to use
@@ -11,7 +11,8 @@ weight: 30

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25.
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. For more information on the deprecation,
see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).

Pod Security Policies enable fine-grained authorization of pod creation and
updates.
@@ -48,13 +49,12 @@ administrator to control the following:

## Enabling Pod Security Policies

Pod security policy control is implemented as an optional (but recommended)
[admission
controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy). PodSecurityPolicies
are enforced by [enabling the admission
Pod security policy control is implemented as an optional [admission
controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy).
PodSecurityPolicies are enforced by [enabling the admission
controller](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in),
but doing so without authorizing any policies **will prevent any pods from being
created** in the cluster.
but doing so without authorizing any policies **will prevent any pods from being created** in the
cluster.

Since the pod security policy API (`policy/v1beta1/podsecuritypolicy`) is
enabled independently of the admission controller, for existing clusters it is
@@ -110,7 +110,11 @@ roleRef:
name: <role name>
apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
# Authorize all service accounts in a namespace (recommended):
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:serviceaccounts:<authorized namespace>
# Authorize specific service accounts (not recommended):
- kind: ServiceAccount
name: <authorized service account name>
namespace: <authorized pod namespace>
@@ -139,6 +143,40 @@ Examples](/docs/reference/access-authn-authz/rbac#role-binding-examples).
For a complete example of authorizing a PodSecurityPolicy, see
[below](#example).

### Recommended Practice

PodSecurityPolicy is being replaced by a new, simplified `PodSecurity` {{< glossary_tooltip
text="admission controller" term_id="admission-controller" >}}. For more details on this change, see
[PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). Follow these
guidelines to simplify migration from PodSecurityPolicy to the new admission controller:

1. Limit your PodSecurityPolicies to the policies defined by the [Pod Security Standards](/docs/concepts/security/pod-security-standards):
   - {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
   - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
   - {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}}

2. Only bind PSPs to entire namespaces, by using the `system:serviceaccounts:<namespace>` group
(where `<namespace>` is the target namespace). For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows all pods in the "development" namespace to use the baseline PSP.
kind: ClusterRoleBinding
metadata:
  name: psp-baseline-namespaces
roleRef:
  kind: ClusterRole
  name: psp-baseline
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:development
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:canary
  apiGroup: rbac.authorization.k8s.io
```

### Troubleshooting
@@ -661,8 +699,10 @@ Refer to the [Sysctl documentation](

## {{% heading "whatsnext" %}}

- See [PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) to learn about
the future of pod security policy.

- See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.

- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
@@ -193,7 +193,7 @@ resources based on the filesystems on the node.
If the node has a dedicated `imagefs` filesystem for container runtimes to use,
the kubelet does the following:

* If the `nodefs` filesystem meets the eviction threshlds, the kubelet garbage collects
* If the `nodefs` filesystem meets the eviction thresholds, the kubelet garbage collects
dead pods and containers.
* If the `imagefs` filesystem meets the eviction thresholds, the kubelet
deletes all unused images.
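As a hedged illustration only, a kubelet configuration that sets explicit hard eviction thresholds for these two filesystems could look like this; the percentages are arbitrary examples, not defaults taken from this page:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"    # act when the node's main filesystem runs low
  imagefs.available: "15%"   # act when the dedicated image filesystem runs low
```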
@@ -266,9 +266,23 @@ This ensures that DaemonSet pods are never evicted due to these problems.

## Taint Nodes by Condition

The node lifecycle controller automatically creates taints corresponding to
Node conditions with `NoSchedule` effect.
Similarly the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/pod-eviction#node-conditions).

The scheduler checks taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
For example, if the `DiskPressure` node condition is active, the control plane
adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
onto the affected node. If the `MemoryPressure` node condition is active, the
control plane adds the `node.kubernetes.io/memory-pressure` taint.

You can ignore node conditions for newly created pods by adding the corresponding
Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
or `Burstable` QoS classes (even pods with no memory request set) as if they are
able to cope with memory pressure, while new `BestEffort` pods are not scheduled
onto the affected node.

The DaemonSet controller automatically adds the following `NoSchedule`
tolerations to all daemons, to prevent DaemonSets from breaking.
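To make the toleration mechanism concrete, here is a minimal sketch of a Pod that chooses to tolerate the memory-pressure taint described above (the Pod name is a placeholder; the pause image is the one referenced elsewhere in this change):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pressure-tolerant-demo    # placeholder name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.5
  tolerations:
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule
```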
@@ -282,7 +296,6 @@ tolerations to all daemons, to prevent DaemonSets from breaking.
Adding these tolerations ensures backward compatibility. You can also add
arbitrary tolerations to DaemonSets.

## {{% heading "whatsnext" %}}

* Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it
@@ -86,7 +86,7 @@ enforced/disallowed:
<tr>
<td>Capabilities</td>
<td>
Adding additional capabilities beyond the <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities">default set</a> must be disallowed.<br>
Adding <tt>NET_RAW</tt> or capabilities beyond the <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities">default set</a> must be disallowed.<br>
<br><b>Restricted Fields:</b><br>
spec.containers[*].securityContext.capabilities.add<br>
spec.initContainers[*].securityContext.capabilities.add<br>
@@ -194,7 +194,7 @@ well as lower-trust users.The following listed controls should be enforced/disal
<tr>
<td>Volume Types</td>
<td>
In addition to restricting HostPath volumes, the restricted profile limits usage of non-core volume types to those defined through PersistentVolumes.<br>
In addition to restricting HostPath volumes, the restricted profile limits usage of non-ephemeral volume types to those defined through PersistentVolumes.<br>
<br><b>Restricted Fields:</b><br>
spec.volumes[*].hostPath<br>
spec.volumes[*].gcePersistentDisk<br>
@@ -216,7 +216,6 @@ well as lower-trust users.The following listed controls should be enforced/disal
spec.volumes[*].portworxVolume<br>
spec.volumes[*].scaleIO<br>
spec.volumes[*].storageos<br>
spec.volumes[*].csi<br>
<br><b>Allowed Values:</b> undefined/nil<br>
</td>
</tr>
@@ -50,7 +50,7 @@ options ndots:5
```

In summary, a pod in the _test_ namespace can successfully resolve either
`data.prod` or `data.prod.cluster.local`.
`data.prod` or `data.prod.svc.cluster.local`.

### DNS Records
@@ -189,7 +189,7 @@ and pre-created PVs, but you'll need to look at the documentation for a specific
to see its supported topology keys and examples.

{{< note >}}
If you choose to use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec
to specify node affinity. If `nodeName` is used in this case, the scheduler will be bypassed and PVC will remain in `pending` state.

Instead, you can use node selector for hostname in this case as shown below.
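The page's own example is outside this excerpt; as a rough sketch under that guidance, a node selector on the hostname label (rather than `nodeName`) might look like the following, with the node and claim names as placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-volume-demo           # placeholder name
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1  # placeholder node hostname
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.5
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-claim        # placeholder PVC backed by a local PV
```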
@@ -658,11 +658,11 @@ metadata:
provisioner: kubernetes.io/azure-disk
parameters:
storageaccounttype: Standard_LRS
kind: Shared
kind: managed
```

* `storageaccounttype`: Azure storage account Sku tier. Default is empty.
* `kind`: Possible values are `shared` (default), `dedicated`, and `managed`.
* `kind`: Possible values are `shared`, `dedicated`, and `managed` (default).
When `kind` is `shared`, all unmanaged disks are created in a few shared
storage accounts in the same resource group as the cluster. When `kind` is
`dedicated`, a new dedicated storage account will be created for the new
@@ -529,6 +529,15 @@ See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{< para

### hostPath {#hostpath}

{{< warning >}}
HostPath volumes present many security risks, and it is a best practice to avoid the use of
HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the
required file or directory, and mounted as ReadOnly.

If restricting HostPath access to specific directories through AdmissionPolicy, `volumeMounts` MUST
be required to use `readOnly` mounts for the policy to be effective.
{{< /warning >}}

A `hostPath` volume mounts a file or directory from the host node's filesystem
into your Pod. This is not something that most Pods will need, but it offers a
powerful escape hatch for some applications.
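As a hedged sketch of the practice recommended in the warning above (scope the volume to a single required file and mount it read-only); all names and paths are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-readonly-demo     # placeholder name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.5
    volumeMounts:
    - name: host-config
      mountPath: /etc/app/host.conf
      readOnly: true               # mounted as ReadOnly, per the warning
  volumes:
  - name: host-config
    hostPath:
      path: /etc/app-host.conf     # placeholder: a single required file
      type: File
```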
@@ -558,6 +567,9 @@ The supported values for field `type` are:

Watch out when using this type of volume, because:

* HostPaths can expose privileged system credentials (such as for the Kubelet) or privileged APIs
(such as container runtime socket), which can be used for container escape or to attack other
parts of the cluster.
* Pods with identical configuration (such as created from a PodTemplate) may
behave differently on different nodes due to different files on the nodes
* The files or directories created on the underlying hosts are only writable by root. You
@@ -82,12 +82,11 @@ spec:
You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:

- **maxSkew** describes the degree to which Pods may be unevenly distributed.
It's the maximum permitted difference between the number of matching Pods in
any two topology domains of a given topology type. It must be greater than
zero. Its semantics differs according to the value of `whenUnsatisfiable`:
It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`:
- when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum
permitted difference between the number of matching pods in the target
topology and the global minimum.
topology and the global minimum
(the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, the global minimum is 0).
- when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher
precedence to topologies that would help reduce the skew.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
@@ -96,6 +95,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.

When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.

You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.

### Example: One TopologySpreadConstraint
@@ -387,7 +388,8 @@ for more details.

## Known Limitations

- Scaling down a Deployment may result in imbalanced Pods distribution.
- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.
You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)

## {{% heading "whatsnext" %}}
@@ -79,6 +79,10 @@ operator to use or manage a cluster.
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/)

## Config API for kubeadm

* [v1beta2](/docs/reference/config-api/kubeadm-config.v1beta2/)

## Design Docs

An archive of the design docs for Kubernetes functionality. Good starting points are
@@ -56,11 +56,11 @@ state for some duration:

* Approved requests: automatically deleted after 1 hour
* Denied requests: automatically deleted after 1 hour
* Pending requests: automatically deleted after 1 hour
* Pending requests: automatically deleted after 24 hours

## Signers

All signers should provide information about how they work so that clients can predict what will happen to their CSRs.
Custom signerNames can also be specified. All signers should provide information about how they work so that clients can predict what will happen to their CSRs.
This includes:

1. **Trust distribution**: how trust (CA bundles) are distributed.
@@ -282,7 +282,7 @@ Of course you need to set up the webhook server to handle these authentications.

### Request

Webhooks are sent a POST request, with `Content-Type: application/json`,
Webhooks are sent as POST requests, with `Content-Type: application/json`,
with an `AdmissionReview` API object in the `admission.k8s.io` API group
serialized to JSON as the body.
@@ -59,6 +59,7 @@ different Kubernetes components.
| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `AppArmor` | `true` | Beta | 1.4 | |
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
@@ -523,6 +524,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`.
Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
for more details.
- `ControllerManagerLeaderMigration`: Enables Leader Migration for
[kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and
[cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager) which allows a cluster operator to live migrate
controllers from the kube-controller-manager into an external controller-manager
(e.g. the cloud-controller-manager) in an HA cluster without downtime.
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime. The default max size of a log file is 10MB and the
@@ -785,7 +791,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/)
feature to account for pod overheads.
- `PodPriority`: Enable the descheduling and preemption of Pods based on their
[priorities](/docs/concepts/configuration/pod-priority-preemption/).
[priorities](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
for more details.
File diff suppressed because it is too large
@@ -2,7 +2,7 @@
title: Pod Priority
id: pod-priority
date: 2019-01-31
full_link: /docs/concepts/configuration/pod-priority-preemption/#pod-priority
full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority
short_description: >
Pod Priority indicates the importance of a Pod relative to other Pods.
@@ -14,4 +14,4 @@ tags:

<!--more-->

[Pod Priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority) gives the ability to set scheduling priority of a Pod to be higher and lower than other Pods — an important feature for production clusters workload.
[Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) gives the ability to set scheduling priority of a Pod to be higher and lower than other Pods — an important feature for production clusters workload.
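For illustration only (the names and the priority value are placeholders), pod priority is typically expressed through a PriorityClass that Pods then reference:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority              # placeholder name
value: 1000000                     # larger value = higher priority
globalDefault: false
description: "Placeholder class for critical production workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app              # placeholder name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.5
```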
@@ -2,7 +2,7 @@
title: Preemption
id: preemption
date: 2019-01-31
full_link: /docs/concepts/configuration/pod-priority-preemption/#preemption
full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption
short_description: >
Preemption logic in Kubernetes helps a pending Pod to find a suitable Node by evicting low priority Pods existing on that Node.
@@ -14,4 +14,4 @@ tags:

<!--more-->

If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/configuration/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible.
If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible.
@@ -192,7 +192,21 @@ For example, if there are 1,253 pods on the cluster and the client wants to rece
}
```

Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates.
Note that the `resourceVersion` of the list remains constant across each request,
indicating the server is showing us a consistent snapshot of the pods. Pods that
are created, updated, or deleted after version `10245` would not be shown unless
the user makes a list request without the `continue` token. This allows clients
to break large requests into smaller chunks and then perform a watch operation
on the full set without missing any updates.

`remainingItemCount` is the number of subsequent items in the list which are not
included in this list response. If the list request contained label or field selectors,
then the number of remaining items is unknown and the API server does not include
a `remainingItemCount` field in its response. If the list is complete (either
because it is not chunking or because this is the last chunk), then there are no
more remaining items and the API server does not include a `remainingItemCount`
field in its response. The intended use of the `remainingItemCount` is estimating
the size of a collection.

## Receiving resources as Tables
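To make those numbers concrete, here is a sketch (rendered as YAML for readability; the API itself returns JSON) of the list metadata a client might see after receiving the first 500 of the 1,253 pods; the continue token is a placeholder:

```yaml
metadata:
  resourceVersion: "10245"              # constant across all chunks
  continue: "<opaque-continue-token>"   # placeholder; send back to get the next chunk
  remainingItemCount: 753               # 1,253 total minus the 500 items in this chunk
```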
@@ -89,23 +89,11 @@ If you notice that `kubeadm init` hangs after printing out the following line:
This may be caused by a number of problems. The most common are:

- network connection problems. Check that your machine has full network connectivity before continuing.
- the default cgroup driver configuration for the kubelet differs from that used by Docker.
Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:

```shell
error: failed to run Kubelet: failed to create kubelet:
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
```

There are two common ways to fix the cgroup driver problem:

1. Install Docker again following instructions
[here](/docs/setup/production-environment/container-runtimes/#docker).

1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
[Configure cgroup driver used by kubelet on control-plane node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)

- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
and investigating each container by running `docker logs`. For other container runtimes, see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug-application-cluster/crictl/).

## kubeadm blocks when removing managed containers
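As a minimal sketch of the kind of setting the linked cgroup driver page deals with (the value assumes a systemd-based host and must match the container runtime's cgroup driver):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # must match the cgroup driver used by the container runtime
```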
@@ -224,7 +212,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (

By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apserver logs. To fix the issue you must follow these steps:
in kube-apiserver logs. To fix the issue you must follow these steps:

1. Backup and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node.
1. From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute
@@ -102,6 +102,8 @@ limitation and compatibility rules will change.

Microsoft maintains a Windows pause infrastructure container at
`mcr.microsoft.com/oss/kubernetes/pause:3.4.1`.
Kubernetes maintains a multi-architecture image `k8s.gcr.io/pause:3.5` that
supports Linux as well as Windows.

#### Compute
@@ -116,6 +116,9 @@ manually through `easyrsa`, `openssl` or `cfssl`.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
1. View the certificate signing request:

openssl req -noout -text -in ./server.csr
1. View the certificate:

openssl x509 -noout -text -in ./server.crt
@@ -10,7 +10,7 @@ aliases: [ '/docs/tasks/administer-cluster/highly-available-master/' ]

{{< feature-state for_k8s_version="v1.5" state="alpha" >}}

You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine.
You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine. However, these scripts are not suitable for any sort of production use; they are widely used in the project's CI.
This document describes how to use kube-up/down scripts to manage a highly available (HA) control plane and how HA control planes are implemented for use with GCE.
@@ -156,14 +156,14 @@ and the IP address of the first replica will be promoted to IP address of load b
Similarly, after removal of the penultimate control plane node, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
Please note that creation and removal of load balancer are complex operations and it may take some time (~20 minutes) for them to propagate.

### Master service & kubelets
### Control plane service & kubelets

Instead of trying to keep an up-to-date list of Kubernetes apiserver in the Kubernetes service,
the system directs all traffic to the external IP:

* in case of a single node control plane, the IP points to the control plane node,

* in case of an HA control plane, the IP points to the load balancer in-front of the masters.
* in case of an HA control plane, the IP points to the load balancer in-front of the control plane nodes.

Similarly, the external IP will be used by kubelets to communicate with the control plane.
@@ -78,7 +78,8 @@ A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and can not be used for new objects

See the [design doc](https://git.k8s.io/community/contributors/design-proposals/architecture/namespaces.md#phases) for more details.
For more details, see [Namespace](/docs/reference/kubernetes-api/cluster-resources/namespace-v1/)
in the API reference.

## Creating a new namespace
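The creation steps themselves are outside this excerpt; as a minimal sketch, a namespace manifest is just the following (the name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # placeholder name
```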
@@ -380,5 +380,5 @@ JWKS URI is required to use the `https` scheme.
See also:

- [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md)
- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery)
- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)
@ -24,7 +24,7 @@ a Pod or Container. Security context settings include, but are not limited to:
|
|||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/): Use program profiles to restrict the capabilities of individual programs.
|
||||
|
||||
* [Seccomp](https://en.wikipedia.org/wiki/Seccomp): Filter a process's system calls.
|
||||
* [Seccomp](/docs/tutorials/clusters/seccomp/): Filter a process's system calls.
|
||||
|
||||
* AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the [`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has `CAP_SYS_ADMIN`.
|
||||
|
||||
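A minimal sketch of these settings in a Pod manifest (the Pod name and image are placeholders; adjust for your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo      # placeholder name
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault         # filter system calls with the runtime's default profile
  containers:
  - name: app
    image: nginx                   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false   # keeps no_new_privs set on the container process
```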
|
|
|
@ -154,22 +154,26 @@ from the YAML you used to create it:
|
|||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: List
|
||||
items:
|
||||
- apiVersion: stable.example.com/v1
|
||||
kind: CronTab
|
||||
metadata:
|
||||
creationTimestamp: 2017-05-31T12:56:35Z
|
||||
annotations:
|
||||
kubectl.kubernetes.io/last-applied-configuration: |
|
||||
{"apiVersion":"stable.example.com/v1","kind":"CronTab","metadata":{"annotations":{},"name":"my-new-cron-object","namespace":"default"},"spec":{"cronSpec":"* * * * */5","image":"my-awesome-cron-image"}}
|
||||
creationTimestamp: "2021-06-20T07:35:27Z"
|
||||
generation: 1
|
||||
name: my-new-cron-object
|
||||
namespace: default
|
||||
resourceVersion: "285"
|
||||
uid: 9423255b-4600-11e7-af6a-28d2447dc82b
|
||||
resourceVersion: "1326"
|
||||
uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9
|
||||
spec:
|
||||
cronSpec: '* * * * */5'
|
||||
image: my-awesome-cron-image
|
||||
kind: List
|
||||
metadata:
|
||||
resourceVersion: ""
|
||||
selfLink: ""
|
||||
```
|
||||
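Output in this shape is typically produced by listing the custom objects in YAML form, for example (assuming the `CronTab` objects created earlier on this page):

```shell
kubectl get crontab -o yaml
```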
|
||||
## Delete a CustomResourceDefinition
|
||||
|
|
|
@ -3,7 +3,7 @@ reviewers:
|
|||
- rickypai
|
||||
- thockin
|
||||
title: Adding entries to Pod /etc/hosts with HostAliases
|
||||
content_type: concept
|
||||
content_type: task
|
||||
weight: 60
|
||||
min-kubernetes-server-version: 1.7
|
||||
---
|
||||
|
@ -16,7 +16,7 @@ Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostn
|
|||
Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
|
||||
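A minimal sketch of the supported approach, adding `hostAliases` to a Pod spec (the IP address and hostnames here are examples only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod            # example name
spec:
  hostAliases:
  - ip: "127.0.0.1"                # example address
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]  # print the resulting hosts file
```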
|
||||
|
||||
<!-- body -->
|
||||
<!-- steps -->
|
||||
|
||||
## Default hosts file content
|
||||
|
|
@ -82,6 +82,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
|
|||
If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory:
|
||||
|
||||
```bash
|
||||
chmod +x kubectl
|
||||
mkdir -p ~/.local/bin
|
||||
mv ./kubectl ~/.local/bin/kubectl
|
||||
# and then add ~/.local/bin to $PATH
|
||||
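# For example, with a bash shell (an assumption; adjust for your shell's startup file):
export PATH="$HOME/.local/bin:$PATH"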
|
|
|
@ -25,7 +25,8 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Continue to Module 2<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Home<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Continue to Module 2 ><span class=""></span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -37,7 +37,9 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Continue to Module 3<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button"> < Return to Module 1<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Home<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Continue to Module 3 ><span class=""></span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -29,7 +29,9 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose/expose-intro/" role="button">Continue to Module 4<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">< Return to Module 2<span class="btn"></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Home<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose/expose-intro/" role="button">Continue to Module 4 ><span class="btn"></span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -74,11 +74,11 @@ weight: 10
|
|||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2>Nodes</h2>
|
||||
<p>A Pod always runs on a <b>Node</b>. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.</p>
|
||||
<p>A Pod always runs on a <b>Node</b>. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the control plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster. The control plane's automatic scheduling takes into account the available resources on each Node.</p>
|
||||
|
||||
<p>Every Kubernetes Node runs at least:</p>
|
||||
<ul>
|
||||
<li>Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.</li>
|
||||
<li>Kubelet, a process responsible for communication between the Kubernetes control plane and the Node; it manages the Pods and the containers running on a machine.</li>
|
||||
<li>A container runtime (like Docker) responsible for pulling the container image from a registry, unpacking the container, and running the application.</li>
|
||||
</ul>
|
||||
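For example, you can list the Nodes in a cluster and inspect one of them (replace `<node-name>` with a real Node; output depends on your cluster):

```shell
kubectl get nodes
kubectl describe node <node-name>
```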
|
||||
|
|
|
@ -26,7 +26,9 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/scale/scale-intro/" role="button">Continue to Module 5<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">< Return to Module 3<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Home<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/scale/scale-intro/" role="button">Continue to Module 5 ><span class=""></span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -26,7 +26,9 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/update/update-intro/" role="button">Continue to Module 6<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose/expose-interactive/" role="button">< Return to Module 4<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Home<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/update/update-intro/" role="button">Continue to Module 6 ><span class=""></span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -26,7 +26,8 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Back to Kubernetes Basics<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/scale/scale-interactive/" role="button">< Return to Module 5<span class=""></span></a>
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/" role="button">Return to Kubernetes Basics<span class=""></span></a>
|
||||
</div>
|
||||
</div>
|
||||
</main>
|
||||
|
@ -35,3 +36,7 @@ weight: 20
|
|||
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -14,7 +14,10 @@ source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
|
|||
---
|
||||
|
||||
<!-- overview -->
|
||||
This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
|
||||
This tutorial shows you how to build and deploy a simple _(not production
|
||||
ready)_, multi-tier web application using Kubernetes and
|
||||
[Docker](https://www.docker.com/). This example consists of the following
|
||||
components:
|
||||
|
||||
* A single-instance [Redis](https://www.redis.com/) to store guestbook entries
|
||||
* Multiple web frontend instances
|
||||
|
@ -65,7 +68,7 @@ The manifest file, included below, specifies a Deployment controller that runs a
|
|||
|
||||
The response should be similar to this:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s
|
||||
```
|
||||
|
@ -78,7 +81,10 @@ The manifest file, included below, specifies a Deployment controller that runs a
|
|||
|
||||
### Creating the Redis leader Service
|
||||
|
||||
The guestbook application needs to communicate to the Redis to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis Pod. A Service defines a policy to access the Pods.
|
||||
The guestbook application needs to communicate with Redis to write its data.
|
||||
You need to apply a [Service](/docs/concepts/services-networking/service/) to
|
||||
proxy the traffic to the Redis Pod. A Service defines a policy to access the
|
||||
Pods.
|
||||
|
||||
{{< codenew file="application/guestbook/redis-leader-service.yaml" >}}
|
||||
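Assuming you use the hosted example manifests referenced by this tutorial, applying the Service looks like:

```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
```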
|
||||
|
@ -101,23 +107,26 @@ The guestbook application needs to communicate to the Redis to write its data. Y
|
|||
|
||||
The response should be similar to this:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
|
||||
redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 16s
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
This manifest file creates a Service named `redis-leader` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis Pod.
|
||||
This manifest file creates a Service named `redis-leader` with a set of labels
|
||||
that match the labels previously defined, so the Service routes network
|
||||
traffic to the Redis Pod.
|
||||
{{< /note >}}
|
||||
|
||||
### Set up Redis followers
|
||||
|
||||
Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.
|
||||
Although the Redis leader is a single Pod, you can make it highly available
|
||||
and meet traffic demands by adding a few Redis followers, or replicas.
|
||||
|
||||
{{< codenew file="application/guestbook/redis-follower-deployment.yaml" >}}
|
||||
|
||||
1. Apply the Redis Service from the following `redis-follower-deployment.yaml` file:
|
||||
1. Apply the Redis Deployment from the following `redis-follower-deployment.yaml` file:
|
||||
|
||||
<!---
|
||||
for local testing of the content via relative file path
|
||||
|
@ -136,15 +145,18 @@ Although the Redis leader is a single Pod, you can make it highly available and
|
|||
|
||||
The response should be similar to this:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s
|
||||
redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s
|
||||
redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m
|
||||
```
|
||||
|
||||
### Creating the Redis follower service
|
||||
|
||||
The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another [Service](/docs/concepts/services-networking/service/).
|
||||
The guestbook application needs to communicate with the Redis followers to
|
||||
read data. To make the Redis followers discoverable, you must set up another
|
||||
[Service](/docs/concepts/services-networking/service/).
|
||||
|
||||
{{< codenew file="application/guestbook/redis-follower-service.yaml" >}}
|
||||
|
||||
|
@ -167,7 +179,7 @@ The guestbook application needs to communicate with the Redis followers to read
|
|||
|
||||
The response should be similar to this:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h
|
||||
redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s
|
||||
|
@ -175,14 +187,21 @@ The guestbook application needs to communicate with the Redis followers to read
|
|||
```
|
||||
|
||||
{{< note >}}
|
||||
This manifest file creates a Service named `redis-follower` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis Pod.
|
||||
This manifest file creates a Service named `redis-follower` with a set of
|
||||
labels that match the labels previously defined, so the Service routes network
|
||||
traffic to the Redis Pod.
|
||||
{{< /note >}}
|
||||
|
||||
## Set up and Expose the Guestbook Frontend
|
||||
|
||||
Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.
|
||||
Now that you have the Redis storage of your guestbook up and running, start
|
||||
the guestbook web servers. Like the Redis followers, the frontend is deployed
|
||||
using a Kubernetes Deployment.
|
||||
|
||||
The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.
|
||||
The guestbook app uses a PHP frontend. It is configured to communicate with
|
||||
either the Redis follower or leader Services, depending on whether the request
|
||||
is a read or a write. The frontend exposes a JSON interface, and serves a
|
||||
jQuery-Ajax-based UX.
|
||||
|
||||
### Creating the Guestbook Frontend Deployment
|
||||
|
||||
|
@ -216,12 +235,22 @@ The guestbook app uses a PHP frontend. It is configured to communicate with eith
|
|||
|
||||
### Creating the Frontend Service
|
||||
|
||||
The `Redis` Services you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services-service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
|
||||
The `Redis` Services you applied are only accessible within the Kubernetes
|
||||
cluster because the default type for a Service is
|
||||
[ClusterIP](/docs/concepts/services-networking/service/#publishing-services-service-types).
|
||||
`ClusterIP` provides a single IP address for the set of Pods the Service is
|
||||
pointing to. This IP address is accessible only within the cluster.
|
||||
|
||||
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However a Kubernetes user you can use `kubectl port-forward` to access the service even though it uses a `ClusterIP`.
|
||||
If you want guests to be able to access your guestbook, you must configure the
|
||||
frontend Service to be externally visible, so a client can request the Service
|
||||
from outside the Kubernetes cluster. However, as a Kubernetes user, you can use
|
||||
`kubectl port-forward` to access the service even though it uses a
|
||||
`ClusterIP`.
|
||||
|
||||
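For example, once the frontend Service described below exists, a port-forward to it could look like this (the local port is arbitrary):

```shell
kubectl port-forward svc/frontend 8080:80
```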
{{< note >}}
|
||||
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment `type: LoadBalancer`.
|
||||
Some cloud providers, like Google Compute Engine or Google Kubernetes Engine,
|
||||
support external load balancers. If your cloud provider supports load
|
||||
balancers and you want to use it, uncomment `type: LoadBalancer`.
|
||||
{{< /note >}}
|
||||
|
||||
{{< codenew file="application/guestbook/frontend-service.yaml" >}}
|
||||
|
@ -272,7 +301,8 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
|
|||
|
||||
### Viewing the Frontend Service via `LoadBalancer`
|
||||
|
||||
If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` you need to find the IP address to view your Guestbook.
|
||||
If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer`,
|
||||
you need to find the IP address to view your Guestbook.
|
||||
|
||||
1. Run the following command to get the IP address for the frontend Service.
|
||||
|
||||
|
@ -290,12 +320,15 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
|
|||
1. Copy the external IP address, and load the page in your browser to view your guestbook.
|
||||
|
||||
{{< note >}}
|
||||
Try adding some guestbook entries by typing in a message, and clicking Submit. The message you typed appears in the frontend. This message indicates that data is successfully added to Redis through the Services you created earlier.
|
||||
Try adding some guestbook entries by typing in a message, and clicking Submit.
|
||||
The message you typed appears in the frontend. This message indicates that
|
||||
data is successfully added to Redis through the Services you created earlier.
|
||||
{{< /note >}}
|
||||
|
||||
## Scale the Web Frontend
|
||||
|
||||
You can scale up or down as needed because your servers are defined as a Service that uses a Deployment controller.
|
||||
You can scale up or down as needed because your servers are defined as a
|
||||
Service that uses a Deployment controller.
|
||||
|
||||
1. Run the following command to scale up the number of frontend Pods:
|
||||
|
||||
|
@ -348,7 +381,8 @@ You can scale up or down as needed because your servers are defined as a Service
|
|||
|
||||
## {{% heading "cleanup" %}}
|
||||
|
||||
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
|
||||
Deleting the Deployments and Services also deletes any running Pods. Use
|
||||
labels to delete multiple resources with one command.
|
||||
|
||||
1. Run the following commands to delete all Pods, Deployments, and Services.
|
||||
|
||||
|
|
|
@ -6,20 +6,16 @@ metadata:
|
|||
# Optional: Allow the default AppArmor profile, requires setting the default.
|
||||
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
|
||||
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
|
||||
# Optional: Allow the default seccomp profile, requires setting the default.
|
||||
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default,unconfined'
|
||||
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'unconfined'
|
||||
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
|
||||
spec:
|
||||
privileged: false
|
||||
# The moby default capability set, defined here:
|
||||
# https://github.com/moby/moby/blob/0a5cec2833f82a6ad797d70acbf9cbbaf8956017/oci/caps/defaults.go#L6-L19
|
||||
# The moby default capability set, minus NET_RAW
|
||||
allowedCapabilities:
|
||||
- 'CHOWN'
|
||||
- 'DAC_OVERRIDE'
|
||||
- 'FSETID'
|
||||
- 'FOWNER'
|
||||
- 'MKNOD'
|
||||
- 'NET_RAW'
|
||||
- 'SETGID'
|
||||
- 'SETUID'
|
||||
- 'SETFCAP'
|
||||
|
@ -36,15 +32,16 @@ spec:
|
|||
- 'projected'
|
||||
- 'secret'
|
||||
- 'downwardAPI'
|
||||
# Assume that persistentVolumes set up by the cluster admin are safe to use.
|
||||
# Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
|
||||
- 'csi'
|
||||
- 'persistentVolumeClaim'
|
||||
- 'ephemeral'
|
||||
# Allow all other non-hostpath volume types.
|
||||
- 'awsElasticBlockStore'
|
||||
- 'azureDisk'
|
||||
- 'azureFile'
|
||||
- 'cephFS'
|
||||
- 'cinder'
|
||||
- 'csi'
|
||||
- 'fc'
|
||||
- 'flexVolume'
|
||||
- 'flocker'
|
||||
|
@ -67,6 +64,9 @@ spec:
|
|||
runAsUser:
|
||||
rule: 'RunAsAny'
|
||||
seLinux:
|
||||
# This policy assumes the nodes are using AppArmor rather than SELinux.
|
||||
# The PSP SELinux API cannot express the SELinux Pod Security Standards,
|
||||
# so if using SELinux, you must choose a more restrictive default.
|
||||
rule: 'RunAsAny'
|
||||
supplementalGroups:
|
||||
rule: 'RunAsAny'
|
||||
|
|
|
@ -5,14 +5,11 @@ metadata:
|
|||
annotations:
|
||||
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
|
||||
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
|
||||
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
|
||||
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
|
||||
spec:
|
||||
privileged: false
|
||||
# Required to prevent escalations to root.
|
||||
allowPrivilegeEscalation: false
|
||||
# This is redundant with non-root + disallow privilege escalation,
|
||||
# but we can provide it for defense in depth.
|
||||
requiredDropCapabilities:
|
||||
- ALL
|
||||
# Allow core volume types.
|
||||
|
@ -22,8 +19,10 @@ spec:
|
|||
- 'projected'
|
||||
- 'secret'
|
||||
- 'downwardAPI'
|
||||
# Assume that persistentVolumes set up by the cluster admin are safe to use.
|
||||
# Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
|
||||
- 'csi'
|
||||
- 'persistentVolumeClaim'
|
||||
- 'ephemeral'
|
||||
hostNetwork: false
|
||||
hostIPC: false
|
||||
hostPID: false
|
||||
|
|
|
@ -76,12 +76,11 @@ Timelines may vary with the severity of bug fixes, but for easier planning we
|
|||
will target the following monthly release points. Unplanned, critical
|
||||
releases may also occur in between these.
|
||||
|
||||
| Monthly Patch Release | Target date |
|
||||
| --------------------- | ----------- |
|
||||
| June 2021 | 2021-06-16 |
|
||||
| July 2021 | 2021-07-14 |
|
||||
| August 2021 | 2021-08-11 |
|
||||
| September 2021 | 2021-09-15 |
|
||||
| Monthly Patch Release | Cherry Pick Deadline | Target date |
|
||||
| --------------------- | -------------------- | ----------- |
|
||||
| July 2021 | 2021-07-10 | 2021-07-14 |
|
||||
| August 2021 | 2021-08-07 | 2021-08-11 |
|
||||
| September 2021 | 2021-09-11 | 2021-09-15 |
|
||||
|
||||
## Detailed Release History for Active Branches
|
||||
|
||||
|
@ -93,6 +92,7 @@ End of Life for **1.21** is **2022-06-28**
|
|||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
|
||||
| 1.21.3 | 2021-07-10 | 2021-07-14 | |
|
||||
| 1.21.2 | 2021-06-12 | 2021-06-16 | |
|
||||
| 1.21.1 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
|
||||
|
||||
|
@ -104,6 +104,7 @@ End of Life for **1.20** is **2022-02-28**
|
|||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
|
||||
| 1.20.9 | 2021-07-10 | 2021-07-14 | |
|
||||
| 1.20.8 | 2021-06-12 | 2021-06-16 | |
|
||||
| 1.20.7 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
|
||||
| 1.20.6 | 2021-04-09 | 2021-04-14 | |
|
||||
|
@ -121,6 +122,7 @@ End of Life for **1.19** is **2021-10-28**
|
|||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| ------------- | -------------------- | ----------- | ------------------------------------------------------------------------- |
|
||||
| 1.19.13 | 2021-07-10 | 2021-07-14 | |
|
||||
| 1.19.12 | 2021-06-12 | 2021-06-16 | |
|
||||
| 1.19.11 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) |
|
||||
| 1.19.10 | 2021-04-09 | 2021-04-14 | |
|
||||
|
|
0
content/es/docs/concepts/overview/object-management-kubectl/_index.md
Executable file → Normal file
Some files were not shown because too many files have changed in this diff.