diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 785257142d..88d163d67c 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -11,7 +11,7 @@ For overall help on editing and submitting pull requests, visit: https://kubernetes.io/docs/contribute/start/#improve-existing-content - Use the default base branch, “master”, if you're documenting existing + Use the default base branch, “main”, if you're documenting existing features in the English localization. If you're working on a different localization (not English), see diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 2feb3cdb2d..8dda114690 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -235,10 +235,12 @@ aliases: - parispittman # authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES sig-release-leads: + - cpanato # SIG Technical Lead - hasheddan # SIG Technical Lead - jeremyrickard # SIG Technical Lead - justaugustus # SIG Chair - LappleApple # SIG Program Manager + - puerco # SIG Technical Lead - saschagrunert # SIG Chair release-engineering-approvers: - cpanato # Release Manager diff --git a/README-zh.md b/README-zh.md index ef259ef2d0..6c594b3934 100644 --- a/README-zh.md +++ b/README-zh.md @@ -4,7 +4,7 @@ # The Kubernetes documentation --> -[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) +[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + +Replace this first line of your content with one to three sentences that summarize the blog post. + +## This is a section heading + +To help the reader, organize your content into sections that contain about three to six paragraphs. + +If you're documenting commands, separate the commands from the outputs, like this: + +1. Verify that the Secret exists by running the following command: + + ```shell + kubectl get secrets + ``` + + The output is similar to this: + + ```shell + NAME TYPE DATA AGE + mysql-pass-c57bb4t7mf Opaque 1 9s + ``` + +You're free to create any sections you like. Below are a few common patterns we see at the end of blog posts. + +## What’s next? + +This optional section describes the future of the thing you've just described in the post. + +## How can I learn more? + +This optional section provides links to more information. Please avoid promoting or over-representing your organization. + +## How do I get involved? + +An optional section that links to resources for readers to get involved, and acknowledgments of individual contributors, such as: + +* [The name of a channel on Slack, #a-channel](https://.slack.com/messages/) + +* [A link to a "contribute" page with more information](). + +* Acknowledgements and thanks to the contributors ([](https://github.com/)) who did X, Y, and Z. + +* Those interested in getting involved with the design and development of , can join the [](https://github.com/project/community/tree/master/). We’re rapidly growing and always welcome new contributors. 
diff --git a/cloudbuild.yaml b/cloudbuild.yaml index 6d2ac0a024..61b5adc5f4 100644 --- a/cloudbuild.yaml +++ b/cloudbuild.yaml @@ -7,6 +7,7 @@ timeout: 1200s options: substitution_option: ALLOW_LOOSE steps: + # It's fine to bump the tag to a recent version, as needed - name: "gcr.io/k8s-testimages/gcb-docker-gcloud:v20190906-745fed4" entrypoint: make env: diff --git a/content/en/_index.html b/content/en/_index.html index 2abc22985c..db4c966102 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -48,7 +48,7 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise


- Revisit KubeCon EU 2021 + Attend KubeCon Europe on May 17-20, 2022
diff --git a/content/en/blog/_posts/2018-04-13-local-persistent-volumes-beta.md b/content/en/blog/_posts/2018-04-13-local-persistent-volumes-beta.md index 71a0fa26d9..a7cabde710 100644 --- a/content/en/blog/_posts/2018-04-13-local-persistent-volumes-beta.md +++ b/content/en/blog/_posts/2018-04-13-local-persistent-volumes-beta.md @@ -140,7 +140,7 @@ The local persistent volume beta feature is not complete by far. Some notable en ## Complementary features -[Pod priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it. +[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it. [Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade. diff --git a/content/en/blog/_posts/2018-07-16-kubernetes-1-11-release-interview.md b/content/en/blog/_posts/2018-07-16-kubernetes-1-11-release-interview.md index 4326924029..25758bc213 100644 --- a/content/en/blog/_posts/2018-07-16-kubernetes-1-11-release-interview.md +++ b/content/en/blog/_posts/2018-07-16-kubernetes-1-11-release-interview.md @@ -94,7 +94,7 @@ JOSH BERKUS: That goes into release notes. I mean, keep in mind that one of the However, stuff happens, and we do occasionally have to do those. And so far, our main way to identify that to people actually is in the release notes. If you look at [the current release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#no-really-you-must-do-this-before-you-upgrade), there are actually two things in there right now that are sort of breaking changes. -One of them is the bit with [priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was... +One of them is the bit with [priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was... TIM PEPPER: The [JSON capitalization case sensitivity](https://github.com/kubernetes/kubernetes/issues/64612). 
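The hunks above update links to the Pod priority and preemption docs. As background for that feature, here is a minimal sketch of a PriorityClass and a Pod that references it; all names, the priority value, and the image are illustrative, not taken from this PR:

```yaml
# A hypothetical PriorityClass: a higher `value` means the scheduler treats
# matching Pods as more important and may preempt lower-priority Pods for them.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload      # illustrative name
value: 1000000                 # illustrative priority value
globalDefault: false
description: "For workloads that must schedule even under resource pressure."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app          # illustrative name
spec:
  priorityClassName: critical-workload
  containers:
  - name: app
    image: registry.example/app:1.0   # illustrative image
```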
diff --git a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md index a28196d568..a786475a67 100644 --- a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md +++ b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md @@ -166,7 +166,7 @@ Some critical state is held outside etcd. Certificates, container images, and ot * Cloud provider specific account and configuration data ## Considerations for your production workloads -Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads. +Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/scheduling-eviction/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads. For stateful services, external attached volume mounts are the standard Kubernetes recommendation for a non-clustered service (e.g., a typical SQL database). At this time Kubernetes managed snapshots of these external volumes is in the category of a [roadmap feature request](https://docs.google.com/presentation/d/1dgxfnroRAu0aF67s-_bmeWpkM1h2LCxe6lB1l1oS0EQ/edit#slide=id.g3ca07c98c2_0_47), likely to align with the Container Storage Interface (CSI) integration. Thus performing backups of such a service would involve application specific, in-pod activity that is beyond the scope of this document. While awaiting better Kubernetes support for a snapshot and backup workflow, running your database service in a VM rather than a container, and exposing it to your Kubernetes workload may be worth considering. diff --git a/content/en/blog/_posts/2019-04-16-pod-priority-and-preemption-in-kubernetes.md b/content/en/blog/_posts/2019-04-16-pod-priority-and-preemption-in-kubernetes.md index 49516da96a..88907e3e4d 100644 --- a/content/en/blog/_posts/2019-04-16-pod-priority-and-preemption-in-kubernetes.md +++ b/content/en/blog/_posts/2019-04-16-pod-priority-and-preemption-in-kubernetes.md @@ -8,7 +8,7 @@ date: 2019-04-16 Kubernetes is well-known for running scalable workloads. It scales your workloads based on their resource usage. When a workload is scaled up, more instances of the application get created. 
When the application is critical for your product, you want to make sure that these new instances are scheduled even when your cluster is under resource pressure. One obvious solution to this problem is to over-provision your cluster resources to have some amount of slack resources available for scale-up situations. This approach often works, but costs more as you would have to pay for the resources that are idle most of the time. -[Pod priority and preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads. +[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads. ## Guaranteed scheduling with controlled cost diff --git a/content/en/blog/_posts/2020-05-27-An-Introduction-to-the-K8s-Infrastructure-Working-Group.md b/content/en/blog/_posts/2020-05-27-An-Introduction-to-the-K8s-Infrastructure-Working-Group.md index f2a74914d8..efd5e196f7 100644 --- a/content/en/blog/_posts/2020-05-27-An-Introduction-to-the-K8s-Infrastructure-Working-Group.md +++ b/content/en/blog/_posts/2020-05-27-An-Introduction-to-the-K8s-Infrastructure-Working-Group.md @@ -55,7 +55,7 @@ The team has made progress in the last few months that is well worth celebrating - The K8s-Infrastructure Working Group released an automated billing report that they start every meeting off by reviewing as a group. - DNS for k8s.io and kubernetes.io are also fully [community-owned](https://groups.google.com/g/kubernetes-dev/c/LZTYJorGh7c/m/u-ydk-yNEgAJ), with community members able to [file issues](https://github.com/kubernetes/k8s.io/issues/new?assignees=&labels=wg%2Fk8s-infra&template=dns-request.md&title=DNS+REQUEST%3A+%3Cyour-dns-record%3E) to manage records. -- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use. +- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use. - The Kubernetes [publishing-bot](https://github.com/kubernetes/publishing-bot) responsible for keeping k8s.io/kubernetes/staging repositories published to their own top-level repos (For example: [kubernetes/api](https://github.com/kubernetes/api)) runs on a community-owned cluster. - The gcsweb.k8s.io service used to provide anonymous access to GCS buckets for kubernetes artifacts runs on a community-owned cluster. - There is also an automated process of promoting all our container images. This includes a fully documented infrastructure, managed by the Kubernetes community, with automated processes for provisioning permissions. 
diff --git a/content/en/blog/_posts/2021-04-22-gateway-api/index.md b/content/en/blog/_posts/2021-04-22-gateway-api/index.md index a7d54ad645..d9c798a5b1 100644 --- a/content/en/blog/_posts/2021-04-22-gateway-api/index.md +++ b/content/en/blog/_posts/2021-04-22-gateway-api/index.md @@ -186,7 +186,7 @@ metadata: ### Role Oriented Design -When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow Gateway API to adapt to different organizational models and implementations well into the future. +When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API is not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow the Gateway API to adapt to different organizational models and implementations well into the future. ### Try it out and get involved @@ -194,4 +194,4 @@ There are many resources to check out to learn more. * Check out the [user guides](https://gateway-api.sigs.k8s.io/guides/getting-started/) to see what use-cases can be addressed. * Try out one of the [existing Gateway controllers ](https://gateway-api.sigs.k8s.io/references/implementations/) -* Or [get involved](https://gateway-api.sigs.k8s.io/contributing/community/) and help design and influence the future of Kubernetes service networking! \ No newline at end of file +* Or [get involved](https://gateway-api.sigs.k8s.io/contributing/community/) and help design and influence the future of Kubernetes service networking! diff --git a/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/index.md b/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/index.md new file mode 100644 index 0000000000..e31484abdf --- /dev/null +++ b/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/index.md @@ -0,0 +1,49 @@ +--- +layout: blog +title: "Announcing Kubernetes Community Group Annual Reports" +description: > + Introducing brand new Kubernetes Community Group Annual Reports for + Special Interest Groups and Working Groups. +date: 2021-06-28T10:00:00-08:00 +slug: Announcing-Kubernetes-Community-Group-Annual-Reports +--- + +**Authors:** Divya Mohan + +{{< figure src="k8s_annual_report_2020.svg" alt="Community annual report 2020" link="https://www.cncf.io/reports/kubernetes-community-annual-report-2020/" >}} + +Given the growth and scale of the Kubernetes project, the existing reporting mechanisms were proving to be inadequate and challenging. +Kubernetes is a large open source project. With over 100,000 commits just to the main k/kubernetes repository, hundreds of other code +repositories in the project, and thousands of contributors, there's a lot going on. In fact, there are 37 contributor groups at the time of +writing. We also value all forms of contribution and not just code changes. + +With that context in mind, the challenge of reporting on all this activity was a call to action for exploring better options. 
Therefore, +inspired by the Apache Software Foundation’s [open guide to PMC Reporting](https://www.apache.org/foundation/board/reporting) and the +[CNCF project Annual Reporting](https://www.cncf.io/cncf-annual-report-2020/), the Kubernetes project is proud to announce the +**Kubernetes Community Group Annual Reports for Special Interest Groups (SIGs) and Working Groups (WGs)**. In its flagship edition, +the [2020 Summary report](https://www.cncf.io/reports/kubernetes-community-annual-report-2020/) focuses on bettering the +Kubernetes ecosystem by assessing and promoting the healthiness of the groups within the upstream community. + +Previously, the mechanisms for the Kubernetes project overall to report on groups and their activities were +[devstats](https://k8s.devstats.cncf.io/), GitHub data, and issues, used to measure the healthiness of a given UG/WG/SIG/Committee. As a +project spanning several diverse communities, it was essential to have something that captured the human side of things. With 50,000+ +contributors, it’s easy to assume that the project has enough help; this report surfaces more information than /help-wanted and +/good-first-issue for end users. This is how we sustain the project. Paraphrasing one of the Steering Committee members, +[Paris Pittman](https://github.com/parispittman), “There was a requirement for tighter feedback loops - ones that involved more than just +GitHub data and issues. Given that Kubernetes, as a project, has grown in scale and number of contributors over the years, we have +outgrown the existing reporting mechanisms." + +The existing communication channels between the Steering committee members and the folks leading the groups and committees were also required +to be made as open and as bi-directional as possible. Towards achieving this very purpose, every group and committee has been assigned a +liaison from among the steering committee members for kick-off, help, or guidance needed throughout the process. According to +[Davanum Srinivas a.k.a. dims](https://github.com/dims), “... That was one of the main motivations behind this report. People (leading the +groups/committees) know that they can reach out to us and there’s a vehicle for them to reach out to us… This is our way of setting up a +two-way feedback for them." The progress on these action items will be updated and tracked at the monthly Steering Committee meetings, +ensuring that this is not a one-off activity. Quoting [Nikhita Raghunath](https://github.com/nikhita), one of the Steering Committee members, +“... Once we have a base, the liaisons will work with these groups to ensure that the problems are resolved. When we have a report next year, +we’ll have a look at the progress made and how we could still do better. But the idea is definitely to not stop at the report.” + +With this report, we hope to empower our end user communities with information that they can use to identify ways in which they can support +the project as well as a sneak peek into the roadmap for upcoming features. As a community, we thrive on feedback and would love to hear your +views about the report. You can get in touch with the [Steering Committee](https://github.com/kubernetes/steering#contact) via +[Slack](https://kubernetes.slack.com/messages/steering-committee) or via the [mailing list](mailto:steering@kubernetes.io). 
diff --git a/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/k8s_annual_report_2020.svg b/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/k8s_annual_report_2020.svg new file mode 100644 index 0000000000..179201d13b --- /dev/null +++ b/content/en/blog/_posts/2021-06-28-announcing-kubernetes-community-group-annual-reports/k8s_annual_report_2020.svg @@ -0,0 +1,16130 @@ (SVG markup omitted) diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index e75fdea4a5..406420f5bf 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -83,8 +83,11 @@ As an example, you can find detailed information about how `kube-up.sh` sets up logging for COS image on GCP in the corresponding [`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh). -When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet -sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively. +When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. +The kubelet sends this information to the CRI container runtime and the runtime writes the container logs to the given location. +The two kubelet parameters [`containerLogMaxSize` and `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +in the [kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/) +can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively. When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in the basic logging example, the kubelet on the node handles the request and diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 48ac53ed47..933d30918b 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -1235,10 +1235,6 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa - A user who can create a Pod that uses a secret can also see the value of that secret. Even if the API server policy does not allow that user to read the Secret, the user could run a Pod which exposes the secret. - - Currently, anyone with root permission on any node can read _any_ secret from the API server, - by impersonating the kubelet. It is a planned feature to only send secrets to - nodes that actually require them, to restrict the impact of a root exploit on a - single node. 
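Tying together the kubelet log-rotation parameters from the logging.md change above, here is a minimal sketch of a kubelet config file; the values shown are illustrative (they happen to match the documented defaults), not required settings:

```yaml
# Hypothetical kubelet configuration file, passed to the kubelet via --config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"   # rotate a container's log file once it reaches 10 MiB
containerLogMaxFiles: 5       # keep at most 5 log files per container
```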
## {{% heading "whatsnext" %}} diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 1cd678e4a8..64d239ff0e 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -77,6 +77,20 @@ the pull policy of any object after its initial creation. When `imagePullPolicy` is defined without a specific value, it is also set to `Always`. +### ImagePullBackOff + +When a kubelet starts creating containers for a Pod using a container runtime, +the container might be in a [Waiting](/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting) +state because of `ImagePullBackOff`. + +The status `ImagePullBackOff` means that a container could not start because Kubernetes +could not pull a container image (for reasons such as an invalid image name, or pulling +from a private registry without `imagePullSecret`). The `BackOff` part indicates +that Kubernetes will keep trying to pull the image, with an increasing back-off delay. + +Kubernetes raises the delay between each attempt until it reaches a compiled-in limit, +which is 300 seconds (5 minutes). + ## Multi-architecture images with image indexes As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using. diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index 68234d6665..5f9ad4da35 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -51,8 +51,7 @@ Some of the things that you can use an operator to automate include: * choosing a leader for a distributed application without an internal member election process -What might an Operator look like in more detail? Here's an example in more -detail: +What might an Operator look like in more detail? Here's an example: 1. A custom resource named SampleDB, that you can configure into the cluster. 2. A Deployment that makes sure a Pod is running that contains the @@ -124,6 +123,7 @@ Operator. ## {{% heading "whatsnext" %}} +* Read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md). 
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case * [Publish](https://operatorhub.io/) your operator for other people to use diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index fac2b1205e..36172faba5 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -11,7 +11,8 @@ weight: 30 {{< feature-state for_k8s_version="v1.21" state="deprecated" >}} -PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. +PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. For more information on the deprecation, +see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). Pod Security Policies enable fine-grained authorization of pod creation and updates. @@ -48,13 +49,12 @@ administrator to control the following: ## Enabling Pod Security Policies -Pod security policy control is implemented as an optional (but recommended) -[admission -controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy). PodSecurityPolicies -are enforced by [enabling the admission +Pod security policy control is implemented as an optional [admission +controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy). +PodSecurityPolicies are enforced by [enabling the admission controller](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in), -but doing so without authorizing any policies **will prevent any pods from being -created** in the cluster. +but doing so without authorizing any policies **will prevent any pods from being created** in the +cluster. Since the pod security policy API (`policy/v1beta1/podsecuritypolicy`) is enabled independently of the admission controller, for existing clusters it is @@ -110,7 +110,11 @@ roleRef: name: <role name> apiGroup: rbac.authorization.k8s.io subjects: -# Authorize specific service accounts: +# Authorize all service accounts in a namespace (recommended): +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:serviceaccounts:<authorized namespace> +# Authorize specific service accounts (not recommended): - kind: ServiceAccount name: <authorized service account name> namespace: <authorized pod namespace> @@ -139,6 +143,40 @@ Examples](/docs/reference/access-authn-authz/rbac#role-binding-examples). For a complete example of authorizing a PodSecurityPolicy, see [below](#example). +### Recommended Practice + +PodSecurityPolicy is being replaced by a new, simplified `PodSecurity` {{< glossary_tooltip +text="admission controller" term_id="admission-controller" >}}. For more details on this change, see +[PodSecurityPolicy Deprecation: Past, Present, and +Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). Follow these +guidelines to simplify migration from PodSecurityPolicy to the new admission controller: + +1. Limit your PodSecurityPolicies to the policies defined by the [Pod Security Standards](/docs/concepts/security/pod-security-standards): + - {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}} + - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}} + - {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}} + +2. 
Only bind PSPs to entire namespaces, by using the `system:serviceaccounts:<namespace>` group + (where `<namespace>` is the target namespace). For example: + + ```yaml + apiVersion: rbac.authorization.k8s.io/v1 + # This cluster role binding allows all pods in the "development" namespace to use the baseline PSP. + kind: ClusterRoleBinding + metadata: + name: psp-baseline-namespaces + roleRef: + kind: ClusterRole + name: psp-baseline + apiGroup: rbac.authorization.k8s.io + subjects: + - kind: Group + name: system:serviceaccounts:development + apiGroup: rbac.authorization.k8s.io + - kind: Group + name: system:serviceaccounts:canary + apiGroup: rbac.authorization.k8s.io + ``` ### Troubleshooting @@ -661,8 +699,10 @@ Refer to the [Sysctl documentation]( ## {{% heading "whatsnext" %}} +- See [PodSecurityPolicy Deprecation: Past, Present, and + Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) to learn about + the future of pod security policy. + - See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations. - Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. - - diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 595bad0d3a..e832d1d48c 100644 --- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -193,7 +193,7 @@ resources based on the filesystems on the node. If the node has a dedicated `imagefs` filesystem for container runtimes to use, the kubelet does the following: - * If the `nodefs` filesystem meets the eviction threshlds, the kubelet garbage collects + * If the `nodefs` filesystem meets the eviction thresholds, the kubelet garbage collects dead pods and containers. * If the `imagefs` filesystem meets the eviction thresholds, the kubelet deletes all unused images. diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index b0eff0eb81..710be78c88 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -266,9 +266,23 @@ This ensures that DaemonSet pods are never evicted due to these problems. ## Taint Nodes by Condition -The node lifecycle controller automatically creates taints corresponding to -Node conditions with `NoSchedule` effect. -Similarly the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations. +The control plane, using the node controller, +automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/pod-eviction#node-conditions). + +The scheduler checks taints, not node conditions, when it makes scheduling +decisions. This ensures that node conditions don't directly affect scheduling. +For example, if the `DiskPressure` node condition is active, the control plane +adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods +onto the affected node. 
If the `MemoryPressure` node condition is active, the +control plane adds the `node.kubernetes.io/memory-pressure` taint. + +You can ignore node conditions for newly created pods by adding the corresponding +Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure` +toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}} +other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed` +or `Burstable` QoS classes (even pods with no memory request set) as if they are +able to cope with memory pressure, while new `BestEffort` pods are not scheduled +onto the affected node. The DaemonSet controller automatically adds the following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from breaking. @@ -282,7 +296,6 @@ tolerations to all daemons, to prevent DaemonSets from breaking. Adding these tolerations ensures backward compatibility. You can also add arbitrary tolerations to DaemonSets. - ## {{% heading "whatsnext" %}} * Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 5a69cc8e64..66104d307f 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -86,7 +86,7 @@ enforced/disallowed: Capabilities - Adding additional capabilities beyond the default set must be disallowed.
+ Adding NET_RAW or capabilities beyond the default set must be disallowed.

Restricted Fields:
spec.containers[*].securityContext.capabilities.add
spec.initContainers[*].securityContext.capabilities.add
@@ -194,7 +194,7 @@ well as lower-trust users.The following listed controls should be enforced/disal Volume Types - In addition to restricting HostPath volumes, the restricted profile limits usage of non-core volume types to those defined through PersistentVolumes.
+ In addition to restricting HostPath volumes, the restricted profile limits usage of non-ephemeral volume types to those defined through PersistentVolumes.

Restricted Fields:
spec.volumes[*].hostPath
spec.volumes[*].gcePersistentDisk
@@ -216,7 +216,6 @@ well as lower-trust users.The following listed controls should be enforced/disal spec.volumes[*].portworxVolume
spec.volumes[*].scaleIO
spec.volumes[*].storageos
- spec.volumes[*].csi

Allowed Values: undefined/nil
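To make the volume-type rules above concrete, here is a sketch of a Pod that stays within the restricted profile by using only allowed volume types; the names and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-volumes-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0   # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /etc/app
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: config
    configMap:            # configMap volumes are not in the restricted-fields list above
      name: app-config
  - name: scratch
    emptyDir: {}          # emptyDir is likewise allowed; hostPath and the listed types are not
```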
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 84049d40ac..f43eeff22b 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -50,7 +50,7 @@ options ndots:5 ``` In summary, a pod in the _test_ namespace can successfully resolve either -`data.prod` or `data.prod.cluster.local`. +`data.prod` or `data.prod.svc.cluster.local`. ### DNS Records diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 5e6851d94a..5fb8ce6791 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -189,7 +189,7 @@ and pre-created PVs, but you'll need to look at the documentation for a specific to see its supported topology keys and examples. {{< note >}} - If you choose to use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec + If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in this case, the scheduler will be bypassed and PVC will remain in `pending` state. Instead, you can use node selector for hostname in this case as shown below. @@ -658,11 +658,11 @@ metadata: provisioner: kubernetes.io/azure-disk parameters: storageaccounttype: Standard_LRS - kind: Shared + kind: managed ``` * `storageaccounttype`: Azure storage account Sku tier. Default is empty. -* `kind`: Possible values are `shared` (default), `dedicated`, and `managed`. +* `kind`: Possible values are `shared`, `dedicated`, and `managed` (default). When `kind` is `shared`, all unmanaged disks are created in a few shared storage accounts in the same resource group as the cluster. When `kind` is `dedicated`, a new dedicated storage account will be created for the new diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index d693e057ef..44e674031d 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -529,6 +529,15 @@ See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{< para ### hostPath {#hostpath} +{{< warning >}} +HostPath volumes present many security risks, and it is a best practice to avoid the use of +HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the +required file or directory, and mounted as ReadOnly. + +If restricting HostPath access to specific directories through AdmissionPolicy, `volumeMounts` MUST +be required to use `readOnly` mounts for the policy to be effective. +{{< /warning >}} + A `hostPath` volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications. @@ -558,6 +567,9 @@ The supported values for field `type` are: Watch out when using this type of volume, because: +* HostPaths can expose privileged system credentials (such as for the Kubelet) or privileged APIs + (such as container runtime socket), which can be used for container escape or to attack other + parts of the cluster. * Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes * The files or directories created on the underlying hosts are only writable by root. 
You either need to run your process as root in a [privileged container](/docs/tasks/configure-pod-container/security-context/) or modify your file permissions on the host to be able to write to a `hostPath` volume. diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index a84a10fd38..3639873382 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -82,12 +82,11 @@ spec: You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: - **maxSkew** describes the degree to which Pods may be unevenly distributed. - It's the maximum permitted difference between the number of matching Pods in - any two topology domains of a given topology type. It must be greater than - zero. Its semantics differs according to the value of `whenUnsatisfiable`: + It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`: - when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum permitted difference between the number of matching pods in the target - topology and the global minimum. + topology and the global minimum + (the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, then the global minimum is 0). - when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher precedence to topologies that would help reduce the skew. - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. @@ -96,6 +95,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. +When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints. + You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. ### Example: One TopologySpreadConstraint @@ -387,7 +388,8 @@ for more details. ## Known Limitations -- Scaling down a Deployment may result in imbalanced Pods distribution. +- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution. +You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution. - Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921) ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 376fb69ba4..7a61443525 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -79,6 +79,10 @@ operator to use or manage a cluster. 
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) * [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/) +## Config API for kubeadm + +* [v1beta2](/docs/reference/config-api/kubeadm-config.v1beta2/) + ## Design Docs An archive of the design docs for Kubernetes functionality. Good starting points are diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 450bedf541..9cbbb6cda7 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -56,11 +56,11 @@ state for some duration: * Approved requests: automatically deleted after 1 hour * Denied requests: automatically deleted after 1 hour -* Pending requests: automatically deleted after 1 hour +* Pending requests: automatically deleted after 24 hours ## Signers -All signers should provide information about how they work so that clients can predict what will happen to their CSRs. +Custom signerNames can also be specified. All signers should provide information about how they work so that clients can predict what will happen to their CSRs. This includes: 1. **Trust distribution**: how trust (CA bundles) are distributed. diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index 26a7634c2a..aa714bdaa2 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -282,7 +282,7 @@ Of course you need to set up the webhook server to handle these authentications. ### Request -Webhooks are sent a POST request, with `Content-Type: application/json`, +Webhooks are sent as POST requests, with `Content-Type: application/json`, with an `AdmissionReview` API object in the `admission.k8s.io` API group serialized to JSON as the body. diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 5cf0de3b9c..f1f0e2442e 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -59,6 +59,7 @@ different Kubernetes components. | `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | | | `AnyVolumeDataSource` | `false` | Alpha | 1.18 | | | `AppArmor` | `true` | Beta | 1.4 | | +| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | | | `CPUManager` | `false` | Alpha | 1.8 | 1.9 | | `CPUManager` | `true` | Beta | 1.10 | | | `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 | @@ -523,6 +524,11 @@ Each feature gate is designed for enabling/disabling a specific feature: extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`. Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md) for more details. 
+- `ControllerManagerLeaderMigration`: Enables Leader Migration for + [kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and + [cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager) which allows a cluster operator to live migrate + controllers from the kube-controller-manager into an external controller-manager + (e.g. the cloud-controller-manager) in an HA cluster without downtime. - `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/). - `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime. The default max size of a log file is 10MB and the @@ -785,7 +791,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/) feature to account for pod overheads. - `PodPriority`: Enable the descheduling and preemption of Pods based on their - [priorities](/docs/concepts/configuration/pod-priority-preemption/). + [priorities](/docs/concepts/scheduling-eviction/pod-priority-preemption/). - `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate) for more details. diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md new file mode 100644 index 0000000000..293c7dc779 --- /dev/null +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md @@ -0,0 +1,1489 @@ +--- +title: kubeadm Configuration (v1beta2) +content_type: tool-reference +package: kubeadm.k8s.io/v1beta2 +auto_generated: true +--- +Package v1beta2 defines the v1beta2 version of the kubeadm configuration file format. +This version improves on the v1beta1 format by fixing some minor issues and adding a few new fields. + +A list of changes since v1beta1: + +- The `certificateKey` field is added to InitConfiguration and JoinConfiguration. +- The `ignorePreflightErrors` field is added to the NodeRegistrationOptions. +- The JSON `omitempty` tag is used in more places where appropriate. +- The JSON `omitempty` tag of the `taints` field (inside NodeRegistrationOptions) is removed. +See the Kubernetes 1.15 changelog for further details. + +## Migration from old kubeadm config versions + +Please convert your v1beta1 configuration files to v1beta2 using the `kubeadm config migrate` command of kubeadm v1.15.x, as sketched below (conversion from older releases of kubeadm config files requires an older release of kubeadm as well, e.g. + +- kubeadm v1.11 should be used to migrate v1alpha1 to v1alpha2; kubeadm v1.12 should be used to translate v1alpha2 to v1alpha3; +- kubeadm v1.13 or v1.14 should be used to translate v1alpha3 to v1beta1) + +Nevertheless, kubeadm v1.15.x will support reading from the v1beta1 version of the kubeadm config file format. + +## Basics + +The preferred way to configure kubeadm is to pass a YAML configuration file with the `--config` option. Some of the +configuration options defined in the kubeadm config file are also available as command line flags, but only +the most common/simple use cases are supported with this approach. + +A kubeadm config file could contain multiple configuration types separated using three dashes (`---`). 
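As a concrete sketch of the `kubeadm config migrate` command referenced in the migration section above (both file names are hypothetical):

```shell
# Convert an existing v1beta1 kubeadm configuration file to v1beta2.
kubeadm config migrate --old-config kubeadm-old.yaml --new-config kubeadm-new.yaml
```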
+ +kubeadm supports the following configuration types: + +```yaml +apiVersion: kubeadm.k8s.io/v1beta2 +kind: InitConfiguration + +apiVersion: kubeadm.k8s.io/v1beta2 +kind: ClusterConfiguration + +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration + +apiVersion: kubeproxy.config.k8s.io/v1alpha1 +kind: KubeProxyConfiguration + +apiVersion: kubeadm.k8s.io/v1beta2 +kind: JoinConfiguration +``` + +To print the defaults for "init" and "join" actions use the following commands: + +```shell +kubeadm config print init-defaults +kubeadm config print join-defaults +``` + +The list of configuration types that must be included in a configuration file depends on the action you are +performing (init or join) and on the configuration options you are going to use (defaults or advanced customization). + +If some configuration types are not provided, or provided only partially, kubeadm will use default values; defaults +provided by kubeadm also include enforcing consistency of values across components when required (e.g. +the cluster-cidr flag on the controller manager and clusterCIDR on kube-proxy). + +Users are always allowed to override default values, with the only exception of a small subset of settings with +relevance for security (e.g. enforcing authorization-mode Node and RBAC on the API server). + +If the user provides configuration types that are not expected for the action being performed, kubeadm will +ignore those types and print a warning. + +## Kubeadm init configuration types + +When executing kubeadm init with the `--config` option, the following configuration types could be used: +InitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one +of InitConfiguration and ClusterConfiguration is mandatory. + +```yaml +apiVersion: kubeadm.k8s.io/v1beta2 +kind: InitConfiguration +bootstrapTokens: + ... +nodeRegistration: + ... +``` + +The InitConfiguration type should be used to configure runtime settings, which in the case of kubeadm init +are the configuration of the bootstrap token and all the settings specific to the node where kubeadm +is executed, including: + +- NodeRegistration, that holds fields that relate to registering the new node to the cluster; + use it to customize the node name, the CRI socket to use or any other settings that should apply to this + node only (e.g. the node IP). + +- LocalAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node; + use it e.g. to customize the API server advertise address. + + ```yaml + apiVersion: kubeadm.k8s.io/v1beta2 + kind: ClusterConfiguration + networking: + ... + etcd: + ... + apiServer: + extraArgs: + ... + extraVolumes: + ... + ``` + +The ClusterConfiguration type should be used to configure cluster-wide settings, +including settings for: + +- Networking, that holds configuration for the networking topology of the cluster; use it e.g. to customize + node subnet or services subnet. +- Etcd configurations; use it e.g. to customize the local etcd or to configure the API server + for using an external etcd cluster. +- kube-apiserver, kube-scheduler, kube-controller-manager configurations; use it to customize control-plane + components by adding customized settings or overriding kubeadm default settings. + + ```yaml + apiVersion: kubeproxy.config.k8s.io/v1alpha1 + kind: KubeProxyConfiguration + ... + ``` + +The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed +in the cluster. 
If this object is not provided or provided only partially, kubeadm applies defaults. + +See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration +for kube proxy official documentation. + +```yaml +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +... +``` + +The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances +deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults. + +See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration +for kubelet official documentation. + +Here is a fully populated example of a single YAML file containing multiple +configuration types to be used during a `kubeadm init` run. + +```yaml +apiVersion: kubeadm.k8s.io/v1beta2 +kind: InitConfiguration +bootstrapTokens: + - token: "9a08jv.c0izixklcxtmnze7" + description: "kubeadm bootstrap token" + ttl: "24h" + - token: "783bde.3f89s0fje9f38fhf" + description: "another bootstrap token" + usages: + - authentication + - signing + groups: + - system:bootstrappers:kubeadm:default-node-token +nodeRegistration: + name: "ec2-10-100-0-1" + criSocket: "/var/run/dockershim.sock" + taints: + - key: "kubeadmNode" + value: "master" + effect: "NoSchedule" + kubeletExtraArgs: + cgroup-driver: "cgroupfs" + ignorePreflightErrors: + - IsPrivilegedUser +localAPIEndpoint: + advertiseAddress: "10.100.0.1" + bindPort: 6443 +certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204" +--- +apiVersion: kubeadm.k8s.io/v1beta2 +kind: ClusterConfiguration +etcd: + # one of local or external + local: + imageRepository: "k8s.gcr.io" + imageTag: "3.2.24" + dataDir: "/var/lib/etcd" + extraArgs: + listen-client-urls: "http://10.100.0.1:2379" + serverCertSANs: + - "ec2-10-100-0-1.compute-1.amazonaws.com" + peerCertSANs: + - "10.100.0.1" + # external: + # endpoints: + # - "10.100.0.1:2379" + # - "10.100.0.2:2379" + # caFile: "/etcd/kubernetes/pki/etcd/etcd-ca.crt" + # certFile: "/etcd/kubernetes/pki/etcd/etcd.crt" + # keyFile: "/etcd/kubernetes/pki/etcd/etcd.key" + networking: + serviceSubnet: "10.96.0.0/12" + podSubnet: "10.100.0.1/24" + dnsDomain: "cluster.local" + kubernetesVersion: "v1.12.0" + controlPlaneEndpoint: "10.100.0.1:6443" + apiServer: + extraArgs: + authorization-mode: "Node,RBAC" + extraVolumes: + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File + certSANs: + - "10.100.1.1" + - "ec2-10-100-0-1.compute-1.amazonaws.com" + timeoutForControlPlane: 4m0s + controllerManager: + extraArgs: + "node-cidr-mask-size": "20" + extraVolumes: + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File + scheduler: + extraArgs: + address: "10.100.0.1" + extraVolumes: + - name: "some-volume" + hostPath: "/etc/some-path" + mountPath: "/etc/some-pod-path" + readOnly: false + pathType: File +certificatesDir: "/etc/kubernetes/pki" +imageRepository: "k8s.gcr.io" +useHyperKubeImage: false +clusterName: "example-cluster" +--- +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +# kubelet specific options here +--- +apiVersion: kubeproxy.config.k8s.io/v1alpha1 +kind: KubeProxyConfiguration +# kube-proxy specific options here +``` + +## Kubeadm join configuration 
types + +When executing kubeadm join with the `--config` option, the JoinConfiguration type should be provided. + +```yaml +apiVersion: kubeadm.k8s.io/v1beta2 +kind: JoinConfiguration +... +``` + +The JoinConfiguration type should be used to configure runtime settings, that in case of kubeadm join +are the discovery method used for accessing the cluster info and all the setting which are specific +to the node where kubeadm is executed, including: + +- NodeRegistration, that holds fields that relate to registering the new node to the cluster; + use it to customize the node name, the CRI socket to use or any other settings that should apply to this + node only (e.g. the node ip). + +- APIEndpoint, that represents the endpoint of the instance of the API server to be eventually deployed on this node. + +## Resource Types + + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) +- [ClusterStatus](#kubeadm-k8s-io-v1beta2-ClusterStatus) +- [InitConfiguration](#kubeadm-k8s-io-v1beta2-InitConfiguration) +- [JoinConfiguration](#kubeadm-k8s-io-v1beta2-JoinConfiguration) + + + + +## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta2-ClusterConfiguration} + + + + + +ClusterConfiguration contains cluster-wide configuration for a kubeadm cluster + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
kubeadm.k8s.io/v1beta2
kind
string
ClusterConfiguration
etcd [Required]
+Etcd +
+ `etcd` holds configuration for etcd.
networking [Required]
+Networking +
+ `networking` holds configuration for the networking topology of the cluster.
kubernetesVersion [Required]
+string +
+ `kubernetesVersion` is the target version of the control plane.
controlPlaneEndpoint [Required]
+string +
+ `controlPlaneEndpoint` sets a stable IP address or DNS name for the control plane; it
+can be a valid IP address or an RFC-1123 DNS subdomain, both with an optional TCP port.
+If the ControlPlaneEndpoint is not specified, the AdvertiseAddress + BindPort
+are used; if the ControlPlaneEndpoint is specified but without a TCP port,
+the BindPort is used.
+Possible usages are:
+
+- In a cluster with more than one control plane instance, this field should be
+  assigned the address of the external load balancer in front of the
+  control plane instances.
+- In environments with enforced node recycling, the ControlPlaneEndpoint
+  could be used for assigning a stable DNS name to the control plane.
apiServer [Required]
+APIServer +
+ `apiServer` contains extra settings for the API server.
controllerManager [Required]
+ControlPlaneComponent +
+ `controllerManager` contains extra settings for the controller manager.
scheduler [Required]
+ControlPlaneComponent +
+ `scheduler` contains extra settings for the scheduler.
dns [Required]
+DNS +
+ `dns` defines the options for the DNS add-on.
certificatesDir [Required]
+string +
+ `certificatesDir` specifies where to store or look for all required certificates.
imageRepository [Required]
+string +
+ `imageRepository` sets the container registry to pull images from.
+If empty, `k8s.gcr.io` will be used by default; if the Kubernetes version is
+a CI build (i.e. it starts with `ci/` or `ci-cross/`),
+`gcr.io/k8s-staging-ci-images` will be used as a default for the control plane
+components and for kube-proxy, while `k8s.gcr.io` will be used for all the other images.
useHyperKubeImage [Required]
+bool +
+ `useHyperKubeImage` controls whether hyperkube should be used for Kubernetes
+components instead of their respective separate images.
+DEPRECATED: As hyperkube is itself deprecated, this field is too. It will
+be removed in future kubeadm config versions; kubeadm will print multiple
+warnings when it is set to true, and at some point it may become ignored.
featureGates [Required]
+map[string]bool +
+ Feature gates enabled by the user.
clusterName [Required]
+string +
+ The cluster name
+ + + +## `ClusterStatus` {#kubeadm-k8s-io-v1beta2-ClusterStatus} + + + + + +ClusterStatus contains the cluster status. The ClusterStatus will be stored in the kubeadm-config +ConfigMap in the cluster, and then updated by kubeadm when additional control plane instance joins or leaves the cluster. + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
kubeadm.k8s.io/v1beta2
kind
string
ClusterStatus
apiEndpoints [Required]
+map[string]github.com/tengqm/kubeconfig/config/kubeadm/v1beta2.APIEndpoint +
+ `apiEndpoints` lists the API endpoints currently available in the cluster, one for each control
+plane/API server instance. The key of the map is the IP of the host's default interface.
+ + + +## `InitConfiguration` {#kubeadm-k8s-io-v1beta2-InitConfiguration} + + + + + +InitConfiguration contains a list of elements that is specific "kubeadm init"-only runtime +information. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
kubeadm.k8s.io/v1beta2
kind
string
InitConfiguration
bootstrapTokens [Required]
+[]BootstrapToken +
+ `bootstrapTokens` is respected at `kubeadm init` time and describes a set of Bootstrap Tokens to create. +This information IS NOT uploaded to the kubeadm cluster configmap, partly because of its sensitive nature
nodeRegistration [Required]
+NodeRegistrationOptions +
+ `nodeRegistration` holds fields that relate to registering the new control-plane node to the cluster
localAPIEndpoint [Required]
+APIEndpoint +
+ `localAPIEndpoint` represents the endpoint of the API server instance that's deployed on this control plane node +In HA setups, this differs from ClusterConfiguration.ControlPlaneEndpoint in the sense that ControlPlaneEndpoint +is the global endpoint for the cluster, which then loadbalances the requests to each individual API server. This +configuration object lets you customize what IP/DNS name and port the local API server advertises it's accessible +on. By default, kubeadm tries to auto-detect the IP of the default interface and use that, but in case that process +fails you may set the desired value here.
certificateKey [Required]
+string +
+ `certificateKey` sets the key with which certificates and keys are encrypted prior to being uploaded in +a Secret in the cluster during the "uploadcerts" init phase.
+ + + +## `JoinConfiguration` {#kubeadm-k8s-io-v1beta2-JoinConfiguration} + + + + + +JoinConfiguration contains elements describing a particular node. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
kubeadm.k8s.io/v1beta2
kind
string
JoinConfiguration
nodeRegistration [Required]
+NodeRegistrationOptions +
+ `nodeRegistration` holds fields that relate to registering the new control-plane +node to the cluster
caCertPath [Required]
+string +
+ `caCertPath` is the path to the SSL certificate authority used to
+secure communications between the node and the control plane.
+Defaults to "/etc/kubernetes/pki/ca.crt".
discovery [Required]
+Discovery +
+ `discovery` specifies the options for the kubelet to use during the TLS Bootstrap +process
controlPlane [Required]
+JoinControlPlane +
+ `controlPlane` defines the additional control plane instance to be deployed on the +joining node. If nil, no additional control plane instance will be deployed.
+ + + +## `APIEndpoint` {#kubeadm-k8s-io-v1beta2-APIEndpoint} + + + + +**Appears in:** + +- [ClusterStatus](#kubeadm-k8s-io-v1beta2-ClusterStatus) + +- [InitConfiguration](#kubeadm-k8s-io-v1beta2-InitConfiguration) + +- [JoinControlPlane](#kubeadm-k8s-io-v1beta2-JoinControlPlane) + + +APIEndpoint struct contains elements of API server instance deployed on a node. + + + + + + + + + + + + + + + + + + +
FieldDescription
advertiseAddress [Required]
+string +
+ `advertiseAddress` sets the IP address for the API server to advertise.
bindPort [Required]
+int32 +
+ `bindPort` sets the secure port for the API Server to bind to. Defaults to 6443.
+ + + +## `APIServer` {#kubeadm-k8s-io-v1beta2-APIServer} + + + + +**Appears in:** + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) + + +APIServer holds settings necessary for API server deployments in the cluster + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
ControlPlaneComponent [Required]
+ControlPlaneComponent +
(Members of ControlPlaneComponent are embedded into this type.) + No description provided. +
certSANs [Required]
+[]string +
+ `certSANs` sets extra Subject Alternative Names for the API Server signing cert.
timeoutForControlPlane [Required]
+meta/v1.Duration
+
+ `timeoutForControlPlane` controls the timeout that we wait for the API server to appear.
+ + + +## `BootstrapToken` {#kubeadm-k8s-io-v1beta2-BootstrapToken} + + + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta2-InitConfiguration) + + +BootstrapToken describes one bootstrap token, stored as a Secret in the cluster + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+BootstrapTokenString +
+ `token` is used for establishing bidirectional trust between nodes and control-planes.
+Used for joining nodes in the cluster.
description [Required]
+string +
+ `description` sets a human-friendly message explaining why this token exists and what it's used
+for, so that other administrators can know its purpose.
ttl [Required]
+meta/v1.Duration
+
+ `ttl` defines the time to live for this token. Defaults to "24h". +`expires` and `ttl` are mutually exclusive.
expires [Required]
+meta/v1.Time
+
+ `expires` specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the `ttl`. `expires` and `ttl` are mutually exclusive.
usages [Required]
+[]string +
+ `usages` describes the ways in which this token can be used. Can by default be used +for establishing bidirectional trust, but that can be changed here.
groups [Required]
+[]string +
+ `groups` specifies the extra groups that this token will authenticate as when/if +used for authentication
+ + + +## `BootstrapTokenDiscovery` {#kubeadm-k8s-io-v1beta2-BootstrapTokenDiscovery} + + + + +**Appears in:** + +- [Discovery](#kubeadm-k8s-io-v1beta2-Discovery) + + +BootstrapTokenDiscovery is used to set the options for bootstrap token based discovery + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+string +
+ `token` is a token used to validate cluster information fetched from the control-plane.
apiServerEndpoint [Required]
+string +
+ `apiServerEndpoint` is an IP or domain name to the API server from which +information will be fetched.
caCertHashes [Required]
+[]string +
+ `caCertHashes` specifies a set of public key pins to verify when token-based
+discovery is used. The root CA found during discovery must match one of these
+values. Specifying an empty set disables root CA pinning, which can be unsafe.
+Each hash is specified as `<type>:<value>`, where the only currently supported
+type is "sha256". This is a hex-encoded SHA-256 hash of the Subject Public Key
+Info (SPKI) object in DER-encoded ASN.1. These hashes can be calculated using,
+for example, OpenSSL.
unsafeSkipCAVerification [Required]
+bool +
+ `unsafeSkipCAVerification` allows token-based discovery without CA verification +via `caCertHashes`. This can weaken the security of kubeadm since other nodes +can impersonate the control-plane.
+ + + +## `BootstrapTokenString` {#kubeadm-k8s-io-v1beta2-BootstrapTokenString} + + + + +**Appears in:** + +- [BootstrapToken](#kubeadm-k8s-io-v1beta2-BootstrapToken) + + +BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used +for both validation of the practically of the API server from a joining node's point +of view and as an authentication method for the node in the bootstrap phase of +"kubeadm join". This token is and should be short-lived + + + + + + + + + + + + + + + + + + +
FieldDescription
- [Required]
+string +
+ No description provided. +
- [Required]
+string +
+ No description provided. +
+ + + +## `ControlPlaneComponent` {#kubeadm-k8s-io-v1beta2-ControlPlaneComponent} + + + + +**Appears in:** + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) + +- [APIServer](#kubeadm-k8s-io-v1beta2-APIServer) + + +ControlPlaneComponent holds settings common to control plane component of the cluster + + + + + + + + + + + + + + + + + + +
FieldDescription
extraArgs [Required]
+map[string]string +
+ `extraArgs` is an extra set of flags to pass to the control plane component.
extraVolumes [Required]
+[]HostPathMount +
+ `extraVolumes` is an extra set of host volumes, mounted to the control plane component.
+ + + +## `DNS` {#kubeadm-k8s-io-v1beta2-DNS} + + + + +**Appears in:** + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) + + +DNS defines the DNS addon that should be used in the cluster + + + + + + + + + + + + + + + + + + +
FieldDescription
type [Required]
+DNSAddOnType +
+ `type` defines the DNS add-on to use.
ImageMeta [Required]
+ImageMeta +
(Members of ImageMeta are embedded into this type.)
+ `imageMeta` allows you to customize the image used for the DNS.
+ + + +## `DNSAddOnType` {#kubeadm-k8s-io-v1beta2-DNSAddOnType} + +(Alias of `string`) + + +**Appears in:** + +- [DNS](#kubeadm-k8s-io-v1beta2-DNS) + + +DNSAddOnType defines string identifying DNS add-on types + + + + + +## `Discovery` {#kubeadm-k8s-io-v1beta2-Discovery} + + + + +**Appears in:** + +- [JoinConfiguration](#kubeadm-k8s-io-v1beta2-JoinConfiguration) + + +Discovery specifies the options for the kubelet to use during the TLS Bootstrap process + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
bootstrapToken [Required]
+BootstrapTokenDiscovery +
+ `bootstrapToken` is used to set the options for bootstrap token based discovery. +`bootstrapToken` and `file` are mutually exclusive.
file [Required]
+FileDiscovery +
+ `file` specifies a file or URL to a kubeconfig file from which to load cluster information. +`bootstrapToken` and `file` are mutually exclusive.
tlsBootstrapToken [Required]
+string +
+ `tlsBootstrapToken` is a token used for TLS bootstrapping.
+If `bootstrapToken` is set, this field is defaulted to `bootstrapToken.token`,
+but can be overridden.
+If `file` is set, this field **must be set** in case the KubeConfigFile does
+not contain any other authentication information.
timeout [Required]
+meta/v1.Duration
+
+ `timeout` modifies the discovery timeout.
+ + + +## `Etcd` {#kubeadm-k8s-io-v1beta2-Etcd} + + + + +**Appears in:** + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) + + +Etcd contains elements describing Etcd configuration. + + + + + + + + + + + + + + + + + + +
FieldDescription
local [Required]
+LocalEtcd +
+ `local` provides configuration knobs for configuring the local etcd instance. +`local` and `external` are mutually exclusive.
external [Required]
+ExternalEtcd +
+ `external` describes how to connect to an external etcd cluster. +`local` and `external` are mutually exclusive.
+ + + +## `ExternalEtcd` {#kubeadm-k8s-io-v1beta2-ExternalEtcd} + + + + +**Appears in:** + +- [Etcd](#kubeadm-k8s-io-v1beta2-Etcd) + + +ExternalEtcd describes an external etcd cluster. +Kubeadm has no knowledge of where certificate files live and they must be supplied. + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
endpoints [Required]
+[]string +
+ `endpoints` are endpoints of etcd members. This field is required.
caFile [Required]
+string +
+ `caFile` is an SSL Certificate Authority file used to secure etcd communication. +Required if using a TLS connection.
certFile [Required]
+string +
+ `certFile` is an SSL certificate file used to secure etcd communication.
+Required if using a TLS connection.
keyFile [Required]
+string +
+ `keyFile` is an SSL key file used to secure etcd communication. +Required if using a TLS connection.
+ + + +## `FileDiscovery` {#kubeadm-k8s-io-v1beta2-FileDiscovery} + + + + +**Appears in:** + +- [Discovery](#kubeadm-k8s-io-v1beta2-Discovery) + + +FileDiscovery is used to specify a file or URL to a kubeconfig file from which to load cluster information + + + + + + + + + + + + + +
FieldDescription
kubeConfigPath [Required]
+string +
+ `kubeConfigPath` specifies the actual file path or URL to the kubeconfig file +from which to load cluster information
+ + + +## `HostPathMount` {#kubeadm-k8s-io-v1beta2-HostPathMount} + + + + +**Appears in:** + +- [ControlPlaneComponent](#kubeadm-k8s-io-v1beta2-ControlPlaneComponent) + + +HostPathMount contains elements describing volumes that are mounted from the host. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+ `name` is the volume name inside the Pod template.
hostPath [Required]
+string +
+ `hostPath` is the path in the host that will be mounted inside the Pod.
mountPath [Required]
+string +
+ `mountPath` is the path inside the Pod where the `hostPath` volume is mounted.
readOnly [Required]
+bool +
+ `readOnly` controls write access to the volume.
pathType [Required]
+core/v1.HostPathType
+
+ `pathType` is the type of the `hostPath` volume.
+ + + +## `ImageMeta` {#kubeadm-k8s-io-v1beta2-ImageMeta} + + + + +**Appears in:** + +- [DNS](#kubeadm-k8s-io-v1beta2-DNS) + +- [LocalEtcd](#kubeadm-k8s-io-v1beta2-LocalEtcd) + + +ImageMeta allows to customize the image used for components that are not +originated from the Kubernetes/Kubernetes release process + + + + + + + + + + + + + + + + + + +
FieldDescription
imageRepository [Required]
+string +
+ `imageRepository` sets the container registry to pull images from. +If not set, the ImageRepository defined in ClusterConfiguration will be used instead.
imageTag [Required]
+string +
+ `imageTag` allows specifying a tag for the image.
+If this value is set, kubeadm does not automatically change the
+version of the above components during upgrades.
+ + + +## `JoinControlPlane` {#kubeadm-k8s-io-v1beta2-JoinControlPlane} + + + + +**Appears in:** + +- [JoinConfiguration](#kubeadm-k8s-io-v1beta2-JoinConfiguration) + + +JoinControlPlane contains elements describing an additional control plane instance to be deployed on the joining node. + + + + + + + + + + + + + + + + + + +
FieldDescription
localAPIEndpoint [Required]
+APIEndpoint +
+ `localAPIEndpoint` represents the endpoint of the API server instance to be deployed +on this node.
certificateKey [Required]
+string +
+ `certificateKey` is the key that is used for decryption of certificates after they +are downloaded from the secret upon joining a new control plane node. The +corresponding encryption key is in the InitConfiguration.
+ + + +## `LocalEtcd` {#kubeadm-k8s-io-v1beta2-LocalEtcd} + + + + +**Appears in:** + +- [Etcd](#kubeadm-k8s-io-v1beta2-Etcd) + + +LocalEtcd describes that kubeadm should run an etcd cluster locally + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
ImageMeta [Required]
+ImageMeta +
(Members of ImageMeta are embedded into this type.)
+ `ImageMeta` allows you to customize the container image used for etcd.
dataDir [Required]
+string +
+ `dataDir` is the directory where etcd will place its data.
+Defaults to "/var/lib/etcd".
extraArgs [Required]
+map[string]string +
+ `extraArgs` are extra arguments provided to the etcd binary +when run inside a static pod.
serverCertSANs [Required]
+[]string +
+ `serverCertSANs` sets extra Subject Alternative Names for the etcd server signing cert.
peerCertSANs [Required]
+[]string +
+ `peerCertSANs` sets extra Subject Alternative Names for the etcd peer signing cert.
+ + + +## `Networking` {#kubeadm-k8s-io-v1beta2-Networking} + + + + +**Appears in:** + +- [ClusterConfiguration](#kubeadm-k8s-io-v1beta2-ClusterConfiguration) + + +Networking contains elements describing cluster's networking configuration + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
serviceSubnet [Required]
+string +
+ `serviceSubnet` is the subnet used by k8s services. Defaults to "10.96.0.0/12".
podSubnet [Required]
+string +
+ `podSubnet` is the subnet used by Pods.
dnsDomain [Required]
+string +
+ `dnsDomain` is the DNS domain used by k8s services. Defaults to "cluster.local".
+ + + +## `NodeRegistrationOptions` {#kubeadm-k8s-io-v1beta2-NodeRegistrationOptions} + + + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta2-InitConfiguration) + +- [JoinConfiguration](#kubeadm-k8s-io-v1beta2-JoinConfiguration) + + +NodeRegistrationOptions holds fields that relate to registering a new control-plane or node to the cluster, either via "kubeadm init" or "kubeadm join" + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+ `name` is the `.metadata.name` field of the Node API object that will be created in this +`kubeadm init` or `kubeadm join` operation. +This field is also used in the CommonName field of the kubelet's client certificate to the +API server. Defaults to the hostname of the node if not provided.
criSocket [Required]
+string +
+ `criSocket` is used to retrieve container runtime info. This information will be +annotated to the Node API object, for later re-use.
taints [Required]
+[]core/v1.Taint
+
+ `taints` specifies the taints the Node API object should be registered with. If +this field is unset, i.e. nil, in the `kubeadm init` process, it will be defaulted +to `['"node-role.kubernetes.io/master"=""']`. If you don't want to taint your +control-plane node, set this field to an empty list, i.e. `taints: []` in the YAML +file. This field is solely used for Node registration.
kubeletExtraArgs [Required]
+map[string]string +
+ `kubeletExtraArgs` passes through extra arguments to the kubelet. The arguments here +are passed to the kubelet command line via the environment file kubeadm writes at +runtime for the kubelet to source. This overrides the generic base-level +configuration in the "kubelet-config-1.X" ConfigMap. Flags have higher priority when +parsing. These values are local and specific to the node kubeadm is executing on.
ignorePreflightErrors [Required]
+[]string +
+ `ignorePreflightErrors` provides a slice of pre-flight errors to be ignored when +the current node is registered.
+ + diff --git a/content/en/docs/reference/glossary/pod-priority.md b/content/en/docs/reference/glossary/pod-priority.md index 994f8bc4d8..f0e0a0f1c6 100644 --- a/content/en/docs/reference/glossary/pod-priority.md +++ b/content/en/docs/reference/glossary/pod-priority.md @@ -2,7 +2,7 @@ title: Pod Priority id: pod-priority date: 2019-01-31 -full_link: /docs/concepts/configuration/pod-priority-preemption/#pod-priority +full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority short_description: > Pod Priority indicates the importance of a Pod relative to other Pods. @@ -14,4 +14,4 @@ tags: -[Pod Priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority) gives the ability to set scheduling priority of a Pod to be higher and lower than other Pods — an important feature for production clusters workload. +[Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) gives the ability to set scheduling priority of a Pod to be higher and lower than other Pods — an important feature for production clusters workload. diff --git a/content/en/docs/reference/glossary/preemption.md b/content/en/docs/reference/glossary/preemption.md index f27e36c66f..1a1f0e929a 100644 --- a/content/en/docs/reference/glossary/preemption.md +++ b/content/en/docs/reference/glossary/preemption.md @@ -2,7 +2,7 @@ title: Preemption id: preemption date: 2019-01-31 -full_link: /docs/concepts/configuration/pod-priority-preemption/#preemption +full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption short_description: > Preemption logic in Kubernetes helps a pending Pod to find a suitable Node by evicting low priority Pods existing on that Node. @@ -14,4 +14,4 @@ tags: -If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/configuration/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible. +If a Pod cannot be scheduled, the scheduler tries to [preempt](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) lower priority Pods to make scheduling of the pending Pod possible. diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index da41e7c4c9..d87e4c25de 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -192,7 +192,21 @@ For example, if there are 1,253 pods on the cluster and the client wants to rece } ``` -Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates. +Note that the `resourceVersion` of the list remains constant across each request, +indicating the server is showing us a consistent snapshot of the pods. Pods that +are created, updated, or deleted after version `10245` would not be shown unless +the user makes a list request without the `continue` token. This allows clients +to break large requests into smaller chunks and then perform a watch operation +on the full set without missing any updates. + +`remainingItemCount` is the number of subsequent items in the list which are not +included in this list response. 
If the list request contained label or field selectors,
+then the number of remaining items is unknown and the API server does not include
+a `remainingItemCount` field in its response. If the list is complete (either
+because it is not chunking or because this is the last chunk), then there are no
+more remaining items and the API server does not include a `remainingItemCount`
+field in its response. The intended use of the `remainingItemCount` is estimating
+the size of a collection.

## Receiving resources as Tables

diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index edaa061b16..0de210add6 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -89,23 +89,11 @@ If you notice that `kubeadm init` hangs after printing out the following line:

This may be caused by a number of problems. The most common are:

- network connection problems. Check that your machine has full network connectivity before continuing.
-- the default cgroup driver configuration for the kubelet differs from that used by Docker.
- Check the system log file (e.g. `/var/log/message`) or examine the output from `journalctl -u kubelet`. If you see something like the following:
-
- ```shell
- error: failed to run Kubelet: failed to create kubelet:
- misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
- ```
-
- There are two common ways to fix the cgroup driver problem:
-
- 1. Install Docker again following instructions
- [here](/docs/setup/production-environment/container-runtimes/#docker).
-
- 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
- [Configure cgroup driver used by kubelet on control-plane node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)
-
-- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
+- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
+configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
+- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
+and investigating each container by running `docker logs`. For other container runtimes, see
+[Debugging Kubernetes nodes with crictl](/docs/tasks/debug-application-cluster/crictl/).

## kubeadm blocks when removing managed containers

@@ -224,7 +212,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
-in kube-apserver logs. To fix the issue you must follow these steps:
+in kube-apiserver logs. To fix the issue you must follow these steps:

1. Backup and delete `/etc/kubernetes/kubelet.conf` and `/var/lib/kubelet/pki/kubelet-client*` from the failed node.
1.
From a working control plane node in the cluster that has `/etc/kubernetes/pki/ca.key` execute

diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 587b34b58f..5585496cbc 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -102,6 +102,8 @@ limitation and compatibility rules will change.

Microsoft maintains a Windows pause infrastructure container at
`mcr.microsoft.com/oss/kubernetes/pause:3.4.1`.
+Kubernetes maintains a multi-architecture image `k8s.gcr.io/pause:3.5` that
+supports Linux as well as Windows.

#### Compute

diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md
index 6361b20d16..2338b0cdc7 100644
--- a/content/en/docs/tasks/administer-cluster/certificates.md
+++ b/content/en/docs/tasks/administer-cluster/certificates.md
@@ -116,6 +116,9 @@ manually through `easyrsa`, `openssl` or `cfssl`.

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
+1. View the certificate signing request:
+
+ openssl req -noout -text -in ./server.csr
1. View the certificate:

openssl x509 -noout -text -in ./server.crt

diff --git a/content/en/docs/tasks/administer-cluster/highly-available-control-plane.md b/content/en/docs/tasks/administer-cluster/highly-available-control-plane.md
index 339f48e41a..f50bfb01d9 100644
--- a/content/en/docs/tasks/administer-cluster/highly-available-control-plane.md
+++ b/content/en/docs/tasks/administer-cluster/highly-available-control-plane.md
@@ -10,7 +10,7 @@ aliases: [ '/docs/tasks/administer-cluster/highly-available-master/' ]

{{< feature-state for_k8s_version="v1.5" state="alpha" >}}

-You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine.
+You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine. However, these scripts are not suitable for any sort of production use; they are primarily used in the project's CI.
This document describes how to use kube-up/down scripts to manage a highly available (HA) control plane and how HA control planes are implemented for use with GCE.

@@ -156,14 +156,14 @@ and the IP address of the first replica will be promoted to IP address of load b
Similarly, after removal of the penultimate control plane node, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
Please note that creation and removal of load balancer are complex operations and it may take some time (~20 minutes) for them to propagate.

-### Master service & kubelets
+### Control plane service & kubelets

Instead of trying to keep an up-to-date list of Kubernetes apiserver in the Kubernetes service,
the system directs all traffic to the external IP:

* in case of a single node control plane, the IP points to the control plane node,

-* in case of an HA control plane, the IP points to the load balancer in-front of the masters.
+* in case of an HA control plane, the IP points to the load balancer in front of the control plane nodes.

Similarly, the external IP will be used by kubelets to communicate with the control plane.
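As a rough illustration of how the preceding HA workflow is driven, here is a minimal sketch of invoking the GCE cluster scripts. The environment variable names (`MULTIZONE`, `KUBE_GCE_ZONE`, `KUBE_REPLICATE_EXISTING_MASTER`) are assumptions drawn from the kube-up tooling and should be verified against the `cluster/` scripts in your checkout:

```shell
# Sketch only: start an HA-compatible cluster on GCE.
# MULTIZONE and KUBE_GCE_ZONE are assumed variable names; confirm them
# in cluster/kube-up.sh before relying on this.
MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ./cluster/kube-up.sh

# Sketch only: add a control plane replica in another zone.
# KUBE_REPLICATE_EXISTING_MASTER is likewise an assumed variable name.
KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```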
diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 2934e1c0f7..0964033079 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -78,7 +78,8 @@ A namespace can be in one of two phases: * `Active` the namespace is in use * `Terminating` the namespace is being deleted, and can not be used for new objects -See the [design doc](https://git.k8s.io/community/contributors/design-proposals/architecture/namespaces.md#phases) for more details. +For more details, see [Namespace](/docs/reference/kubernetes-api/cluster-resources/namespace-v1/) +in the API reference. ## Creating a new namespace diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 505ec7d755..cf31ecb3c3 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -380,5 +380,5 @@ JWKS URI is required to use the `https` scheme. See also: - [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) -- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md) +- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery) - [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html) diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index 50a02b990e..2a46fd70d8 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -24,7 +24,7 @@ a Pod or Container. Security context settings include, but are not limited to: * [AppArmor](/docs/tutorials/clusters/apparmor/): Use program profiles to restrict the capabilities of individual programs. -* [Seccomp](https://en.wikipedia.org/wiki/Seccomp): Filter a process's system calls. +* [Seccomp](/docs/tutorials/clusters/seccomp/): Filter a process's system calls. * AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the [`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has `CAP_SYS_ADMIN`. 
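To make these security context settings concrete, here is a minimal sketch of a Pod manifest that combines several of them; the Pod name and image are illustrative placeholders, not values taken from the page above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-sketch   # illustrative name
spec:
  securityContext:
    runAsUser: 1000        # run the container process as a non-root UID
    runAsNonRoot: true     # refuse to start if the image would run as root
    seccompProfile:
      type: RuntimeDefault # filter system calls with the runtime's default profile
  containers:
  - name: demo
    image: busybox         # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      allowPrivilegeEscalation: false  # sets the no_new_privs flag on the process
      capabilities:
        drop: ["ALL"]
```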
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index 3230b7b73a..cd2d0fb103 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -154,22 +154,26 @@ from the YAML you used to create it: ```yaml apiVersion: v1 -kind: List items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: - creationTimestamp: 2017-05-31T12:56:35Z + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"stable.example.com/v1","kind":"CronTab","metadata":{"annotations":{},"name":"my-new-cron-object","namespace":"default"},"spec":{"cronSpec":"* * * * */5","image":"my-awesome-cron-image"}} + creationTimestamp: "2021-06-20T07:35:27Z" generation: 1 name: my-new-cron-object namespace: default - resourceVersion: "285" - uid: 9423255b-4600-11e7-af6a-28d2447dc82b + resourceVersion: "1326" + uid: 9aab1d66-628e-41bb-a422-57b8b3b1f5a9 spec: cronSpec: '* * * * */5' image: my-awesome-cron-image +kind: List metadata: resourceVersion: "" + selfLink: "" ``` ## Delete a CustomResourceDefinition diff --git a/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/en/docs/tasks/network/customize-hosts-file-for-pods.md similarity index 98% rename from content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md rename to content/en/docs/tasks/network/customize-hosts-file-for-pods.md index 8eee03bf9b..e396db5d2d 100644 --- a/content/en/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/content/en/docs/tasks/network/customize-hosts-file-for-pods.md @@ -3,7 +3,7 @@ reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases -content_type: concept +content_type: task weight: 60 min-kubernetes-server-version: 1.7 --- @@ -16,7 +16,7 @@ Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostn Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten on during Pod creation/restart. - + ## Default hosts file content diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index cd04442614..875b324412 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -82,6 +82,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory: ```bash + chmod +x kubectl mkdir -p ~/.local/bin/kubectl mv ./kubectl ~/.local/bin/kubectl # and then add ~/.local/bin/kubectl to $PATH diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index ad7b856223..106365900c 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -25,7 +25,8 @@ weight: 20
diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index ab008c38af..5a645ea5d6 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -37,7 +37,9 @@ weight: 20 diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index 8c87cfab18..38c2acd8cc 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -29,7 +29,9 @@ weight: 20 diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html index 3b5a51dc40..6f8cd442b8 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -74,11 +74,11 @@ weight: 10

Nodes

-

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.

+

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the control plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster. The control plane's automatic scheduling takes into account the available resources on each Node.

Every Kubernetes Node runs at least:

    -
  • Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.
  • +
  • Kubelet, a process responsible for communication between the Kubernetes control plane and the Node; it manages the Pods and the containers running on a machine.
  • A container runtime (like Docker) responsible for pulling the container image from a registry, unpacking the container, and running the application.
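For readers following along outside the interactive terminal, a couple of standard kubectl commands illustrate the same ideas; `<node-name>` is a placeholder for one of your own Node names:

```shell
# List the Nodes in the cluster and check that they are Ready
kubectl get nodes

# Inspect a single Node: capacity, addresses, and the Pods scheduled on it
# (replace <node-name> with a name from the previous command)
kubectl describe node <node-name>
```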
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index e89414b917..f99bd50172 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -26,7 +26,9 @@ weight: 20
diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html index 77e707c429..3f87e284c9 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -26,7 +26,9 @@ weight: 20
diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html index 42663ecdaa..b09aa1215c 100644 --- a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -26,7 +26,8 @@ weight: 20 @@ -35,3 +36,7 @@ weight: 20 + + + + diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md index 9f993d22a0..b8c92d66a5 100644 --- a/content/en/docs/tutorials/stateless-application/guestbook.md +++ b/content/en/docs/tutorials/stateless-application/guestbook.md @@ -14,7 +14,10 @@ source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook --- -This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components: +This tutorial shows you how to build and deploy a simple _(not production +ready)_, multi-tier web application using Kubernetes and +[Docker](https://www.docker.com/). This example consists of the following +components: * A single-instance [Redis](https://www.redis.com/) to store guestbook entries * Multiple web frontend instances @@ -48,141 +51,157 @@ The manifest file, included below, specifies a Deployment controller that runs a 1. Launch a terminal window in the directory you downloaded the manifest files. 1. Apply the Redis Deployment from the `redis-leader-deployment.yaml` file: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml + ``` 1. Query the list of Pods to verify that the Redis Pod is running: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - The response should be similar to this: + The response should be similar to this: - ```shell - NAME READY STATUS RESTARTS AGE - redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s - ``` + ``` + NAME READY STATUS RESTARTS AGE + redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s + ``` 1. Run the following command to view the logs from the Redis leader Pod: - ```shell - kubectl logs -f deployment/redis-leader - ``` + ```shell + kubectl logs -f deployment/redis-leader + ``` ### Creating the Redis leader Service -The guestbook application needs to communicate to the Redis to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis Pod. A Service defines a policy to access the Pods. +The guestbook application needs to communicate to the Redis to write its data. +You need to apply a [Service](/docs/concepts/services-networking/service/) to +proxy the traffic to the Redis Pod. A Service defines a policy to access the +Pods. {{< codenew file="application/guestbook/redis-leader-service.yaml" >}} 1. Apply the Redis Service from the following `redis-leader-service.yaml` file: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml + ``` 1. 
Query the list of Services to verify that the Redis Service is running: - ```shell - kubectl get service - ``` + ```shell + kubectl get service + ``` - The response should be similar to this: + The response should be similar to this: - ```shell - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - kubernetes ClusterIP 10.0.0.1 443/TCP 1m - redis-leader ClusterIP 10.103.78.24 6379/TCP 16s - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.0.0.1 443/TCP 1m + redis-leader ClusterIP 10.103.78.24 6379/TCP 16s + ``` {{< note >}} -This manifest file creates a Service named `redis-leader` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis Pod. +This manifest file creates a Service named `redis-leader` with a set of labels +that match the labels previously defined, so the Service routes network +traffic to the Redis Pod. {{< /note >}} ### Set up Redis followers -Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas. +Although the Redis leader is a single Pod, you can make it highly available +and meet traffic demands by adding a few Redis followers, or replicas. {{< codenew file="application/guestbook/redis-follower-deployment.yaml" >}} -1. Apply the Redis Service from the following `redis-follower-deployment.yaml` file: +1. Apply the Redis Deployment from the following `redis-follower-deployment.yaml` file: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml + ``` 1. Verify that the two Redis follower replicas are running by querying the list of Pods: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - The response should be similar to this: + The response should be similar to this: - ```shell - NAME READY STATUS RESTARTS AGE - redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s - redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s - redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m + ``` + NAME READY STATUS RESTARTS AGE + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s + redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m + ``` ### Creating the Redis follower service -The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another [Service](/docs/concepts/services-networking/service/). +The guestbook application needs to communicate with the Redis followers to +read data. To make the Redis followers discoverable, you must set up another +[Service](/docs/concepts/services-networking/service/). {{< codenew file="application/guestbook/redis-follower-service.yaml" >}} 1. Apply the Redis Service from the following `redis-follower-service.yaml` file: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml + ``` 1. 
Query the list of Services to verify that the Redis Service is running: - ```shell - kubectl get service - ``` + ```shell + kubectl get service + ``` - The response should be similar to this: + The response should be similar to this: - ```shell - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h - redis-follower ClusterIP 10.110.162.42 6379/TCP 9s - redis-leader ClusterIP 10.103.78.24 6379/TCP 6m10s - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h + redis-follower ClusterIP 10.110.162.42 6379/TCP 9s + redis-leader ClusterIP 10.103.78.24 6379/TCP 6m10s + ``` {{< note >}} -This manifest file creates a Service named `redis-follower` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis Pod. +This manifest file creates a Service named `redis-follower` with a set of +labels that match the labels previously defined, so the Service routes network +traffic to the Redis Pod. {{< /note >}} ## Set up and Expose the Guestbook Frontend -Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment. +Now that you have the Redis storage of your guestbook up and running, start +the guestbook web servers. Like the Redis followers, the frontend is deployed +using a Kubernetes Deployment. -The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX. +The guestbook app uses a PHP frontend. It is configured to communicate with +either the Redis follower or leader Services, depending on whether the request +is a read or a write. The frontend exposes a JSON interface, and serves a +jQuery-Ajax-based UX. ### Creating the Guestbook Frontend Deployment @@ -190,199 +209,214 @@ The guestbook app uses a PHP frontend. It is configured to communicate with eith 1. Apply the frontend Deployment from the `frontend-deployment.yaml` file: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml + ``` 1. Query the list of Pods to verify that the three frontend replicas are running: - ```shell - kubectl get pods -l app=guestbook -l tier=frontend - ``` + ```shell + kubectl get pods -l app=guestbook -l tier=frontend + ``` - The response should be similar to this: + The response should be similar to this: - ``` - NAME READY STATUS RESTARTS AGE - frontend-85595f5bf9-5tqhb 1/1 Running 0 47s - frontend-85595f5bf9-qbzwm 1/1 Running 0 47s - frontend-85595f5bf9-zchwc 1/1 Running 0 47s - ``` + ``` + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5tqhb 1/1 Running 0 47s + frontend-85595f5bf9-qbzwm 1/1 Running 0 47s + frontend-85595f5bf9-zchwc 1/1 Running 0 47s + ``` ### Creating the Frontend Service -The `Redis` Services you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services-service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster. 
+The `Redis` Services you applied are only accessible within the Kubernetes
+cluster because the default type for a Service is
+[ClusterIP](/docs/concepts/services-networking/service/#publishing-services-service-types).
+`ClusterIP` provides a single IP address for the set of Pods the Service is
+pointing to. This IP address is accessible only within the cluster.

-If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However a Kubernetes user you can use `kubectl port-forward` to access the service even though it uses a `ClusterIP`.
+If you want guests to be able to access your guestbook, you must configure the
+frontend Service to be externally visible, so a client can request the Service
+from outside the Kubernetes cluster. However, as a Kubernetes user, you can use
+`kubectl port-forward` to access the service even though it uses a
+`ClusterIP`.

{{< note >}}
-Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment `type: LoadBalancer`.
+Some cloud providers, like Google Compute Engine or Google Kubernetes Engine,
+support external load balancers. If your cloud provider supports load
+balancers and you want to use it, uncomment `type: LoadBalancer`.
{{< /note >}}

{{< codenew file="application/guestbook/frontend-service.yaml" >}}

1. Apply the frontend Service from the `frontend-service.yaml` file:

+
-   ```shell
-   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
-   ```
+   ```shell
+   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
+   ```

1. Query the list of Services to verify that the frontend Service is running:

-   ```shell
-   kubectl get services
-   ```
+   ```shell
+   kubectl get services
+   ```

-   The response should be similar to this:
+   The response should be similar to this:

-   ```
-   NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
-   frontend         ClusterIP   10.97.28.230    80/TCP        19s
-   kubernetes       ClusterIP   10.96.0.1       443/TCP       3d19h
-   redis-follower   ClusterIP   10.110.162.42   6379/TCP      5m48s
-   redis-leader     ClusterIP   10.103.78.24    6379/TCP      6m10s
-   ```
+   ```
+   NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
+   frontend         ClusterIP   10.97.28.230    80/TCP        19s
+   kubernetes       ClusterIP   10.96.0.1       443/TCP       3d19h
+   redis-follower   ClusterIP   10.110.162.42   6379/TCP      5m48s
+   redis-leader     ClusterIP   10.103.78.24    6379/TCP      11m
+   ```

### Viewing the Frontend Service via `kubectl port-forward`

1. Run the following command to forward port `8080` on your local machine to port `80` on the service.

-   ```shell
-   kubectl port-forward svc/frontend 8080:80
-   ```
+   ```shell
+   kubectl port-forward svc/frontend 8080:80
+   ```

-   The response should be similar to this:
+   The response should be similar to this:

-   ```
-   Forwarding from 127.0.0.1:8080 -> 80
-   Forwarding from [::1]:8080 -> 80
-   ```
+   ```
+   Forwarding from 127.0.0.1:8080 -> 80
+   Forwarding from [::1]:8080 -> 80
+   ```

1. load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.

### Viewing the Frontend Service via `LoadBalancer`

-If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` you need to find the IP address to view your Guestbook.
+If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer`,
+you need to find the IP address to view your Guestbook.

1.
Run the following command to get the IP address for the frontend Service. - ```shell - kubectl get service frontend - ``` + ```shell + kubectl get service frontend + ``` - The response should be similar to this: + The response should be similar to this: - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + frontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m + ``` 1. Copy the external IP address, and load the page in your browser to view your guestbook. {{< note >}} -Try adding some guestbook entries by typing in a message, and clicking Submit. The message you typed appears in the frontend. This message indicates that data is successfully added to Redis through the Services you created earlier. +Try adding some guestbook entries by typing in a message, and clicking Submit. +The message you typed appears in the frontend. This message indicates that +data is successfully added to Redis through the Services you created earlier. {{< /note >}} ## Scale the Web Frontend -You can scale up or down as needed because your servers are defined as a Service that uses a Deployment controller. +You can scale up or down as needed because your servers are defined as a +Service that uses a Deployment controller. 1. Run the following command to scale up the number of frontend Pods: - ```shell - kubectl scale deployment frontend --replicas=5 - ``` + ```shell + kubectl scale deployment frontend --replicas=5 + ``` 1. Query the list of Pods to verify the number of frontend Pods running: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - The response should look similar to this: + The response should look similar to this: - ``` - NAME READY STATUS RESTARTS AGE - frontend-85595f5bf9-5df5m 1/1 Running 0 83s - frontend-85595f5bf9-7zmg5 1/1 Running 0 83s - frontend-85595f5bf9-cpskg 1/1 Running 0 15m - frontend-85595f5bf9-l2l54 1/1 Running 0 14m - frontend-85595f5bf9-l9c8z 1/1 Running 0 14m - redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m - redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m - redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m - ``` + ``` + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5df5m 1/1 Running 0 83s + frontend-85595f5bf9-7zmg5 1/1 Running 0 83s + frontend-85595f5bf9-cpskg 1/1 Running 0 15m + frontend-85595f5bf9-l2l54 1/1 Running 0 14m + frontend-85595f5bf9-l9c8z 1/1 Running 0 14m + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m + redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m + ``` 1. Run the following command to scale down the number of frontend Pods: - ```shell - kubectl scale deployment frontend --replicas=2 - ``` + ```shell + kubectl scale deployment frontend --replicas=2 + ``` 1. 
1. Query the list of Pods to verify the number of frontend Pods running:

-   ```shell
-   kubectl get pods
-   ```
+   ```shell
+   kubectl get pods
+   ```

-   The response should look similar to this:
+   The response should look similar to this:

-   ```
-   NAME                             READY   STATUS    RESTARTS   AGE
-   frontend-85595f5bf9-cpskg        1/1     Running   0          16m
-   frontend-85595f5bf9-l9c8z        1/1     Running   0          15m
-   redis-follower-dddfbdcc9-82sfr   1/1     Running   0          98m
-   redis-follower-dddfbdcc9-qrt5k   1/1     Running   0          98m
-   redis-leader-fb76b4755-xjr2n     1/1     Running   0          109m
-   ```
+   ```
+   NAME                             READY   STATUS    RESTARTS   AGE
+   frontend-85595f5bf9-cpskg        1/1     Running   0          16m
+   frontend-85595f5bf9-l9c8z        1/1     Running   0          15m
+   redis-follower-dddfbdcc9-82sfr   1/1     Running   0          98m
+   redis-follower-dddfbdcc9-qrt5k   1/1     Running   0          98m
+   redis-leader-fb76b4755-xjr2n     1/1     Running   0          109m
+   ```

## {{% heading "cleanup" %}}

-Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
+Deleting the Deployments and Services also deletes any running Pods. Use
+labels to delete multiple resources with one command.

1. Run the following commands to delete all Pods, Deployments, and Services.

-   ```shell
-   kubectl delete deployment -l app=redis
-   kubectl delete service -l app=redis
-   kubectl delete deployment frontend
-   kubectl delete service frontend
-   ```
+   ```shell
+   kubectl delete deployment -l app=redis
+   kubectl delete service -l app=redis
+   kubectl delete deployment frontend
+   kubectl delete service frontend
+   ```

-   The response should look similar to this:
+   The response should look similar to this:

-   ```
-   deployment.apps "redis-follower" deleted
-   deployment.apps "redis-leader" deleted
-   deployment.apps "frontend" deleted
-   service "frontend" deleted
-   ```
+   ```
+   deployment.apps "redis-follower" deleted
+   deployment.apps "redis-leader" deleted
+   service "redis-follower" deleted
+   service "redis-leader" deleted
+   deployment.apps "frontend" deleted
+   service "frontend" deleted
+   ```

1. Query the list of Pods to verify that no Pods are running:

-   ```shell
-   kubectl get pods
-   ```
+   ```shell
+   kubectl get pods
+   ```

-   The response should look similar to this:
+   The response should look similar to this:

-   ```
-   No resources found in default namespace.
-   ```
+   ```
+   No resources found in default namespace.
+   ```

## {{% heading "whatsnext" %}}

* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
-* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
\ No newline at end of file
+* Read more about [Managing Resources](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)
diff --git a/content/en/examples/policy/baseline-psp.yaml b/content/en/examples/policy/baseline-psp.yaml
index 36e440588b..57258bf313 100644
--- a/content/en/examples/policy/baseline-psp.yaml
+++ b/content/en/examples/policy/baseline-psp.yaml
@@ -6,20 +6,16 @@ metadata:
  # Optional: Allow the default AppArmor profile, requires setting the default.
  apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
  apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
- # Optional: Allow the default seccomp profile, requires setting the default.
- seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default,unconfined' - seccomp.security.alpha.kubernetes.io/defaultProfileName: 'unconfined' + seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' spec: privileged: false - # The moby default capability set, defined here: - # https://github.com/moby/moby/blob/0a5cec2833f82a6ad797d70acbf9cbbaf8956017/oci/caps/defaults.go#L6-L19 + # The moby default capability set, minus NET_RAW allowedCapabilities: - 'CHOWN' - 'DAC_OVERRIDE' - 'FSETID' - 'FOWNER' - 'MKNOD' - - 'NET_RAW' - 'SETGID' - 'SETUID' - 'SETFCAP' @@ -36,15 +32,16 @@ spec: - 'projected' - 'secret' - 'downwardAPI' - # Assume that persistentVolumes set up by the cluster admin are safe to use. + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' - 'persistentVolumeClaim' + - 'ephemeral' # Allow all other non-hostpath volume types. - 'awsElasticBlockStore' - 'azureDisk' - 'azureFile' - 'cephFS' - 'cinder' - - 'csi' - 'fc' - 'flexVolume' - 'flocker' @@ -67,6 +64,9 @@ spec: runAsUser: rule: 'RunAsAny' seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + # The PSP SELinux API cannot express the SELinux Pod Security Standards, + # so if using SELinux, you must choose a more restrictive default. rule: 'RunAsAny' supplementalGroups: rule: 'RunAsAny' diff --git a/content/en/examples/policy/restricted-psp.yaml b/content/en/examples/policy/restricted-psp.yaml index 4db57688b1..0837c5a3ce 100644 --- a/content/en/examples/policy/restricted-psp.yaml +++ b/content/en/examples/policy/restricted-psp.yaml @@ -5,14 +5,11 @@ metadata: annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default' apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' - seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default' apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' spec: privileged: false # Required to prevent escalations to root. allowPrivilegeEscalation: false - # This is redundant with non-root + disallow privilege escalation, - # but we can provide it for defense in depth. requiredDropCapabilities: - ALL # Allow core volume types. @@ -22,8 +19,10 @@ spec: - 'projected' - 'secret' - 'downwardAPI' - # Assume that persistentVolumes set up by the cluster admin are safe to use. + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' - 'persistentVolumeClaim' + - 'ephemeral' hostNetwork: false hostIPC: false hostPID: false diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 95ebc69da2..b44604bc3e 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -76,12 +76,11 @@ Timelines may vary with the severity of bug fixes, but for easier planning we will target the following monthly release points. Unplanned, critical releases may also occur in between these. 
-| Monthly Patch Release | Target date | -| --------------------- | ----------- | -| June 2021 | 2021-06-16 | -| July 2021 | 2021-07-14 | -| August 2021 | 2021-08-11 | -| September 2021 | 2021-09-15 | +| Monthly Patch Release | Cherry Pick Deadline | Target date | +| --------------------- | -------------------- | ----------- | +| July 2021 | 2021-07-10 | 2021-07-14 | +| August 2021 | 2021-08-07 | 2021-08-11 | +| September 2021 | 2021-09-11 | 2021-09-15 | ## Detailed Release History for Active Branches @@ -93,6 +92,7 @@ End of Life for **1.21** is **2022-06-28** | PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE | | ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- | +| 1.21.3 | 2021-07-10 | 2021-07-14 | | | 1.21.2 | 2021-06-12 | 2021-06-16 | | | 1.21.1 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) | @@ -104,6 +104,7 @@ End of Life for **1.20** is **2022-02-28** | PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE | | ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- | +| 1.20.9 | 2021-07-10 | 2021-07-14 | | | 1.20.8 | 2021-06-12 | 2021-06-16 | | | 1.20.7 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) | | 1.20.6 | 2021-04-09 | 2021-04-14 | | @@ -121,6 +122,7 @@ End of Life for **1.19** is **2021-10-28** | PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE | | ------------- | -------------------- | ----------- | ------------------------------------------------------------------------- | +| 1.19.13 | 2021-07-10 | 2021-07-14 | | | 1.19.12 | 2021-06-12 | 2021-06-16 | | | 1.19.11 | 2021-05-07 | 2021-05-12 | [Regression](https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs) | | 1.19.10 | 2021-04-09 | 2021-04-14 | | diff --git a/content/es/docs/concepts/architecture/_index.md b/content/es/docs/concepts/architecture/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/cluster-administration/_index.md b/content/es/docs/concepts/cluster-administration/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/configuration/_index.md b/content/es/docs/concepts/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/containers/_index.md b/content/es/docs/concepts/containers/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/overview/_index.md b/content/es/docs/concepts/overview/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/overview/object-management-kubectl/_index.md b/content/es/docs/concepts/overview/object-management-kubectl/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/overview/working-with-objects/_index.md b/content/es/docs/concepts/overview/working-with-objects/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/policy/_index.md b/content/es/docs/concepts/policy/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/services-networking/_index.md b/content/es/docs/concepts/services-networking/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/storage/_index.md b/content/es/docs/concepts/storage/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/workloads/controllers/_index.md 
b/content/es/docs/concepts/workloads/controllers/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/concepts/workloads/pods/_index.md b/content/es/docs/concepts/workloads/pods/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/getting-started-guides/_index.md b/content/es/docs/getting-started-guides/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/getting-started-guides/fedora/_index.md b/content/es/docs/getting-started-guides/fedora/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/application-architect.md b/content/es/docs/reference/glossary/application-architect.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/application-developer.md b/content/es/docs/reference/glossary/application-developer.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/certificate.md b/content/es/docs/reference/glossary/certificate.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/cluster.md b/content/es/docs/reference/glossary/cluster.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/controller.md b/content/es/docs/reference/glossary/controller.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/docker.md b/content/es/docs/reference/glossary/docker.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/etcd.md b/content/es/docs/reference/glossary/etcd.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/image.md b/content/es/docs/reference/glossary/image.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/index.md b/content/es/docs/reference/glossary/index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/job.md b/content/es/docs/reference/glossary/job.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kops.md b/content/es/docs/reference/glossary/kops.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kube-apiserver.md b/content/es/docs/reference/glossary/kube-apiserver.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kube-controller-manager.md b/content/es/docs/reference/glossary/kube-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kube-proxy.md b/content/es/docs/reference/glossary/kube-proxy.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kube-scheduler.md b/content/es/docs/reference/glossary/kube-scheduler.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kubeadm.md b/content/es/docs/reference/glossary/kubeadm.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kubectl.md b/content/es/docs/reference/glossary/kubectl.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/kubelet.md b/content/es/docs/reference/glossary/kubelet.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/label.md b/content/es/docs/reference/glossary/label.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/limitrange.md b/content/es/docs/reference/glossary/limitrange.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/minikube.md b/content/es/docs/reference/glossary/minikube.md old mode 100755 
new mode 100644 diff --git a/content/es/docs/reference/glossary/namespace.md b/content/es/docs/reference/glossary/namespace.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/node.md b/content/es/docs/reference/glossary/node.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/persistent-volume-claim.md b/content/es/docs/reference/glossary/persistent-volume-claim.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/pod.md b/content/es/docs/reference/glossary/pod.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/replica-set.md b/content/es/docs/reference/glossary/replica-set.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/secret.md b/content/es/docs/reference/glossary/secret.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/selector.md b/content/es/docs/reference/glossary/selector.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/sysctl.md b/content/es/docs/reference/glossary/sysctl.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/glossary/volume.md b/content/es/docs/reference/glossary/volume.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/kubectl/_index.md b/content/es/docs/reference/kubectl/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/reference/setup-tools/kubeadm/_index.md b/content/es/docs/reference/setup-tools/kubeadm/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/setup/independent/_index.md b/content/es/docs/setup/independent/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/setup/release/_index.md b/content/es/docs/setup/release/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/access-application-cluster/_index.md b/content/es/docs/tasks/access-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/access-kubernetes-api/_index.md b/content/es/docs/tasks/access-kubernetes-api/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/access-kubernetes-api/custom-resources/_index.md b/content/es/docs/tasks/access-kubernetes-api/custom-resources/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/administer-cluster/_index.md b/content/es/docs/tasks/administer-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/administer-cluster/kubeadm/_index.md b/content/es/docs/tasks/administer-cluster/kubeadm/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/configure-pod-container/_index.md b/content/es/docs/tasks/configure-pod-container/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/federation/_index.md b/content/es/docs/tasks/federation/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/federation/administer-federation/_index.md b/content/es/docs/tasks/federation/administer-federation/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/inject-data-application/_index.md b/content/es/docs/tasks/inject-data-application/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/manage-daemon/_index.md b/content/es/docs/tasks/manage-daemon/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/run-application/_index.md b/content/es/docs/tasks/run-application/_index.md old 
mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/service-catalog/_index.md b/content/es/docs/tasks/service-catalog/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tasks/tls/_index.md b/content/es/docs/tasks/tls/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/clusters/_index.md b/content/es/docs/tutorials/clusters/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/configuration/_index.md b/content/es/docs/tutorials/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/kubernetes-basics/_index.md b/content/es/docs/tutorials/kubernetes-basics/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/online-training/_index.md b/content/es/docs/tutorials/online-training/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/services/_index.md b/content/es/docs/tutorials/services/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/stateful-application/_index.md b/content/es/docs/tutorials/stateful-application/_index.md old mode 100755 new mode 100644 diff --git a/content/es/docs/tutorials/stateless-application/_index.md b/content/es/docs/tutorials/stateless-application/_index.md old mode 100755 new mode 100644 diff --git a/content/fr/_index.html b/content/fr/_index.html index 53c11593db..836e5b7504 100644 --- a/content/fr/_index.html +++ b/content/fr/_index.html @@ -43,12 +43,12 @@ Kubernetes est une solution open-source qui vous permet de tirer parti de vos in

- Venez au KubeCon NA Virtuel du 17 au 20 Novembre 2020 + Venez au KubeCon NA Los Angeles, USA du 11 au 15 Octobre 2021



- Venez au KubeCon EU Virtuel du 4 au 7 Mai 2021 + Venez au KubeCon EU Valence, Espagne du 15 au 20 Mai 2022
diff --git a/content/fr/docs/setup/custom-cloud/kubespray.md b/content/fr/docs/setup/custom-cloud/kubespray.md index 2e10c21f46..cde3cbb3f9 100644 --- a/content/fr/docs/setup/custom-cloud/kubespray.md +++ b/content/fr/docs/setup/custom-cloud/kubespray.md @@ -8,7 +8,7 @@ content_type: concept Cette documentation permet d'installer rapidement un cluster Kubernetes hébergé sur GCE, Azure, Openstack, AWS, vSphere, Oracle Cloud Infrastructure (expérimental) ou sur des serveurs physiques (bare metal) grâce à [Kubespray](https://github.com/kubernetes-incubator/kubespray). -Kubespray se base sur des outils de provisioning, des [paramètres](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md) et playbooks [Ansible](http://docs.ansible.com/) ainsi que sur des connaissances spécifiques à Kubernetes et l'installation de systèmes d'exploitation afin de fournir: +Kubespray se base sur des outils de provisioning, des [paramètres](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md) et playbooks [Ansible](https://docs.ansible.com/) ainsi que sur des connaissances spécifiques à Kubernetes et l'installation de systèmes d'exploitation afin de fournir: * Un cluster en haute disponibilité * des composants modulables @@ -49,7 +49,7 @@ Afin de vous aider à préparer votre de votre environnement, Kubespray fournit ### (2/5) Construire un fichier d'inventaire Ansible -Lorsque vos serveurs sont disponibles, créez un fichier d'inventaire Ansible ([inventory](http://docs.ansible.com/ansible/intro_inventory.html)). +Lorsque vos serveurs sont disponibles, créez un fichier d'inventaire Ansible ([inventory](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html)). Vous pouvez le créer manuellement ou en utilisant un script d'inventaire dynamique. Pour plus d'informations se référer à [Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory). 
### (3/5) Préparation au déploiement de votre cluster diff --git a/content/id/community/static/cncf-code-of-conduct.md b/content/id/community/static/cncf-code-of-conduct.md index 7ee127a6b3..9ec35edc3d 100644 --- a/content/id/community/static/cncf-code-of-conduct.md +++ b/content/id/community/static/cncf-code-of-conduct.md @@ -24,7 +24,7 @@ Contoh perilaku kasar, melecehkan, atau tidak dapat diterima di Kubernetes dapat Kode Etik ini diadaptasi dari Covenant Contributor , versi 1.2.0, tersedia di - + ### Pedoman Perilaku Acara CNCF diff --git a/content/id/docs/concepts/extend-kubernetes/operator.md b/content/id/docs/concepts/extend-kubernetes/operator.md index 315ae35e3d..b9a8bf5a06 100644 --- a/content/id/docs/concepts/extend-kubernetes/operator.md +++ b/content/id/docs/concepts/extend-kubernetes/operator.md @@ -132,7 +132,7 @@ menggunakan bahasa / _runtime_ yang dapat bertindak sebagai * Menggunakan perangkat yang ada untuk menulis Operator kamu sendiri, misalnya: * menggunakan [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) * menggunakan [kubebuilder](https://book.kubebuilder.io/) - * menggunakan [Metacontroller](https://metacontroller.app/) bersama dengan + * menggunakan [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) bersama dengan `WebHooks` yang kamu implementasikan sendiri * menggunakan the [Operator _Framework_](https://github.com/operator-framework/getting-started) * [Terbitkan](https://operatorhub.io/) Operator kamu agar dapat digunakan oleh diff --git a/content/id/docs/concepts/policy/_index.md b/content/id/docs/concepts/policy/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/concepts/storage/_index.md b/content/id/docs/concepts/storage/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/concepts/storage/storage-classes.md b/content/id/docs/concepts/storage/storage-classes.md index c5fc71a8de..083620d937 100644 --- a/content/id/docs/concepts/storage/storage-classes.md +++ b/content/id/docs/concepts/storage/storage-classes.md @@ -595,11 +595,11 @@ metadata: provisioner: kubernetes.io/azure-disk parameters: storageaccounttype: Standard_LRS - kind: Shared + kind: managed ``` * `storageaccounttype`: Akun penyimpanan Azure yang ada pada tingkatan Sku. Nilai _default_-nya adalah kosong. -* `kind`: Nilai yang mungkin adalah `shared` (default), `dedicated`, dan `managed`. +* `kind`: Nilai yang mungkin adalah `shared`, `dedicated`, dan `managed` (default). Ketika `kind` yang digunakan adalah `shared`, semua disk yang tidak di-_manage_ akan dibuat pada beberapa akun penyimpanan yang ada pada grup sumber daya yang sama dengan klaster. 
Ketika `kind` yang digunakan adalah `dedicated`, sebuah akun penyimpanan diff --git a/content/id/docs/reference/glossary/controller.md b/content/id/docs/reference/glossary/controller.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/managed-service.md b/content/id/docs/reference/glossary/managed-service.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/name.md b/content/id/docs/reference/glossary/name.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/service-broker.md b/content/id/docs/reference/glossary/service-broker.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/service-catalog.md b/content/id/docs/reference/glossary/service-catalog.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/service.md b/content/id/docs/reference/glossary/service.md old mode 100755 new mode 100644 diff --git a/content/id/docs/reference/glossary/uid.md b/content/id/docs/reference/glossary/uid.md old mode 100755 new mode 100644 diff --git a/content/id/docs/setup/best-practices/_index.md b/content/id/docs/setup/best-practices/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/access-application-cluster/_index.md b/content/id/docs/tasks/access-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/administer-cluster/_index.md b/content/id/docs/tasks/administer-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/configure-pod-container/_index.md b/content/id/docs/tasks/configure-pod-container/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/debug-application-cluster/_index.md b/content/id/docs/tasks/debug-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/inject-data-application/_index.md b/content/id/docs/tasks/inject-data-application/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md index c2c4b9399f..0e4732848e 100644 --- a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -146,7 +146,7 @@ Semua modifikasi pada sebuah CronJob, terutama `.spec`, akan diterapkan pada pro `.spec.schedule` adalah _field_ yang wajib diisi dari sebuah `.spec` Dibutuhkan sebuah format string [Cron](https://en.wikipedia.org/wiki/Cron), misalnya `0 * * * *` atau `@hourly`, sebagai jadwal Job untuk dibuat dan dieksekusi. -Format ini juga mencakup nilai langkah `Vixie cron`. Seperti penjelasan di [FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): +Format ini juga mencakup nilai langkah "Vixie cron". Seperti penjelasan di [FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29): > Nilai langkah dapat digunakan bersama dengan rentang. Sebuah rentang diikuti dengan > `/` menentukan lompatan angka melalui rentang. 
diff --git a/content/id/docs/tasks/tls/_index.md b/content/id/docs/tasks/tls/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tasks/tools/_index.md b/content/id/docs/tasks/tools/_index.md old mode 100755 new mode 100644 diff --git a/content/id/docs/tutorials/stateful-application/_index.md b/content/id/docs/tutorials/stateful-application/_index.md old mode 100755 new mode 100644 diff --git a/content/it/docs/concepts/architecture/_index.md b/content/it/docs/concepts/architecture/_index.md old mode 100755 new mode 100644 diff --git a/content/it/docs/concepts/architecture/nodes.md b/content/it/docs/concepts/architecture/nodes.md index 0494050420..c3c58be9d4 100644 --- a/content/it/docs/concepts/architecture/nodes.md +++ b/content/it/docs/concepts/architecture/nodes.md @@ -156,8 +156,9 @@ Condizione Notata quando un nodo diventa irraggiungibile (ad esempio, il control ricevere heartbeat per qualche motivo, ad es. a causa del fatto che il nodo si trova in basso), e poi in seguito sfratto tutti i pod dal nodo (usando una terminazione elegante) se il nodo continua essere irraggiungibile. (I timeout predefiniti sono 40 secondi per iniziare la segnalazione -ConditionUnknown e 5m dopo di ciò per iniziare a sfrattare i pod.) Il controller del nodo -controlla lo stato di ogni nodo ogni `--node-monitor-period` secondi. +ConditionUnknown e 5m dopo di ciò per iniziare a sfrattare i pod.) + +Il controller del nodo controlla lo stato di ogni nodo ogni `--node-monitor-period` secondi. Nelle versioni di Kubernetes precedenti alla 1.13, NodeStatus è l'heartbeat di nodo. A partire da Kubernetes 1.13, la funzionalità di lease del nodo viene introdotta come un @@ -191,8 +192,9 @@ lo stesso tempo. Se la frazione di nodi malsani è almeno se il cluster è piccolo (cioè ha meno o uguale a `--large-cluster-size-threshold` nodes - default 50) quindi gli sfratti sono fermato, altrimenti il ​​tasso di sfratto è ridotto a -`--secondary-node-eviction-rate` (default 0.01) al secondo. La ragione per cui -le politiche sono implementate per zona di disponibilità è perché una zona di disponibilità +`--secondary-node-eviction-rate` (default 0.01) al secondo. + +La ragione per cui le politiche sono implementate per zona di disponibilità è perché una zona di disponibilità potrebbe divenire partizionato dal master mentre gli altri rimangono connessi. Se il tuo cluster non si estende su più zone di disponibilità del provider cloud, quindi c'è solo una zona di disponibilità (l'intero cluster). 
diff --git a/content/it/docs/concepts/cluster-administration/_index.md b/content/it/docs/concepts/cluster-administration/_index.md old mode 100755 new mode 100644 diff --git a/content/it/docs/concepts/containers/_index.md b/content/it/docs/concepts/containers/_index.md old mode 100755 new mode 100644 diff --git a/content/it/docs/concepts/overview/_index.md b/content/it/docs/concepts/overview/_index.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/cloud-controller-manager.md b/content/it/docs/reference/glossary/cloud-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/cluster.md b/content/it/docs/reference/glossary/cluster.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/container-runtime.md b/content/it/docs/reference/glossary/container-runtime.md index 640b3eaa17..4b6b5d1e53 100644 --- a/content/it/docs/reference/glossary/container-runtime.md +++ b/content/it/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: Container Runtime id: container-runtime date: 2019-06-05 -full_link: /docs/reference/generated/container-runtime +full_link: /docs/setup/production-environment/container-runtimes short_description: > Il container runtime è il software che è responsabile per l'esecuzione dei container. diff --git a/content/it/docs/reference/glossary/container.md b/content/it/docs/reference/glossary/container.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/controller.md b/content/it/docs/reference/glossary/controller.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/daemonset.md b/content/it/docs/reference/glossary/daemonset.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/deployment.md b/content/it/docs/reference/glossary/deployment.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/docker.md b/content/it/docs/reference/glossary/docker.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/etcd.md b/content/it/docs/reference/glossary/etcd.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/job.md b/content/it/docs/reference/glossary/job.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kube-apiserver.md b/content/it/docs/reference/glossary/kube-apiserver.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kube-controller-manager.md b/content/it/docs/reference/glossary/kube-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kube-proxy.md b/content/it/docs/reference/glossary/kube-proxy.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kube-scheduler.md b/content/it/docs/reference/glossary/kube-scheduler.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kubeadm.md b/content/it/docs/reference/glossary/kubeadm.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/kubelet.md b/content/it/docs/reference/glossary/kubelet.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/label.md b/content/it/docs/reference/glossary/label.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/node.md b/content/it/docs/reference/glossary/node.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/pod.md 
b/content/it/docs/reference/glossary/pod.md old mode 100755 new mode 100644 diff --git a/content/it/docs/reference/glossary/statefulset.md b/content/it/docs/reference/glossary/statefulset.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/cluster-administration/_index.md b/content/ja/docs/concepts/cluster-administration/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/configuration/_index.md b/content/ja/docs/concepts/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/containers/_index.md b/content/ja/docs/concepts/containers/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/extend-kubernetes/operator.md b/content/ja/docs/concepts/extend-kubernetes/operator.md index c3857598f2..8d6d4128b2 100644 --- a/content/ja/docs/concepts/extend-kubernetes/operator.md +++ b/content/ja/docs/concepts/extend-kubernetes/operator.md @@ -89,7 +89,7 @@ kubectl edit SampleDB/example-database # 手動でいくつかの設定を変更 * 自前のオペレーターを書くために既存のツールを使います、例: * [KUDO](https://kudo.dev/)(Kubernetes Universal Declarative Operator)を使います * [kubebuilder](https://book.kubebuilder.io/)を使います - * [Metacontroller](https://metacontroller.app/)を自分で実装したWebHooksと一緒に使います + * [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html)を自分で実装したWebHooksと一緒に使います * [Operator Framework](https://operatorframework.io)を使います * 自前のオペレーターを他のユーザーのために[公開](https://operatorhub.io/)します * オペレーターパターンを紹介している[CoreOSオリジナル記事](https://coreos.com/blog/introducing-operators.html)を読みます diff --git a/content/ja/docs/concepts/overview/_index.md b/content/ja/docs/concepts/overview/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/overview/working-with-objects/_index.md b/content/ja/docs/concepts/overview/working-with-objects/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/policy/_index.md b/content/ja/docs/concepts/policy/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/services-networking/_index.md b/content/ja/docs/concepts/services-networking/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/services-networking/network-policies.md b/content/ja/docs/concepts/services-networking/network-policies.md index b11bcead2b..441063aeae 100644 --- a/content/ja/docs/concepts/services-networking/network-policies.md +++ b/content/ja/docs/concepts/services-networking/network-policies.md @@ -207,7 +207,7 @@ SCTPプロトコルのネットワークポリシーをサポートする{{< glo Kubernetes1.20現在、ネットワークポリシーAPIに以下の機能は存在しません。 しかし、オペレーティングシステムのコンポーネント(SELinux、OpenVSwitch、IPTablesなど)、レイヤ7の技術(Ingressコントローラー、サービスメッシュ実装)、もしくはアドミッションコントローラーを使用して回避策を実装できる場合があります。 -Kubernetesのネットワークセキュリティを初めて使用する場合は、ネットワークポリシーAPIを使用して以下ののユーザーストーリーを(まだ)実装できないことに注意してください。これらのユーザーストーリーの一部(全てではありません)は、ネットワークポリシーAPIの将来のリリースで活発に議論されています。 +Kubernetesのネットワークセキュリティを初めて使用する場合は、ネットワークポリシーAPIを使用して以下のユーザーストーリーを(まだ)実装できないことに注意してください。これらのユーザーストーリーの一部(全てではありません)は、ネットワークポリシーAPIの将来のリリースで活発に議論されています。 - クラスター内トラフィックを強制的に共通ゲートウェイを通過させる(これは、サービスメッシュもしくは他のプロキシで提供するのが最適な場合があります)。 - TLS関連のもの(これにはサービスメッシュまたはIngressコントローラを使用します)。 diff --git a/content/ja/docs/concepts/storage/_index.md b/content/ja/docs/concepts/storage/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/concepts/storage/persistent-volumes.md b/content/ja/docs/concepts/storage/persistent-volumes.md index 1940b72cab..b22ed7d8eb 100644 --- a/content/ja/docs/concepts/storage/persistent-volumes.md +++ 
b/content/ja/docs/concepts/storage/persistent-volumes.md @@ -431,7 +431,7 @@ PVはクラスを持つことができます。これは`storageClassName`属性 ### マウントオプション -Kubernets管理者は永続ボリュームがNodeにマウントされるときの追加マウントオプションを指定できます。 +Kubernetes管理者は永続ボリュームがNodeにマウントされるときの追加マウントオプションを指定できます。 {{< note >}} すべての永続ボリュームタイプがすべてのマウントオプションをサポートするわけではありません。 diff --git a/content/ja/docs/concepts/workloads/pods/_index.md b/content/ja/docs/concepts/workloads/pods/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/contribute/review/for-approvers.md b/content/ja/docs/contribute/review/for-approvers.md index 95281ec0aa..3a96595ea8 100644 --- a/content/ja/docs/contribute/review/for-approvers.md +++ b/content/ja/docs/contribute/review/for-approvers.md @@ -62,7 +62,7 @@ Prowコマンド | Roleの制限 | 説明 `/lgtm` | 誰でも。ただし、オートメーションがトリガされるのはReviewerまたはApproverが使用したときのみ。 | PRのレビューが完了し、変更に納得したことを知らせる。 `/approve` | Approver | PRをマージすることを承認する。 `/assign` | ReviewerまたはApprover | PRのレビューまたは承認するひとを割り当てる。 -`/close` | ReviewerまたはApprover | issueまたはPRをクローンする。 +`/close` | ReviewerまたはApprover | issueまたはPRをcloseする。 `/hold` | 誰でも | `do-not-merge/hold`ラベルを追加して、自動的にマージできないPRであることを示す。 `/hold cancel` | 誰でも | `do-not-merge/hold`ラベルを削除する。 {{< /table >}} @@ -133,7 +133,7 @@ SIG Docsでは、対処方法をドキュメントに書いても良いくらい ### 重服したissue -1つの問題に対して1つ以上のissueがopenしている場合、1つのissueに統合します。あなたはどちらのissueをopenにしておくか(あるいは新しいissueを作成するか)を決断して、すべての関連する情報を移動し、関連するすべてのissueにリンクしなければなりません。最後に、同じ問題について書かれたすべての他のissueに`triage/duplicate`ラベルを付けて、それらをcloseします。作業対象のissueを1つだけにすることで、混乱を晒し、同じ問題に対して作業が重複することを避けられます。 +1つの問題に対して1つ以上のissueがopenしている場合、1つのissueに統合します。あなたはどちらのissueをopenにしておくか(あるいは新しいissueを作成するか)を決断して、すべての関連する情報を移動し、関連するすべてのissueにリンクしなければなりません。最後に、同じ問題について書かれたすべての他のissueに`triage/duplicate`ラベルを付けて、それらをcloseします。作業対象のissueを1つだけにすることで、混乱を減らし、同じ問題に対して作業が重複することを避けられます。 ### リンク切れに関するissue diff --git a/content/ja/docs/reference/glossary/cloud-controller-manager.md b/content/ja/docs/reference/glossary/cloud-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/cluster-operator.md b/content/ja/docs/reference/glossary/cluster-operator.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/cncf.md b/content/ja/docs/reference/glossary/cncf.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/configmap.md b/content/ja/docs/reference/glossary/configmap.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/contributor.md b/content/ja/docs/reference/glossary/contributor.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/controller.md b/content/ja/docs/reference/glossary/controller.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/daemonset.md b/content/ja/docs/reference/glossary/daemonset.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/deployment.md b/content/ja/docs/reference/glossary/deployment.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/docker.md b/content/ja/docs/reference/glossary/docker.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/etcd.md b/content/ja/docs/reference/glossary/etcd.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/image.md b/content/ja/docs/reference/glossary/image.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/index.md b/content/ja/docs/reference/glossary/index.md old mode 100755 new mode 100644 diff --git 
a/content/ja/docs/reference/glossary/ingress.md b/content/ja/docs/reference/glossary/ingress.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/kube-apiserver.md b/content/ja/docs/reference/glossary/kube-apiserver.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/kube-controller-manager.md b/content/ja/docs/reference/glossary/kube-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/kube-proxy.md b/content/ja/docs/reference/glossary/kube-proxy.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/kube-scheduler.md b/content/ja/docs/reference/glossary/kube-scheduler.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/kubelet.md b/content/ja/docs/reference/glossary/kubelet.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/label.md b/content/ja/docs/reference/glossary/label.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/member.md b/content/ja/docs/reference/glossary/member.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/mirror-pod.md b/content/ja/docs/reference/glossary/mirror-pod.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/name.md b/content/ja/docs/reference/glossary/name.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/namespace.md b/content/ja/docs/reference/glossary/namespace.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/node.md b/content/ja/docs/reference/glossary/node.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/platform-developer.md b/content/ja/docs/reference/glossary/platform-developer.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/pod.md b/content/ja/docs/reference/glossary/pod.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/secret.md b/content/ja/docs/reference/glossary/secret.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/selector.md b/content/ja/docs/reference/glossary/selector.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/service-catalog.md b/content/ja/docs/reference/glossary/service-catalog.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/service.md b/content/ja/docs/reference/glossary/service.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/sig.md b/content/ja/docs/reference/glossary/sig.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/statefulset.md b/content/ja/docs/reference/glossary/statefulset.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/glossary/uid.md b/content/ja/docs/reference/glossary/uid.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/kubectl/_index.md b/content/ja/docs/reference/kubectl/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/reference/setup-tools/kubeadm/_index.md b/content/ja/docs/reference/setup-tools/kubeadm/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/setup/release/_index.md b/content/ja/docs/setup/release/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/access-application-cluster/_index.md b/content/ja/docs/tasks/access-application-cluster/_index.md old mode 
100755 new mode 100644 diff --git a/content/ja/docs/tasks/administer-cluster/_index.md b/content/ja/docs/tasks/administer-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/configmap-secret/_index.md b/content/ja/docs/tasks/configmap-secret/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/configure-pod-container/_index.md b/content/ja/docs/tasks/configure-pod-container/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/debug-application-cluster/_index.md b/content/ja/docs/tasks/debug-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/network/_index.md b/content/ja/docs/tasks/network/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/run-application/_index.md b/content/ja/docs/tasks/run-application/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/service-catalog/_index.md b/content/ja/docs/tasks/service-catalog/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/tls/_index.md b/content/ja/docs/tasks/tls/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tasks/tools/_index.md b/content/ja/docs/tasks/tools/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tutorials/clusters/_index.md b/content/ja/docs/tutorials/clusters/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tutorials/configuration/_index.md b/content/ja/docs/tutorials/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tutorials/services/_index.md b/content/ja/docs/tutorials/services/_index.md old mode 100755 new mode 100644 diff --git a/content/ja/docs/tutorials/stateful-application/_index.md b/content/ja/docs/tutorials/stateful-application/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/_index.html b/content/ko/_index.html index 9aa0ab3cd3..9db1ac0982 100644 --- a/content/ko/_index.html +++ b/content/ko/_index.html @@ -43,12 +43,12 @@ Google이 일주일에 수십억 개의 컨테이너들을 운영하게 해준

- Attend KubeCon NA virtually on November 17-20, 2020 + Attend KubeCon North America on October 11-15, 2021



- Attend KubeCon EU virtually on May 4 – 7, 2021 + Revisit KubeCon EU 2021
diff --git a/content/ko/docs/concepts/architecture/controller.md b/content/ko/docs/concepts/architecture/controller.md index e516dd9cc5..92afd615b6 100644 --- a/content/ko/docs/concepts/architecture/controller.md +++ b/content/ko/docs/concepts/architecture/controller.md @@ -159,11 +159,11 @@ IP 주소 관리 도구, 스토리지 서비스, 클라우드 제공자의 API 또는 쿠버네티스 외부에서 실행할 수 있다. 가장 적합한 것은 특정 컨트롤러의 기능에 따라 달라진다. - - ## {{% heading "whatsnext" %}} * [쿠버네티스 컨트롤 플레인](/ko/docs/concepts/overview/components/#컨트롤-플레인-컴포넌트)에 대해 읽기 * [쿠버네티스 오브젝트](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/)의 몇 가지 기본 사항을 알아보자. * [쿠버네티스 API](/ko/docs/concepts/overview/kubernetes-api/)에 대해 더 배워 보자. -* 만약 자신만의 컨트롤러를 작성하기 원한다면, 쿠버네티스 확장하기의 [확장 패턴](/ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션-패턴)을 본다. +* 만약 자신만의 컨트롤러를 작성하기 원한다면, + 쿠버네티스 확장하기의 [확장 패턴](/ko/docs/concepts/extend-kubernetes/#익스텐션-패턴)을 + 본다. diff --git a/content/ko/docs/concepts/cluster-administration/_index.md b/content/ko/docs/concepts/cluster-administration/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md index 95ea899cbb..c64dd127b3 100644 --- a/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -1,5 +1,4 @@ --- - title: kubelet 가비지(Garbage) 수집 설정하기 content_type: concept weight: 70 @@ -7,12 +6,13 @@ weight: 70 -가비지 수집은 사용되지 않는 [이미지](/ko/docs/concepts/containers/#컨테이너-이미지)들과 [컨테이너](/ko/docs/concepts/containers/)들을 정리하는 kubelet의 유용한 기능이다. Kubelet은 1분마다 컨테이너들에 대하여 가비지 수집을 수행하며, 5분마다 이미지들에 대하여 가비지 수집을 수행한다. - -별도의 가비지 수집 도구들을 사용하는 것은, 이러한 도구들이 존재할 수도 있는 컨테이너들을 제거함으로써 kubelet 을 중단시킬 수도 있으므로 권장하지 않는다. - - +가비지 수집은 사용되지 않는 +[이미지](/ko/docs/concepts/containers/#컨테이너-이미지)들과 +[컨테이너](/ko/docs/concepts/containers/)들을 정리하는 kubelet의 유용한 기능이다. Kubelet은 +1분마다 컨테이너들에 대하여 가비지 수집을 수행하며, 5분마다 이미지들에 대하여 가비지 수집을 수행한다. +별도의 가비지 수집 도구들을 사용하는 것은, 이러한 도구들이 존재할 수도 있는 컨테이너들을 제거함으로써 +kubelet을 중단시킬 수도 있으므로 권장하지 않는다. @@ -28,10 +28,24 @@ weight: 70 ## 컨테이너 수집 -컨테이너에 대한 가비지 수집 정책은 세 가지 사용자 정의 변수들을 고려한다: `MinAge` 는 컨테이너를 가비지 수집 할 수 있는 최소 연령이다. `MaxPerPodContainer` 는 모든 단일 파드 (UID, 컨테이너 이름) 쌍이 가질 수 있는 -최대 비활성 컨테이너의 수량이다. `MaxContainers` 죽은 컨테이너의 최대 수량이다. 이러한 변수는 `MinAge` 를 0으로 설정하고, `MaxPerPodContainer` 와 `MaxContainers` 를 각각 0 보다 작게 설정해서 비활성화 할 수 있다. +컨테이너에 대한 가비지 수집 정책은 세 가지 사용자 정의 변수들을 고려한다. +`MinAge` 는 컨테이너를 가비지 수집할 수 있는 최소 연령이다. +`MaxPerPodContainer` 는 모든 단일 파드(UID, 컨테이너 이름) +쌍이 가질 수 있는 최대 비활성 컨테이너의 수량이다. +`MaxContainers` 는 죽은 컨테이너의 최대 수량이다. +이러한 변수는 `MinAge` 를 0으로 설정하고, +`MaxPerPodContainer` 와 `MaxContainers` 를 각각 0 보다 작게 설정해서 비활성화할 수 있다. -Kubelet은 미확인, 삭제 또는 앞에서 언급 한 플래그가 설정 한 경계를 벗어나거나, 확인되지 않은 컨테이너에 대해 조치를 취한다. 일반적으로 가장 오래된 컨테이너가 먼저 제거된다. `MaxPerPodContainer` 와 `MaxContainer` 는 파드 당 최대 컨테이너 수 (`MaxPerPodContainer`)가 허용 가능한 범위의 전체 죽은 컨테이너의 수(`MaxContainers`)를 벗어나는 상황에서 잠재적으로 서로 충돌할 수 있습니다. 이러한 상황에서 `MaxPerPodContainer` 가 조정된다: 최악의 시나리오는 `MaxPerPodContainer` 를 1로 다운그레이드하고 가장 오래된 컨테이너를 제거하는 것이다. 추가로, 삭제된 파드가 소유 한 컨테이너는 `MinAge` 보다 오래된 컨테이너가 제거된다. +Kubelet은 미확인, 삭제 또는 앞에서 언급한 +플래그가 설정한 경계를 벗어나거나, 확인되지 않은 컨테이너에 대해 조치를 취한다. +일반적으로 가장 오래된 컨테이너가 먼저 제거된다. `MaxPerPodContainer` 와 `MaxContainer` 는 +파드 당 최대 +컨테이너 수(`MaxPerPodContainer`)가 허용 가능한 범위의 +전체 죽은 컨테이너의 수(`MaxContainers`)를 벗어나는 상황에서 잠재적으로 서로 충돌할 수 있다. +다음의 상황에서 `MaxPerPodContainer` 가 조정된다. +최악의 시나리오는 `MaxPerPodContainer` 를 1로 다운그레이드하고 +가장 오래된 컨테이너를 제거하는 것이다. 
추가로, 삭제된 파드가 소유한 컨테이너는 +`MinAge` 보다 오래되면 제거된다. kubelet이 관리하지 않는 컨테이너는 컨테이너 가비지 수집 대상이 아니다. @@ -40,9 +54,9 @@ kubelet이 관리하지 않는 컨테이너는 컨테이너 가비지 수집 대 여러분은 후술될 kubelet 플래그들을 통하여 이미지 가비지 수집을 조정하기 위하여 다음의 임계값을 조정할 수 있다. 1. `image-gc-high-threshold`, 이미지 가비지 수집을 발생시키는 디스크 사용량의 비율로 -기본값은 85% 이다. + 기본값은 85% 이다. 2. `image-gc-low-threshold`, 이미지 가비지 수집을 더 이상 시도하지 않는 디스크 사용량의 비율로 -기본값은 80% 이다. + 기본값은 80% 이다. 다음의 kubelet 플래그를 통해 가비지 수집 정책을 사용자 정의할 수 있다. @@ -77,9 +91,7 @@ kubelet이 관리하지 않는 컨테이너는 컨테이너 가비지 수집 대 | `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | 축출이 다른 리소스에 대한 디스크 임계값을 일반화 함 | | `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | 축출이 다른 리소스로의 디스크 압력전환을 일반화 함 | - - ## {{% heading "whatsnext" %}} - -자세한 내용은 [리소스 부족 처리 구성](/docs/tasks/administer-cluster/out-of-resource/)를 본다. +자세한 내용은 [리소스 부족 처리 구성](/docs/concepts/scheduling-eviction/node-pressure-eviction/)를 +본다. diff --git a/content/ko/docs/concepts/cluster-administration/system-logs.md b/content/ko/docs/concepts/cluster-administration/system-logs.md index 13008ebbd8..eff3c05a65 100644 --- a/content/ko/docs/concepts/cluster-administration/system-logs.md +++ b/content/ko/docs/concepts/cluster-administration/system-logs.md @@ -20,7 +20,7 @@ weight: 60 klog는 쿠버네티스의 로깅 라이브러리다. [klog](https://github.com/kubernetes/klog) 는 쿠버네티스 시스템 컴포넌트의 로그 메시지를 생성한다. -klog 설정에 대한 더 많은 정보는, [커맨드라인 툴](/docs/reference/command-line-tools-reference/)을 참고한다. +klog 설정에 대한 더 많은 정보는, [커맨드라인 툴](/ko/docs/reference/command-line-tools-reference/)을 참고한다. klog 네이티브 형식 예 : ``` @@ -61,7 +61,7 @@ I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod= {{}} -JSON 출력은 많은 표준 klog 플래그를 지원하지 않는다. 지원하지 않는 klog 플래그 목록은, [커맨드라인 툴](/docs/reference/command-line-tools-reference/)을 참고한다. +JSON 출력은 많은 표준 klog 플래그를 지원하지 않는다. 지원하지 않는 klog 플래그 목록은, [커맨드라인 툴](/ko/docs/reference/command-line-tools-reference/)을 참고한다. 모든 로그가 JSON 형식으로 작성되는 것은 아니다(예: 프로세스 시작 중). 로그를 파싱하려는 경우 JSON 형식이 아닌 로그 행을 처리할 수 있는지 확인해야 한다. @@ -143,6 +143,6 @@ systemd를 사용하는 시스템에서는, kubelet과 컨테이너 런타임은 ## {{% heading "whatsnext" %}} -* [쿠버네티스 로깅 아키텍처](/docs/concepts/cluster-administration/logging/) 알아보기 +* [쿠버네티스 로깅 아키텍처](/ko/docs/concepts/cluster-administration/logging/) 알아보기 * [구조화된 로깅](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging) 알아보기 * [로깅 심각도(serverity) 규칙](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md) 알아보기 diff --git a/content/ko/docs/concepts/configuration/_index.md b/content/ko/docs/concepts/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/concepts/configuration/secret.md b/content/ko/docs/concepts/configuration/secret.md index a4544397d7..1e5829b5ea 100644 --- a/content/ko/docs/concepts/configuration/secret.md +++ b/content/ko/docs/concepts/configuration/secret.md @@ -31,7 +31,7 @@ weight: 30 시크릿을 안전하게 사용하려면 (최소한) 다음과 같이 하는 것이 좋다. 1. 시크릿에 대한 [암호화 활성화](/docs/tasks/administer-cluster/encrypt-data/). -2. 시크릿 읽기 및 쓰기를 제한하는 [RBAC 규칙 활성화 또는 구성](/docs/reference/access-authn-authz/authorization/). 파드를 만들 권한이 있는 모든 사용자는 시크릿을 암묵적으로 얻을 수 있다. +2. 시크릿 읽기 및 쓰기를 제한하는 [RBAC 규칙 활성화 또는 구성](/ko/docs/reference/access-authn-authz/authorization/). 파드를 만들 권한이 있는 모든 사용자는 시크릿을 암묵적으로 얻을 수 있다. {{< /caution >}} @@ -48,7 +48,7 @@ weight: 30 - 파드의 [이미지를 가져올 때 kubelet](#imagepullsecrets-사용하기)에 의해 사용. 시크릿 오브젝트의 이름은 유효한 -[DNS 서브도메인 이름](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)이어야 한다. 
+[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다. 사용자는 시크릿을 위한 파일을 구성할 때 `data` 및 (또는) `stringData` 필드를 명시할 수 있다. 해당 `data` 와 `stringData` 필드는 선택적으로 명시할 수 있다. `data` 필드의 모든 키(key)에 해당하는 값(value)은 base64로 인코딩된 문자열이어야 한다. @@ -1156,10 +1156,10 @@ HTTP 요청을 처리하고, 복잡한 비즈니스 로직을 수행한 다음, ### 시크릿 API를 사용하는 클라이언트 -시크릿 API와 상호 작용하는 애플리케이션을 배포할 때, [RBAC]( -/docs/reference/access-authn-authz/rbac/)과 같은 [인가 정책]( -/docs/reference/access-authn-authz/authorization/)을 -사용하여 접근를 제한해야 한다. +시크릿 API와 상호 작용하는 애플리케이션을 배포할 때, +[RBAC](/docs/reference/access-authn-authz/rbac/)과 같은 +[인가 정책](/ko/docs/reference/access-authn-authz/authorization/)을 +사용하여 접근을 제한해야 한다. 시크릿은 종종 다양한 중요도에 걸친 값을 보유하며, 이 중 많은 부분이 쿠버네티스(예: 서비스 어카운트 토큰)와 외부 시스템으로 단계적으로 diff --git a/content/ko/docs/concepts/containers/_index.md b/content/ko/docs/concepts/containers/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md index 2770b1e4b2..953571ec62 100644 --- a/content/ko/docs/concepts/containers/runtime-class.md +++ b/content/ko/docs/concepts/containers/runtime-class.md @@ -68,7 +68,7 @@ handler: myconfiguration # 상응하는 CRI 설정의 이름임 ``` 런타임클래스 오브젝트의 이름은 유효한 -[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)어이야 한다. +[DNS 레이블 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-레이블-이름)어이야 한다. {{< note >}} 런타임클래스 쓰기 작업(create/update/patch/delete)은 @@ -132,7 +132,7 @@ https://github.com/containerd/cri/blob/master/docs/config.md runtime_path = "${PATH_TO_BINARY}" ``` -더 자세한 것은 CRI-O의 [설정 문서](https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md)를 본다. +더 자세한 것은 CRI-O의 [설정 문서](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md)를 본다. ## 스케줄 @@ -175,5 +175,5 @@ PodOverhead를 사용하려면, PodOverhead [기능 게이트](/ko/docs/referenc - [런타임클래스 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md) - [런타임클래스 스케줄링 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling) -- [파드 오버헤드](/ko/docs/concepts/configuration/pod-overhead/) 개념에 대해 읽기 -- [파드 오버헤드 기능 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +- [파드 오버헤드](/ko/docs/concepts/scheduling-eviction/pod-overhead/) 개념에 대해 읽기 +- [파드 오버헤드 기능 설계](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index b543addee6..0357ac7619 100644 --- a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -128,7 +128,7 @@ CRD를 사용하면 다른 API 서버를 추가하지 않고도 새로운 타입 ## 커스텀리소스데피니션 -[커스텀리소스데피니션](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) +[커스텀리소스데피니션](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) API 리소스를 사용하면 커스텀 리소스를 정의할 수 있다. CRD 오브젝트를 정의하면 지정한 이름과 스키마를 사용하여 새 커스텀 리소스가 만들어진다. 쿠버네티스 API는 커스텀 리소스의 스토리지를 제공하고 처리한다. 
diff --git a/content/ko/docs/concepts/overview/_index.md b/content/ko/docs/concepts/overview/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/concepts/overview/kubernetes-api.md b/content/ko/docs/concepts/overview/kubernetes-api.md index 026e0e007c..919d59b459 100644 --- a/content/ko/docs/concepts/overview/kubernetes-api.md +++ b/content/ko/docs/concepts/overview/kubernetes-api.md @@ -20,14 +20,14 @@ card: 쿠버네티스 API를 사용하면 쿠버네티스의 API 오브젝트(예: 파드(Pod), 네임스페이스(Namespace), 컨피그맵(ConfigMap) 그리고 이벤트(Event))를 질의(query)하고 조작할 수 있다. -대부분의 작업은 [kubectl](/docs/reference/kubectl/overview/) +대부분의 작업은 [kubectl](/ko/docs/reference/kubectl/overview/) 커맨드 라인 인터페이스 또는 API를 사용하는 [kubeadm](/ko/docs/reference/setup-tools/kubeadm/)과 같은 다른 커맨드 라인 도구를 통해 수행할 수 있다. 그러나, REST 호출을 사용하여 API에 직접 접근할 수도 있다. 쿠버네티스 API를 사용하여 애플리케이션을 작성하는 경우 -[클라이언트 라이브러리](/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이 좋다. +[클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 사용하는 것이 좋다. @@ -130,7 +130,7 @@ API 리소스는 API 그룹, 리소스 유형, 네임스페이스 {{< /note >}} API 버전 수준 정의에 대한 자세한 내용은 -[API 버전 레퍼런스](/ko/docs/reference/using-api/api-overview/#api-버전-규칙)를 참조한다. +[API 버전 레퍼런스](/ko/docs/reference/using-api/#api-버전-규칙)를 참조한다. diff --git a/content/ko/docs/concepts/policy/pod-security-policy.md b/content/ko/docs/concepts/policy/pod-security-policy.md index 8afee5760b..eae69022e6 100644 --- a/content/ko/docs/concepts/policy/pod-security-policy.md +++ b/content/ko/docs/concepts/policy/pod-security-policy.md @@ -464,12 +464,12 @@ podsecuritypolicy "example" deleted 예를 들면 다음과 같습니다. ```yaml -allowedHostPaths: - # 이 정책은 "/foo", "/foo/", "/foo/bar" 등을 허용하지만, - # "/fool", "/etc/foo" 등은 허용하지 않는다. - # "/foo/../" 는 절대 유효하지 않다. - - pathPrefix: "/foo" - readOnly: true # 읽기 전용 마운트만 허용 + allowedHostPaths: + # 이 정책은 "/foo", "/foo/", "/foo/bar" 등을 허용하지만, + # "/fool", "/etc/foo" 등은 허용하지 않는다. + # "/foo/../" 는 절대 유효하지 않다. + - pathPrefix: "/foo" + readOnly: true # 읽기 전용 마운트만 허용 ``` {{< warning >}}호스트 파일시스템에 제한없는 접근을 부여하며, 컨테이너가 특권을 에스컬레이션 diff --git a/content/ko/docs/concepts/policy/resource-quotas.md b/content/ko/docs/concepts/policy/resource-quotas.md index 8e1d918ef4..b5254e4300 100644 --- a/content/ko/docs/concepts/policy/resource-quotas.md +++ b/content/ko/docs/concepts/policy/resource-quotas.md @@ -58,7 +58,8 @@ weight: 20 ## 리소스 쿼터 활성화 많은 쿠버네티스 배포판에 기본적으로 리소스 쿼터 지원이 활성화되어 있다. -{{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} `--enable-admission-plugins=` 플래그의 인수 중 하나로 +{{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} +`--enable-admission-plugins=` 플래그의 인수 중 하나로 `ResourceQuota`가 있는 경우 활성화된다. 해당 네임스페이스에 리소스쿼터가 있는 경우 특정 네임스페이스에 @@ -66,7 +67,9 @@ weight: 20 ## 컴퓨트 리소스 쿼터 -지정된 네임스페이스에서 요청할 수 있는 총 [컴퓨트 리소스](/ko/docs/concepts/configuration/manage-resources-containers/) 합을 제한할 수 있다. +지정된 네임스페이스에서 요청할 수 있는 총 +[컴퓨트 리소스](/ko/docs/concepts/configuration/manage-resources-containers/) +합을 제한할 수 있다. 다음과 같은 리소스 유형이 지원된다. @@ -125,7 +128,9 @@ GPU 리소스를 다음과 같이 쿼터를 정의할 수 있다. | `ephemeral-storage` | `requests.ephemeral-storage` 와 같음. | {{< note >}} -CRI 컨테이너 런타임을 사용할 때, 컨테이너 로그는 임시 스토리지 쿼터에 포함된다. 이로 인해 스토리지 쿼터를 소진한 파드가 예기치 않게 축출될 수 있다. 자세한 내용은 [로깅 아키텍처](/ko/docs/concepts/cluster-administration/logging/)를 참조한다. +CRI 컨테이너 런타임을 사용할 때, 컨테이너 로그는 임시 스토리지 쿼터에 포함된다. +이로 인해 스토리지 쿼터를 소진한 파드가 예기치 않게 축출될 수 있다. +자세한 내용은 [로깅 아키텍처](/ko/docs/concepts/cluster-administration/logging/)를 참조한다. 
{{< /note >}} ## 오브젝트 수 쿼터 @@ -192,7 +197,7 @@ CRI 컨테이너 런타임을 사용할 때, 컨테이너 로그는 임시 스 | `NotTerminating` | `.spec.activeDeadlineSeconds is nil`에 일치하는 파드 | | `BestEffort` | 최상의 서비스 품질을 제공하는 파드 | | `NotBestEffort` | 서비스 품질이 나쁜 파드 | -| `PriorityClass` | 지정된 [프라이어리티 클래스](/ko/docs/concepts/configuration/pod-priority-preemption)를 참조하여 일치하는 파드. | +| `PriorityClass` | 지정된 [프라이어리티클래스](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/)를 참조하여 일치하는 파드. | | `CrossNamespacePodAffinity` | 크로스-네임스페이스 파드 [(안티)어피니티 용어]가 있는 파드 | `BestEffort` 범위는 다음의 리소스를 추적하도록 쿼터를 제한한다. @@ -248,13 +253,14 @@ CRI 컨테이너 런타임을 사용할 때, 컨테이너 로그는 임시 스 {{< feature-state for_k8s_version="v1.17" state="stable" >}} -특정 [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/#파드-우선순위)로 파드를 생성할 수 있다. +특정 [우선 순위](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/#파드-우선순위)로 파드를 생성할 수 있다. 쿼터 스펙의 `scopeSelector` 필드를 사용하여 파드의 우선 순위에 따라 파드의 시스템 리소스 사용을 제어할 수 있다. 쿼터 스펙의 `scopeSelector`가 파드를 선택한 경우에만 쿼터가 일치하고 사용된다. -`scopeSelector` 필드를 사용하여 우선 순위 클래스의 쿼터 범위를 지정하면, 쿼터 오브젝트는 다음의 리소스만 추적하도록 제한된다. +`scopeSelector` 필드를 사용하여 우선 순위 클래스의 쿼터 범위를 지정하면, +쿼터 오브젝트는 다음의 리소스만 추적하도록 제한된다. * `pods` * `cpu` @@ -554,7 +560,7 @@ kubectl create -f ./object-counts.yaml --namespace=myspace kubectl get quota --namespace=myspace ``` -``` +```none NAME AGE compute-resources 30s object-counts 32s @@ -564,7 +570,7 @@ object-counts 32s kubectl describe quota compute-resources --namespace=myspace ``` -``` +```none Name: compute-resources Namespace: myspace Resource Used Hard @@ -580,7 +586,7 @@ requests.nvidia.com/gpu 0 4 kubectl describe quota object-counts --namespace=myspace ``` -``` +```none Name: object-counts Namespace: myspace Resource Used Hard @@ -677,10 +683,10 @@ plugins: {{< codenew file="policy/priority-class-resourcequota.yaml" >}} ```shell -$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system +kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system ``` -``` +```none resourcequota/pods-cluster-services created ``` diff --git a/content/ko/docs/concepts/scheduling-eviction/_index.md b/content/ko/docs/concepts/scheduling-eviction/_index.md index 5ae3f5822e..7128dbe99f 100644 --- a/content/ko/docs/concepts/scheduling-eviction/_index.md +++ b/content/ko/docs/concepts/scheduling-eviction/_index.md @@ -32,6 +32,6 @@ no_list: true {{}} -* [파드 우선순위와 선점](/docs/concepts/scheduling-eviction/pod-priority-preemption/) +* [파드 우선순위와 선점](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/) * [노드-압박 축출](/docs/concepts/scheduling-eviction/node-pressure-eviction/) -* [API를 이용한 축출](/docs/concepts/scheduling-eviction/api-eviction/) +* [API를 이용한 축출](/ko/docs/concepts/scheduling-eviction/api-eviction/) diff --git a/content/ko/docs/concepts/scheduling-eviction/pod-priority-preemption.md b/content/ko/docs/concepts/scheduling-eviction/pod-priority-preemption.md index 581525d833..f149290882 100644 --- a/content/ko/docs/concepts/scheduling-eviction/pod-priority-preemption.md +++ b/content/ko/docs/concepts/scheduling-eviction/pod-priority-preemption.md @@ -25,7 +25,7 @@ weight: 70 관리자는 리소스쿼터를 사용하여 사용자가 우선순위가 높은 파드를 생성하지 못하게 할 수 있다. -자세한 내용은 [기본적으로 프라이어리티 클래스(Priority Class) 소비 제한](/ko/docs/concepts/policy/resource-quotas/#기본적으로-우선-순위-클래스-소비-제한)을 +자세한 내용은 [기본적으로 프라이어리티클래스(Priority Class) 소비 제한](/ko/docs/concepts/policy/resource-quotas/#기본적으로-우선-순위-클래스-소비-제한)을 참고한다. 
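A sketch of the `scopeSelector` usage covered above, restricting a quota to pods of a given priority class; the class name `high` is a hypothetical example:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      # the quota applies only to pods whose priorityClassName is "high"
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```

This is also how ResourceQuota can keep users from consuming a high PriorityClass by default, as the warning above suggests.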
{{< /warning >}} @@ -50,7 +50,7 @@ weight: 70 ## 프라이어리티클래스 -프라이어리티클래스는 프라이어리티 클래스 이름에서 우선순위의 정수 값으로의 매핑을 +프라이어리티클래스는 프라이어리티클래스 이름에서 우선순위의 정수 값으로의 매핑을 정의하는 네임스페이스가 아닌(non-namespaced) 오브젝트이다. 이름은 프라이어리티클래스 오브젝트의 메타데이터의 `name` 필드에 지정된다. 값은 필수 `value` 필드에 지정되어 있다. 값이 클수록, 우선순위가 @@ -96,7 +96,7 @@ metadata: name: high-priority value: 1000000 globalDefault: false -description: "이 프라이어리티 클래스는 XYZ 서비스 파드에만 사용해야 한다." +description: "이 프라이어리티클래스는 XYZ 서비스 파드에만 사용해야 한다." ``` ## 비-선점 프라이어리티클래스 {#non-preempting-priority-class} @@ -142,7 +142,7 @@ metadata: value: 1000000 preemptionPolicy: Never globalDefault: false -description: "이 프라이어리티 클래스는 다른 파드를 축출하지 않는다." +description: "이 프라이어리티클래스는 다른 파드를 축출하지 않는다." ``` ## 파드 우선순위 @@ -150,7 +150,7 @@ description: "이 프라이어리티 클래스는 다른 파드를 축출하지 프라이어리티클래스가 하나 이상 있으면, 그것의 명세에서 이들 프라이어리티클래스 이름 중 하나를 지정하는 파드를 생성할 수 있다. 우선순위 어드미션 컨트롤러는 `priorityClassName` 필드를 사용하고 우선순위의 정수 값을 -채운다. 프라이어리티 클래스를 찾을 수 없으면, 파드가 거부된다. +채운다. 프라이어리티클래스를 찾을 수 없으면, 파드가 거부된다. 다음의 YAML은 이전 예제에서 생성된 프라이어리티클래스를 사용하는 파드 구성의 예이다. 우선순위 어드미션 컨트롤러는 @@ -351,12 +351,12 @@ spec: 축출 대상으로 고려한다. QoS와 파드 우선순위를 모두 고려하는 유일한 컴포넌트는 -[kubelet 리소스 부족 축출](/docs/tasks/administer-cluster/out-of-resource/)이다. +[kubelet 리소스 부족 축출](/docs/concepts/scheduling-eviction/node-pressure-eviction/)이다. kubelet은 부족한 리소스의 사용이 요청을 초과하는지 여부에 따라, 그런 다음 우선순위에 따라, 파드의 스케줄링 요청에 대한 부족한 컴퓨팅 리소스의 소비에 의해 먼저 축출 대상 파드의 순위를 매긴다. 더 자세한 내용은 -[엔드유저 파드 축출](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)을 +[엔드유저 파드 축출](/docs/concepts/scheduling-eviction/node-pressure-eviction/#evicting-end-user-pods)을 참조한다. kubelet 리소스 부족 축출은 사용량이 요청을 초과하지 않는 경우 @@ -367,4 +367,4 @@ kubelet 리소스 부족 축출은 사용량이 요청을 초과하지 않는 ## {{% heading "whatsnext" %}} -* 프라이어리티클래스와 관련하여 리소스쿼터 사용에 대해 [기본적으로 프라이어리티 클래스 소비 제한](/ko/docs/concepts/policy/resource-quotas/#기본적으로-우선-순위-클래스-소비-제한)을 읽어보자. +* 프라이어리티클래스와 관련하여 리소스쿼터 사용에 대해 [기본적으로 프라이어리티클래스 소비 제한](/ko/docs/concepts/policy/resource-quotas/#기본적으로-우선-순위-클래스-소비-제한)을 읽어보자. diff --git a/content/ko/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/ko/docs/concepts/scheduling-eviction/resource-bin-packing.md index 34ff6f3108..1ac3b81262 100644 --- a/content/ko/docs/concepts/scheduling-eviction/resource-bin-packing.md +++ b/content/ko/docs/concepts/scheduling-eviction/resource-bin-packing.md @@ -26,7 +26,7 @@ kube-scheduler를 미세 조정할 수 있다. 통해 사용자는 적절한 파라미터를 사용해서 확장된 리소스를 빈 팩으로 만들 수 있어 대규모의 클러스터에서 부족한 리소스의 활용도가 향상된다. `RequestedToCapacityRatioResourceAllocation` 우선 순위 기능의 -동작은 `requestedToCapacityRatioArguments`라는 +동작은 `RequestedToCapacityRatioArgs`라는 구성 옵션으로 제어할 수 있다. 이 인수는 `shape`와 `resources` 두 개의 파라미터로 구성된다. `shape` 파라미터는 사용자가 `utilization`과 `score` 값을 기반으로 최소 요청 또는 최대 요청된 대로 기능을 @@ -39,27 +39,29 @@ kube-scheduler를 미세 조정할 수 있다. 설정하는 구성의 예시이다. ```yaml -apiVersion: v1 -kind: Policy +apiVersion: kubescheduler.config.k8s.io/v1beta1 +kind: KubeSchedulerConfiguration +profiles: # ... -priorities: - # ... - - name: RequestedToCapacityRatioPriority - weight: 2 - argument: - requestedToCapacityRatioArguments: - shape: - - utilization: 0 - score: 0 - - utilization: 100 - score: 10 - resources: - - name: intel.com/foo - weight: 3 - - name: intel.com/bar - weight: 5 + pluginConfig: + - name: RequestedToCapacityRatio + args: + shape: + - utilization: 0 + score: 10 + - utilization: 100 + score: 0 + resources: + - name: intel.com/foo + weight: 3 + - name: intel.com/bar + weight: 5 ``` +kube-scheduler 플래그 `--config=/path/to/config/file` 을 사용하여 +`KubeSchedulerConfiguration` 파일을 참조하면 구성이 스케줄러에 +전달된다. 
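Tying the PriorityClass hunks earlier in this file set together, a pod references a class by name in `priorityClassName`; a minimal sketch, assuming the `high-priority` class from the example above exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  # the admission controller resolves this name to the integer priority;
  # if no such PriorityClass exists, the pod is rejected
  priorityClassName: high-priority
  containers:
    - name: nginx
      image: nginx
```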
+ **이 기능은 기본적으로 비활성화되어 있다.** ### 우선 순위 기능 튜닝하기 diff --git a/content/ko/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/ko/docs/concepts/scheduling-eviction/taint-and-toleration.md index 588adee0f7..4465b8a149 100644 --- a/content/ko/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/ko/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -281,5 +281,5 @@ tolerations: ## {{% heading "whatsnext" %}} -* [리소스 부족 다루기](/docs/tasks/administer-cluster/out-of-resource/)와 어떻게 구성하는지에 대해 알아보기 -* [파드 우선순위](/ko/docs/concepts/configuration/pod-priority-preemption/)에 대해 알아보기 +* [리소스 부족 다루기](/docs/concepts/scheduling-eviction/node-pressure-eviction/)와 어떻게 구성하는지에 대해 알아보기 +* [파드 우선순위](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/)에 대해 알아보기 diff --git a/content/ko/docs/concepts/security/overview.md b/content/ko/docs/concepts/security/overview.md index 9cd48a172c..64ed2675b2 100644 --- a/content/ko/docs/concepts/security/overview.md +++ b/content/ko/docs/concepts/security/overview.md @@ -149,7 +149,7 @@ TLS를 통한 접근 | 코드가 TCP를 통해 통신해야 한다면, 미리 * [파드에 대한 네트워크 정책](/ko/docs/concepts/services-networking/network-policies/) * [쿠버네티스 API 접근 제어하기](/ko/docs/concepts/security/controlling-access) * [클러스터 보안](/docs/tasks/administer-cluster/securing-a-cluster/) -* 컨트롤 플레인을 위한 [전송 데이터 암호화](/docs/tasks/tls/managing-tls-in-a-cluster/) +* 컨트롤 플레인을 위한 [전송 데이터 암호화](/ko/docs/tasks/tls/managing-tls-in-a-cluster/) * [Rest에서 데이터 암호화](/docs/tasks/administer-cluster/encrypt-data/) * [쿠버네티스 시크릿](/ko/docs/concepts/configuration/secret/) * [런타임 클래스](/ko/docs/concepts/containers/runtime-class) diff --git a/content/ko/docs/concepts/services-networking/dns-pod-service.md b/content/ko/docs/concepts/services-networking/dns-pod-service.md index e3254d3ba8..8a35c2c6a3 100644 --- a/content/ko/docs/concepts/services-networking/dns-pod-service.md +++ b/content/ko/docs/concepts/services-networking/dns-pod-service.md @@ -7,6 +7,7 @@ content_type: concept weight: 20 --- + 쿠버네티스는 파드와 서비스를 위한 DNS 레코드를 생성한다. 사용자는 IP 주소 대신에 일관된 DNS 네임을 통해서 서비스에 접속할 수 있다. @@ -261,6 +262,8 @@ spec: ### 파드의 DNS 설정 {#pod-dns-config} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} + 사용자들은 파드의 DNS 설정을 통해서 직접 파드의 DNS를 세팅할 수 있다. `dnsConfig` 필드는 선택적이고, `dnsPolicy` 세팅과 함께 동작한다. @@ -310,18 +313,6 @@ search default.svc.cluster-domain.example svc.cluster-domain.example cluster-dom options ndots:5 ``` -### 기능 지원 여부 - -파드 DNS 구성 및 DNS 정책 "`None`"에 대한 지원 정보는 아래에서 확인 할 수 있다. - -| k8s 버전 | 기능 지원 | -| :---------: |:-----------:| -| 1.14 | 안정 | -| 1.10 | 베타 (기본)| -| 1.9 | 알파 | - - - ## {{% heading "whatsnext" %}} diff --git a/content/ko/docs/concepts/services-networking/endpoint-slices.md b/content/ko/docs/concepts/services-networking/endpoint-slices.md index 4e12cf9ff2..4ea1281faa 100644 --- a/content/ko/docs/concepts/services-networking/endpoint-slices.md +++ b/content/ko/docs/concepts/services-networking/endpoint-slices.md @@ -154,7 +154,7 @@ v1beta1 API의 `topology` 필드에 있는 `"topology.kubernetes.io/zone"` ### 관리 -대부분의 경우, 컨트롤 플레인(특히, 엔드포인트 슬라이스 +대부분의 경우, 컨트롤 플레인(특히, 엔드포인트슬라이스 {{< glossary_tooltip text="컨트롤러" term_id="controller" >}})는 엔드포인트슬라이스 오브젝트를 생성하고 관리한다. 다른 엔티티나 컨트롤러가 추가 엔드포인트슬라이스 집합을 관리하게 할 수 있는 서비스 메시 구현과 같이 @@ -165,13 +165,13 @@ v1beta1 API의 `topology` 필드에 있는 `"topology.kubernetes.io/zone"` 엔티티를 나타내는 `endpointslice.kubernetes.io/managed-by` {{< glossary_tooltip term_id="label" text="레이블" >}}을 정의한다. 
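As a sketch of the `endpointslice.kubernetes.io/managed-by` convention described above, assuming the `discovery.k8s.io/v1` API; the manager value `my-controller.example.com` is a hypothetical custom controller, not the built-in `endpointslice-controller.k8s.io`:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # ties the slice back to its owning Service
    kubernetes.io/service-name: example
    # identifies the entity managing this slice
    endpointslice.kubernetes.io/managed-by: my-controller.example.com
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
```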
-엔드포인트 슬라이스 컨트롤러는 관리하는 모든 엔드포인트슬라이스에 레이블의 값으로 +엔드포인트슬라이스 컨트롤러는 관리하는 모든 엔드포인트슬라이스에 레이블의 값으로 `endpointslice-controller.k8s.io` 를 설정한다. 엔드포인트슬라이스를 관리하는 다른 엔티티도 이 레이블에 고유한 값을 설정해야 한다. ### 소유권 -대부분의 유스케이스에서, 엔드포인트 슬라이스 오브젝트가 엔드포인트를 +대부분의 유스케이스에서, 엔드포인트슬라이스 오브젝트가 엔드포인트를 추적하는 서비스가 엔드포인트슬라이스를 소유한다. 이 소유권은 각 엔드포인트슬라이스의 소유자 참조와 서비스에 속한 모든 엔드포인트슬라이스의 간단한 조회를 가능하게 하는 `kubernetes.io/service-name` 레이블로 표시된다. @@ -247,5 +247,4 @@ v1beta1 API의 `topology` 필드에 있는 `"topology.kubernetes.io/zone"` ## {{% heading "whatsnext" %}} -* [엔드포인트슬라이스 활성화하기](/docs/tasks/administer-cluster/enabling-endpointslices)에 대해 배우기 -* [애플리케이션을 서비스와 함께 연결하기](/ko/docs/concepts/services-networking/connect-applications-service/)를 읽어보기 +* [서비스와 애플리케이션 연결하기](/ko/docs/concepts/services-networking/connect-applications-service/)를 읽어보기 diff --git a/content/ko/docs/concepts/services-networking/service-traffic-policy.md b/content/ko/docs/concepts/services-networking/service-traffic-policy.md index 5f9d394718..c4f87e2b3e 100644 --- a/content/ko/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/ko/docs/concepts/services-networking/service-traffic-policy.md @@ -21,7 +21,7 @@ _서비스 내부 트래픽 정책_ 을 사용하면 내부 트래픽 제한이 ## 서비스 내부 트래픽 정책 사용 -[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)에서 +[기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)에서 `ServiceInternalTrafficPolicy`를 활성화한 후에 {{< glossary_tooltip text="서비스" term_id="service" >}}의 `.spec.internalTrafficPolicy`를 `Local`로 설정하여 내부 전용 트래픽 정책을 활성화 할 수 있다. @@ -57,7 +57,7 @@ kube-proxy는 `spec.internalTrafficPolicy` 의 설정에 따라서 라우팅되 엔드포인트를 필터링한다. 이것을 `Local`로 설정하면, 노드 내부 엔드포인트만 고려한다. 이 설정이 `Cluster`이거나 누락되었다면 모든 엔드포인트를 고려한다. -[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)의 +[기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)의 `ServiceInternalTrafficPolicy`를 활성화한다면, `spec.internalTrafficPolicy`는 기본값 "Cluster"로 설정된다. ## 제약조건 diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index 7bbb4f6f63..5c4b9edeee 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -215,7 +215,7 @@ API 리소스이다. 개념적으로 엔드포인트와 매우 유사하지만, 오브젝트에 의해 미러링된다. 이 필드는 표준 쿠버네티스 레이블 구문을 따른다. 값은 -[IANA 표준 서비스 이름](http://www.iana.org/assignments/service-names) 또는 +[IANA 표준 서비스 이름](https://www.iana.org/assignments/service-names) 또는 `mycompany.com/my-custom-protocol`과 같은 도메인 접두사 이름 중 하나여야 한다. ## 가상 IP와 서비스 프록시 diff --git a/content/ko/docs/concepts/storage/storage-classes.md b/content/ko/docs/concepts/storage/storage-classes.md index 0bec67ef8a..d8e0be153d 100644 --- a/content/ko/docs/concepts/storage/storage-classes.md +++ b/content/ko/docs/concepts/storage/storage-classes.md @@ -653,11 +653,11 @@ metadata: provisioner: kubernetes.io/azure-disk parameters: storageaccounttype: Standard_LRS - kind: Shared + kind: managed ``` * `storageaccounttype`: Azure 스토리지 계정 Sku 계층. 기본값은 없음. -* `kind`: 가능한 값은 `shared` (기본값), `dedicated`, 그리고 `managed` 이다. +* `kind`: 가능한 값은 `shared`, `dedicated`, 그리고 `managed` (기본값) 이다. `kind` 가 `shared` 인 경우, 모든 비관리 디스크는 클러스터와 동일한 리소스 그룹에 있는 몇 개의 공유 스토리지 계정에 생성된다. 
`kind` 가 `dedicated` 인 경우, 클러스터와 동일한 리소스 그룹에서 새로운 diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index 6156a4704b..3bdfead48d 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -544,7 +544,7 @@ glusterfs 볼륨에 데이터를 미리 채울 수 있으며, 파드 간에 데 | | 빈 문자열 (기본값)은 이전 버전과의 호환성을 위한 것으로, hostPath 볼륨은 마운트 하기 전에 아무런 검사도 수행되지 않는다. | | `DirectoryOrCreate` | 만약 주어진 경로에 아무것도 없다면, 필요에 따라 Kubelet이 가지고 있는 동일한 그룹과 소유권, 권한을 0755로 설정한 빈 디렉터리를 생성한다. | | `Directory` | 주어진 경로에 디렉터리가 있어야 함 | -| `FileOrCreate` | 만약 주어진 경로에 아무것도 없다면, 필요에 따라 Kubelet이 가지고 있는 동일한 그룹과 소유권, 권한을 0644로 설정한 빈 디렉터리를 생성한다. | +| `FileOrCreate` | 만약 주어진 경로에 아무것도 없다면, 필요에 따라 Kubelet이 가지고 있는 동일한 그룹과 소유권, 권한을 0644로 설정한 빈 파일을 생성한다. | | `File` | 주어진 경로에 파일이 있어야 함 | | `Socket` | 주어진 경로에 UNIX 소캣이 있어야 함 | | `CharDevice` | 주어진 경로에 문자 디바이스가 있어야 함 | @@ -914,7 +914,7 @@ projected 볼륨 소스를 [`subPath`](#subpath-사용하기) 볼륨으로 마 ### quobyte -`quobyte` 볼륨을 사용하면 기존 [Quobyte](http://www.quobyte.com) 볼륨을 +`quobyte` 볼륨을 사용하면 기존 [Quobyte](https://www.quobyte.com) 볼륨을 파드에 마운트할 수 있다. {{< note >}} diff --git a/content/ko/docs/concepts/workloads/controllers/daemonset.md b/content/ko/docs/concepts/workloads/controllers/daemonset.md index d7d583d142..1496b25ec3 100644 --- a/content/ko/docs/concepts/workloads/controllers/daemonset.md +++ b/content/ko/docs/concepts/workloads/controllers/daemonset.md @@ -1,4 +1,10 @@ --- + + + + + + title: 데몬셋 content_type: concept weight: 40 @@ -26,7 +32,8 @@ _데몬셋_ 은 모든(또는 일부) 노드가 파드의 사본을 실행하도 ### 데몬셋 생성 -YAML 파일로 데몬셋을 설명 할 수 있다. 예를 들어 아래 `daemonset.yaml` 파일은 fluentd-elasticsearch 도커 이미지를 실행하는 데몬셋을 설명한다. +YAML 파일에 데몬셋 명세를 작성할 수 있다. 예를 들어 아래 `daemonset.yaml` 파일은 +fluentd-elasticsearch 도커 이미지를 실행하는 데몬셋을 설명한다. {{< codenew file="controllers/daemonset.yaml" >}} @@ -40,19 +47,23 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml 다른 모든 쿠버네티스 설정과 마찬가지로 데몬셋에는 `apiVersion`, `kind` 그리고 `metadata` 필드가 필요하다. 일반적인 설정파일 작업에 대한 정보는 -[스테이트리스 애플리케이션 실행하기](/docs/tasks/run-application/run-stateless-application-deployment/), -[컨테이너 구성하기](/ko/docs/tasks/) 그리고 [kubectl을 사용한 오브젝트 관리](/ko/docs/concepts/overview/working-with-objects/object-management/) 문서를 참고한다. +[스테이트리스 애플리케이션 실행하기](/docs/tasks/run-application/run-stateless-application-deployment/)와 + [kubectl을 사용한 오브젝트 관리](/ko/docs/concepts/overview/working-with-objects/object-management/)를 참고한다. 데몬셋 오브젝트의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다. -데몬셋에는 [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 섹션도 필요하다. +데몬셋에는 +[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) +섹션도 필요하다. ### 파드 템플릿 `.spec.template` 는 `.spec` 의 필수 필드 중 하나이다. -`.spec.template` 는 [파드 템플릿](/ko/docs/concepts/workloads/pods/#파드-템플릿)이다. 이것은 중첩되어 있다는 점과 `apiVersion` 또는 `kind` 를 가지지 않는 것을 제외하면 {{< glossary_tooltip text="파드" term_id="pod" >}}와 정확히 같은 스키마를 가진다. +`.spec.template` 는 [파드 템플릿](/ko/docs/concepts/workloads/pods/#파드-템플릿)이다. +이것은 중첩되어 있다는 점과 `apiVersion` 또는 `kind` 를 가지지 않는 것을 제외하면 +{{< glossary_tooltip text="파드" term_id="pod" >}}와 정확히 같은 스키마를 가진다. 데몬셋의 파드 템플릿에는 파드의 필수 필드 외에도 적절한 레이블이 명시되어야 한다([파드 셀렉터](#파드-셀렉터)를 본다). @@ -73,19 +84,22 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml `.spec.selector` 는 다음 2개의 필드로 구성된 오브젝트이다. 
-* `matchLabels` - [레플리케이션 컨트롤러](/ko/docs/concepts/workloads/controllers/replicationcontroller/)의 `.spec.selector` 와 동일하게 작동한다. +* `matchLabels` - [레플리케이션 컨트롤러](/ko/docs/concepts/workloads/controllers/replicationcontroller/)의 +`.spec.selector` 와 동일하게 작동한다. * `matchExpressions` - 키, 값 목록 그리고 키 및 값에 관련된 연산자를 명시해서 보다 정교한 셀렉터를 만들 수 있다. 2개의 필드가 명시되면 두 필드를 모두 만족하는 것(ANDed)이 결과가 된다. -만약 `.spec.selector` 를 명시하면, 이것은 `.spec.template.metadata.labels` 와 일치해야 한다. 일치하지 않는 구성은 API에 의해 거부된다. +만약 `.spec.selector` 를 명시하면, 이것은 `.spec.template.metadata.labels` 와 일치해야 한다. +일치하지 않는 구성은 API에 의해 거부된다. ### 오직 일부 노드에서만 파드 실행 만약 `.spec.template.spec.nodeSelector` 를 명시하면 데몬셋 컨트롤러는 [노드 셀렉터](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#노드-셀렉터-nodeselector)와 -일치하는 노드에 파드를 생성한다. 마찬가지로 `.spec.template.spec.affinity` 를 명시하면 +일치하는 노드에 파드를 생성한다. +마찬가지로 `.spec.template.spec.affinity` 를 명시하면 데몬셋 컨트롤러는 [노드 어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#노드-어피니티)와 일치하는 노드에 파드를 생성한다. 만약 둘 중 하나를 명시하지 않으면 데몬셋 컨트롤러는 모든 노드에서 파드를 생성한다. @@ -100,18 +114,19 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml 데몬셋 파드는 데몬셋 컨트롤러에 의해 생성되고 스케줄된다. 이에 대한 이슈를 소개한다. - * 파드 동작의 불일치: 스케줄 되기 위해서 대기 중인 일반 파드는 `Pending` 상태로 생성된다. - 그러나 데몬셋 파드는 `Pending` 상태로 생성되지 않는다. - 이것은 사용자에게 혼란을 준다. - * [파드 선점](/ko/docs/concepts/configuration/pod-priority-preemption/)은 - 기본 스케줄러에서 처리한다. 선점이 활성화되면 데몬셋 컨트롤러는 - 파드 우선순위와 선점을 고려하지 않고 스케줄 한다. +* 파드 동작의 불일치: 스케줄 되기 위해서 대기 중인 일반 파드는 `Pending` 상태로 생성된다. + 그러나 데몬셋 파드는 `Pending` 상태로 생성되지 않는다. + 이것은 사용자에게 혼란을 준다. +* [파드 선점](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/)은 + 기본 스케줄러에서 처리한다. 선점이 활성화되면 데몬셋 컨트롤러는 + 파드 우선순위와 선점을 고려하지 않고 스케줄 한다. `ScheduleDaemonSetPods` 로 데몬셋 파드에 `.spec.nodeName` 용어 대신 `NodeAffinity` 용어를 추가해서 데몬셋 컨트롤러 대신 기본 스케줄러를 사용해서 데몬셋을 스케줄할 수 있다. 이후에 기본 스케줄러를 사용해서 대상 호스트에 파드를 바인딩한다. 만약 데몬셋 파드에 -이미 노드 선호도가 존재한다면 교체한다(대상 호스트를 선택하기 전에 원래 노드의 어피니티가 고려된다). 데몬셋 컨트롤러는 +이미 노드 선호도가 존재한다면 교체한다(대상 호스트를 선택하기 전에 +원래 노드의 어피니티가 고려된다). 데몬셋 컨트롤러는 데몬셋 파드를 만들거나 수정할 때만 이런 작업을 수행하며, 데몬셋의 `spec.template` 은 변경되지 않는다. @@ -152,10 +167,12 @@ nodeAffinity: - **푸시(Push)**: 데몬셋의 파드는 통계 데이터베이스와 같은 다른 서비스로 업데이트를 보내도록 구성되어있다. 그들은 클라이언트들을 가지지 않는다. -- **노드IP와 알려진 포트**: 데몬셋의 파드는 `호스트 포트`를 사용할 수 있으며, 노드IP를 통해 파드에 접근할 수 있다. 클라이언트는 노드IP를 어떻게든지 알고 있으며, 관례에 따라 포트를 알고 있다. +- **노드IP와 알려진 포트**: 데몬셋의 파드는 `호스트 포트`를 사용할 수 있으며, + 노드IP를 통해 파드에 접근할 수 있다. + 클라이언트는 노드IP를 어떻게든지 알고 있으며, 관례에 따라 포트를 알고 있다. - **DNS**: 동일한 파드 셀렉터로 [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)를 만들고, - 그 다음에 `엔드포인트` 리소스를 사용해서 데몬셋을 찾거나 DNS에서 여러 A레코드를 - 검색한다. + 그 다음에 `엔드포인트` 리소스를 사용해서 데몬셋을 찾거나 + DNS에서 여러 A레코드를 검색한다. - **서비스**: 동일한 파드 셀렉터로 서비스를 생성하고, 서비스를 사용해서 임의의 노드의 데몬에 도달한다(특정 노드에 도달할 방법이 없다). diff --git a/content/ko/docs/concepts/workloads/controllers/job.md b/content/ko/docs/concepts/workloads/controllers/job.md index 6cdab2d6c5..c24beb0fca 100644 --- a/content/ko/docs/concepts/workloads/controllers/job.md +++ b/content/ko/docs/concepts/workloads/controllers/job.md @@ -304,7 +304,7 @@ spec: ### 완료된 잡을 위한 TTL 메커니즘 -{{< feature-state for_k8s_version="v1.12" state="alpha" >}} +{{< feature-state for_k8s_version="v1.21" state="beta" >}} 완료된 잡 (`Complete` 또는 `Failed`)을 자동으로 정리하는 또 다른 방법은 잡의 `.spec.ttlSecondsAfterFinished` 필드를 지정해서 완료된 리소스에 대해 @@ -342,11 +342,6 @@ spec: 삭제되도록 할 수 있다. 만약 필드를 설정하지 않으면, 이 잡이 완료된 후에 TTL 컨트롤러에 의해 정리되지 않는다. -이 TTL 메커니즘은 기능 게이트 `TTLAfterFinished`와 함께 알파 단계이다. 더 -자세한 정보는 완료된 리소스를 위한 -[TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/) -문서를 본다. 
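To make the now-beta TTL mechanism above concrete, a sketch of a Job that is cleaned up 100 seconds after it finishes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  # delete the Job (and its pods) 100 seconds after it completes or fails
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```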
- ## 잡 패턴 잡 오브젝트를 사용해서 신뢰할 수 있는 파드의 병렬 실행을 지원할 수 있다. 잡 오브젝트는 과학 diff --git a/content/ko/docs/concepts/workloads/pods/disruptions.md b/content/ko/docs/concepts/workloads/pods/disruptions.md index e9263d6461..497d857d11 100644 --- a/content/ko/docs/concepts/workloads/pods/disruptions.md +++ b/content/ko/docs/concepts/workloads/pods/disruptions.md @@ -31,7 +31,7 @@ weight: 60 - 클라우드 공급자 또는 하이퍼바이저의 오류로 인한 VM 장애 - 커널 패닉 - 클러스터 네트워크 파티션의 발생으로 클러스터에서 노드가 사라짐 -- 노드의 [리소스 부족](/docs/tasks/administer-cluster/out-of-resource/)으로 파드가 축출됨 +- 노드의 [리소스 부족](/docs/concepts/scheduling-eviction/node-pressure-eviction/)으로 파드가 축출됨 리소스 부족을 제외한 나머지 조건은 대부분의 사용자가 익숙할 것이다. 왜냐하면 @@ -76,7 +76,7 @@ weight: 60 - 복제된 애플리케이션의 구동 시 훨씬 더 높은 가용성을 위해 랙 전체 ([안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#파드간-어피니티와-안티-어피니티) 이용) 또는 영역 간 - ([다중 영역 클러스터](/docs/setup/multiple-zones)를 이용한다면)에 + ([다중 영역 클러스터](/ko/docs/setup/best-practices/multiple-zones/)를 이용한다면)에 애플리케이션을 분산해야 한다. 자발적 중단의 빈도는 다양하다. 기본적인 쿠버네티스 클러스터에서는 자동화된 자발적 중단은 발생하지 않는다(사용자가 지시한 자발적 중단만 발생한다). @@ -86,7 +86,7 @@ weight: 60 단편화를 제거하고 노드의 효율을 높이는 과정에서 자발적 중단을 야기할 수 있다. 클러스터 관리자 또는 호스팅 공급자는 예측 가능한 자발적 중단 수준에 대해 문서화해야 한다. -파드 스펙 안에 [프라이어리티클래스 사용하기](/ko/docs/concepts/configuration/pod-priority-preemption/)와 같은 특정 환경설정 옵션 +파드 스펙 안에 [프라이어리티클래스 사용하기](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/)와 같은 특정 환경설정 옵션 또한 자발적(+ 비자발적) 중단을 유발할 수 있다. diff --git a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 2601f5c871..3c30e895b6 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -15,7 +15,7 @@ obsolete --> {{< note >}} v1.18 이전 버전의 쿠버네티스에서는 파드 토폴로지 분배 제약조건을 사용하려면 [API 서버](/ko/docs/concepts/overview/components/#kube-apiserver)와 -[스케줄러](/docs/reference/generated/kube-scheduler/)에서 +[스케줄러](/docs/reference/command-line-tools-reference/kube-scheduler/)에서 `EvenPodsSpread`[기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를 활성화해야 한다 {{< /note >}} diff --git a/content/ko/docs/contribute/_index.md b/content/ko/docs/contribute/_index.md index 0582739545..dcb7a68f49 100644 --- a/content/ko/docs/contribute/_index.md +++ b/content/ko/docs/contribute/_index.md @@ -46,7 +46,7 @@ card: 1. CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md)에 서명합니다. 1. [문서 리포지터리](https://github.com/kubernetes/website)와 웹사이트의 [정적 사이트 생성기](https://gohugo.io)를 숙지합니다. -1. [풀 리퀘스트 열기](/ko/docs/contribute/new-content/new-content/)와 +1. [풀 리퀘스트 열기](/ko/docs/contribute/new-content/open-a-pr/)와 [변경 검토](/ko/docs/contribute/review/reviewing-prs/)의 기본 프로세스를 이해하도록 합니다. @@ -60,7 +60,7 @@ card: 기여할 수 있는 다양한 방법에 대해 알아봅니다. - [`kubernetes/website` 이슈 목록](https://github.com/kubernetes/website/issues/)을 확인하여 좋은 진입점이 되는 이슈를 찾을 수 있습니다. -- 기존 문서에 대해 [GitHub을 사용해서 풀 리퀘스트 열거나](/ko/docs/contribute/new-content/new-content/#github을-사용하여-변경하기) +- 기존 문서에 대해 [GitHub을 사용해서 풀 리퀘스트 열거나](/ko/docs/contribute/new-content/open-a-pr/#github을-사용하여-변경하기) GitHub에서의 이슈 제기에 대해 자세히 알아봅니다. - 정확성과 언어에 대해 다른 쿠버네티스 커뮤니티 맴버의 [풀 리퀘스트 검토](/ko/docs/contribute/review/reviewing-prs/)를 합니다. @@ -71,7 +71,7 @@ card: ## 다음 단계 -- 리포지터리의 [로컬 복제본에서 작업](/ko/docs/contribute/new-content/new-content/#fork-the-repo)하는 +- 리포지터리의 [로컬 복제본에서 작업](/ko/docs/contribute/new-content/open-a-pr/#fork-the-repo)하는 방법을 배워봅니다. 
- [릴리스된 기능](/docs/contribute/new-content/new-features/)을 문서화 합니다. - [SIG Docs](/ko/docs/contribute/participate/)에 참여하고, @@ -96,6 +96,6 @@ SIG Docs는 여러가지 방법으로 의견을 나누고 있습니다. ## 다른 기여 방법들 -- [쿠버네티스 커뮤니티 사이트](/community/)를 방문하십시오. 트위터 또는 스택 오버플로우에 참여하고, 현지 쿠버네티스 모임과 이벤트 등에 대해 알아봅니다. +- [쿠버네티스 커뮤니티 사이트](/ko/community/)를 방문하십시오. 트위터 또는 스택 오버플로우에 참여하고, 현지 쿠버네티스 모임과 이벤트 등에 대해 알아봅니다. - [기여자 치트시트](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet)를 읽고 쿠버네티스 기능 개발에 참여합니다. - [블로그 게시물 또는 사례 연구](/docs/contribute/new-content/blogs-case-studies/)를 제출합니다. diff --git a/content/ko/docs/contribute/analytics.md b/content/ko/docs/contribute/analytics.md new file mode 100644 index 0000000000..d96d1ed576 --- /dev/null +++ b/content/ko/docs/contribute/analytics.md @@ -0,0 +1,25 @@ +--- +title: 사이트 분석 보기 +content_type: concept +weight: 100 +card: + name: contribute + weight: 100 +--- + + + +이 페이지는 kubernetes.io 사이트 분석을 제공하는 대시보드에 대한 정보를 담고 있다. + + + + +[대시보드 보기](https://datastudio.google.com/reporting/fede2672-b2fd-402a-91d2-7473bdb10f04). + +이 대시보드는 Google Data Studio를 사용하여 구축되었으며 kubernetes.io에서 Google Analytics를 사용하여 수집한 정보를 보여준다. + +### 대시보드 사용 + +기본적으로 대시보드는 지난 30일 동안 수집된 모든 데이터의 분석을 제공한다. 날짜 선택을 통해 특정 날짜 범위의 데이터를 볼 수 있다. 그 외 필터링 옵션을 사용하면, 사용자의 위치, 사이트에 접속하는데 사용된 장치, 번역된 문서 언어 등을 기준으로 데이터를 확인할 수 있다. + + 이 대시보드에 문제가 있거나 개선을 요청하려면, [이슈를 오픈](https://github.com/kubernetes/website/issues/new/choose) 한다. diff --git a/content/ko/docs/contribute/localization_ko.md b/content/ko/docs/contribute/localization_ko.md index fe506a7bb1..e2e10f01cc 100644 --- a/content/ko/docs/contribute/localization_ko.md +++ b/content/ko/docs/contribute/localization_ko.md @@ -133,7 +133,7 @@ weight: 10 ### API 오브젝트 용어 한글화 방침 일반적으로 `kubectl api-resources` 결과의 `kind` 에 해당하는 API 오브젝트는 -[국립국어원 외래어 표기법](http://kornorms.korean.go.kr/regltn/regltnView.do?regltn_code=0003#a)에 +[국립국어원 외래어 표기법](https://kornorms.korean.go.kr/regltn/regltnView.do?regltn_code=0003#a)에 따라 한글로 표기하고 영문을 병기한다. 예를 들면 다음과 같다. API 오브젝트(kind) | 한글화(외래어 표기 및 영문 병기) diff --git a/content/ko/docs/contribute/participate/pr-wranglers.md b/content/ko/docs/contribute/participate/pr-wranglers.md index 30c0979969..f3333890d2 100644 --- a/content/ko/docs/contribute/participate/pr-wranglers.md +++ b/content/ko/docs/contribute/participate/pr-wranglers.md @@ -19,7 +19,7 @@ PR 랭글러는 일주일 간 매일 다음의 일을 해야 한다. - 매일 새로 올라오는 이슈를 심사하고 태그를 지정한다. SIG Docs가 메타데이터를 사용하는 방법에 대한 지침은 [이슈 심사 및 분류](/ko/docs/contribute/review/for-approvers/#이슈-심사와-분류)를 참고한다. - [스타일](/docs/contribute/style/style-guide/)과 [콘텐츠](/docs/contribute/style/content-guide/) 가이드를 준수하는지에 대해 [열린(open) 풀 리퀘스트](https://github.com/kubernetes/website/pulls)를 매일 리뷰한다. - 가장 작은 PR(`size/XS`)부터 시작하고, 가장 큰(`size/XXL`) PR까지 리뷰한다. 가능한 한 많은 PR을 리뷰한다. -- PR 기여자들이 [CLA]()에 서명했는지 확인한다. +- PR 기여자들이 [CLA](https://github.com/kubernetes/community/blob/master/CLA.md)에 서명했는지 확인한다. - CLA에 서명하지 않은 기여자에게 CLA에 서명하도록 알리려면 [이](https://github.com/zparnold/k8s-docs-pr-botherer) 스크립트를 사용한다. - 제안된 변경 사항에 대한 피드백을 제공하고 다른 SIG의 멤버에게 기술 리뷰를 요청한다. - 제안된 콘텐츠 변경에 대해 PR에 인라인 제안(inline suggestion)을 제공한다. diff --git a/content/ko/docs/contribute/participate/roles-and-responsibilities.md b/content/ko/docs/contribute/participate/roles-and-responsibilities.md index 448502c0c3..897d638435 100644 --- a/content/ko/docs/contribute/participate/roles-and-responsibilities.md +++ b/content/ko/docs/contribute/participate/roles-and-responsibilities.md @@ -29,7 +29,7 @@ GitHub 계정을 가진 누구나 쿠버네티스에 기여할 수 있다. SIG D 이슈를 올린다. - 풀 리퀘스트에 대해 구속력 없는 피드백을 제공한다. 
- 현지화에 기여한다. -- [슬랙](http://slack.k8s.io/) 또는 +- [슬랙](https://slack.k8s.io/) 또는 [SIG docs 메일링 리스트](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)에 개선을 제안한다. [CLA에 서명](/ko/docs/contribute/new-content/overview/#sign-the-cla) 후에 누구나 다음을 할 수 있다. @@ -203,7 +203,7 @@ PR은 자동으로 병합된다. SIG Docs 승인자는 추가적인 기술 리 - 주간 로테이션을 위해 [PR Wrangler 로테이션 스케줄](https://github.com/kubernetes/website/wiki/PR-Wranglers)에 참여한다. SIG Docs는 모든 승인자들이 이 로테이션에 참여할 것으로 기대한다. 자세한 내용은 - [PR 랭글러(PR wrangler)](/ko/docs/contribute/participating/pr-wranglers/)를 + [PR 랭글러(PR wrangler)](/ko/docs/contribute/participate/pr-wranglers/)를 참고한다. ## 승인자 되기 @@ -231,4 +231,4 @@ PR은 자동으로 병합된다. SIG Docs 승인자는 추가적인 기술 리 ## {{% heading "whatsnext" %}} -- 모든 승인자가 교대로 수행하는 역할인 [PR 랭글러](/ko/docs/contribute/participating/pr-wranglers)에 대해 읽어보기 +- 모든 승인자가 교대로 수행하는 역할인 [PR 랭글러](/ko/docs/contribute/participate/pr-wranglers)에 대해 읽어보기 diff --git a/content/ko/docs/contribute/review/reviewing-prs.md b/content/ko/docs/contribute/review/reviewing-prs.md index f0a164de00..e0b07a79a9 100644 --- a/content/ko/docs/contribute/review/reviewing-prs.md +++ b/content/ko/docs/contribute/review/reviewing-prs.md @@ -18,7 +18,7 @@ weight: 10 - 적합한 코멘트를 남길 수 있도록 [콘텐츠 가이드](/docs/contribute/style/content-guide/)와 [스타일 가이드](/docs/contribute/style/style-guide/)를 읽는다. - 쿠버네티스 문서화 커뮤니티의 다양한 - [역할과 책임](/ko/docs/contribute/participating/#역할과-책임)을 이해한다. + [역할과 책임](/ko/docs/contribute/participate/#역할과-책임)을 이해한다. @@ -87,7 +87,7 @@ weight: 10 - PR이 새로운 페이지를 소개하는가? 그렇다면, - 페이지가 올바른 [페이지 콘텐츠 타입](/docs/contribute/style/page-content-types/)과 연관된 Hugo 단축 코드를 사용하는가? - 섹션의 측면 탐색에 페이지가 올바르게 나타나는가? - - 페이지가 [문서 홈](/ko/docs/home/) 목록에 나타나야 하는가? + - 페이지가 [문서 홈](/docs/home/) 목록에 나타나야 하는가? - 변경 사항이 Netlify 미리보기에 표시되는가? 목록, 코드 블록, 표, 메모 및 이미지에 특히 주의한다. ### 기타 diff --git a/content/ko/docs/contribute/style/write-new-topic.md b/content/ko/docs/contribute/style/write-new-topic.md index 7441882615..9bb3376933 100644 --- a/content/ko/docs/contribute/style/write-new-topic.md +++ b/content/ko/docs/contribute/style/write-new-topic.md @@ -172,4 +172,4 @@ kubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml ## {{% heading "whatsnext" %}} * [페이지 콘텐츠 타입 사용](/docs/contribute/style/page-content-types/)에 대해 알아보기. -* [풀 리퀘스트 작성](/ko/docs/contribute/new-content/new-content/)에 대해 알아보기. +* [풀 리퀘스트 작성](/ko/docs/contribute/new-content/open-a-pr/)에 대해 알아보기. diff --git a/content/ko/docs/contribute/suggesting-improvements.md b/content/ko/docs/contribute/suggesting-improvements.md index 7dd9f80a71..e10faf6ea8 100644 --- a/content/ko/docs/contribute/suggesting-improvements.md +++ b/content/ko/docs/contribute/suggesting-improvements.md @@ -10,7 +10,7 @@ card: -쿠버네티스 문서에 문제가 있거나, 새로운 내용에 대한 아이디어가 있으면, 이슈를 연다. [GitHub 계정](https://github.com/join)과 웹 브라우저만 있으면 된다. +쿠버네티스 문서의 문제를 발견하거나 새로운 내용에 대한 아이디어가 있으면, 이슈를 연다. [GitHub 계정](https://github.com/join)과 웹 브라우저만 있으면 된다. 대부분의 경우, 쿠버네티스 문서에 대한 새로운 작업은 GitHub의 이슈로 시작된다. 그런 다음 쿠버네티스 기여자는 필요에 따라 이슈를 리뷰, 분류하고 태그를 지정한다. 다음으로, 여러분이나 @@ -22,7 +22,7 @@ card: ## 이슈 열기 -기존 콘텐츠에 대한 개선을 제안하거나, 오류를 발견하면, 이슈를 연다. +기존 콘텐츠에 대한 개선을 제안하고 싶거나 오류를 발견하면, 이슈를 연다. 1. 오른쪽 사이드바에서 **문서에 이슈 생성** 링크를 클릭한다. 그러면 헤더가 미리 채워진 GitHub 이슈 페이지로 리디렉션된다. 
diff --git a/content/ko/docs/home/supported-doc-versions.md b/content/ko/docs/home/supported-doc-versions.md index 07d33e49b9..35f6f9a1b0 100644 --- a/content/ko/docs/home/supported-doc-versions.md +++ b/content/ko/docs/home/supported-doc-versions.md @@ -7,3 +7,6 @@ card: weight: 10 title: 사용 가능한 문서 버전 --- + +이 웹사이트에서는 쿠버네티스 최신 버전 및 +이전 4개 버전에 대한 문서를 제공하고 있다. diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md index a441e80783..68aa8eceb8 100644 --- a/content/ko/docs/reference/_index.md +++ b/content/ko/docs/reference/_index.md @@ -37,7 +37,7 @@ no_list: true - [쿠버네티스 Python 클라이언트 라이브러리](https://github.com/kubernetes-client/python) - [쿠버네티스 Java 클라이언트 라이브러리](https://github.com/kubernetes-client/java) - [쿠버네티스 JavaScript 클라이언트 라이브러리](https://github.com/kubernetes-client/javascript) -- [쿠버네티스 Dotnet 클라이언트 라이브러리](https://github.com/kubernetes-client/csharp) +- [쿠버네티스 C# 클라이언트 라이브러리](https://github.com/kubernetes-client/csharp) - [쿠버네티스 Haskell 클라이언트 라이브러리](https://github.com/kubernetes-client/haskell) ## CLI @@ -55,7 +55,7 @@ no_list: true 파드, 서비스, 레플리케이션 컨트롤러와 같은 API 오브젝트에 대한 검증과 구성을 수행하는 REST API. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - 쿠버네티스에 탑재된 핵심 제어 루프를 포함하는 데몬. -* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 간단한 +* [kube-proxy](/ko/docs/reference/command-line-tools-reference/kube-proxy/) - 간단한 TCP/UDP 스트림 포워딩이나 백-엔드 집합에 걸쳐서 라운드-로빈 TCP/UDP 포워딩을 할 수 있다. * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - 가용성, 성능 및 용량을 관리하는 스케줄러. diff --git a/content/ko/docs/reference/access-authn-authz/service-accounts-admin.md b/content/ko/docs/reference/access-authn-authz/service-accounts-admin.md index c5a13a5608..ca06783465 100644 --- a/content/ko/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/ko/docs/reference/access-authn-authz/service-accounts-admin.md @@ -53,7 +53,7 @@ weight: 50 1. 이전 단계는 파드에 참조되는 `ServiceAccount` 가 있도록 하고, 그렇지 않으면 이를 거부한다. 1. 서비스어카운트 `automountServiceAccountToken` 와 파드의 `automountServiceAccountToken` 중 어느 것도 `false` 로 설정되어 있지 않다면, API 접근을 위한 토큰이 포함된 `volume` 을 파드에 추가한다. 1. 이전 단계에서 서비스어카운트 토큰을 위한 볼륨이 만들어졌다면, `/var/run/secrets/kubernetes.io/serviceaccount` 에 마운트된 파드의 각 컨테이너에 `volumeSource` 를 추가한다. -1. 파드에 `ImagePullSecrets` 이 없는 경우, `ServiceAccount` 의 `ImagePullSecrets` 이 파드에 추가된다. +1. 파드에 `imagePullSecrets` 이 없는 경우, `ServiceAccount` 의 `imagePullSecrets` 이 파드에 추가된다. #### 바인딩된 서비스 어카운트 토큰 볼륨 @@ -86,14 +86,14 @@ weight: 50 프로젝티드 볼륨은 세 가지로 구성된다. 1. kube-apiserver로부터 TokenRequest API를 통해 얻은 서비스어카운트토큰(ServiceAccountToken). 서비스어카운트토큰은 기본적으로 1시간 뒤에, 또는 파드가 삭제될 때 만료된다. 서비스어카운트토큰은 파드에 연결되며 kube-apiserver를 위해 존재한다. -1. kube-apiserver에 대한 연결을 확인하는 데 사용되는 CA 번들을 포함하는 컨피그맵(ConfigMap). 이 기능은 모든 네임스페이스에 "kube-root-ca.crt" 컨피그맵을 게시하는 기능 게이트인 `RootCAConfigMap`이 활성화되어 있어야 동작한다. `RootCAConfigMap`은 1.20에서 기본적으로 활성화되어 있으며, 1.21 이상에서는 항상 활성화된 상태이다. +1. kube-apiserver에 대한 연결을 확인하는 데 사용되는 CA 번들을 포함하는 컨피그맵(ConfigMap). 이 기능은 모든 네임스페이스에 "kube-root-ca.crt" 컨피그맵을 게시하는 기능 게이트인 `RootCAConfigMap`에 의해 동작한다. `RootCAConfigMap` 기능 게이트는 1.21에서 GA로 전환되었으며 기본적으로 활성화되어 있다. (이 플래그는 1.22에서 `--feature-gate` 인자에서 제외될 예정이다.) 1. 파드의 네임스페이스를 참조하는 DownwardAPI. 상세 사항은 [프로젝티드 볼륨](/docs/tasks/configure-pod-container/configure-projected-volume-storage/)을 참고한다. `BoundServiceAccountTokenVolume` 기능 게이트가 활성화되어 있지 않은 경우, -위의 프로젝티드 볼륨을 파드 스펙에 추가하여 시크릿 기반 서비스 어카운트 볼륨을 프로젝티드 볼륨으로 수동으로 옮길 수 있다. -그러나, `RootCAConfigMap`은 활성화되어 있어야 한다. 
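A sketch of the three projected sources listed above combined in one pod volume, mirroring what the bound token volume provides; the volume name, mount path, and expiration are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-example
spec:
  serviceAccountName: default
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: kube-api-access
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
  volumes:
    - name: kube-api-access
      projected:
        sources:
          # 1. time-bound ServiceAccount token from the TokenRequest API
          - serviceAccountToken:
              path: token
              expirationSeconds: 3607
          # 2. CA bundle published per namespace by RootCAConfigMap
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          # 3. the pod's own namespace via the downward API
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    fieldPath: metadata.namespace
```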
+위의 프로젝티드 볼륨을 파드 스펙에 추가하여 +시크릿 기반 서비스 어카운트 볼륨을 프로젝티드 볼륨으로 수동으로 옮길 수 있다. ### 토큰 컨트롤러 diff --git a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md index a658d58497..aef90d8db5 100644 --- a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md @@ -611,12 +611,12 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 - `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의 동등성을 캐시할 수 있게 한다. - `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한 - 엔드포인트슬라이스(EndpointSlices)를 활성화한다. [엔드포인트슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. + 엔드포인트슬라이스(EndpointSlices)를 활성화한다. [엔드포인트슬라이스 활성화](/ko/docs/concepts/services-networking/endpoint-slices/)를 참고한다. - `EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다. - `EndpointSliceProxying`: 활성화되면, 리눅스에서 실행되는 kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다. - [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. + [엔드포인트슬라이스 활성화](/ko/docs/concepts/services-networking/endpoint-slices/)를 참고한다. - `EndpointSliceTerminatingCondition`: 엔드포인트슬라이스 `terminating` 및 `serving` 조건 필드를 활성화한다. - `EphemeralContainers`: 파드를 실행하기 위한 @@ -726,7 +726,7 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 [CrossNamespacePodAffinity](/ko/docs/concepts/policy/resource-quotas/#네임스페이스-간-파드-어피니티-쿼터) 쿼터 범위 기능을 활성화한다. - `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/) 기능을 활성화한다. -- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를 +- `PodPriority`: [우선 순위](/ko/docs/concepts/scheduling-eviction/pod-priority-preemption/)를 기반으로 파드의 스케줄링 취소와 선점을 활성화한다. - `PodReadinessGates`: 파드 준비성 평가를 확장하기 위해 `PodReadinessGate` 필드 설정을 활성화한다. 자세한 내용은 [파드의 준비성 게이트](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)를 @@ -859,12 +859,12 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능 - `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다. - `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서 애플리케이션을 실행할 수 있도록 지원한다. 자세한 내용은 - [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을 + [RunAsUserName 구성](/ko/docs/tasks/configure-pod-container/configure-runasusername/)을 참고한다. - `WindowsEndpointSliceProxying`: 활성화되면, 윈도우에서 실행되는 kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다. - [엔드포인트 슬라이스 활성화하기](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다. + [엔드포인트슬라이스 활성화하기](/ko/docs/concepts/services-networking/endpoint-slices/)를 참고한다. ## {{% heading "whatsnext" %}} diff --git a/content/ko/docs/reference/glossary/annotation.md b/content/ko/docs/reference/glossary/annotation.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/certificate.md b/content/ko/docs/reference/glossary/certificate.md index b5bc067015..7c40e48795 100644 --- a/content/ko/docs/reference/glossary/certificate.md +++ b/content/ko/docs/reference/glossary/certificate.md @@ -2,7 +2,7 @@ title: 인증서(Certificate) id: certificate date: 2018-04-12 -full_link: /docs/tasks/tls/managing-tls-in-a-cluster/ +full_link: /ko/docs/tasks/tls/managing-tls-in-a-cluster/ short_description: > 암호화된 안전한 파일로 쿠버네티스 클러스터 접근 검증에 사용한다. 
diff --git a/content/ko/docs/reference/glossary/cluster.md b/content/ko/docs/reference/glossary/cluster.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/configmap.md b/content/ko/docs/reference/glossary/configmap.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/container-env-variables.md b/content/ko/docs/reference/glossary/container-env-variables.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/container.md b/content/ko/docs/reference/glossary/container.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/cronjob.md b/content/ko/docs/reference/glossary/cronjob.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/customresourcedefinition.md b/content/ko/docs/reference/glossary/customresourcedefinition.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/daemonset.md b/content/ko/docs/reference/glossary/daemonset.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/deployment.md b/content/ko/docs/reference/glossary/deployment.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/docker.md b/content/ko/docs/reference/glossary/docker.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/extensions.md b/content/ko/docs/reference/glossary/extensions.md index caf7bfa226..547cd934bc 100644 --- a/content/ko/docs/reference/glossary/extensions.md +++ b/content/ko/docs/reference/glossary/extensions.md @@ -2,7 +2,7 @@ title: 익스텐션(Extensions) id: Extensions date: 2019-02-01 -full_link: /ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션 +full_link: /ko/docs/concepts/extend-kubernetes/#익스텐션 short_description: > 익스텐션은 새로운 타입의 하드웨어를 지원하기 위해 쿠버네티스를 확장하고 깊게 통합시키는 소프트웨어 컴포넌트이다. @@ -15,4 +15,4 @@ tags: -대부분의 클러스터 관리자는 호스트된 쿠버네티스 또는 쿠버네티스의 배포 인스턴스를 사용할 것이다. 그 결과, 대부분의 쿠버네티스 사용자는 [익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션)의 설치가 필요할 것이며, 일부 사용자만 직접 새로운 것을 만들 것이다. +대부분의 클러스터 관리자는 호스트된 쿠버네티스 또는 쿠버네티스의 배포 인스턴스를 사용할 것이다. 그 결과, 대부분의 쿠버네티스 사용자는 [익스텐션](/ko/docs/concepts/extend-kubernetes/#익스텐션)의 설치가 필요할 것이며, 일부 사용자만 직접 새로운 것을 만들 것이다. 
diff --git a/content/ko/docs/reference/glossary/image.md b/content/ko/docs/reference/glossary/image.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/index.md b/content/ko/docs/reference/glossary/index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/ingress.md b/content/ko/docs/reference/glossary/ingress.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/init-container.md b/content/ko/docs/reference/glossary/init-container.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/istio.md b/content/ko/docs/reference/glossary/istio.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/job.md b/content/ko/docs/reference/glossary/job.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/kube-proxy.md b/content/ko/docs/reference/glossary/kube-proxy.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/kube-scheduler.md b/content/ko/docs/reference/glossary/kube-scheduler.md index 38562f6087..33f79adf67 100644 --- a/content/ko/docs/reference/glossary/kube-scheduler.md +++ b/content/ko/docs/reference/glossary/kube-scheduler.md @@ -2,7 +2,7 @@ title: kube-scheduler id: kube-scheduler date: 2018-04-12 -full_link: /docs/reference/generated/kube-scheduler/ +full_link: /docs/reference/command-line-tools-reference/kube-scheduler/ short_description: > 노드가 배정되지 않은 새로 생성된 파드를 감지하고, 실행할 노드를 선택하는 컨트롤 플레인 컴포넌트. diff --git a/content/ko/docs/reference/glossary/kubectl.md b/content/ko/docs/reference/glossary/kubectl.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/kubernetes-api.md b/content/ko/docs/reference/glossary/kubernetes-api.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/label.md b/content/ko/docs/reference/glossary/label.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/limitrange.md b/content/ko/docs/reference/glossary/limitrange.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/minikube.md b/content/ko/docs/reference/glossary/minikube.md old mode 100755 new mode 100644 index f43966260e..8efe83c0cd --- a/content/ko/docs/reference/glossary/minikube.md +++ b/content/ko/docs/reference/glossary/minikube.md @@ -2,7 +2,7 @@ title: Minikube id: minikube date: 2018-04-12 -full_link: /ko/docs/setup/learning-environment/minikube/ +full_link: /ko/docs/tasks/tools/#minikube short_description: > 로컬에서 쿠버네티스를 실행하기 위한 도구. 
diff --git a/content/ko/docs/reference/glossary/mirror-pod.md b/content/ko/docs/reference/glossary/mirror-pod.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/name.md b/content/ko/docs/reference/glossary/name.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/namespace.md b/content/ko/docs/reference/glossary/namespace.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/network-policy.md b/content/ko/docs/reference/glossary/network-policy.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/node.md b/content/ko/docs/reference/glossary/node.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/pod-security-policy.md b/content/ko/docs/reference/glossary/pod-security-policy.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/pod.md b/content/ko/docs/reference/glossary/pod.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/qos-class.md b/content/ko/docs/reference/glossary/qos-class.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/rbac.md b/content/ko/docs/reference/glossary/rbac.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/replica-set.md b/content/ko/docs/reference/glossary/replica-set.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/resource-quota.md b/content/ko/docs/reference/glossary/resource-quota.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/selector.md b/content/ko/docs/reference/glossary/selector.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/service-account.md b/content/ko/docs/reference/glossary/service-account.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/service.md b/content/ko/docs/reference/glossary/service.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/statefulset.md b/content/ko/docs/reference/glossary/statefulset.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/static-pod.md b/content/ko/docs/reference/glossary/static-pod.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/uid.md b/content/ko/docs/reference/glossary/uid.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/glossary/volume.md b/content/ko/docs/reference/glossary/volume.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/kubectl/_index.md b/content/ko/docs/reference/kubectl/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/reference/kubectl/kubectl.md b/content/ko/docs/reference/kubectl/kubectl.md index ede8c85457..81e4d3fa74 100644 --- a/content/ko/docs/reference/kubectl/kubectl.md +++ b/content/ko/docs/reference/kubectl/kubectl.md @@ -9,7 +9,7 @@ weight: 30 kubectl은 쿠버네티스 클러스터 관리자를 제어한다. - 자세한 정보는 https://kubernetes.io/docs/reference/kubectl/overview/ 에서 확인한다. + 자세한 정보는 [kubectl 개요](/ko/docs/reference/kubectl/overview/)를 확인한다. 
``` kubectl [flags] diff --git a/content/ko/docs/reference/labels-annotations-taints.md b/content/ko/docs/reference/labels-annotations-taints.md new file mode 100644 index 0000000000..0854c1b5cf --- /dev/null +++ b/content/ko/docs/reference/labels-annotations-taints.md @@ -0,0 +1,325 @@ +--- +title: 잘 알려진 레이블, 어노테이션, 테인트(Taint) +content_type: concept +weight: 20 +--- + + + +쿠버네티스는 모든 레이블과 어노테이션을 `kubernetes.io` 네임스페이스 아래에 정의해 놓았다. + +이 문서는 각 값에 대한 레퍼런스를 제공하며, 값을 할당하기 위한 협력 포인트도 제공한다. + + + + + +## kubernetes.io/arch + +예시: `kubernetes.io/arch=amd64` + +적용 대상: 노드 + +Go에 의해 정의된 `runtime.GOARCH` 값을 kubelet이 읽어서 이 레이블의 값으로 채운다. arm 노드와 x86 노드를 혼합하여 사용하는 경우 유용할 수 있다. + +## kubernetes.io/os + +예시: `kubernetes.io/os=linux` + +적용 대상: 노드 + +Go에 의해 정의된 `runtime.GOOS` 값을 kubelet이 읽어서 이 레이블의 값으로 채운다. 클러스터에서 여러 운영체제를 혼합하여 사용(예: 리눅스 및 윈도우 노드)하는 경우 유용할 수 있다. + +## kubernetes.io/metadata.name + +예시: `kubernetes.io/metadata.name=mynamespace` + +적용 대상: 네임스페이스 + +`NamespaceDefaultLabelName` [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)가 +활성화되어 있으면, +쿠버네티스 API 서버가 모든 네임스페이스에 이 레이블을 적용한다. +레이블의 값은 네임스페이스의 이름으로 적용된다. + +레이블 {{< glossary_tooltip text="셀렉터" term_id="selector" >}}를 이용하여 특정 네임스페이스를 지정하고 싶다면 +이 레이블이 유용할 수 있다. + +## beta.kubernetes.io/arch (사용 중단됨) + +이 레이블은 사용 중단되었다. 대신 `kubernetes.io/arch` 을 사용한다. + +## beta.kubernetes.io/os (사용 중단됨) + +이 레이블은 사용 중단되었다. 대신 `kubernetes.io/os` 을 사용한다. + +## kubernetes.io/hostname {#kubernetesiohostname} + +예시: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` + +적용 대상: 노드 + +kubelet이 호스트네임을 읽어서 이 레이블의 값으로 채운다. `kubelet` 에 `--hostname-override` 플래그를 전달하여 실제 호스트네임과 다른 값으로 설정할 수도 있다. + +이 레이블은 토폴로지 계층의 일부로도 사용된다. [`topology.kubernetes.io/zone`](#topologykubernetesiozone)에서 세부 사항을 확인한다. + + +## controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost} + +예시: `controller.kubernetes.io/pod-deletion-cost=10` + +적용 대상: Pod + +이 어노테이션은 레플리카셋(ReplicaSet) 다운스케일 순서를 조정할 수 있는 요소인 [파드 삭제 비용](/ko/docs/concepts/workloads/controllers/replicaset/#파드-삭제-비용)을 +설정하기 위해 사용한다. 명시된 값은 `int32` 타입으로 파싱된다. + +## beta.kubernetes.io/instance-type (사용 중단됨) + +{{< note >}} v1.17부터, [`node.kubernetes.io/instance-type`](#nodekubernetesioinstance-type)으로 대체되었다. {{< /note >}} + +## node.kubernetes.io/instance-type {#nodekubernetesioinstance-type} + +예시: `node.kubernetes.io/instance-type=m3.medium` + +적용 대상: 노드 + +`클라우드 제공자`에 의해 정의된 인스턴스 타입의 값을 kubelet이 읽어서 이 레이블의 값으로 채운다. +`클라우드 제공자`를 사용하는 경우에만 이 레이블이 설정된다. +특정 워크로드를 특정 인스턴스 타입에 할당하고 싶다면 이 레이블이 유용할 수 있다. +하지만 일반적으로는 자원 기반 스케줄링을 수행하는 쿠버네티스 스케줄러를 이용하게 된다. 인스턴스 타입 보다는 특성을 기준으로 스케줄링을 고려해야 한다(예: `g2.2xlarge` 를 요구하기보다는, GPU가 필요하다고 요구한다). + +## failure-domain.beta.kubernetes.io/region (사용 중단됨) {#failure-domainbetakubernetesioregion} + +[`topology.kubernetes.io/region`](#topologykubernetesioregion)을 확인한다. + +{{< note >}} v1.17부터, [`topology.kubernetes.io/region`](#topologykubernetesioregion)으로 대체되었다. {{< /note >}} + +## failure-domain.beta.kubernetes.io/zone (사용 중단됨) {#failure-domainbetakubernetesiozone} + +[`topology.kubernetes.io/zone`](#topologykubernetesiozone)을 확인한다. + +{{< note >}} v1.17부터, [`topology.kubernetes.io/zone`](#topologykubernetesiozone)으로 대체되었다. {{< /note >}} + +## statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name} + +예시: + +`statefulset.kubernetes.io/pod-name=mystatefulset-7` + +스테이트풀셋(StatefulSet) 컨트롤러가 파드를 위한 스테이트풀셋을 생성하면, 컨트롤 플레인이 파드에 이 레이블을 설정한다. +생성되는 파드의 이름을 이 레이블의 값으로 설정한다. 
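For illustration, this label makes it possible to address a single StatefulSet pod through a Service; a sketch assuming the example pod name above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mystatefulset-7
spec:
  selector:
    # matches exactly one pod of the StatefulSet
    statefulset.kubernetes.io/pod-name: mystatefulset-7
  ports:
    - protocol: TCP
      port: 80
```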
+ +스테이트풀셋 문서의 [파드 이름 레이블](/ko/docs/concepts/workloads/controllers/statefulset/#파드-이름-레이블)에서 +상세 사항을 확인한다. + +## topology.kubernetes.io/region {#topologykubernetesioregion} + +예시: + +`topology.kubernetes.io/region=us-east-1` + +[`topology.kubernetes.io/zone`](#topologykubernetesiozone)을 확인한다. + +## topology.kubernetes.io/zone {#topologykubernetesiozone} + +예시: + +`topology.kubernetes.io/zone=us-east-1c` + +적용 대상: 노드, 퍼시스턴트볼륨(PersistentVolume) + +노드의 경우: `클라우드 제공자`가 제공하는 값을 이용하여 `kubelet` 또는 외부 `cloud-controller-manager`가 이 어노테이션의 값을 설정한다. `클라우드 제공자`를 사용하는 경우에만 이 레이블이 설정된다. 하지만, 토폴로지 내에서 의미가 있는 경우에만 이 레이블을 노드에 설정해야 한다. + +퍼시스턴트볼륨의 경우: 토폴로지 어웨어 볼륨 프로비저너가 자동으로 퍼시스턴트볼륨에 노드 어피니티 제약을 설정한다. + +영역(zone)은 논리적 고장 도메인을 나타낸다. 가용성 향상을 위해 일반적으로 쿠버네티스 클러스터는 여러 영역에 걸쳐 구성된다. 영역에 대한 정확한 정의는 사업자 별 인프라 구현에 따라 다르지만, 일반적으로 영역은 '영역 내 매우 낮은 네트워크 지연시간, 영역 내 네트워크 트래픽 비용 없음, 다른 영역의 고장에 독립적임' 등의 공통적인 특성을 갖는다. 예를 들어, 같은 영역 내의 노드는 하나의 네트워크 스위치를 공유하여 활용할 수 있으며, 반대로 다른 영역에 있는 노드는 하나의 네트워크 스위치를 공유해서는 안 된다. + +지역(region)은 하나 이상의 영역으로 구성된 더 큰 도메인을 나타낸다. 쿠버네티스 클러스터가 여러 지역에 걸쳐 있는 경우는 드물다. 영역이나 지역에 대한 정확한 정의는 사업자 별 인프라 구현에 따라 다르지만, 일반적으로 지역은 '지역 내 네트워크 지연시간보다 지역 간 네트워크 지연시간이 큼, 지역 간 네트워크 트래픽은 비용이 발생함, 다른 영역/지역의 고장에 독립적임' 등의 공통적인 특성을 갖는다. 예를 들어, 같은 지역 내의 노드는 전력 인프라(예: UPS 또는 발전기)를 공유하여 활용할 수 있으며, 반대로 다른 지역에 있는 노드는 일반적으로 전력 인프라를 공유하지 않는다. + +쿠버네티스는 영역과 지역의 구조에 대해 다음과 같이 가정한다. +1) 지역과 영역은 계층적이다. 영역은 지역의 엄격한 부분집합(strict subset)이며, 하나의 영역이 두 개의 지역에 속할 수는 없다. +2) 영역 이름은 모든 지역에 걸쳐서 유일하다. 예를 들어, "africa-east-1" 라는 지역은 "africa-east-1a" 와 "africa-east-1b" 라는 영역으로 구성될 수 있다. + +토폴로지 레이블이 변경되는 일은 없다고 가정할 수 있다. 일반적으로 레이블의 값은 변경될 수 있지만, 특정 노드가 삭제 후 재생성되지 않고서는 다른 영역으로 이동할 수 없기 때문이다. + +쿠버네티스는 이 정보를 다양한 방식으로 활용할 수 있다. 예를 들어, 단일 영역 클러스터에서는 스케줄러가 자동으로 레플리카셋의 파드를 여러 노드에 퍼뜨린다(노드 고장의 영향을 줄이기 위해 - [`kubernetes.io/hostname`](#kubernetesiohostname) 참고). 복수 영역 클러스터에서는, 여러 영역에 퍼뜨린다(영역 고장의 영향을 줄이기 위해). 이는 _SelectorSpreadPriority_ 를 통해 실현된다. + +_SelectorSpreadPriority_ 는 최선 노력(best effort) 배치 방법이다. 클러스터가 위치한 영역들의 특성이 서로 다르다면(예: 노드 숫자가 다름, 노드 타입이 다름, 파드 자원 요구사항이 다름), 파드 숫자를 영역별로 다르게 하여 배치할 수 있다. 필요하다면, 영역들의 특성(노드 숫자/타입)을 일치시켜 불균형 배치의 가능성을 줄일 수 있다. + +스케줄러도 (_VolumeZonePredicate_ 표시자를 이용하여) '파드가 요청하는 볼륨'이 위치하는 영역과 같은 영역에 파드를 배치한다. 여러 영역에서 볼륨에 접근할 수는 없다. + +`PersistentVolumeLabel`이 퍼시스턴트볼륨의 자동 레이블링을 지원하지 않는다면, 레이블을 수동으로 추가하거나 `PersistentVolumeLabel`이 동작하도록 변경할 수 있다. +`PersistentVolumeLabel`이 설정되어 있으면, 스케줄러는 파드가 다른 영역에 있는 볼륨에 마운트하는 것을 막는다. 만약 사용 중인 인프라에 이러한 제약이 없다면, 볼륨에 영역 레이블을 추가할 필요가 전혀 없다. + +## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} + +예시: `node.kubernetes.io/windows-build=10.0.17763` + +적용 대상: 노드 + +kubelet이 Microsoft 윈도우에서 실행되고 있다면, 사용 중인 Windows Server 버전을 기록하기 위해 kubelet이 노드에 이 레이블을 추가한다. + +이 레이블의 값은 "MajorVersion.MinorVersion.BuildNumber"의 형태를 갖는다. + +## service.kubernetes.io/headless {#servicekubernetesioheadless} + +예시: `service.kubernetes.io/headless=""` + +적용 대상: 서비스 + +서비스가 헤드리스(headless)이면, 컨트롤 플레인이 엔드포인트(Endpoints) 오브젝트에 이 레이블을 추가한다. + +## kubernetes.io/service-name {#kubernetesioservice-name} + +예시: `kubernetes.io/service-name="nginx"` + +적용 대상: 서비스 + +쿠버네티스가 여러 서비스를 구분하기 위해 이 레이블을 사용한다. 현재는 `ELB`(Elastic Load Balancer) 를 위해서만 사용되고 있다. + +## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} + +예시: `endpointslice.kubernetes.io/managed-by="controller"` + +적용 대상: 엔드포인트슬라이스(EndpointSlices) + +이 레이블은 엔드포인트슬라이스(EndpointSlice)를 어떤 컨트롤러나 엔티티가 관리하는지를 나타내기 위해 사용된다. 이 레이블을 사용함으로써 한 클러스터 내에서 여러 엔드포인트슬라이스 오브젝트가 각각 다른 컨트롤러나 엔티티에 의해 관리될 수 있다. 
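On clusters without a cloud provider, the topology labels described earlier in this file are not populated automatically and can be set by hand; a sketch with a hypothetical node name and region/zone values:

```shell
kubectl label node worker-1 \
  topology.kubernetes.io/region=us-east-1 \
  topology.kubernetes.io/zone=us-east-1c
```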
+ +## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} + +예시: `endpointslice.kubernetes.io/skip-mirror="true"` + +적용 대상: 엔드포인트(Endpoints) + +특정 자원에 이 레이블을 `"true"` 로 설정하여, EndpointSliceMirroring 컨트롤러가 엔드포인트슬라이스를 이용하여 해당 자원을 미러링하지 않도록 지시할 수 있다. + +## service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} + +예시: `service.kubernetes.io/service-proxy-name="foo-bar"` + +적용 대상: 서비스 + +kube-proxy 에는 커스텀 프록시를 위한 이와 같은 레이블이 있으며, 이 레이블은 서비스 컨트롤을 커스텀 프록시에 위임한다. + +## experimental.windows.kubernetes.io/isolation-type + +예시: `experimental.windows.kubernetes.io/isolation-type: "hyperv"` + +적용 대상: 파드 + +Hyper-V 격리(isolation)를 사용하여 윈도우 컨테이너를 실행하려면 이 어노테이션을 사용한다. Hyper-V 격리 기능을 활성화하고 Hyper-V 격리가 적용된 컨테이너를 생성하기 위해, kubelet은 기능 게이트 `HyperVContainer=true` 로 설정하여 실행되어야 하며, 파드에는 `experimental.windows.kubernetes.io/isolation-type=hyperv` 어노테이션이 설정되어 있어야 한다. + +{{< note >}} +이 어노테이션은 하나의 컨테이너로 구성된 파드에만 설정할 수 있다. +{{< /note >}} + +## ingressclass.kubernetes.io/is-default-class + +예시: `ingressclass.kubernetes.io/is-default-class: "true"` + +적용 대상: 인그레스클래스(IngressClass) + +하나의 인그레스클래스 리소스에 이 어노테이션이 `"true"`로 설정된 경우, 클래스가 명시되지 않은 새로운 인그레스(Ingress) 리소스는 해당 기본 클래스로 할당될 것이다. + +## kubernetes.io/ingress.class (사용 중단됨) + +{{< note >}} +v1.18부터, `spec.ingressClassName`으로 대체되었다. +{{< /note >}} + +## storageclass.kubernetes.io/is-default-class + +예시: `storageclass.kubernetes.io/is-default-class=true` + +적용 대상: 스토리지클래스(StorageClass) + +하나의 스토리지클래스(StorageClass) 리소스에 이 어노테이션이 `"true"`로 설정된 경우, +클래스가 명시되지 않은 새로운 퍼시스턴트볼륨클레임(PersistentVolumeClaim) 리소스는 해당 기본 클래스로 할당될 것이다. + +## alpha.kubernetes.io/provided-node-ip + +예시: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` + +적용 대상: 노드 + +kubelet이 노드에 할당된 IPv4 주소를 명시하기 위해 이 어노테이션을 사용할 수 있다. + +kubelet이 "외부" 클라우드 제공자에 의해 실행되었다면, 명령줄 플래그(`--node-ip`)를 통해 설정된 IP 주소를 명시하기 위해 kubelet이 이 어노테이션을 노드에 설정한다. cloud-controller-manager는 클라우드 제공자에게 이 IP 주소가 유효한지를 검증한다. + +## batch.kubernetes.io/job-completion-index + +예시: `batch.kubernetes.io/job-completion-index: "3"` + +적용 대상: 파드 + +kube-controller-manager의 잡(Job) 컨트롤러는 +`Indexed` [완료 모드](/ko/docs/concepts/workloads/controllers/job/#완료-모드)로 생성된 파드에 이 어노테이션을 추가한다. + +## kubectl.kubernetes.io/default-container + +예시: `kubectl.kubernetes.io/default-container: "front-end-app"` + +파드의 기본 컨테이너로 사용할 컨테이너 이름을 지정하는 어노테이션이다. 예를 들어, `kubectl logs` 또는 `kubectl exec` 명령을 사용할 때 `-c` 또는 `--container` 플래그를 지정하지 않으면, 이 어노테이션으로 명시된 기본 컨테이너를 대상으로 실행될 것이다. + +## endpoints.kubernetes.io/over-capacity + +예시: `endpoints.kubernetes.io/over-capacity:warning` + +적용 대상: 엔드포인트(Endpoints) + +v1.21 이상의 쿠버네티스 클러스터에서, 엔드포인트(Endpoints) 컨트롤러가 1000개 이상의 엔드포인트를 관리하고 있다면 각 엔드포인트 리소스에 이 어노테이션을 추가한다. 이 어노테이션은 엔드포인트 리소스가 용량 초과 되었음을 나타낸다. + +**이 이후로 나오는 테인트는 모두 '적용 대상: 노드' 이다.** + +## node.kubernetes.io/not-ready + +예시: `node.kubernetes.io/not-ready:NoExecute` + +노드 컨트롤러는 노드의 헬스를 모니터링하여 노드가 사용 가능한 상태인지를 감지하고 그에 따라 이 테인트를 추가하거나 제거한다. + +## node.kubernetes.io/unreachable + +예시: `node.kubernetes.io/unreachable:NoExecute` + +노드 컨트롤러는 [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)이 `Ready`에서 `Unknown`으로 변경된 노드에 이 테인트를 추가한다. + +## node.kubernetes.io/unschedulable + +예시: `node.kubernetes.io/unschedulable:NoSchedule` + +경쟁 상태(race condition) 발생을 막기 위해, 생성 중인 노드에 이 테인트가 추가된다. + +## node.kubernetes.io/memory-pressure + +예시: `node.kubernetes.io/memory-pressure:NoSchedule` + +kubelet은 노드의 `memory.available`와 `allocatableMemory.available`을 관측하여 메모리 압박을 감지한다. 
그 뒤, 관측한 값을 kubelet에 설정된 문턱값(threshold)과 비교하여 노드 컨디션과 테인트의 추가/삭제 여부를 결정한다. + +## node.kubernetes.io/disk-pressure + +예시: `node.kubernetes.io/disk-pressure:NoSchedule` + +kubelet은 노드의 `imagefs.available`, `imagefs.inodesFree`, `nodefs.available`, `nodefs.inodesFree`(리눅스에 대해서만)를 관측하여 디스크 압박을 감지한다. 그 뒤, 관측한 값을 kubelet에 설정된 문턱값(threshold)과 비교하여 노드 컨디션과 테인트의 추가/삭제 여부를 결정한다. + +## node.kubernetes.io/network-unavailable + +예시: `node.kubernetes.io/network-unavailable:NoSchedule` + +사용 중인 클라우드 공급자가 추가 네트워크 환경설정을 필요로 한다고 명시하면, kubelet이 이 테인트를 설정한다. 클라우드 상의 네트워크 경로가 올바르게 구성되어야, 클라우드 공급자가 이 테인트를 제거할 것이다. + +## node.kubernetes.io/pid-pressure + +예시: `node.kubernetes.io/pid-pressure:NoSchedule` + +kubelet은 '`/proc/sys/kernel/pid_max`의 크기의 D-값'과 노드에서 쿠버네티스가 사용 중인 PID를 확인하여, `pid.available` 지표라고 불리는 '사용 가능한 PID 수'를 가져온다. 그 뒤, 관측한 지표를 kubelet에 설정된 문턱값(threshold)과 비교하여 노드 컨디션과 테인트의 추가/삭제 여부를 결정한다. + +## node.cloudprovider.kubernetes.io/uninitialized + +예시: `node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` + +kubelet이 "외부" 클라우드 공급자에 의해 실행되었다면 노드가 '사용 불가능'한 상태라고 표시하기 위해 이 테인트가 추가되며, 추후 cloud-controller-manager가 이 노드를 초기화하고 이 테인트를 제거한다. + +## node.cloudprovider.kubernetes.io/shutdown + +예시: `node.cloudprovider.kubernetes.io/shutdown:NoSchedule` + +노드의 상태가 클라우드 공급자가 정의한 'shutdown' 상태이면, 이에 따라 노드에 `node.cloudprovider.kubernetes.io/shutdown` 테인트가 `NoSchedule` 값으로 설정된다. diff --git a/content/ko/docs/reference/tools/_index.md b/content/ko/docs/reference/tools/_index.md index a38158bf14..fb017d3df2 100644 --- a/content/ko/docs/reference/tools/_index.md +++ b/content/ko/docs/reference/tools/_index.md @@ -12,7 +12,7 @@ content_type: concept ## Kubectl -[`kubectl`](/ko/docs/tasks/tools/install-kubectl/)은 쿠버네티스를 위한 커맨드라인 툴이며, 쿠버네티스 클러스터 매니저을 제어한다. +[`kubectl`](/ko/docs/tasks/tools/#kubectl)은 쿠버네티스를 위한 커맨드라인 툴이며, 쿠버네티스 클러스터 매니저을 제어한다. ## Kubeadm diff --git a/content/ko/docs/reference/using-api/client-libraries.md b/content/ko/docs/reference/using-api/client-libraries.md index 639c10ac34..11d2e793bc 100644 --- a/content/ko/docs/reference/using-api/client-libraries.md +++ b/content/ko/docs/reference/using-api/client-libraries.md @@ -65,7 +65,6 @@ API 호출 또는 요청/응답 타입을 직접 구현할 필요는 없다. | PHP | [github.com/maclof/kubernetes-client](https://github.com/maclof/kubernetes-client) | | PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) | | PHP | [github.com/renoki-co/php-k8s](https://github.com/renoki-co/php-k8s) | -| Python | [github.com/eldarion-gondor/pykube](https://github.com/eldarion-gondor/pykube) | | Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md index b09963d0e2..d6e1ea5a21 100644 --- a/content/ko/docs/setup/_index.md +++ b/content/ko/docs/setup/_index.md @@ -1,17 +1,21 @@ --- -no_issue: true + + + + title: 시작하기 main_menu: true weight: 20 content_type: concept +no_list: true card: name: setup weight: 20 anchors: - anchor: "#학습-환경" title: 학습 환경 - - anchor: "#운영-환경" - title: 운영 환경 + - anchor: "#프로덕션-환경" + title: 프로덕션 환경 --- @@ -20,16 +24,40 @@ card: 쿠버네티스를 설치할 때는 유지보수의 용이성, 보안, 제어, 사용 가능한 리소스, 그리고 클러스터를 운영하고 관리하기 위해 필요한 전문성을 기반으로 설치 유형을 선택한다. -쿠버네티스 클러스터를 로컬 머신에, 클라우드에, 온-프레미스 데이터센터에 배포할 수 있고, 아니면 매니지드 쿠버네티스 클러스터를 선택할 수도 있다. 광범위한 클라우드 제공 업체 또는 베어 메탈 환경에 걸쳐 사용할 수 있는 맞춤형 솔루션도 있다. 
+[쿠버네티스를 다운로드](/releases/download/)하여 +로컬 머신에, 클라우드에, 데이터센터에 쿠버네티스 클러스터를 구축할 수 있다. + +쿠버네티스 클러스터를 직접 관리하고 싶지 않다면, [인증된 플랫폼](/ko/docs/setup/production-environment/turnkey-solutions/)과 +같은 매니지드 서비스를 선택할 수도 있다. +광범위한 클라우드 또는 베어 메탈 환경에 걸쳐 사용할 수 있는 +표준화된/맞춤형 솔루션도 있다. ## 학습 환경 -쿠버네티스를 배우고 있다면, 쿠버네티스 커뮤니티에서 지원하는 도구나, 로컬 머신에서 쿠버네티스를 설치하기 위한 생태계 내의 도구를 사용하자. +쿠버네티스를 배우고 있다면, 쿠버네티스 커뮤니티에서 지원하는 도구나, +로컬 머신에서 쿠버네티스를 설치하기 위한 생태계 내의 도구를 사용한다. +[도구 설치](/ko/docs/tasks/tools/)를 살펴본다. -## 운영 환경 +## 프로덕션 환경 -운영 환경을 위한 솔루션을 평가할 때에는, 쿠버네티스 클러스터 운영에 대한 어떤 측면(또는 _추상적인 개념_)을 스스로 관리하기를 원하는지, 제공자에게 넘기기를 원하는지 고려하자. +[프로덕션 환경](/ko/docs/setup/production-environment/)을 위한 +솔루션을 평가할 때에는, 쿠버네티스 클러스터(또는 _추상화된 객체_) +운영에 대한 어떤 측면을 스스로 관리하기를 원하는지, +또는 제공자에게 넘기기를 원하는지 고려한다. -[쿠버네티스 파트너](https://kubernetes.io/partners/#conformance)에는 [공인 쿠버네티스](https://github.com/cncf/k8s-conformance/#certified-kubernetes) 공급자 목록이 포함되어 있다. +클러스터를 직접 관리하는 경우, 공식적으로 지원되는 쿠버네티스 구축 도구는 +[kubeadm](/ko/docs/setup/production-environment/tools/kubeadm/)이다. + +## {{% heading "whatsnext" %}} + +- [쿠버네티스를 다운로드](/releases/download/)한다. +- `kubectl`을 포함한 [도구를 설치](/ko/docs/tasks/tools/)한다. +- 새로운 클러스터에 사용할 [컨테이너 런타임](/ko/docs/setup/production-environment/container-runtimes/)을 선택한다. +- 클러스터 구성의 [모범 사례](/ko/docs/setup/best-practices/)를 확인한다. + +쿠버네티스의 {{< glossary_tooltip term_id="control-plane" text="컨트롤 플레인" >}}은 +리눅스에서 실행되어야 한다. 클러스터 내에서는 리눅스 또는 +다른 운영 체제(예: 윈도우)에서 애플리케이션을 실행할 수 있다. +- [윈도우 노드를 포함하는 클러스터 구성하기](/ko/docs/setup/production-environment/windows/)를 살펴본다. diff --git a/content/ko/docs/setup/best-practices/certificates.md b/content/ko/docs/setup/best-practices/certificates.md index e567e90b22..e6640be52d 100644 --- a/content/ko/docs/setup/best-practices/certificates.md +++ b/content/ko/docs/setup/best-practices/certificates.md @@ -36,7 +36,7 @@ etcd 역시 클라이언트와 피어 간에 상호 TLS 인증을 구현한다. ## 인증서를 저장하는 위치 -만약 쿠버네티스를 kubeadm으로 설치했다면 인증서는 `/etc/kubernets/pki`에 저장된다. 이 문서에 언급된 모든 파일 경로는 그 디렉터리에 상대적이다. +만약 쿠버네티스를 kubeadm으로 설치했다면 인증서는 `/etc/kubernetes/pki`에 저장된다. 이 문서에 언급된 모든 파일 경로는 그 디렉터리에 상대적이다. ## 인증서 수동 설정 diff --git a/content/ko/docs/setup/best-practices/cluster-large.md b/content/ko/docs/setup/best-practices/cluster-large.md index d0293e72f6..899c63f6b7 100644 --- a/content/ko/docs/setup/best-practices/cluster-large.md +++ b/content/ko/docs/setup/best-practices/cluster-large.md @@ -6,13 +6,13 @@ weight: 20 클러스터는 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}에서 관리하는 쿠버네티스 에이전트를 실행하는 {{< glossary_tooltip text="노드" term_id="node" >}}(물리 또는 가상 머신)의 집합이다. -쿠버네티스 {{< param "version" >}}는 노드 5000개까지의 클러스터를 지원한다. 보다 정확하게는, +쿠버네티스 {{< param "version" >}}는 노드 5,000개까지의 클러스터를 지원한다. 보다 정확하게는, 쿠버네티스는 다음 기준을 *모두* 만족하는 설정을 수용하도록 설계되었다. -* 노드 당 파드 100 개 이하 -* 노드 5000개 이하 -* 전체 파드 150000개 이하 -* 전체 컨테이너 300000개 이하 +* 노드 당 파드 110 개 이하 +* 노드 5,000개 이하 +* 전체 파드 150,000개 이하 +* 전체 컨테이너 300,000개 이하 노드를 추가하거나 제거하여 클러스터를 확장할 수 있다. 이를 수행하는 방법은 클러스터 배포 방법에 따라 다르다. diff --git a/content/ko/docs/setup/production-environment/_index.md b/content/ko/docs/setup/production-environment/_index.md index 3471214564..1394c2f325 100644 --- a/content/ko/docs/setup/production-environment/_index.md +++ b/content/ko/docs/setup/production-environment/_index.md @@ -53,7 +53,7 @@ no_list: true 관리하여, 사용자 및 워크로드가 접근할 수 있는 자원에 대한 제한을 설정할 수 있다. 쿠버네티스 프로덕션 환경을 직접 구축하기 전에, 이 작업의 일부 또는 전체를 -[턴키 클라우드 솔루션](/docs/setup/production-environment/turnkey-solutions/) +[턴키 클라우드 솔루션](/ko/docs/setup/production-environment/turnkey-solutions/) 제공 업체 또는 기타 [쿠버네티스 파트너](/ko/partners/)에게 넘기는 것을 고려할 수 있다. 다음과 같은 옵션이 있다.
@@ -151,7 +151,7 @@ etcd는 클러스터 구성 데이터를 저장하므로 [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/)를 참조한다. 고가용성 컨트롤 플레인 예제는 -[고가용성 토폴로지를 위한 옵션](/docs/setup/production-environment/tools/kubeadm/ha-topology/), +[고가용성 토폴로지를 위한 옵션](/ko/docs/setup/production-environment/tools/kubeadm/ha-topology/), [kubeadm을 이용하여 고가용성 클러스터 생성하기](/docs/setup/production-environment/tools/kubeadm/high-availability/), [쿠버네티스를 위한 etcd 클러스터 운영하기](/docs/tasks/administer-cluster/configure-upgrade-etcd/)를 참조한다. etcd 백업 계획을 세우려면 @@ -274,8 +274,8 @@ DNS 서비스도 확장할 준비가 되어 있어야 한다. ## {{% heading "whatsnext" %}} - 프로덕션 쿠버네티스를 직접 구축할지, -아니면 [턴키 클라우드 솔루션](/docs/setup/production-environment/turnkey-solutions/) 또는 -[쿠버네티스 파트너](/partners/)가 제공하는 서비스를 이용할지 결정한다. +아니면 [턴키 클라우드 솔루션](/ko/docs/setup/production-environment/turnkey-solutions/) 또는 +[쿠버네티스 파트너](/ko/partners/)가 제공하는 서비스를 이용할지 결정한다. - 클러스터를 직접 구축한다면, [인증서](/ko/docs/setup/best-practices/certificates/)를 어떻게 관리할지, [etcd](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)와 diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index d978e7d59f..f7e4a50d99 100644 --- a/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/ko/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -9,7 +9,8 @@ weight: 40 {{< feature-state for_k8s_version="v1.12" state="stable" >}} -kubeadm의 `ClusterConfiguration` 오브젝트는 API 서버, 컨트롤러매니저, 스케줄러와 같은 컨트롤 플레인 구성요소에 전달되는 기본 플래그 `extraArgs` 필드를 노출한다. 이 구성요소는 다음 필드를 사용하도록 정의되어 있다. +kubeadm의 `ClusterConfiguration` 오브젝트는 API 서버, 컨트롤러매니저, 스케줄러와 같은 컨트롤 플레인 구성요소에 전달되는 +기본 플래그 `extraArgs` 필드를 노출한다. 이 구성요소는 다음 필드를 사용하도록 정의되어 있다. - `apiServer` - `controllerManager` @@ -19,7 +20,7 @@ kubeadm의 `ClusterConfiguration` 오브젝트는 API 서버, 컨트롤러매니 1. 사용자 구성에서 적절한 필드를 추가한다. 2. 필드에 대체할 플래그를 추가한다. -3. `kubeadm init`에 `--config ` 파라미터를 추가해서 실행한다. +3. `kubeadm init`에 `--config ` 파라미터를 추가해서 실행한다. 각 필드의 구성에서 자세한 정보를 보려면, [API 참고 문서](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#ClusterConfiguration)에서 확인해 볼 수 있다. @@ -34,9 +35,9 @@ kubeadm의 `ClusterConfiguration` 오브젝트는 API 서버, 컨트롤러매니 ## APIServer 플래그 -자세한 내용은 [kube-apiserver에 대한 참고 문서](/docs/reference/command-line-tools-reference/kube-apiserver/)를 확인한다. +자세한 내용은 [kube-apiserver 레퍼런스 문서](/docs/reference/command-line-tools-reference/kube-apiserver/)를 확인한다. -사용 예: +예시: ```yaml apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration @@ -51,9 +52,9 @@ apiServer: ## 컨트롤러매니저 플래그 -자세한 내용은 [kube-controller-manager에 대한 참고 문서](/docs/reference/command-line-tools-reference/kube-controller-manager/)를 확인한다. +자세한 내용은 [kube-controller-manager 레퍼런스 문서](/docs/reference/command-line-tools-reference/kube-controller-manager/)를 확인한다. -사용 예: +예시: ```yaml apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration @@ -67,9 +68,9 @@ controllerManager: ## 스케줄러 플래그 -자세한 내용은 [kube-scheduler에 대한 참고 문서](/docs/reference/command-line-tools-reference/kube-scheduler/)를 확인한다. +자세한 내용은 [kube-scheduler 레퍼런스 문서](/docs/reference/command-line-tools-reference/kube-scheduler/)를 확인한다. 
-사용 예: +예시: ```yaml apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index a7ce213fda..6f50124f8d 100644 --- a/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -169,7 +169,7 @@ kubeadm은 `kubelet` 또는 `kubectl` 을 설치하거나 관리하지 **않으 버전 차이에 대한 자세한 내용은 다음을 참고한다. -* 쿠버네티스 [버전 및 버전-차이 정책](/docs/setup/release/version-skew-policy/) +* 쿠버네티스 [버전 및 버전-차이 정책](/ko/releases/version-skew-policy/) * Kubeadm 관련 [버전 차이 정책](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy) {{< tabs name="k8s_install" >}} diff --git a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index eb48f3f65d..441f6202bd 100644 --- a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -84,7 +84,7 @@ weight: 65 단계별 지침을 제공한다. 이 가이드에는 클러스터 노드와 함께 사용자 애플리케이션을 업그레이드하기 위한 권장 업그레이드 절차가 포함된다. 윈도우 노드는 현재 리눅스 노드와 동일한 방식으로 쿠버네티스 -[버전-스큐(skew) 정책](/ko/docs/setup/release/version-skew-policy/)(노드 대 컨트롤 플레인 +[버전-차이(skew) 정책](/ko/releases/version-skew-policy/)(노드 대 컨트롤 플레인 버전 관리)을 준수한다. @@ -809,7 +809,7 @@ DNS, 라우트, 메트릭과 같은 많은 구성은 리눅스에서와 같이 / 1. [BitLocker](https://docs.microsoft.com/ko-kr/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server)를 사용한 볼륨-레벨 암호화를 사용한다. -[RunAsUsername](/ko/docs/tasks/configure-pod-container/configure-runasusername)은 +[RunAsUsername](/ko/docs/tasks/configure-pod-container/configure-runasusername/)은 컨테이너 프로세스를 노드 기본 사용자로 실행하기 위해 윈도우 파드 또는 컨테이너에 지정할 수 있다. 이것은 [RunAsUser](/ko/docs/concepts/policy/pod-security-policy/#사용자-및-그룹)와 거의 동일하다. diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md index 9875b8426e..5c3d52e475 100644 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -139,7 +139,7 @@ LogMonitor가 로그를 STDOUT으로 푸시할 수 있도록 필요한 엔트리 쿠버네티스 v1.16 부터, 윈도우 컨테이너는 이미지 기본 값과는 다른 username으로 엔트리포인트와 프로세스를 실행하도록 설정할 수 있다. 이 방식은 리눅스 컨테이너에서 지원되는 방식과는 조금 차이가 있다. -[여기](/docs/tasks/configure-pod-container/configure-runasusername/)에서 이에 대해 추가적으로 배울 수 있다. +[여기](/ko/docs/tasks/configure-pod-container/configure-runasusername/)에서 이에 대해 추가적으로 배울 수 있다. 
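참고로, 위 문단이 설명하는 `runAsUserName` 설정이 파드 매니페스트에서 어떻게 표현되는지 보여 주는 최소한의 스케치는 다음과 같다. 파드 이름과 컨테이너 이미지는 설명을 위해 가정한 값이며, username에 대한 정확한 제약 사항은 위에 링크된 문서를 따른다.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-demo          # 설명을 위해 가정한 이름
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"  # 파드 수준에서 적용되는 username
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019  # 설명을 위해 가정한 이미지
    command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
```

컨테이너 수준의 `securityContext.windowsOptions.runAsUserName` 필드를 함께 지정하면, 특정 컨테이너에 대해 파드 수준 설정을 재정의할 수도 있다.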
## 그룹 매니지드 서비스 어카운트를 이용하여 워크로드 신원 관리하기 diff --git a/content/ko/docs/tasks/access-application-cluster/_index.md b/content/ko/docs/tasks/access-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 477e310943..8d25bb7ca6 100644 --- a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -7,7 +7,6 @@ card: weight: 40 --- - 이 페이지에서는 구성 파일을 사용하여 다수의 클러스터에 접근할 수 있도록 @@ -21,20 +20,15 @@ card: 반드시 존재해야 한다는 것을 의미하는 것은 아니다. {{< /note >}} - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}이 설치되었는지 확인하려면, `kubectl version --client`을 실행한다. kubectl 버전은 클러스터의 API 서버 버전과 -[마이너 버전 하나 차이 이내](/ko/docs/setup/release/version-skew-policy/#kubectl)여야 +[마이너 버전 하나 차이 이내](/ko/releases/version-skew-policy/#kubectl)여야 한다. - - ## 클러스터, 사용자, 컨텍스트 정의 @@ -49,7 +43,7 @@ scratch 클러스터에 접근하려면 사용자네임과 패스워드로 인 `config-exercise`라는 디렉터리를 생성한다. `config-exercise` 디렉터리에 다음 내용을 가진 `config-demo`라는 파일을 생성한다. -```shell +```yaml apiVersion: v1 kind: Config preferences: {} @@ -114,7 +108,7 @@ kubectl config --kubeconfig=config-demo view 두 클러스터, 두 사용자, 세 컨텍스트들이 출력 결과로 나온다. -```shell +```yaml apiVersion: v1 clusters: - cluster: @@ -186,7 +180,7 @@ kubectl config --kubeconfig=config-demo view --minify `dev-frontend` 컨텍스트에 관련된 구성 정보가 출력 결과로 표시될 것이다. -```shell +```yaml apiVersion: v1 clusters: - cluster: @@ -238,7 +232,6 @@ kubectl config --kubeconfig=config-demo use-context dev-storage 현재 컨텍스트인 `dev-storage`에 관련된 설정을 보자. - ```shell kubectl config --kubeconfig=config-demo view --minify ``` @@ -247,7 +240,7 @@ kubectl config --kubeconfig=config-demo view --minify `config-exercise` 디렉터리에서 다음 내용으로 `config-demo-2`라는 파일을 생성한다. -```shell +```yaml apiVersion: v1 kind: Config preferences: {} @@ -269,13 +262,17 @@ contexts: 예: ### 리눅스 + ```shell -export KUBECONFIG_SAVED=$KUBECONFIG +export KUBECONFIG_SAVED=$KUBECONFIG ``` + ### 윈도우 PowerShell -```shell + +```powershell $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG ``` + `KUBECONFIG` 환경 변수는 구성 파일들의 경로의 리스트이다. 이 리스트는 리눅스와 Mac에서는 콜론으로 구분되며 윈도우에서는 세미콜론으로 구분된다. `KUBECONFIG` 환경 변수를 가지고 있다면, 리스트에 포함된 구성 파일들에 @@ -284,11 +281,14 @@ $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG 다음 예와 같이 임시로 `KUBECONFIG` 환경 변수에 두 개의 경로들을 덧붙여보자. ### 리눅스 + ```shell -export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2 +export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2 ``` + ### 윈도우 PowerShell -```shell + +```powershell $Env:KUBECONFIG=("config-demo;config-demo-2") ``` @@ -303,7 +303,7 @@ kubectl config view 컨텍스트와 `config-demo` 파일의 세 개의 컨텍스트들을 가지고 있다는 것에 주목하길 바란다. -```shell +```yaml contexts: - context: cluster: development @@ -347,12 +347,15 @@ kubeconfig 파일들을 어떻게 병합하는지에 대한 상세정보는 예: ### 리눅스 + ```shell export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config ``` + ### 윈도우 Powershell -```shell - $Env:KUBECONFIG="$Env:KUBECONFIG;$HOME\.kube\config" + +```powershell +$Env:KUBECONFIG="$Env:KUBECONFIG;$HOME\.kube\config" ``` 이제 `KUBECONFIG` 환경 변수에 리스트에 포함된 모든 파일들이 합쳐진 구성 정보를 보자. @@ -367,19 +370,18 @@ kubectl config view `KUBECONFIG` 환경 변수를 원래 값으로 되돌려 놓자. 예를 들면:
### 리눅스 + ```shell export KUBECONFIG=$KUBECONFIG_SAVED ``` ### 윈도우 PowerShell -```shell - $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED + +```powershell +$Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED ``` - - ## {{% heading "whatsnext" %}} - * [kubeconfig 파일을 사용하여 클러스터 접근 구성하기](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/) * [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) diff --git a/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md index 488ea59ff6..11afa655e8 100644 --- a/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -220,4 +220,4 @@ kubectl delete deployment frontend backend * [서비스](/ko/docs/concepts/services-networking/service/)에 대해 더 알아본다. * [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)에 대해 더 알아본다. -* [서비스와 파드용 DNS](/docs/concepts/services-networking/dns-pod-service/)에 대해 더 알아본다. +* [서비스와 파드용 DNS](/ko/docs/concepts/services-networking/dns-pod-service/)에 대해 더 알아본다. diff --git a/content/ko/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/ko/docs/tasks/access-application-cluster/list-all-running-container-images.md index 77f5f5d635..f777d192cd 100644 --- a/content/ko/docs/tasks/access-application-cluster/list-all-running-container-images.md +++ b/content/ko/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -22,7 +22,7 @@ weight: 100 ## 모든 네임스페이스의 모든 컨테이너 이미지 가져오기 - `kubectl get pods --all-namespaces` 를 사용하여 모든 네임스페이스의 모든 파드 정보를 가져온다. -- 컨테이너 이미지 이름만 출력하기 위해 `-o jsonpath={..image}` 를 사용한다. +- 컨테이너 이미지 이름만 출력하기 위해 `-o jsonpath={.items[*].spec.containers[*].image}` 를 사용한다. 이 명령어는 결과값으로 받은 json을 반복적으로 파싱하여, `image` 필드만을 출력한다. - jsonpath를 사용하는 방법에 대해 더 많은 정보를 얻고 싶다면 @@ -33,7 +33,7 @@ weight: 100 - `uniq` 를 사용하여 이미지 개수를 합산한다. ```shell -kubectl get pods --all-namespaces -o jsonpath="{..image}" |\ +kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\ tr -s '[[:space:]]' '\n' |\ sort |\ uniq -c @@ -80,7 +80,7 @@ sort 명령어 결과값은 `app=nginx` 레이블에 일치하는 파드만 출력한다. ```shell -kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx +kubectl get pods --all-namespaces -o=jsonpath="{.items[*].spec.containers[*].image}" -l app=nginx ``` ## 파드 네임스페이스로 필터링된 컨테이너 이미지 목록 보기 @@ -89,7 +89,7 @@ kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx 아래의 명령어 결과값은 `kube-system` 네임스페이스에 있는 파드만 출력한다. ```shell -kubectl get pods --namespace kube-system -o jsonpath="{..image}" +kubectl get pods --namespace kube-system -o jsonpath="{.items[*].spec.containers[*].image}" ``` ## jsonpath 대신 Go 템플릿을 사용하여 컨테이너 이미지 목록 보기 diff --git a/content/ko/docs/tasks/administer-cluster/_index.md b/content/ko/docs/tasks/administer-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tasks/administer-cluster/certificates.md b/content/ko/docs/tasks/administer-cluster/certificates.md index 8c8f6a148b..44159fb22e 100644 --- a/content/ko/docs/tasks/administer-cluster/certificates.md +++ b/content/ko/docs/tasks/administer-cluster/certificates.md @@ -246,5 +246,5 @@ done. ## 인증서 API `certificates.k8s.io` API를 사용해서 -[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 +[여기](/ko/docs/tasks/tls/managing-tls-in-a-cluster/)에 설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. 
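위 문단이 언급하는 `certificates.k8s.io` API 사용 방식을 보여 주는 최소한의 스케치는 다음과 같다. 오브젝트 이름은 설명을 위해 가정한 값이고, `request` 필드의 자리 표시자 대신 실제로는 base64로 인코딩된 PKCS#10 CSR 데이터가 들어가야 한다.

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace   # 설명을 위해 가정한 이름
spec:
  # 실제 값은 base64로 인코딩된 PKCS#10 CSR 문자열이다 (아래는 자리 표시자)
  request: LS0tLS1CRUdJTi...
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
```

생성된 CSR 오브젝트는 승인(approve) 과정을 거친 뒤, 서명된 인증서를 `status.certificate` 필드로 돌려받는다.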
diff --git a/content/ko/docs/tasks/administer-cluster/declare-network-policy.md b/content/ko/docs/tasks/administer-cluster/declare-network-policy.md index 2a476d520d..6d4193e63c 100644 --- a/content/ko/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/ko/docs/tasks/administer-cluster/declare-network-policy.md @@ -89,7 +89,7 @@ remote file exists {{< codenew file="service/networking/nginx-policy.yaml" >}} 네트워크폴리시 오브젝트의 이름은 유효한 -[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)이어야 한다. +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다. {{< note >}} 네트워크폴리시는 정책이 적용되는 파드의 그룹을 선택하는 `podSelector` 를 포함한다. 사용자는 이 정책이 `app=nginx` 레이블을 갖는 파드를 선택하는 것을 볼 수 있다. 레이블은 `nginx` 디플로이먼트에 있는 파드에 자동으로 추가된다. 빈 `podSelector` 는 네임스페이스의 모든 파드를 선택한다. diff --git a/content/ko/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/ko/docs/tasks/administer-cluster/dns-custom-nameservers.md index 9521bb1ec6..f681bc8778 100644 --- a/content/ko/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/ko/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -23,7 +23,7 @@ DNS 변환(DNS resolution) 절차를 사용자 정의하는 방법을 설명한 ## 소개 -DNS는 _애드온 관리자_ 인 [클러스터 애드온](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/README.md)을 +DNS는 _애드온 관리자_ 인 [클러스터 애드온](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/README.md)을 사용하여 자동으로 시작되는 쿠버네티스 내장 서비스이다. diff --git a/content/ko/docs/tasks/administer-cluster/highly-available-master.md b/content/ko/docs/tasks/administer-cluster/highly-available-control-plane.md similarity index 51% rename from content/ko/docs/tasks/administer-cluster/highly-available-master.md rename to content/ko/docs/tasks/administer-cluster/highly-available-control-plane.md index 76a734bd65..ae6f79d690 100644 --- a/content/ko/docs/tasks/administer-cluster/highly-available-master.md +++ b/content/ko/docs/tasks/administer-cluster/highly-available-control-plane.md @@ -1,6 +1,6 @@ --- reviewers: -title: 고가용성 쿠버네티스 클러스터 마스터 설정하기 +title: 고가용성 쿠버네티스 클러스터 컨트롤 플레인 설정하기 content_type: task --- @@ -8,8 +8,8 @@ content_type: task {{< feature-state for_k8s_version="v1.5" state="alpha" >}} -구글 컴퓨트 엔진(Google Compute Engine, 이하 GCE)의 `kube-up`이나 `kube-down` 스크립트에 쿠버네티스 마스터를 복제할 수 있다. -이 문서는 kube-up/down 스크립트를 사용하여 고가용(HA) 마스터를 관리하는 방법과 GCE와 함께 사용하기 위해 HA 마스터를 구현하는 방법에 관해 설명한다. +구글 컴퓨트 엔진(Google Compute Engine, 이하 GCE)의 `kube-up`이나 `kube-down` 스크립트에 쿠버네티스 컨트롤 플레인 노드를 복제할 수 있다. +이 문서는 kube-up/down 스크립트를 사용하여 고가용(HA) 컨트롤 플레인을 관리하는 방법과 GCE와 함께 사용하기 위해 HA 컨트롤 플레인을 구현하는 방법에 관해 설명한다. @@ -27,68 +27,69 @@ content_type: task 새 HA 호환 클러스터를 생성하려면, `kube-up` 스크립트에 다음 플래그를 설정해야 한다. -* `MULTIZONE=true` - 서버의 기본 존(zone)과 다른 존에서 마스터 복제본의 kubelet이 제거되지 않도록 한다. -다른 존에서 마스터 복제본을 실행하려는 경우에 권장하고 필요하다. +* `MULTIZONE=true` - 서버의 기본 영역(zone)과 다른 영역에서 컨트롤 플레인 kubelet이 제거되지 않도록 한다. +여러 영역에서 컨트롤 플레인 노드를 실행(권장됨)하려는 경우에 필요하다. * `ENABLE_ETCD_QUORUM_READ=true` - 모든 API 서버에서 읽은 내용이 최신 데이터를 반환하도록 하기 위한 것이다. true인 경우, Etcd의 리더 복제본에서 읽는다. 이 값을 true로 설정하는 것은 선택 사항이다. 읽기는 더 안정적이지만 느리게 된다. -선택적으로 첫 번째 마스터 복제본이 생성될 GCE 존을 지정할 수 있다. +선택적으로, 첫 번째 컨트롤 플레인 노드가 생성될 GCE 영역을 지정할 수 있다. 다음 플래그를 설정한다. -* `KUBE_GCE_ZONE=zone` - 첫 마스터 복제본이 실행될 존. +* `KUBE_GCE_ZONE=zone` - 첫 번째 컨트롤 플레인 노드가 실행될 영역. -다음 샘플 커맨드는 europe-west1-b GCE 존에 HA 호환 클러스터를 구성한다. +다음 샘플 커맨드는 europe-west1-b GCE 영역에 HA 호환 클러스터를 구성한다. ```shell MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh ``` -위에 커맨드는 하나의 마스터로 클러스터를 생성한다. 
-그러나 후속 커맨드로 새 마스터 복제본을 추가할 수 있다. +위의 커맨드는 하나의 컨트롤 플레인 노드를 포함하는 클러스터를 생성한다. +그러나 후속 커맨드로 새 컨트롤 플레인 노드를 추가할 수 있다. -## 새 마스터 복제본 추가 +## 새 컨트롤 플레인 노드 추가 -HA 호환 클러스터를 생성한 다음 그것의 마스터 복제본을 추가할 수 있다. -`kube-up` 스크립트에 다음 플래그를 사용하여 마스터 복제본을 추가한다. +HA 호환 클러스터를 생성했다면, 여기에 컨트롤 플레인 노드를 추가할 수 있다. +`kube-up` 스크립트에 다음 플래그를 사용하여 컨트롤 플레인 노드를 추가한다. -* `KUBE_REPLICATE_EXISTING_MASTER=true` - 기존 마스터의 복제본을 +* `KUBE_REPLICATE_EXISTING_MASTER=true` - 기존 컨트롤 플레인 노드의 복제본을 만든다. -* `KUBE_GCE_ZONE=zone` - 마스터 복제본이 실행될 존. -반드시 다른 복제본 존과 동일한 존에 있어야 한다. +* `KUBE_GCE_ZONE=zone` - 컨트롤 플레인 노드가 실행될 영역. +반드시 다른 컨트롤 플레인 노드가 존재하는 영역과 동일한 지역(region)에 있어야 한다. HA 호환 클러스터를 시작할 때, 상속되는 `MULTIZONE`이나 `ENABLE_ETCD_QUORUM_READS` 플래그를 따로 설정할 필요는 없다. -다음 샘플 커맨드는 기존 HA 호환 클러스터에서 마스터를 복제한다. +다음 샘플 커맨드는 기존 HA 호환 클러스터에서 +컨트롤 플레인 노드를 복제한다. ```shell KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh ``` -## 마스터 복제본 제거 +## 컨트롤 플레인 노드 제거 -다음 플래그가 있는 `kube-down` 스크립트를 사용하여 HA 클러스터에서 마스터 복제본을 제거할 수 있다. +다음 플래그가 있는 `kube-down` 스크립트를 사용하여 HA 클러스터에서 컨트롤 플레인 노드를 제거할 수 있다. * `KUBE_DELETE_NODES=false` - kubelet을 삭제하지 않기 위한 것이다. -* `KUBE_GCE_ZONE=zone` - 마스터 복제본이 제거될 존. +* `KUBE_GCE_ZONE=zone` - 컨트롤 플레인 노드가 제거될 영역. -* `KUBE_REPLICA_NAME=replica_name` - (선택) 제거할 마스터 복제본의 이름. -비어있는 경우, 해당 존의 모든 복제본이 제거된다. +* `KUBE_REPLICA_NAME=replica_name` - (선택) 제거할 컨트롤 플레인 노드의 이름. +명시하지 않으면, 해당 영역의 모든 복제본이 제거된다. -다음 샘플 커맨드는 기존 HA 클러스터에서 마스터 복제본을 제거한다. +다음 샘플 커맨드는 기존 HA 클러스터에서 컨트롤 플레인 노드를 제거한다. ```shell KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh ``` -## 마스터 복제 실패 처리 +## 동작에 실패한 컨트롤 플레인 노드 처리 -HA 클러스터의 마스터 복제본 중 하나가 실패하면, -클러스터에서 복제본을 제거하고 동일한 존에서 새 복제본을 추가하는 것이 가장 좋다. +HA 클러스터의 컨트롤 플레인 노드 중 하나가 동작에 실패하면, +클러스터에서 해당 노드를 제거하고 동일한 영역에 새 컨트롤 플레인 노드를 추가하는 것이 가장 좋다. 다음 샘플 커맨드로 이 과정을 시연한다. 1. 손상된 복제본을 제거한다. @@ -97,25 +98,29 @@ HA 클러스터의 마스터 복제본 중 하나가 실패하면, KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh ``` -1. 기존 복제본 대신 새 복제본을 추가한다. +1. 기존 복제본 대신 새 노드를 추가한다. ```shell KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh ``` -## HA 클러스터에서 마스터 복제에 관한 모범 사례 +## HA 클러스터에서 컨트롤 플레인 노드 복제에 관한 모범 사례 -* 다른 존에 마스터 복제본을 배치하도록 한다. 한 존이 실패하는 동안, 해당 존에 있는 마스터도 모두 실패할 것이다. -존 장애를 극복하기 위해 노드를 여러 존에 배치한다 -(더 자세한 내용은 [멀티 존](/ko/docs/setup/best-practices/multiple-zones/)를 참조한다). +* 다른 영역에 컨트롤 플레인 노드를 배치하도록 한다. 한 영역이 동작에 실패하는 동안, +해당 영역에 있는 컨트롤 플레인 노드도 모두 동작에 실패할 것이다. +영역 장애를 극복하기 위해 노드를 여러 영역에 배치한다 +(더 자세한 내용은 [멀티 영역](/ko/docs/setup/best-practices/multiple-zones/)를 참조한다). -* 두 개의 마스터 복제본은 사용하지 않는다. 두 개의 복제 클러스터에 대한 합의는 지속적 상태를 변경해야 할 때 두 복제본 모두 실행해야 한다. -결과적으로 두 복제본 모두 필요하고, 어떤 복제본의 장애에도 클러스터가 대부분 장애 상태로 변한다. -따라서 두 개의 복제본 클러스터는 HA 관점에서 단일 복제 클러스터보다 열등하다. +* 두 개의 노드로 구성된 컨트롤 플레인은 사용하지 않는다. 두 개의 노드로 구성된 +컨트롤 플레인에서의 합의를 위해서는 지속적 상태(persistent state) 변경 시 두 컨트롤 플레인 노드가 모두 정상적으로 동작 중이어야 한다. +결과적으로 두 컨트롤 플레인 노드 모두 필요하고, 둘 중 한 컨트롤 플레인 노드에만 장애가 발생해도 +클러스터의 심각한 장애 상태를 초래한다. +따라서 HA 관점에서는 두 개의 노드로 구성된 컨트롤 플레인은 +단일 노드로 구성된 컨트롤 플레인보다도 못하다. -* 마스터 복제본을 추가하면, 클러스터의 상태(Etcd)도 새 인스턴스로 복사된다. +* 컨트롤 플레인 노드를 추가하면, 클러스터의 상태(Etcd)도 새 인스턴스로 복사된다. 클러스터가 크면, 이 상태를 복제하는 시간이 오래 걸릴 수 있다. -이 작업은 [여기](https://coreos.com/etcd/docs/latest/admin_guide.html#member-migration) 기술한 대로 +이 작업은 [etcd 관리 가이드](https://etcd.io/docs/v2.3/admin_guide/#member-migration)에 기술한 대로 Etcd 데이터 디렉터리를 마이그레이션하여 속도를 높일 수 있다(향후에 Etcd 데이터 디렉터리 마이그레이션 지원 추가를 고려 중이다). @@ -128,7 +133,7 @@ Etcd 데이터 디렉터리를 마이그레이션하여 속도를 높일 수 있 ### 개요 -각 마스터 복제본은 다음 모드에서 다음 구성 요소를 실행한다. +각 컨트롤 플레인 노드는 다음 모드에서 다음 구성 요소를 실행한다. 
* Etcd 인스턴스: 모든 인스턴스는 합의를 사용하여 함께 클러스터화 한다. @@ -142,8 +147,8 @@ Etcd 데이터 디렉터리를 마이그레이션하여 속도를 높일 수 있 ### 로드 밸런싱 -두 번째 마스터 복제본을 시작할 때, 두 개의 복제본을 포함된 로드 밸런서가 생성될 것이고, 첫 번째 복제본의 IP 주소가 로드 밸런서의 IP 주소로 승격된다. -비슷하게 끝에서 두 번째의 마스터 복제본을 제거한 후에는 로드 밸런서가 제거되고 +두 번째 컨트롤 플레인 노드를 배치할 때, 두 개의 복제본에 대한 로드 밸런서가 생성될 것이고, 첫 번째 복제본의 IP 주소가 로드 밸런서의 IP 주소로 승격된다. +비슷하게 끝에서 두 번째의 컨트롤 플레인 노드를 제거한 후에는 로드 밸런서가 제거되고 해당 IP 주소는 마지막으로 남은 복제본에 할당된다. 로드 밸런서 생성 및 제거는 복잡한 작업이며, 이를 전파하는 데 시간(~20분)이 걸릴 수 있다. @@ -152,17 +157,17 @@ Etcd 데이터 디렉터리를 마이그레이션하여 속도를 높일 수 있 쿠버네티스 서비스에서 최신의 쿠버네티스 API 서버 목록을 유지하는 대신, 시스템은 모든 트래픽을 외부 IP 주소로 보낸다. -* 단일 마스터 클러스터에서 IP 주소는 단일 마스터를 가리킨다. +* 단일 노드 컨트롤 플레인의 경우, IP 주소는 단일 컨트롤 플레인 노드를 가리킨다. -* 다중 마스터 클러스터에서 IP 주소는 마스터 앞에 로드밸런서를 가리킨다. +* 고가용성 컨트롤 플레인의 경우, IP 주소는 마스터 앞의 로드밸런서를 가리킨다. -마찬가지로 Kubelet은 외부 IP 주소를 사용하여 마스터와 통신한다. +마찬가지로 Kubelet은 외부 IP 주소를 사용하여 컨트롤 플레인과 통신한다. -### 마스터 인증서 +### 컨트롤 플레인 노드 인증서 -쿠버네티스는 각 복제본의 외부 퍼블릭 IP 주소와 내부 IP 주소를 대상으로 마스터 TLS 인증서를 발급한다. -복제본의 임시 공개 IP 주소에 대한 인증서는 없다. -임시 퍼블릭 IP 주소를 통해 복제본에 접근하려면, TLS 검증을 건너뛰어야 한다. +쿠버네티스는 각 컨트롤 플레인 노드의 외부 퍼블릭 IP 주소와 내부 IP 주소를 대상으로 TLS 인증서를 발급한다. +컨트롤 플레인 노드의 임시 퍼블릭 IP 주소에 대한 인증서는 없다. +임시 퍼블릭 IP 주소를 통해 컨트롤 플레인 노드에 접근하려면, TLS 검증을 건너뛰어야 한다. ### etcd 클러스터화 @@ -171,7 +176,7 @@ etcd를 클러스터로 구축하려면, etcd 인스턴스간 통신에 필요 ### API 서버 신원 -{{< feature-state state="alpha" for_k8s_version="v1.20" >}} +{{< feature-state state="alpha" for_k8s_version="v1.20" >}} API 서버 식별 기능은 [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)에 diff --git a/content/ko/docs/tasks/administer-cluster/kubeadm/_index.md b/content/ko/docs/tasks/administer-cluster/kubeadm/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md index 16c84d451c..e67a08a74e 100644 --- a/content/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md +++ b/content/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md @@ -183,7 +183,7 @@ curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/lates ```powershell # 예 -.\Install-Containerd.ps1 -ContainerDVersion v1.4.1 +.\Install-Containerd.ps1 -ContainerDVersion 1.4.1 ``` {{< /note >}} diff --git a/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 40e05d0f5c..6287069ba0 100644 --- a/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -85,7 +85,11 @@ front-proxy-ca Dec 28, 2029 23:36 UTC 9y no {{< /warning >}} {{< note >}} -kubeadm은 자동 인증서 갱신을 위해 kubelet을 구성하기 때문에 `kubelet.conf` 는 위 목록에 포함되어 있지 않다. +`kubelet.conf` 는 위 목록에 포함되어 있지 않은데, 이는 +kubeadm이 [자동 인증서 갱신](/ko/docs/tasks/tls/certificate-rotation/)을 위해 +`/var/lib/kubelet/pki`에 있는 갱신 가능한 인증서를 이용하여 kubelet을 구성하기 때문이다. +만료된 kubelet 클라이언트 인증서를 갱신하려면 +[kubelet 클라이언트 갱신 실패](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#kubelet-client-cert) 섹션을 확인한다. 
{{< /note >}} {{< warning >}} diff --git a/content/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index dc4acf8411..3461c57061 100644 --- a/content/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -28,7 +28,7 @@ weight: 60 * 사용자는 노드가 단 하나만 있는 쿠버네티스 클러스터가 필요하고, {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 커맨드라인 툴이 사용자의 클러스터와 통신할 수 있도록 설정되어 있어야 한다. 만약 사용자가 -아직 단일 노드 클러스터를 가지고 있지 않다면, [Minikube](/ko/docs/setup/learning-environment/minikube/)를 +아직 단일 노드 클러스터를 가지고 있지 않다면, [Minikube](/ko/docs/tasks/tools/#minikube)를 사용하여 클러스터 하나를 생성할 수 있다. * [퍼시스턴트 볼륨](https://minikube.sigs.k8s.io/docs/)의 diff --git a/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md index 5a8295aff2..2188ced539 100644 --- a/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/ko/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -55,7 +55,7 @@ cat ~/.docker/config.json ## 기존의 도커 자격 증명을 기반으로 시크릿 생성하기 {#registry-secret-existing-credentials} 쿠버네티스 클러스터는 프라이빗 이미지를 받아올 때, 컨테이너 레지스트리에 인증하기 위하여 -`docker-registry` 타입의 시크릿을 사용한다. +`kubernetes.io/dockerconfigjson` 타입의 시크릿을 사용한다. 만약 이미 `docker login` 을 수행하였다면, 이 때 생성된 자격 증명을 쿠버네티스 클러스터로 복사할 수 있다. diff --git a/content/ko/docs/tasks/configure-pod-container/quality-service-pod.md b/content/ko/docs/tasks/configure-pod-container/quality-service-pod.md index 57e0a94b40..c644a8aff1 100644 --- a/content/ko/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/ko/docs/tasks/configure-pod-container/quality-service-pod.md @@ -45,10 +45,14 @@ kubectl create namespace qos-example 파드에 Guaranteed QoS 클래스 할당을 위한 전제 조건은 다음과 같다. -* 파드의 초기화 컨테이너를 포함한 모든 컨테이너는 메모리 상한과 메모리 요청량을 가지고 있어야 하며, 이는 동일해야 한다. -* 파드의 초기화 컨테이너를 포함한 모든 컨테이너는 CPU 상한과 CPU 요청량을 가지고 있어야 하며, 이는 동일해야 한다. +* 파드 내 모든 컨테이너는 메모리 상한과 메모리 요청량을 가지고 있어야 한다. +* 파드 내 모든 컨테이너의 메모리 상한이 메모리 요청량과 일치해야 한다. +* 파드 내 모든 컨테이너는 CPU 상한과 CPU 요청량을 가지고 있어야 한다. +* 파드 내 모든 컨테이너의 CPU 상한이 CPU 요청량과 일치해야 한다. -이것은 하나의 컨테이너를 갖는 파드의 구성 파일이다. 해당 컨테이너는 메모리 상한과 +이러한 제약은 초기화 컨테이너와 앱 컨테이너 모두에 동일하게 적용된다. + +다음은 하나의 컨테이너를 갖는 파드의 구성 파일이다. 해당 컨테이너는 메모리 상한과 메모리 요청량을 갖고 있고, 200MiB로 동일하다. 해당 컨테이너는 CPU 상한과 CPU 요청량을 가지며, 700 milliCPU로 동일하다. {{< codenew file="pods/qos/qos-pod.yaml" >}} diff --git a/content/ko/docs/tasks/debug-application-cluster/_index.md b/content/ko/docs/tasks/debug-application-cluster/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md b/content/ko/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md index 3c8df08ede..4dde485c13 100644 --- a/content/ko/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md +++ b/content/ko/docs/tasks/debug-application-cluster/determine-reason-pod-failure.md @@ -41,7 +41,7 @@ content_type: task kubectl apply -f https://k8s.io/examples/debug/termination.yaml - YAML 파일에 있는 `cmd` 와 `args` 필드에서 컨테이너가 10초 간 잠든 뒤에 + YAML 파일에 있는 `command` 와 `args` 필드에서 컨테이너가 10초 간 잠든 뒤에 "Sleep expired" 문자열을 `/dev/termination-log` 파일에 기록하는 것을 확인할 수 있다. 컨테이너는 "Sleep expired" 메시지를 기록한 후에 종료된다. 
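위에서 설명한 동작을 재현하는 최소한의 파드 스케치는 다음과 같다. 파드 이름과 이미지는 설명을 위해 가정한 값이다.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo     # 설명을 위해 가정한 이름
spec:
  containers:
  - name: termination-demo-container
    image: debian            # 설명을 위해 가정한 이미지
    command: ["/bin/sh"]
    # 10초 간 잠든 뒤, 종료 메시지를 기본 경로인 /dev/termination-log에 기록한다
    args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]
```

컨테이너 스펙의 `terminationMessagePath` 필드를 지정하면, 기본 경로 대신 다른 파일에 종료 메시지를 기록하도록 바꿀 수 있다.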
diff --git a/content/ko/docs/tasks/manage-daemon/update-daemon-set.md b/content/ko/docs/tasks/manage-daemon/update-daemon-set.md index ec29259de7..50a3a6ad2b 100644 --- a/content/ko/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/ko/docs/tasks/manage-daemon/update-daemon-set.md @@ -1,41 +1,42 @@ --- + + title: 데몬셋(DaemonSet)에서 롤링 업데이트 수행 content_type: task weight: 10 --- - - - 이 페이지는 데몬셋에서 롤링 업데이트를 수행하는 방법을 보여준다. ## {{% heading "prerequisites" %}} -* 데몬셋 롤링 업데이트 기능은 쿠버네티스 버전 1.6 이상에서만 지원된다. - ## 데몬셋 업데이트 전략 데몬셋에는 두 가지 업데이트 전략 유형이 있다. -* OnDelete: `OnDelete` 업데이트 전략을 사용하여, 데몬셋 템플릿을 업데이트한 후, +* `OnDelete`: `OnDelete` 업데이트 전략을 사용하여, 데몬셋 템플릿을 업데이트한 후, 이전 데몬셋 파드를 수동으로 삭제할 때 *만* 새 데몬셋 파드가 생성된다. 이것은 쿠버네티스 버전 1.5 이하에서의 데몬셋의 동작과 동일하다. -* RollingUpdate: 기본 업데이트 전략이다. +* `RollingUpdate`: 기본 업데이트 전략이다. `RollingUpdate` 업데이트 전략을 사용하여, 데몬셋 템플릿을 업데이트한 후, 오래된 데몬셋 파드가 종료되고, 새로운 데몬셋 파드는 - 제어 방식으로 자동 생성된다. 전체 업데이트 프로세스 동안 데몬셋의 최대 하나의 파드가 각 노드에서 실행된다. + 제어 방식으로 자동 생성된다. 전체 업데이트 프로세스 동안 + 데몬셋의 최대 하나의 파드가 각 노드에서 실행된다. ## 롤링 업데이트 수행 데몬셋의 롤링 업데이트 기능을 사용하려면, `.spec.updateStrategy.type` 에 `RollingUpdate` 를 설정해야 한다. -[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/ko/docs/concepts/workloads/controllers/deployment/#최대-불가max-unavailable)(기본값은 1)과 -[`.spec.minReadySeconds`](/ko/docs/concepts/workloads/controllers/deployment/#최소-대기-시간초)(기본값은 0)으로 설정할 수도 있다. +[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/ko/docs/concepts/workloads/controllers/deployment/#최대-불가max-unavailable) +(기본값은 1)과 +[`.spec.minReadySeconds`](/ko/docs/concepts/workloads/controllers/deployment/#최소-대기-시간초) +(기본값은 0)으로 +설정할 수도 있다. ### `RollingUpdate` 업데이트 전략으로 데몬셋 생성 @@ -142,7 +143,7 @@ daemonset "fluentd-elasticsearch" successfully rolled out #### 일부 노드에 리소스가 부족하다 적어도 하나의 노드에서 새 데몬셋 파드를 스케줄링할 수 없어서 롤아웃이 -중단되었다. 노드에 [리소스가 부족](/docs/tasks/administer-cluster/out-of-resource/)할 때 +중단되었다. 노드에 [리소스가 부족](/docs/concepts/scheduling-eviction/node-pressure-eviction/)할 때 발생할 수 있다. 이 경우, `kubectl get nodes` 의 출력 결과와 다음의 출력 결과를 비교하여 @@ -184,12 +185,7 @@ kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system kubectl delete ds fluentd-elasticsearch -n kube-system ``` - - - ## {{% heading "whatsnext" %}} - -* [태스크: 데몬셋에서 롤백 - 수행](/ko/docs/tasks/manage-daemon/rollback-daemon-set/)을 참고한다. -* [개념: 기존 데몬셋 파드를 채택하기 위한 데몬셋 생성](/ko/docs/concepts/workloads/controllers/daemonset/)을 참고한다. +* [데몬셋에서 롤백 수행](/ko/docs/tasks/manage-daemon/rollback-daemon-set/)을 참고한다. +* [기존 데몬셋 파드를 채택하기 위한 데몬셋 생성](/ko/docs/concepts/workloads/controllers/daemonset/)을 참고한다. 
diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md index ab442ebafd..9484882f30 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -180,7 +180,7 @@ spec: containers: - name: app image: my-app - volumeMount: + volumeMounts: - name: config mountPath: /config volumes: @@ -234,7 +234,7 @@ spec: containers: - image: my-app name: app - volumeMount: + volumeMounts: - mountPath: /config name: config volumes: @@ -327,7 +327,7 @@ spec: containers: - name: app image: my-app - volumeMount: + volumeMounts: - name: password mountPath: /secrets volumes: diff --git a/content/ko/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/ko/docs/tasks/network/customize-hosts-file-for-pods.md similarity index 98% rename from content/ko/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md rename to content/ko/docs/tasks/network/customize-hosts-file-for-pods.md index be39f13f21..1901f16726 100644 --- a/content/ko/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/content/ko/docs/tasks/network/customize-hosts-file-for-pods.md @@ -1,6 +1,6 @@ --- title: HostAliases로 파드의 /etc/hosts 항목 추가하기 -content_type: concept +content_type: task weight: 60 min-kubernetes-server-version: 1.7 --- @@ -13,7 +13,7 @@ min-kubernetes-server-version: 1.7 HostAliases를 사용하지 않은 수정은 권장하지 않는데, 이는 호스트 파일이 kubelet에 의해 관리되고, 파드 생성/재시작 중에 덮어쓰여질 수 있기 때문이다. - + ## 기본 호스트 파일 내용 diff --git a/content/ko/docs/tasks/tools/_index.md b/content/ko/docs/tasks/tools/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md b/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md deleted file mode 100644 index f3deae981c..0000000000 --- a/content/ko/docs/tasks/tools/included/install-kubectl-gcloud.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: "gcloud kubectl install" -description: "gcloud를 이용하여 kubectl을 설치하는 방법을 각 OS별 탭에 포함하기 위한 스니펫." -headless: true ---- - -Google Cloud SDK를 사용하여 kubectl을 설치할 수 있다. - -1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다. - -1. `kubectl` 설치 명령을 실행한다. - - ```shell - gcloud components install kubectl - ``` - -1. 설치한 버전이 최신 버전인지 확인한다. 
- - ```shell - kubectl version --client - ``` \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-kubectl-linux.md b/content/ko/docs/tasks/tools/install-kubectl-linux.md index 39c442c939..0ad5b7fc20 100644 --- a/content/ko/docs/tasks/tools/install-kubectl-linux.md +++ b/content/ko/docs/tasks/tools/install-kubectl-linux.md @@ -22,7 +22,6 @@ card: - [리눅스에 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-linux) - [기본 패키지 관리 도구를 사용하여 설치](#install-using-native-package-management) - [다른 패키지 관리 도구를 사용하여 설치](#install-using-other-package-management) -- [리눅스에 Google Cloud SDK를 사용하여 설치](#install-on-linux-as-part-of-the-google-cloud-sdk) ### 리눅스에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-linux} @@ -168,10 +167,6 @@ kubectl version --client {{< /tabs >}} -### 리눅스에 Google Cloud SDK를 사용하여 설치 {#install-on-linux-as-part-of-the-google-cloud-sdk} - -{{< include "included/install-kubectl-gcloud.md" >}} - ## kubectl 구성 확인 {{< include "included/verify-kubectl.md" >}} diff --git a/content/ko/docs/tasks/tools/install-kubectl-macos.md b/content/ko/docs/tasks/tools/install-kubectl-macos.md index 614134da8a..91e42f553b 100644 --- a/content/ko/docs/tasks/tools/install-kubectl-macos.md +++ b/content/ko/docs/tasks/tools/install-kubectl-macos.md @@ -22,7 +22,6 @@ card: - [macOS에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-macos) - [macOS에서 Homebrew를 사용하여 설치](#install-with-homebrew-on-macos) - [macOS에서 Macports를 사용하여 설치](#install-with-macports-on-macos) -- [macOS에서 Google Cloud SDK를 사용하여 설치](#install-on-macos-as-part-of-the-google-cloud-sdk) ### macOS에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-macos} @@ -99,10 +98,14 @@ card: 1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다. ```bash - sudo mv ./kubectl /usr/local/bin/kubectl && \ + sudo mv ./kubectl /usr/local/bin/kubectl sudo chown root: /usr/local/bin/kubectl ``` + {{< note >}} + `PATH` 환경 변수 안에 `/usr/local/bin` 이 있는지 확인한다. + {{< /note >}} + 1. 설치한 버전이 최신 버전인지 확인한다. ```bash @@ -148,11 +151,6 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하 kubectl version --client ``` - -### Google Cloud SDK를 사용하여 설치 {#install-on-macos-as-part-of-the-google-cloud-sdk} - -{{< include "included/install-kubectl-gcloud.md" >}} - ## kubectl 구성 확인 {{< include "included/verify-kubectl.md" >}} diff --git a/content/ko/docs/tasks/tools/install-kubectl-windows.md b/content/ko/docs/tasks/tools/install-kubectl-windows.md index 23b16e3da6..28e03cfef4 100644 --- a/content/ko/docs/tasks/tools/install-kubectl-windows.md +++ b/content/ko/docs/tasks/tools/install-kubectl-windows.md @@ -21,7 +21,6 @@ card: - [윈도우에서 curl을 사용하여 kubectl 바이너리 설치](#install-kubectl-binary-with-curl-on-windows) - [Chocolatey 또는 Scoop을 사용하여 윈도우에 설치](#install-on-windows-using-chocolatey-or-scoop) -- [윈도우에서 Google Cloud SDK를 사용하여 설치](#install-on-windows-as-part-of-the-google-cloud-sdk) ### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치 {#install-kubectl-binary-with-curl-on-windows} @@ -127,10 +126,6 @@ card: 메모장과 같은 텍스트 편집기를 선택하여 구성 파일을 편집한다. 
{{< /note >}} -### 윈도우에서 Google Cloud SDK를 사용하여 설치 {#install-on-windows-as-part-of-the-google-cloud-sdk} - -{{< include "included/install-kubectl-gcloud.md" >}} - ## kubectl 구성 확인 {{< include "included/verify-kubectl.md" >}} diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md index 8d3fd54010..093208f22d 100644 --- a/content/ko/docs/tutorials/_index.md +++ b/content/ko/docs/tutorials/_index.md @@ -35,7 +35,7 @@ content_type: concept * [외부 IP 주소를 노출하여 클러스터의 애플리케이션에 접속하기](/ko/docs/tutorials/stateless-application/expose-external-ip-address/) -* [예시: MongoDB를 사용한 PHP 방명록 애플리케이션 배포하기](/ko/docs/tutorials/stateless-application/guestbook/) +* [예시: Redis를 사용한 PHP 방명록 애플리케이션 배포하기](/ko/docs/tutorials/stateless-application/guestbook/) ## 상태 유지가 필요한(stateful) 애플리케이션 diff --git a/content/ko/docs/tutorials/configuration/_index.md b/content/ko/docs/tutorials/configuration/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md index eaa81c3809..091a1c684f 100644 --- a/content/ko/docs/tutorials/hello-minikube.md +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -217,7 +217,7 @@ minikube 툴은 활성화하거나 비활성화할 수 있고 로컬 쿠버네 storage-provisioner-gluster: disabled ``` -2. 한 애드온을 활성화 한다. 예를 들어 `metrics-server` +2. 애드온을 활성화 한다. 여기서는 `metrics-server`를 예시로 사용한다. ```shell minikube addons enable metrics-server @@ -226,7 +226,7 @@ minikube 툴은 활성화하거나 비활성화할 수 있고 로컬 쿠버네 다음과 유사하게 출력된다. ``` - metrics-server was successfully enabled + The 'metrics-server' addon is enabled ``` 3. 생성한 파드와 서비스를 확인한다. diff --git a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md index 8b0a258ae6..ee7cccb70d 100644 --- a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md @@ -16,7 +16,7 @@ weight: 10 튜토리얼을 시작하기 전에 다음의 쿠버네티스 컨셉에 대해 익숙해야 한다. -* [파드](/docs/user-guide/pods/single-container/) +* [파드](/ko/docs/concepts/workloads/pods/) * [클러스터 DNS(Cluster DNS)](/ko/docs/concepts/services-networking/dns-pod-service/) * [헤드리스 서비스(Headless Services)](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스) * [퍼시스턴트볼륨(PersistentVolumes)](/ko/docs/concepts/storage/persistent-volumes/) @@ -833,11 +833,11 @@ kubectl get pods -w -l app=nginx 다른 터미널에서는 스테이트풀셋을 지우기 위해 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 명령어를 이용하자. -이 명령어에 `--cascade=false` 파라미터가 추가되었다. +이 명령어에 `--cascade=orphan` 파라미터가 추가되었다. 이 파라미터는 쿠버네티스에 스테이트풀셋만 삭제하고 그에 속한 파드는 지우지 않도록 요청한다. ```shell -kubectl delete statefulset web --cascade=false +kubectl delete statefulset web --cascade=orphan ``` ``` statefulset.apps "web" deleted @@ -953,7 +953,7 @@ kubectl get pods -w -l app=nginx ``` 다른 터미널창에서 스테이트풀셋을 다시 지우자. 이번에는 -`--cascade=false` 파라미터를 생략하자. +`--cascade=orphan` 파라미터를 생략하자. ```shell kubectl delete statefulset web diff --git a/content/ko/docs/tutorials/stateful-application/cassandra.md b/content/ko/docs/tutorials/stateful-application/cassandra.md index 7b1888a15e..8f7c2b37ba 100644 --- a/content/ko/docs/tutorials/stateful-application/cassandra.md +++ b/content/ko/docs/tutorials/stateful-application/cassandra.md @@ -7,7 +7,7 @@ weight: 30 -이 튜토리얼은 쿠버네티스에서 [아파치 카산드라](http://cassandra.apache.org/)를 실행하는 방법을 소개한다. +이 튜토리얼은 쿠버네티스에서 [아파치 카산드라](https://cassandra.apache.org/)를 실행하는 방법을 소개한다. 데이터베이스인 카산드라는 데이터 내구성을 제공하기 위해 퍼시스턴트 스토리지가 필요하다(애플리케이션 _상태_). 
이 예제에서 사용자 지정 카산드라 시드 공급자는 카산드라가 클러스터에 가입할 때 카산드라가 인스턴스를 검색할 수 있도록 한다. diff --git a/content/ko/docs/tutorials/stateless-application/_index.md b/content/ko/docs/tutorials/stateless-application/_index.md old mode 100755 new mode 100644 diff --git a/content/ko/docs/tutorials/stateless-application/guestbook.md b/content/ko/docs/tutorials/stateless-application/guestbook.md index 1a984319d8..4a475563ba 100644 --- a/content/ko/docs/tutorials/stateless-application/guestbook.md +++ b/content/ko/docs/tutorials/stateless-application/guestbook.md @@ -1,5 +1,6 @@ --- -title: "예시: MongoDB를 사용한 PHP 방명록 애플리케이션 배포하기" +title: "예시: Redis를 사용한 PHP 방명록 애플리케이션 배포하기" + content_type: tutorial @@ -7,59 +8,56 @@ weight: 20 card: name: tutorials weight: 30 - title: "상태를 유지하지 않는 예제: MongoDB를 사용한 PHP 방명록" + title: "상태를 유지하지 않는 예제: Redis를 사용한 PHP 방명록" min-kubernetes-server-version: v1.14 +source: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook --- -이 튜토리얼에서는 쿠버네티스와 [Docker](https://www.docker.com/)를 사용하여 간단한 _(운영 준비가 아닌)_ 멀티 티어 웹 애플리케이션을 빌드하고 배포하는 방법을 보여준다. 이 예제는 다음과 같은 구성으로 이루어져 있다. +이 튜토리얼에서는 쿠버네티스와 [Docker](https://www.docker.com/)를 사용하여 간단한 _(운영 수준이 아닌)_ 멀티 티어 웹 애플리케이션을 빌드하고 배포하는 방법을 보여준다. 이 예제는 다음과 같은 구성으로 이루어져 있다. -* 방명록을 저장하는 단일 인스턴스 [MongoDB](https://www.mongodb.com/) +* 방명록 항목을 저장하기 위한 단일 인스턴스 [Redis](https://www.redis.com/) * 여러 개의 웹 프론트엔드 인스턴스 ## {{% heading "objectives" %}} -* Mongo 데이터베이스를 시작 -* 방명록 프론트엔드를 시작 +* Redis 리더를 실행 +* 2개의 Redis 팔로워를 실행 +* 방명록 프론트엔드를 실행 * 프론트엔드 서비스를 노출하고 확인 -* 정리 하기 - +* 정리하기 ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - -## Mongo 데이터베이스를 실행 +## Redis 데이터베이스를 실행 -방명록 애플리케이션은 MongoDB를 사용해서 데이터를 저장한다. +방명록 애플리케이션은 Redis를 사용하여 데이터를 저장한다. -### Mongo 디플로이먼트를 생성하기 +### Redis 디플로이먼트를 생성하기 -아래의 매니페스트 파일은 단일 복제본 Mongo 파드를 실행하는 디플로이먼트 컨트롤러를 지정한다. +아래의 매니페스트 파일은 단일 복제본 Redis 파드를 실행하는 디플로이먼트 컨트롤러에 대한 명세를 담고 있다. -{{< codenew file="application/guestbook/mongo-deployment.yaml" >}} +{{< codenew file="application/guestbook/redis-leader-deployment.yaml" >}} 1. 매니페스트 파일을 다운로드한 디렉터리에서 터미널 창을 시작한다. -1. `mongo-deployment.yaml` 파일을 통해 MongoDB 디플로이먼트에 적용한다. +1. `redis-leader-deployment.yaml` 파일을 이용하여 Redis 디플로이먼트를 생성한다. ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml ``` - -1. 파드의 목록을 질의하여 MongoDB 파드가 실행 중인지 확인한다. +1. 파드의 목록을 질의하여 Redis 파드가 실행 중인지 확인한다. ```shell kubectl get pods @@ -67,36 +65,35 @@ min-kubernetes-server-version: v1.14 결과는 아래와 같은 형태로 나타난다. - ```shell + ``` NAME READY STATUS RESTARTS AGE - mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s + redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s ``` -2. MongoDB 파드에서 로그를 보려면 다음 명령어를 실행한다. +2. Redis 리더 파드의 로그를 보려면 다음 명령어를 실행한다. ```shell - kubectl logs -f deployment/mongo + kubectl logs -f deployment/redis-leader ``` -### MongoDB 서비스 생성하기 +### Redis 리더 서비스 생성하기 -방명록 애플리케이션에서 데이터를 쓰려면 MongoDB와 통신해야 한다. MongoDB 파드로 트래픽을 프록시하려면 [서비스](/ko/docs/concepts/services-networking/service/)를 적용해야 한다. 서비스는 파드에 접근하기 위한 정책을 정의한다. +방명록 애플리케이션에서 데이터를 쓰려면 Redis와 통신해야 한다. Redis 파드로 트래픽을 프록시하려면 [서비스](/ko/docs/concepts/services-networking/service/)를 생성해야 한다. 서비스는 파드에 접근하기 위한 정책을 정의한다. -{{< codenew file="application/guestbook/mongo-service.yaml" >}} +{{< codenew file="application/guestbook/redis-leader-service.yaml" >}} -1. `mongo-service.yaml` 파일을 통해 MongoDB 서비스에 적용한다. +1. `redis-leader-service.yaml` 파일을 이용하여 Redis 서비스를 실행한다. 
```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml ``` - -1. 서비스의 목록을 질의하여 MongoDB 서비스가 실행 중인지 확인한다. +1. 서비스의 목록을 질의하여 Redis 서비스가 실행 중인지 확인한다. ```shell kubectl get service @@ -104,29 +101,98 @@ min-kubernetes-server-version: v1.14 결과는 아래와 같은 형태로 나타난다. - ```shell + ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 443/TCP 1m - mongo ClusterIP 10.0.0.151 27017/TCP 8s + redis-leader ClusterIP 10.103.78.24 6379/TCP 16s ``` {{< note >}} -이 매니페스트 파일은 이전에 정의된 레이블과 일치하는 레이블 집합을 가진 `mongo`라는 서비스를 생성하므로, 서비스는 네트워크 트래픽을 MongoDB 파드로 라우팅한다. +이 매니페스트 파일은 이전에 정의된 레이블과 일치하는 레이블 집합을 가진 `redis-leader`라는 서비스를 생성하므로, 서비스는 네트워크 트래픽을 Redis 파드로 라우팅한다. {{< /note >}} +### Redis 팔로워 구성하기 + +Redis 리더는 단일 파드이지만, 몇 개의 Redis 팔로워 또는 복제본을 추가하여 가용성을 높이고 트래픽 요구를 충족할 수 있다. + +{{< codenew file="application/guestbook/redis-follower-deployment.yaml" >}} + +1. `redis-follower-deployment.yaml` 파일을 이용하여 Redis 서비스를 실행한다. + + + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml + ``` + +1. 파드의 목록을 질의하여 2개의 Redis 팔로워 레플리카가 실행 중인지 확인한다. + + ```shell + kubectl get pods + ``` + + 결과는 아래와 같은 형태로 나타난다. + + ``` + NAME READY STATUS RESTARTS AGE + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s + redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m + ``` + +### Redis 팔로워 서비스 생성하기 + +방명록 애플리케이션이 데이터를 읽으려면 Redis 팔로워와 통신해야 한다. Redis 팔로워를 발견 가능(discoverable)하게 만드려면, 새로운 [서비스](/ko/docs/concepts/services-networking/service/)를 구성해야 한다. + +{{< codenew file="application/guestbook/redis-follower-service.yaml" >}} + +1. `redis-follower-service.yaml` 파일을 이용하여 Redis 서비스를 실행한다. + + + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml + ``` + +1. 서비스의 목록을 질의하여 Redis 서비스가 실행 중인지 확인한다. + + ```shell + kubectl get service + ``` + + 결과는 아래와 같은 형태로 나타난다. + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h + redis-follower ClusterIP 10.110.162.42 6379/TCP 9s + redis-leader ClusterIP 10.103.78.24 6379/TCP 6m10s + ``` + +{{< note >}} +이 매니페스트 파일은 이전에 정의된 레이블과 일치하는 레이블 집합을 가진 `redis-follower`라는 서비스를 생성하므로, 서비스는 네트워크 트래픽을 Redis 파드로 라우팅한다. +{{< /note >}} ## 방명록 프론트엔드를 설정하고 노출하기 -방명록 애플리케이션에는 PHP로 작성된 HTTP 요청을 처리하는 웹 프론트엔드가 있다. 방명록 항목들을 저장하기 위해 `mongo` 서비스에 연결하도록 구성 한다. +방명록을 위한 Redis 저장소를 구성하고 실행했으므로, 이제 방명록 웹 서버를 실행한다. Redis 팔로워와 마찬가지로, 프론트엔드는 쿠버네티스 디플로이먼트(Deployment)를 사용하여 배포된다. + +방명록 앱은 PHP 프론트엔드를 사용한다. DB에 대한 요청이 읽기인지 쓰기인지에 따라, Redis 팔로워 또는 리더 서비스와 통신하도록 구성된다. 프론트엔드는 JSON 인터페이스를 노출하고, jQuery-Ajax 기반 UX를 제공한다. ### 방명록 프론트엔드의 디플로이먼트 생성하기 {{< codenew file="application/guestbook/frontend-deployment.yaml" >}} -1. `frontend-deployment.yaml` 파일을 통해 프론트엔드의 디플로이먼트에 적용한다. +1. `frontend-deployment.yaml` 파일을 이용하여 프론트엔드 디플로이먼트를 생성한다. @@ -134,25 +200,24 @@ min-kubernetes-server-version: v1.14 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml ``` - 1. 파드의 목록을 질의하여 세 개의 프론트엔드 복제본이 실행되고 있는지 확인한다. ```shell - kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend + kubectl get pods -l app=guestbook -l tier=frontend ``` 결과는 아래와 같은 형태로 나타난다. 
``` - NAME READY STATUS RESTARTS AGE - frontend-3823415956-dsvc5 1/1 Running 0 54s - frontend-3823415956-k22zn 1/1 Running 0 54s - frontend-3823415956-w9gbt 1/1 Running 0 54s + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5tqhb 1/1 Running 0 47s + frontend-85595f5bf9-qbzwm 1/1 Running 0 47s + frontend-85595f5bf9-zchwc 1/1 Running 0 47s ``` ### 프론트엔드 서비스 생성하기 -서비스의 기본 유형은 [ClusterIP](/ko/docs/concepts/services-networking/service/#publishing-services-service-types)이기 때문에 적용한 `mongo` 서비스는 컨테이너 클러스터 내에서만 접근할 수 있다. `ClusterIP`는 서비스가 가리키는 파드 집합에 대한 단일 IP 주소를 제공한다. 이 IP 주소는 클러스터 내에서만 접근할 수 있다. +서비스의 기본 유형은 [ClusterIP](/ko/docs/concepts/services-networking/service/#publishing-services-service-types)이기 때문에 생성한 `Redis` 서비스는 컨테이너 클러스터 내에서만 접근할 수 있다. `ClusterIP`는 서비스가 가리키는 파드 집합에 대한 단일 IP 주소를 제공한다. 이 IP 주소는 클러스터 내에서만 접근할 수 있다. 게스트가 방명록에 접근할 수 있도록 하려면, 외부에서 볼 수 있도록 프론트엔드 서비스를 구성해야 한다. 그렇게 하면 클라이언트가 쿠버네티스 클러스터 외부에서 서비스를 요청할 수 있다. 그러나 쿠버네티스 사용자는 `ClusterIP`를 사용하더라도 `kubectl port-forward`를 사용해서 서비스에 접근할 수 있다. @@ -162,10 +227,10 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 {{< codenew file="application/guestbook/frontend-service.yaml" >}} -1. `frontend-service.yaml` 파일을 통해 프론트엔드 서비스에 적용시킨다. +1. `frontend-service.yaml` 파일을 이용하여 프론트엔드 서비스를 실행한다. @@ -173,7 +238,6 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml ``` - 1. 서비스의 목록을 질의하여 프론트엔드 서비스가 실행 중인지 확인한다. ```shell @@ -183,10 +247,11 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 결과는 아래와 같은 형태로 나타난다. ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend ClusterIP 10.0.0.112 80/TCP 6s - kubernetes ClusterIP 10.0.0.1 443/TCP 4m - mongo ClusterIP 10.0.0.151 6379/TCP 2m + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + frontend ClusterIP 10.97.28.230 80/TCP 19s + kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h + redis-follower ClusterIP 10.110.162.42 6379/TCP 5m48s + redis-leader ClusterIP 10.103.78.24 6379/TCP 11m ``` ### `kubectl port-forward`를 통해 프론트엔드 서비스 확인하기 @@ -225,9 +290,13 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 1. IP 주소를 복사하고, 방명록을 보기 위해 브라우저에서 페이지를 로드한다. +{{< note >}} +메시지를 입력하고 'Submit'을 클릭하여 방명록에 글을 작성해 본다. 입력한 메시지가 프론트엔드에 나타난다. 이 메시지는 앞서 생성한 서비스를 통해 데이터가 Redis에 성공적으로 입력되었음을 나타낸다. +{{< /note >}} + ## 웹 프론트엔드 확장하기 -서버가 디플로이먼트 컨르롤러를 사용하는 서비스로 정의되어 있기에 필요에 따라 확장 또는 축소할 수 있다. +서버가 디플로이먼트 컨트롤러를 사용하는 서비스로 정의되어 있으므로 필요에 따라 확장 또는 축소할 수 있다. 1. 프론트엔드 파드의 수를 확장하기 위해 아래 명령어를 실행한다. @@ -244,13 +313,15 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 결과는 아래와 같은 형태로 나타난다. ``` - NAME READY STATUS RESTARTS AGE - frontend-3823415956-70qj5 1/1 Running 0 5s - frontend-3823415956-dsvc5 1/1 Running 0 54m - frontend-3823415956-k22zn 1/1 Running 0 54m - frontend-3823415956-w9gbt 1/1 Running 0 54m - frontend-3823415956-x2pld 1/1 Running 0 5s - mongo-1068406935-3lswp 1/1 Running 0 56m + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5df5m 1/1 Running 0 83s + frontend-85595f5bf9-7zmg5 1/1 Running 0 83s + frontend-85595f5bf9-cpskg 1/1 Running 0 15m + frontend-85595f5bf9-l2l54 1/1 Running 0 14m + frontend-85595f5bf9-l9c8z 1/1 Running 0 14m + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m + redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m ``` 1. 프론트엔드 파드의 수를 축소하기 위해 아래 명령어를 실행한다. 
@@ -269,13 +340,13 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 ``` NAME READY STATUS RESTARTS AGE - frontend-3823415956-k22zn 1/1 Running 0 1h - frontend-3823415956-w9gbt 1/1 Running 0 1h - mongo-1068406935-3lswp 1/1 Running 0 1h + frontend-85595f5bf9-cpskg 1/1 Running 0 16m + frontend-85595f5bf9-l9c8z 1/1 Running 0 15m + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m + redis-leader-fb76b4755-xjr2n 1/1 Running 0 109m ``` - - ## {{% heading "cleanup" %}} 디플로이먼트 및 서비스를 삭제하면 실행 중인 모든 파드도 삭제된다. 레이블을 사용하여 하나의 명령어로 여러 자원을 삭제해보자. @@ -283,17 +354,17 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 1. 모든 파드, 디플로이먼트, 서비스를 삭제하기 위해 아래 명령어를 실행한다. ```shell - kubectl delete deployment -l app.kubernetes.io/name=mongo - kubectl delete service -l app.kubernetes.io/name=mongo - kubectl delete deployment -l app.kubernetes.io/name=guestbook - kubectl delete service -l app.kubernetes.io/name=guestbook + kubectl delete deployment -l app=redis + kubectl delete service -l app=redis + kubectl delete deployment frontend + kubectl delete service frontend ``` 결과는 아래와 같은 형태로 나타난다. ``` - deployment.apps "mongo" deleted - service "mongo" deleted + deployment.apps "redis-follower" deleted + deployment.apps "redis-leader" deleted deployment.apps "frontend" deleted service "frontend" deleted ``` @@ -307,11 +378,9 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 결과는 아래와 같은 형태로 나타난다. ``` - No resources found. + No resources found in default namespace. ``` - - ## {{% heading "whatsnext" %}} * [쿠버네티스 기초](/ko/docs/tutorials/kubernetes-basics/) 튜토리얼을 완료 diff --git a/content/ko/examples/application/guestbook/frontend-deployment.yaml b/content/ko/examples/application/guestbook/frontend-deployment.yaml index 613c654aa9..f97f20dab6 100644 --- a/content/ko/examples/application/guestbook/frontend-deployment.yaml +++ b/content/ko/examples/application/guestbook/frontend-deployment.yaml @@ -1,32 +1,29 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook apiVersion: apps/v1 kind: Deployment metadata: name: frontend - labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend spec: + replicas: 3 selector: matchLabels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend - replicas: 3 + app: guestbook + tier: frontend template: metadata: labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend spec: containers: - - name: guestbook - image: paulczar/gb-frontend:v5 - # image: gcr.io/google-samples/gb-frontend:v4 + - name: php-redis + image: gcr.io/google_samples/gb-frontend:v5 + env: + - name: GET_HOSTS_FROM + value: "dns" resources: requests: cpu: 100m memory: 100Mi - env: - - name: GET_HOSTS_FROM - value: dns ports: - - containerPort: 80 + - containerPort: 80 \ No newline at end of file diff --git a/content/ko/examples/application/guestbook/frontend-service.yaml b/content/ko/examples/application/guestbook/frontend-service.yaml index 34ad3771d7..410c6bbaf2 100644 --- a/content/ko/examples/application/guestbook/frontend-service.yaml +++ b/content/ko/examples/application/guestbook/frontend-service.yaml @@ -1,16 +1,19 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook apiVersion: v1 kind: Service metadata: name: frontend labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend spec: # if your cluster supports it, 
uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer + #type: LoadBalancer ports: + # the port that this service should serve on - port: 80 selector: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend \ No newline at end of file diff --git a/content/ko/examples/application/guestbook/mongo-deployment.yaml b/content/ko/examples/application/guestbook/mongo-deployment.yaml deleted file mode 100644 index 04908ce25b..0000000000 --- a/content/ko/examples/application/guestbook/mongo-deployment.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: mongo - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend -spec: - selector: - matchLabels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend - replicas: 1 - template: - metadata: - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend - spec: - containers: - - name: mongo - image: mongo:4.2 - args: - - --bind_ip - - 0.0.0.0 - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 27017 diff --git a/content/ko/examples/application/guestbook/mongo-service.yaml b/content/ko/examples/application/guestbook/mongo-service.yaml deleted file mode 100644 index b9cef607bc..0000000000 --- a/content/ko/examples/application/guestbook/mongo-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend -spec: - ports: - - port: 27017 - targetPort: 27017 - selector: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend diff --git a/content/ko/examples/application/guestbook/redis-follower-deployment.yaml b/content/ko/examples/application/guestbook/redis-follower-deployment.yaml new file mode 100644 index 0000000000..c418cf7364 --- /dev/null +++ b/content/ko/examples/application/guestbook/redis-follower-deployment.yaml @@ -0,0 +1,30 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-follower + labels: + app: redis + role: follower + tier: backend +spec: + replicas: 2 + selector: + matchLabels: + app: redis + template: + metadata: + labels: + app: redis + role: follower + tier: backend + spec: + containers: + - name: follower + image: gcr.io/google_samples/gb-redis-follower:v2 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 \ No newline at end of file diff --git a/content/ko/examples/application/guestbook/redis-follower-service.yaml b/content/ko/examples/application/guestbook/redis-follower-service.yaml new file mode 100644 index 0000000000..53283d35c4 --- /dev/null +++ b/content/ko/examples/application/guestbook/redis-follower-service.yaml @@ -0,0 +1,17 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: v1 +kind: Service +metadata: + name: redis-follower + labels: + app: redis + role: follower + tier: backend +spec: + ports: + # the port that this service should serve on + - port: 6379 + selector: + app: redis + role: follower + tier: backend \ No newline at end of file diff --git a/content/ko/examples/application/guestbook/redis-leader-deployment.yaml b/content/ko/examples/application/guestbook/redis-leader-deployment.yaml new file mode 100644 index 0000000000..9c7547291c --- /dev/null +++ 
b/content/ko/examples/application/guestbook/redis-leader-deployment.yaml @@ -0,0 +1,30 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-leader + labels: + app: redis + role: leader + tier: backend +spec: + replicas: 1 + selector: + matchLabels: + app: redis + template: + metadata: + labels: + app: redis + role: leader + tier: backend + spec: + containers: + - name: leader + image: "docker.io/redis:6.0.5" + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 \ No newline at end of file diff --git a/content/ko/examples/application/guestbook/redis-leader-service.yaml b/content/ko/examples/application/guestbook/redis-leader-service.yaml new file mode 100644 index 0000000000..e04cc183d0 --- /dev/null +++ b/content/ko/examples/application/guestbook/redis-leader-service.yaml @@ -0,0 +1,17 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: v1 +kind: Service +metadata: + name: redis-leader + labels: + app: redis + role: leader + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: leader + tier: backend \ No newline at end of file diff --git a/content/ko/releases/version-skew-policy.md b/content/ko/releases/version-skew-policy.md index 38052aa18d..dba98dcdf8 100644 --- a/content/ko/releases/version-skew-policy.md +++ b/content/ko/releases/version-skew-policy.md @@ -6,22 +6,21 @@ -title: 쿠버네티스 버전 및 버전 차이(skew) 지원 정책 -content_type: concept -weight: 30 +title: 버전 차이(skew) 정책 +type: docs +description: > + 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이 --- 이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다. 특정 클러스터 배포 도구는 버전 차이에 대한 추가적인 제한을 설정할 수 있다. - ## 지원되는 버전 -쿠버네티스 버전은 **x.y.z** 로 표현되는데, -여기서 **x** 는 메이저 버전, **y** 는 마이너 버전, **z** 는 패치 버전을 의미하며, 이는 [시맨틱 버전](https://semver.org/) 용어에 따른 것이다. +쿠버네티스 버전은 **x.y.z** 로 표현되는데, 여기서 **x** 는 메이저 버전, **y** 는 마이너 버전, **z** 는 패치 버전을 의미하며, 이는 [시맨틱 버전](https://semver.org/) 용어에 따른 것이다. 자세한 내용은 [쿠버네티스 릴리스 버전](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)을 참조한다. 쿠버네티스 프로젝트는 최근 세 개의 마이너 릴리스 ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}) 에 대한 릴리스 분기를 유지한다. 쿠버네티스 1.19 이상은 약 1년간의 패치 지원을 받는다. 쿠버네티스 1.18 이하는 약 9개월의 패치 지원을 받는다.
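The version-skew text above pins each component to an **x.y.z** semantic version; as a quick, hypothetical way to inspect the client/server skew that the policy bounds (the version values below are illustrative, not from this patch):

```shell
# Print the client (kubectl) and server (kube-apiserver) versions;
# under the skew policy their minor versions may only differ by a
# bounded amount, so this is the first thing to compare.
kubectl version --short
# Client Version: v1.21.1   <- illustrative output
# Server Version: v1.20.4
```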
diff --git a/content/pl/docs/concepts/overview/_index.md b/content/pl/docs/concepts/overview/_index.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/cloud-controller-manager.md b/content/pl/docs/reference/glossary/cloud-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/cluster.md b/content/pl/docs/reference/glossary/cluster.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/etcd.md b/content/pl/docs/reference/glossary/etcd.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/index.md b/content/pl/docs/reference/glossary/index.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/kube-apiserver.md b/content/pl/docs/reference/glossary/kube-apiserver.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/kube-controller-manager.md b/content/pl/docs/reference/glossary/kube-controller-manager.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/kube-proxy.md b/content/pl/docs/reference/glossary/kube-proxy.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/kube-scheduler.md b/content/pl/docs/reference/glossary/kube-scheduler.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/reference/glossary/kubelet.md b/content/pl/docs/reference/glossary/kubelet.md old mode 100755 new mode 100644 diff --git a/content/pl/docs/setup/release/_index.md b/content/pl/docs/setup/release/_index.md old mode 100755 new mode 100644 diff --git a/content/pl/training/_index.html b/content/pl/training/_index.html index 2dd7ed433e..0fd093af3d 100644 --- a/content/pl/training/_index.html +++ b/content/pl/training/_index.html @@ -70,7 +70,7 @@ class: training

Nauka z Linux Foundation

-

Linux Foundation oferuje szkolenia prowadzone przez instruktora oraz szkolenia samodzielne obejmujące wszystkie aspekty rozwijania i zarządzania aplikacjami na Kubrnetesie.

+

Linux Foundation oferuje szkolenia prowadzone przez instruktora oraz szkolenia samodzielne obejmujące wszystkie aspekty rozwijania i zarządzania aplikacjami na Kubernetesie.



Sprawdź ofertę szkoleń
diff --git a/content/pt-br/docs/_index.md b/content/pt-br/docs/_index.md index 1d1529975c..6e34880dd2 100644 --- a/content/pt-br/docs/_index.md +++ b/content/pt-br/docs/_index.md @@ -16,7 +16,7 @@ Como você pode ver, a maior parte da documentação ainda está disponível ape -Se você quiser participar, você pode entrar no canal Slack [#kubernets-docs-pt](http://slack.kubernetes.io/) e fazer parte da equipe por trás da tradução. +Se você quiser participar, você pode entrar no canal Slack [#kubernetes-docs-pt](http://slack.kubernetes.io/) e fazer parte da equipe por trás da tradução. Você também pode acessar o canal para solicitar a tradução de uma página específica ou relatar qualquer erro que possa ter sido encontrado. Qualquer contribuição será bem recebida! diff --git a/content/pt-br/docs/concepts/configuration/overview.md b/content/pt-br/docs/concepts/configuration/overview.md new file mode 100644 index 0000000000..66f369c03b --- /dev/null +++ b/content/pt-br/docs/concepts/configuration/overview.md @@ -0,0 +1,126 @@ +--- +title: Melhores Práticas de Configuração +content_type: concept +weight: 10 +--- + + +Esse documento destaca e consolida as melhores práticas de configuração apresentadas em todo o guia de usuário, +na documentação de introdução e nos exemplos. + +Este é um documento vivo. Se você pensar em algo que não está nesta lista, mas pode ser útil para outras pessoas, +não hesite em criar uma *issue* ou submeter um PR. + + + +## Dicas Gerais de Configuração + +- Ao definir configurações, especifique a versão mais recente estável da API. + +- Os arquivos de configuração devem ser armazenados em um sistema de controle antes de serem enviados ao cluster. +Isso permite que você reverta rapidamente uma alteração de configuração, caso necessário. Isso também auxilia na recriação e restauração do cluster. + +- Escreva seus arquivos de configuração usando YAML ao invés de JSON. Embora esses formatos possam ser usados alternadamente em quase todos os cenários, YAML tende a ser mais amigável. + +- Agrupe objetos relacionados em um único arquivo sempre que fizer sentido. Geralmente, um arquivo é mais fácil de +gerenciar do que vários. Veja o [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml) como exemplo dessa sintaxe. + +- Observe também que vários comandos `kubectl` podem ser chamados em um diretório. Por exemplo, você pode chamar +`kubectl apply` em um diretório de arquivos de configuração. + +- Não especifique valores padrões desnecessariamente: configurações simples e mínimas diminuem a possibilidade de erros. + +- Coloque descrições de objetos nas anotações para permitir uma melhor análise. + + +## "Naked" Pods comparados a ReplicaSets, Deployments, e Jobs {#naked-pods-vs-replicasets-deployments-and-jobs} + +- Se você puder evitar, não use "naked" Pods (ou seja, se você puder evitar, pods não vinculados a um [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) ou [Deployment](/docs/concepts/workloads/controllers/deployment/)). +Os "naked" pods não serão reconfigurados em caso de falha de um nó. + + Criar um Deployment, que cria um ReplicaSet para garantir que o número desejado de Pods esteja disponível e especifica uma estratégia para substituir os Pods (como [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), é quase sempre preferível do que criar Pods diretamente, exceto para alguns cenários explícitos de restartPolicy:Never. 
Um Job também pode ser apropriado. + + +## Services + +- Crie o [Service](/docs/concepts/services-networking/service/) antes de suas cargas de trabalho de backend correspondentes (Deployments ou ReplicaSets) e antes de quaisquer cargas de trabalho que precisem acessá-lo. Quando o +Kubernetes inicia um contêiner, ele fornece variáveis de ambiente apontando para todos os Services que estavam em execução quando o contêiner foi iniciado. Por exemplo, se um Service chamado `foo` existe, todos os contêineres vão +receber as seguintes variáveis em seu ambiente inicial: + + ```shell + FOO_SERVICE_HOST= + FOO_SERVICE_PORT= + ``` + +*Isso implica um requisito de ordenação* - qualquer `Service` que um `Pod` quer acessar precisa ser criado antes do `Pod` em si, ou então as variáveis de ambiente não serão populadas. O DNS não possui essa restrição. + +- Um [cluster add-on](/docs/concepts/cluster-administration/addons/) opcional (embora fortemente recomendado) é um servidor DNS. O +servidor DNS monitora a API do Kubernetes buscando novos `Services` e cria um conjunto de registros de DNS para cada um. Se o DNS foi habilitado em todo o cluster, então todos os `Pods` devem ser capazes de fazer a resolução de `Services` automaticamente. + +- Não especifique um `hostPort` para um Pod a menos que isso seja absolutamente necessário. Quando você vincula um Pod a um `hostPort`, isso limita o número de lugares em que o Pod pode ser agendado, porque cada +combinação de <`hostIP`, `hostPort`, `protocol`> deve ser única. Se você não especificar o `hostIP` e `protocol` explicitamente, o Kubernetes vai usar `0.0.0.0` como o `hostIP` padrão e `TCP` como `protocol` padrão. + + Se você precisa de acesso à porta apenas para fins de depuração, pode usar o [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) ou o [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). + + Se você precisa expor explicitamente a porta de um Pod no nó, considere usar um Service do tipo [NodePort](/docs/concepts/services-networking/service/#nodeport) antes de recorrer a `hostPort`. + +- Evite usar `hostNetwork` pelos mesmos motivos do `hostPort`. + +- Use [headless Services](/docs/concepts/services-networking/service/#headless-services) (que têm o `ClusterIP` igual a `None`) para descoberta de serviço quando você não precisar do balanceamento de carga do `kube-proxy`. +## Usando Labels + +- Defina e use [labels](/docs/concepts/overview/working-with-objects/labels/) que identifiquem _atributos semânticos_ da sua aplicação ou Deployment, como `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. Você pode usar essas labels para selecionar os Pods apropriados para outros recursos; por exemplo, um Service que seleciona todos os Pods `tier: frontend`, ou todos +os componentes de `app: myapp`. Veja o app [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) para exemplos dessa abordagem. + +Um Service pode ser feito para abranger vários Deployments, omitindo labels específicas de lançamento de seu seletor. Quando você +precisar atualizar um serviço em execução sem _downtime_, use um [Deployment](/docs/concepts/workloads/controllers/deployment/). + +Um estado desejado de um objeto é descrito por um Deployment, e se as alterações nesse _spec_ forem _aplicadas_ o controlador +do Deployment altera o estado real para o estado desejado em uma taxa controlada.
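The paragraph above notes that a Service can span several Deployments when release-specific labels are left out of its selector; a minimal hypothetical sketch (the `myapp` names are illustrative and not part of this patch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 80
  selector:
    # Select only on stable, semantic labels. Omitting a release label
    # such as "deployment: v3" means Pods from both the old and the new
    # Deployment match, so traffic keeps flowing during a rollout.
    app: myapp
    tier: frontend
```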
+ +- Use as [labels comuns do Kubernetes](/docs/concepts/overview/working-with-objects/common-labels/) para casos de uso comuns. +Essas labels padronizadas enriquecem os metadados de uma forma que permite que ferramentas, incluindo `kubectl` e a [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard), funcionem de uma forma interoperável. + +- Você pode manipular labels para depuração. Como os controladores do Kubernetes (como ReplicaSet) e Services se relacionam com os Pods usando seletor de labels, remover as labels relevantes de um Pod impedirá que ele seja considerado por um controlador ou que +seja atendido pelo tráfego de um Service. Se você remover as labels de um Pod existente, seu controlador criará um novo Pod para +substituí-lo. Essa é uma maneira útil de depurar um Pod anteriormente "ativo" em um ambiente de "quarentena". Para remover ou +alterar labels interativamente, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label). + + +## Imagens de Contêiner + +A [imagePullPolicy](/docs/concepts/containers/images/#updating-images) e tag da imagem afetam quando o [kubelet](/docs/reference/command-line-tools-reference/kubelet/) tenta puxar a imagem especificada. + +- `imagePullPolicy: IfNotPresent`: a imagem é puxada apenas se ainda não estiver presente localmente. + +- `imagePullPolicy: Always`: sempre que o kubelet inicia um contêiner, ele consulta o *registry* da imagem do contêiner para verificar o resumo de assinatura da imagem. Se o kubelet tiver uma imagem do contêiner com o mesmo resumo de assinatura +armazenado em cache localmente, o kubelet usará a imagem em cache, caso contrário, o kubelet baixa (*pulls*) a imagem com o resumo de assinatura resolvido, e usa essa imagem para iniciar o contêiner. + +- `imagePullPolicy` é omitido e a tag da imagem é `:latest` ou também é omitida: `imagePullPolicy` é automaticamente definido como `Always`. Observe que _não_ será atualizado para `IfNotPresent` se o valor da tag mudar. + +- `imagePullPolicy` é omitido e a tag da imagem existe mas não é `:latest`: `imagePullPolicy` é automaticamente definido como `IfNotPresent`. Observe que isto _não_ será atualizado para `Always` se a tag for removida ou alterada para `:latest`. + +- `imagePullPolicy: Never`: presume-se que a imagem exista localmente. Não é feita nenhuma tentativa de puxar a imagem. + +{{< note >}} +Para garantir que seu contêiner sempre use a mesma versão de uma imagem, você pode especificar seu [resumo de assinatura](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier); +substitua `:` por `@` (por exemplo, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`). Esse resumo de assinatura identifica exclusivamente uma versão +específica de uma imagem, então isso nunca vai ser atualizado pelo Kubernetes a menos que você mude o valor do resumo de assinatura da imagem. +{{< /note >}} + +{{< note >}} +Você deve evitar o uso da tag `:latest` em produção, pois é mais difícil rastrear qual versão da imagem está sendo executada e mais difícil reverter adequadamente. +{{< /note >}} + +{{< note >}} +A semântica de cache do provedor de imagem subjacente torna até mesmo `imagePullPolicy: Always` eficiente, contanto que o registro esteja acessível de forma confiável. Com o Docker, por exemplo, se a imagem já existe, a tentativa de baixar (pull) é rápida porque todas as camadas da imagem são armazenadas em cache e nenhum download de imagem é necessário.
+{{< /note >}} + +## Usando kubectl + +- Use `kubectl apply -f <diretório>`. Isso procura por configurações do Kubernetes em todos os arquivos `.yaml`, `.yml` em `<diretório>` e passa isso para `apply`. + +- Use *label selectors* para operações `get` e `delete` em vez de nomes de objetos específicos. Consulte as seções sobre [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) +e [usando Labels efetivamente](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). + +- Use `kubectl create deployment` e `kubectl expose` para criar rapidamente Deployments e Services de um único contêiner. Consulte [Use um Service para acessar uma aplicação em um cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) para obter um exemplo. diff --git a/content/pt-br/docs/concepts/services-networking/_index.md b/content/pt-br/docs/concepts/services-networking/_index.md new file mode 100755 index 0000000000..cbe38ae33a --- /dev/null +++ b/content/pt-br/docs/concepts/services-networking/_index.md @@ -0,0 +1,14 @@ +--- +title: "Serviços, balanceamento de carga e conectividade" +weight: 60 +description: > + Conceitos e recursos por trás da conectividade no Kubernetes. +--- + +A conectividade do Kubernetes trata quatro preocupações: +- Contêineres em um Pod se comunicam via interface _loopback_. +- A conectividade do cluster provê a comunicação entre diferentes Pods. +- O recurso de _Service_ permite a você expor uma aplicação executando em um Pod, +de forma a ser alcançável de fora de seu cluster. +- Você também pode usar os _Services_ para publicar serviços de consumo interno do +seu cluster. diff --git a/content/pt-br/docs/concepts/services-networking/network-policies.md b/content/pt-br/docs/concepts/services-networking/network-policies.md new file mode 100644 index 0000000000..2c45902e40 --- /dev/null +++ b/content/pt-br/docs/concepts/services-networking/network-policies.md @@ -0,0 +1,345 @@ +--- +title: Políticas de rede +content_type: concept +weight: 50 +--- + + + +Se você deseja controlar o fluxo do tráfego de rede no nível do endereço IP ou de portas TCP e UDP +(camadas OSI 3 e 4) então você deve considerar usar Políticas de rede (`NetworkPolicies`) do Kubernetes para aplicações +no seu cluster. `NetworkPolicy` é um objeto focado em aplicações/experiência do desenvolvedor +que permite especificar como é permitido a um {{< glossary_tooltip text="pod" term_id="pod">}} +comunicar-se com várias "entidades" de rede. + +As entidades com as quais um Pod pode se comunicar são identificadas através de uma combinação dos 3 +identificadores a seguir: + +1. Outros pods que são permitidos (exceção: um pod não pode bloquear a si próprio) +2. Namespaces que são permitidos +3. Blocos de IP (exceção: o tráfego de e para o nó em que um Pod está executando sempre é permitido, +independentemente do endereço IP do Pod ou do Nó) + +Quando definimos uma política de rede baseada em pod ou namespace, utiliza-se um {{< glossary_tooltip text="selector" term_id="selector">}} +para especificar qual tráfego é permitido de e para o(s) Pod(s) que correspondem ao seletor. + +Quando uma política de redes baseada em IP é criada, nós definimos a política baseada em blocos de IP (faixas CIDR). + + +## Pré-requisitos + +As políticas de rede são implementadas pelo [plugin de redes](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Para usar +uma política de redes, você deve usar uma solução de redes que suporte o objeto `NetworkPolicy`.
+A criação de um objeto `NetworkPolicy` sem um controlador que implemente essas regras não tem efeito. + +## Pods isolados e não isolados + +Por padrão, pods não são isolados; eles aceitam tráfego de qualquer origem. + +Os pods tornam-se isolados ao existir uma `NetworkPolicy` que selecione eles. Uma vez que +exista qualquer `NetworkPolicy` no namespace selecionando um pod em específico, aquele pod +irá rejeitar qualquer conexão não permitida por qualquer `NetworkPolicy`. (Outros pod no mesmo +namespace que não são selecionados por nenhuma outra `NetworkPolicy` irão continuar aceitando +todo tráfego de rede.) + +As políticas de rede não conflitam; elas são aditivas. Se qualquer política selecionar um pod, +o pod torna-se restrito ao que é permitido pela união das regras de entrada/saída de tráfego definidas +nas políticas. Assim, a ordem de avaliação não afeta o resultado da política. + +Para o fluxo de rede entre dois pods ser permitido, tanto a política de saída no pod de origem +e a política de entrada no pod de destino devem permitir o tráfego. Se a política de saída na +origem, ou a política de entrada no destino negar o tráfego, o tráfego será bloqueado. + +## O recurso NetworkPolicy {#networkpolicy-resource} + +Veja a referência [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) para uma definição completa do recurso. + +Uma `NetworkPolicy` de exemplo é similar ao abaixo: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: test-network-policy + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Ingress + - Egress + ingress: + - from: + - ipBlock: + cidr: 172.17.0.0/16 + except: + - 172.17.1.0/24 + - namespaceSelector: + matchLabels: + project: myproject + - podSelector: + matchLabels: + role: frontend + ports: + - protocol: TCP + port: 6379 + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 5978 +``` + +{{< note >}} +Criar esse objeto no seu cluster não terá efeito a não ser que você escolha uma +solução de redes que suporte políticas de rede. +{{< /note >}} + +__Campos obrigatórios__: Assim como todas as outras configurações do Kubernetes, uma `NetworkPolicy` +necessita dos campos `apiVersion`, `kind` e `metadata`. Para maiores informações sobre +trabalhar com arquivos de configuração, veja +[Configurando containeres usando ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), +e [Gerenciamento de objetos](/docs/concepts/overview/working-with-objects/object-management). + +__spec__: A [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) contém todas as informações necessárias +para definir uma política de redes em um namespace. + +__podSelector__: Cada `NetworkPolicy` inclui um `podSelector` que seleciona o grupo de pods +que a política se aplica. A política acima seleciona os pods com a _label_ "role=db". Um `podSelector` +vazio seleciona todos os pods no namespace. + +__policyTypes__: Cada `NetworkPolicy` inclui uma lista de `policyTypes` que pode incluir `Ingress`, +`Egress` ou ambos. O campo `policyTypes` indica se a política se aplica ao tráfego de entrada +com destino aos pods selecionados, o tráfego de saída com origem dos pods selecionados ou ambos. 
+Se nenhum `policyType` for definido então por padrão o tipo `Ingress` será sempre utilizado, e o +tipo `Egress` será configurado apenas se o objeto contiver alguma regra de saída. (campo `egress` a seguir). + +__ingress__: Cada `NetworkPolicy` pode incluir uma lista de regras de entrada permitidas através do campo `ingress`. +Cada regra permite o tráfego que corresponde simultaneamente às sessões `from` (de) e `ports` (portas). +A política de exemplo acima contém uma regra simples, que corresponde ao tráfego em uma única porta, +de uma das três origens definidas, sendo a primeira definida via `ipBlock`, a segunda via `namespaceSelector` e +a terceira via `podSelector`. + +__egress__: Cada política pode incluir uma lista de regras de regras de saída permitidas através do campo `egress`. +Cada regra permite o tráfego que corresponde simultaneamente às sessões `to` (para) e `ports` (portas). +A política de exemplo acima contém uma regra simples, que corresponde ao tráfego destinado a uma +porta em qualquer destino pertencente à faixa de IPs em `10.0.0.0/24`. + +Então a `NetworkPolicy` acima: + +1. Isola os pods no namespace "default" com a _label_ "role=db" para ambos os tráfegos de entrada +e saída (se eles ainda não estavam isolados) +2. (Regras de entrada/ingress) permite conexões para todos os pods no namespace "default" com a _label_ "role=db" na porta TCP 6379 de: + + * qualquer pod no namespace "default" com a _label_ "role=frontend" + * qualquer pod em um namespace que tenha a _label_ "project=myproject" (aqui cabe ressaltar que o namespace que deve ter a _label_ e não os pods dentro desse namespace) + * IPs dentro das faixas 172.17.0.0–172.17.0.255 e 172.17.2.0–172.17.255.255 (ex.:, toda 172.17.0.0/16 exceto 172.17.1.0/24) + +3. (Regras de saída/egress) permite conexões de qualquer pod no namespace "default" com a _label_ +"role=db" para a faixa de destino 10.0.0.0/24 na porta TCP 5978. + +Veja o tutorial [Declarando uma política de redes](/docs/tasks/administer-cluster/declare-network-policy/) para mais exemplos. + +## Comportamento dos seletores `to` e `from` + +Existem quatro tipos de seletores que podem ser especificados nas sessões `ingress.from` ou +`egress.to`: + +__podSelector__: Seleciona Pods no mesmo namespace que a política de rede foi criada, e que deve +ser permitido origens no tráfego de entrada ou destinos no tráfego de saída. + +__namespaceSelector__: Seleciona namespaces para o qual todos os Pods devem ser permitidos como +origens no caso de tráfego de entrada ou destino no tráfego de saída. + +__namespaceSelector__ *e* __podSelector__: Uma entrada `to`/`from` única que permite especificar +ambos `namespaceSelector` e `podSelector` e seleciona um conjunto de Pods dentro de um namespace. +Seja cuidadoso em utilizar a sintaxe YAML correta; essa política: + +```yaml + ... + ingress: + - from: + - namespaceSelector: + matchLabels: + user: alice + podSelector: + matchLabels: + role: client + ... +``` +contém um único elemento `from` permitindo conexões de Pods com a label `role=client` em +namespaces com a _label_ `user=alice`. Mas *essa* política: + +```yaml + ... + ingress: + - from: + - namespaceSelector: + matchLabels: + user: alice + - podSelector: + matchLabels: + role: client + ... +``` + +contém dois elementos no conjunto `from` e permite conexões de Pods no namespace local com +a _label_ `role=client`, *OU* de qualquer outro Pod em qualquer outro namespace que tenha +a label `user=alice`. 
+ +Quando estiver em dúvida, utilize o comando `kubectl describe` para verificar como o +Kubernetes interpretou a política. + +__ipBlock__: Isso seleciona um conjunto particular de faixas de IP a serem permitidos como +origens no caso de entrada ou destinos no caso de saída. Devem ser considerados IPs externos +ao cluster, uma vez que os IPs dos Pods são efêmeros e imprevisíveis. + +Os mecanismos de entrada e saída do cluster geralmente requerem que os IPs de origem ou destino +sejam reescritos. Em casos em que isso aconteça, não é definido se deve acontecer antes ou +depois do processamento da `NetworkPolicy` que corresponde a esse tráfego, e o comportamento +pode ser diferente para cada plugin de rede, provedor de nuvem, implementação de `Service`, etc. + +No caso de tráfego de entrada, isso significa que em alguns casos você pode filtrar os pacotes +de entrada baseado no IP de origem atual, enquanto que em outros casos o IP de origem sobre o qual +a `NetworkPolicy` atua pode ser o IP de um `LoadBalancer` ou do Nó em que o Pod está executando. + +No caso de tráfego de saída, isso significa que conexões de Pods para `Services` que são reescritos +para IPs externos ao cluster podem ou não estar sujeitos a políticas baseadas no campo `ipBlock`. + +## Políticas padrão + +Por padrão, se nenhuma política existir no namespace, então todo o tráfego de entrada e saída é +permitido de e para os pods nesse namespace. Os exemplos a seguir permitem a você mudar o +comportamento padrão nesse namespace. + +### Bloqueio padrão de todo tráfego de entrada + +Você pode criar uma política padrão de isolamento para um namespace criando um objeto `NetworkPolicy` +que seleciona todos os pods mas não permite o tráfego de entrada para esses pods. + +{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} + +Isso garante que mesmo pods que não são selecionados por nenhuma outra política de rede ainda +serão isolados. Essa política não muda o comportamento padrão de isolamento de tráfego de saída +nesse namespace. + +### Permitir por padrão todo tráfego de entrada + +Se você deseja permitir todo o tráfego de todos os pods em um namespace (mesmo que políticas +adicionadas façam com que alguns pods sejam tratados como "isolados"), você pode criar +uma política que permite explicitamente todo o tráfego naquele namespace. + +{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} + +### Bloqueio padrão de todo tráfego de saída + +Você pode criar uma política de isolamento de saída padrão para um namespace criando uma +política de redes que selecione todos os pods, mas não permita o tráfego de saída a partir +de nenhum desses pods. + +{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} + +Isso garante que mesmo pods que não são selecionados por outra política de rede não tenham permissão de +tráfego de saída. Essa política não muda o comportamento padrão de tráfego de entrada. + +### Permitir por padrão todo tráfego de saída + +Caso você queira permitir todo o tráfego de todos os pods em um namespace (mesmo que políticas sejam +adicionadas e façam com que alguns pods sejam tratados como "isolados"), você pode criar uma +política explícita que permite todo o tráfego de saída no namespace.
+ +{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} + +### Bloqueio padrão de todo tráfego de entrada e saída + +Você pode criar uma política padrão em um namespace que previne todo o tráfego de entrada +E saída criando a política a seguir no namespace. + +{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}} + +Isso garante que mesmo pods que não são selecionados por nenhuma outra política de redes não +possuam permissão de tráfego de entrada ou saída. + +## Selecionando uma faixa de portas + +{{< feature-state for_k8s_version="v1.21" state="alpha" >}} + +Ao escrever uma política de redes, você pode selecionar uma faixa de portas ao invés de uma +porta única, utilizando-se do campo `endPort` conforme a seguir: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: multi-port-egress + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Egress + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 32000 + endPort: 32768 +``` + +A regra acima permite a qualquer Pod com a _label_ "role=db" no namespace `default` de se comunicar +com qualquer IP na faixa `10.0.0.0/24` através de protocolo TCP, desde que a porta de destino +esteja na faixa entre 32000 e 32768. + +As seguintes restrições aplicam-se ao se utilizar esse campo: + +* Por ser uma funcionalidade "alpha", ela é desativada por padrão. Para habilitar o campo `endPort` +no cluster, você (ou o seu administrador do cluster) deve habilitar o [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NetworkPolicyEndPort` no `kube-apiserver` com a flag `--feature-gates=NetworkPolicyEndPort=true,...`. +* O valor de `endPort` deve ser igual ou maior ao valor do campo `port`. +* O campo `endPort` só pode ser definido se o campo `port` também for definido. +* Ambos os campos `port` e `endPort` devem ser números. + +{{< note >}} +Seu cluster deve utilizar um plugin {{< glossary_tooltip text="CNI" term_id="cni" >}} +que suporte o campo `endPort` na especificação da política de redes. +{{< /note >}} + +## Selecionando um Namespace pelo seu nome + +{{< feature-state state="beta" for_k8s_version="1.21" >}} + +A camada de gerenciamento do Kubernetes configura uma _label_ imutável `kubernetes.io/metadata.name` em +todos os namespaces, uma vez que o [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) esteja habilitado por padrão. +O valor dessa _label_ é o nome do namespace. + +Enquanto que um objeto `NetworkPolicy` não pode selecionar um namespace pelo seu nome através de +um campo específico, você pode utilizar essa _label_ padrão para selecionar um namespace pelo seu nome. + +## O que você não pode fazer com `NetworkPolicies` (ao menos por enquanto!) +Por enquanto no Kubernetes {{< skew latestVersion >}} as funcionalidades a seguir não existem +mas você pode conseguir implementar de forma alternativa utilizando componentes do Sistema Operacional +(como SELinux, OpenVSwitch, IPtables, etc) ou tecnologias da camada 7 OSI (Ingress controllers, implementações de service mesh) ou ainda _admission controllers_. 
+No caso do assunto "segurança de redes no Kubernetes" ser novo para você, vale notar que as +histórias de usuário a seguir ainda não podem ser implementadas: + +- Forçar o tráfego interno do cluster passar por um gateway comum (pode ser implementado via service mesh ou outros proxies) +- Qualquer coisa relacionada a TLS/mTLS (use um service mesh ou ingress controller para isso) +- Políticas específicas a nível do nó kubernetes (você pode utilizar as notações de IP CIDR para isso, mas não pode selecionar nós Kubernetes por suas identidades) +- Selecionar `Services` pelo seu nome (você pode, contudo, selecionar pods e namespaces por seus {{< glossary_tooltip text="labels" term_id="label" >}} o que torna-se uma solução de contorno viável). +- Criação ou gerenciamento +- Políticas padrão que são aplicadas a todos os namespaces e pods (existem alguns plugins externos do Kubernetes e projetos que podem fazer isso, e a comunidade está trabalhando nessa especificação). +- Ferramental de testes para validação de políticas de redes. +- Possibilidade de logar eventos de segurança de redes (conexões bloqueadas, aceitas). Existem plugins CNI que conseguem fazer isso à parte. +- Possibilidade de explicitamente negar políticas de rede (o modelo das `NetworkPolicies` são "negar por padrão e conforme a necessidade, deve-se adicionar regras que permitam o tráfego). +- Bloquear o tráfego que venha da interface de loopback/localhost ou que venham do nó em que o Pod se encontre. + +## {{% heading "whatsnext" %}} + + +- Veja o tutorial [Declarando políticas de redes](/docs/tasks/administer-cluster/declare-network-policy/) para mais exemplos. +- Veja mais [cenários comuns e exemplos](https://github.com/ahmetb/kubernetes-network-policy-recipes) de políticas de redes. diff --git a/content/pt/docs/concepts/storage/persistent-volumes.md b/content/pt-br/docs/concepts/storage/persistent-volumes.md similarity index 100% rename from content/pt/docs/concepts/storage/persistent-volumes.md rename to content/pt-br/docs/concepts/storage/persistent-volumes.md diff --git a/content/pt-br/docs/tutorials/kubernetes-basics/_index.html b/content/pt-br/docs/tutorials/kubernetes-basics/_index.html index b397afba37..10b721c3c7 100644 --- a/content/pt-br/docs/tutorials/kubernetes-basics/_index.html +++ b/content/pt-br/docs/tutorials/kubernetes-basics/_index.html @@ -27,10 +27,10 @@ card:

Este tutorial fornece instruções básicas sobre o sistema de orquestração de cluster do Kubernetes. Cada módulo contém algumas informações básicas sobre os principais recursos e conceitos do Kubernetes e inclui um tutorial online interativo. Esses tutoriais interativos permitem que você mesmo gerencie um cluster simples e seus aplicativos em contêineres.

Usando os tutoriais interativos, você pode aprender a:

    -
  • Implante um aplicativo em contêiner em um cluster.
  • -
  • Dimensione a implantação.
  • -
  • Atualize o aplicativo em contêiner com uma nova versão do software.
  • -
  • Depure o aplicativo em contêiner.
  • +
  • Implantar um aplicativo em contêiner em um cluster.
  • +
  • Dimensionar a implantação.
  • +
  • Atualizar o aplicativo em contêiner com uma nova versão do software.
  • +
  • Depurar o aplicativo em contêiner.

Os tutoriais usam Katacoda para executar um terminal virtual em seu navegador da Web, executado em Minikube, uma implantação local em pequena escala do Kubernetes que pode ser executada em qualquer lugar. Não há necessidade de instalar nenhum software ou configurar nada; cada tutorial interativo é executado diretamente no navegador da web.

diff --git a/content/pt-br/examples/service/networking/network-policy-allow-all-egress.yaml b/content/pt-br/examples/service/networking/network-policy-allow-all-egress.yaml new file mode 100644 index 0000000000..42b2a2a296 --- /dev/null +++ b/content/pt-br/examples/service/networking/network-policy-allow-all-egress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-egress +spec: + podSelector: {} + egress: + - {} + policyTypes: + - Egress diff --git a/content/pt-br/examples/service/networking/network-policy-allow-all-ingress.yaml b/content/pt-br/examples/service/networking/network-policy-allow-all-ingress.yaml new file mode 100644 index 0000000000..462912dae4 --- /dev/null +++ b/content/pt-br/examples/service/networking/network-policy-allow-all-ingress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-ingress +spec: + podSelector: {} + ingress: + - {} + policyTypes: + - Ingress diff --git a/content/pt-br/examples/service/networking/network-policy-default-deny-all.yaml b/content/pt-br/examples/service/networking/network-policy-default-deny-all.yaml new file mode 100644 index 0000000000..5c0086bd71 --- /dev/null +++ b/content/pt-br/examples/service/networking/network-policy-default-deny-all.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-all +spec: + podSelector: {} + policyTypes: + - Ingress + - Egress diff --git a/content/pt-br/examples/service/networking/network-policy-default-deny-egress.yaml b/content/pt-br/examples/service/networking/network-policy-default-deny-egress.yaml new file mode 100644 index 0000000000..a4659e1417 --- /dev/null +++ b/content/pt-br/examples/service/networking/network-policy-default-deny-egress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-egress +spec: + podSelector: {} + policyTypes: + - Egress diff --git a/content/pt-br/examples/service/networking/network-policy-default-deny-ingress.yaml b/content/pt-br/examples/service/networking/network-policy-default-deny-ingress.yaml new file mode 100644 index 0000000000..e823802487 --- /dev/null +++ b/content/pt-br/examples/service/networking/network-policy-default-deny-ingress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-ingress +spec: + podSelector: {} + policyTypes: + - Ingress diff --git a/content/ru/_index.html b/content/ru/_index.html index aaf9f136f5..a460376f12 100644 --- a/content/ru/_index.html +++ b/content/ru/_index.html @@ -41,12 +41,12 @@ Kubernetes — это проект с открытым исходным кодо

-
Посетите KubeCon NA онлайн, 17-20 ноября 2020 + Посетите KubeCon в Северной Америке, 11-15 октября 2021 года



- Посетите KubeCon EU онлайн, 4 – 7 мая 2021 + Посетите KubeCon в Европе, 17-20 мая 2022 года
@@ -56,4 +56,4 @@ Kubernetes — это проект с открытым исходным кодо {{< blocks/kubernetes-features >}} -{{< blocks/case-studies >}} \ No newline at end of file +{{< blocks/case-studies >}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/names.md b/content/ru/docs/concepts/overview/working-with-objects/names.md index e73d436533..73477a1475 100644 --- a/content/ru/docs/concepts/overview/working-with-objects/names.md +++ b/content/ru/docs/concepts/overview/working-with-objects/names.md @@ -37,7 +37,7 @@ weight: 20 Некоторые типы ресурсов должны соответствовать стандарту меток DNS, который описан в [RFC 1123](https://tools.ietf.org/html/rfc1123). Таким образом, имя должно: - содержать не более 63 символов -- содержать только строчные буквенно-цифровые символы или '.' +- содержать только строчные буквенно-цифровые символы или '-' - начинаться с буквенно-цифрового символа - заканчивается буквенно-цифровым символом diff --git a/content/uk/_index.html b/content/uk/_index.html index ae7873ea7a..ebf3f6ea18 100644 --- a/content/uk/_index.html +++ b/content/uk/_index.html @@ -62,12 +62,12 @@ Kubernetes - проект з відкритим вихідним кодом. В

- Відвідайте KubeCon NA онлайн, 17-20 листопада 2020 року + Відвідайте KubeCon у Північній Америці, 11-15 жовтня 2021 року



- Відвідайте KubeCon EU онлайн, 17-20 травня 2021 року + Відвідайте KubeCon в Європі, 17-20 травня 2022 року
diff --git a/content/zh/docs/concepts/architecture/controller.md b/content/zh/docs/concepts/architecture/controller.md index e6660f1e1c..7c11a5a0d2 100644 --- a/content/zh/docs/concepts/architecture/controller.md +++ b/content/zh/docs/concepts/architecture/controller.md @@ -285,5 +285,4 @@ Kubernetes 允许你运行一个稳定的控制平面,这样即使某些内置 的一些基本知识 * 进一步学习 [Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/) * 如果你想编写自己的控制器,请看 Kubernetes 的 - [扩展模式](/zh/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns)。 - + [扩展模式](/zh/docs/concepts/extend-kubernetes/#extension-patterns)。 diff --git a/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index f11620acf2..2dd14158f0 100644 --- a/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -52,7 +52,7 @@ The aggregation layer runs in-process with the kube-apiserver. Until an extensio -APIService 的最常见实现方式是在集群中某 Pod 内运行 *扩展 API 服务器*。 +APIService 的最常见实现方式是在集群中某 Pod 内运行 *扩展 API 服务器*。 如果你在使用扩展 API 服务器来管理集群中的资源,该扩展 API 服务器(也被写成“extension-apiserver”) 一般需要和一个或多个{{< glossary_tooltip text="控制器" term_id="controller" >}}一起使用。 apiserver-builder 库同时提供构造扩展 API 服务器和控制器框架代码。 @@ -71,7 +71,7 @@ If your extension API server cannot achieve that latency requirement, consider m 扩展 API 服务器与 kube-apiserver 之间需要存在低延迟的网络连接。 发现请求需要在五秒钟或更短的时间内完成到 kube-apiserver 的往返。 -如果你的扩展 API 服务器无法满足这一延迟要求,应考虑如何更改配置已满足需要。 +如果你的扩展 API 服务器无法满足这一延迟要求,应考虑如何更改配置以满足需要。 ## {{% heading "whatsnext" %}} @@ -87,4 +87,3 @@ If your extension API server cannot achieve that latency requirement, consider m 开始使用聚合层。 * 也可以学习怎样[使用自定义资源定义扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。 * 阅读 [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io) 的规范 - diff --git a/content/zh/docs/concepts/storage/persistent-volumes.md b/content/zh/docs/concepts/storage/persistent-volumes.md index 53647be555..607a899f2b 100644 --- a/content/zh/docs/concepts/storage/persistent-volumes.md +++ b/content/zh/docs/concepts/storage/persistent-volumes.md @@ -244,7 +244,7 @@ Finalizers: [kubernetes.io/pvc-protection] You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too: --> 你也可以看到当 PV 对象的状态为 `Terminating` 且其 `Finalizers` 列表中包含 -`kubernetes.io/pvc-protection` 时,PV 对象是处于被保护状态的。 +`kubernetes.io/pv-protection` 时,PV 对象是处于被保护状态的。 ```shell kubectl describe pv task-pv-volume diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index 9cf05702b5..28ca72dd42 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -1038,12 +1038,12 @@ metadata: provisioner: kubernetes.io/azure-disk parameters: storageaccounttype: Standard_LRS - kind: Shared + kind: managed ``` * `storageaccounttype`:Azure 存储帐户 Sku 层。默认为空。 -* `kind`:可能的值是 `shared`(默认)、`dedicated` 和 `managed`。 +* `kind`:可能的值是 `shared`、`dedicated` 和 `managed`(默认)。 当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。 当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。 * `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。 diff --git a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md 
b/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 747087b059..06adbc26f3 100644 --- a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -482,7 +482,7 @@ replication controllers, replica sets or stateful sets that the Pod belongs to. An example configuration might look like follows: --> -你可以在 [调度方案(Schedulingg Profile)](/zh/docs/reference/scheduling/config/#profiles) +你可以在 [调度方案(Scheduling Profile)](/zh/docs/reference/scheduling/config/#profiles) 中将默认约束作为 `PodTopologySpread` 插件参数的一部分来设置。 约束的设置采用[如前所述的 API](#api),只是 `labelSelector` 必须为空。 选择算符是根据 Pod 所属的服务、副本控制器、ReplicaSet 或 StatefulSet 来设置的。 diff --git a/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md index 2663bf357a..e9d5e62c65 100644 --- a/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -126,12 +126,13 @@ state for some duration: ## 签名者 {#signers} +也可以指定自定义 signerName。 所有签名者都应该提供自己工作方式的信息, 以便客户端可以预期到他们的 CSR 将发生什么。 此类信息包括: @@ -423,8 +424,8 @@ O is the group that this user will belong to. You can refer to 你可以参考 [RBAC](/zh/docs/reference/access-authn-authz/rbac/) 了解标准组的信息。 ```shell -openssl genrsa -out john.key 2048 -openssl req -new -key john.key -out john.csr +openssl genrsa -out myuser.key 2048 +openssl req -new -key myuser.key -out myuser.csr ``` 需要注意的几点: - `usage` 字段必须是 '`client auth`' - `request` 字段是 CSR 文件内容的 base64 编码值。 - 要得到该值,可以执行命令 `cat john.csr | base64 | tr -d "\n"`。 + 要得到该值,可以执行命令 `cat myuser.csr | base64 | tr -d "\n"`。 证书的内容使用 base64 编码,存放在字段 `status.certificate`。 +从 CertificateSigningRequest 导出颁发的证书。 + +``` +kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt +``` + @@ -555,7 +564,7 @@ First, we need to add new credentials: 首先,我们需要添加新的凭据: ```shell -kubectl config set-credentials john --client-key=/home/vagrant/work/john.key --client-certificate=/home/vagrant/work/john.crt --embed-certs=true +kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true ``` @@ -565,16 +574,16 @@ Then, you need to add the context: 然后,你需要添加上下文: ```shell -kubectl config set-context john --cluster=kubernetes --user=john +kubectl config set-context myuser --cluster=kubernetes --user=myuser ``` -来测试一下,把上下文切换为 `john`: +来测试一下,把上下文切换为 `myuser`: ```shell -kubectl config use-context john +kubectl config use-context myuser ``` `status.conditions.reason` 字段通常设置为一个首字母大写的对机器友好的原因码; 这是一个命名约定,但你也可以随你的个人喜好设置。 -如果你想添加一个仅供人类使用的注释,那就用 `status.conditions.message` 字段。 +如果你想添加一个供人类使用的注释,那就用 `status.conditions.message` 字段。 diff --git a/content/zh/docs/reference/command-line-tools-reference/feature-gates.md b/content/zh/docs/reference/command-line-tools-reference/feature-gates.md index ff81c38d6b..57e2a4d8e8 100644 --- a/content/zh/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/zh/docs/reference/command-line-tools-reference/feature-gates.md @@ -1349,7 +1349,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `ValidateProxyRedirects`: This flag controls whether the API server should validate that redirects are only followed to the same host. Only used if the `StreamingProxyRedirects` flag is enabled. 
-- 'VolumeCapacityPriority`: Enable support for prioritizing nodes in different +- `VolumeCapacityPriority`: Enable support for prioritizing nodes in different topologies based on available PV capacity. - `VolumePVCDataSource`: Enable support for specifying an existing PVC as a DataSource. - `VolumeScheduling`: Enable volume topology aware scheduling and make the @@ -1361,7 +1361,7 @@ Each feature gate is designed for enabling/disabling a specific feature: --> - `ValidateProxyRedirects`: 这个标志控制 API 服务器是否应该验证只跟随到相同的主机的重定向。 仅在启用 `StreamingProxyRedirects` 标志时被使用。 -- 'VolumeCapacityPriority`: 基于可用 PV 容量的拓扑,启用对不同节点的优先级支持。 +- `VolumeCapacityPriority`: 基于可用 PV 容量的拓扑,启用对不同节点的优先级支持。 - `VolumePVCDataSource`:启用对将现有 PVC 指定数据源的支持。 - `VolumeScheduling`:启用卷拓扑感知调度,并使 PersistentVolumeClaim(PVC) 绑定能够了解调度决策;当与 PersistentLocalVolumes 特性门控一起使用时, diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md index ac4d11b631..96d0229b43 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md +++ b/content/zh/docs/reference/command-line-tools-reference/kube-apiserver.md @@ -96,7 +96,8 @@ the host's default interface will be used. The map from metric-label to value allow-list of this label. The key's format is <MetricName>,<LabelName>. The value's format is <allowed_value>,<allowed_value>...e.g. metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'. --> 允许使用的指标标签到指标值的映射列表。键的格式为 <MetricName>,<LabelName>. -值得格式为 <allowed_value>,<allowed_value>...。 例如:metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'。 +值的格式为 <allowed_value>,<allowed_value>...。 +例如:metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'
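Assuming the allow-list description in this hunk belongs to the kube-apiserver `--allow-metric-labels` flag (the flag name itself is not visible in the hunk), a hypothetical invocation using the placeholder metric and label names from the help text:

```shell
# Only v1, v2 and v3 are accepted for label1 of metric1; observed values
# outside the allow-list are not exported as-is but replaced by a placeholder.
kube-apiserver --allow-metric-labels "metric1,label1='v1,v2,v3'"
```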

@@ -2251,7 +2252,7 @@ are permanently removed in the release after that. --> 你要显示隐藏指标的先前版本。仅先前的次要版本有意义,不允许其他值。 格式为 <major>.<minor>,例如:"1.16"。 -这种格式的目的是确保您有机会注意到下一个版本是否隐藏了其他指标, +这种格式的目的是确保你有机会注意到下一个版本是否隐藏了其他指标, 而不是在此之后将它们从发行版中永久删除时感到惊讶。 diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md index 0e456cff71..7cb8e9778d 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -342,7 +342,7 @@ The instance prefix for the cluster. Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates. If specified, no more specific --cluster-signing-* flag may be specified. --> 包含 PEM 编码格式的 X509 CA 证书的文件名。该证书用来发放集群范围的证书。 -如果设置了此标志,则不需要锦衣设置 --cluster-signing-* 标志。 +如果设置了此标志,则不能指定更具体的--cluster-signing-* 标志。 diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md b/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md index 1c98ff35da..feeedae04c 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md @@ -4,17 +4,24 @@ content_type: tool-reference weight: 30 --- + + ## {{% heading "synopsis" %}} - - -Kubernetes 网络代理在每个节点上运行。网络代理反映了每个节点上 Kubernetes API 中定义的服务,并且可以执行简单的 TCP、UDP 和 SCTP 流转发,或者在一组后端进行循环 TCP、UDP 和 SCTP 转发。当前可通过 Docker-links-compatible 环境变量找到服务集群 IP 和端口,这些环境变量指定了服务代理打开的端口。有一个可选的插件,可以为这些集群 IP 提供集群 DNS。用户必须使用 apiserver API 创建服务才能配置代理。 +Kubernetes 网络代理在每个节点上运行。网络代理反映了每个节点上 Kubernetes API +中定义的服务,并且可以执行简单的 TCP、UDP 和 SCTP 流转发,或者在一组后端进行 +循环 TCP、UDP 和 SCTP 转发。 +当前可通过 Docker-links-compatible 环境变量找到服务集群 IP 和端口, +这些环境变量指定了服务代理打开的端口。 +有一个可选的插件,可以为这些集群 IP 提供集群 DNS。 +用户必须使用 apiserver API 创建服务才能配置代理。 ``` kube-proxy [flags] ``` - - ## {{% heading "options" %}} @@ -42,61 +53,92 @@ kube-proxy [flags] + +--add-dir-header + + +

+ +若此标志为 true,则将文件目录添加到日志消息的头部。 +

+ + + +--alsologtostderr + + +

+ +将日志输出到文件时也输出到标准错误输出(stderr)。 +

+ + --azure-container-registry-config string - +

包含 Azure 容器仓库配置信息的文件的路径。 +

- - ---bind-address 0.0.0.0     默认值: 0.0.0.0 - +--bind-address 0.0.0.0     默认值:0.0.0.0 - +

-代理服务器要使用的 IP 地址(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::) +代理服务器要使用的 IP 地址(设置为 '0.0.0.0' 表示要使用所有 IPv4 接口; +设置为 '::' 表示使用所有 IPv6 接口)。 +

+ +--bind-address-hard-fail + + +

+ +若此标志为 true,kube-proxy 会将无法绑定端口的失败操作视为致命错误并退出。 +

+ + + +--boot-id-file string     默认值:"/proc/sys/kernel/random/boot_id" + + +

+ +用来检查 Boot-ID 的文件名,用逗号隔开。 +第一个存在的文件会被使用。 +

+ + --cleanup - +

如果为 true,清理 iptables 和 ipvs 规则并退出。 - - - - - - ---cleanup-ipvs     默认值: true - - - - - -如果设置为 true 并指定了 --cleanup,则 kube-proxy 除了常规清理外,还将刷新 IPVS 规则。 +

@@ -104,11 +146,14 @@ If true and --cleanup is specified, kube-proxy will also flush IPVS rules, in ad --cluster-cidr string - +

-集群中 Pod 的 CIDR 范围。配置后,将从该范围之外发送到服务集群 IP 的流量被伪装,从 Pod 发送到外部 LoadBalancer IP 的流量将被重定向到相应的集群 IP。 +集群中 Pod 的 CIDR 范围。配置后,将从该范围之外发送到服务集群 IP +的流量被伪装,从 Pod 发送到外部 LoadBalancer IP 的流量将被重定向 +到相应的集群 IP。 +

@@ -116,96 +161,80 @@ The CIDR range of pods in the cluster. When configured, traffic sent to a Servic --config string - +

配置文件的路径。 +

- - ---config-sync-period duration     默认值: 15m0s - +--config-sync-period duration     默认值:15m0s - +

来自 apiserver 的配置的刷新频率。必须大于 0。 +

- - ---conntrack-max-per-core int32     默认值: 32768 - +--conntrack-max-per-core int32     默认值:32768 - +

-每个 CPU 核跟踪的最大 NAT 连接数(0 表示保留原样限制并忽略 conntrack-min)。 +每个 CPU 核跟踪的最大 NAT 连接数(0 表示保留当前限制并忽略 conntrack-min 设置)。 +

- - ---conntrack-min int32     默认值: 131072 - +--conntrack-min int32     默认值:131072 - +

-无论 conntrack-max-per-core 多少,要分配的 conntrack 条目的最小数量(将 conntrack-max-per-core 设置为 0 即可保持原样的限制)。 +无论 conntrack-max-per-core 多少,要分配的 conntrack +条目的最小数量(将 conntrack-max-per-core 设置为 0 即可 +保持当前的限制)。 +

- - ---conntrack-tcp-timeout-close-wait duration     默认值: 1h0m0s +--conntrack-tcp-timeout-close-wait duration     默认值:1h0m0s - +

-处于 CLOSE_WAIT 状态的 TCP 连接的 NAT 超时 +处于 CLOSE_WAIT 状态的 TCP 连接的 NAT 超时。 +

- - ---conntrack-tcp-timeout-established duration     默认值: 24h0m0s - +--conntrack-tcp-timeout-established duration     默认值:24h0m0s - +

-已建立的 TCP 连接的空闲超时(0 保持原样) +已建立的 TCP 连接的空闲超时(0 保持当前设置)。 +

@@ -213,228 +242,233 @@ Idle timeout for established TCP connections (0 to leave as-is) --detect-local-mode LocalMode - +

-用于检测本地流量的模式 +用于检测本地流量的模式。 +

---feature-gates mapStringBool +--feature-gates <逗号分隔的 'key=True|False' 对> - +

一组键=值(key=value)对,描述了 alpha/experimental 的特征。可选项有: -
APIListChunking=true|false (BETA - 默认值=true) -
APIPriorityAndFairness=true|false (ALPHA - 默认值=false) -
APIResponseCompression=true|false (BETA - 默认值=true) -
AllAlpha=true|false (ALPHA - 默认值=false) -
AllBeta=true|false (BETA - 默认值=false) -
AllowInsecureBackendProxy=true|false (BETA - 默认值=true) -
AnyVolumeDataSource=true|false (ALPHA - 默认值=false) -
AppArmor=true|false (BETA - 默认值=true) -
BalanceAttachedNodeVolumes=true|false (ALPHA - 默认值=false) -
BoundServiceAccountTokenVolume=true|false (ALPHA - 默认值=false) -
CPUManager=true|false (BETA - 默认值=true) -
CRIContainerLogRotation=true|false (BETA - 默认值=true) -
CSIInlineVolume=true|false (BETA - 默认值=true) -
CSIMigration=true|false (BETA - 默认值=true) -
CSIMigrationAWS=true|false (BETA - 默认值=false) -
CSIMigrationAWSComplete=true|false (ALPHA - 默认值=false) -
CSIMigrationAzureDisk=true|false (BETA - 默认值=false) -
CSIMigrationAzureDiskComplete=true|false (ALPHA - 默认值=false) -
CSIMigrationAzureFile=true|false (ALPHA - 默认值=false) -
CSIMigrationAzureFileComplete=true|false (ALPHA - 默认值=false) -
CSIMigrationGCE=true|false (BETA - 默认值=false) -
CSIMigrationGCEComplete=true|false (ALPHA - 默认值=false) -
CSIMigrationOpenStack=true|false (BETA - 默认值=false) -
CSIMigrationOpenStackComplete=true|false (ALPHA - 默认值=false) -
CSIMigrationvSphere=true|false (BETA - 默认值=false) -
CSIMigrationvSphereComplete=true|false (BETA - 默认值=false) -
CSIStorageCapacity=true|false (ALPHA - 默认值=false) -
CSIVolumeFSGroupPolicy=true|false (ALPHA - 默认值=false) -
ConfigurableFSGroupPolicy=true|false (ALPHA - 默认值=false) -
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false) -
DefaultPodTopologySpread=true|false (ALPHA - 默认值=false) -
DevicePlugins=true|false (BETA - 默认值=true) -
DisableAcceleratorUsageMetrics=true|false (ALPHA - 默认值=false) -
DynamicKubeletConfig=true|false (BETA - 默认值=true) -
EndpointSlice=true|false (BETA - 默认值=true) -
EndpointSliceProxying=true|false (BETA - 默认值=true) -
EphemeralContainers=true|false (ALPHA - 默认值=false) -
ExpandCSIVolumes=true|false (BETA - 默认值=true) -
ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true) -
ExpandPersistentVolumes=true|false (BETA - 默认值=true) -
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false) -
GenericEphemeralVolume=true|false (ALPHA - 默认值=false) -
HPAScaleToZero=true|false (ALPHA - 默认值=false) -
HugePageStorageMediumSize=true|false (BETA - 默认值=true) -
HyperVContainer=true|false (ALPHA - 默认值=false) -
IPv6DualStack=true|false (ALPHA - 默认值=false) -
ImmutableEphemeralVolumes=true|false (BETA - 默认值=true) -
KubeletPodResources=true|false (BETA - 默认值=true) -
LegacyNodeRoleBehavior=true|false (BETA - 默认值=true) -
LocalStorageCapacityIsolation=true|false (BETA - 默认值=true) -
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false) -
NodeDisruptionExclusion=true|false (BETA - 默认值=true) -
NonPreemptingPriority=true|false (BETA - 默认值=true) -
PodDisruptionBudget=true|false (BETA - 默认值=true) -
PodOverhead=true|false (BETA - 默认值=true) -
ProcMountType=true|false (ALPHA - 默认值=false) -
QOSReserved=true|false (ALPHA - 默认值=false) -
RemainingItemCount=true|false (BETA - 默认值=true) -
RemoveSelfLink=true|false (ALPHA - 默认值=false) -
RotateKubeletServerCertificate=true|false (BETA - 默认值=true) -
RunAsGroup=true|false (BETA - 默认值=true) -
RuntimeClass=true|false (BETA - 默认值=true) -
SCTPSupport=true|false (BETA - 默认值=true) -
SelectorIndex=true|false (BETA - 默认值=true) -
ServerSideApply=true|false (BETA - 默认值=true) -
ServiceAccountIssuerDiscovery=true|false (ALPHA - 默认值=false) -
ServiceAppProtocol=true|false (BETA - 默认值=true) -
ServiceNodeExclusion=true|false (BETA - 默认值=true) -
ServiceTopology=true|false (ALPHA - 默认值=false) -
SetHostnameAsFQDN=true|false (ALPHA - 默认值=false) -
StartupProbe=true|false (BETA - 默认值=true) -
StorageVersionHash=true|false (BETA - 默认值=true) -
SupportNodePidsLimit=true|false (BETA - 默认值=true) -
SupportPodPidsLimit=true|false (BETA - 默认值=true) -
Sysctls=true|false (BETA - 默认值=true) -
TTLAfterFinished=true|false (ALPHA - 默认值=false) -
TokenRequest=true|false (BETA - 默认值=true) -
TokenRequestProjection=true|false (BETA - 默认值=true) -
TopologyManager=true|false (BETA - 默认值=true) -
ValidateProxyRedirects=true|false (BETA - 默认值=true) -
VolumeSnapshotDataSource=true|false (BETA - 默认值=true) -
WarningHeaders=true|false (BETA - 默认值=true) -
WinDSR=true|false (ALPHA - 默认值=false) -
WinOverlay=true|false (ALPHA - 默认值=false) -
WindowsEndpointSliceProxying=true|false (ALPHA - 默认值=false) +APIListChunking=true|false (BETA - 默认值=true)
+APIPriorityAndFairness=true|false (BETA - 默认值=true)
+APIResponseCompression=true|false (BETA - 默认值=true)
+APIServerIdentity=true|false (ALPHA - 默认值=false)
+AllAlpha=true|false (ALPHA - 默认值=false)
+AllBeta=true|false (BETA - 默认值=false)
+AnyVolumeDataSource=true|false (ALPHA - 默认值=false)
+AppArmor=true|false (BETA - 默认值=true)
+BalanceAttachedNodeVolumes=true|false (ALPHA - 默认值=false)
+BoundServiceAccountTokenVolume=true|false (BETA - 默认值=true)
+CPUManager=true|false (BETA - 默认值=true)
+CSIInlineVolume=true|false (BETA - 默认值=true)
+CSIMigration=true|false (BETA - 默认值=true)
+CSIMigrationAWS=true|false (BETA - 默认值=false)
+CSIMigrationAzureDisk=true|false (BETA - 默认值=false)
+CSIMigrationAzureFile=true|false (BETA - 默认值=false)
+CSIMigrationGCE=true|false (BETA - 默认值=false)
+CSIMigrationOpenStack=true|false (BETA - 默认值=true)
+CSIMigrationvSphere=true|false (BETA - 默认值=false)
+CSIMigrationvSphereComplete=true|false (BETA - 默认值=false)
+CSIServiceAccountToken=true|false (BETA - 默认值=true)
+CSIStorageCapacity=true|false (BETA - 默认值=true)
+CSIVolumeFSGroupPolicy=true|false (BETA - 默认值=true)
+CSIVolumeHealth=true|false (ALPHA - 默认值=false)
+ConfigurableFSGroupPolicy=true|false (BETA - 默认值=true)
+ControllerManagerLeaderMigration=true|false (ALPHA - 默认值=false)
+CronJobControllerV2=true|false (BETA - 默认值=true)
+CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值=false)
+DaemonSetUpdateSurge=true|false (ALPHA - 默认值=false)
+DefaultPodTopologySpread=true|false (BETA - 默认值=true)
+DevicePlugins=true|false (BETA - 默认值=true)
+DisableAcceleratorUsageMetrics=true|false (BETA - 默认值=true)
+DownwardAPIHugePages=true|false (BETA - 默认值=false)
+DynamicKubeletConfig=true|false (BETA - 默认值=true)
+EfficientWatchResumption=true|false (BETA - 默认值=true)
+EndpointSliceProxying=true|false (BETA - 默认值=true)
+EndpointSliceTerminatingCondition=true|false (ALPHA - 默认值=false)
+EphemeralContainers=true|false (ALPHA - 默认值=false)
+ExpandCSIVolumes=true|false (BETA - 默认值=true)
+ExpandInUsePersistentVolumes=true|false (BETA - 默认值=true)
+ExpandPersistentVolumes=true|false (BETA - 默认值=true)
+ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值=false)
+GenericEphemeralVolume=true|false (BETA - 默认值=true)
+GracefulNodeShutdown=true|false (BETA - 默认值=true)
+HPAContainerMetrics=true|false (ALPHA - 默认值=false)
+HPAScaleToZero=true|false (ALPHA - 默认值=false)
+HugePageStorageMediumSize=true|false (BETA - 默认值=true)
+IPv6DualStack=true|false (BETA - 默认值=true)
+InTreePluginAWSUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginGCEUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值=false)
+InTreePluginvSphereUnregister=true|false (ALPHA - 默认值=false)
+IndexedJob=true|false (ALPHA - 默认值=false)
+IngressClassNamespacedParams=true|false (ALPHA - 默认值=false)
+KubeletCredentialProviders=true|false (ALPHA - 默认值=false)
+KubeletPodResources=true|false (BETA - 默认值=true)
+KubeletPodResourcesGetAllocatable=true|false (ALPHA - 默认值=false)
+LocalStorageCapacityIsolation=true|false (BETA - 默认值=true)
+LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值=false)
+LogarithmicScaleDown=true|false (ALPHA - 默认值=false)
+MemoryManager=true|false (ALPHA - 默认值=false)
+MixedProtocolLBService=true|false (ALPHA - 默认值=false)
+NamespaceDefaultLabelName=true|false (BETA - 默认值=true)
+NetworkPolicyEndPort=true|false (ALPHA - 默认值=false)
+NonPreemptingPriority=true|false (BETA - 默认值=true)
+PodAffinityNamespaceSelector=true|false (ALPHA - 默认值=false)
+PodDeletionCost=true|false (ALPHA - 默认值=false)
+PodOverhead=true|false (BETA - 默认值=true)
+PreferNominatedNode=true|false (ALPHA - 默认值=false)
+ProbeTerminationGracePeriod=true|false (ALPHA - 默认值=false)
+ProcMountType=true|false (ALPHA - 默认值=false)
+QOSReserved=true|false (ALPHA - 默认值=false)
+RemainingItemCount=true|false (BETA - 默认值=true)
+RemoveSelfLink=true|false (BETA - 默认值=true)
+RotateKubeletServerCertificate=true|false (BETA - 默认值=true)
+ServerSideApply=true|false (BETA - 默认值=true)
+ServiceInternalTrafficPolicy=true|false (ALPHA - 默认值=false)
+ServiceLBNodePortControl=true|false (ALPHA - 默认值=false)
+ServiceLoadBalancerClass=true|false (ALPHA - 默认值=false)
+ServiceTopology=true|false (ALPHA - 默认值=false)
+SetHostnameAsFQDN=true|false (BETA - 默认值=true)
+SizeMemoryBackedVolumes=true|false (ALPHA - 默认值=false)
+StorageVersionAPI=true|false (ALPHA - 默认值=false)
+StorageVersionHash=true|false (BETA - 默认值=true)
+SuspendJob=true|false (ALPHA - 默认值=false)
+TTLAfterFinished=true|false (BETA - 默认值=true)
+TopologyAwareHints=true|false (ALPHA - 默认值=false)
+TopologyManager=true|false (BETA - 默认值=true)
+ValidateProxyRedirects=true|false (BETA - 默认值=true)
+VolumeCapacityPriority=true|false (ALPHA - 默认值=false)
+WarningHeaders=true|false (BETA - 默认值=true)
+WinDSR=true|false (ALPHA - 默认值=false)
+WinOverlay=true|false (BETA - 默认值=true)
+WindowsEndpointSliceProxying=true|false (BETA - 默认值=true) +
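下面给出一个示意性示例,演示如何通过 --feature-gates 标志组合启用和禁用上面列出的特性门控;此处所选的门控与取值仅作说明用途,实际可用项取决于集群版本:

```shell
# 仅作示意:启用 IPv6 双协议栈,同时显式关闭 EndpointSliceProxying
kube-proxy --feature-gates=IPv6DualStack=true,EndpointSliceProxying=false
```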

- - ---healthz-bind-address 0.0.0.0     默认值: 0.0.0.0:10256 - +--healthz-bind-address 0.0.0.0     默认值:0.0.0.0:10256 - +

-服务健康检查的 IP 地址和端口(对于所有 IPv4 接口设置为 '0.0.0.0:10256',对于所有 IPv6 接口设置为 '[::]:10256') +服务健康状态检查的 IP 地址和端口(设置为 '0.0.0.0:10256' 表示使用所有 +IPv4 接口,设置为 '[::]:10256' 表示使用所有 IPv6 接口);设置为空则禁用。 - - - - - - ---healthz-bind-address 0.0.0.0     默认值: 0.0.0.0:10256 - - - - - -服务健康检查的 IP 地址和端口(设置为 0.0.0.0 表示使用所有 IPv4 接口,设置为 :: 表示使用所有 IPv6 接口) +

@@ -442,11 +476,12 @@ The IP address for the health check server to serve on (set to 0.0.0.0 for all I -h, --help - +

-kube-proxy 操作的帮助命令 +kube-proxy 操作的帮助命令。 +

@@ -454,74 +489,65 @@ kube-proxy 操作的帮助命令 --hostname-override string - +

-如果非空,将使用此字符串作为标识而不是实际的主机名。 +如果非空,将使用此字符串而不是实际的主机名作为标识。 +

- - ---iptables-masquerade-bit int32     默认值: 14 - +--iptables-masquerade-bit int32     默认值:14 - +

-如果使用纯 iptables 代理,则 fwmark 空间的 bit 用于标记需要 SNAT 的数据包。必须在 [0,31] 范围内。 +在使用纯 iptables 代理时,用来设置 fwmark 空间的 bit,标记需要 +SNAT 的数据包。必须在 [0,31] 范围内。 +

- - - --iptables-min-sync-period duration     默认值:1s - +--iptables-min-sync-period duration     默认值:1s - +

iptables 规则可以随着端点和服务的更改而刷新的最小间隔(例如 '5s'、'1m'、'2h22m')。 +

- - ---iptables-sync-period duration     默认值: 30s - +--iptables-sync-period duration     默认值:30s - +

刷新 iptables 规则的最大间隔(例如 '5s'、'1m'、'2h22m')。必须大于 0。 +
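下面的示意性片段组合了上述 iptables 相关标志;各标志取值仅为示例,实际应依据集群规模调整:

```shell
# 仅作示意:显式设置 SNAT 标记位与 iptables 规则的同步周期
kube-proxy --proxy-mode=iptables \
  --iptables-masquerade-bit=14 \
  --iptables-min-sync-period=5s \
  --iptables-sync-period=30s
```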

---ipvs-exclude-cidrs stringSlice +--ipvs-exclude-cidrs strings - +

-逗号分隔的 CIDR 列表,ipvs 代理在清理 IPVS 规则时不应使用此列表。 +逗号分隔的 CIDR 列表,ipvs 代理在清理 IPVS 规则时不会改动此列表中的地址范围。 +

@@ -529,11 +555,12 @@ A comma-separated list of CIDR's which the ipvs proxier should not touch when cl --ipvs-min-sync-period duration - +

ipvs 规则可以随着端点和服务的更改而刷新的最小间隔(例如 '5s'、'1m'、'2h22m')。 +

@@ -541,11 +568,12 @@ ipvs 规则可以随着端点和服务的更改而刷新的最小间隔(例如 --ipvs-scheduler string - +

-代理模式为 ipvs 时的 ipvs 调度器类型 +代理模式为 ipvs 时所选的 ipvs 调度器类型。 +

@@ -553,28 +581,26 @@ The ipvs scheduler type when proxy mode is ipvs --ipvs-strict-arp - +

-通过将 arp_ignore 设置为 1 并将 arp_announce 设置为 2 启用严格的 ARP +通过将 arp_ignore 设置为 1 并将 arp_announce +设置为 2 启用严格的 ARP。 +

- - ---ipvs-sync-period duration     默认值: 30s - +--ipvs-sync-period duration     默认值:30s - +

刷新 ipvs 规则的最大间隔(例如 '5s'、'1m'、'2h22m')。必须大于 0。 +

@@ -583,11 +609,12 @@ The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h --ipvs-tcp-timeout duration - +

空闲 IPVS TCP 连接的超时时间,0 表示保持当前设置不变(例如 '5s'、'1m'、'2h22m')。 +

@@ -595,11 +622,12 @@ The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', ' --ipvs-tcpfin-timeout duration - +

-收到 FIN 数据包后,IPVS TCP 连接的超时,0 保持连接不变(例如 '5s'、'1m'、'2h22m')。 +收到 FIN 数据包后,IPVS TCP 连接的超时,0 保持当前设置不变。(例如 '5s'、'1m'、'2h22m')。 +

@@ -607,62 +635,51 @@ The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as --ipvs-udp-timeout duration - +

-IPVS UDP 数据包的超时,0 保持连接不动(例如 '5s'、'1m'、'2h22m')。 +IPVS UDP 数据包的超时,0 保持当前设置不变。(例如 '5s'、'1m'、'2h22m')。 +
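以下是将上述 IPVS 相关标志组合使用的一个示意性片段;其中的调度器、CIDR 与超时取值仅为示例:

```shell
# 仅作示意:使用 IPVS 模式与轮转(rr)调度器,并在清理规则时排除某个地址段
kube-proxy --proxy-mode=ipvs \
  --ipvs-scheduler=rr \
  --ipvs-exclude-cidrs=10.96.0.0/12 \
  --ipvs-strict-arp \
  --ipvs-tcp-timeout=900s
```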

- - ---kube-api-burst int32     默认值: 10 - +--kube-api-burst int32     默认值:10 - +

-与 kubernetes apiserver 通信的数量 +与 kubernetes apiserver 通信的突发数量。 +

- - ---kube-api-content-type string     默认值: "application/vnd.kubernetes.protobuf" - +--kube-api-content-type string     默认值:"application/vnd.kubernetes.protobuf" - +

发送到 apiserver 的请求的内容类型。 +

- - ---kube-api-qps float32     默认值: 5 - +--kube-api-qps float32     默认值:5 - +

-与 kubernetes apiserver 交互时使用的 QPS +与 kubernetes apiserver 交互时使用的 QPS。 +

@@ -670,20 +687,69 @@ QPS to use while talking with kubernetes apiserver --kubeconfig string - +

-包含授权信息的 kubeconfig 文件的路径(master 位置由 master 标志设置)。 +包含鉴权信息的 kubeconfig 文件的路径(主控节点位置由 master 标志设置)。 +

- +--log-backtrace-at &lt;形式为 'file:N' 的字符串&gt;     默认值::0 + + +

---log-flush-frequency duration     默认值: 5s +当日志逻辑执行到文件 file 的第 N 行时,输出调用堆栈跟踪。 +

+ + + + +--log-dir string + + +

+ +若此标志非空,则将日志文件写入到此标志所给的目录下。

+ + + + +--log-file string + + +

+ +若此标志非空,则使用该字符串作为日志文件名。

+ + + +--log-file-max-size uint     默认值:1800 + + +

+ +定义日志文件可增长到的最大尺寸。单位是兆字节(MB)。 +如果此值为 0,则最大文件大小无限制。 +
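以下为一个示意性片段,组合使用上述日志文件相关标志;路径与大小取值仅为假设:

```shell
# 仅作示意:将日志写入指定目录,并将单个日志文件上限设为 100 MB
kube-proxy --log-dir=/var/log/kube-proxy --log-file-max-size=100
```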

+ + + + +--log-flush-frequency duration     默认值:5s @@ -691,19 +757,34 @@ Path to kubeconfig file with authorization information (the master location is s -两次日志刷新之间的最大秒数 +两次日志刷新之间的最大秒数。 + +--machine-id-file string     默认值:"/etc/machine-id,/var/lib/dbus/machine-id" + + +

+ +用来检查 Machine-ID 的文件列表,用逗号分隔。 +使用找到的第一个文件。 +

+ + --masquerade-all - +

-如果使用纯 iptables 代理,则对通过服务集群 IP 发送的所有流量进行 SNAT(通常不需要) +如果使用纯 iptables 代理,则对通过服务集群 IP 发送的所有流量 +进行 SNAT(通常不需要)。 +

@@ -711,78 +792,70 @@ If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs --master string - +

-Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值) +Kubernetes API 服务器的地址(覆盖 kubeconfig 中的相关值)。 +
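以下示意性片段展示如何配置 kube-proxy 访问 API 服务器时的客户端参数;其中的路径、地址与限流取值均为假设:

```shell
# 仅作示意:指定 kubeconfig 与 API 服务器地址,并调高客户端限流参数
kube-proxy \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --master=https://192.168.1.10:6443 \
  --kube-api-qps=10 \
  --kube-api-burst=20
```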

- - ---metrics-bind-address ipport 0.0.0.0     默认值: 127.0.0.1:10249 - +--metrics-bind-address ipport     默认值:127.0.0.1:10249 - +

metrics 服务器要使用的 IP 地址和端口 -(设置为 '0.0.0.0:10249' 则使用 IPv4 接口,设置为 '[::]:10249' 则使用所有 IPv6 接口) +(设置为 '0.0.0.0:10249' 则使用所有 IPv4 接口,设置为 '[::]:10249' 则使用所有 IPv6 接口);设置为空则禁用。 +
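下面的示意性片段同时设置健康检查与指标服务器的监听地址;地址与端口仅为示例:

```shell
# 仅作示意:在所有 IPv4 接口上提供健康检查,指标端点仅限本机访问
kube-proxy \
  --healthz-bind-address=0.0.0.0:10256 \
  --metrics-bind-address=127.0.0.1:10249
```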

- - ---metrics-port int32     默认值: 10249 - +--nodeport-addresses strings - - -绑定 metrics 服务器的端口。使用 0 表示禁用。 - - - - ---nodeport-addresses stringSlice - - - +

-一个字符串值,指定用于 NodePorts 的地址。值可以是有效的 IP 块(例如 1.2.3.0/24, 1.2.3.4/32)。默认的空字符串切片([])表示使用所有本地地址。 +一个字符串值,指定用于 NodePort 服务的地址。 +值可以是有效的 IP 块(例如 1.2.3.0/24, 1.2.3.4/32)。 +默认的空字符串切片([])表示使用所有本地地址。 +
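示意性示例(地址块仅为假设):

```shell
# 仅作示意:仅在节点的 192.168.0.0/24 地址上响应 NodePort 流量
kube-proxy --nodeport-addresses=192.168.0.0/24
```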

- - ---oom-score-adj int32     默认值: -999 - +--one-output - +

+ +若此标志为 true,则仅将日志写入到其原本的严重性级别 +(而不是同时将其写入到所有更低严重性级别中)。 +

+ + + +--oom-score-adj int32     默认值:-999 + + +

-kube-proxy 进程中的 oom-score-adj 值必须在 [-1000,1000] 范围内 +kube-proxy 进程中的 oom-score-adj 值,必须在 [-1000,1000] 范围内。 +

@@ -790,23 +863,28 @@ kube-proxy 进程中的 oom-score-adj 值必须在 [-1000,1000] 范围内 --profiling - +

-如果为 true,则通过 Web 接口 /debug/pprof 启用性能分析。 +如果为 true,则通过 Web 接口 /debug/pprof 启用性能分析。 +

---proxy-mode ProxyMode +--proxy-mode string - +

-使用哪种代理模式:'userspace'(较旧)或 'iptables'(较快)或 'ipvs'(实验)。如果为空,使用最佳可用代理(当前为 iptables)。如果选择了 iptables 代理,无论如何,但系统的内核或 iptables 版本较低,这总是会回退到用户空间代理。 +使用哪种代理模式:'userspace'(较旧)或 'iptables'(较快)或 'ipvs'。 +如果为空,使用最佳可用代理(当前为 iptables)。 +如果选择了 iptables 代理(无论是否为显式设置),但系统的内核或 +iptables 版本较低,总是会回退到 userspace 代理。 +

@@ -814,11 +892,14 @@ Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs'. I --proxy-port-range port-range - +

-可以使用代理服务流量的主机端口(包括 beginPort-endPort、single port、beginPort+offset)的范围。如果(未指定,0 或 0-0)则随机选择端口。 +可以用来代理服务流量的主机端口范围(包括'起始端口-结束端口'、 +'单个端口'、'起始端口+偏移'几种形式)。 +如果未指定或者设置为 0(或 0-0),则随机选择端口。 +
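以下示意性片段展示代理模式与端口范围这两类设置的典型写法;端口范围仅为示例:

```shell
# 仅作示意:显式选择 iptables 模式
kube-proxy --proxy-mode=iptables

# 仅作示意:userspace 模式下限定可用来代理流量的主机端口范围
kube-proxy --proxy-mode=userspace --proxy-port-range=30000-32767
```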

@@ -826,56 +907,117 @@ Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusi --show-hidden-metrics-for-version string - +

-你要显示隐藏指标的先前版本。 +要显示隐藏指标的先前版本。 仅先前的次要版本有意义,不允许其他值。 格式为 <major>.<minor> ,例如:'1.16'。 这种格式的目的是确保你有机会注意到下一个发行版是否隐藏了其他指标, 而不是在之后将其永久删除时感到惊讶。 +
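示意性示例(版本号仅为假设):

```shell
# 仅作示意:临时重新暴露自 1.16 起被隐藏的指标,便于完成迁移
kube-proxy --show-hidden-metrics-for-version=1.16
```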

- - ---udp-timeout duration     默认值: 250ms - +--skip-headers - +

+ +若此标志为 true,则避免在日志消息中包含头部前缀。 +

+ + + +--skip-log-headers + + +

+ +如果此标志为 true,则避免在打开日志文件时使用头部。 +

+ + + +--stderrthreshold int     默认值:2 + + +

+ +如果日志消息处于或者高于此阈值所设置的级别,则将其输出到标准错误输出(stderr)。 +

+ + + +--udp-timeout duration     默认值:250ms + + +

-空闲 UDP 连接将保持打开的时长(例如 '250ms','2s')。必须大于 0。仅适用于 proxy-mode=userspace +空闲 UDP 连接将保持打开的时长(例如 '250ms','2s')。必须大于 0。 +仅适用于 proxy-mode=userspace。 +

+ +-v, --v int + + +

+ +用来设置日志详细程度的数值。 +

+ + --version version[=true] - +

-打印版本信息并退出 +打印版本信息并退出。 +

+ +--vmodule &lt;逗号分隔的 'pattern=N' 设置&gt; + + +

+ +用逗号分隔的列表,其中每一项为 'pattern=N' 格式。 +用来支持基于文件过滤的日志机制。 +
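下面的示意性片段组合了上述日志详细程度相关标志;模块模式与级别取值仅为示例:

```shell
# 仅作示意:全局日志级别设为 2,同时对匹配 proxier 的源文件输出级别 4 的日志
kube-proxy -v=2 --vmodule=proxier=4
```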

+ + --write-config-to string - +

-如果设置,将配置值写入此文件并退出。 +如果设置,将默认配置信息写入此文件并退出。 +
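示意性示例(输出路径仅为假设):

```shell
# 仅作示意:将默认配置写出到文件后退出,便于检查或进一步编辑
kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
```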

diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md index 60f3772c07..4151e1bc99 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md @@ -299,8 +299,7 @@ A kubelet authenticating using bootstrap tokens is authenticated as a user in th @@ -354,7 +353,7 @@ If you want to use bootstrap tokens, you must enable it on kube-apiserver with t 如果你希望使用启动引导令牌,你必须在 kube-apiserver 上使用下面的标志启用之: -``` +```console --enable-bootstrap-token-auth=true ``` @@ -373,7 +372,7 @@ kube-apiserver 能够将令牌视作身份认证依据。 至少 128 位混沌数据。这里的随机数生成器可以是现代 Linux 系统上的 `/dev/urandom`。生成令牌的方式有很多种。例如: -``` +```shell head -c 16 /dev/urandom | od -An -t x | tr -d ' ' ``` @@ -388,7 +387,7 @@ values can be anything and the quoted group name should be as depicted: 令牌文件看起来是下面的例子这样,其中前面三个值可以是任何值,用引号括起来 的组名称则只能用例子中给的值。 -``` +```console 02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers" ``` @@ -406,9 +405,13 @@ further details. ### 授权 kubelet 创建 CSR {#authorize-kubelet-to-create-csr} @@ -420,7 +423,7 @@ To do this, you just need to create a `ClusterRoleBinding` that binds the `syste 为了实现这一点,你只需要创建 `ClusterRoleBinding`,将 `system:bootstrappers` 组绑定到集群角色 `system:node-bootstrapper`。 -``` +```yaml # 允许启动引导节点创建 CSR apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding @@ -495,7 +498,7 @@ kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA 要将 Kubernetes CA 密钥和证书提供给 kube-controller-manager,可使用以下标志: -``` +```shell --cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key" ``` @@ -504,7 +507,7 @@ For example: --> 例如: -``` +```shell --cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" --cluster-signing-key-file="/var/lib/kubernetes/ca-key.pem" ``` @@ -513,7 +516,7 @@ The validity duration of signed certificates can be configured with flag: --> 所签名的证书的合法期限可以通过下面的标志来配置: -``` +```shell --cluster-signing-duration ``` @@ -602,7 +605,7 @@ collection. 
--> 作为 [kube-controller-manager](/zh/docs/reference/generated/kube-controller-manager/) 的一部分的 `csrapproving` 控制器是自动被启用的。 -该控制器使用 [`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) +该控制器使用 [`SubjectAccessReview` API](/zh/docs/reference/access-authn-authz/authorization/#checking-api-access) 来确定是否某给定用户被授权请求 CSR,之后基于鉴权结果执行批复操作。 为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。 该组件仅是忽略未被授权的请求。 @@ -682,7 +685,7 @@ The important elements to note are: diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet.md b/content/zh/docs/reference/command-line-tools-reference/kubelet.md index 6d7d2e757a..f542978d01 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/zh/docs/reference/command-line-tools-reference/kubelet.md @@ -8,13 +8,15 @@ weight: 28 kubelet 是在每个 Node 节点上运行的主要 “节点代理”。它可以使用以下之一向 apiserver 注册: 主机名(hostname);覆盖主机名的参数;某云驱动的特定逻辑。 kubelet 是基于 PodSpec 来工作的。每个 PodSpec 是一个描述 Pod 的 YAML 或 JSON 对象。 kubelet 接受通过各种机制(主要是通过 apiserver)提供的一组 PodSpec,并确保这些 @@ -431,9 +433,9 @@ kubelet 将从此标志所指的文件中加载其初始配置。此路径可以 -<警告:beta 特性> 设置容器的日志文件个数上限。此值必须不小于 2。 +设置容器的日志文件个数上限。此值必须不小于 2。 此标志只能与 --container-runtime=remote 标志一起使用。 已弃用:应在 --config 所给的配置文件中进行设置。 (进一步了解) @@ -446,10 +448,9 @@ kubelet 将从此标志所指的文件中加载其初始配置。此路径可以 -<警告:beta 特性> 设置容器日志文件在轮换生成新文件时之前的最大值 -(例如,10Mi)。 +设置容器日志文件在轮换生成新文件时之前的最大值(例如,10Mi)。 此标志只能与 --container-runtime=remote 标志一起使用。 已弃用:应在 --config 所给的配置文件中进行设置。 (进一步了解) @@ -892,7 +893,6 @@ AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
-CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
@@ -984,7 +984,6 @@ AppArmor=true|false (BETA - 默认值为 true)
BalanceAttachedNodeVolumes=true|false (ALPHA - 默认值为 false)
BoundServiceAccountTokenVolume=true|false (ALPHA - 默认值为 false)
CPUManager=true|false (BETA - 默认值为 true)
-CRIContainerLogRotation=true|false (BETA - 默认值为 true)
CSIInlineVolume=true|false (BETA - 默认值为 true)
CSIMigration=true|false (BETA - 默认值为 true)
CSIMigrationAWS=true|false (BETA - 默认值为 false)
@@ -1814,10 +1813,12 @@ The CIDR to use for pod IP addresses, only used in standalone mode. In cluster m -指定基础设施镜像,Pod 内所有容器与其共享网络和 IPC 命名空间。 -仅当容器运行环境设置为 docker 时,此特定于 docker 的参数才有效。 +所指定的镜像不会被镜像垃圾收集器删除。 +当容器运行环境设置为 docker 时,各个 Pod 中的所有容器都会 +使用此镜像中的网络和 IPC 名字空间。 +其他 CRI 实现有自己的配置来设置此镜像。 diff --git a/content/zh/docs/reference/glossary/disruption.md b/content/zh/docs/reference/glossary/disruption.md index 29c7e1ebe6..2b59797a39 100644 --- a/content/zh/docs/reference/glossary/disruption.md +++ b/content/zh/docs/reference/glossary/disruption.md @@ -42,6 +42,6 @@ Kubernetes terms that an _involuntary disruption_. See [Disruptions](/docs/concepts/workloads/pods/disruptions/) for more information. --> 如果您作为一个集群操作人员,销毁了一个从属于某个应用的 Pod, Kubernetes 视之为 _自愿干扰(Voluntary Disruption)_。如果由于节点故障 -或者影响更大区域故障的断电导致 Pod 离线,Kubrenetes 视之为 _非愿干扰(Involuntary Disruption)_。 +或者影响更大区域故障的断电导致 Pod 离线,kubernetes 视之为 _非愿干扰(Involuntary Disruption)_。 更多信息请查阅[Disruptions](/zh/docs/concepts/workloads/pods/disruptions/) \ No newline at end of file diff --git a/content/zh/docs/reference/glossary/manifest.md b/content/zh/docs/reference/glossary/manifest.md index 041cc0c0d1..811e59a7cf 100644 --- a/content/zh/docs/reference/glossary/manifest.md +++ b/content/zh/docs/reference/glossary/manifest.md @@ -29,4 +29,4 @@ tags: -清单指定了在应用该清单时 Kubrenetes 将维护的对象的期望状态。每个配置文件可包含多个清单。 +清单指定了在应用该清单时 kubernetes 将维护的对象的期望状态。每个配置文件可包含多个清单。 diff --git a/content/zh/docs/reference/glossary/platform-developer.md b/content/zh/docs/reference/glossary/platform-developer.md index 0f533b170c..41e8b99995 100644 --- a/content/zh/docs/reference/glossary/platform-developer.md +++ b/content/zh/docs/reference/glossary/platform-developer.md @@ -44,7 +44,7 @@ Others develop closed-source commercial or site-specific extensions. 平台开发人员可以使用[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 或[使用汇聚层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) 来为其 Kubernetes 实例增加功能,特别是为其应用程序添加功能。 -一些平台开发人员也是 Kubrenetes {{< glossary_tooltip text="贡献者" term_id="contributor" >}}, +一些平台开发人员也是 kubernetes {{< glossary_tooltip text="贡献者" term_id="contributor" >}}, 他们会开发贡献给 Kubernetes 社区的扩展。 另一些平台开发人员则开发封闭源代码的商业扩展或用于特定网站的扩展。 diff --git a/content/zh/docs/reference/kubectl/jsonpath.md b/content/zh/docs/reference/kubectl/jsonpath.md index c1302ec3d6..9d9c82b974 100644 --- a/content/zh/docs/reference/kubectl/jsonpath.md +++ b/content/zh/docs/reference/kubectl/jsonpath.md @@ -3,7 +3,7 @@ title: JSONPath 支持 content_type: concept weight: 25 --- - Kubectl 支持 JSONPath 模板。 - JSONPath 模板由 {} 包起来的 JSONPath 表达式组成。Kubectl 使用 JSONPath 表达式来过滤 JSON 对象中的特定字段并格式化输出。除了原始的 JSONPath 模板语法,以下函数和语法也是有效的: - 1. 使用双引号将 JSONPath 表达式内的文本引起来。 2. 使用 `range`,`end` 运算符来迭代列表。 3. 
使用负片索引后退列表。负索引不会“环绕”列表,并且只要 `-index + listLength> = 0` 就有效。 {{< note >}} - - `$` 运算符是可选的,因为默认情况下表达式总是从根对象开始。 @@ -48,8 +48,8 @@ JSONPath 模板由 {} 包起来的 JSONPath 表达式组成。Kubectl 使用 JSO {{< /note >}} - 给定 JSON 输入: @@ -90,7 +90,7 @@ Given the JSON input: } ``` - 函数 | 描述 | 示例 | 结果 --------------------|---------------------------|-----------------------------------------------------------------|------------------ @@ -117,8 +117,8 @@ Function | Description | Example `range`, `end` | 迭代列表 | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` `''` | 引用解释执行字符串 | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2` - 使用 `kubectl` 和 JSONPath 表达式的示例: @@ -131,7 +131,7 @@ kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}" kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' ``` - {{< note >}} -在 Windows 上,您必须用双引号把任何包含空格的 JSONPath 模板(不是上面 bash 所示的单引号)。 +在 Windows 上,对于任何包含空格的 JSONPath 模板,您必须使用双引号(不是上面 bash 所示的单引号)。 反过来,这意味着您必须在模板中的所有文字周围使用单引号或转义的双引号。 例如: @@ -176,4 +176,3 @@ kubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.nam kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image' ``` {{< /note >}} - diff --git a/content/zh/docs/reference/setup-tools/kubeadm/_index.md b/content/zh/docs/reference/setup-tools/kubeadm/_index.md index 7b8c2ac158..2c6e1be1e9 100644 --- a/content/zh/docs/reference/setup-tools/kubeadm/_index.md +++ b/content/zh/docs/reference/setup-tools/kubeadm/_index.md @@ -69,7 +69,7 @@ To install kubeadm, see the [installation guide](/docs/setup/production-environm 用于管理 `kubeadm join` 使用的令牌 * [kubeadm reset](/zh/docs/reference/setup-tools/kubeadm/kubeadm-reset) 用于恢复通过 `kubeadm init` 或者 `kubeadm join` 命令对节点进行的任何变更 -* [kubeadm certs](/docs/reference/setup-tools/kubeadm/kubeadm-certs) +* [kubeadm certs](/zh/docs/reference/setup-tools/kubeadm/kubeadm-certs) 用于管理 Kubernetes 证书 * [kubeadm kubeconfig](/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig) 用于管理 kubeconfig 文件 diff --git a/content/zh/docs/reference/tools/_index.md b/content/zh/docs/reference/tools/_index.md new file mode 100644 index 0000000000..2b7fefbe32 --- /dev/null +++ b/content/zh/docs/reference/tools/_index.md @@ -0,0 +1,101 @@ +--- +title: 其他工具 +content_type: concept +weight: 80 +no_list: true +--- + + + + + +Kubernetes 包含多个内置工具来帮助你使用 Kubernetes 系统。 + + + + + +## Minikube + +[`minikube`](https://minikube.sigs.k8s.io/docs/) +是一种在你的工作站上本地运行单节点 Kubernetes 集群的工具,用于开发和测试。 + + +## 仪表盘 + +[`Dashboard`](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/), +基于 Web 的 Kubernetes 用户界面, +允许你将容器化的应用程序部署到 Kubernetes 集群, +对它们进行故障排查,并管理集群及其资源本身。 + + +## Helm + +[`Kubernetes Helm`](https://github.com/kubernetes/helm) +是一个用于管理预配置 Kubernetes 资源包的工具,也就是 Kubernetes 图表。 + + +使用 Helm 来: + +* 查找和使用打包为 Kubernetes 图表的流行软件 +* 将你自己的应用程序共享为 Kubernetes 图表 +* 为你的 Kubernetes 应用程序创建可重现的构建 +* 智能管理你的 Kubernetes 清单文件 +* 管理 Helm 包的发布 + + +## Kompose + +[`Kompose`](https://github.com/kubernetes/kompose) +是一个帮助 Docker Compose 用户迁移到 Kubernetes 的工具。 + + + +使用 Kompose: + +* 将 Docker Compose 文件翻译成 Kubernetes 对象 +* 从本地 Docker 开发转到通过 Kubernetes 管理你的应用程序 +* 转换 Docker Compose v1 或 v2 版本的 `yaml` 文件或[分布式应用程序包](https://docs.docker.com/compose/bundles/) \ No newline at end of file diff --git a/content/zh/docs/reference/using-api/api-concepts.md b/content/zh/docs/reference/using-api/api-concepts.md index d44feae85c..4637ef108f 100644 --- 
a/content/zh/docs/reference/using-api/api-concepts.md +++ b/content/zh/docs/reference/using-api/api-concepts.md @@ -463,7 +463,7 @@ Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json GET 和 LIST 操作的语义含义如下: @@ -1142,7 +1142,7 @@ reply with a `410 Gone` HTTP response. ### 不可用的资源版本 {#unavailable-resource-versions} diff --git a/content/zh/docs/reference/using-api/client-libraries.md b/content/zh/docs/reference/using-api/client-libraries.md index cf01bcc56e..fd8ca85e6f 100644 --- a/content/zh/docs/reference/using-api/client-libraries.md +++ b/content/zh/docs/reference/using-api/client-libraries.md @@ -5,13 +5,11 @@ weight: 30 --- @@ -58,22 +56,21 @@ The following client libraries are officially maintained by -| 语言 | 客户端库 | 样例程序 | -|----------|----------------|-----------------| -| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [浏览](https://github.com/kubernetes/client-go/tree/master/examples) -| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [浏览](https://github.com/kubernetes-client/python/tree/master/examples) -| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [浏览](https://github.com/kubernetes-client/java#installation) +| 语言 | 客户端库 | 样例程序 | +|---------|-----------------|-----------------| | dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [浏览](https://github.com/kubernetes-client/csharp/tree/master/examples/simple) -| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [浏览](https://github.com/kubernetes-client/javascript/tree/master/examples) +| Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [浏览](https://github.com/kubernetes/client-go/tree/master/examples) | Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | [浏览](https://github.com/kubernetes-client/haskell/tree/master/kubernetes-client/example) - +| Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [浏览](https://github.com/kubernetes-client/java#installation) +| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [浏览](https://github.com/kubernetes-client/javascript/tree/master/examples) +| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [浏览](https://github.com/kubernetes-client/python/tree/master/examples) | 语言 | 客户端库 | | -------------------- | ---------------------------------------- | | Clojure | [github.com/yanatan16/clj-kubernetes-api](https://github.com/yanatan16/clj-kubernetes-api) | +| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | +| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) | +| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) | +| Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) | | Go | [github.com/ericchiang/k8s](https://github.com/ericchiang/k8s) | | Java (OSGi) | [bitbucket.org/amdatulabs/amdatu-kubernetes](https://bitbucket.org/amdatulabs/amdatu-kubernetes) | | Java (Fabric8, OSGi) | [github.com/fabric8io/kubernetes-client](https://github.com/fabric8io/kubernetes-client) | @@ -149,16 +151,11 @@ their authors, not the Kubernetes team. 
| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | +| Ruby | [github.com/k8s-ruby/k8s-ruby](https://github.com/k8s-ruby/k8s-ruby) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | | Scala | [github.com/hagay3/skuber](https://github.com/hagay3/skuber) | | Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | | Swift | [github.com/swiftkube/client](https://github.com/swiftkube/client) | -| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | -| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) | -| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) | -| Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) | - - diff --git a/content/zh/docs/reference/using-api/deprecation-policy.md b/content/zh/docs/reference/using-api/deprecation-policy.md index 27c2a5b636..50d77df4cf 100644 --- a/content/zh/docs/reference/using-api/deprecation-policy.md +++ b/content/zh/docs/reference/using-api/deprecation-policy.md @@ -171,7 +171,8 @@ This covers the [maximum supported version skew of 2 releases](/docs/setup/relea * **Beta: 9 个月或者 3 个发布版本(取其较长者)** * **Alpha: 0 个发布版本** -这里也包含了关于[最大支持 2 个发布版本的版本偏差](/zh/docs/setup/release/version-skew-policy/)的约定。 +这里也包含了关于[最大支持 2 个发布版本的版本偏差](/zh/docs/setup/release/version-skew-policy/) +的约定。 {{< note >}} -在[#52185](https://github.com/kubernetes/kubernetes/issues/52185)被解决之前, +在 [#52185](https://github.com/kubernetes/kubernetes/issues/52185) 被解决之前, 已经被保存到持久性存储中的 API 版本都不可以被去除。 你可以禁止这些版本所对应的 REST 末端(在符合本文中弃用时间线的前提下), 但是 API 服务器必须仍能解析和转换存储中以前写入的数据。 @@ -699,6 +700,14 @@ therefore the rules for deprecation are as follows: 特性门控的版本管理与之前讨论的组件版本管理不同,因此其对应的弃用策略如下: + **规则 #8:特性门控所对应的功能特性经历下面所列的成熟性阶段转换时,特性门控 必须被弃用。特性门控弃用时必须在以下时长内保持其功能可用:** @@ -730,8 +739,7 @@ this impacts removal of a metric during a Kubernetes release. These classes are determined by the perceived importance of the metric. 
The rules for deprecating and removing a metric are as follows: --> - -### 弃用度量值 {#Deprecating a metric} +### 弃用度量值 {#deprecating-a-metric} Kubernetes 控制平面的每个组件都公开度量值(通常是 `/metrics` 端点),它们通常由集群管理员使用。 并不是所有的度量值都是同样重要的:一些度量值通常用作 SLIs 或被使用来确定 SLOs,这些往往比较重要。 @@ -755,20 +763,25 @@ Kubernetes 控制平面的每个组件都公开度量值(通常是 `/metrics` --> **规则 #9a: 对于相应的稳定性类别,度量值起作用的周期必须不小于:** - * **STABLE: 4 个发布版本或者 12 个月 (取其较长者)** - * **ALPHA: 0 个发布版本** +* **STABLE: 4 个发布版本或者 12 个月 (取其较长者)** +* **ALPHA: 0 个发布版本** **规则 #9b: 在度量值被宣布启用之后,它起作用的周期必须不小于:** - * **STABLE: 3 个发布版本或者 9 个月 (取其较长者)** - * **ALPHA: 0 个发布版本** +* **STABLE: 3 个发布版本或者 9 个月 (取其较长者)** +* **ALPHA: 0 个发布版本** +已弃用的度量值将在其描述文本前加上一个已弃用通知字符串 '(Deprecated from x.y)', +并将在度量值被记录期间发出警告日志。就像稳定的、未被弃用的度量指标一样, +被弃用的度量值将自动注册到 metrics 端点,因此被弃用的度量值也是可见的。 + -已弃用的度量值将在其描述文本前加上一个已弃用通知字符串 '(Deprecated from x.y)', -并将在度量值被记录期间发出警告日志。就像稳定的、未被弃用的度量指标一样, -被弃用的度量值将自动注册到 metrics 端点,因此被弃用的度量值也是可见的。 - 在随后的版本中(当度量值 `deprecatedVersion` 等于_当前 Kubernetes 版本 - 3_), 被弃用的度量值将变成 _隐藏(Hidden)_ metric 度量值。 与被弃用的度量值不同,隐藏的度量值将不再被自动注册到 metrics 端点(因此被隐藏)。 -但是,它们可以通过可执行文件的命令行标志显式启用(`--show-hidden-metrics-for-version=`)。 - +但是,它们可以通过可执行文件的命令行标志显式启用 +(`--show-hidden-metrics-for-version=`)。 如果集群管理员不能对早期的弃用警告作出反应,这一设计就为他们提供了抓紧迁移弃用度量值的途径。 隐藏的度量值应该在再过一个发行版本后被删除。 diff --git a/content/zh/docs/reference/using-api/server-side-apply.md b/content/zh/docs/reference/using-api/server-side-apply.md index 207f8b0796..aa0b3fef20 100644 --- a/content/zh/docs/reference/using-api/server-side-apply.md +++ b/content/zh/docs/reference/using-api/server-side-apply.md @@ -5,7 +5,6 @@ weight: 25 min-kubernetes-server-version: 1.16 --- @@ -25,15 +23,15 @@ min-kubernetes-server-version: 1.16 ## 简介 {#introduction} 服务器端应用协助用户、控制器通过声明式配置的方式管理他们的资源。 -它发送完整描述的目标(A fully specified intent), +客户端可以发送完整描述的目标(A fully specified intent), 声明式地创建和/或修改 [对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)。 @@ -84,7 +82,7 @@ Server side apply is meant both as a replacement for the original `kubectl apply` and as a simpler mechanism for controllers to enact their changes. If you have Server Side Apply enabled, the control plane tracks managed fields -for all newlly created objects. +for all newly created objects. --> 服务器端应用既是原有 `kubectl apply` 的替代品, 也是控制器发布自身变化的一个简化机制。 @@ -133,7 +131,7 @@ the appliers, results in a conflict. Shared field owners may give up ownership of a field by removing it from their configuration. Field management is stored in a`managedFields` field that is part of an object's -[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta). +[`metadata`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta). A simple example of an object created by Server Side Apply could look like this: --> @@ -142,7 +140,8 @@ A simple example of an object created by Server Side Apply could look like this: 共享字段的所有者可以放弃字段的所有权,这只需从配置文件中删除该字段即可。 字段管理的信息存储在 `managedFields` 字段中,该字段是对象的 -[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta)中的一部分。 +[`metadata`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta) +中的一部分。 服务器端应用创建对象的简单示例如下: @@ -356,15 +355,14 @@ would have failed due to conflicting ownership. The merging strategy, implemented with Server Side Apply, provides a generally more stable object lifecycle. Server Side Apply tries to merge fields based on -the fact who manages them instead of overruling just based on values. 
This way -it is intended to make it easier and more stable for multiple actors updating -the same object by causing less unexpected interference. +the actor who manages them instead of overruling based on values. This way +multiple actors can update the same object without causing unexpected interference. --> ## 合并策略 {#merge-strategy} 由服务器端应用实现的合并策略,提供了一个总体更稳定的对象生命周期。 -服务器端应用试图依据谁管理它们来合并字段,而不只是根据值来否决。 -这么做是为了多个参与者可以更简单、更稳定的更新同一个对象,且避免引起意外干扰。 +服务器端应用试图依据负责管理它们的主体来合并字段,而不是根据值来否决。 +这么做是为了多个主体可以更新同一个对象,且不会引起意外的相互干扰。 Kubernetes 1.16 和 1.17 中添加了一些标记, @@ -399,18 +397,116 @@ Kubernetes 1.16 和 1.17 中添加了一些标记, | Golang 标记 | OpenAPI extension | 可接受的值 | 描述 | 引入版本 | |---|---|---|---|---| -| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | 适用于 list。 `atomic` 和 `set` 适用于只包含标量元素的 list。 `map` 适用于只包含嵌套类型的 list。 如果配置为 `atomic`, 合并时整个列表会被替换掉; 任何时候,唯一的管理器都把列表作为一个整体来管理。如果是 `set` 或 `map` ,不同的管理器也可以分开管理条目。 | 1.16 | -| `//+listMapKey` | `x-kubernetes-list-map-keys` | 用来唯一标识条目的 map keys 切片,例如 `["port", "protocol"]` | 仅当 `+listType=map` 时适用。组合值的字符串切片必须唯一标识列表中的条目。尽管有多个 key,`listMapKey` 是单数的,这是因为 key 需要在 Go 类型中单独的指定。 | 1.16 | +| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | 适用于 list。`set` 适用于仅包含标量元素的列表。这些元素必须是不重复的。`map` 仅适用于包含嵌套类型的列表。列表中的键(参见 `listMapKey`)不可以重复。`atomic` 适用于任何类型的列表。如果配置为 `atomic`,则合并时整个列表会被替换掉。任何时候,只有一个管理器负责管理指定列表。如果配置为 `set` 或 `map`,不同的管理器也可以分开管理条目。 | 1.16 | +| `//+listMapKey` | `x-kubernetes-list-map-keys` | 字段名称的列表,例如,`["port", "protocol"]` | 仅当 `+listType=map` 时适用。取值为字段名称的列表,这些字段值的组合能够唯一标识列表中的条目。尽管可以存在多个键,`listMapKey` 是单数的,这是因为键名需要在 Go 类型中各自独立指定。键字段必须是标量。 | 1.16 | | `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | 适用于 map。 `atomic` 指 map 只能被单个的管理器整个的替换。 `granular` 指 map 支持多个管理器各自更新自己的字段。 | 1.17 | | `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | 适用于 structs;否则就像 `//+mapType` 有相同的用法和 openapi 注释.| 1.17 | + +若未指定 `listType`,API 服务器将 `patchMergeStrategy=merge` 标记解释为 +`listType=map` 并且视对应的 `patchMergeKey` 标记为 `listMapKey` 取值。 + +`atomic` 列表类型是递归的。 + +这些标记都是用源代码注释的方式给出的,不必作为字段标签(tag)再重复。 + + +### 拓扑变化时的兼容性 {#compatibility-across-toplogy-changes} + + +在极少的情况下,CRD 或者内置类型的作者可能希望更改其资源中的某个字段的 +拓扑配置,同时又不提升版本号。 +通过升级集群或者更新 CRD 来更改类型的拓扑信息与更新现有对象的结果不同。 +变更的类型有两种:一种是将字段从 `map`/`set`/`granular` 更改为 `atomic`, +另一种是做逆向改变。 + + +当 `listType`、`mapType` 或 `structType` 从 `map`/`set`/`granular` 改为 +`atomic` 时,现有对象的整个列表、映射或结构的属主都会变为这些类型的 +元素之一的属主。这意味着,对这些对象的进一步变更会引发冲突。 + + +当一个列表、映射或结构从 `atomic` 改为 `map`/`set`/`granular` 之一 +时,API 服务器无法推导这些字段的新的属主。因此,当对象的这些字段 +再次被更新时不会引发冲突。出于这一原因,不建议将某类型从 `atomic` 改为 +`map`/`set`/`granular`。 + +以下面的自定义资源为例: + +```yaml +apiVersion: example.com/v1 +kind: Foo +metadata: + name: foo-sample + managedFields: + - manager: manager-one + operation: Apply + apiVersion: example.com/v1 + fields: + f:spec: + f:data: {} +spec: + data: + key1: val1 + key2: val2 +``` + + +在 `spec.data` 从 `atomic` 改为 `granular` 之前,`manager-one` 是 +`spec.data` 字段及其所包含字段(`key1` 和 `key2`)的属主。 +当对应的 CRD 被更改,使得 `spec.data` 变为 `granular` 拓扑时, +`manager-one` 继续拥有顶层字段 `spec.data`(这意味着其他管理者想 +删除名为 `data` 的映射而不引起冲突是不可能的),但不再拥有 +`key1` 和 `key2`。因此,其他管理者可以在不引起冲突的情况下更改 +或删除这些字段。 + -### 在控制器中使用服务器端应用 {#using-server-side-apply-in-controller} +## 在控制器中使用服务器端应用 {#using-server-side-apply-in-controller} 控制器的开发人员可以把服务器端应用作为简化控制器的更新逻辑的方式。 读-改-写 和/或 patch 的主要区别如下所示: @@ -463,7 +559,7 @@ might not be able to resolve or act on these conflicts. 
强烈推荐:设置控制器在冲突时强制执行,这是因为冲突发生时,它们没有其他解决方案或措施。 -### 转移所有权 {#transferring-ownership} +## 转移所有权 {#transferring-ownership} 除了通过[冲突解决方案](#conflicts)提供的并发控制, 服务器端应用提供了一些协作方式来将字段所有权从用户转移到控制器。 @@ -526,7 +622,7 @@ is not what the user wants to happen, even temporarily. 这里有两个解决方案: -- (容易) 把 `replicas` 留在配置文件中;当 HPA 最终写入那个字段, +- (基本操作)把 `replicas` 留在配置文件中;当 HPA 最终写入那个字段, 系统基于此事件告诉用户:冲突发生了。在这个时间点,可以安全的删除配置文件。 -- (高级)然而,如果用户不想等待,比如他们想为合作伙伴保持集群清晰, +- (高级操作)然而,如果用户不想等待,比如他们想为合作伙伴保持集群清晰, 那他们就可以执行以下步骤,安全的从配置文件中删除 `replicas`。 首先,用户新定义一个只包含 `replicas` 字段的配置文件: @@ -561,13 +657,13 @@ kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replic 如果应用操作和 HPA 控制器产生冲突,那什么都不做。 -冲突只是表明控制器在更早的流程中已经对字段声明过所有权。 +冲突表明控制器在更早的流程中已经对字段声明过所有权。 在此时间点,用户可以从配置文件中删除 `replicas` 。 @@ -583,7 +679,7 @@ automatically deleted. No clean up is required. 这里不需要执行清理工作。 -## 在用户之间转移所有权 {#transferring-ownership-between-users} +### 在用户之间转移所有权 {#transferring-ownership-between-users} 通过在配置文件中把一个字段设置为相同的值,用户可以在他们之间转移字段的所有权, 从而共享了字段的所有权。 @@ -763,7 +859,7 @@ Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}] -这一操作将用只包含一个空条目的 list 覆写 managedFields, +这一操作将用只包含一个空条目的列表覆写 managedFields, 来实现从对象中整个的去除 managedFields。 -注意,只把 managedFields 设置为空 list 并不会重置字段。 +注意,只把 managedFields 设置为空列表并不会重置字段。 这么做是有目的的,所以 managedFields 将永远不会被与该字段无关的客户删除。 在重置操作结合 managedFields 以外其他字段更改的场景中, @@ -804,7 +900,8 @@ should have the same flag setting. --> ## 禁用此功能 {#disabling-the-feature} -服务器端应用是一个 beta 版特性,默认启用。 +服务器端应用是一个 Beta 版特性,默认启用。 要关闭此[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates), 你需要在启动 `kube-apiserver` 时包含参数 `--feature-gates ServerSideApply=false`。 -如果你有多个 `kube-apiserver` 副本,他们都应该有相同的标记设置。 \ No newline at end of file +如果你有多个 `kube-apiserver` 副本,它们的标志设置应该都相同。 + diff --git a/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md index 93eae4e7d6..4cdc3f874a 100644 --- a/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/zh/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -1,12 +1,14 @@ --- -title: Kubernetes 中调度 Windows 容器的指南 +title: Kubernetes 中 Windows 容器的调度指南 content_type: concept weight: 75 --- Windows 应用程序构成了许多组织中运行的服务和应用程序的很大一部分。 本指南将引导您完成在 Kubernetes 中配置和部署 Windows 容器的步骤。 @@ -36,20 +39,28 @@ Windows 应用程序构成了许多组织中运行的服务和应用程序的很 ## 在你开始之前 * 创建一个 Kubernetes 集群,其中包括一个 [运行 Windows 服务器的主节点和工作节点](/zh/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) -* 重要的是要注意,对于 Linux 和 Windows 容器,在 Kubernetes 上创建和部署服务和工作负载的行为几乎相同。 - 与集群接口的 [Kubectl 命令](/zh/docs/reference/kubectl/overview/)相同。提供以下部分中的示例只是为了快速启动 Windows 容器的使用体验。 +* 重要的是要注意,对于 Linux 和 Windows 容器,在 Kubernetes + 上创建和部署服务和工作负载的行为几乎相同。 + 与集群接口的 [kubectl 命令](/zh/docs/reference/kubectl/overview/)相同。 + 提供以下部分中的示例只是为了快速启动 Windows 容器的使用体验。 ## 入门:部署 Windows 容器 @@ -102,7 +113,8 @@ spec: ``` {{< note >}} 端口映射也是支持的,但为简单起见,在此示例中容器端口 80 直接暴露给服务。 @@ -128,9 +140,12 @@ Port mapping is also supported, but for simplicity in this example the container * Two containers per pod on the Windows node, use `docker ps` * Two pods listed from the Linux master, use `kubectl get pods` - * Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux master to check for a web server response - * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) using docker exec or kubectl exec - * Service-to-pod 
communication, `curl` the virtual service IP (seen under `kubectl get services`) from the Linux master and from individual pods + * Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux master + to check for a web server response + * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) + using docker exec or kubectl exec + * Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`) + from the Linux master and from individual pods * Service discovery, `curl` the service name with the Kubernetes [default DNS suffix](/docs/concepts/services-networking/dns-pod-service/#services) * Inbound connectivity, `curl` the NodePort from the Linux master or machines outside of the cluster * Outbound connectivity, `curl` external IPs from inside the pod using kubectl exec @@ -155,34 +170,90 @@ Port mapping is also supported, but for simplicity in this example the container * Windows 节点上每个 Pod 有两个容器,使用 `docker ps` * Linux 主机列出两个 Pod,使用 `kubectl get pods` * 跨网络的节点到 Pod 通信,从 Linux 主服务器 `curl` 您的 pod IPs 的端口80,以检查 Web 服务器响应 - * Pod 到 Pod 的通信,使用 docker exec 或 kubectl exec 在 pod 之间(以及跨主机,如果您有多个 Windows 节点)进行 ping 操作 - * 服务到 Pod 的通信,从 Linux 主服务器和各个 Pod 中 `curl` 虚拟服务 IP(在 `kubectl get services` 下可见) - * 服务发现,使用 Kubernetes `curl` 服务名称[默认 DNS 后缀](/zh/docs/concepts/services-networking/dns-pod-service/#services) + * Pod 到 Pod 的通信,使用 docker exec 或 kubectl exec 在 Pod 之间 + (以及跨主机,如果你有多个 Windows 节点)进行 ping 操作 + * 服务到 Pod 的通信,从 Linux 主服务器和各个 Pod 中 `curl` 虚拟服务 IP + (在 `kubectl get services` 下可见) + * 服务发现,使用 Kubernetes `curl` 服务名称 + [默认 DNS 后缀](/zh/docs/concepts/services-networking/dns-pod-service/#services) * 入站连接,从 Linux 主服务器或集群外部的计算机 `curl` NodePort * 出站连接,使用 kubectl exec 从 Pod 内部 curl 外部 IP {{< note >}} 由于当前平台对 Windows 网络堆栈的限制,Windows 容器主机无法访问在其上调度的服务的 IP。只有 Windows pods 才能访问服务 IP。 {{< /note >}} + +## 可观测性 {#observability} + +### 抓取来自工作负载的日志 + + +日志是可观测性的重要一环;使用日志用户可以获得对负载运行状况的洞察, +因而日志是故障排查的一个重要手法。 +因为 Windows 容器中的 Windows 容器和负载与 Linux 容器的行为不同, +用户很难收集日志,因此运行状态的可见性很受限。 +例如,Windows 工作负载通常被配置为将日志输出到 Windows 事件跟踪 +(Event Tracing for Windows,ETW),或者将日志条目推送到应用的事件日志中。 +[LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor) +是 Microsoft 提供的一个开源工具,是监视 Windows 容器中所配置的日志源 +的推荐方式。 +LogMonitor 支持监视时间日志、ETW 提供者模块以及自定义的应用日志, +并使用管道的方式将其输出到标准输出(stdout),以便 `kubectl logs ` +这类命令能够读取这些数据。 + + +请遵照 LogMonitor GitHub 页面上的指令,将其可执行文件和配置文件复制到 +你的所有容器中,并为其添加必要的入口点(Entrypoint),以便 LogMonitor +能够将你的日志输出推送到标准输出(stdout)。 + + ## 使用可配置的容器用户名 -从 Kubernetes v1.16 开始,可以为 Windows 容器配置与其镜像默认值不同的用户名来运行其入口点和进程。 +从 Kubernetes v1.16 开始,可以为 Windows 容器配置与其镜像默认值不同的用户名 +来运行其入口点和进程。 此能力的实现方式和 Linux 容器有些不同。 -在[此处](/zh/docs/tasks/configure-pod-container/configure-runasusername/)可了解更多信息。 +在[此处](/zh/docs/tasks/configure-pod-container/configure-runasusername/) +可了解更多信息。 ## 使用组托管服务帐户管理工作负载身份 @@ -190,7 +261,8 @@ Starting with Kubernetes v1.14, Windows container workloads can be configured to 组托管服务帐户是 Active Directory 帐户的一种特定类型,它提供自动密码管理, 简化的服务主体名称(SPN)管理以及将管理委派给跨多台服务器的其他管理员的功能。 配置了 GMSA 的容器可以访问外部 Active Directory 域资源,同时携带通过 GMSA 配置的身份。 -在[此处](/zh/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 Windows 容器配置和使用 GMSA 的更多信息。 +在[此处](/zh/docs/tasks/configure-pod-container/configure-gmsa/)了解有关为 +Windows 容器配置和使用 GMSA 的更多信息。 目前,用户需要将 Linux 和 Windows 工作负载运行在各自特定的操作系统的节点上, 因而需要结合使用污点和节点选择算符。 这可能仅给 Windows 用户造成不便。 @@ -210,7 +285,8 @@ Users today need to use some combination of taints and node selectors in order t 
### 确保特定操作系统的工作负载落在适当的容器主机上 用户可以使用污点和容忍度确保 Windows 容器可以调度在适当的主机上。目前所有 Kubernetes 节点都具有以下默认标签: @@ -218,7 +294,10 @@ Users can ensure Windows containers can be scheduled on the appropriate host usi * kubernetes.io/arch = [amd64|arm64|...] 如果 Pod 规范未指定诸如 `"kubernetes.io/os": windows` 之类的 nodeSelector,则该 Pod 可能会被调度到任何主机(Windows 或 Linux)上。 @@ -226,7 +305,11 @@ If a Pod specification does not specify a nodeSelector like `"kubernetes.io/os": 最佳实践是使用 nodeSelector。 但是,我们了解到,在许多情况下,用户都有既存的大量的 Linux 容器部署,以及一个现成的配置生态系统, 例如社区 Helm charts,以及程序化 Pod 生成案例,例如 Operators。 @@ -239,7 +322,9 @@ For example: `--register-with-taints='os=windows:NoSchedule'` 例如:`--register-with-taints='os=windows:NoSchedule'` 向所有 Windows 节点添加污点后,Kubernetes 将不会在它们上调度任何负载(包括现有的 Linux Pod)。 为了使某 Windows Pod 调度到 Windows 节点上,该 Pod 既需要 nodeSelector 选择 Windows, @@ -266,18 +351,22 @@ The Windows Server version used by each pod must match that of the node. If you Server versions in the same cluster, then you should set additional node labels and nodeSelectors. --> 每个 Pod 使用的 Windows Server 版本必须与该节点的 Windows Server 版本相匹配。 -如果要在同一集群中使用多个 Windows Server 版本,则应该设置其他节点标签和 nodeSelector。 +如果要在同一集群中使用多个 Windows Server 版本,则应该设置其他节点标签和 +nodeSelector。 Kubernetes 1.17 自动添加了一个新标签 `node.kubernetes.io/windows-build` 来简化此操作。 如果您运行的是旧版本,则建议手动将此标签添加到 Windows 节点。 -此标签反映了需要兼容的 Windows 主要、次要和内部版本号。以下是当前每个 Windows Server 版本使用的值。 +此标签反映了需要兼容的 Windows 主要、次要和内部版本号。以下是当前每个 +Windows Server 版本使用的值。 | 产品名称 | 内部编号 | |--------------------------------------|------------------------| @@ -292,15 +381,19 @@ This label reflects the Windows major, minor, and build number that need to matc ### 使用 RuntimeClass 简化 -[RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 可用于简化使用污点和容忍度的过程。 +[RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 可用于 +简化使用污点和容忍度的过程。 集群管理员可以创建 `RuntimeClass` 对象,用于封装这些污点和容忍度。 -1. 将此文件保存到 `runtimeClasses.yml` 文件。它包括适用于 Windows 操作系统、体系结构和版本的 `nodeSelector`。 +1. 将此文件保存到 `runtimeClasses.yml` 文件。 + 它包括适用于 Windows 操作系统、体系结构和版本的 `nodeSelector`。 ```yaml apiVersion: node.k8s.io/v1 @@ -324,7 +417,7 @@ This label reflects the Windows major, minor, and build number that need to matc 1. Run `kubectl create -f runtimeClasses.yml` using as a cluster administrator 1. Add `runtimeClassName: windows-2019` as appropriate to Pod specs --> -2. 集群管理员运行 `kubectl create -f runtimeClasses.yml` 操作 +2. 集群管理员执行 `kubectl create -f runtimeClasses.yml` 操作 3. 根据需要向 Pod 规约中添加 `runtimeClassName: windows-2019` -*节点问题检测器(Node Problem Detector)*是一个守护程序,用于监视和报告节点的健康状况。 +*节点问题检测器(Node Problem Detector)* 是一个守护程序,用于监视和报告节点的健康状况。 你可以将节点问题探测器以 `DaemonSet` 或独立守护程序运行。 节点问题检测器从各种守护进程收集节点问题,并以 [NodeCondition](/zh/docs/concepts/architecture/nodes/#condition) 和 @@ -203,7 +203,7 @@ Kernel monitor watches the kernel log and detects known kernel issues following --> ## 内核监视器 -*内核监视器(Kernel Monitor)*是节点问题检测器中支持的系统日志监视器守护进程。 +*内核监视器(Kernel Monitor)* 是节点问题检测器中支持的系统日志监视器守护进程。 内核监视器观察内核日志并根据预定义规则检测已知的内核问题。 8. 确保你的扩展 apiserver 从该卷中加载了那些证书,并在 HTTPS 握手过程中使用它们。 -9. 在你的命令空间中创建一个 Kubernetes 服务账号。 +9. 在你的命名空间中创建一个 Kubernetes 服务账号。 10. 为资源允许的操作创建 Kubernetes 集群角色。 -11. 用你命令空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到你创建的角色上。 -12. 用你命令空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator` +11. 用你命名空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到你创建的角色上。 +12. 用你命名空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator` 集群角色,以将 auth 决策委派给 Kubernetes 核心 API 服务器。 -13. 以你命令空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到 +13. 
以你命名空间中的服务账号创建一个 Kubernetes 集群角色绑定,绑定到 `extension-apiserver-authentication-reader` 角色。 这将让你的扩展 api-server 能够访问 `extension-apiserver-authentication` configmap。 @@ -114,4 +114,3 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 并启用 apiserver 的相关参数。 * 高级概述,请参阅[使用聚合层扩展 Kubernetes API](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation)。 * 了解如何[使用 Custom Resource Definition 扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。 - diff --git a/content/zh/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md b/content/zh/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md new file mode 100644 index 0000000000..eb73c471c3 --- /dev/null +++ b/content/zh/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md @@ -0,0 +1,214 @@ +--- +title: 配置 kubelet 镜像凭据提供程序 +description: 配置 kubelet 的镜像凭据提供程序插件 +content_type: task +--- + + + +{{< feature-state for_k8s_version="v1.20" state="alpha" >}} + + + + +从 Kubernetes v1.20 开始,kubelet 可以使用 exec 插件动态检索容器镜像注册中心的凭据。 +kubelet 和 exec 插件使用 Kubernetes 版本化 API 通过标准输入输出(标准输入、标准输出和标准错误)通信。 +这些插件允许 kubelet 动态请求容器注册中心的凭据,而不是将静态凭据存储在磁盘上。 +例如,插件可能会与本地元数据通信,以检索 kubelet 正在拉取的镜像的短期凭据。 + + +如果以下任一情况属实,你可能对此功能感兴趣: + +* 需要调用云提供商的 API 来检索注册中心的身份验证信息。 +* 凭据的到期时间很短,需要频繁请求新凭据。 +* 将注册中心凭据存储在磁盘或者 imagePullSecret 是不可接受的。 + +## {{% heading "prerequisites" %}} + + +* kubelet 镜像凭证提供程序在 v1.20 版本作为 alpha 功能引入。 + 与其他 alpha 功能一样,当前仅当在 kubelet 启动 `KubeletCredentialProviders` 特性门禁才能使该功能正常工作。 +* 凭据提供程序 exec 插件的工作实现。你可以构建自己的插件或使用云提供商提供的插件。 + + + + +## 在节点上安装插件 {#installing-plugins-on-nodes} + +凭据提供程序插件是将由 kubelet 运行的可执行二进制文件。 +确保插件二进制存在于你的集群的每个节点上,并存储在已知目录中。 +稍后配置 kubelet 标志需要该目录。 + + +## 配置 kubelet {#configuring-the-kubelet} + +为了使用这个特性,kubelet 需要设置以下两个标志: +* `--image-credential-provider-config` —— 凭据提供程序插件配置文件的路径。 +* `--image-credential-provider-bin-dir` —— 凭据提供程序插件二进制文件所在目录的路径。 + + +### 配置 kubelet 凭据提供程序 {#configure-a-kubelet-credential-provider} + +kubelet 会读取传入 `--image-credential-provider-config` 的配置文件文件, +以确定应该为哪些容器镜像调用哪些 exec 插件。 +如果你正在使用基于 [ECR](https://aws.amazon.com/ecr/) 插件, +这里有个样例配置文件你可能最终会使用到: + +```yaml +kind: CredentialProviderConfig +apiVersion: kubelet.config.k8s.io/v1alpha1 +# providers 是将由 kubelet 启用的凭证提供程序插件列表。 +# 多个提供程序可能与单个镜像匹配,在这种情况下,来自所有提供程序的凭据将返回到 kubelet。 +# 如果为单个镜像调用多个提供程序,则结果会合并。 +# 如果提供程序返回重叠的身份验证密钥,则使用提供程序列表中较早的值。 +providers: + # name 是凭据提供程序的必需名称。 + # 它必须与 kubelet 看到的提供程序可执行文件的名称相匹配。 + # 可执行文件必须在 kubelet 的 bin 目录中 + # (由 --image-credential-provider-bin-dir 标志设置)。 + - name: ecr + # matchImages 是一个必需的字符串列表,用于匹配镜像以确定是否应调用此提供程序。 + # 如果其中一个字符串与 kubelet 请求的镜像相匹配,则该插件将被调用并有机会提供凭据。 + # 镜像应包含注册域和 URL 路径。 + # + # matchImages 中的每个条目都是一个模式,可以选择包含端口和路径。 + # 通配符可以在域中使用,但不能在端口或路径中使用。 + # 支持通配符作为子域(例如“*.k8s.io”或“k8s.*.io”)和顶级域(例如“k8s.*”)。 + # 还支持匹配部分子域,如“app*.k8s.io”。 + # 每个通配符只能匹配一个子域段,因此 *.io 不匹配 *.k8s.io。 + # + # 当以下所有条件都为真时,镜像和 matchImage 之间存在匹配: + # - 两者都包含相同数量的域部分并且每个部分都匹配。 + # - imageMatch 的 URL 路径必须是目标镜像 URL 路径的前缀。 + # - 如果 imageMatch 包含端口,则该端口也必须在图像中匹配。 + # + # matchImages 的示例值: + # - 123456789.dkr.ecr.us-east-1.amazonaws.com + # - *.azurecr.io + # - gcr.io + # - *.*.registry.io + # - registry.io:8080/path + matchImages: + - "*.dkr.ecr.*.amazonaws.com" + - "*.dkr.ecr.*.amazonaws.cn" + - "*.dkr.ecr-fips.*.amazonaws.com" + - "*.dkr.ecr.us-iso-east-1.c2s.ic.gov" + - "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov" + # defaultCacheDuration 是插件将在内存中缓存凭据的默认持续时间 + # 如果插件响应中未提供缓存持续时间。此字段是必需的。 + defaultCacheDuration: "12h" + # exec 
CredentialProviderRequest 的必需输入版本。 + # 返回的 CredentialProviderResponse 必须使用与输入相同的编码版本。当前支持的值为: + # - credentialprovider.kubelet.k8s.io/v1alpha1 + apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1 + # 执行命令时传递给命令的参数。 + # +可选 + args: + - get-credentials + # env 定义了额外的环境变量以暴露给进程。 + # 这些与主机环境以及 client-go 用于将参数传递给插件的变量结合在一起。 + # +可选 + env: + - name: AWS_PROFILE + value: example_profile +``` + + +`providers` 字段是 kubelet 使用的已启用插件列表。每个条目都有几个必填字段: +* `name`:插件的名称,必须与传入`--image-credential-provider-bin-dir` + 的目录中存在的可执行二进制文件的名称相匹配。 +* `matchImages`:用于匹配图像以确定是否应调用此提供程序的字符串列表。更多相关信息如下。 +* `defaultCacheDuration`:如果插件未指定缓存持续时间,kubelet 将在内存中缓存凭据的默认持续时间。 +* `apiVersion`:kubelet 和 exec 插件在通信时将使用的 api 版本。 + +每个凭证提供程序也可以被赋予可选的参数和环境变量。 +咨询插件实现者以确定给定插件需要哪些参数和环境变量集。 + + +#### 配置镜像匹配 {#configure-image-matching} + +kubelet 使用每个凭证提供程序的 `matchImages` 字段来确定是否应该为 Pod 正在使用的给定镜像调用插件。 +`matchImages` 中的每个条目都是一个镜像模式,可以选择包含端口和路径。 +通配符可以在域中使用,但不能在端口或路径中使用。 +支持通配符作为子域,如 `*.k8s.io` 或 `k8s.*.io`,以及顶级域,如 `k8s.*`。 +还支持匹配部分子域,如 `app*.k8s.io`。每个通配符只能匹配一个子域段, +因此 `*.io` 不匹配 `*.k8s.io`。 + + +当以下所有条件都为真时,镜像名称和 `matchImage` 条目之间存在匹配: + +* 两者都包含相同数量的域部分并且每个部分都匹配。 +* 匹配图片的 URL 路径必须是目标图片 URL 路径的前缀。 +* 如果 imageMatch 包含端口,则该端口也必须在镜像中匹配。 + +`matchImages` 模式的一些示例值: +* `123456789.dkr.ecr.us-east-1.amazonaws.com` +* `*.azurecr.io` +* `gcr.io` +* `*.*.registry.io` +* `foo.registry.io:8080/path` diff --git a/content/zh/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/zh/docs/tasks/network/customize-hosts-file-for-pods.md similarity index 99% rename from content/zh/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md rename to content/zh/docs/tasks/network/customize-hosts-file-for-pods.md index 4229689422..f2f5e7fc1b 100644 --- a/content/zh/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/content/zh/docs/tasks/network/customize-hosts-file-for-pods.md @@ -10,7 +10,7 @@ reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases -content_type: concept +content_type: task weight: 60 min-kubernetes-server-version: 1.7 --> @@ -29,7 +29,7 @@ Modification not using HostAliases is not suggested because the file is managed 建议通过使用 HostAliases 来进行修改,因为该文件由 Kubelet 管理,并且 可以在 Pod 创建/重启过程中被重写。 - + -验证节点是否检测到 IPv4 和 IPv6 接口(用集群中的有效节点替换节点名称。 -在此示例中,节点名称为 `k8s-linuxpool1-34450317-0`): +验证节点是否检测到 IPv4 和 IPv6 接口。用集群中的有效节点替换节点名称。 +在此示例中,节点名称为 `k8s-linuxpool1-34450317-0`: ```shell kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s \n" .type .address}}{{end}}' @@ -81,12 +81,12 @@ InternalIP: 2001:1234:5678:9abc::5 ### 验证 Pod 寻址 -验证 Pod 已分配了 IPv4 和 IPv6 地址。(用集群中的有效 Pod 替换 Pod 名称。 -在此示例中,Pod 名称为 pod01) +验证 Pod 已分配了 IPv4 和 IPv6 地址。用集群中的有效 Pod 替换 Pod 名称。 +在此示例中,Pod 名称为 `pod01`: ```shell kubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf "%s \n" .ip}}{{end}}' @@ -209,7 +209,7 @@ Create the following Service that explicitly defines `IPv6` as the first array e Kubernetes 将 `service-cluster-ip-range` 配置的 IPv6 地址范围给 Service 分配集群 IP, 并将 `.spec.ipFamilyPolicy` 设置为 `SingleStack`。 -{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" >}} ### 创建双协议栈负载均衡服务 -如果云提供商支持配置启用 IPv6 的外部负载均衡器,则将 `ipFamily` 字段设置为 -`IPv6` 并将 `type` 字段设置为 `LoadBalancer` 的方式创建以下服务: +如果云提供商支持配置启用 IPv6 的外部负载均衡器,则创建如下 Service 时将 +`.spec.ipFamilyPolicy` 设置为 `PreferDualStack`, 并将 
`spec.ipFamilies` 字段 +的第一个元素设置为 `IPv6`,将 `type` 字段设置为 `LoadBalancer`: -{{< codenew file="service/networking/dual-stack-ipv6-lb-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" >}} + + +检查服务: + +```shell +kubectl get svc -l app=MyApp +``` @@ -31,244 +32,350 @@ This tutorial shows you how to build and deploy a simple _(not production ready) 一个简单的_(非面向生产)的_多层 web 应用程序。本例由以下组件组成: -* 单实例 [MongoDB](https://www.mongodb.com/) 以保存留言板条目 +* 单实例 [Redis](https://www.redis.com/) 以保存留言板条目 * 多个 web 前端实例 - - - ## {{% heading "objectives" %}} - - -* 启动 Mongo 数据库。 -* 启动留言板前端。 -* 公开并查看前端服务。 -* 清理。 - - +* 启动 Redis 领导者(Leader) +* 启动两个 Redis 跟随者(Follower) +* 公开并查看前端服务 +* 清理 ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - - -## 启动 Mongo 数据库 +## 启动 Redis 数据库 -留言板应用程序使用 MongoDB 存储数据。 +留言板应用程序使用 Redis 存储数据。 -### 创建 Mongo 的 Deployment +### 创建 Redis Deployment -下面包含的清单文件指定了一个 Deployment 控制器,该控制器运行一个 MongoDB Pod 副本。 +下面包含的清单文件指定了一个 Deployment 控制器,该控制器运行一个 Redis Pod 副本。 -{{< codenew file="application/guestbook/mongo-deployment.yaml" >}} +{{< codenew file="application/guestbook/redis-leader-deployment.yaml" >}} 1. 在下载清单文件的目录中启动终端窗口。 2. 从 `mongo-deployment.yaml` 文件中应用 MongoDB Deployment: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml + ``` 3. 查询 Pod 列表以验证 MongoDB Pod 是否正在运行: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` + + + 响应应该与此类似: + + ```shell + NAME READY STATUS RESTARTS AGE + redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s + ``` - 响应应该与此类似: - - ```shell - NAME READY STATUS RESTARTS AGE - mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s - ``` - - 4. 运行以下命令查看 MongoDB Deployment 中的日志: - ```shell - kubectl logs -f deployment/mongo - ``` + ```shell + kubectl logs -f deployment/redis-leader + ``` - -### 创建 MongoDB 服务 +### 创建 Redis 领导者服务 -留言板应用程序需要往 MongoDB 中写数据。因此,需要创建 [Service](/zh/docs/concepts/services-networking/service/) 来代理 MongoDB Pod 的流量。Service 定义了访问 Pod 的策略。 +留言板应用程序需要往 MongoDB 中写数据。因此,需要创建 +[Service](/zh/docs/concepts/services-networking/service/) 来转发 Redis Pod +的流量。Service 定义了访问 Pod 的策略。 -{{< codenew file="application/guestbook/mongo-service.yaml" >}} +{{< codenew file="application/guestbook/redis-leader-service.yaml" >}} -1. 使用下面的 `mongo-service.yaml` 文件创建 MongoDB 的服务: +1. 使用下面的 `redis-leader-service.yaml` 文件创建 Redis的服务: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml + ``` -2. 查询服务列表验证 MongoDB 服务是否正在运行: +2. 
查询服务列表验证 Redis 服务是否正在运行: - ```shell - kubectl get service - ``` + ```shell + kubectl get service + ``` + + + 响应应该与此类似: + + ```shell + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.0.0.1 443/TCP 1m + redis-leader ClusterIP 10.103.78.24 6379/TCP 16s + ``` - 响应应该与此类似: - - ```shell - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - kubernetes ClusterIP 10.0.0.1 443/TCP 1m - mongo ClusterIP 10.0.0.151 27017/TCP 8s - ``` - - {{< note >}} -这个清单文件创建了一个名为 `mongo` 的 Service,其中包含一组与前面定义的标签匹配的标签,因此服务将网络流量路由到 MongoDB Pod 上。 +这个清单文件创建了一个名为 `redis-leader` 的 Service,其中包含一组 +与前面定义的标签匹配的标签,因此服务将网络流量路由到 Redis Pod 上。 {{< /note >}} + +### 设置 Redis 跟随者 + +尽管 Redis 领导者只有一个 Pod,你可以通过添加若干 Redis 跟随者来将其配置为高可用状态, +以满足流量需求。 + +{{< codenew file="application/guestbook/redis-follower-deployment.yaml" >}} + + +1. 应用下面的 `redis-follower-deployment.yaml` 文件创建 Redis Deployment: + + + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml + ``` + + +2. 通过查询 Pods 列表,验证两个 Redis 跟随者副本在运行: + + ```shell + kubectl get pods + ``` + + + 响应应该类似于这样: + + ``` + NAME READY STATUS RESTARTS AGE + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s + redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m + ``` + + +### 创建 Redis 跟随者服务 + +Guestbook 应用需要与 Redis 跟随者通信以读取数据。 +为了让 Redis 跟随者可被发现,你必须创建另一个 +[Service](/zh/docs/concepts/services-networking/service/)。 + +{{< codenew file="application/guestbook/redis-follower-service.yaml" >}} + + +1. 应用如下所示 `redis-follower-service.yaml` 文件中的 Redis Service: + + + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml + ``` + + +2. 查询 Service 列表,验证 Redis 服务在运行: + + ```shell + kubectl get service + ``` + + + 响应应该类似于这样: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h + redis-follower ClusterIP 10.110.162.42 6379/TCP 9s + redis-leader ClusterIP 10.103.78.24 6379/TCP 6m10s + ``` + +{{< note >}} + +清单文件创建了一个名为 `redis-follower` 的 Service,该 Service +具有一些与之前所定义的标签相匹配的标签,因此该 Service 能够将网络流量 +路由到 Redis Pod 之上。 +{{< /note >}} + + ## 设置并公开留言板前端 -留言板应用程序有一个 web 前端,服务于用 PHP 编写的 HTTP 请求。 -它被配置为连接到 `mongo` 服务以存储留言版条目。 +Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment. + +The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX. +--> +现在你有了一个为 Guestbook 应用配置的 Redis 存储处于运行状态, +接下来可以启动 Guestbook 的 Web 服务器了。 +与 Redis 跟随者类似,前端也是使用 Kubernetes Deployment 来部署的。 + +Guestbook 应用使用 PHP 前端。该前端被配置成与后端的 Redis 跟随者或者 +领导者服务通信,具体选择哪个服务取决于请求是读操作还是写操作。 +前端对外暴露一个 JSON 接口,并提供基于 jQuery-Ajax 的用户体验。 - -### 创建留言板前端 Deployment +### 创建 Guestbook 前端 Deployment {{< codenew file="application/guestbook/frontend-deployment.yaml" >}} -1. 从 `frontend-deployment.yaml` 应用前端 Deployment 文件: +1. 应用来自 `frontend-deployment.yaml` 文件的前端 Deployment: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml ``` -2. 查询 Pod 列表,验证三个前端副本是否正在运行: +2. 
查询 Pod 列表,验证三个前端副本正在运行: - ```shell - kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend - ``` + ```shell + kubectl get pods -l app=guestbook -l tier=frontend + ``` - - 响应应该与此类似: + + 响应应该与此类似: - ``` - NAME READY STATUS RESTARTS AGE - frontend-3823415956-dsvc5 1/1 Running 0 54s - frontend-3823415956-k22zn 1/1 Running 0 54s - frontend-3823415956-w9gbt 1/1 Running 0 54s - ``` + ``` + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5tqhb 1/1 Running 0 47s + frontend-85595f5bf9-qbzwm 1/1 Running 0 47s + frontend-85595f5bf9-zchwc 1/1 Running 0 47s + ``` - ### 创建前端服务 -应用的 `mongo` 服务只能在 Kubernetes 集群中访问,因为服务的默认类型是 +应用的 `Redis` 服务只能在 Kubernetes 集群中访问,因为服务的默认类型是 [ClusterIP](/zh/docs/concepts/services-networking/service/#publishing-services-service-types)。 `ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 -如果您希望访客能够访问您的留言板,您必须将前端服务配置为外部可见的,以便客户端可以从 Kubernetes 集群之外请求服务。然而即便使用了 `ClusterIP` Kubernetes 用户仍可以通过 `kubectl port-forward` 访问服务。 +如果你希望访客能够访问你的 Guestbook,你必须将前端服务配置为外部可见的, +以便客户端可以从 Kubernetes 集群之外请求服务。 +然而即便使用了 `ClusterIP`,Kubernetes 用户仍可以通过 +`kubectl port-forward` 访问服务。 {{< note >}} -一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine,支持外部负载均衡器。如果您的云提供商支持负载均衡器,并且您希望使用它, -只需取消注释 `type: LoadBalancer` 即可。 +一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine, +支持外部负载均衡器。如果你的云提供商支持负载均衡器,并且你希望使用它, +只需取消注释 `type: LoadBalancer`。 {{< /note >}} {{< codenew file="application/guestbook/frontend-service.yaml" >}} @@ -276,37 +383,38 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su -1. 从 `frontend-service.yaml` 文件中应用前端服务: +1. 应用来自 `frontend-service.yaml` 文件中的前端服务: - + - ```shell - kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml - ``` + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml + ``` -2. 查询服务列表以验证前端服务正在运行: +2. 查询 Service 列表以验证前端服务正在运行: - ```shell - kubectl get services - ``` + ```shell + kubectl get services + ``` - - 响应应该与此类似: + + 响应应该与此类似: - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend ClusterIP 10.0.0.112 80/TCP 6s - kubernetes ClusterIP 10.0.0.1 443/TCP 4m - mongo ClusterIP 10.0.0.151 6379/TCP 2m - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + frontend ClusterIP 10.97.28.230 80/TCP 19s + kubernetes ClusterIP 10.96.0.1 443/TCP 3d19h + redis-follower ClusterIP 10.110.162.42 6379/TCP 5m48s + redis-leader ClusterIP 10.103.78.24 6379/TCP 11m + ``` 1. 运行以下命令将本机的 `8080` 端口转发到服务的 `80` 端口。 - ```shell - kubectl port-forward svc/frontend 8080:80 - ``` + ```shell + kubectl port-forward svc/frontend 8080:80 + ``` - - 响应应该与此类似: + + 响应应该与此类似: - ``` - Forwarding from 127.0.0.1:8080 -> 80 - Forwarding from [::1]:8080 -> 80 - ``` + ``` + Forwarding from 127.0.0.1:8080 -> 80 + Forwarding from [::1]:8080 -> 80 + ``` -2. 在浏览器中加载 [http://localhost:8080](http://localhost:8080) 页面以查看留言板。 +2. 在浏览器中加载 [http://localhost:8080](http://localhost:8080) +页面以查看 Guestbook。 - ### 通过 `LoadBalancer` 查看前端服务 -如果您部署了 `frontend-service.yaml`。你需要找到 IP 地址来查看你的留言板。 +如果你部署了 `frontend-service.yaml`,需要找到用来查看 Guestbook 的 +IP 地址。 1. 运行以下命令以获取前端服务的 IP 地址。 - ```shell - kubectl get service frontend - ``` + ```shell + kubectl get service frontend + ``` - - 响应应该与此类似: + + 响应应该与此类似: - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - frontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m - ``` + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + frontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m + ``` -2. 
复制外部 IP 地址,然后在浏览器中加载页面以查看留言板。 +2. 复制这里的外部 IP 地址,然后在浏览器中加载页面以查看留言板。 + +{{< note >}} + +尝试通过输入消息并点击 Submit 来添加一些留言板条目。 +你所输入的消息会在前端显示。这一消息表明数据被通过你 +之前所创建的 Service 添加到 Redis 存储中。 +{{< /note >}} - ## 扩展 Web 前端 -伸缩很容易是因为服务器本身被定义为使用一个 Deployment 控制器的 Service。 +你可以根据需要执行伸缩操作,这是因为服务器本身被定义为使用一个 +Deployment 控制器的 Service。 1. 运行以下命令扩展前端 Pod 的数量: - ```shell - kubectl scale deployment frontend --replicas=5 - ``` + ```shell + kubectl scale deployment frontend --replicas=5 + ``` 2. 查询 Pod 列表验证正在运行的前端 Pod 的数量: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - - 响应应该类似于这样: + + 响应应该类似于这样: - ``` - NAME READY STATUS RESTARTS AGE - frontend-3823415956-70qj5 1/1 Running 0 5s - frontend-3823415956-dsvc5 1/1 Running 0 54m - frontend-3823415956-k22zn 1/1 Running 0 54m - frontend-3823415956-w9gbt 1/1 Running 0 54m - frontend-3823415956-x2pld 1/1 Running 0 5s - mongo-1068406935-3lswp 1/1 Running 0 56m - ``` + ``` + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-5df5m 1/1 Running 0 83s + frontend-85595f5bf9-7zmg5 1/1 Running 0 83s + frontend-85595f5bf9-cpskg 1/1 Running 0 15m + frontend-85595f5bf9-l2l54 1/1 Running 0 14m + frontend-85595f5bf9-l9c8z 1/1 Running 0 14m + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m + redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m + ``` 3. 运行以下命令缩小前端 Pod 的数量: - ```shell - kubectl scale deployment frontend --replicas=2 - ``` + ```shell + kubectl scale deployment frontend --replicas=2 + ``` 4. 查询 Pod 列表验证正在运行的前端 Pod 的数量: - ```shell - kubectl get pods - ``` - - - 响应应该类似于这样: - - ``` - NAME READY STATUS RESTARTS AGE - frontend-3823415956-k22zn 1/1 Running 0 1h - frontend-3823415956-w9gbt 1/1 Running 0 1h - mongo-1068406935-3lswp 1/1 Running 0 1h - ``` + ```shell + kubectl get pods + ``` + + 响应应该类似于这样: + ``` + NAME READY STATUS RESTARTS AGE + frontend-85595f5bf9-cpskg 1/1 Running 0 16m + frontend-85595f5bf9-l9c8z 1/1 Running 0 15m + redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m + redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m + redis-leader-fb76b4755-xjr2n 1/1 Running 0 109m + ``` ## {{% heading "cleanup" %}} - -删除 Deployments 和服务还会删除正在运行的 Pod。使用标签用一个命令删除多个资源。 +删除 Deployments 和服务还会删除正在运行的 Pod。 +使用标签用一个命令删除多个资源。 1. 运行以下命令以删除所有 Pod,Deployments 和 Services。 - ```shell - kubectl delete deployment -l app.kubernetes.io/name=mongo - kubectl delete service -l app.kubernetes.io/name=mongo - kubectl delete deployment -l app.kubernetes.io/name=guestbook - kubectl delete service -l app.kubernetes.io/name=guestbook - ``` + ```shell + kubectl delete deployment -l app=redis + kubectl delete service -l app=redis + kubectl delete deployment frontend + kubectl delete service frontend + ``` - - 响应应该是: - - ``` - deployment.apps "mongo" deleted - service "mongo" deleted - deployment.apps "frontend" deleted - service "frontend" deleted - ``` + + 响应应该是: + ``` + deployment.apps "redis-follower" deleted + deployment.apps "redis-leader" deleted + deployment.apps "frontend" deleted + service "frontend" deleted + ``` 2. 查询 Pod 列表,确认没有 Pod 在运行: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - - 响应应该是: - - ``` - No resources found. - ``` + + 响应应该是: + ``` + No resources found in default namespace. 
+ ``` ## {{% heading "whatsnext" %}} - - -* 完成 [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) 交互式教程 -* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) -* 阅读更多关于[连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) -* 阅读更多关于[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) +* 完成 [Kubernetes 基础](/zh/docs/tutorials/kubernetes-basics/) 交互式教程 +* 使用 Kubernetes 创建一个博客,使用 + [MySQL 和 Wordpress 的持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) +* 进一步阅读[连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) +* 进一步阅读[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) diff --git a/content/zh/examples/application/guestbook/frontend-deployment.yaml b/content/zh/examples/application/guestbook/frontend-deployment.yaml index 613c654aa9..f97f20dab6 100644 --- a/content/zh/examples/application/guestbook/frontend-deployment.yaml +++ b/content/zh/examples/application/guestbook/frontend-deployment.yaml @@ -1,32 +1,29 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook apiVersion: apps/v1 kind: Deployment metadata: name: frontend - labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend spec: + replicas: 3 selector: matchLabels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend - replicas: 3 + app: guestbook + tier: frontend template: metadata: labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend spec: containers: - - name: guestbook - image: paulczar/gb-frontend:v5 - # image: gcr.io/google-samples/gb-frontend:v4 + - name: php-redis + image: gcr.io/google_samples/gb-frontend:v5 + env: + - name: GET_HOSTS_FROM + value: "dns" resources: requests: cpu: 100m memory: 100Mi - env: - - name: GET_HOSTS_FROM - value: dns ports: - - containerPort: 80 + - containerPort: 80 \ No newline at end of file diff --git a/content/zh/examples/application/guestbook/frontend-service.yaml b/content/zh/examples/application/guestbook/frontend-service.yaml index 34ad3771d7..410c6bbaf2 100644 --- a/content/zh/examples/application/guestbook/frontend-service.yaml +++ b/content/zh/examples/application/guestbook/frontend-service.yaml @@ -1,16 +1,19 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook apiVersion: v1 kind: Service metadata: name: frontend labels: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer + #type: LoadBalancer ports: + # the port that this service should serve on - port: 80 selector: - app.kubernetes.io/name: guestbook - app.kubernetes.io/component: frontend + app: guestbook + tier: frontend \ No newline at end of file diff --git a/content/zh/examples/application/guestbook/mongo-deployment.yaml b/content/zh/examples/application/guestbook/mongo-deployment.yaml deleted file mode 100644 index 04908ce25b..0000000000 --- a/content/zh/examples/application/guestbook/mongo-deployment.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: mongo - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend -spec: - selector: - matchLabels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend - replicas: 1 - template: - metadata: - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend - spec: - containers: - - name: mongo - image: mongo:4.2 - args: - - --bind_ip - - 0.0.0.0 - resources: - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 27017 diff --git a/content/zh/examples/application/guestbook/mongo-service.yaml b/content/zh/examples/application/guestbook/mongo-service.yaml deleted file mode 100644 index b9cef607bc..0000000000 --- a/content/zh/examples/application/guestbook/mongo-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: mongo - labels: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend -spec: - ports: - - port: 27017 - targetPort: 27017 - selector: - app.kubernetes.io/name: mongo - app.kubernetes.io/component: backend diff --git a/content/zh/examples/application/guestbook/redis-follower-deployment.yaml b/content/zh/examples/application/guestbook/redis-follower-deployment.yaml new file mode 100644 index 0000000000..c418cf7364 --- /dev/null +++ b/content/zh/examples/application/guestbook/redis-follower-deployment.yaml @@ -0,0 +1,30 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-follower + labels: + app: redis + role: follower + tier: backend +spec: + replicas: 2 + selector: + matchLabels: + app: redis + template: + metadata: + labels: + app: redis + role: follower + tier: backend + spec: + containers: + - name: follower + image: gcr.io/google_samples/gb-redis-follower:v2 + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 \ No newline at end of file diff --git a/content/zh/examples/application/guestbook/redis-follower-service.yaml b/content/zh/examples/application/guestbook/redis-follower-service.yaml new file mode 100644 index 0000000000..53283d35c4 --- /dev/null +++ b/content/zh/examples/application/guestbook/redis-follower-service.yaml @@ -0,0 +1,17 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: v1 +kind: Service +metadata: + name: redis-follower + labels: + app: redis + role: follower + tier: backend +spec: + ports: + # the port that this service should serve on + - port: 6379 + selector: + app: redis + role: follower + tier: backend \ No newline at end of file diff --git a/content/zh/examples/application/guestbook/redis-leader-deployment.yaml b/content/zh/examples/application/guestbook/redis-leader-deployment.yaml new file mode 100644 index 0000000000..9c7547291c --- /dev/null +++ b/content/zh/examples/application/guestbook/redis-leader-deployment.yaml @@ -0,0 +1,30 @@ +# SOURCE: 
https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: apps/v1 +kind: Deployment +metadata: + name: redis-leader + labels: + app: redis + role: leader + tier: backend +spec: + replicas: 1 + selector: + matchLabels: + app: redis + template: + metadata: + labels: + app: redis + role: leader + tier: backend + spec: + containers: + - name: leader + image: "docker.io/redis:6.0.5" + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 \ No newline at end of file diff --git a/content/zh/examples/application/guestbook/redis-leader-service.yaml b/content/zh/examples/application/guestbook/redis-leader-service.yaml new file mode 100644 index 0000000000..e04cc183d0 --- /dev/null +++ b/content/zh/examples/application/guestbook/redis-leader-service.yaml @@ -0,0 +1,17 @@ +# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook +apiVersion: v1 +kind: Service +metadata: + name: redis-leader + labels: + app: redis + role: leader + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: redis + role: leader + tier: backend \ No newline at end of file diff --git a/content/zh/examples/application/job/indexed-job-vol.yaml b/content/zh/examples/application/job/indexed-job-vol.yaml new file mode 100644 index 0000000000..ed40e1cc44 --- /dev/null +++ b/content/zh/examples/application/job/indexed-job-vol.yaml @@ -0,0 +1,27 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: 'indexed-job' +spec: + completions: 5 + parallelism: 3 + completionMode: Indexed + template: + spec: + restartPolicy: Never + containers: + - name: 'worker' + image: 'docker.io/library/busybox' + command: + - "rev" + - "/input/data.txt" + volumeMounts: + - mountPath: /input + name: input + volumes: + - name: input + downwardAPI: + items: + - path: "data.txt" + fieldRef: + fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index'] \ No newline at end of file diff --git a/content/zh/examples/application/job/indexed-job.yaml b/content/zh/examples/application/job/indexed-job.yaml new file mode 100644 index 0000000000..5b80d35264 --- /dev/null +++ b/content/zh/examples/application/job/indexed-job.yaml @@ -0,0 +1,35 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: 'indexed-job' +spec: + completions: 5 + parallelism: 3 + completionMode: Indexed + template: + spec: + restartPolicy: Never + initContainers: + - name: 'input' + image: 'docker.io/library/bash' + command: + - "bash" + - "-c" + - | + items=(foo bar baz qux xyz) + echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt + volumeMounts: + - mountPath: /input + name: input + containers: + - name: 'worker' + image: 'docker.io/library/busybox' + command: + - "rev" + - "/input/data.txt" + volumeMounts: + - mountPath: /input + name: input + volumes: + - name: input + emptyDir: {} diff --git a/content/zh/examples/examples_test.go b/content/zh/examples/examples_test.go index db80be97fe..f868eb3d4a 100644 --- a/content/zh/examples/examples_test.go +++ b/content/zh/examples/examples_test.go @@ -32,7 +32,6 @@ import ( "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/validation/field" "k8s.io/apimachinery/pkg/util/yaml" - // "k8s.io/apiserver/pkg/util/feature" "k8s.io/kubernetes/pkg/api/legacyscheme" "k8s.io/kubernetes/pkg/apis/apps" @@ -70,7 +69,6 @@ import ( _ "k8s.io/kubernetes/pkg/apis/networking/install" _ "k8s.io/kubernetes/pkg/apis/policy/install" _ "k8s.io/kubernetes/pkg/apis/rbac/install" - _ "k8s.io/kubernetes/pkg/apis/settings/install" _ 
"k8s.io/kubernetes/pkg/apis/storage/install" ) @@ -100,7 +98,6 @@ func (g TestGroup) Codec() runtime.Codec { func initGroups() { Groups = make(map[string]TestGroup) - groupNames := []string{ api.GroupName, apps.GroupName, @@ -109,7 +106,6 @@ func initGroups() { networking.GroupName, policy.GroupName, rbac.GroupName, - settings.GroupName, storage.GroupName, } @@ -152,6 +148,19 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) { } func validateObject(obj runtime.Object) (errors field.ErrorList) { + podValidationOptions := validation.PodValidationOptions{ + AllowMultipleHugePageResources: true, + AllowDownwardAPIHugePages: true, + } + + quotaValidationOptions := validation.ResourceQuotaValidationOptions{ + AllowPodAffinityNamespaceSelector: true, + } + + pspValidationOptions := policy_validation.PodSecurityPolicyValidationOptions{ + AllowEphemeralVolumeType: true, + } + // Enable CustomPodDNS for testing // feature.DefaultFeatureGate.Set("CustomPodDNS=true") switch t := obj.(type) { @@ -186,7 +195,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { opts := validation.PodValidationOptions{ AllowMultipleHugePageResources: true, } - errors = validation.ValidatePod(t, opts) + errors = validation.ValidatePodCreate(t, opts) case *api.PodList: for i := range t.Items { errors = append(errors, validateObject(&t.Items[i])...) @@ -195,12 +204,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidatePodTemplate(t) + errors = validation.ValidatePodTemplate(t, podValidationOptions) case *api.ReplicationController: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateReplicationController(t) + errors = validation.ValidateReplicationController(t, podValidationOptions) case *api.ReplicationControllerList: for i := range t.Items { errors = append(errors, validateObject(&t.Items[i])...) 
@@ -209,7 +218,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateResourceQuota(t) + errors = validation.ValidateResourceQuota(t, quotaValidationOptions) case *api.Secret: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -219,7 +228,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = validation.ValidateService(t, true) + // handle clusterIPs, logic copied from service strategy + if len(t.Spec.ClusterIP) > 0 && len(t.Spec.ClusterIPs) == 0 { + t.Spec.ClusterIPs = []string{t.Spec.ClusterIP} + } + errors = validation.ValidateService(t) case *api.ServiceAccount: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -233,7 +246,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = apps_validation.ValidateStatefulSet(t) + errors = apps_validation.ValidateStatefulSet(t, podValidationOptions) case *autoscaling.HorizontalPodAutoscaler: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -254,12 +267,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = apps_validation.ValidateDaemonSet(t) + errors = apps_validation.ValidateDaemonSet(t, podValidationOptions) case *apps.Deployment: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = apps_validation.ValidateDeployment(t) + errors = apps_validation.ValidateDeployment(t, podValidationOptions) case *networking.Ingress: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -269,18 +282,30 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version, } errors = networking_validation.ValidateIngressCreate(t, gv) + case *networking.IngressClass: + /* + if t.Namespace == "" { + t.Namespace = api.NamespaceDefault + } + gv := schema.GroupVersion{ + Group: networking.GroupName, + Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version, + } + */ + errors = networking_validation.ValidateIngressClass(t) + case *policy.PodSecurityPolicy: - errors = policy_validation.ValidatePodSecurityPolicy(t) + errors = policy_validation.ValidatePodSecurityPolicy(t, pspValidationOptions) case *apps.ReplicaSet: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = apps_validation.ValidateReplicaSet(t) + errors = apps_validation.ValidateReplicaSet(t, podValidationOptions) case *batch.CronJob: if t.Namespace == "" { t.Namespace = api.NamespaceDefault } - errors = batch_validation.ValidateCronJob(t) + errors = batch_validation.ValidateCronJob(t, podValidationOptions) case *networking.NetworkPolicy: if t.Namespace == "" { t.Namespace = api.NamespaceDefault @@ -291,6 +316,9 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) { t.Namespace = api.NamespaceDefault } errors = policy_validation.ValidatePodDisruptionBudget(t) + case *rbac.ClusterRole: + // clusterole does not accept namespace + errors = rbac_validation.ValidateClusterRole(t) case *rbac.ClusterRoleBinding: // clusterolebinding does not accept namespace errors = rbac_validation.ValidateClusterRoleBinding(t) @@ -418,6 +446,7 @@ func TestExampleObjectSchemas(t *testing.T) { "storagelimits": {&api.LimitRange{}}, }, "admin/sched": { + 
"clusterrole": {&rbac.ClusterRole{}}, "my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}}, "pod1": {&api.Pod{}}, "pod2": {&api.Pod{}}, @@ -441,12 +470,12 @@ func TestExampleObjectSchemas(t *testing.T) { "cassandra-statefulset": {&apps.StatefulSet{}, &storage.StorageClass{}}, }, "application/guestbook": { - "frontend-deployment": {&apps.Deployment{}}, - "frontend-service": {&api.Service{}}, - "redis-master-deployment": {&apps.Deployment{}}, - "redis-master-service": {&api.Service{}}, - "redis-slave-deployment": {&apps.Deployment{}}, - "redis-slave-service": {&api.Service{}}, + "frontend-deployment": {&apps.Deployment{}}, + "frontend-service": {&api.Service{}}, + "redis-follower-deployment": {&apps.Deployment{}}, + "redis-follower-service": {&api.Service{}}, + "redis-leader-deployment": {&apps.Deployment{}}, + "redis-leader-service": {&api.Service{}}, }, "application/hpa": { "php-apache": {&autoscaling.HorizontalPodAutoscaler{}}, @@ -456,8 +485,10 @@ func TestExampleObjectSchemas(t *testing.T) { "nginx-svc": {&api.Service{}}, }, "application/job": { - "cronjob": {&batch.CronJob{}}, - "job-tmpl": {&batch.Job{}}, + "cronjob": {&batch.CronJob{}}, + "job-tmpl": {&batch.Job{}}, + "indexed-job": {&batch.Job{}}, + "indexed-job-vol": {&batch.Job{}}, }, "application/job/rabbitmq": { "job": {&batch.Job{}}, @@ -536,13 +567,15 @@ func TestExampleObjectSchemas(t *testing.T) { "two-container-pod": {&api.Pod{}}, }, "pods/config": { - "redis-pod": {&api.Pod{}}, + "redis-pod": {&api.Pod{}}, + "example-redis-config": {&api.ConfigMap{}}, }, "pods/inject": { "dapi-envars-container": {&api.Pod{}}, "dapi-envars-pod": {&api.Pod{}}, "dapi-volume": {&api.Pod{}}, "dapi-volume-resources": {&api.Pod{}}, + "dependent-envars": {&api.Pod{}}, "envars": {&api.Pod{}}, "pod-multiple-secret-env-variable": {&api.Pod{}}, "pod-secret-envFrom": {&api.Pod{}}, @@ -588,10 +621,11 @@ func TestExampleObjectSchemas(t *testing.T) { "redis": {&api.Pod{}}, }, "policy": { - "baseline-psp": {&policy.PodSecurityPolicy{}}, - "example-psp": {&policy.PodSecurityPolicy{}}, - "privileged-psp": {&policy.PodSecurityPolicy{}}, - "restricted-psp": {&policy.PodSecurityPolicy{}}, + "baseline-psp": {&policy.PodSecurityPolicy{}}, + "example-psp": {&policy.PodSecurityPolicy{}}, + "priority-class-resourcequota": {&api.ResourceQuota{}}, + "privileged-psp": {&policy.PodSecurityPolicy{}}, + "restricted-psp": {&policy.PodSecurityPolicy{}}, "zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}}, "zookeeper-pod-disruption-budget-minavailable": {&policy.PodDisruptionBudget{}}, }, @@ -600,29 +634,42 @@ func TestExampleObjectSchemas(t *testing.T) { "load-balancer-example": {&apps.Deployment{}}, }, "service/access": { - "frontend": {&api.Service{}, &apps.Deployment{}}, - "hello-application": {&apps.Deployment{}}, - "hello-service": {&api.Service{}}, - "hello": {&apps.Deployment{}}, + "backend-deployment": {&apps.Deployment{}}, + "backend-service": {&api.Service{}}, + "frontend-deployment": {&apps.Deployment{}}, + "frontend-service": {&api.Service{}}, + "hello-application": {&apps.Deployment{}}, }, "service/networking": { - "curlpod": {&apps.Deployment{}}, - "custom-dns": {&api.Pod{}}, - "dual-stack-default-svc": {&api.Service{}}, - "dual-stack-ipv4-svc": {&api.Service{}}, - "dual-stack-ipv6-lb-svc": {&api.Service{}}, - "dual-stack-ipv6-svc": {&api.Service{}}, - "hostaliases-pod": {&api.Pod{}}, - "ingress": {&networking.Ingress{}}, - "network-policy-allow-all-egress": 
{&networking.NetworkPolicy{}}, - "network-policy-allow-all-ingress": {&networking.NetworkPolicy{}}, - "network-policy-default-deny-egress": {&networking.NetworkPolicy{}}, - "network-policy-default-deny-ingress": {&networking.NetworkPolicy{}}, - "network-policy-default-deny-all": {&networking.NetworkPolicy{}}, - "nginx-policy": {&networking.NetworkPolicy{}}, - "nginx-secure-app": {&api.Service{}, &apps.Deployment{}}, - "nginx-svc": {&api.Service{}}, - "run-my-nginx": {&apps.Deployment{}}, + "curlpod": {&apps.Deployment{}}, + "custom-dns": {&api.Pod{}}, + "dual-stack-default-svc": {&api.Service{}}, + "dual-stack-ipfamilies-ipv6": {&api.Service{}}, + "dual-stack-ipv6-svc": {&api.Service{}}, + "dual-stack-prefer-ipv6-lb-svc": {&api.Service{}}, + "dual-stack-preferred-ipfamilies-svc": {&api.Service{}}, + "dual-stack-preferred-svc": {&api.Service{}}, + "external-lb": {&networking.IngressClass{}}, + "example-ingress": {&networking.Ingress{}}, + "hostaliases-pod": {&api.Pod{}}, + "ingress-resource-backend": {&networking.Ingress{}}, + "ingress-wildcard-host": {&networking.Ingress{}}, + "minimal-ingress": {&networking.Ingress{}}, + "name-virtual-host-ingress": {&networking.Ingress{}}, + "name-virtual-host-ingress-no-third-host": {&networking.Ingress{}}, + "namespaced-params": {&networking.IngressClass{}}, + "network-policy-allow-all-egress": {&networking.NetworkPolicy{}}, + "network-policy-allow-all-ingress": {&networking.NetworkPolicy{}}, + "network-policy-default-deny-egress": {&networking.NetworkPolicy{}}, + "network-policy-default-deny-ingress": {&networking.NetworkPolicy{}}, + "network-policy-default-deny-all": {&networking.NetworkPolicy{}}, + "nginx-policy": {&networking.NetworkPolicy{}}, + "nginx-secure-app": {&api.Service{}, &apps.Deployment{}}, + "nginx-svc": {&api.Service{}}, + "run-my-nginx": {&apps.Deployment{}}, + "simple-fanout-example": {&networking.Ingress{}}, + "test-ingress": {&networking.Ingress{}}, + "tls-example-ingress": {&networking.Ingress{}}, }, "windows": { "configmap-pod": {&api.ConfigMap{}, &api.Pod{}}, diff --git a/content/zh/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/zh/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml similarity index 73% rename from content/zh/examples/service/networking/dual-stack-ipv4-svc.yaml rename to content/zh/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml index a875f44d6d..7c7239cae6 100644 --- a/content/zh/examples/service/networking/dual-stack-ipv4-svc.yaml +++ b/content/zh/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml @@ -2,11 +2,13 @@ apiVersion: v1 kind: Service metadata: name: my-service + labels: + app: MyApp spec: - ipFamily: IPv4 + ipFamilies: + - IPv6 selector: app: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file diff --git a/content/zh/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/zh/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml similarity index 76% rename from content/zh/examples/service/networking/dual-stack-ipv6-lb-svc.yaml rename to content/zh/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml index 2586ec9b39..0949a75428 100644 --- a/content/zh/examples/service/networking/dual-stack-ipv6-lb-svc.yaml +++ b/content/zh/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml @@ -5,11 +5,12 @@ metadata: labels: app: MyApp spec: - ipFamily: IPv6 + ipFamilyPolicy: PreferDualStack + ipFamilies: + - IPv6 type: LoadBalancer selector: app: MyApp ports: - protocol: TCP port: 80 - targetPort: 
9376 \ No newline at end of file diff --git a/data/releases/schedule.yaml b/data/releases/schedule.yaml index 1a239cde22..78fa73a6a4 100644 --- a/data/releases/schedule.yaml +++ b/data/releases/schedule.yaml @@ -1,20 +1,26 @@ schedules: - release: 1.21 - next: 1.21.2 - cherryPickDeadline: 2021-06-12 - targetDate: 2021-06-16 + next: 1.21.3 + cherryPickDeadline: 2021-07-10 + targetDate: 2021-07-14 endOfLifeDate: 2022-04-30 previousPatches: + - release: 1.21.2 + cherryPickDeadline: 2021-06-12 + targetDate: 2021-06-16 - release: 1.21.1 cherryPickDeadline: 2021-05-07 targetDate: 2021-05-12 note: Regression https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs - release: 1.20 - next: 1.20.8 - cherryPickDeadline: 2021-06-12 - targetDate: 2021-06-16 + next: 1.20.9 + cherryPickDeadline: 2021-07-10 + targetDate: 2021-07-14 endOfLifeDate: 2021-12-30 previousPatches: + - release: 1.20.8 + cherryPickDeadline: 2021-06-12 + targetDate: 2021-06-16 - release: 1.20.7 cherryPickDeadline: 2021-05-07 targetDate: 2021-05-12 @@ -40,11 +46,14 @@ schedules: targetDate: 2020-12-18 note: "Tagging Issue https://groups.google.com/g/kubernetes-dev/c/dNH2yknlCBA" - release: 1.19 - next: 1.19.12 - cherryPickDeadline: 2021-06-12 - targetDate: 2021-06-16 + next: 1.19.13 + cherryPickDeadline: 2021-07-10 + targetDate: 2021-07-14 endOfLifeDate: 2021-09-30 previousPatches: + - release: 1.19.12 + cherryPickDeadline: 2021-06-12 + targetDate: 2021-06-16 - release: 1.19.11 cherryPickDeadline: 2021-05-07 targetDate: 2021-05-12 diff --git a/layouts/partials/sidebar-tree.html b/layouts/partials/sidebar-tree.html index 4bc83737ae..5e909e778f 100644 --- a/layouts/partials/sidebar-tree.html +++ b/layouts/partials/sidebar-tree.html @@ -1,5 +1,6 @@ {{/* We cache this partial for bigger sites and set the active class client side. */}} -{{ $shouldDelayActive := ge (len .Site.Pages) 2000 }} +{{ $sidebarCacheLimit := cond (isset .Site.Params.ui "sidebar_cache_limit") .Site.Params.ui.sidebar_cache_limit 2000 }} +{{ $shouldDelayActive := ge (len .Site.Pages) $sidebarCacheLimit }}
{{ if not .Site.Params.ui.sidebar_search_disable }} + {{ else }} +
+ +
+
{{ end }} -
{{ define "section-tree-nav-section" }} -{{ $s := .section }} -{{ $p := .page }} -{{ $shouldDelayActive := .delayActive }} -{{ $active := eq $p.CurrentSection $s }} -{{ $show := or (eq $s $p.FirstSection) (and (not $p.Site.Params.ui.sidebar_menu_compact) ($p.IsDescendant $s)) }} -{{ $sid := $s.RelPermalink | anchorize }} -
    - {{ if (ne $s.File.Path "docs/_index.md") }} -
- - {{ $s.LinkTitle }} - - - {{ end }} -
      -
    • - {{ $pages := where (union $s.Pages $s.Sections).ByWeight ".Params.toc_hide" "!=" true }} - {{ with site.Params.language_alternatives }} - {{ range . }} - {{ with (where $.section.Translations ".Lang" . ) }} - {{ $p := index . 0 }} - {{ $pages = $pages | lang.Merge (union $p.Pages $p.Sections) }} + {{ $s := .section }} + {{ $p := .page }} + {{ $shouldDelayActive := .shouldDelayActive }} + {{ $sidebarMenuTruncate := .sidebarMenuTruncate }} + {{ $treeRoot := cond (eq .ulNr 0) true false }} + {{ $ulNr := .ulNr }} + {{ $ulShow := .ulShow }} + {{ $active := and (not $shouldDelayActive) (eq $s $p) }} + {{ $activePath := and (not $shouldDelayActive) ($p.IsDescendant $s) }} + {{ $show := cond (or (lt $ulNr $ulShow) $activePath (and (not $shouldDelayActive) (eq $s.Parent $p.Parent)) (and (not $shouldDelayActive) (eq $s.Parent $p)) (and (not $shouldDelayActive) ($p.IsDescendant $s.Parent))) true false }} + {{ $mid := printf "m-%s" ($s.RelPermalink | anchorize) }} + {{ $pages_tmp := where (union $s.Pages $s.Sections).ByWeight ".Params.toc_hide" "!=" true }} + {{ $pages := $pages_tmp | first $sidebarMenuTruncate }} + {{ $withChild := gt (len $pages) 0 }} + {{ $manualLink := cond (isset $s.Params "manuallink") $s.Params.manualLink ( cond (isset $s.Params "manuallinkrelref") (relref $s $s.Params.manualLinkRelref) $s.RelPermalink) }} + {{ $manualLinkTitle := cond (isset $s.Params "manuallinktitle") $s.Params.manualLinkTitle $s.Title }} + +
    • + {{ if (and $p.Site.Params.ui.sidebar_menu_foldable (ge $ulNr 1)) }} + + + {{ else }} + {{ if not $treeRoot }} + {{ with $s.Params.Icon}}{{ end }}{{ $s.LinkTitle }} + {{ end }} + {{ end }} + {{if $withChild }} + {{ $ulNr := add $ulNr 1 }} +
        + {{ $pages := where (union $s.Pages $s.Sections).ByWeight ".Params.toc_hide" "!=" true }} + {{ with site.Params.language_alternatives }} + {{ range . }} + {{ with (where $.section.Translations ".Lang" . ) }} + {{ $p := index . 0 }} + {{ $pages = $pages | lang.Merge (union $p.Pages $p.Sections) }} + {{ end }} {{ end }} {{ end }} - {{ end }} - {{ $pages := $pages | first 50 }} - {{ range $pages }} - {{ if .IsPage }} - {{ $mid := printf "m-%s" (.RelPermalink | anchorize) }} - {{ $active := eq . $p }} - {{ $isForeignLanguage := (ne (string .Lang) (string $.currentLang)) }} - - {{ .LinkTitle }}{{ if $isForeignLanguage }} ({{ .Lang | upper }}){{ end }} - - {{ else }} - {{ template "section-tree-nav-section" (dict "page" $p "section" . "currentLang" $.currentLang) }} + {{ $pages := $pages | first 50 }} + {{ range $pages }} + {{ if (not (and (eq $s $p.Site.Home) (eq .Params.toc_root true)) ) }} + {{ $mid := printf "m-%s" (.RelPermalink | anchorize) }} + {{ $active := eq . $p }} + {{ $isForeignLanguage := (ne (string .Lang) (string $.currentLang)) }} + {{ if (and $isForeignLanguage ($p.IsDescendant $s)) }} + + {{ .LinkTitle }}{{ if $isForeignLanguage }} ({{ .Lang | upper }}){{ end }} + + {{ else }} + {{ template "section-tree-nav-section" (dict "page" $p "section" . "currentLang" $.currentLang "shouldDelayActive" $shouldDelayActive "sidebarMenuTruncate" $sidebarMenuTruncate "ulNr" $ulNr "ulShow" $ulShow) }} + {{ end }} + {{ end }} {{ end }} - {{ end }} - -
      -
    +
+ {{ end }} + {{ end }} diff --git a/layouts/shortcodes/capture.html b/layouts/shortcodes/capture.html deleted file mode 100644 index cc762273c3..0000000000 --- a/layouts/shortcodes/capture.html +++ /dev/null @@ -1,8 +0,0 @@ -{{ $_hugo_config := `{ "version": 1 }`}} -{{- $id := .Get 0 -}} -{{- if not $id -}} -{{- errorf "missing id in capture" -}} -{{- end -}} -{{- $capture_id := printf "capture %s" $id -}} -{{- .Page.Scratch.Set $capture_id .Inner -}} -{{ warnf "Invalid shortcode: %s, in %q" $capture_id (relLangURL .Page.Path) }} \ No newline at end of file diff --git a/static/_redirects b/static/_redirects index 0d95cef56f..daaa22d497 100644 --- a/static/_redirects +++ b/static/_redirects @@ -130,6 +130,7 @@ /docs/concepts/scheduling-eviction/eviction-policy/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301 /docs/concepts/service-catalog/ /docs/concepts/extend-kubernetes/service-catalog/ 301 /docs/concepts/services-networking/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301 +/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ /docs/tasks/network/customize-hosts-file-for-pods/ 301 /docs/concepts/storage/etcd-store-api-object/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 /docs/concepts/storage/volumes/emptyDirapiVersion/ /docs/concepts/storage/volumes/#emptydir/ 301 /docs/concepts/tools/kubectl/object-management-overview/ /docs/concepts/overview/object-management-kubectl/overview/ 301
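One way to sanity-check the `_redirects` entry added above, once deployed, is to request the old URL without following redirects and inspect the `Location` header — a minimal sketch:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Return the first response as-is instead of following the redirect,
	// so the 301 itself can be inspected.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Get("https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Expected once the change is live: 301 with a Location header pointing at
	// /docs/tasks/network/customize-hosts-file-for-pods/
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}
```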