Merge branch 'kubernetes:main' into master
commit b70a72e2cc
@ -2,10 +2,12 @@ aliases:
|
|||
sig-docs-blog-owners: # Approvers for blog content
|
||||
- onlydole
|
||||
- mrbobbytables
|
||||
- sftim
|
||||
sig-docs-blog-reviewers: # Reviewers for blog content
|
||||
- mrbobbytables
|
||||
- onlydole
|
||||
- sftim
|
||||
- nate-double-u
|
||||
sig-docs-de-owners: # Admins for German content
|
||||
- bene2k1
|
||||
- mkorbi
|
||||
|
|
@ -125,6 +127,7 @@ aliases:
|
|||
- ClaudiaJKang
|
||||
- gochist
|
||||
- ianychoi
|
||||
- jihoon-seo
|
||||
- seokho-son
|
||||
- ysyukr
|
||||
sig-docs-ko-reviews: # PR reviews for Korean content
|
||||
|
|
|
|||
|
|
@ -7,11 +7,16 @@ slug: are-you-ready-for-dockershim-removal
|
|||
|
||||
**Author:** Sergey Kanzhelev, Google. With reviews from Davanum Srinivas, Elana Hashman, Noah Kantrowitz, Rey Lejano.
|
||||
|
||||
{{% alert color="info" title="Poll closed" %}}
|
||||
This poll closed on January 7, 2022.
|
||||
{{% /alert %}}
|
||||
|
||||
Last year we announced that Dockershim is being deprecated: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
|
||||
Our current plan is to remove dockershim from the Kubernetes codebase soon.
|
||||
We are looking for feedback from you on whether you are ready for dockershim
removal and want to ensure that you are ready when the time comes.
|
||||
**Please fill out this survey: https://forms.gle/svCJmhvTv78jGdSx8**.
|
||||
|
||||
<del>Please fill out this survey: https://forms.gle/svCJmhvTv78jGdSx8</del>
|
||||
|
||||
The dockershim component that enables Docker as a Kubernetes container runtime is
|
||||
being deprecated in favor of runtimes that directly use the [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
|
||||
|
|
@ -33,7 +38,7 @@ beta versions, dockershim will be removed in December at the beginning of the
|
|||
There is only one month left to give us feedback. We want you to tell us how
|
||||
ready you are.
|
||||
|
||||
**We are collecting opinions through this survey: [https://forms.gle/svCJmhvTv78jGdSx8](https://forms.gle/svCJmhvTv78jGdSx8)**
|
||||
<del>We are collecting opinions through this survey: https://forms.gle/svCJmhvTv78jGdSx8</del>
|
||||
To better understand preparedness for the dockershim removal, our survey asks
which version of Kubernetes you are currently using, and an estimate of
when you think you will adopt Kubernetes 1.24. All the aggregated information
|
||||
|
|
|
|||
|
|
@ -0,0 +1,99 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes is Moving on From Dockershim: Commitments and Next Steps"
|
||||
date: 2022-01-07
|
||||
slug: kubernetes-is-moving-on-from-dockershim
|
||||
---
|
||||
|
||||
**Authors:** Sergey Kanzhelev (Google), Jim Angel (Google), Davanum Srinivas (VMware), Shannon Kularathna (Google), Chris Short (AWS), Dawn Chen (Google)
|
||||
|
||||
Kubernetes is removing dockershim in the upcoming v1.24 release. We're excited
|
||||
to reaffirm our community values by supporting open source container runtimes,
|
||||
enabling a smaller kubelet, and increasing engineering velocity for teams using
|
||||
Kubernetes. If you [use Docker Engine as a Container Runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)
|
||||
for your Kubernetes cluster, get ready to migrate to 1.24! To check if you're
|
||||
affected, refer to [Check whether dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/).
|
||||
|
||||
## Why we’re moving away from dockershim
|
||||
|
||||
Docker was the first container runtime used by Kubernetes. This is one of the
|
||||
reasons why Docker is so familiar to many Kubernetes users and enthusiasts.
|
||||
As containerization became an industry standard, the Kubernetes project added support
|
||||
for additional runtimes. This culminated with the implementation of the
|
||||
container runtime interface (CRI), letting system components (like the kubelet)
|
||||
talk to container runtimes in a standardized way. As a result, the hardcoded support for Docker –
|
||||
a component the project refers to as dockershim – became an anomaly in the Kubernetes project.
|
||||
Dependencies on Docker and dockershim have crept into various tools
|
||||
and projects in the CNCF ecosystem, resulting in fragile code.
|
||||
|
||||
By removing the
|
||||
dockershim CRI, we're embracing the first value of CNCF: "[Fast is better than
|
||||
slow](https://github.com/cncf/foundation/blob/master/charter.md#3-values)".
|
||||
Stay tuned for future communications on the topic!
|
||||
|
||||
## Deprecation timeline
|
||||
|
||||
We [formally announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/) the dockershim deprecation in December 2020. Full removal is targeted
|
||||
in Kubernetes 1.24, in April 2022. This timeline
|
||||
aligns with our [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior),
|
||||
which states that deprecated behaviors must function for at least 1 year
|
||||
after their announced deprecation.
|
||||
|
||||
We'll support Kubernetes version 1.23, which includes
|
||||
dockershim, for another year in the Kubernetes project. For managed
Kubernetes providers, vendor support is likely to last even longer, but this is
dependent on the companies themselves. Regardless, we're confident all cluster operators will have
|
||||
time to migrate. If you have more questions about the dockershim removal, refer
|
||||
to the [Dockershim Deprecation FAQ](/dockershim).
|
||||
|
||||
We asked you whether you feel prepared for the migration from dockershim in this
|
||||
survey: [Are you ready for Dockershim removal](/blog/2021/11/12/are-you-ready-for-dockershim-removal/).
|
||||
We had over 600 responses. To everybody who took the time to fill out the survey,
|
||||
thank you.
|
||||
|
||||
The results show that we still have a lot of ground to cover to help you to
|
||||
migrate smoothly. Other container runtimes exist, and have been promoted
|
||||
extensively. However, many users told us they still rely on dockershim,
|
||||
and sometimes have dependencies that need to be re-worked. Some of these
|
||||
dependencies are outside of your control. Based on the feedback received from
|
||||
you, here are some of the steps we are taking to help.
|
||||
|
||||
## Our next steps
|
||||
|
||||
Based on the feedback you provided:
|
||||
|
||||
- CNCF and the 1.24 release team are committed to delivering documentation in
|
||||
time for the 1.24 release. This includes publishing more informative blog posts like this
|
||||
one, updating existing code samples, tutorials, and tasks, and producing a
|
||||
migration guide for cluster operators.
|
||||
- We are reaching out to the rest of the CNCF community to help prepare them for
|
||||
this change.
|
||||
|
||||
If you're part of a project with dependencies on dockershim, or if you're
|
||||
interested in helping with the migration effort, please join us! There's always
|
||||
room for more contributors, whether to our transition tools or to our
|
||||
documentation. To get started, say hello in the
[#sig-node](https://kubernetes.slack.com/archives/C0BP8PW9G)
channel on [Kubernetes Slack](https://slack.kubernetes.io/)!
|
||||
|
||||
## Final thoughts
|
||||
|
||||
As a project, we've already seen cluster operators increasingly adopt other container runtimes through 2021.
|
||||
We believe there are no major blockers to migration. The steps we're taking to
|
||||
improve the migration experience will light the path more clearly for you.
|
||||
|
||||
We understand that migration from dockershim is yet another action you may need to
|
||||
do to keep your Kubernetes infrastructure up to date. For most of you, this step
|
||||
will be straightforward and transparent. In some cases, you will encounter
|
||||
hiccups or issues. The community has discussed at length whether postponing the
|
||||
dockershim removal would be helpful. For example, we recently talked about it in
|
||||
the [SIG Node discussion on November 11th](https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.r77y11bgzid)
|
||||
and in the [Kubernetes Steering committee meeting held on December 6th](https://docs.google.com/document/d/1qazwMIHGeF3iUh5xMJIJ6PDr-S3bNkT8tNLRkSiOkOU/edit#bookmark=id.m0ir406av7jx).
|
||||
We already [postponed](https://github.com/kubernetes/enhancements/pull/2481/) it once last year because the adoption rate of other
|
||||
runtimes was lower than we wanted, which also gave us more time to identify
|
||||
potential blocking issues.
|
||||
|
||||
At this point, we believe that the value that you (and Kubernetes) gain from
|
||||
dockershim removal makes up for the migration effort you'll have. Start planning
|
||||
now to avoid surprises. We'll have more updates and guides before Kubernetes
|
||||
1.24 is released.
|
||||
|
|
@ -0,0 +1,106 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Meet Our Contributors - APAC (India region)"
|
||||
date: 2022-01-10T12:00:00+0000
|
||||
slug: meet-our-contributors-india-ep-01
|
||||
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
|
||||
---
|
||||
|
||||
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
|
||||
|
||||
**Editor:** [Priyanka Saggu](https://psaggu.com)
|
||||
|
||||
---
|
||||
|
||||
Good day, everyone 👋
|
||||
|
||||
Welcome to the first episode of the APAC edition of the "Meet Our Contributors" blog post series.
|
||||
|
||||
|
||||
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes project in a variety of ways, as well as leading or maintaining numerous community initiatives.
|
||||
|
||||
💫 *So, without further ado, let's get started…*
|
||||
|
||||
|
||||
## [Arsh Sharma](https://github.com/RinkiyaKeDad)
|
||||
|
||||
Arsh is currently employed with Okteto as a Developer Experience engineer. As a new contributor, he realised that 1:1 mentorship opportunities were quite beneficial in getting him started with the upstream project.
|
||||
|
||||
He is presently a CI Signal shadow on the Kubernetes 1.23 release team. He is also contributing to the SIG Testing and SIG Docs projects, as well as to the [cert-manager](https://github.com/cert-manager/infrastructure) tools development work that is being done under the aegis of SIG Architecture.
|
||||
|
||||
Arsh helps newcomers plan their early contributions sustainably.
|
||||
|
||||
> _I would encourage folks to contribute in a way that's sustainable. What I mean by that
|
||||
> is that it's easy to be very enthusiastic early on and take up more stuff than one can
|
||||
> actually handle. This can often lead to burnout in later stages. It's much more sustainable
|
||||
> to work on things iteratively._
|
||||
|
||||
## [Kunal Kushwaha](https://github.com/kunal-kushwaha)
|
||||
|
||||
Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/). He also served as a Communications role shadow during the 1.22 release cycle.
|
||||
|
||||
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code and then through Google Code-in.
|
||||
|
||||
As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions, and fosters respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer.
|
||||
|
||||
|
||||
> _I believe if you find yourself in a place where you do not know much about the
|
||||
> project, that's a good thing because now you can learn while contributing and the
|
||||
> community is there to help you. It has helped me a lot in gaining skills, meeting
|
||||
> people from around the world and also helping them. You can learn on the go,
|
||||
> you don't have to be an expert. Make sure to also check out no code contributions
|
||||
> because being a beginner is a skill and you can bring new perspectives to the
|
||||
> organisation._
|
||||
|
||||
## [Madhav Jivarajani](https://github.com/MadhavJivrajani)
|
||||
|
||||
|
||||
Madhav Jivarajani works on the VMware Upstream Kubernetes stability team. He began contributing to the Kubernetes project in January 2021 and has since made significant contributions to several areas of work under SIG Architecture, SIG API Machinery, and SIG ContribEx (contributor experience).
|
||||
|
||||
Among his several significant contributions are his recent efforts toward the archival of [design proposals](https://github.com/kubernetes/community/issues/6055), refactoring the ["groups" codebase](https://github.com/kubernetes/k8s.io/pull/2713) under the k8s-infra repository to make it mockable and testable, and improving the functionality of the [GitHub k8s bot](https://github.com/kubernetes/test-infra/issues/23129).
|
||||
|
||||
In addition to his technical efforts, Madhav oversees many projects aimed at assisting new contributors. He organises bi-weekly "KEP reading club" sessions to help newcomers understand the process of adding new features, deprecating old ones, and making other key changes to the upstream project. He has also worked on developing [Katacoda scenarios](https://github.com/kubernetes-sigs/contributor-katacoda) to help new contributors become acquainted with the process of contributing to k/k. In addition to meeting with community members every week, he has organised several [new contributor workshops (NCW)](https://www.youtube.com/watch?v=FgsXbHBRYIc).
|
||||
|
||||
> _I initially did not know much about Kubernetes. I joined because the community was
|
||||
> super friendly. But what made me stay was not just the people, but the project itself.
|
||||
> My solution to not feeling overwhelmed in the community was to gain as much context
|
||||
> and knowledge into the topics that I was interested in and were being discussed. And
|
||||
> as a result I continued to dig deeper into Kubernetes and the design of it.
|
||||
> I am a systems nut & thus Kubernetes was an absolute goldmine for me._
|
||||
|
||||
|
||||
## [Rajas Kakodkar](https://github.com/rajaskakodkar)
|
||||
|
||||
Rajas Kakodkar currently works at VMware as a Member of Technical Staff. He has been engaged in many aspects of the upstream Kubernetes project since 2019.
|
||||
|
||||
He is now a key contributor to the Testing special interest group. He is also active in the SIG Network community. Lately, Rajas has contributed significantly to the [NetworkPolicy++](https://docs.google.com/document/d/1AtWQy2fNa4qXRag9cCp5_HsefD7bxKe3ea2RPn8jnSs/) and [`kpng`](https://github.com/kubernetes-sigs/kpng) sub-projects.
|
||||
|
||||
One of the first challenges he ran across was that he was in a different time zone than the upstream project's regular meeting hours. However, async interactions on community forums progressively corrected that problem.
|
||||
|
||||
> _I enjoy contributing to Kubernetes not just because I get to work on
|
||||
> cutting edge tech but more importantly because I get to work with
|
||||
> awesome people and help in solving real world problems._
|
||||
|
||||
## [Rajula Vineet Reddy](https://github.com/rajula96reddy)
|
||||
|
||||
Rajula Vineet Reddy, a Junior Engineer at CERN, is a member of the Marketing Council team under SIG ContribEx. He also served as a release shadow for SIG Release during the 1.22 and 1.23 Kubernetes release cycles.
|
||||
|
||||
He started looking at the Kubernetes project as part of a university project with the help of one of his professors. Over time, he spent a significant amount of time reading the project's documentation, Slack discussions, GitHub issues, and blogs, which helped him better grasp the Kubernetes project and piqued his interest in contributing upstream. One of his key contributions was his assistance with automation in the SIG ContribEx Upstream Marketing subproject.
|
||||
|
||||
According to Rajula, attending project meetings and shadowing various project roles are vital for learning about the community.
|
||||
|
||||
> _I find the community very helpful and it's always_
|
||||
> “you get back as much as you contribute”.
|
||||
> _The more involved you are, the more you will understand, get to learn and
|
||||
> contribute new things._
|
||||
>
|
||||
> _The first step to_ “come forward and start” _is hard. But it's all gonna be
|
||||
> smooth after that. Just take that jump._
|
||||
|
||||
---
|
||||
|
||||
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. We're thrilled to have other folks assisting us in reaching out to even more wonderful individuals of the community. Your suggestions would be much appreciated.
|
||||
|
||||
|
||||
We'll see you all in the next one. Until then, happy contributing, everyone! 👋
|
||||
|
||||
|
|
@ -30,55 +30,7 @@ insert dynamic port numbers into configuration blocks, services have to know
|
|||
how to find each other, etc. Rather than deal with this, Kubernetes takes a
|
||||
different approach.
|
||||
|
||||
## The Kubernetes network model
|
||||
|
||||
Every `Pod` gets its own IP address, at most one per IP family. This means you
do not need to deal with mapping container ports to host ports in order to
expose a `Pod`'s services on the network. This creates a clean,
|
||||
backwards-compatible model where `Pods` can be treated much like VMs or physical
|
||||
hosts from the perspectives of port allocation, naming, service discovery, load
|
||||
balancing, application configuration, and migration.
|
||||
|
||||
Kubernetes IP addresses exist at the `Pod` scope, in the `status.PodIPs` field
|
||||
- containers within a `Pod` share their network namespaces - including their IP
|
||||
address. This means that containers within a `Pod` can all reach each other's
|
||||
ports on `localhost`. This also means that containers within a `Pod` must
|
||||
coordinate port usage, but this is no different from processes in a VM. This is
|
||||
called the _IP-per-pod_ model.
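
To make this concrete, here is a minimal sketch of a two-container `Pod` (the names and images are illustrative, not taken from this page) in which the second container reaches the first over `localhost`, because both containers share the Pod's single network namespace and IP:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-example   # illustrative name
spec:
  containers:
  - name: web
    image: nginx               # listens on port 80 inside the Pod's network namespace
  - name: sidecar
    image: busybox
    # Polls the web container over localhost; no Service or port mapping is needed
    # because both containers share the same Pod IP and network namespace.
    command: ["sh", "-c", "while true; do wget -q -O /dev/null http://localhost:80; sleep 10; done"]
```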
|
||||
|
||||
In every cluster, there exists an abstract pod-network to which pods
|
||||
are connected by default, unless explicitly configured to use the
|
||||
host-network (on platforms that support it). Even if a host has
|
||||
multiple IPs, host-network pods only have one Kubernetes IP address at
|
||||
the `Pod` scope, that is, the `status.PodIPs` field contains one
|
||||
IP per address family (for now), so the "IP-per-pod" model is guaranteed.
|
||||
|
||||
Kubernetes imposes the following fundamental requirements on any networking
|
||||
implementation (barring any intentional network segmentation policies):
|
||||
|
||||
* any pod-network pod on any node can communicate with all other pod-network
|
||||
pods on all nodes without NAT.
|
||||
* non-pod processes on a node (the kubelet, and also for example any other system daemon) can
|
||||
communicate with all pods on that node.
|
||||
|
||||
In addition, for platforms and runtimes that support running pods in the host OS network:
|
||||
|
||||
* host-network pods of a node can connect directly with all pod IPs on all
nodes; however, unlike pod-network pods, the source IP address might not be
|
||||
present in the `Pod` `status.PodIPs` field.
|
||||
|
||||
This model is principally compatible with the desire for Kubernetes to enable
|
||||
low-friction porting of apps from VMs to containers. If your workload previously ran
|
||||
in a VM, your VM typically had a single IP address; everything in that VM could talk to
|
||||
other VMs on your network.
|
||||
This is the same basic model but less complex overall.
|
||||
|
||||
How this is implemented is a detail of the particular container runtime in use. Likewise, the networking option you choose may support [dual-stack IPv4/IPv6 networking](/docs/concepts/services-networking/dual-stack/); implementations vary.
|
||||
|
||||
It is possible to request ports on the `Node` itself which forward to your `Pod`
|
||||
(called host ports), but this is a very niche operation. How that forwarding is
|
||||
implemented is also a detail of the container runtime. The `Pod` itself is
|
||||
blind to the existence or non-existence of host ports.
|
||||
To learn about the Kubernetes networking model, see [here](/docs/concepts/services-networking/).
|
||||
|
||||
## How to implement the Kubernetes networking model
|
||||
|
||||
|
|
@ -152,10 +104,6 @@ network complexity required to deploy Kubernetes at scale within AWS.
|
|||
[Coil](https://github.com/cybozu-go/coil) is a CNI plugin designed for ease of integration, providing flexible egress networking.
|
||||
Coil operates with a low overhead compared to bare metal, and allows you to define arbitrary egress NAT gateways for external networks.
|
||||
|
||||
### Contiv
|
||||
|
||||
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases.
|
||||
|
||||
### Contrail / Tungsten Fabric
|
||||
|
||||
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
|
||||
|
|
|
|||
|
|
@ -5,8 +5,50 @@ description: >
|
|||
Concepts and resources behind networking in Kubernetes.
|
||||
---
|
||||
|
||||
## The Kubernetes network model
|
||||
|
||||
Every [`Pod`](/docs/concepts/workloads/pods/) gets its own IP address.
|
||||
This means you do not need to explicitly create links between `Pods` and you
|
||||
almost never need to deal with mapping container ports to host ports.
|
||||
This creates a clean, backwards-compatible model where `Pods` can be treated
|
||||
much like VMs or physical hosts from the perspectives of port allocation,
|
||||
naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing), application configuration,
|
||||
and migration.
|
||||
|
||||
Kubernetes imposes the following fundamental requirements on any networking
|
||||
implementation (barring any intentional network segmentation policies):
|
||||
|
||||
* pods on a [node](/docs/concepts/architecture/nodes/) can communicate with all pods on all nodes without NAT
|
||||
* agents on a node (e.g. system daemons, kubelet) can communicate with all
|
||||
pods on that node
|
||||
|
||||
Note: For those platforms that support `Pods` running in the host network (e.g.
|
||||
Linux):
|
||||
|
||||
* pods in the host network of a node can communicate with all pods on all
|
||||
nodes without NAT
|
||||
|
||||
This model is not only less complex overall, but it is principally compatible
|
||||
with the desire for Kubernetes to enable low-friction porting of apps from VMs
|
||||
to containers. If your job previously ran in a VM, your VM had an IP and could
|
||||
talk to other VMs in your project. This is the same basic model.
|
||||
|
||||
Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod`
|
||||
share their network namespaces - including their IP address and MAC address.
|
||||
This means that containers within a `Pod` can all reach each other's ports on
|
||||
`localhost`. This also means that containers within a `Pod` must coordinate port
|
||||
usage, but this is no different from processes in a VM. This is called the
|
||||
"IP-per-pod" model.
|
||||
|
||||
How this is implemented is a detail of the particular container runtime in use.
|
||||
|
||||
It is possible to request ports on the `Node` itself which forward to your `Pod`
|
||||
(called host ports), but this is a very niche operation. How that forwarding is
|
||||
implemented is also a detail of the container runtime. The `Pod` itself is
|
||||
blind to the existence or non-existence of host ports.
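
As a rough illustration (the Pod name, image, and port numbers are placeholders), a host port is requested per container port in the Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example   # illustrative name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080   # traffic to <node-ip>:8080 is forwarded to port 80 in this Pod
```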
|
||||
|
||||
Kubernetes networking addresses four concerns:
|
||||
- Containers within a Pod use networking to communicate via loopback.
|
||||
- Containers within a Pod [use networking to communicate](/docs/concepts/services-networking/dns-pod-service/) via loopback.
|
||||
- Cluster networking provides communication between different Pods.
|
||||
- The Service resource lets you expose an application running in Pods to be reachable from outside your cluster.
|
||||
- You can also use Services to publish services only for consumption inside your cluster.
|
||||
- The [Service resource](/docs/concepts/services-networking/service/) lets you [expose an application running in Pods](/docs/concepts/services-networking/connect-applications-service/) to be reachable from outside your cluster.
|
||||
- You can also use Services to [publish services only for consumption inside your cluster](/docs/concepts/services-networking/service-traffic-policy/).
|
||||
|
|
|
|||
|
|
@ -39,6 +39,10 @@ request a particular class. Administrators set the name and other parameters
|
|||
of a class when first creating VolumeSnapshotClass objects, and the objects cannot
|
||||
be updated once they are created.
|
||||
|
||||
{{< note >}}
|
||||
Installation of the CRDs is the responsibility of the Kubernetes distribution. Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
|
||||
apiVersion: snapshot.storage.k8s.io/v1
|
||||
kind: VolumeSnapshotClass
|
||||
|
|
|
|||
|
|
@ -916,7 +916,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
|
|||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--pod-infra-container-image string Default: <code>k8s.gcr.io/pause:3.5</code></td>
|
||||
<td colspan="2">--pod-infra-container-image string Default: <code>k8s.gcr.io/pause:3.6</code></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">Specified image will not be pruned by the image garbage collector. When container-runtime is set to <code>docker</code>, all containers in each pod will use the network/IPC namespaces from this image. Other CRI implementations have their own configuration to set this image.</td>
|
||||
|
|
|
|||
|
|
@ -41,8 +41,6 @@ For control-plane nodes additional steps are performed:
|
|||
|
||||
1. Adding new local etcd member.
|
||||
|
||||
1. Adding this node to the ClusterStatus of the kubeadm cluster.
|
||||
|
||||
### Using join phases with kubeadm {#join-phases}
|
||||
|
||||
Kubeadm allows you to join a node to the cluster in phases using `kubeadm join phase`.
|
||||
|
|
|
|||
|
|
@ -16,8 +16,7 @@ Performs a best effort revert of changes made by `kubeadm init` or `kubeadm join
|
|||
|
||||
`kubeadm reset` is responsible for cleaning up a node local file system from files that were created using
|
||||
the `kubeadm init` or `kubeadm join` commands. For control-plane nodes `reset` also removes the local stacked
|
||||
etcd member of this node from the etcd cluster and also removes this node's information from the kubeadm
|
||||
`ClusterStatus` object. `ClusterStatus` is a kubeadm managed Kubernetes API object that holds a list of kube-apiserver endpoints.
|
||||
etcd member of this node from the etcd cluster.
|
||||
|
||||
`kubeadm reset phase` can be used to execute the separate phases of the above workflow.
|
||||
To skip a list of phases you can use the `--skip-phases` flag, which works in a similar way to
|
||||
|
|
|
|||
|
|
@ -43,7 +43,9 @@ similar to the following example:
|
|||
kubeadm init --pod-network-cidr=10.244.0.0/16,2001:db8:42:0::/56 --service-cidr=10.96.0.0/16,2001:db8:42:1::/112
|
||||
```
|
||||
|
||||
To make things clearer, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the primary dual-stack control plane node.
|
||||
To make things clearer, here is an example kubeadm
|
||||
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
`kubeadm-config.yaml` for the primary dual-stack control plane node.
|
||||
|
||||
```yaml
|
||||
---
|
||||
|
|
@ -81,7 +83,8 @@ The `--apiserver-advertise-address` flag does not support dual-stack.
|
|||
|
||||
Before joining a node, make sure that the node has an IPv6 routable network interface and allows IPv6 forwarding.
|
||||
|
||||
Here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining a worker node to the cluster.
|
||||
Here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
`kubeadm-config.yaml` for joining a worker node to the cluster.
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
|
|
@ -98,7 +101,9 @@ nodeRegistration:
|
|||
node-ip: 10.100.0.3,fd00:1:2:3::3
|
||||
```
|
||||
|
||||
Also, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining another control plane node to the cluster.
|
||||
Also, here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
`kubeadm-config.yaml` for joining another control plane node to the cluster.
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
kind: JoinConfiguration
|
||||
|
|
@ -132,7 +137,9 @@ Dual-stack support doesn't mean that you need to use dual-stack addressing.
|
|||
You can deploy a single-stack cluster that has the dual-stack networking feature enabled.
|
||||
{{< /note >}}
|
||||
|
||||
To make things more clear, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the single-stack control plane node.
|
||||
To make things more clear, here is an example kubeadm
|
||||
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
`kubeadm-config.yaml` for the single-stack control plane node.
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
|
|
|
|||
|
|
@ -775,9 +775,9 @@ SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes
|
|||
nssm start flanneld
|
||||
|
||||
# Register kubelet.exe
|
||||
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/oss/kubernetes/pause:1.4.1
|
||||
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/oss/kubernetes/pause:3.6
|
||||
nssm install kubelet C:\k\kubelet.exe
|
||||
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/oss/kubernetes/pause:1.4.1 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
|
||||
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/oss/kubernetes/pause:3.6 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
|
||||
nssm set kubelet AppDirectory C:\k
|
||||
nssm start kubelet
|
||||
|
||||
|
|
@ -923,7 +923,7 @@ SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes
|
|||
|
||||
1. `kubectl port-forward` fails with "unable to do port forwarding: wincat not found"
|
||||
|
||||
This was implemented in Kubernetes 1.15 by including `wincat.exe` in the pause infrastructure container `mcr.microsoft.com/oss/kubernetes/pause:1.4.1`. Be sure to use a supported version of Kubernetes.
|
||||
This was implemented in Kubernetes 1.15 by including `wincat.exe` in the pause infrastructure container `mcr.microsoft.com/oss/kubernetes/pause:3.6`. Be sure to use a supported version of Kubernetes.
|
||||
If you would like to build your own pause infrastructure container be sure to include [wincat](https://github.com/kubernetes/kubernetes/tree/master/build/pause/windows/wincat).
|
||||
|
||||
1. My Kubernetes installation is failing because my Windows Server node is behind a proxy
|
||||
|
|
|
|||
|
|
@ -214,7 +214,7 @@ In each case, the credentials of the pod are used to communicate securely with t
|
|||
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/access-application-cluster/access-cluster/)
|
||||
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/administer-cluster/access-cluster-services/)
|
||||
|
||||
## Requesting redirects
|
||||
|
||||
|
|
|
|||
|
|
@ -219,7 +219,7 @@ allocated resources, events and pods running on the node.
|
|||
|
||||
Shows all applications running in the selected namespace.
|
||||
The view lists applications by workload kind (for example: Deployments, ReplicaSets, StatefulSets).
|
||||
and each workload kind can be viewed separately.
|
||||
Each workload kind can be viewed separately.
|
||||
The lists summarize actionable information about the workloads,
|
||||
such as the number of ready pods for a ReplicaSet or current memory usage for a Pod.
|
||||
|
||||
|
|
|
|||
|
|
@ -248,7 +248,7 @@ To try the gRPC liveness check, create a Pod using the command below.
|
|||
In the example below, the etcd pod is configured to use gRPC liveness probe.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/pods/probe/content/en/examples/pods/probe/grpc-liveness.yaml
|
||||
kubectl apply -f https://k8s.io/examples/pods/probe/grpc-liveness.yaml
|
||||
```
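
For reference, the liveness probe portion of that manifest looks roughly like the following sketch (the image tag and etcd flags are assumptions; the linked `grpc-liveness.yaml` is the authoritative version). etcd serves the standard gRPC health checking service on its client port, which is what the kubelet probes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd-with-grpc
spec:
  containers:
  - name: etcd
    image: k8s.gcr.io/etcd:3.5.1-0   # assumed tag; check the linked manifest
    command: ["/usr/local/bin/etcd", "--data-dir", "/var/lib/etcd",
              "--listen-client-urls", "http://0.0.0.0:2379",
              "--advertise-client-urls", "http://127.0.0.1:2379",
              "--log-level", "debug"]
    ports:
    - containerPort: 2379
    livenessProbe:
      grpc:
        port: 2379        # the kubelet sends gRPC health-check requests to this port
      initialDelaySeconds: 10
```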
|
||||
|
||||
After 15 seconds, view Pod events to verify that the liveness check has not failed:
|
||||
|
|
|
|||
|
|
@ -36,8 +36,7 @@ If you define args, but do not define a command, the default command is used
|
|||
with your new arguments.
|
||||
|
||||
{{< note >}}
|
||||
The `command` field corresponds to `entrypoint` in some container
|
||||
runtimes. Refer to the [Notes](#notes) below.
|
||||
The `command` field corresponds to `entrypoint` in some container runtimes.
|
||||
{{< /note >}}
|
||||
|
||||
In this exercise, you create a Pod that runs one container. The configuration
|
||||
|
|
@ -111,50 +110,9 @@ command: ["/bin/sh"]
|
|||
args: ["-c", "while true; do echo hello; sleep 10;done"]
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
This table summarizes the field names used by Docker and Kubernetes.
|
||||
|
||||
| Description | Docker field name | Kubernetes field name |
|
||||
|----------------------------------------|------------------------|-----------------------|
|
||||
| The command run by the container | Entrypoint | command |
|
||||
| The arguments passed to the command | Cmd | args |
|
||||
|
||||
When you override the default Entrypoint and Cmd, these rules apply:
|
||||
|
||||
* If you do not supply `command` or `args` for a Container, the defaults defined
|
||||
in the Docker image are used.
|
||||
|
||||
* If you supply a `command` but no `args` for a Container, only the supplied
|
||||
`command` is used. The default EntryPoint and the default Cmd defined in the Docker
|
||||
image are ignored.
|
||||
|
||||
* If you supply only `args` for a Container, the default Entrypoint defined in
|
||||
the Docker image is run with the `args` that you supplied.
|
||||
|
||||
* If you supply a `command` and `args`, the default Entrypoint and the default
|
||||
Cmd defined in the Docker image are ignored. Your `command` is run with your
|
||||
`args`.
|
||||
|
||||
Here are some examples:
|
||||
|
||||
| Image Entrypoint | Image Cmd | Container command | Container args | Command run |
|
||||
|--------------------|------------------|---------------------|--------------------|------------------|
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` |
|
||||
| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` |
|
||||
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Learn more about [configuring pods and containers](/docs/tasks/).
|
||||
* Learn more about [running commands in a container](/docs/tasks/debug-application-cluster/get-shell-running-container/).
|
||||
* See [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core).
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -86,12 +86,20 @@ metadata:
|
|||
name: example-configmap-1-8mbdf7882g
|
||||
```
|
||||
|
||||
To generate a ConfigMap from an env file, add an entry to the `envs` list in `configMapGenerator`. Here is an example of generating a ConfigMap with a data item from a `.env` file:
|
||||
To generate a ConfigMap from an env file, add an entry to the `envs` list in `configMapGenerator`. This can also be used to set values from local environment variables by omitting the `=` and the value.
|
||||
|
||||
{{< note >}}
|
||||
It's recommended to use the local environment variable population functionality sparingly - an overlay with a patch is often more maintainable. Setting values from the environment may be useful when they cannot easily be predicted, such as a git SHA.
|
||||
{{< /note >}}
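
For orientation, the corresponding `configMapGenerator` entry in `kustomization.yaml` looks roughly like this minimal sketch (the file name is illustrative):

```yaml
configMapGenerator:
- name: example-configmap-1
  envs:
  - .env   # FOO=Bar becomes a data item; a bare BAZ line takes its value from $BAZ
```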
|
||||
|
||||
Here is an example of generating a ConfigMap with a data item from a `.env` file:
|
||||
|
||||
```shell
|
||||
# Create a .env file
|
||||
# BAZ will be populated from the local environment variable $BAZ
|
||||
cat <<EOF >.env
|
||||
FOO=Bar
|
||||
BAZ
|
||||
EOF
|
||||
|
||||
cat <<EOF >./kustomization.yaml
|
||||
|
|
@ -105,7 +113,7 @@ EOF
|
|||
The generated ConfigMap can be examined with the following command:
|
||||
|
||||
```shell
|
||||
kubectl kustomize ./
|
||||
BAZ=Qux kubectl kustomize ./
|
||||
```
|
||||
|
||||
The generated ConfigMap is:
|
||||
|
|
@ -113,10 +121,11 @@ The generated ConfigMap is:
|
|||
```yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
BAZ: Qux
|
||||
FOO: Bar
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: example-configmap-1-42cfbf598f
|
||||
name: example-configmap-1-892ghb99c8
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
|
|
|||
|
|
@ -342,17 +342,16 @@ syscalls. Here seccomp has been instructed to error on any syscall by setting
|
|||
ability to do anything meaningful. What you really want is to give workloads
|
||||
only the privileges they need.
|
||||
|
||||
Clean up that Pod and Service before moving to the next section:
|
||||
Clean up that Pod before moving to the next section:
|
||||
|
||||
```shell
|
||||
kubectl delete service violation-pod --wait
|
||||
kubectl delete pod violation-pod --wait --now
|
||||
```
|
||||
|
||||
## Create Pod with seccomp profile that only allows necessary syscalls
|
||||
|
||||
If you take a look at the `fine-pod.json`, you will notice some of the syscalls
|
||||
seen in the first example where the profile set `"defaultAction":
|
||||
If you take a look at the `fine-grained.json` profile, you will notice some of the syscalls
|
||||
seen in syslog of the first example where the profile set `"defaultAction":
|
||||
"SCMP_ACT_LOG"`. Now the profile is setting `"defaultAction": "SCMP_ACT_ERRNO"`,
|
||||
but explicitly allowing a set of syscalls in the `"action": "SCMP_ACT_ALLOW"`
|
||||
block. Ideally, the container will run successfully and you will see no messages
|
||||
|
|
|
|||
|
|
@ -304,8 +304,8 @@ following:
|
|||
|
||||
## Clean up
|
||||
|
||||
Run `kind delete cluster -name psa-with-cluster-pss` and
|
||||
`kind delete cluster -name psa-wo-cluster-pss` to delete the clusters you
|
||||
Run `kind delete cluster --name psa-with-cluster-pss` and
|
||||
`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you
|
||||
created.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
|||
|
|
@ -90,7 +90,7 @@ releases may also occur in between these.
|
|||
|
||||
End of Life for **1.23** is **2023-02-28**.
|
||||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|
||||
|---------------|----------------------|-------------|------|
|
||||
| 1.23.2 | 2022-01-14 | 2022-01-19 | |
|
||||
| 1.23.1 | 2021-12-14 | 2021-12-16 | |
|
||||
|
|
@ -101,7 +101,7 @@ End of Life for **1.23** is **2023-02-28**.
|
|||
|
||||
End of Life for **1.22** is **2022-10-28**
|
||||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|
||||
|---------------|----------------------|-------------|------|
|
||||
| 1.22.6 | 2022-01-14 | 2022-01-19 | |
|
||||
| 1.22.5 | 2021-12-10 | 2021-12-15 | |
|
||||
|
|
@ -116,7 +116,7 @@ End of Life for **1.22** is **2022-10-28**
|
|||
|
||||
End of Life for **1.21** is **2022-06-28**
|
||||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|
||||
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
|
||||
| 1.21.9 | 2022-01-14 | 2022-01-19 | |
|
||||
| 1.21.8 | 2021-12-10 | 2021-12-15 | |
|
||||
|
|
@ -134,7 +134,7 @@ End of Life for **1.21** is **2022-06-28**
|
|||
|
||||
End of Life for **1.20** is **2022-02-28**
|
||||
|
||||
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|
||||
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|
||||
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
|
||||
| 1.20.15 | 2022-01-14 | 2022-01-19 | |
|
||||
| 1.20.14 | 2021-12-10 | 2021-12-15 | |
|
||||
|
|
@ -157,7 +157,7 @@ End of Life for **1.20** is **2022-02-28**
|
|||
|
||||
These releases are no longer supported.
|
||||
|
||||
| MINOR VERSION | FINAL PATCH RELEASE | EOL DATE | NOTE |
|
||||
| Minor Version | Final Patch Release | EOL Date | Note |
|
||||
| ------------- | ------------------- | ---------- | ---------------------------------------------------------------------- |
|
||||
| 1.19 | 1.19.16 | 2021-10-28 | |
|
||||
| 1.18 | 1.18.20 | 2021-06-18 | Created to resolve regression introduced in 1.18.19 |
|
||||
|
|
|
|||
|
|
@ -292,12 +292,6 @@ Nuage è stato progettato pensando alle applicazioni e semplifica la dichiarazio
|
|||
applicazioni. Il motore di analisi in tempo reale della piattaforma consente la visibilità e il monitoraggio della
|
||||
sicurezza per le applicazioni Kubernetes.
|
||||
|
||||
### OpenVSwitch
|
||||
|
||||
[OpenVSwitch](https://www.openvswitch.org/) è un po 'più maturo ma anche
|
||||
modo complicato per costruire una rete di sovrapposizione. Questo è approvato da molti dei
|
||||
"Grandi negozi" per il networking.
|
||||
|
||||
### OVN (Apri rete virtuale)
|
||||
|
||||
OVN è una soluzione di virtualizzazione della rete opensource sviluppata da
|
||||
|
|
|
|||
File diff suppressed because it is too large
|
|
@ -2,7 +2,7 @@
|
|||
title: ConfigMap
|
||||
id: configmap
|
||||
date: 2021-08-24
|
||||
full_link: /docs/concepts/configuration/configmap
|
||||
full_link: /pt-br/docs/concepts/configuration/configmap
|
||||
short_description: >
|
||||
Um objeto da API usado para armazenar dados não-confidenciais em pares chave-valor. Pode ser consumido como variáveis de ambiente, argumentos de linha de comando, ou arquivos de configuração em um volume.
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: Variáveis de Ambiente de Contêineres
|
||||
id: container-env-variables
|
||||
date: 2021-11-20
|
||||
full_link: /pt-br/docs/concepts/containers/container-environment/
|
||||
short_description: >
|
||||
Variáveis de ambiente de contêineres são pares nome=valor que trazem informações úteis para os contêineres rodando dentro de um Pod.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
Variáveis de ambiente de contêineres são pares nome=valor que trazem informações úteis para os contêineres rodando dentro de um {{< glossary_tooltip text="pod" term_id="Pod" >}}
|
||||
|
||||
<!--more-->
|
||||
|
||||
Variáveis de ambiente de contêineres fornecem informações requeridas pela aplicação conteinerizada, junto com informações sobre recursos importantes para o {{< glossary_tooltip text="contêiner" term_id="container" >}}. Por exemplo, detalhes do sistema de arquivos, informações sobre o contêiner, e outros recursos do cluster, como endpoints de serviços.
|
||||
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: Contêiner
|
||||
id: container
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/containers/
|
||||
short_description: >
|
||||
Uma imagem executável leve e portável que contém software e todas as suas dependências.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
- workload
|
||||
---
|
||||
Uma imagem executável leve e portável que contém software e todas as suas dependências.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Contêineres desacoplam aplicações da infraestrutura da máquina em que estas rodam para tornar a instalação mais fácil em diferentes ambientes de nuvem e de
|
||||
sistemas operacionais, e para facilitar o escalonamento das aplicações.
|
||||
|
|
@ -2,7 +2,7 @@
|
|||
title: Secret
|
||||
id: secret
|
||||
date: 2021-08-24
|
||||
full_link: /docs/concepts/configuration/secret/
|
||||
full_link: /pt-br/docs/concepts/configuration/secret/
|
||||
short_description: >
|
||||
Armazena dados sensíveis, como senhas, tokens OAuth e chaves SSH.
|
||||
|
||||
|
|
|
|||
|
|
@ -119,7 +119,7 @@ spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
|
|||
下面是一个示例配置,它为万分之一的请求记录 spans,并使用了默认的 OpenTelemetry 端口。
|
||||
|
||||
```yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1alpha1
|
||||
apiVersion: apiserver.config.k8s.io/v1beta1
|
||||
kind: TracingConfiguration
|
||||
# default value
|
||||
#endpoint: localhost:4317
|
||||
|
|
@ -128,10 +128,11 @@ samplingRatePerMillion: 100
|
|||
|
||||
<!--
|
||||
For more information about the `TracingConfiguration` struct, see
|
||||
[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).
|
||||
[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).
|
||||
-->
|
||||
|
||||
有关 TracingConfiguration 结构体的更多信息,请参阅
|
||||
[API 服务器配置 API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration)。
|
||||
[API 服务器配置 API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration)。
|
||||
|
||||
<!--
|
||||
## Stability
|
||||
|
|
|
|||
|
|
@ -54,9 +54,7 @@ built-in automation from the core of Kubernetes. You can use Kubernetes
|
|||
to automate deploying and running workloads, *and* you can automate how
|
||||
Kubernetes does that.
|
||||
|
||||
Kubernetes' {{< glossary_tooltip text="controllers" term_id="controller" >}}
|
||||
concept lets you extend the cluster's behaviour without modifying the code
|
||||
of Kubernetes itself.
|
||||
Kubernetes' {{< glossary_tooltip text="operator pattern" term_id="operator-pattern" >}} concept lets you extend the cluster's behaviour without modifying the code of Kubernetes itself by linking {{< glossary_tooltip text="controllers" term_id="controller" >}} to one or more custom resources.
|
||||
Operators are clients of the Kubernetes API that act as controllers for
|
||||
a [Custom Resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
|
||||
-->
|
||||
|
|
@ -65,10 +63,11 @@ a [Custom Resource](/docs/concepts/extend-kubernetes/api-extension/custom-resour
|
|||
Kubernetes 为自动化而生。无需任何修改,你即可以从 Kubernetes 核心中获得许多内置的自动化功能。
|
||||
你可以使用 Kubernetes 自动化部署和运行工作负载, *甚至* 可以自动化 Kubernetes 自身。
|
||||
|
||||
Kubernetes 的 {{< glossary_tooltip text="Operator 模式" term_id="operator-pattern" >}}概念
|
||||
使你无需修改 Kubernetes 自身的代码,通过把定制控制器关联到一个以上的定制资源上,即可以扩展集群的行为。
|
||||
Kubernetes 的 {{< glossary_tooltip text="Operator 模式" term_id="operator-pattern" >}}概念允许你在不修改
|
||||
Kubernetes 自身代码的情况下,通过为一个或多个自定义资源关联{{< glossary_tooltip text="控制器" term_id="controller" >}}
|
||||
来扩展集群的能力。
|
||||
Operator 是 Kubernetes API 的客户端,充当
|
||||
[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
的控制器。
|
||||
|
||||
<!--
|
||||
|
|
|
|||
|
|
@ -56,7 +56,7 @@ The application can simply use it as a service.
|
|||
|
||||
Service Catalog uses the [Open service broker API](https://github.com/openservicebrokerapi/servicebroker) to communicate with service brokers, acting as an intermediary for the Kubernetes API Server to negotiate the initial provisioning and retrieve the credentials necessary for the application to use a managed service.
|
||||
|
||||
It is implemented as an extension API server and a controller, using etcd for storage. It also uses the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) available in Kubernetes 1.7+ to present its API.
|
||||
It is implemented using a [CRDs-based](/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) architecture.
|
||||
|
||||

|
||||
-->
|
||||
|
|
@ -66,10 +66,8 @@ It is implemented as an extension API server and a controller, using etcd for st
|
|||
与服务代理进行通信,并作为 Kubernetes API 服务器的中介,以便协商启动部署和获取
|
||||
应用程序使用托管服务时必须的凭据。
|
||||
|
||||
服务目录实现为一个扩展 API 服务器和一个控制器,使用 Etcd 提供存储。
|
||||
它还使用了 Kubernetes 1.7 之后版本中提供的
|
||||
[聚合层](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
|
||||
来呈现其 API。
|
||||
它是[基于 CRDs](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources)
|
||||
架构实现的。
|
||||
|
||||

|
||||
|
||||
|
|
|
|||
|
|
@ -355,7 +355,7 @@ In each case, the credentials of the pod are used to communicate securely with t
|
|||
<!--
|
||||
## Accessing services running on the cluster
|
||||
|
||||
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/access-application-cluster/access-cluster/)
|
||||
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/administer-cluster/access-cluster-services/)
|
||||
-->
|
||||
|
||||
## 访问集群上运行的服务 {#accessing-services-running-on-the-cluster}
|
||||
|
|
|
|||
|
|
@ -53,34 +53,43 @@ dependency on Docker:
|
|||
这才是判定你是否依赖于 Docker 的方法。
|
||||
|
||||
<!--
|
||||
1. Make sure no privileged Pods execute Docker commands.
|
||||
2. Check that scripts and apps running on nodes outside of Kubernetes
|
||||
1. Make sure no privileged Pods execute Docker commands (like `docker ps`),
|
||||
restart the Docker service (commands such as `systemctl restart docker.service`),
|
||||
or modify Docker-specific files such as `/etc/docker/daemon.json`.
|
||||
1. Check for any private registries or image mirror settings in the Docker
|
||||
configuration file (like `/etc/docker/daemon.json`). Those typically need to
|
||||
be reconfigured for another container runtime.
|
||||
1. Check that scripts and apps running on nodes outside of your Kubernetes
|
||||
infrastructure do not execute Docker commands. It might be:
|
||||
- SSH to nodes to troubleshoot;
|
||||
- Node startup scripts;
|
||||
- Monitoring and security agents installed on nodes directly.
|
||||
3. Third-party tools that perform above mentioned privileged operations. See
|
||||
1. Third-party tools that perform above mentioned privileged operations. See
|
||||
[Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
|
||||
for more information.
|
||||
4. Make sure there is no indirect dependencies on dockershim behavior.
|
||||
1. Make sure there is no indirect dependencies on dockershim behavior.
|
||||
This is an edge case and unlikely to affect your application. Some tooling may be configured
|
||||
to react to Docker-specific behaviors, for example, raise alert on specific metrics or search for
|
||||
a specific log message as part of troubleshooting instructions.
|
||||
If you have such tooling configured, test the behavior on test
|
||||
cluster before migration.
|
||||
-->
|
||||
1. 确认没有特权 Pod 执行 docker 命令。
|
||||
2. 检查 Kubernetes 基础架构外部节点上的脚本和应用,确认它们没有执行 Docker 命令。可能的命令有:
|
||||
-->
|
||||
1. 确认没有特权 Pod 执行 Docker 命令(如 `docker ps`)、重新启动 Docker
|
||||
服务(如 `systemctl restart docker.service`)或修改
|
||||
Docker 配置文件 `/etc/docker/daemon.json`。
|
||||
2. 检查 Docker 配置文件(如 `/etc/docker/daemon.json`)中容器镜像仓库的镜像(mirror)站点设置。
|
||||
这些配置通常需要针对不同容器运行时来重新设置。
|
||||
3. 检查确保在 Kubernetes 基础设施之外的节点上运行的脚本和应用程序没有执行Docker命令。
|
||||
可能的情况如:
|
||||
- SSH 到节点排查故障;
|
||||
- 节点启动脚本;
|
||||
- 直接安装在节点上的监视和安全代理。
|
||||
3. 检查执行了上述特权操作的第三方工具。详细操作请参考:
|
||||
[从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
|
||||
4. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
|
||||
- 直接安装在节点上的监控和安全代理。
|
||||
4. 检查执行上述特权操作的第三方工具。详细操作请参考:
|
||||
[从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
|
||||
5. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
|
||||
一些工具很可能被配置为使用了 Docker 特性,比如,基于特定指标发警报,或者在故障排查指令的一个环节中搜索特定的日志信息。
|
||||
如果你有此类配置的工具,需要在迁移之前,在测试集群上完成功能验证。
|
||||
|
||||
|
||||
<!--
|
||||
## Dependency on Docker explained {#role-of-dockershim}
|
||||
-->
|
||||
|
|
|
|||