diff --git a/README.md b/README.md index 7c4809287f..d4c13cb77c 100644 --- a/README.md +++ b/README.md @@ -4,37 +4,61 @@ This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute! -## Running the website locally using Hugo +# Using this repository -See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file. +You can run the website locally using Hugo, or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website. -Before building the site, clone the Kubernetes website repository: +## Prerequisites -```bash +To use this repository, you need the following installed locally: + +- [yarn](https://yarnpkg.com/) +- [npm](https://www.npmjs.com/) +- [Go](https://golang.org/) +- [Hugo](https://gohugo.io/) +- A container runtime, like [Docker](https://www.docker.com/). + +Before you start, install the dependencies. Clone the repository and navigate to the directory: + +``` git clone https://github.com/kubernetes/website.git cd website +``` + +The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following: + +``` +# install dependencies +yarn + +# pull in the Docsy submodule git submodule update --init --recursive --depth 1 ``` -**Note:** The Kubernetes website deploys the [Docsy Hugo theme](https://github.com/google/docsy#readme). -If you have not updated your website repository, the `website/themes/docsy` directory is empty. The site cannot build -without a local copy of the theme. +## Running the website using a container -Update the website theme: +To build the site in a container, run the following to build the container image and run it: -```bash -git submodule update --init --recursive --depth 1 ``` +make container-image +make container-serve +``` + +Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. + +## Running the website locally using Hugo + +Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file. To build and test the site locally, run: ```bash -hugo server --buildFuture +make serve ``` This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. -## Get involved with SIG Docs +# Get involved with SIG Docs Learn more about SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings). @@ -43,7 +67,7 @@ You can also reach the maintainers of this project at: - [Slack](https://kubernetes.slack.com/messages/sig-docs) - [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) -## Contributing to the docs +# Contributing to the docs You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. 
This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it. @@ -60,7 +84,7 @@ For more information about contributing to the Kubernetes documentation, see: * [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/) * [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/) -## Localization `README.md`'s +# Localization `README.md`'s | Language | Language | |---|---| @@ -72,10 +96,10 @@ For more information about contributing to the Kubernetes documentation, see: |[Italian](README-it.md)|[Ukrainian](README-uk.md)| |[Japanese](README-ja.md)|[Vietnamese](README-vi.md)| -## Code of conduct +# Code of conduct Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). -## Thank you! +# Thank you! Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation! \ No newline at end of file diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000000..2083d44cdf --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,22 @@ +# Security Policy + +## Security Announcements + +Join the [kubernetes-security-announce] group for security and vulnerability announcements. + +You can also subscribe to an RSS feed of the above using [this link][kubernetes-security-announce-rss]. + +## Reporting a Vulnerability + +Instructions for reporting a vulnerability can be found on the +[Kubernetes Security and Disclosure Information] page. + +## Supported Versions + +Information about supported Kubernetes versions can be found on the +[Kubernetes version and version skew support policy] page on the Kubernetes website. + +[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce +[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50 +[Kubernetes version and version skew support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions +[Kubernetes Security and Disclosure Information]: https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability diff --git a/config.toml b/config.toml index 00427abcf3..1ee8ae3101 100644 --- a/config.toml +++ b/config.toml @@ -231,7 +231,7 @@ no = 'Sorry to hear that. Please diff --git a/content/en/blog/_posts/2020-08-03-kubernetes-1-18-release-interview.md b/content/en/blog/_posts/2020-08-03-kubernetes-1-18-release-interview.md new file mode 100644 index 0000000000..a8e4e71736 --- /dev/null +++ b/content/en/blog/_posts/2020-08-03-kubernetes-1-18-release-interview.md @@ -0,0 +1,215 @@ +--- +layout: blog +title: "Physics, politics and Pull Requests: the Kubernetes 1.18 release interview" +date: 2020-08-03 +--- + +**Author**: Craig Box (Google) + +The start of the COVID-19 pandemic couldn't delay the release of Kubernetes 1.18, but unfortunately [a small bug](https://github.com/kubernetes/utils/issues/141) could — thankfully only by a day. This was the last cat that needed to be herded by 1.18 release lead [Jorge Alarcón](https://twitter.com/alejandrox135) before the [release on March 25](https://kubernetes.io/blog/2020/03/25/kubernetes-1-18-release-announcement/). 
+ +One of the best parts about co-hosting the weekly [Kubernetes Podcast from Google](https://kubernetespodcast.com/) is the conversations we have with the people who help bring Kubernetes releases together. [Jorge was our guest on episode 96](https://kubernetespodcast.com/episode/096-kubernetes-1.18/) back in March, and [just like last week](https://kubernetes.io/blog/2020/07/27/music-and-math-the-kubernetes-1.17-release-interview/) we are delighted to bring you the transcript of this interview. + +If you'd rather enjoy the "audiobook version", including another interview when 1.19 is released later this month, [subscribe to the show](https://kubernetespodcast.com/subscribe/) wherever you get your podcasts. + +In the last few weeks, we've talked to long-time Kubernetes contributors and SIG leads [David Oppenheimer](https://kubernetespodcast.com/episode/114-scheduling/), [David Ashpole](https://kubernetespodcast.com/episode/113-instrumentation-and-cadvisor/) and [Wojciech Tyczynski](https://kubernetespodcast.com/episode/111-scalability/). All are worth taking the dog for a longer walk to listen to! + +--- + +**ADAM GLICK: You're a former physicist. I have to ask, what kind of physics did you work on?** + +JORGE ALARCÓN: Back in my days of math and all that, I used to work in [computational biology](https://en.wikipedia.org/wiki/Computational_biology) and a little bit of high energy physics. Computational biology was, for the most part, what I spent most of my time on. And it was essentially exploring the big idea of we have the structure of proteins. We know what they're made of. Now, based on that structure, we want to be able to predict [how they're going to fold](https://en.wikipedia.org/wiki/Protein_folding) and how they're going to behave, which essentially translates into the whole idea of designing pharmaceuticals, designing vaccines, or anything that you can possibly think of that has any connection whatsoever to a living organism. + +**ADAM GLICK: That would seem to ladder itself well into maybe going to something like bioinformatics. Did you take a tour into that, or did you decide to go elsewhere directly?** + +JORGE ALARCÓN: It is related, and I worked a little bit with some people that did focus on bioinformatics on the field specifically, but I never took a detour into it. Really, my big idea with computational biology, to be honest, it wasn't even the biology. That's usually what sells it, what people are really interested in, because protein engineering, all the cool and amazing things that you can do. + +Which is definitely good, and I don't want to take away from it. But my big thing is because biology is such a real thing, it is amazingly complicated. And the math— the models that you have to design to study those systems, to be able to predict something that people can actually experiment and measure, it just captivated me. The level of complexity, the beauty, the mechanisms, all the structures that you see once you got through the math and look at things, it just kind of got to me. + +**ADAM GLICK: How did you go from that world into the world of Kubernetes?** + +JORGE ALARCÓN: That's both a really boring story and an interesting one. + +[LAUGHING] + +I did my thing with physics, and it was good. It was fun. But at some point, I wanted— working in academia— at least my feeling for it is that generally all the people that you're surrounded with are usually academics. Just another bunch of physics, a bunch of mathematicians. 
+ +But very seldom do you actually get the opportunity to take what you're working on and give it to someone else to use. Even with the mathematicians and physicists, the things that we're working on are super specialized, and you can probably find three, four, five people that can actually understand everything that you're saying. A lot of people are going to get the gist of it, but understanding the details, it's somewhat rare. + +One of the things that I absolutely love about tech, about software engineering, coding, all that, is how open and transparent everything is. You can write your library in Python, you can publish it, and suddenly the world is going to actually use it, actually consume it. And because normally, I've seen that it has a large avenue where you can work in something really complicated, you can communicate it, and people can actually go ahead and take it and run with it in their given direction. And that is kind of what happened. + +At some point, by pure accident and chance, I came across this group of people on the internet, and they were in the stages of making up this new group that's called [Data for Democracy](https://datafordemocracy.org/), a non-profit. And the whole idea was the internet, especially Twitter— that's how we congregated— Twitter, the internet. We have a ton of data scientists, people who work as software engineers, and the like. What if we all come together and try to solve some issues that actually affect the daily lives of people. And there were a ton of projects. Helping the ACLU gather data for something interesting that they were doing, gather data and analyze it for local governments— where do you have potholes, how much water is being consumed. + +Try to apply all the science that we knew, combined with all the code that we could write, and offer a good and digestible idea for people to say, OK, this makes sense, let's do something about it— policy, action, whatever. And I started working with this group, Data for Democracy— wonderful set of people. And the person who I believe we can blame for Data for Democracy— the one who got the idea and got it up and running, his name is Jonathan Morgan. And eventually, we got to work together. He started a startup, and I went to work with the startup. And that was essentially the thing that took me away from physics and into the world of software engineering— Data for Democracy, definitely. + +**ADAM GLICK: Were you using Kubernetes as part of that work there?** + +JORGE ALARCÓN: No, it was simple as it gets. You just try to get some data. You create a couple [IPython notebooks](https://ipython.org/), some setting up of really simple MySQL databases, and that was it. + +**ADAM GLICK: Where did you get started using Kubernetes? And was it before you started contributing to it and being a part, or did you decide to jump right in?** + +JORGE ALARCÓN: When I first started using Kubernetes, it was also on my first job. So there wasn't a lot of specific training in regards to software engineering or anything of the sort that I did before I actually started working as a software engineer. I just went from physicist to engineer. And in my days of physics, at least on the computer side, I was completely trained in the super old school system administrator, where you have your 10, 20 computers. You know physically where they are, and you have to connect the cables. 
+ +**ADAM GLICK: All pets— all pets all the time.** + +JORGE ALARCÓN: [LAUGHING] You have to have your huge Python, bash scripts, three, five major versions, all because doing an upgrade will break something really important and you have no idea how to work on it. And that was my training. That was the way that I learned how to do things. Those were the kind of things that I knew how to do. + +And when I got to this company— startup— we were pretty much starting from scratch. We were building a couple applications. We work testing them, we were deploying them on a couple of managed instances. But like everything, there was a lot of toil that we wanted to automate. The whole issue of, OK, after days of work, we finally managed to get this version of the application up and running in these machines. + +It's open to the internet. People can test it out. But it turns out that it is now two weeks behind the latest on all the master branches for this repo, so now we want to update. And we have to go through the process of bringing it back up, creating new machines, do that whole thing. And I had no idea what Kubernetes was, to be honest. My boss at the moment mentioned it to me like, hey, we should use Kubernetes because apparently, Kubernetes is something that might be able to help us here. And we did some— I want to call it research and development. + +It was actually just making— again, startup, small company, small team, so really me just playing around with Kubernetes trying to get it to work, trying to get it to run. I was so lost. I had no idea what I was doing— not enough. I didn't have an idea of how Kubernetes was supposed to help me. And at that point, I did the best Googling that I could manage. Didn't really find a lot of examples. Didn't find a lot of blog posts. It was early. + +**ADAM GLICK: What time frame was this?** + +JORGE ALARCÓN: Three, four years ago, so definitely not 1.13. That's the best guesstimate that I can give at this point. But I wasn't able to find any good examples, any tutorials. The only book that I was able to get my hands on was the one written by Joe Beda, Kelsey Hightower, and I forget the other author. But what is it? "[Kubernetes— Up and Running](](http://shop.oreilly.com/product/0636920223788.do))"? + +And in general, right now I use it as reference— it's really good. But as a beginner, I still was lost. They give all these amazing examples, they provide the applications, but I had no idea why someone might need a Pod, why someone might need a Deployment. So my last resort was to try and find someone who actually knew Kubernetes. + +By accident, during my eternal Googling, I actually found a link to the [Kubernetes Slack](http://slack.kubernetes.io/). I jumped into the Kubernetes Slack hoping that someone might be able to help me out. And that was my entry point into the Kubernetes community. I just kept on exploring the Slack, tried to see what people were talking about, what they were asking to try to make sense of it, and just kept on iterating. And at some point, I think I got the hang of it. + +**ADAM GLICK: What made you decide to be a release lead?** + +JORGE ALARCÓN: The answer to this is my answer to why I have been contributing to Kubernetes. I really just want to be able to help out the community. Kubernetes is something that I absolutely adore. + +Comparing Kubernetes to old school system administration, a handful of years ago, it took me like a week to create a node for an application to run. 
It took me months to get something that vaguely looked like an Ingress resource— just setting up the Nginx, and allowing someone else to actually use my application. And the fact that I could do all of that in five minutes, it really captivated me. Plus I've got to blame it on the physics. The whole idea with physics, I really like the patterns, and I really like the design of Kubernetes. + +Once I actually got the hang of it, I loved the idea of how everything was designed, and I just wanted to learn a lot more about it. And I wanted to help the contributors. I wanted to help the people who actually build it. I wanted to help maintain it, and help provide the information for new contributors or new users. So instead of taking months for them to be up and running, let's just chat about what your issue is, and let's try to get a fix within the next hour or so. + +**ADAM GLICK: You work for a stealth startup right now. Is it fair to assume that they're using Kubernetes?** + +JORGE ALARCÓN: Yes— + +[LAUGHING] + +—for everything. + +**ADAM GLICK: Are you able to say what [Searchable](https://www.searchable.ai/) does?** + +JORGE ALARCÓN: The thing that we are trying to build is kind of like a search engine for your documents. Usually, if people have a question, they jump on Google. And for the most part, you're going to be able to get a good answer. You can ask something really random, like 'what is the weight of an elephant?' + +Which, if you think about it, it's kind of random, but Google is going to give you an answer. And the thing that we are trying to build is something similar to that, but for files. So essentially, a search engine for your files. And most people, you have your local machine loaded up with— at least mine, I have a couple tens of gigabytes of different files. + +I have Google Drive. I have a lot of documents that live in my email and the like. So the idea is to kind of build a search engine that is going to be able to connect all of those pieces. And besides doing simple word searches— for example, 'Kubernetes interview', and bring me the documents that we're looking at with all the questions— I can also ask things like what issue did I find last week while testing Prometheus. And it's going to be able to read my files, like through natural language processing, understand it, and be able to give me an answer. + +**ADAM GLICK: It is a Google for your personal and non-public information, essentially?** + +JORGE ALARCÓN: Hopefully. + +**ADAM GLICK: Is the work that you do with Kubernetes as the release lead— is that part of your day job, or is that something that you're doing kind of nights and weekends separate from your day job?** + +JORGE ALARCÓN: Both. Strictly speaking, my day job is just keep working on the application, build the things that it needs, maintain the infrastructure, and all that. When I started working at the company— which by the way, the person who brought me into the company was also someone that I met from my days in Data for Democracy— we started talking about the work. + +I mentioned that I do a lot of work with the Kubernetes community and if it was OK that I continue doing it. And to my surprise, the answer was not only a yes, but yeah, you can do it during your day work. And at least for the time being, I just balance— I try to keep things organized. + +Some days I just focus on Kubernetes. Some mornings I do Kubernetes. And then afternoon, I do Searchable, vice-versa, or just go back and forth, and try to balance the work as much as possible. 
But being release lead, definitely, it is a lot, so nights and weekends. + +**ADAM GLICK: How much time does it take to be the release lead?** + +JORGE ALARCÓN: It varies, but probably, if I had to give an estimate, at the very least you have to be able to dedicate four hours most days. + +**ADAM GLICK: Four hours a day?** + +JORGE ALARCÓN: Yeah, most days. It varies a lot. For example, at the beginning of the release cycle, you don't need to put in that much work because essentially, you're just waiting and helping people get set up, and people are writing their [Kubernetes Enhancement Proposals](https://github.com/kubernetes/enhancements/tree/master/keps), they are implementing it, and you can answer some questions. It's relatively easy, but for the most part, a lot of the time the four hours go into talking with people, just making sure that, hey, are people actually writing their enhancements, do we have all the enhancements that we want. And most of those fours hours, going around, chatting with people, and making sure that things are being done. And if, for some reason, someone needs help, just directing them to the right place to get their answer. + +**ADAM GLICK: What does Searchable get out of you doing this work?** + +JORGE ALARCÓN: Physically, nothing. The thing that we're striving for is to give back to the community. My manager/boss/homeslice— I told him I was going to call him my homeslice— both of us have experience working in open source. At some point, he was also working on a project that I'm probably going to mispronounce, but Mahout with Apache. + +And he also has had this experience. And both of us have this general idea and strive to build something for Searchable that's going to be useful for people, but also build knowledge, build guides, build applications that are going to be useful for the community. And at least one of the things that I was able to do right now is be the lead for the Kubernetes team. And this is a way of giving back to the community. We're using Kubernetes to run our things, so let's try to balance how things work. + +**ADAM GLICK: Lachlan Evenson was the release lead on 1.16 as well as [our guest back in episode 72](https://kubernetespodcast.com/episode/072-kubernetes-1.16/), and he's returned on this release as the [emeritus advisor](https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks/emeritus-adviser). What did you learn from him?** + +JORGE ALARCÓN: Oh, everything. And it actually all started back on 1.16. So like you said, an amazing person— he's an amazing individual. And it's truly an opportunity to be able to work with him. During 1.16, I was the CI Signal lead, and Lachie is very hands on. + +He's not the kind of person to just give you a list of things and say, do them. He actually comes to you, has a conversation, and he works with you more than anything. And when we were working together on 1.16, I got to learn a lot from him in terms of CI Signal. And especially because we talked about everything just to make sure that 1.16 was ready to go, I also got to pick up a couple of things that a release lead has to know, has to be able to do, has to work on to get a release out the door. + +And now, during this release, there is a lot of information that's really useful, and there's a lot of advice and general wisdom that comes in handy. For most of the things that impact a lot of things, we are always in communication. Like, I'm doing this, you're doing that, advice. 
And essentially, every single thing that we do is pretty much a code review. You do it, and then you wait for someone else to give you comments. And that's been a strong part of our relationship working. + +**ADAM GLICK: What would you say the theme for this release is?** + +JORGE ALARCÓN: I think one of the themes is "fit and finish". There are a lot of features that we are bumping from alpha to beta, from beta to stable. And we want to make sure that people have a good user experience. Operators and developers alike just want to get rid of as many bugs as possible, improve the flow of things. + +But the other really cool thing is we have about an equal distribution between alpha, beta, and stable. We are also bringing up a lot of new features. So besides making Kubernetes more stable for all the users that are already using it, we are working on bringing up new things that people can try out for the next release and see how it goes in the future. + +**ADAM GLICK: Did you have a release team mascot?** + +JORGE ALARCÓN: Kind of. + +**ADAM GLICK: Who/what was it?** + +JORGE ALARCÓN: [LAUGHING] I say kind of because I'm using the mascot in the [logo](https://twitter.com/KubernetesPod/status/1242953121380392963), and the logo is inspired by the Large Hadron Collider. + +**ADAM GLICK: Oh, fantastic.** + +JORGE ALARCÓN: Being the release lead, I really had to take a chance on this opportunity to use the LHC as the mascot. + +**ADAM GLICK: We've had [some of the folks from the LHC on the show](https://kubernetespodcast.com/episode/062-cern/), and I know they listen, and they will be thrilled with that.** + +JORGE ALARCÓN: [LAUGHING] Hopefully, they like the logo. + +**ADAM GLICK: If you look at this release, what part of this release, what thing that has been added to it are you personally most excited about?** + +JORGE ALARCÓN: Like a parent can't choose which child is his or her favorite, you really can't choose a specific thing. + +**ADAM GLICK: We have been following online and in the issues an enhancement that's called [sidecar containers](https://github.com/kubernetes/enhancements/issues/753). You'd be able to mark the order of containers starting in a pod. Tim Hockin posted [a long comment on behalf of a number of SIG Node contributors](https://github.com/kubernetes/enhancements/issues/753#issuecomment-597372056) citing social, procedural, and technical concerns about what's going on with that— in particular, that it moved out of 1.18 and is now moving to 1.19. Did you have any thoughts on that?** + +JORGE ALARCÓN: The sidecar enhancement has definitely been an interesting one. First off, thank you very much to Joseph Irving, the author of the KEP. And thank you very much to Tim Hockin, who voiced out the point of view of the approvers, maintainers of SIG Node. And I guess a little bit of context before we move on is, in the Kubernetes community, we have contributors, we have reviewers, and we have approvers. + +Contributors are people who write PRs, who file issues, who troubleshoot issues. Reviewers are contributors who focus on one or multiple specific areas within the project, and then approvers are maintainers for the specific area, for one or multiple specific areas, of the project. So you can think of approvers as people who have write access in a repo or someplace within a repo. 
+ +The issue with the sidecar enhancement is that it has been deferred for multiple releases now, and that's been because there hasn't been a lot of collaboration between the KEP authors and the approvers for specific parts of the project. Something worthwhile to mention— and this was brought up during the original discussion— is this can obviously be frustrating for both contributors and for approvers. From the contributor's side of things, you are working on something. You are doing your best to make sure that it works. + +And to build something that's going to be used by people, both from the approver side of things and, I think, for the most part, every single person in the Kubernetes community, we are all really excited to see this project grow. We want to help improve it, and we love when new people come in and work on new enhancements, bug fixes, and the like. + +But one of the limitations is the day only has so many hours, and there are only so many things that we can work on at a time. So people prioritize in whatever way works best, and some things just fall behind. And a lot of the time, the things that fall behind are not because people don't want them to continue moving forward, but it's just a limited amount of resources, a limited amount of people. + +And I think this discussion around the sidecar enhancement proposal has been very useful, and it points us to the need for more standardized mentoring programs. This is something that multiple SIGs are working on. For example, SIG Contribex, SIG Cluster Lifecycle, SIG Release. The idea is to standardize some sort of mentoring experience so that we can better prepare new contributors to become reviewers and ultimately approvers. + +Because ultimately at the end of the day, if we have more people who are knowledgeable about Kubernetes, or even some specific area of Kubernetes, we can better distribute the load, and we can better collaborate on whatever new things come up. I think the sidecar enhancement has shown us mentoring is something worthwhile, and we need a lot more of it. Because as much work as we do, more things are going to continue popping in throughout the project. And the more people we have who are comfortable working in these really complicated areas of Kubernetes, the better off that we are going to be. + +**ADAM GLICK: Was there any talk of delaying 1.18 due to the current worldwide health situation?** + +JORGE ALARCÓN: We thought about it, and the plan was to just wait and see how people felt. Tried make sure that people were comfortable continuing to work and all the people were landing in new enhancements, or fixing tests, or members of the release team who were making sure that things were happening. We wanted to see that people were comfortable, that they could continue doing their job. And for a moment, I actually thought about delaying just outright— we're going to give it more time, and hopefully at some point, things are going to work out. + +But people just continue doing their amazing work. There was no delay. There was no hitch throughout the process. So at some point, I just figured we stay with the current timeline and see how we went. And at this point, things are more or less set. + +**ADAM GLICK: Amazing power of a distributed team.** + +JORGE ALARCÓN: Yeah, definitely. + +[LAUGHING] + +**ADAM GLICK: [Taylor Dolezal was announced as the 1.19 release lead](https://twitter.com/alejandrox135/status/1239629281766096898). 
Do you know how that choice was made, and by whom?** + +JORGE ALARCÓN: I actually got to choose the lead. The practice is the current lead for the release team is going to look at people and see, first off, who's interested and out of the people interested, who can do the job, who's comfortable enough with the release team, with the Kubernetes community at large who can actually commit the amount of hours throughout the next, hopefully, three months. + +And for one, I think Taylor has been part of my team. So there is the release team. Then the release team has multiple subgroups. One of those subgroups is actually just for me and my shadows. So for this release, it was mrbobbytables and Taylor. And Taylor volunteered to take over 1.19, and I'm sure that he will do an amazing job. + +**ADAM GLICK: I am as well. What advice will you give Taylor?** + +JORGE ALARCÓN: Over-communicate as much as possible. Normally, if you made it to the point that you are the lead for a release, or even the shadow for a release, you more or less are familiar with a lot of the work— CI Signal, enhancements, documentation, and the like. And a lot of people, if they know how to do their job, they might tell themselves, yeah, I could do it— no need to worry about it. I'm just going to go ahead and sign this PR, debug this test, whatever. + +But one of the interesting aspects is whenever we are actually working in a release, 50% of the work has to go into actually making the release happen. The other 50% of the work has to go into mentoring people, and making sure the newcomers, new members are able to learn everything that they need to learn to do your job, you being in the lead for a subgroup or the entire team. And whenever you actually see that things need to happen, just over-communicate. + +Try to provide the opportunity for someone else to do the work, and over-communicate with them as much as possible to make sure that they are learning whatever it is that they need to learn. If neither you or the other person knows what's going on, then I can over-communicate, so someone hopefully will see your messages and come to the rescue. That happens a lot. There's a lot of really nice and kind people who will come out and tell you how something works, help you fix it. + +**ADAM GLICK: If you were to sum up your experience running this release, what would it be?** + +JORGE ALARCÓN: It's been super fun and a little bit stressing, to be honest. Being the release lead is definitely amazing. You're kind of sitting at the center of Kubernetes. + +You not only see the people who are working on things— the things that are broken, and the users filling out issues, and saying what broke, and the like. But you also get the opportunity to work with a lot of people who do a lot of non-code related work. Docs is one of the most obvious things. There's a lot of work that goes into communications, contributor experience, public relations. + +And being connected, getting to talk with those people mostly every other day, it's really fun. It's a really good experience in terms of becoming a better contributor to the community, but also taking some of that knowledge home with you and applying it somewhere else. If you are a software engineer, if you are a project manager, whatever, it's amazing how much you can learn. + +**ADAM GLICK: I know the community likes to rotate around who are the release leads. 
But if you were given the opportunity to be a release lead for a future release of Kubernetes, would you do it again?** + +JORGE ALARCÓN: Yeah, it's a fun job. To be honest, it can be really stressing. Especially, as I mentioned, at some point, most of that work is just going to be talking with people, and talking requires a lot more thought and effort than just sitting down and thinking about things sometimes. And some of that can be really stressful. + +But the job itself, it is definitely fun. And at some distant point in the future, if for some reason it was a possibility, I will think about it. But definitely, as you mentioned, one thing that we try to do is cycle out, because I can have fun in it, and that's all good and nice. And hopefully I can help another release go out the door. But providing the opportunity for other people to learn I think is a lot more important than just being the lead itself. + +--- + +_[Jorge Alarcón](https://twitter.com/alejandrox135) is a site reliability engineer with Searchable AI and served as the Kubernetes 1.18 release team lead._ + +_You can find the [Kubernetes Podcast from Google](http://www.kubernetespodcast.com/) at [@KubernetesPod](https://twitter.com/KubernetesPod) on Twitter, and you can [subscribe](https://kubernetespodcast.com/subscribe/) so you never miss an episode._ \ No newline at end of file diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index 925f14d17a..8040213495 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -46,7 +46,7 @@ These connections terminate at the kubelet's HTTPS endpoint. By default, the api To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. -If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an +If that is not possible, use [SSH tunneling](#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an untrusted or public network. Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 516e4eb6d9..5482b074bc 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -23,8 +23,6 @@ The [components](/docs/concepts/overview/components/#node-components) on a node {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}. - - ## Management @@ -195,7 +193,7 @@ The node lifecycle controller automatically creates The scheduler takes the Node's taints into consideration when assigning a Pod to a Node. Pods can also have tolerations which let them tolerate a Node's taints. -See [Taint Nodes by Condition](/docs/concepts/configuration/taint-and-toleration/#taint-nodes-by-condition) +See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition) for more details. 
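For illustration, here is a minimal sketch of a Pod that tolerates the `node.kubernetes.io/unreachable` taint which the node lifecycle controller adds when a node becomes unreachable; the Pod name and container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-example      # placeholder name
spec:
  containers:
  - name: app                   # placeholder container
    image: k8s.gcr.io/pause:3.2
  tolerations:
  # Tolerate the unreachable taint for up to 300 seconds before eviction.
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300
```

The `tolerationSeconds` field bounds how long the Pod remains bound to the Node after the taint is added; omitting it would let the Pod tolerate the taint indefinitely.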
### Capacity and Allocatable {#capacity} @@ -339,6 +337,6 @@ for more information. * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core). * Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. -* Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/). +* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). * Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling). diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index c3b51f3acf..ec3f9c2f54 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -34,14 +34,14 @@ Before choosing a guide, here are some considerations: - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the latter, choose an actively-developed distro. Some distros only use binary releases, but offer a greater variety of choices. - - Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster. + - Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster. ## Managing a cluster * [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster. -* Learn how to [manage nodes](/docs/concepts/nodes/node/). +* Learn how to [manage nodes](/docs/concepts/architecture/nodes/). * Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters. @@ -59,14 +59,14 @@ Before choosing a guide, here are some considerations: * [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization. -* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters . +* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters . * [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs. 
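As a taste of the sysctl topic linked above, here is a minimal sketch of a Pod that sets a kernel parameter from the safe set through its security context; the Pod name and container image are placeholders, and which sysctls are allowed depends on how the cluster is configured:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example          # placeholder name
spec:
  securityContext:
    sysctls:
    # kernel.shm_rmid_forced is in the safe set, so it can be set
    # per Pod without extra kubelet configuration.
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: app                   # placeholder container
    image: k8s.gcr.io/pause:3.2
```

Unsafe sysctls, by contrast, have to be explicitly allowed on each node via the kubelet's `--allowed-unsafe-sysctls` flag.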
### Securing the kubelet * [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/) * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) - * [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/) + * [Kubelet authentication/authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) ## Optional Cluster Services diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 5b5110ec92..d2565d1e38 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -5,35 +5,30 @@ content_type: concept - Add-ons extend the functionality of Kubernetes. This page lists some of the available add-ons and links to their respective installation instructions. Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status. - - - ## Networking and Network Policy - * [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI. * [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy. * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins. * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave. -* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. -* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. +* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. 
+* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod. * [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes. * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring. -* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). +* [Romana](https://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. ## Service Discovery diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md index 8526ac830e..7b10760adf 100644 --- a/content/en/docs/concepts/cluster-administration/cloud-providers.md +++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md @@ -8,8 +8,6 @@ weight: 30 This page explains how to manage Kubernetes running on a specific cloud provider. - - ### kubeadm [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating kubernetes clusters. 
@@ -46,8 +44,10 @@ controllerManager: ``` The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines -for the [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and the -[kubelet](/docs/admin/kubelet/). The contents of the file specified in `--cloud-config` for each provider is documented below as well. +for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/), +[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the +[kubelet](/docs/reference/command-line-tools-reference/kubelet/). +The contents of the file specified in `--cloud-config` for each provider is documented below as well. For all external cloud providers, please follow the instructions on the individual repositories, which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=) @@ -94,7 +94,7 @@ Different settings can be applied to a load balancer service in AWS using _annot * `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix. * `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`. * `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used then a HTTP listener is used. -* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html) CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`. +* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html) CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`. * `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining. * `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout. * `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout. @@ -358,13 +358,10 @@ Kubernetes network plugin and should appear in the `[Route]` section of the the `extraroutes` extension then use `router-id` to specify a router to add routes to. 
The router chosen must span the private networks containing your cluster nodes (typically there is only one node network, and this value should be - the default router for the node network). This value is required to use [kubenet] + the default router for the node network). This value is required to use + [kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet) on OpenStack. -[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet - - - ## OVirt ### Node Name diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md index 5cdd070e0f..2380fa6a40 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -311,10 +311,12 @@ exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling important traffic, or find poorly-behaved workloads that may be harming system health. -* `apiserver_flowcontrol_rejected_requests_total` counts requests that - were rejected, grouped by the name of the assigned priority level, - the name of the assigned FlowSchema, and the reason for rejection. - The reason will be one of the following: +* `apiserver_flowcontrol_rejected_requests_total` is a counter vector + (cumulative since server start) of requests that were rejected, + broken down by the labels `flowSchema` (indicating the one that + matched the request), `priorityLevel` (indicating the one to which + the request was assigned), and `reason`. The `reason` label will + have one of the following values: * `queue-full`, indicating that too many requests were already queued, * `concurrency-limit`, indicating that the @@ -323,23 +325,72 @@ poorly-behaved workloads that may be harming system health. * `time-out`, indicating that the request was still in the queue when its queuing time limit expired. -* `apiserver_flowcontrol_dispatched_requests_total` counts requests - that began executing, grouped by the name of the assigned priority - level and the name of the assigned FlowSchema. +* `apiserver_flowcontrol_dispatched_requests_total` is a counter + vector (cumulative since server start) of requests that began + executing, broken down by the labels `flowSchema` (indicating the + one that matched the request) and `priorityLevel` (indicating the + one to which the request was assigned). -* `apiserver_flowcontrol_current_inqueue_requests` gives the - instantaneous total number of queued (not executing) requests, - grouped by priority level and FlowSchema. +* `apiserver_current_inqueue_requests` is a gauge vector of recent + high water marks of the number of queued requests, grouped by a + label named `request_kind` whose value is `mutating` or `readOnly`. + These high water marks describe the largest number seen in the one + second window most recently completed. These complement the older + `apiserver_current_inflight_requests` gauge vector that holds the + last window's high water mark of number of requests actively being + served. -* `apiserver_flowcontrol_current_executing_requests` gives the instantaneous - total number of executing requests, grouped by priority level and FlowSchema.
+* `apiserver_flowcontrol_read_vs_write_request_count_samples` is a + histogram vector of observations of the then-current number of + requests, broken down by the labels `phase` (which takes on the + values `waiting` and `executing`) and `request_kind` (which takes on + the values `mutating` and `readOnly`). The observations are made + periodically at a high rate. -* `apiserver_flowcontrol_request_queue_length_after_enqueue` gives a - histogram of queue lengths for the queues, grouped by priority level - and FlowSchema, as sampled by the enqueued requests. Each request - that gets queued contributes one sample to its histogram, reporting - the length of the queue just after the request was added. Note that - this produces different statistics than an unbiased survey would. +* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a + histogram vector of high or low water marks of the number of + requests broken down by the labels `phase` (which takes on the + values `waiting` and `executing`) and `request_kind` (which takes on + the values `mutating` and `readOnly`); the label `mark` takes on + values `high` and `low`. The water marks are accumulated over + windows bounded by the times when an observation was added to + `apiserver_flowcontrol_read_vs_write_request_count_samples`. These + water marks show the range of values that occurred between samples. + +* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector + holding the instantaneous number of queued (not executing) requests, + broken down by the labels `priorityLevel` and `flowSchema`. + +* `apiserver_flowcontrol_current_executing_requests` is a gauge vector + holding the instantaneous number of executing (not waiting in a + queue) requests, broken down by the labels `priorityLevel` and + `flowSchema`. + +* `apiserver_flowcontrol_priority_level_request_count_samples` is a + histogram vector of observations of the then-current number of + requests broken down by the labels `phase` (which takes on the + values `waiting` and `executing`) and `priorityLevel`. Each + histogram gets observations taken periodically, up through the last + activity of the relevant sort. The observations are made at a high + rate. + +* `apiserver_flowcontrol_priority_level_request_count_watermarks` is a + histogram vector of high or low water marks of the number of + requests broken down by the labels `phase` (which takes on the + values `waiting` and `executing`) and `priorityLevel`; the label + `mark` takes on values `high` and `low`. The water marks are + accumulated over windows bounded by the times when an observation + was added to + `apiserver_flowcontrol_priority_level_request_count_samples`. These + water marks show the range of values that occurred between samples. + +* `apiserver_flowcontrol_request_queue_length_after_enqueue` is a + histogram vector of queue lengths for the queues, broken down by + the labels `priorityLevel` and `flowSchema`, as sampled by the + enqueued requests. Each request that gets queued contributes one + sample to its histogram, reporting the length of the queue just + after the request was added. Note that this produces different + statistics than an unbiased survey would. {{< note >}} An outlier value in a histogram here means it is likely that a single flow (i.e., requests by one user or for one namespace, depending on @@ -349,14 +400,17 @@ poorly-behaved workloads that may be harming system health. to increase that PriorityLevelConfiguration's concurrency shares. 
{{< /note >}} -* `apiserver_flowcontrol_request_concurrency_limit` gives the computed - concurrency limit (based on the API server's total concurrency limit and PriorityLevelConfigurations' - concurrency shares) for each PriorityLevelConfiguration. +* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector + holding the computed concurrency limit (based on the API server's + total concurrency limit and PriorityLevelConfigurations' concurrency + shares), broken down by the label `priorityLevel`. -* `apiserver_flowcontrol_request_wait_duration_seconds` gives a histogram of how - long requests spent queued, grouped by the FlowSchema that matched the - request, the PriorityLevel to which it was assigned, and whether or not the - request successfully executed. +* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram + vector of how long requests spent queued, broken down by the labels + `flowSchema` (indicating which one matched the request), + `priorityLevel` (indicating the one to which the request was + assigned), and `execute` (indicating whether the request started + executing). {{< note >}} Since each FlowSchema always assigns requests to a single PriorityLevelConfiguration, you can add the histograms for all the @@ -364,9 +418,11 @@ poorly-behaved workloads that may be harming system health. requests assigned to that priority level. {{< /note >}} -* `apiserver_flowcontrol_request_execution_seconds` gives a histogram of how - long requests took to actually execute, grouped by the FlowSchema that matched the - request and the PriorityLevel to which it was assigned. +* `apiserver_flowcontrol_request_execution_seconds` is a histogram + vector of how long requests took to actually execute, broken down by + the labels `flowSchema` (indicating which one matched the request) + and `priorityLevel` (indicating the one to which the request was + assigned). ### Debug endpoints diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index d60826e128..3e7f398e66 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -133,7 +133,7 @@ Because the logging agent must run on every node, it's common to implement it as Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_. -Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node. +Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents.
Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node. ### Using a sidecar container with the logging agent @@ -240,7 +240,7 @@ a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to c {{< note >}} The configuration of fluentd is beyond the scope of this article. For information about configuring fluentd, see the -[official fluentd documentation](http://docs.fluentd.org/). +[official fluentd documentation](https://docs.fluentd.org/). {{< /note >}} The second file describes a pod that has a sidecar container running fluentd. diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index d0485a4342..50ed69ff42 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -10,9 +10,6 @@ weight: 40 You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/). - - - ## Organizing resource configurations @@ -356,7 +353,8 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you ### kubectl apply -It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)), +It is suggested to maintain a set of configuration files in source control +(see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)), so that they can be maintained and versioned along with the code for the resources they configure. Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster. diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index 6779ee984a..ff30e60b12 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -17,9 +17,6 @@ problems to address: 3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/). 4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/). - - - Kubernetes is all about sharing machines between applications. Typically, @@ -93,7 +90,7 @@ Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to i ### AOS from Apstra -[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs. +[AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs. The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. 
These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment. @@ -101,7 +98,7 @@ AOS has a rich set of REST API endpoints that enable Kubernetes to quickly chang AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux. -Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/ +Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/ ### AWS VPC CNI for Kubernetes @@ -123,7 +120,7 @@ Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https:/ With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed. -BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/). +BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/). ### Cilium @@ -135,7 +132,7 @@ addressing, and it can be used in combination with other CNI plugins. ### CNI-Genie from Huawei -[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). 
+[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin. @@ -157,11 +154,11 @@ network complexity required to deploy Kubernetes at scale within AWS. ### Contiv -[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced. +[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contiv.io) is all open sourced. ### Contrail / Tungsten Fabric -[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads. +[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads. ### DANM @@ -242,7 +239,7 @@ traffic to the internet. ### Kube-router -[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer. +[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer. 
### L2 networks and linux bridging @@ -252,8 +249,8 @@ Note that these instructions have only been tried very casually - it seems to work, but has not been thoroughly tested. If you use this technique and perfect the process, please let us know. -Follow the "With Linux Bridge devices" section of [this very nice -tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from +Follow the "With Linux Bridge devices" section of +[this very nice tutorial](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from Lars Kellogg-Stedman. ### Multus (a Multi Network plugin) @@ -274,7 +271,7 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p ### Nuage Networks VCS (Virtualized Cloud Services) -[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards. +[Nuage](https://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards. The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications. @@ -294,7 +291,7 @@ at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes). ### Project Calico -[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine. +[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine. Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall. @@ -302,7 +299,7 @@ Calico can also be run in policy enforcement mode in conjunction with other netw ### Romana -[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces. +[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces. ### Weave Net from Weaveworks @@ -312,13 +309,9 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl or stand-alone. 
In either version, it doesn't require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes. - - ## {{% heading "whatsnext" %}} - The early design of the networking model and its rationale, and some future -plans are described in more detail in the [networking design -document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). - +plans are described in more detail in the +[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index f791510b39..c30e8d243b 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -752,4 +752,4 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh * Read the [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) API reference -* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS +* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index 744034f8ea..5882ce95dc 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -73,7 +73,7 @@ A desired state of an object is described by a Deployment, and if changes to tha ## Container Images -The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image. +The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image. - `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally. diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md index 01e1f64d19..dce1b82f9c 100644 --- a/content/en/docs/concepts/configuration/pod-priority-preemption.md +++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md @@ -11,7 +11,7 @@ weight: 70 {{< feature-state for_k8s_version="v1.14" state="stable" >}} -[Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the +[Pods](/docs/concepts/workloads/pods/pod/) can have _priority_. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible. 
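In practice, priority is expressed through a PriorityClass object that Pods reference by name. The following is a minimal sketch only; the class name, priority value, and pod/image names are illustrative and not taken from this page:

```bash
# Minimal sketch: create a PriorityClass and a Pod that references it.
# All names and the priority value below are illustrative.
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # larger value = higher priority
globalDefault: false
description: "For workloads that should outrank lower-priority Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx
EOF
```

If the cluster has no room, the scheduler may evict Pods of lower priority to make scheduling of `important-app` possible, as described above.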
diff --git a/content/en/docs/concepts/containers/container-environment.md b/content/en/docs/concepts/containers/container-environment.md index a57ac2181a..7ec28e97b4 100644 --- a/content/en/docs/concepts/containers/container-environment.md +++ b/content/en/docs/concepts/containers/container-environment.md @@ -28,7 +28,7 @@ The Kubernetes Container environment provides several important resources to Con The *hostname* of a Container is the name of the Pod in which the Container is running. It is available through the `hostname` command or the -[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html) +[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html) function call in libc. The Pod name and namespace are available as environment variables through the @@ -51,7 +51,7 @@ FOO_SERVICE_PORT= ``` Services have dedicated IP addresses and are available to the Container via DNS, -if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.  +if [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.  diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 415d920b40..ee21a00526 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -19,8 +19,6 @@ before referring to it in a This page provides an outline of the container image concept. - - ## Image names @@ -261,7 +259,7 @@ EOF This needs to be done for each pod that is using a private registry. However, setting of this field can be automated by setting the imagePullSecrets -in a [ServiceAccount](/docs/user-guide/service-accounts) resource. +in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-accounts/) resource. Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions. diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md index 6468ffa410..4ffb0a831f 100644 --- a/content/en/docs/concepts/extend-kubernetes/_index.md +++ b/content/en/docs/concepts/extend-kubernetes/_index.md @@ -24,9 +24,6 @@ their work environment. Developers who are prospective {{< glossary_tooltip text useful as an introduction to what extension points and patterns exist, and their trade-offs and limitations. - - - ## Overview @@ -37,14 +34,14 @@ Customization approaches can be broadly divided into *configuration*, which only *Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary: -* [kubelet](/docs/admin/kubelet/) -* [kube-apiserver](/docs/admin/kube-apiserver/) -* [kube-controller-manager](/docs/admin/kube-controller-manager/) -* [kube-scheduler](/docs/admin/kube-scheduler/). +* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) +* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) +* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. 
Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options. -*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. +*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. ## Extensions @@ -75,10 +72,9 @@ failure. In the webhook model, Kubernetes makes a network request to a remote service. In the *Binary Plugin* model, Kubernetes executes a binary (program). -Binary plugins are used by the kubelet (e.g. [Flex Volume -Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) -and [Network -Plugins](/docs/concepts/cluster-administration/network-plugins/)) +Binary plugins are used by the kubelet (e.g. +[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume) +and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)) and by kubectl. Below is a diagram showing how the extension points interact with the @@ -98,12 +94,12 @@ This diagram shows the extension points in a Kubernetes system. 1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies. -2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section. -3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. 
You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions. -4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section. +2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section. +3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions. +4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section. 5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources. -6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking. -7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins). +6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking. +7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins). If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions. @@ -119,7 +115,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro Do not use a Custom Resource as data storage for application, user, or monitoring data. -For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/). +For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). ### Combining New APIs with Automation @@ -172,20 +168,21 @@ Kubelet call a Binary Plugin to mount the volume. ### Device Plugins Device plugins allow a node to discover new Node resources (in addition to the -builtin ones like cpu and memory) via a [Device -Plugin](/docs/concepts/cluster-administration/device-plugins/). +builtin ones like cpu and memory) via a +[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/). ### Network Plugins -Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/). 
+Different networking fabrics can be supported via node-level +[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). ### Scheduler Extensions The scheduler is a special type of controller that watches pods, and assigns pods to nodes. The default scheduler can be replaced entirely, while -continuing to use other Kubernetes components, or [multiple -schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) +continuing to use other Kubernetes components, or +[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) can run at the same time. This is a significant undertaking, and almost all Kubernetes users find they @@ -196,18 +193,14 @@ The scheduler also supports a that permits a webhook backend (scheduler extension) to filter and prioritize the nodes chosen for a pod. - - - ## {{% heading "whatsnext" %}} -* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/) +* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/) * Learn more about Infrastructure extensions - * [Network Plugins](/docs/concepts/cluster-administration/network-plugins/) - * [Device Plugins](/docs/concepts/cluster-administration/device-plugins/) + * [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) + * [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) * Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) * Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/) - diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index 1f47323301..74147624f5 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -15,8 +15,6 @@ The additional APIs can either be ready-made solutions such as [service-catalog] The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object. - - ## Aggregation layer @@ -34,11 +32,8 @@ If your extension API server cannot achieve that latency requirement, consider m `EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver to disable the timeout restriction. This deprecated feature gate will be removed in a future release. - - ## {{% heading "whatsnext" %}} - * To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/). * Then, [setup an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer. * Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). 
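A quick way to see what the aggregation layer is serving on a running cluster is to list the APIService objects; entries that reference a Service in the cluster are backed by an extension API server. This is a minimal sketch, assuming `kubectl` access to the cluster:

```bash
# List registered APIService objects. Rows whose SERVICE column is not "Local"
# are typically handled by an aggregated (extension) API server rather than
# by kube-apiserver itself.
kubectl get apiservices
```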
diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index bd7d27305e..9a84267445 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -13,8 +13,6 @@ weight: 10 resource to your Kubernetes cluster and when to use a standalone service. It describes the two methods for adding custom resources and how to choose between them. - - ## Custom resources @@ -28,7 +26,7 @@ many core Kubernetes functions are now built using custom resources, making Kube Custom resources can appear and disappear in a running cluster through dynamic registration, and cluster admins can update custom resources independently of the cluster itself. Once a custom resource is installed, users can create and access its objects using -[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like +[kubectl](/docs/reference/kubectl/overview/), just as they do for built-in resources like *Pods*. ## Custom controllers @@ -52,7 +50,9 @@ for specific applications into an extension of the Kubernetes API. ## Should I add a custom resource to my Kubernetes Cluster? -When creating a new API, consider whether to [aggregate your API with the Kubernetes cluster APIs](/docs/concepts/api-extension/apiserver-aggregation/) or let your API stand alone. +When creating a new API, consider whether to +[aggregate your API with the Kubernetes cluster APIs](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) +or let your API stand alone. | Consider API aggregation if: | Prefer a stand-alone API if: | | ---------------------------- | ---------------------------- | diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 7295362867..954e391407 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -19,8 +19,6 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap and other similar computing resources that may require vendor specific initialization and setup. - - ## Device plugin registration @@ -39,7 +37,7 @@ During the registration, the device plugin needs to send: * The name of its Unix socket. * The Device Plugin API version against which it was built. * The `ResourceName` it wants to advertise. Here `ResourceName` needs to follow the - [extended resource naming scheme](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) + [extended resource naming scheme](/docs/concepts/configuration/manage-resources-container/#extended-resources) as `vendor-domain/resourcetype`. (For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.) diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md index bc06bd4ab0..7f5fe11807 100644 --- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md @@ -15,14 +15,14 @@ Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code. 
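Tying the device plugin registration above back to workloads: once a plugin has advertised a `ResourceName` such as `nvidia.com/gpu`, Pods consume it like any other resource through `resources.limits`. A minimal sketch follows; the pod name and container image are placeholders:

```bash
# Sketch: request an extended resource advertised by a device plugin.
# The resource name nvidia.com/gpu matches the example above; the pod and
# image names are placeholders. Extended resources take integer quantities.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: example.com/cuda-app:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```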
-This guide describes the options for customizing a Kubernetes -cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to -understand how to adapt their Kubernetes cluster to the needs of -their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it -useful as an introduction to what extension points and patterns -exist, and their trade-offs and limitations. - - +This guide describes the options for customizing a Kubernetes cluster. It is +aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} +who want to understand how to adapt their +Kubernetes cluster to the needs of their work environment. Developers who are prospective +{{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} +or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} +will also find it useful as an introduction to what extension points and +patterns exist, and their trade-offs and limitations. @@ -35,14 +35,14 @@ Customization approaches can be broadly divided into *configuration*, which only *Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary: -* [kubelet](/docs/admin/kubelet/) -* [kube-apiserver](/docs/admin/kube-apiserver/) -* [kube-controller-manager](/docs/admin/kube-controller-manager/) -* [kube-scheduler](/docs/admin/kube-scheduler/). +* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) +* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) +* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options. -*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. 
+*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. ## Extensions @@ -73,10 +73,9 @@ failure. In the webhook model, Kubernetes makes a network request to a remote service. In the *Binary Plugin* model, Kubernetes executes a binary (program). -Binary plugins are used by the kubelet (e.g. [Flex Volume -Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) -and [Network -Plugins](/docs/concepts/cluster-administration/network-plugins/)) +Binary plugins are used by the kubelet (e.g. +[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume) +and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)) and by kubectl. Below is a diagram showing how the extension points interact with the @@ -95,13 +94,13 @@ This diagram shows the extension points in a Kubernetes system. -1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies. -2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section. -3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions. -4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section. -5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources. -6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking. -7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins). +1. 
Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies. +2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section. +3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions. +4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section. +5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources. +6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of pod networking. +7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/extend-kubernetes/#storage-plugins). If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions. @@ -117,7 +116,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro Do not use a Custom Resource as data storage for application, user, or monitoring data. -For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/). +For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). ### Combining New APIs with Automation @@ -162,28 +161,28 @@ After a request is authorized, if it is a write operation, it also goes through ### Storage Plugins -[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md -) allow users to mount volume types without built-in support by having the +[Flex Volumes](/docs/concepts/storage/volumes/#flexVolume) +allow users to mount volume types without built-in support by having the Kubelet call a Binary Plugin to mount the volume. ### Device Plugins Device plugins allow a node to discover new Node resources (in addition to the -builtin ones like cpu and memory) via a [Device -Plugin](/docs/concepts/cluster-administration/device-plugins/). - +builtin ones like cpu and memory) via a +[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/). ### Network Plugins -Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/). 
+Different networking fabrics can be supported via node-level +[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). ### Scheduler Extensions The scheduler is a special type of controller that watches pods, and assigns pods to nodes. The default scheduler can be replaced entirely, while -continuing to use other Kubernetes components, or [multiple -schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) +continuing to use other Kubernetes components, or +[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) can run at the same time. This is a significant undertaking, and almost all Kubernetes users find they @@ -195,16 +194,13 @@ that permits a webhook backend (scheduler extension) to filter and prioritize the nodes chosen for a pod. - - ## {{% heading "whatsnext" %}} - -* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/) +* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/) * Learn more about Infrastructure extensions - * [Network Plugins](/docs/concepts/cluster-administration/network-plugins/) - * [Device Plugins](/docs/concepts/cluster-administration/device-plugins/) + * [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) + * [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) * Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) * Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/) diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index dda8f0020b..31e8473ca9 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -6,14 +6,11 @@ weight: 30 -Operators are software extensions to Kubernetes that make use of [custom -resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) +Operators are software extensions to Kubernetes that make use of +[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to manage applications and their components. Operators follow Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane). - - - ## Motivation diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index 42d12425b7..aa09776753 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -24,7 +24,6 @@ The Kubernetes API lets you query and manipulate the state of objects in the Kub API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/). - ## API changes @@ -135,7 +134,7 @@ There are several API groups in a cluster: (e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a full list of available API groups. -There are two paths to extending the API with [custom resources](/docs/concepts/api-extension/custom-resources/): +There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/): 1. 
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) lets you declaratively define how the API server should provide your chosen resource API. diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index e995db10a5..e580cadb88 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -22,10 +22,9 @@ Each object can have a set of key/value labels defined. Each Key must be unique } ``` -Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/). - - - +Labels allow for efficient queries and watches and are ideal for use in UIs +and CLIs. Non-identifying information should be recorded using +[annotations](/docs/concepts/overview/working-with-objects/annotations/). @@ -77,7 +76,7 @@ spec: ## Label selectors -Unlike [names and UIDs](/docs/user-guide/identifiers), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s). +Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s). Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes. @@ -186,7 +185,10 @@ kubectl get pods -l 'environment,environment notin (frontend)' ### Set references in API objects -Some Kubernetes objects, such as [`services`](/docs/user-guide/services) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods). +Some Kubernetes objects, such as [`services`](/docs/concepts/services-networking/service/) +and [`replicationcontrollers`](/docs/concepts/workloads/controllers/replicationcontroller/), +also use label selectors to specify sets of other resources, such as +[pods](/docs/concepts/workloads/pods/pod/). #### Service and ReplicationController @@ -210,7 +212,11 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon #### Resources that support set-based requirements -Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`Deployment`](/docs/concepts/workloads/controllers/deployment/), [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/), support _set-based_ requirements as well. +Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/job/), +[`Deployment`](/docs/concepts/workloads/controllers/deployment/), +[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and +[`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/), +support _set-based_ requirements as well. ```yaml selector: @@ -228,4 +234,3 @@ selector: One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information. 
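As a minimal sketch of that node-selection use case (the node name and label are illustrative): label a node, then reference the label from a Pod's `nodeSelector` so the Pod can only schedule onto matching nodes.

```bash
# Label a node, then constrain a Pod to nodes carrying that label.
# The node name "worker-1" and the disktype=ssd label are illustrative.
kubectl label nodes worker-1 disktype=ssd

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd       # only nodes with this label are eligible
  containers:
  - name: app
    image: nginx
EOF
```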
- diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index 59b1763450..004c18ad2c 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -13,9 +13,6 @@ weight: 30 Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. - - - ## When to Use Multiple Namespaces @@ -35,13 +32,14 @@ In future versions of Kubernetes, objects in the same namespace will have the sa access control policies by default. It is not necessary to use multiple namespaces just to separate slightly different -resources, such as different versions of the same software: use [labels](/docs/user-guide/labels) to distinguish +resources, such as different versions of the same software: use +[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish resources within the same namespace. ## Working with Namespaces -Creation and deletion of namespaces are described in the [Admin Guide documentation -for namespaces](/docs/admin/namespaces). +Creation and deletion of namespaces are described in the +[Admin Guide documentation for namespaces](/docs/tasks/administer-cluster/namespaces). {{< note >}} Avoid creating namespace with prefix `kube-`, since it is reserved for Kubernetes system namespaces. @@ -93,7 +91,8 @@ kubectl config view --minify | grep namespace: ## Namespaces and DNS -When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). +When you create a [Service](/docs/concepts/services-networking/service/), +it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `..svc.cluster.local`, which means that if a container just uses ``, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across @@ -104,7 +103,8 @@ across namespaces, you need to use the fully qualified domain name (FQDN). Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However namespace resources are not themselves in a namespace. -And low-level resources, such as [nodes](/docs/admin/node) and +And low-level resources, such as +[nodes](/docs/concepts/architecture/nodes/) and persistentVolumes, are not in any namespace. To see which Kubernetes resources are and aren't in a namespace: @@ -117,12 +117,8 @@ kubectl api-resources --namespaced=true kubectl api-resources --namespaced=false ``` - - ## {{% heading "whatsnext" %}} * Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace). * Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace). - - diff --git a/content/en/docs/concepts/overview/working-with-objects/object-management.md b/content/en/docs/concepts/overview/working-with-objects/object-management.md index 7cb65b5497..a2dd737e77 100644 --- a/content/en/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/en/docs/concepts/overview/working-with-objects/object-management.md @@ -10,7 +10,6 @@ Kubernetes objects. This document provides an overview of the different approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for details of managing objects by Kubectl. 
- ## Management techniques @@ -167,11 +166,8 @@ Disadvantages compared to imperative object configuration: - Declarative object configuration is harder to debug and understand results when they are unexpected. - Partial updates using diffs create complex merge and patch operations. - - ## {{% heading "whatsnext" %}} - - [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) - [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/) - [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) @@ -180,4 +176,3 @@ Disadvantages compared to imperative object configuration: - [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) - diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index 5b670d38a0..8158b78437 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -8,13 +8,10 @@ weight: 10 -By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster. +By default, containers run with unbounded [compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace. - - - A _LimitRange_ provides constraints that can: @@ -54,11 +51,8 @@ there may be contention for resources. In this case, the Containers or Pods will Neither contention nor changes to a LimitRange will affect already created resources. - - ## {{% heading "whatsnext" %}} - Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information. For examples on using limits, see: @@ -68,7 +62,5 @@ For examples on using limits, see: - [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/). - [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). - [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage). -- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/). - - +- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). 
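For a concrete, if simplified, picture of such a policy, the sketch below defines default values and min/max bounds for container CPU in one namespace; the object name and the numbers are illustrative, not recommendations:

```yaml
# Illustrative LimitRange: defaults plus min/max bounds for container CPU.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - type: Container
      default:              # limit injected when a container declares none
        cpu: 500m
      defaultRequest:       # request injected when a container declares none
        cpu: 250m
      max:
        cpu: "1"
      min:
        cpu: 100m
```

Once such an object exists in a namespace, the LimitRanger admission controller injects the defaults into new Pods and rejects containers whose requests or limits fall outside the min/max bounds.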
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index e72207ccd1..22cac5f2a2 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -14,9 +14,6 @@ weight: 20 Pod Security Policies enable fine-grained authorization of pod creation and updates. - - - ## What is a Pod Security Policy? @@ -143,13 +140,13 @@ For a complete example of authorizing a PodSecurityPolicy, see ### Troubleshooting -- The [Controller Manager](/docs/admin/kube-controller-manager/) must be run +- The [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) must be run against [the secured API port](/docs/reference/access-authn-authz/controlling-access/), and must not have superuser permissions. Otherwise requests would bypass authentication and authorization modules, all PodSecurityPolicy objects would be allowed, and users would be able to create privileged containers. For more details -on configuring Controller Manager authorization, see [Controller -Roles](/docs/reference/access-authn-authz/rbac/#controller-roles). +on configuring Controller Manager authorization, see +[Controller Roles](/docs/reference/access-authn-authz/rbac/#controller-roles). ## Policy Order @@ -639,15 +636,12 @@ By default, all safe sysctls are allowed. - `allowedUnsafeSysctls` - allows specific sysctls that had been disallowed by the default list, so long as these are not listed in `forbiddenSysctls`. Refer to the [Sysctl documentation]( -/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy). - - +/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy). ## {{% heading "whatsnext" %}} +- See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations. -See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations. - -Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. +- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 1d2aaca499..6a29cb1741 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -13,9 +13,6 @@ there is a concern that one team could use more than its fair share of resources Resource quotas are a tool for administrators to address this concern. - - - A resource quota, defined by a `ResourceQuota` object, provides constraints that limit @@ -27,15 +24,21 @@ Resource quotas work like this: - Different teams work in different namespaces. Currently this is voluntary, but support for making this mandatory via ACLs is planned. + - The administrator creates one `ResourceQuota` for each namespace. + - Users create resources (pods, services, etc.) in the namespace, and the quota system tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`. + - If creating or updating a resource violates a quota constraint, the request will fail with HTTP status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated. 
+ - If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements. - See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem. + + See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) + for an example of how to avoid this problem. The name of a `ResourceQuota` object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). @@ -63,7 +66,7 @@ A resource quota is enforced in a particular namespace when there is a ## Compute Resource Quota -You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace. +You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace. The following resource types are supported: @@ -77,7 +80,7 @@ The following resource types are supported: ### Resource Quota For Extended Resources In addition to the resources mentioned above, in release 1.10, quota support for -[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) is added. +[extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) is added. As overcommit is not allowed for extended resources, it makes no sense to specify both `requests` and `limits` for the same extended resource in a quota. So for extended resources, only quota items @@ -553,7 +556,7 @@ plugins: limitedResources: - resource: pods matchScopes: - - scopeName: PriorityClass + - scopeName: PriorityClass operator: In values: ["cluster-services"] ``` @@ -572,7 +575,7 @@ plugins: limitedResources: - resource: pods matchScopes: - - scopeName: PriorityClass + - scopeName: PriorityClass operator: In values: ["cluster-services"] ``` @@ -595,11 +598,7 @@ See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/). - - ## {{% heading "whatsnext" %}} - -See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. - +- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information. diff --git a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md index 0a94a779a3..19e22707b8 100644 --- a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md +++ b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -10,8 +10,6 @@ In Kubernetes, _scheduling_ refers to making sure that {{< glossary_tooltip text are matched to {{< glossary_tooltip text="Nodes" term_id="node" >}} so that {{< glossary_tooltip term_id="kubelet" >}} can run them. 
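Although Pods rarely need to say so explicitly, the scheduler responsible for placing a Pod can be named in its spec; a minimal sketch, with a hypothetical Pod name and image, assuming the default kube-scheduler is meant:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-default
spec:
  schedulerName: default-scheduler   # kube-scheduler; omitting the field has the same effect
  containers:
    - name: app
      image: nginx:1.19
```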
- - ## Scheduling overview {#scheduling} @@ -92,7 +90,7 @@ of the scheduler: * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler -* Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) +* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/) * Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/) * Learn about scheduling of Pods that use volumes in: diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index 97a190a280..9bb54b3e4f 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -175,7 +175,7 @@ toleration to pods that use the special hardware. As in the dedicated nodes use it is probably easiest to apply the tolerations using a custom [admission controller](/docs/reference/access-authn-authz/admission-controllers/). For example, it is recommended to use [Extended -Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) +Resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) to represent the special hardware, taint your special hardware nodes with the extended resource name and run the [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 79c6053301..86c4dd1623 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -133,7 +133,7 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the -[CoreDNS cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). +[CoreDNS cluster addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). 
{{< note >}} If the service environment variables are not desired (because possible clashing with expected program ones, too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks` diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index a200587570..0af741ff0c 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -68,10 +68,19 @@ of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` ### A/AAAA records -Any pods created by a Deployment or DaemonSet have the following -DNS resolution available: +In general a pod has the following DNS resolution: -`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.` +`pod-ip-address.my-namespace.pod.cluster-domain.example`. + +For example, if a pod in the `default` namespace has the IP address 172.17.0.3, +and the domain name for your cluster is `cluster.local`, then the Pod has a DNS name: + +`172-17-0-3.default.pod.cluster.local`. + +Any pods created by a Deployment or DaemonSet exposed by a Service have the +following DNS resolution available: + +`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`. ### Pod's hostname and subdomain fields @@ -294,4 +303,3 @@ The availability of Pod DNS Config and DNS Policy "`None`" is shown as below. For guidance on administering DNS configurations, check [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/) - diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 33875e3637..3b375f7421 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -26,7 +26,7 @@ Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.i * [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress controller with [community](https://www.getambassador.io/docs) or [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/). -* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). +* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](https://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). * [AWS ALB Ingress Controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) enables ingress using the [AWS Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/). * [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller provided and supported by VMware. diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index d7fc53a4d0..d3e60cccbe 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -393,7 +393,7 @@ a Service. 
It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as -[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) +[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) that allow you to achieve the same end result. Please review the controller specific documentation to see how they handle health checks (for example: [nginx](https://git.k8s.io/ingress-nginx/README.md), or diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 5a12c4e91d..03d99036c1 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -20,8 +20,6 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. - - ## Motivation @@ -387,7 +385,7 @@ variables and DNS. When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both [Docker links compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see -[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49)) +[makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49)) and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. @@ -751,7 +749,7 @@ In the above example, if the Service contained three ports, `80`, `443`, and `8443`, then `443` and `8443` would use the SSL certificate, but `80` would just be proxied HTTP. -From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. +From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: ```bash @@ -891,7 +889,7 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet ``` {{< note >}} -NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) +NLB only works with certain instance classes; see the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) on Elastic Load Balancing for a list of supported instance types. {{< /note >}} @@ -1048,9 +1046,9 @@ spec: ## Shortcomings Using the userspace proxy for VIPs, work at small to medium scale, but will -not scale to very large clusters with thousands of Services. The [original -design proposal for portals](http://issue.k8s.io/1107) has more details on -this. +not scale to very large clusters with thousands of Services. The +[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107) +has more details on this. 
Using the userspace proxy obscures the source IP address of a packet accessing a Service. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index f60b90cb30..132e397e75 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -19,9 +19,6 @@ weight: 20 This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested. - - - ## Introduction @@ -148,7 +145,11 @@ The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim. -However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below: +However, an administrator can configure a custom recycler Pod template using +the Kubernetes controller manager command line arguments as described in the +[reference](/docs/reference/command-line-tools-reference/kube-controller-manager/). +The custom recycler Pod template must contain a `volumes` specification, as +shown in the example below: ```yaml apiVersion: v1 diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 1f12303eb7..f2c8589db4 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -15,8 +15,6 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) and [persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested. - - ## Introduction @@ -168,11 +166,11 @@ A cluster administrator can address this issue by specifying the `WaitForFirstCo will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. PersistentVolumes will be selected or provisioned conforming to the topology that is specified by the Pod's scheduling constraints. These include, but are not limited to, [resource -requirements](/docs/concepts/configuration/manage-compute-resources-container), +requirements](/docs/concepts/configuration/manage-resources-containers/), [node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector), [pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity), -and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration). +and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration). The following plugins support `WaitForFirstConsumer` with dynamic provisioning: @@ -244,7 +242,7 @@ parameters: ``` * `type`: `io1`, `gp2`, `sc1`, `st1`. See - [AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) + [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) for details. Default: `gp2`. * `zone` (Deprecated): AWS zone. 
If neither `zone` nor `zones` is specified, volumes are generally round-robin-ed across all active zones where Kubernetes cluster @@ -256,7 +254,7 @@ parameters: * `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS volume plugin multiplies this with size of requested volume to compute IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see - [AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). + [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). A string is expected here, i.e. `"10"`, not `10`. * `fsType`: fsType that is supported by kubernetes. Default: `"ext4"`. * `encrypted`: denotes whether the EBS volume should be encrypted or not. diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 9819334ce8..4ed310f6df 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -18,10 +18,7 @@ Container starts with a clean state. Second, when running Containers together in a `Pod` it is often necessary to share files between those Containers. The Kubernetes `Volume` abstraction solves both of these problems. -Familiarity with [Pods](/docs/user-guide/pods) is suggested. - - - +Familiarity with [Pods](/docs/concepts/workloads/pods/pod/) is suggested. @@ -100,7 +97,7 @@ We welcome additional contributions. ### awsElasticBlockStore {#awselasticblockstore} An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS -Volume](http://aws.amazon.com/ebs/) into your Pod. Unlike +Volume](https://aws.amazon.com/ebs/) into your Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be "handed off" @@ -401,8 +398,8 @@ See the [Flocker example](https://github.com/kubernetes/examples/tree/{{< param ### gcePersistentDisk {#gcepersistentdisk} -A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent -Disk](http://cloud.google.com/compute/docs/disks) into your Pod. Unlike +A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) +[Persistent Disk](https://cloud.google.com/compute/docs/disks) into your Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between Pods. @@ -537,7 +534,7 @@ spec: ### glusterfs {#glusterfs} -A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open +A `glusterfs` volume allows a [Glusterfs](https://www.gluster.org) (an open source networked filesystem) volume to be mounted into your Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `glusterfs` volume are preserved and the volume is merely unmounted. This @@ -589,7 +586,7 @@ Watch out when using this type of volume, because: able to account for resources used by a `hostPath` * the files or directories created on the underlying hosts are only writable by root. 
You either need to run your process as root in a - [privileged Container](/docs/user-guide/security-context) or modify the file + [privileged Container](/docs/tasks/configure-pod-container/security-context/) or modify the file permissions on the host to be able to write to a `hostPath` volume #### Example Pod @@ -952,7 +949,7 @@ More details and examples can be found [here](https://github.com/kubernetes/exam ### quobyte {#quobyte} -A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to +A `quobyte` volume allows an existing [Quobyte](https://www.quobyte.com) volume to be mounted into your Pod. {{< caution >}} @@ -966,8 +963,8 @@ GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte ### rbd {#rbd} -An `rbd` volume allows a [Rados Block -Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your +An `rbd` volume allows a +[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `rbd` volume are preserved and the volume is merely unmounted. This means that a RBD volume can be pre-populated with data, and that data can @@ -1044,7 +1041,7 @@ A Container using a Secret as a [subPath](#using-subpath) volume mount will not receive Secret updates. {{< /note >}} -Secrets are described in more detail [here](/docs/user-guide/secrets). +Secrets are described in more detail [here](/docs/concepts/configuration/secret/). ### storageOS {#storageos} @@ -1274,11 +1271,12 @@ medium of the filesystem holding the kubelet root dir (typically Pods. In the future, we expect that `emptyDir` and `hostPath` volumes will be able to -request a certain amount of space using a [resource](/docs/user-guide/compute-resources) +request a certain amount of space using a [resource](/docs/concepts/configuration/manage-resources-containers/) specification, and to select the type of media to use, for clusters that have several media types. ## Out-of-Tree Volume Plugins + The Out-of-tree volume plugins include the Container Storage Interface (CSI) and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository. diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index c3d8cf36d8..0d1e1d34b3 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -26,9 +26,6 @@ In a simple case, one DaemonSet, covering all nodes, would be used for each type A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types. - - - ## Writing a DaemonSet Spec @@ -48,7 +45,8 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml ### Required Fields As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For -general information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/), +general information about working with config files, see +[running stateless applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/), and [object management using kubectl](/docs/concepts/overview/working-with-objects/object-management/) documents. 
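A bare-bones DaemonSet showing just those required fields might look like the sketch below; the name, labels, image, and command are placeholders and deliberately not the `controllers/daemonset.yaml` example applied earlier:

```yaml
# Minimal DaemonSet sketch: apiVersion, kind, metadata, and a spec with a
# selector that matches the Pod template labels.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent            # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.32
          command: ["sh", "-c", "while true; do sleep 3600; done"]
```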
The name of a DaemonSet object must be a valid @@ -71,7 +69,7 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl ### Pod Selector The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of -a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). +a [Job](/docs/concepts/workloads/controllers/job/). As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the `.spec.template`. The pod selector will no longer be defaulted when left empty. Selector @@ -147,7 +145,7 @@ automatically to DaemonSet Pods. The default scheduler ignores ### Taints and Tolerations Although Daemon Pods respect -[taints and tolerations](/docs/concepts/configuration/taint-and-toleration), +[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/), the following tolerations are added to DaemonSet Pods automatically according to the related features. @@ -213,7 +211,7 @@ use a DaemonSet rather than creating individual Pods. ### Static Pods It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These -are called [static pods](/docs/concepts/cluster-administration/static-pod/). +are called [static pods](/docs/tasks/configure-pod-container/static-pod/). Unlike DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful in cluster bootstrapping cases. Also, static Pods may be deprecated in the future. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 2c2fd8c7c2..6e6b8c8ecf 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -22,7 +22,6 @@ You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_ Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below. {{< /note >}} - ## Use Case @@ -1040,7 +1039,8 @@ can create multiple Deployments, one for each release, following the canary patt ## Writing a Deployment Spec As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`, and `.metadata` fields. -For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/), +For general information about working with config files, see +[deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. The name of a Deployment object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 81c1280943..479858e3d4 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -24,9 +24,6 @@ due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. 
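As a rough sketch of that parallel case, with a made-up name and workload, a Job can bound concurrency with `parallelism` while requiring a number of successful `completions`:

```yaml
# Illustrative parallel Job: six successful completions, at most two Pods at a time.
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-demo
spec:
  completions: 6
  parallelism: 2
  template:
    spec:
      restartPolicy: Never        # Job Pods must use Never or OnFailure
      containers:
        - name: worker
          image: busybox:1.32
          command: ["sh", "-c", "echo processing one work item && sleep 5"]
```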
- - - ## Running an example Job @@ -122,6 +119,7 @@ A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/d The `.spec.template` is the only required field of the `.spec`. + The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`. In addition to required fields for a Pod, a pod template in a Job must specify appropriate @@ -450,7 +448,7 @@ requires only a single Pod. ### Replication Controller -Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). +Jobs are complementary to [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/). A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks). diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index d59c09fc6b..941d8f585e 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -23,9 +23,6 @@ A _ReplicationController_ ensures that a specified number of pod replicas are ru time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available. - - - ## How a ReplicationController Works @@ -134,7 +131,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified. For local container restarts, ReplicationControllers delegate to an agent on the node, -for example the [Kubelet](/docs/admin/kubelet/) or Docker. +for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker. ### Labels on the ReplicationController @@ -214,7 +211,7 @@ The ReplicationController makes it easy to scale the number of replicas up or do The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one. -As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. +As explained in [#1353](https://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time. @@ -239,11 +236,11 @@ Pods created by a ReplicationController are intended to be fungible and semantic ## Responsibilities of the ReplicationController -The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. 
In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. +The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. -The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)). +The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)). -The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc. +The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. 
For instance, we could imagine something like [Asgard](https://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc. ## API Object @@ -271,7 +268,7 @@ Unlike in the case where a user directly created pods, a ReplicationController r ### Job -Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own +Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs). ### DaemonSet @@ -283,6 +280,6 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ## For more information -Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/). +Read [Run Stateless Application Deployment](/docs/tasks/run-application/run-stateless-application-deployment/). diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md index aff9e7b9f9..fd9833356a 100644 --- a/content/en/docs/concepts/workloads/controllers/statefulset.md +++ b/content/en/docs/concepts/workloads/controllers/statefulset.md @@ -200,7 +200,7 @@ The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is -[Running and Ready](/docs/user-guide/pod-states/), and web-2 will not be deployed until +[Running and Ready](/docs/concepts/workloads/pods/pod-lifecycle/), and web-2 will not be deployed until web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready. diff --git a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md index 3a43d5e7b7..6b6ad65e0a 100644 --- a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -20,12 +20,6 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with both [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `TTLAfterFinished`. - - - - - - ## TTL Controller @@ -82,9 +76,7 @@ very small. Please be aware of this risk when setting a non-zero TTL. ## {{% heading "whatsnext" %}} +* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically) -[Clean up Jobs automatically](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically) - -[Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md) - +* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md) diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 0810b6fec7..78e8b39a47 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -16,7 +16,6 @@ what types of disruptions can happen to Pods. 
It is also for cluster administrators who want to perform automated cluster actions, like upgrading and autoscaling clusters. - ## Voluntary and involuntary disruptions @@ -70,15 +69,15 @@ deleting deployments or pods bypasses Pod Disruption Budgets. Here are some ways to mitigate involuntary disruptions: -- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-cpu-ram-container) it needs. +- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-memory-resource) it needs. - Replicate your application if you need higher availability. (Learn about running replicated -[stateless](/docs/tasks/run-application/run-stateless-application-deployment/) -and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) + [stateless](/docs/tasks/run-application/run-stateless-application-deployment/) + and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.) - For even higher availability when running replicated applications, -spread applications across racks (using -[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) -or across zones (if using a -[multi-zone cluster](/docs/setup/multiple-zones).) + spread applications across racks (using + [anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature)) + or across zones (if using a + [multi-zone cluster](/docs/setup/multiple-zones).) The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are no voluntary disruptions at all. However, your cluster administrator or hosting provider diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 1ae4796522..74c4f8e091 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -75,7 +75,7 @@ For an example of adding a label, see the PR for adding the [Italian language la Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). Other localization teams are happy to help you get started and answer any questions you have. -You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605). +You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980). ## Minimum required content diff --git a/content/en/docs/contribute/participate/_index.md b/content/en/docs/contribute/participate/_index.md index a5c0f2880a..6a86326945 100644 --- a/content/en/docs/contribute/participate/_index.md +++ b/content/en/docs/contribute/participate/_index.md @@ -105,7 +105,7 @@ SIG Docs approvers. Here's how it works. - Any Kubernetes member can add the `lgtm` label by adding a `/lgtm` comment. - Only SIG Docs approvers can merge a pull request by adding an `/approve` comment. Some approvers also perform additional - specific roles, such as [PR Wrangler](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) or + specific roles, such as [PR Wrangler](/docs/contribute/participate/pr-wranglers/) or [SIG Docs chairperson](#sig-docs-chairperson). 
diff --git a/content/en/docs/contribute/participate/pr-wranglers.md b/content/en/docs/contribute/participate/pr-wranglers.md index c2ab60a811..ba4f2925c2 100644 --- a/content/en/docs/contribute/participate/pr-wranglers.md +++ b/content/en/docs/contribute/participate/pr-wranglers.md @@ -47,6 +47,19 @@ These queries exclude localization PRs. All queries are against the main branch - [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]). - [Not against the main branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch. +### Helpful Prow commands for wranglers + +``` +# add English label +/language en + +# add squash label to PR if more than one commit +/label tide/merge-method-squash + +# retitle a PR via Prow (such as a work-in-progress [WIP] or better detail of PR) +/retitle [WIP] +``` + ### When to close Pull Requests Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. @@ -66,4 +79,4 @@ To close a pull request, leave a `/close` comment on the PR. The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity. -{{< /note >}} \ No newline at end of file +{{< /note >}} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 398f06b671..9d6e7b3327 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -49,7 +49,7 @@ with the request: * Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`. * UID: a string which identifies the end user and attempts to be more consistent and unique than username. -* Groups: a set of strings which associate users with a set of commonly grouped users. +* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users. Common values might be `system:masters` or `devops-team`. * Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful. All values are opaque to the authentication system and only hold significance diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 34861cb018..a937d18598 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -600,7 +600,11 @@ more information about how an object's schema is used to make decisions when merging, see [sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff). 
-A number of markers were added in Kubernetes 1.16 and 1.17, to allow API developers to describe the merge strategy supported by lists, maps, and structs. These markers can be applied to objects of the respective type, in Go files or OpenAPI specs. +A number of markers were added in Kubernetes 1.16 and 1.17, to allow API +developers to describe the merge strategy supported by lists, maps, and +structs. These markers can be applied to objects of the respective type, +in Go files or in the [OpenAPI schema definition of the +CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io): | Golang marker | OpenAPI extension | Accepted values | Description | Introduced in | |---|---|---|---|---| @@ -613,8 +617,12 @@ A number of markers were added in Kubernetes 1.16 and 1.17, to allow API develop By default, Server Side Apply treats custom resources as unstructured data. All keys are treated the same as struct fields, and all lists are considered atomic. -If the validation field is specified in the Custom Resource Definition, it is -used when merging objects of this type. + +If the Custom Resource Definition defines a +[schema](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io) +that contains annotations as defined in the previous "Merge Strategy" +section, these annotations will be used when merging objects of this +type. ### Using Server-Side Apply in a controller diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 2996568369..42ab59f4db 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -274,7 +274,7 @@ kubeadm to tell it what to do. When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the `/var/lib/kubelet/config.yaml` file during runtime. -If you are using a different CRI, you have to modify the file with your `cgroupDriver` value, like so: +If you are using a different CRI, you must pass your `cgroupDriver` value to `kubeadm init`, like so: ```yaml apiVersion: kubelet.config.k8s.io/v1beta1 @@ -282,6 +282,8 @@ kind: KubeletConfiguration cgroupDriver: <value> ``` +For further details, please read [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file). + Please mind, that you **only** have to do that if the cgroup driver of your CRI is not `cgroupfs`, because that is the default value in the kubelet already. diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index 696778f974..82ceef4696 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -403,4 +403,8 @@ nodeRegistration: Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please be advised that this is modifying a design principle of the Linux distribution. +## `kubeadm upgrade plan` prints out `context deadline exceeded` error message +This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. 
This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`. + +This issue is fixed as of version 1.19. \ No newline at end of file diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index 07d8fea616..a95a325d5d 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -202,8 +202,8 @@ kubectl describe node <your-node-name> | grep dongle ### For cluster administrators -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) diff --git a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md index 13dec384ea..1347dc85a7 100644 --- a/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md +++ b/content/en/docs/tasks/administer-cluster/limit-storage-consumption.md @@ -8,7 +8,7 @@ content_type: task This example demonstrates an easy way to limit the amount of storage consumed in a namespace. The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/), -[LimitRange](/docs/tasks/administer-cluster/memory-default-namespace/), +[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/), and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/). diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md index d3d1541d27..e3758e05c9 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace.md @@ -202,7 +202,7 @@ resources: ``` Because your Container did not specify its own CPU request and limit, it was given the -[default CPU request and limit](/docs/tasks/administer-cluster/cpu-default-namespace/) +[default CPU request and limit](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) from the LimitRange. At this point, your Container might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request. 
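A container can avoid relying on those injected defaults by declaring its own CPU request and limit; in this sketch the Pod name, image, and the 200m and 800m figures are illustrative values chosen to sit inside a typical min/max constraint:

```yaml
# Pod that states its own CPU request and limit, so no LimitRange defaults are injected.
apiVersion: v1
kind: Pod
metadata:
  name: explicit-cpu-demo
spec:
  containers:
    - name: app
      image: nginx:1.19
      resources:
        requests:
          cpu: 200m        # must be at least the LimitRange minimum, if one is set
        limits:
          cpu: 800m        # must not exceed the LimitRange maximum, if one is set
```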
@@ -247,15 +247,15 @@ kubectl delete namespace constraints-cpu-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md index d2e15c91da..0156d67e4d 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace.md @@ -171,15 +171,15 @@ kubectl delete namespace default-cpu-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index a5ad383e78..de80b80ce3 100644 --- 
a/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -198,7 +198,7 @@ resources: ``` Because your Container did not specify its own memory request and limit, it was given the -[default memory request and limit](/docs/tasks/administer-cluster/memory-default-namespace/) +[default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) from the LimitRange. At this point, your Container might be running or it might not be running. Recall that a prerequisite @@ -247,15 +247,15 @@ kubectl delete namespace constraints-mem-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md index df7fce39f2..d2f4790abc 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/memory-default-namespace.md @@ -178,15 +178,15 @@ kubectl delete namespace default-mem-example ### For cluster administrators -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a 
Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md index d69e3d29d6..4869c35e06 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace.md @@ -137,7 +137,7 @@ the memory request total for all Containers running in a namespace. You can also restrict the totals for memory limit, cpu request, and cpu limit. If you want to restrict individual Containers, instead of totals for all Containers, use a -[LimitRange](/docs/tasks/administer-cluster/memory-constraint-namespace/). +[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/). ## Clean up @@ -154,15 +154,15 @@ kubectl delete namespace quota-mem-cpu-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md index c44a07681f..b0485f2b45 100644 --- a/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md +++ b/content/en/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace.md @@ -115,15 +115,15 @@ kubectl delete namespace quota-pod-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a 
Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/administer-cluster/quota-api-object.md b/content/en/docs/tasks/administer-cluster/quota-api-object.md index 1fb48c7a2b..11592d2152 100644 --- a/content/en/docs/tasks/administer-cluster/quota-api-object.md +++ b/content/en/docs/tasks/administer-cluster/quota-api-object.md @@ -148,17 +148,17 @@ kubectl delete namespace quota-object-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) ### For app developers diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md index 7e558fb48f..323f5b0a48 100644 --- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md @@ -32,17 +32,17 @@ they are allowed to perform is the first line of defense. 
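The namespace-level limits re-linked in the hunks above all build on the same ResourceQuota API. A minimal hedged sketch, combining an object-count limit with aggregate compute limits (the name, namespace, and numbers are illustrative):

```shell
# Sketch of a ResourceQuota: caps the number of Pods and the total CPU and
# memory that workloads in one namespace may request or be limited to.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: quota-mem-cpu-example
spec:
  hard:
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
EOF
```

Once the quota exists, the API server rejects new Pods in that namespace whose aggregate requests or limits would exceed the listed totals.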
### Use Transport Layer Security (TLS) for all API traffic Kubernetes expects that all API communication in the cluster is encrypted by default with TLS, and the -majority of installation methods will allow the necessary certificates to be created and distributed to -the cluster components. Note that some components and installation methods may enable local ports over -HTTP and administrators should familiarize themselves with the settings of each component to identify +majority of installation methods will allow the necessary certificates to be created and distributed to +the cluster components. Note that some components and installation methods may enable local ports over +HTTP and administrators should familiarize themselves with the settings of each component to identify potentially unsecured traffic. ### API Authentication -Choose an authentication mechanism for the API servers to use that matches the common access patterns -when you install a cluster. For instance, small single user clusters may wish to use a simple certificate +Choose an authentication mechanism for the API servers to use that matches the common access patterns +when you install a cluster. For instance, small single user clusters may wish to use a simple certificate or static Bearer token approach. Larger clusters may wish to integrate an existing OIDC or LDAP server that -allow users to be subdivided into groups. +allow users to be subdivided into groups. All API clients must be authenticated, even those that are part of the infrastructure like nodes, proxies, the scheduler, and volume plugins. These clients are typically [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/) or use x509 client certificates, and they are created automatically at cluster startup or are setup as part of the cluster installation. @@ -63,10 +63,10 @@ As with authentication, simple and broad roles may be appropriate for smaller cl more users interact with the cluster, it may become necessary to separate teams into separate namespaces with more limited roles. -With authorization, it is important to understand how updates on one object may cause actions in -other places. For instance, a user may not be able to create pods directly, but allowing them to -create a deployment, which creates pods on their behalf, will let them create those pods -indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node +With authorization, it is important to understand how updates on one object may cause actions in +other places. For instance, a user may not be able to create pods directly, but allowing them to +create a deployment, which creates pods on their behalf, will let them create those pods +indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node being terminated and recreated on other nodes. The out of the box roles represent a balance between flexibility and the common use cases, but more limited roles should be carefully reviewed to prevent accidental escalation. You can make roles specific to your use case if the out-of-box ones don't meet your needs. @@ -84,7 +84,7 @@ Consult the [Kubelet authentication/authorization reference](/docs/admin/kubelet ## Controlling the capabilities of a workload or user at runtime Authorization in Kubernetes is intentionally high level, focused on coarse actions on resources. 
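To ground the point above about giving teams limited, namespace-scoped roles, here is a hedged sketch; the team name, namespace, and verb list are assumptions rather than recommendations from this page:

```shell
# Sketch: a namespace-scoped Role plus a RoleBinding so that members of the
# "web-team" group can manage Deployments (and view Pods) only in "team-web".
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: web-team-edit
  namespace: team-web
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-team-edit-binding
  namespace: team-web
subjects:
- kind: Group
  name: web-team            # assumes this group is provided by your authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: web-team-edit
  apiGroup: rbac.authorization.k8s.io
EOF
```

Whether the `web-team` group exists at all depends on the authentication mechanism chosen earlier, which is exactly the coupling between authentication and authorization described above.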
-More powerful controls exist as **policies** to limit by use case how those objects act on the +More powerful controls exist as **policies** to limit by use case how those objects act on the cluster, themselves, and other resources. ### Limiting resource usage on a cluster @@ -92,9 +92,9 @@ cluster, themselves, and other resources. [Resource quota](/docs/concepts/policy/resource-quotas/) limits the number or capacity of resources granted to a namespace. This is most often used to limit the amount of CPU, memory, or persistent disk a namespace can allocate, but can also control how many pods, services, or -volumes exist in each namespace. +volumes exist in each namespace. -[Limit ranges](/docs/tasks/administer-cluster/memory-default-namespace/) restrict the maximum or minimum size of some of the +[Limit ranges](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) restrict the maximum or minimum size of some of the resources above, to prevent users from requesting unreasonably high or low values for commonly reserved resources like memory, or to provide default limits when none are specified. @@ -104,14 +104,14 @@ reserved resources like memory, or to provide default limits when none are speci A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context/) that allows it to request access to running as a specific Linux user on a node (like root), access to run privileged or access the host network, and other controls that would otherwise -allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy/) +allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy/) can limit which users or service accounts can provide dangerous security context settings. For example, pod security policies can limit volume mounts, especially `hostPath`, which are aspects of a pod that should be controlled. -Generally, most application workloads need limited access to host resources so they can -successfully run as a root process (uid 0) without access to host information. However, -considering the privileges associated with the root user, you should write application -containers to run as a non-root user. Similarly, administrators who wish to prevent -client applications from escaping their containers should use a restrictive pod security +Generally, most application workloads need limited access to host resources so they can +successfully run as a root process (uid 0) without access to host information. However, +considering the privileges associated with the root user, you should write application +containers to run as a non-root user. Similarly, administrators who wish to prevent +client applications from escaping their containers should use a restrictive pod security policy. @@ -147,8 +147,8 @@ kernel on behalf of some more-privileged process.) ### Restricting network access -The [network policies](/docs/tasks/administer-cluster/declare-network-policy/) for a namespace -allows application authors to restrict which pods in other namespaces may access pods and ports +The [network policies](/docs/tasks/administer-cluster/declare-network-policy/) for a namespace +allows application authors to restrict which pods in other namespaces may access pods and ports within their namespaces. Many of the supported [Kubernetes networking providers](/docs/concepts/cluster-administration/networking/) now respect network policy. 
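As a hedged sketch of such a policy (the namespace, labels, and port are illustrative), the following admits traffic to the selected pods only from peers in the same namespace that carry an allowed label:

```shell
# Sketch of a NetworkPolicy: only pods labelled access=allowed may reach the
# pods labelled app=db, and only on TCP port 5432.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-selected
  namespace: team-web
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: allowed
    ports:
    - protocol: TCP
      port: 5432
EOF
```

Pods that are not selected by any policy remain open by default, so a deny-by-default posture usually starts with a policy whose `podSelector` matches all pods in the namespace.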
@@ -157,7 +157,7 @@ load balanced services, which on many clusters can control whether those users a are visible outside of the cluster. Additional protections may be available that control network rules on a per plugin or per -environment basis, such as per-node firewalls, physically separating cluster nodes to +environment basis, such as per-node firewalls, physically separating cluster nodes to prevent cross talk, or advanced networking policy. ### Restricting cloud metadata API access @@ -173,14 +173,14 @@ to the metadata API, and avoid using provisioning data to deliver secrets. ### Controlling which nodes pods may access -By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a +By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a [rich set of policies for controlling placement of pods onto nodes](/docs/concepts/scheduling-eviction/assign-pod-node/) and the [taint based pod placement and eviction](/docs/concepts/scheduling-eviction/taint-and-toleration/) that are available to end users. For many clusters use of these policies to separate workloads can be a convention that authors adopt or enforce via tooling. -As an administrator, a beta admission plugin `PodNodeSelector` can be used to force pods -within a namespace to default or require a specific node selector, and if end users cannot +As an administrator, a beta admission plugin `PodNodeSelector` can be used to force pods +within a namespace to default or require a specific node selector, and if end users cannot alter namespaces, this can strongly limit the placement of all of the pods in a specific workload. @@ -194,7 +194,7 @@ Write access to the etcd backend for the API is equivalent to gaining root on th and read access can be used to escalate fairly quickly. Administrators should always use strong credentials from the API servers to their etcd server, such as mutual auth via TLS client certificates, and it is often recommended to isolate the etcd servers behind a firewall that only the API servers -may access. +may access. {{< caution >}} Allowing other components within the cluster to access the master etcd instance with @@ -206,7 +206,7 @@ access to a subset of the keyspace is strongly recommended. ### Enable audit logging The [audit logger](/docs/tasks/debug-application-cluster/audit/) is a beta feature that records actions taken by the -API for later analysis in the event of a compromise. It is recommended to enable audit logging +API for later analysis in the event of a compromise. It is recommended to enable audit logging and archive the audit file on a secure server. ### Restrict access to alpha or beta features @@ -229,8 +229,8 @@ rotate those tokens frequently. For example, once the bootstrap phase is complet Many third party integrations to Kubernetes may alter the security profile of your cluster. When enabling an integration, always review the permissions that an extension requests before granting it access. For example, many security integrations may request access to view all secrets on -your cluster which is effectively making that component a cluster admin. When in doubt, -restrict the integration to functioning in a single namespace if possible. +your cluster which is effectively making that component a cluster admin. When in doubt, +restrict the integration to functioning in a single namespace if possible. 
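Returning to the audit logging recommendation above, a minimal hedged policy file might look like the following; the file paths and flag values are assumptions, and the flags are set on the kube-apiserver:

```shell
# Sketch of an audit policy that records request metadata (user, verb,
# resource, timestamp) for every request, without logging request bodies.
cat <<EOF > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF

# Illustrative kube-apiserver flags that enable the log file backend:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
```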
Components that create pods may also be unexpectedly powerful if they can do so inside namespaces like the `kube-system` namespace, because those pods can gain access to service account secrets @@ -251,7 +251,7 @@ are not encrypted or an attacker gains read access to etcd. ### Receiving alerts for security updates and reporting vulnerabilities -Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) +Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security announcements. See the [security reporting](/security/) page for more on how to report vulnerabilities. diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 5e79704cc4..3afda46609 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -254,17 +254,17 @@ kubectl delete namespace cpu-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md index 394f435d12..79bc2b86b6 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -43,7 +43,7 @@ If the resource metrics API is available, the output includes a reference to `metrics.k8s.io`. 
```shell -NAME +NAME v1beta1.metrics.k8s.io ``` @@ -344,17 +344,17 @@ kubectl delete namespace mem-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md index dec9e8db91..79c5260ead 100644 --- a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md @@ -250,17 +250,17 @@ kubectl delete namespace qos-example ### For cluster administrators -* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) +* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) -* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) +* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) -* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) +* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/) -* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) +* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) -* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) +* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/) -* [Configure a Pod Quota for a 
Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) +* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/) * [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/) diff --git a/content/en/docs/test.md b/content/en/docs/test.md index a08aeb3caa..071a873b37 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -1,7 +1,6 @@ --- title: Docs smoke test page main_menu: false -mermaid: true --- This page serves two purposes: @@ -297,7 +296,8 @@ tables, use HTML instead. ## Visualizations with Mermaid -Add `mermaid: true` to the [front matter](https://gohugo.io/content-management/front-matter/) of any page to enable [Mermaid JS](https://mermaidjs.github.io) visualizations. The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html) +You can use [Mermaid JS](https://mermaidjs.github.io) visualizations. +The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html) ``` {{</* mermaid */>}} diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index d04902ddfe..588010494c 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -1,7 +1,6 @@ --- title: Using Source IP content_type: tutorial -mermaid: true min-kubernetes-server-version: v1.5 --- diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md index 3bed3e059c..3c8e70b783 100644 --- a/content/en/docs/tutorials/stateful-application/zookeeper.md +++ b/content/en/docs/tutorials/stateful-application/zookeeper.md @@ -51,7 +51,7 @@ After this tutorial, you will know the following. - How to consistently configure the ensemble using ConfigMaps. - How to spread the deployment of ZooKeeper servers in the ensemble. - How to use PodDisruptionBudgets to ensure service availability during planned maintenance. - + <!-- lessoncontent --> @@ -91,7 +91,7 @@ kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml This creates the `zk-hs` Headless Service, the `zk-cs` Service, the `zk-pdb` PodDisruptionBudget, and the `zk` StatefulSet. -```shell +``` service/zk-hs created service/zk-cs created poddisruptionbudget.policy/zk-pdb created @@ -107,7 +107,7 @@ kubectl get pods -w -l app=zk Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl. -```shell +``` NAME READY STATUS RESTARTS AGE zk-0 0/1 Pending 0 0s zk-0 0/1 Pending 0 0s @@ -143,7 +143,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and `zk-2`. -```shell +``` zk-0 zk-1 zk-2 @@ -159,7 +159,7 @@ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeepe Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal. 
-```shell +``` myid zk-0 1 myid zk-1 @@ -177,7 +177,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done The `zk-hs` Service creates a domain for all of the Pods, `zk-hs.default.svc.cluster.local`. -```shell +``` zk-0.zk-hs.default.svc.cluster.local zk-1.zk-hs.default.svc.cluster.local zk-2.zk-hs.default.svc.cluster.local @@ -196,7 +196,7 @@ the file, the `1`, `2`, and `3` correspond to the identifiers in the ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in the `zk` StatefulSet. -```shell +``` clientPort=2181 dataDir=/var/lib/zookeeper/data dataLogDir=/var/lib/zookeeper/log @@ -219,7 +219,9 @@ Consensus protocols require that the identifiers of each participant be unique. ```shell kubectl get pods -w -l app=zk +``` +``` NAME READY STATUS RESTARTS AGE zk-0 0/1 Pending 0 0s zk-0 0/1 Pending 0 0s @@ -243,7 +245,7 @@ the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that endpoint will be the unique ZooKeeper server claiming the identity configured in its `myid` file. -```shell +``` zk-0.zk-hs.default.svc.cluster.local zk-1.zk-hs.default.svc.cluster.local zk-2.zk-hs.default.svc.cluster.local @@ -252,7 +254,7 @@ zk-2.zk-hs.default.svc.cluster.local This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files represents a correctly configured ensemble. -```shell +``` server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888 server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888 server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888 @@ -269,7 +271,8 @@ The command below executes the `zkCli.sh` script to write `world` to the path `/ ```shell kubectl exec zk-0 zkCli.sh create /hello world - +``` +``` WATCHER:: WatchedEvent state:SyncConnected type:None path:null @@ -285,7 +288,7 @@ kubectl exec zk-1 zkCli.sh get /hello The data that you created on `zk-0` is available on all the servers in the ensemble. -```shell +``` WATCHER:: WatchedEvent state:SyncConnected type:None path:null @@ -316,6 +319,9 @@ Use the [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#d ```shell kubectl delete statefulset zk +``` + +``` statefulset.apps "zk" deleted ``` @@ -327,7 +333,7 @@ kubectl get pods -w -l app=zk When `zk-0` if fully terminated, use `CTRL-C` to terminate kubectl. -```shell +``` zk-2 1/1 Terminating 0 9m zk-0 1/1 Terminating 0 11m zk-1 1/1 Terminating 0 10m @@ -358,7 +364,7 @@ kubectl get pods -w -l app=zk Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl. -```shell +``` NAME READY STATUS RESTARTS AGE zk-0 0/1 Pending 0 0s zk-0 0/1 Pending 0 0s @@ -386,7 +392,7 @@ kubectl exec zk-2 zkCli.sh get /hello Even though you terminated and recreated all of the Pods in the `zk` StatefulSet, the ensemble still serves the original value. -```shell +``` WATCHER:: WatchedEvent state:SyncConnected type:None path:null @@ -430,7 +436,7 @@ kubectl get pvc -l app=zk When the `StatefulSet` recreated its Pods, it remounts the Pods' PersistentVolumes. -```shell +``` NAME STATUS VOLUME CAPACITY ACCESSMODES AGE datadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h @@ -464,6 +470,8 @@ Get the `zk` StatefulSet. ```shell kubectl get sts zk -o yaml +``` +``` … command: - sh @@ -506,7 +514,7 @@ kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties The logging configuration below will cause the ZooKeeper process to write all of its logs to the standard output file stream. 
-```shell +``` zookeeper.root.logger=CONSOLE zookeeper.console.threshold=INFO log4j.rootLogger=${zookeeper.root.logger} @@ -526,7 +534,7 @@ kubectl logs zk-0 --tail 20 You can view application logs written to standard out or standard error using `kubectl logs` and from the Kubernetes Dashboard. -```shell +``` 2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740 2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client) 2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749 @@ -583,7 +591,7 @@ kubectl exec zk-0 -- ps -elf As the `runAsUser` field of the `securityContext` object is set to 1000, instead of running as root, the ZooKeeper process runs as the zookeeper user. -```shell +``` F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground 0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg @@ -599,7 +607,7 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data Because the `fsGroup` field of the `securityContext` object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data. -```shell +``` drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ``` @@ -621,7 +629,8 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv ```shell kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]' - +``` +``` statefulset.apps/zk patched ``` @@ -629,7 +638,8 @@ Use `kubectl rollout status` to watch the status of the update. ```shell kubectl rollout status sts/zk - +``` +``` waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664... Waiting for 1 pods to be ready... Waiting for 1 pods to be ready... @@ -648,7 +658,9 @@ Use the `kubectl rollout history` command to view a history or previous configur ```shell kubectl rollout history sts/zk +``` +``` statefulsets "zk" REVISION 1 @@ -659,7 +671,9 @@ Use the `kubectl rollout undo` command to roll back the modification. ```shell kubectl rollout undo sts/zk +``` +``` statefulset.apps/zk rolled back ``` @@ -680,7 +694,7 @@ kubectl exec zk-0 -- ps -ef The command used as the container's entry point has PID 1, and the ZooKeeper process, a child of the entry point, has PID 27. -```shell +``` UID PID PPID C STIME TTY TIME CMD zookeep+ 1 0 0 15:03 ? 
00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg @@ -700,7 +714,7 @@ kubectl exec zk-0 -- pkill java The termination of the ZooKeeper process caused its parent process to terminate. Because the `RestartPolicy` of the container is Always, it restarted the parent process. -```shell +``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 0 21m zk-1 1/1 Running 0 20m @@ -740,7 +754,7 @@ The Pod `template` for the `zk` `StatefulSet` specifies a liveness probe. The probe calls a bash script that uses the ZooKeeper `ruok` four letter word to test the server's health. -```bash +``` OK=$(echo ruok | nc 127.0.0.1 $1) if [ "$OK" == "imok" ]; then exit 0 @@ -767,7 +781,9 @@ the ensemble are restarted. ```shell kubectl get pod -w -l app=zk +``` +``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 0 1h zk-1 1/1 Running 0 1h @@ -832,7 +848,7 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; All of the Pods in the `zk` `StatefulSet` are deployed on different nodes. -```shell +``` kubernetes-node-cxpk kubernetes-node-a5aq kubernetes-node-2g2d @@ -854,7 +870,7 @@ This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` spec ``` The `requiredDuringSchedulingIgnoredDuringExecution` field tells the -Kubernetes Scheduler that it should never co-locate two Pods which have `app` label +Kubernetes Scheduler that it should never co-locate two Pods which have `app` label as `zk` in the domain defined by the `topologyKey`. The `topologyKey` `kubernetes.io/hostname` indicates that the domain is an individual node. Using different rules, labels, and selectors, you can extend this technique to spread @@ -891,7 +907,7 @@ kubectl get pdb zk-pdb The `max-unavailable` field indicates to Kubernetes that at most one Pod from `zk` `StatefulSet` can be unavailable at any time. -```shell +``` NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE zk-pdb N/A 1 1 ``` @@ -906,7 +922,9 @@ In another terminal, use this command to get the nodes that the Pods are current ```shell for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done +``` +``` kubernetes-node-pb41 kubernetes-node-ixsl kubernetes-node-i4c4 @@ -917,6 +935,9 @@ drain the node on which the `zk-0` Pod is scheduled. 
```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data +``` + +``` node "kubernetes-node-pb41" cordoned WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz @@ -927,7 +948,7 @@ node "kubernetes-node-pb41" drained As there are four nodes in your cluster, `kubectl drain`, succeeds and the `zk-0` is rescheduled to another node. -```shell +``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 2 1h zk-1 1/1 Running 0 1h @@ -949,7 +970,9 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node ```shell kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned +``` +``` WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74 pod "zk-1" deleted node "kubernetes-node-ixsl" drained @@ -959,7 +982,9 @@ The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `Po ```shell kubectl get pods -w -l app=zk +``` +``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 2 1h zk-1 1/1 Running 0 1h @@ -987,6 +1012,8 @@ Continue to watch the Pods of the stateful set, and drain the node on which ```shell kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data +``` +``` node "kubernetes-node-i4c4" cordoned WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog @@ -1007,7 +1034,7 @@ kubectl exec zk-0 zkCli.sh get /hello The service is still available because its `PodDisruptionBudget` is respected. -```shell +``` WatchedEvent state:SyncConnected type:None path:null world cZxid = 0x200000002 @@ -1027,7 +1054,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc ```shell kubectl uncordon kubernetes-node-pb41 - +``` +``` node "kubernetes-node-pb41" uncordoned ``` @@ -1035,7 +1063,8 @@ node "kubernetes-node-pb41" uncordoned ```shell kubectl get pods -w -l app=zk - +``` +``` NAME READY STATUS RESTARTS AGE zk-0 1/1 Running 2 1h zk-1 1/1 Running 0 1h @@ -1102,6 +1131,3 @@ You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure used in this tutorial. Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed. - - - diff --git a/content/es/docs/concepts/overview/working-with-objects/namespaces.md b/content/es/docs/concepts/overview/working-with-objects/namespaces.md index b3c3c73e14..8cd9133f9a 100644 --- a/content/es/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/es/docs/concepts/overview/working-with-objects/namespaces.md @@ -99,7 +99,7 @@ debes utilizar el nombre cualificado completo de dominio (FQDN). La mayoría de los recursos de Kubernetes (ej. pods, services, replication controllers, y otros) están en algunos espacios de nombres. Sin embargo, los recursos que representan a los propios espacios de nombres no están a su vez en espacios de nombres. 
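Returning to the ZooKeeper tutorial above: the `zk-pdb` PodDisruptionBudget that keeps `kubectl drain` from taking more than one ensemble member down at a time looks roughly like the following sketch (the manifest shipped with the tutorial is authoritative):

```shell
# Sketch of a PodDisruptionBudget like zk-pdb: at most one Pod with the
# label app=zk may be unavailable during a voluntary disruption.
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: zk
EOF
```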
-De forma similar, los recursos de bajo nivel, como los nodos [nodos](/docs/admin/node) y +De forma similar, los recursos de bajo nivel, como los [nodos](/docs/admin/node) y los volúmenes persistentes, no están en ningún espacio de nombres. Para comprobar qué recursos de Kubernetes están y no están en un espacio de nombres: diff --git a/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md index 134a6fb3a0..8d1df672a4 100644 --- a/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/fr/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -40,7 +40,7 @@ A noter: Toutes les distributions ne sont pas activement maintenues. Choisissez * La rubrique [Certificats](/docs/concepts/cluster-administration/certificates/) décrit les étapes à suivre pour générer des certificats à l’aide de différentes suites d'outils. -* L' [Environnement de conteneur dans Kubernetes](/docs/concepts/containers/container-environment-variables/) décrit l'environnement des conteneurs gérés par la Kubelet sur un nœud Kubernetes. +* L' [Environnement de conteneur dans Kubernetes](/docs/concepts/containers/container-environment/) décrit l'environnement des conteneurs gérés par Kubelet sur un nœud Kubernetes. * Le [Contrôle de l'accès à l'API Kubernetes](/docs/reference/access-authn-authz/controlling-access/) explique comment configurer les autorisations pour les utilisateurs et les comptes de service. @@ -64,4 +64,3 @@ A noter: Toutes les distributions ne sont pas activement maintenues. Choisissez * [Integration DNS](/docs/concepts/services-networking/dns-pod-service/) décrit comment résoudre un nom DNS directement vers un service Kubernetes. * [Journalisation des évènements et surveillance de l'activité du cluster](/docs/concepts/cluster-administration/logging/) explique le fonctionnement de la journalisation des évènements dans Kubernetes et son implémentation. - diff --git a/content/fr/docs/concepts/containers/container-environment-variables.md b/content/fr/docs/concepts/containers/container-environment.md similarity index 95% rename from content/fr/docs/concepts/containers/container-environment-variables.md rename to content/fr/docs/concepts/containers/container-environment.md index 547809ffbf..adad1ab64a 100644 --- a/content/fr/docs/concepts/containers/container-environment-variables.md +++ b/content/fr/docs/concepts/containers/container-environment.md @@ -1,6 +1,6 @@ --- -title: Les variables d’environnement du conteneur -description: Variables d'environnement pour conteneur Kubernetes +title: L'environnement du conteneur +description: L'environnement du conteneur Kubernetes content_type: concept weight: 20 --- diff --git a/content/id/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/id/docs/tasks/administer-cluster/configure-upgrade-etcd.md new file mode 100644 index 0000000000..f0a9d789b2 --- /dev/null +++ b/content/id/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -0,0 +1,234 @@ +--- +title: Mengoperasikan klaster etcd untuk Kubernetes +content_type: task +--- + +<!-- overview --> + +{{< glossary_definition term_id="etcd" length="all" prepend="etcd adalah ">}} + + + + +## {{% heading "prerequisites" %}} + + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + + + +<!-- steps --> + +## Prerequisites + +* Jalankan etcd sebagai klaster dimana anggotanya berjumlah ganjil. 
+ +* Etcd adalah sistem terdistribusi berbasis _leader_. Pastikan _leader_ secara berkala mengirimkan _heartbeat_ dengan tepat waktu ke semua pengikutnya untuk menjaga kestabilan klaster. + +* Pastikan tidak terjadi kekurangan sumber daya. + + Kinerja dan stabilitas dari klaster sensitif terhadap jaringan dan _IO disk_. Kekurangan sumber daya apa pun dapat menyebabkan _timeout_ dari _heartbeat_, yang menyebabkan ketidakstabilan klaster. Etcd yang tidak stabil mengindikasikan bahwa tidak ada _leader_ yang terpilih. Dalam keadaan seperti itu, sebuah klaster tidak dapat membuat perubahan apa pun ke kondisi saat ini, yang menyebabkan tidak ada Pod baru yang dapat dijadwalkan. + +* Menjaga kestabilan klaster etcd sangat penting untuk stabilitas klaster Kubernetes. Karenanya, jalankan klaster etcd pada mesin khusus atau lingkungan terisolasi untuk [persyaratan sumber daya terjamin](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations). + +* Versi minimum yang disarankan untuk etcd yang dijalankan dalam lingkungan produksi adalah `3.2.10+`. + +## Persyaratan sumber daya + +Mengoperasikan etcd dengan sumber daya terbatas hanya cocok untuk tujuan pengujian. Untuk peluncuran dalam lingkungan produksi, diperlukan konfigurasi perangkat keras lanjutan. Sebelum meluncurkan etcd dalam produksi, lihat [dokumentasi referensi persyaratan sumber daya](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations). + +## Memulai Klaster etcd + +Bagian ini mencakup bagaimana memulai klaster etcd dalam Node tunggal dan Node multipel. + +### Klaster etcd dalam Node tunggal + +Gunakan Klaster etcd Node tunggal hanya untuk tujuan pengujian + +1. Jalankan perintah berikut ini: + + ```sh + ./etcd --listen-client-urls=http://$PRIVATE_IP:2379 --advertise-client-urls=http://$PRIVATE_IP:2379 + ``` + +2. Start server API Kubernetes dengan _flag_ `--etcd-servers=$PRIVATE_IP:2379`. + + Ganti `PRIVATE_IP` dengan IP klien etcd kamu. + +### Klaster etcd dengan Node multipel + +Untuk daya tahan dan ketersediaan tinggi, jalankan etcd sebagai klaster dengan Node multipel dalam lingkungan produksi dan cadangkan secara berkala. Sebuah klaster dengan lima anggota direkomendasikan dalam lingkungan produksi. Untuk informasi lebih lanjut, lihat [Dokumentasi FAQ](https://github.com/coreos/etcd/blob/master/Documentation/faq.md#what-is-failure-tolerance). + +Mengkonfigurasi klaster etcd baik dengan informasi anggota statis atau dengan penemuan dinamis. Untuk informasi lebih lanjut tentang pengklasteran, lihat [Dokumentasi pengklasteran etcd](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md). + +Sebagai contoh, tinjau sebuah klaster etcd dengan lima anggota yang berjalan dengan URL klien berikut: `http://$IP1:2379`, `http://$IP2:2379`, `http://$IP3:2379`, `http://$IP4:2379`, dan `http://$IP5:2379`. Untuk memulai server API Kubernetes: + +1. Jalankan perintah berikut ini: + + ```sh + ./etcd --listen-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379 --advertise-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379 + ``` + +2. Start server Kubernetes API dengan flag `--etcd-servers=$IP1:2379, $IP2:2379, $IP3:2379, $IP4:2379, $IP5:2379`. + + Ganti `IP` dengan alamat IP klien kamu. 
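The static bootstrapping mentioned above (as opposed to dynamic discovery) is driven by etcd's `--initial-cluster` family of flags. A hedged sketch for one member of a three-node cluster; the member names, cluster token, and `$IP1`–`$IP3` placeholders are illustrative:

```shell
# Sketch: start ONE member (infra0) of a static three-member etcd cluster.
# Run the equivalent command on each machine with its own --name and URLs.
./etcd --name infra0 \
  --listen-peer-urls http://$IP1:2380 \
  --initial-advertise-peer-urls http://$IP1:2380 \
  --listen-client-urls http://$IP1:2379 \
  --advertise-client-urls http://$IP1:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster "infra0=http://$IP1:2380,infra1=http://$IP2:2380,infra2=http://$IP3:2380" \
  --initial-cluster-state new
```

Once all members are up, point the Kubernetes API server at the three client URLs through its `--etcd-servers` flag, as described in the steps above.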
+ +### Klaster etcd dengan Node multipel dengan load balancer + +Untuk menjalankan penyeimbangan beban (_load balancing_) untuk klaster etcd: + +1. Siapkan sebuah klaster etcd. +2. Konfigurasikan sebuah _load balancer_ di depan klaster etcd. + Sebagai contoh, anggap saja alamat _load balancer_ adalah `$LB`. +3. Mulai Server API Kubernetes dengan _flag_ `--etcd-servers=$LB:2379`. + +## Mengamankan klaster etcd + +Akses ke etcd setara dengan izin root pada klaster sehingga idealnya hanya server API yang memiliki akses ke etcd. Dengan pertimbangan sensitivitas data, disarankan untuk memberikan izin hanya kepada Node-Node yang membutuhkan akses ke klaster etcd. + +Untuk mengamankan etcd, tetapkan aturan _firewall_ atau gunakan fitur keamanan yang disediakan oleh etcd. Fitur keamanan etcd tergantung pada Infrastruktur Kunci Publik / _Public Key Infrastructure_ (PKI) x509. Untuk memulai, buat saluran komunikasi yang aman dengan menghasilkan pasangan kunci dan sertifikat. Sebagai contoh, gunakan pasangan kunci `peer.key` dan `peer.cert` untuk mengamankan komunikasi antara anggota etcd, dan `client.key` dan `client.cert` untuk mengamankan komunikasi antara etcd dan kliennya. Lihat [contoh skrip](https://github.com/coreos/etcd/tree/master/hack/tls-setup) yang disediakan oleh proyek etcd untuk menghasilkan pasangan kunci dan berkas CA untuk otentikasi klien. + +### Mengamankan komunikasi + +Untuk mengonfigurasi etcd dengan _secure peer communication_, tentukan _flag_ `--peer-key-file=peer.key` dan `--peer-cert-file=peer.cert`, dan gunakan https sebagai skema URL. + +Demikian pula, untuk mengonfigurasi etcd dengan _secure client communication_, tentukan _flag_ `--key-file=k8sclient.key` dan `--cert-file=k8sclient.cert`, dan gunakan https sebagai skema URL. + +### Membatasi akses klaster etcd + +Setelah konfigurasi komunikasi aman, batasi akses klaster etcd hanya ke server API Kubernetes. Gunakan otentikasi TLS untuk melakukannya. + +Sebagai contoh, anggap pasangan kunci `k8sclient.key` dan `k8sclient.cert` dipercaya oleh CA `etcd.ca`. Ketika etcd dikonfigurasi dengan `--client-cert-auth` bersama dengan TLS, etcd memverifikasi sertifikat dari klien dengan menggunakan CA dari sistem atau CA yang dilewati oleh _flag_ `--trusted-ca-file`. Menentukan _flag_ `--client-cert-auth=true` dan `--trusted-ca-file=etcd.ca` akan membatasi akses kepada klien yang mempunyai sertifikat `k8sclient.cert`. + +Setelah etcd dikonfigurasi dengan benar, hanya klien dengan sertifikat yang valid dapat mengaksesnya. Untuk memberikan akses kepada server Kubernetes API, konfigurasikan dengan _flag_ `--etcd-certfile=k8sclient.cert`,`--etcd-keyfile=k8sclient.key` dan `--etcd-cafile=ca.cert`. + +{{< note >}} +Otentikasi etcd saat ini tidak didukung oleh Kubernetes. Untuk informasi lebih lanjut, lihat masalah terkait [Mendukung Auth Dasar untuk Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398). +{{< /note >}} + +## Mengganti anggota etcd yang gagal + +Etcd klaster mencapai ketersediaan tinggi dengan mentolerir kegagalan dari sebagian kecil anggota. Namun, untuk meningkatkan kesehatan keseluruhan dari klaster, segera ganti anggota yang gagal. Ketika banyak anggota yang gagal, gantilah satu per satu. Mengganti anggota yang gagal melibatkan dua langkah: menghapus anggota yang gagal dan menambahkan anggota baru. + +Meskipun etcd menyimpan ID anggota unik secara internal, disarankan untuk menggunakan nama unik untuk setiap anggota untuk menghindari kesalahan manusia. Sebagai contoh, sebuah klaster etcd dengan tiga anggota. 
Misalkan URL-nya adalah member1=http://10.0.0.1, member2=http://10.0.0.2, dan member3=http://10.0.0.3. Ketika member1 gagal, ganti dengan member4=http://10.0.0.4.
+
+1. Dapatkan ID anggota yang gagal dari member1:
+
+    `etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list`
+
+    Akan tampil pesan berikut:
+
+        8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
+        91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
+        fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
+
+2. Hapus anggota yang gagal:
+
+    `etcdctl member remove 8211f1d0f64f3269`
+
+    Akan tampil pesan berikut:
+
+    Removed member 8211f1d0f64f3269 from cluster
+
+3. Tambahkan anggota baru:
+
+    `./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380`
+
+    Akan tampil pesan berikut:
+
+    Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
+
+4. Jalankan anggota yang baru ditambahkan pada mesin dengan IP `10.0.0.4`:
+
+    export ETCD_NAME="member4"
+    export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
+    export ETCD_INITIAL_CLUSTER_STATE=existing
+    etcd [flags]
+
+5. Lakukan salah satu dari yang berikut:
+
+    1. Perbarui _flag_ `--etcd-servers` untuk membuat Kubernetes mengetahui perubahan konfigurasi, lalu start ulang server API Kubernetes.
+    2. Perbarui konfigurasi _load balancer_ jika _load balancer_ digunakan dalam Deployment.
+
+Untuk informasi lebih lanjut tentang konfigurasi ulang klaster, lihat [Dokumentasi Konfigurasi etcd](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member).
+
+## Mencadangkan klaster etcd
+
+Semua objek Kubernetes disimpan dalam etcd. Mencadangkan data klaster etcd secara berkala penting untuk memulihkan klaster Kubernetes dalam skenario bencana, seperti kehilangan semua Node _control plane_. Berkas _snapshot_ berisi semua status Kubernetes dan informasi penting. Untuk menjaga keamanan data Kubernetes yang sensitif, enkripsi berkas _snapshot_.
+
+Mencadangkan klaster etcd dapat dilakukan dengan dua cara: _snapshot_ etcd bawaan dan _snapshot_ volume.
+
+### Snapshot bawaan
+
+Fitur _snapshot_ didukung oleh etcd secara bawaan, jadi mencadangkan klaster etcd lebih mudah. _Snapshot_ dapat diambil langsung dari anggota yang sedang berjalan dengan perintah `etcdctl snapshot save`, atau dengan menyalin berkas `member/snap/db` dari [direktori data](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) etcd yang saat ini tidak digunakan oleh proses etcd. Mengambil _snapshot_ biasanya tidak akan mempengaruhi kinerja anggota.
+
+Di bawah ini adalah contoh untuk mengambil _snapshot_ dari _keyspace_ yang dilayani oleh `$ENDPOINT` ke berkas `snapshotdb`:
+
+```sh
+ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
+# keluar 0
+
+# memverifikasi hasil snapshot
+ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
++----------+----------+------------+------------+
+|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
++----------+----------+------------+------------+
+| fe01cf57 |       10 |          7 | 2.1 MB     |
++----------+----------+------------+------------+
+```
+
+### Snapshot volume
+
+Jika etcd berjalan pada volume penyimpanan yang mendukung pencadangan, seperti Amazon Elastic Block Store, cadangkan data etcd dengan mengambil _snapshot_ dari volume penyimpanan tersebut.
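+
+Sebagai gambaran saja, jika volume tersebut berada di AWS, pengambilan _snapshot_ dapat dilakukan dengan AWS CLI. ID volume `vol-0abc123` di bawah ini hanyalah contoh dan harus diganti dengan ID volume EBS yang benar-benar digunakan oleh etcd:
+
+```sh
+# membuat snapshot dari volume EBS yang menyimpan data etcd
+aws ec2 create-snapshot --volume-id vol-0abc123 --description "cadangan data etcd"
+```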
+ +## Memperbesar skala dari klaster etcd + +Peningkatan skala klaster etcd meningkatkan ketersediaan dengan menukarnya untuk kinerja. Penyekalaan tidak akan meningkatkan kinerja atau kemampuan klaster. Aturan umum adalah untuk tidak melakukan penyekalaan naik atau turun untuk klaster etcd. Jangan mengonfigurasi grup penyekalaan otomatis untuk klaster etcd. Sangat disarankan untuk selalu menjalankan klaster etcd statis dengan lima anggota untuk klaster produksi Kubernetes untuk setiap skala yang didukung secara resmi. + +Penyekalaan yang wajar adalah untuk meningkatkan klaster dengan tiga anggota menjadi dengan lima anggota, ketika dibutuhkan lebih banyak keandalan. Lihat [Dokumentasi Rekonfigurasi etcd](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member) untuk informasi tentang cara menambahkan anggota ke klaster yang ada. + +## Memulihkan klaster etcd + +Etcd mendukung pemulihan dari _snapshot_ yang diambil dari proses etcd dari versi [major.minor](http://semver.org/). Memulihkan versi dari versi patch lain dari etcd juga didukung. Operasi pemulihan digunakan untuk memulihkan data klaster yang gagal. + +Sebelum memulai operasi pemulihan, berkas _snapshot_ harus ada. Ini bisa berupa berkas _snapshot_ dari operasi pencadangan sebelumnya, atau dari sisa [direktori data](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). Untuk informasi dan contoh lebih lanjut tentang memulihkan klaster dari berkas _snapshot_, lihat [dokumentasi pemulihan bencana etcd](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster). + +Jika akses URL dari klaster yang dipulihkan berubah dari klaster sebelumnya, maka server API Kubernetes harus dikonfigurasi ulang sesuai dengan URL tersebut. Pada kasus ini, start kembali server API Kubernetes dengan _flag_ `--etcd-servers=$NEW_ETCD_CLUSTER` bukan _flag_ `--etcd-servers=$OLD_ETCD_CLUSTER`. Ganti `$NEW_ETCD_CLUSTER` dan `$OLD_ETCD_CLUSTER` dengan alamat IP masing-masing. Jika _load balancer_ digunakan di depan klaster etcd, kamu mungkin hanya perlu memperbarui _load balancer_ sebagai gantinya. + +Jika mayoritas anggota etcd telah gagal secara permanen, klaster etcd dianggap gagal. Dalam skenario ini, Kubernetes tidak dapat membuat perubahan apa pun ke kondisi saat ini. Meskipun Pod terjadwal mungkin terus berjalan, tidak ada Pod baru yang bisa dijadwalkan. Dalam kasus seperti itu, pulihkan klaster etcd dan kemungkinan juga untuk mengonfigurasi ulang server API Kubernetes untuk memperbaiki masalah ini. + +## Memutakhirkan dan memutar balikan klaster etcd + +Pada Kubernetes v1.13.0, etcd2 tidak lagi didukung sebagai _backend_ penyimpanan untuk klaster Kubernetes baru atau yang sudah ada. 
_Timeline_ untuk dukungan Kubernetes untuk etcd2 dan etcd3 adalah sebagai berikut: + +- Kubernetes v1.0: hanya etcd2 +- Kubernetes v1.5.1: dukungan etcd3 ditambahkan, standar klaster baru yang dibuat masih ke etcd2 +- Kubernetes v1.6.0: standar klaster baru yang dibuat dengan `kube-up.sh` adalah etcd3, + dan `kube-apiserver` standarnya ke etcd3 +- Kubernetes v1.9.0: pengumuman penghentian _backend_ penyimpanan etcd2 diumumkan +- Kubernetes v1.13.0: _backend_ penyimpanan etcd2 dihapus, `kube-apiserver` akan + menolak untuk start dengan `--storage-backend=etcd2`, dengan pesan + `etcd2 is no longer a supported storage backend` + +Sebelum memutakhirkan v1.12.x kube-apiserver menggunakan `--storage-backend=etcd2` ke +v1.13.x, data etcd v2 harus dimigrasikan ke _backend_ penyimpanan v3 dan +permintaan kube-apiserver harus diubah untuk menggunakan `--storage-backend=etcd3`. + +Proses untuk bermigrasi dari etcd2 ke etcd3 sangat tergantung pada bagaimana +klaster etcd diluncurkan dan dikonfigurasi, serta bagaimana klaster Kubernetes diluncurkan dan dikonfigurasi. Kami menyarankan kamu berkonsultasi dengan dokumentasi penyedia kluster kamu untuk melihat apakah ada solusi yang telah ditentukan. + +Jika klaster kamu dibuat melalui `kube-up.sh` dan masih menggunakan etcd2 sebagai penyimpanan _backend_, silakan baca [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters) + +## Masalah umum: penyeimbang klien etcd dengan _secure endpoint_ + +Klien etcd v3, dirilis pada etcd v3.3.13 atau sebelumnya, memiliki [_critical bug_](https://github.com/kubernetes/kubernetes/issues/72102) yang mempengaruhi kube-apiserver dan penyebaran HA. Pemindahan kegagalan (_failover_) penyeimbang klien etcd tidak bekerja dengan baik dengan _secure endpoint_. Sebagai hasilnya, server etcd boleh gagal atau terputus sesaat dari kube-apiserver. Hal ini mempengaruhi peluncuran HA dari kube-apiserver. + +Perbaikan dibuat di [etcd v3.4](https://github.com/etcd-io/etcd/pull/10911) (dan di-backport ke v3.3.14 atau yang lebih baru): klien baru sekarang membuat bundel kredensial sendiri untuk menetapkan target otoritas dengan benar dalam fungsi dial. + +Karena perbaikan tersebut memerlukan pemutakhiran dependensi dari gRPC (ke v1.23.0), _downstream_ Kubernetes [tidak mendukung upgrade etcd](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978), yang berarti [perbaikan etcd di kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab) hanya tersedia mulai Kubernetes 1.16. + +Untuk segera memperbaiki celah keamanan (_bug_) ini untuk Kubernetes 1.15 atau sebelumnya, buat kube-apiserver khusus. kamu dapat membuat perubahan lokal ke [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135) dengan [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab). + +Lihat ["kube-apiserver 1.13.x menolak untuk bekerja ketika server etcd pertama tidak tersedia"](https://github.com/kubernetes/kubernetes/issues/72102). 
+ + diff --git a/content/id/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/id/docs/tasks/administer-cluster/dns-custom-nameservers.md new file mode 100644 index 0000000000..7028405b82 --- /dev/null +++ b/content/id/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -0,0 +1,262 @@ +--- +title: Kustomisasi Service DNS +content_type: task +min-kubernetes-server-version: v1.12 +--- + +<!-- overview --> +Laman ini menjelaskan cara mengonfigurasi DNS +{{< glossary_tooltip text="Pod" term_id="pod" >}} kamu dan menyesuaikan +proses resolusi DNS pada klaster kamu. + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + +Klaster kamu harus menjalankan tambahan (_add-on_) CoreDNS terlebih dahulu. +[Migrasi ke CoreDNS](/docs/tasks/administer-cluster/coredns/#migrasi-ke-coredns) +menjelaskan tentang bagaimana menggunakan `kubeadm` untuk melakukan migrasi dari `kube-dns`. + +{{% version-check %}} + +<!-- steps --> + +## Pengenalan + +DNS adalah Service bawaan dalam Kubernetes yang diluncurkan secara otomatis +melalui _addon manager_ +[add-on klaster](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/README.md). + +Sejak Kubernetes v1.12, CoreDNS adalah server DNS yang direkomendasikan untuk menggantikan kube-dns. Jika klaster kamu +sebelumnya menggunakan kube-dns, maka kamu mungkin masih menggunakan `kube-dns` daripada CoreDNS. + +{{< note >}} +Baik Service CoreDNS dan kube-dns diberi nama `kube-dns` pada _field_ `metadata.name`. +Hal ini agar ada interoperabilitas yang lebih besar dengan beban kerja yang bergantung pada nama Service `kube-dns` lama untuk me-_resolve_ alamat internal ke dalam klaster. Dengan menggunakan sebuah Service yang bernama `kube-dns` mengabstraksi detail implementasi yang dijalankan oleh penyedia DNS di belakang nama umum tersebut. +{{< /note >}} + +Jika kamu menjalankan CoreDNS sebagai sebuah Deployment, maka biasanya akan ditampilkan sebagai sebuah Service Kubernetes dengan alamat IP yang statis. +Kubelet meneruskan informasi DNS _resolver_ ke setiap Container dengan argumen `--cluster-dns=<dns-service-ip>`. + +Nama DNS juga membutuhkan domain. Kamu dapat mengonfigurasi domain lokal di kubelet +dengan argumen `--cluster-domain=<default-local-domain>`. + +Server DNS mendukung _forward lookup_ (_record_ A dan AAAA), _port lookup_ (_record_ SRV), _reverse lookup_ alamat IP (_record_ PTR), +dan lain sebagainya. Untuk informasi lebih lanjut, lihatlah [DNS untuk Service dan Pod](/id/docs/concepts/services-networking/dns-pod-service/). + +Jika `dnsPolicy` dari Pod diatur menjadi `default`, itu berarti mewarisi konfigurasi resolusi nama +dari Node yang dijalankan Pod. Resolusi DNS pada Pod +harus berperilaku sama dengan Node tersebut. +Tapi lihat [Isu-isu yang telah diketahui](/docs/tasks/debug-application-cluster/dns-debugging-resolution/#known-issues). + +Jika kamu tidak menginginkan hal ini, atau jika kamu menginginkan konfigurasi DNS untuk Pod berbeda, kamu bisa +menggunakan argumen `--resolv-conf` pada kubelet. Atur argumen ini menjadi "" untuk mencegah Pod tidak +mewarisi konfigurasi DNS. Atur ke jalur (_path_) berkas yang tepat untuk berkas yang berbeda dengan +`/etc/resolv.conf` untuk menghindari mewarisi konfigurasi DNS. + +## CoreDNS + +CoreDNS adalah server DNS otoritatif untuk kegunaan secara umum yang dapat berfungsi sebagai Service DNS untuk klaster, yang sesuai dengan [spesifikasi dns](https://github.com/kubernetes/dns/blob/master/docs/specification.md). 
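+
+Untuk memeriksa instalasi CoreDNS pada klaster kamu, dengan asumsi instalasi standar yang menggunakan nama `coredns` pada Namespace `kube-system`, kamu dapat menjalankan perintah berikut:
+
+```shell
+# melihat Deployment dan Pod CoreDNS
+kubectl get deployment,pods -n kube-system -l k8s-app=kube-dns
+
+# melihat ConfigMap yang berisi Corefile (dibahas pada bagian berikutnya)
+kubectl get configmap coredns -n kube-system -o yaml
+```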
+ +### Opsi ConfigMap pada CoreDNS + +CoreDNS adalah server DNS yang modular dan mudah dipasang, dan setiap _plugin_ dapat menambahkan fungsionalitas baru ke CoreDNS. +Fitur ini dapat dikonfigurasikan dengan menjaga berkas [Corefile](https://coredns.io/2017/07/23/corefile-explained/), yang merupakan +berkas konfigurasi dari CoreDNS. Sebagai administrator klaster, kamu dapat memodifikasi +{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} untuk Corefile dari CoreDNS dengan mengubah cara perilaku pencarian Service DNS +pada klaster tersebut. + +Di Kubernetes, CoreDNS diinstal dengan menggunakan konfigurasi Corefile bawaan sebagai berikut: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: coredns + namespace: kube-system +data: + Corefile: | + .:53 { + errors + health { + lameduck 5s + } + ready + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + ttl 30 + } + prometheus :9153 + forward . /etc/resolv.conf + cache 30 + loop + reload + loadbalance + } +``` + +Konfigurasi Corefile meliputi [_plugin_](https://coredns.io/plugins/) berikut ini dari CoreDNS: + +* [errors](https://coredns.io/plugins/errors/): Kesalahan yang ditampilkan ke output standar (_stdout_) +* [health](https://coredns.io/plugins/health/): Kesehatan dari CoreDNS dilaporkan pada `http://localhost:8080/health`. Dalam sintaks yang diperluas `lameduck` akan menangani proses tidak sehat agar menunggu selama 5 detik sebelum proses tersebut dimatikan. +* [ready](https://coredns.io/plugins/ready/): _Endpoint_ HTTP pada port 8181 akan mengembalikan OK 200, ketika semua _plugin_ yang dapat memberi sinyal kesiapan, telah memberikan sinyalnya. +* [kubernetes](https://coredns.io/plugins/kubernetes/): CoreDNS akan menjawab pertanyaan (_query_) DNS berdasarkan IP Service dan Pod pada Kubernetes. Kamu dapat menemukan [lebih detail](https://coredns.io/plugins/kubernetes/) tentang _plugin_ itu dalam situs web CoreDNS. `ttl` memungkinkan kamu untuk mengatur TTL khusus untuk respon dari pertanyaan DNS. Standarnya adalah 5 detik. TTL minimum yang diizinkan adalah 0 detik, dan maksimum hanya dibatasi sampai 3600 detik. Mengatur TTL ke 0 akan mencegah _record_ untuk di simpan sementara dalam _cache_. + Opsi `pods insecure` disediakan untuk kompatibilitas dengan Service _kube-dns_ sebelumnya. Kamu dapat menggunakan opsi `pods verified`, yang mengembalikan _record_ A hanya jika ada Pod pada Namespace yang sama untuk alamat IP yang sesuai. Opsi `pods disabled` dapat digunakan jika kamu tidak menggunakan _record_ Pod. +* [prometheus](https://coredns.io/plugins/metrics/): Metrik dari CoreDNS tersedia pada `http://localhost:9153/metrics` dalam format yang sesuai dengan [Prometheus](https://prometheus.io/) (dikenal juga sebagai OpenMetrics). +* [forward](https://coredns.io/plugins/forward/): Setiap pertanyaan yang tidak berada dalam domain klaster Kubernetes akan diteruskan ke _resolver_ yang telah ditentukan dalam berkas (/etc/resolv.conf). +* [cache](https://coredns.io/plugins/cache/): Ini untuk mengaktifkan _frontend cache_. +* [loop](https://coredns.io/plugins/loop/): Mendeteksi _forwarding loop_ sederhana dan menghentikan proses CoreDNS jika _loop_ ditemukan. +* [reload](https://coredns.io/plugins/reload): Mengizinkan _reload_ otomatis Corefile yang telah diubah. Setelah kamu mengubah konfigurasi ConfigMap, beri waktu sekitar dua menit agar perubahan yang kamu lakukan berlaku. 
+* [loadbalance](https://coredns.io/plugins/loadbalance): Ini adalah _load balancer_ DNS secara _round-robin_ yang mengacak urutan _record_ A, AAAA, dan MX dalam setiap responnya. + +Kamu dapat memodifikasi perilaku CoreDNS bawaan dengan memodifikasi ConfigMap. + +### Konfigurasi _Stub-domain_ dan _Nameserver Upstream_ dengan menggunakan CoreDNS + +CoreDNS memiliki kemampuan untuk mengonfigurasi _stubdomain_ dan _nameserver upstream_ dengan menggunakan [_plugin_ forward](https://coredns.io/plugins/forward/). + +#### Contoh + +Jika operator klaster memiliki sebuah server domain [Consul](https://www.consul.io/) yang terletak di 10.150.0.1, dan semua nama Consul memiliki akhiran .consul.local. Untuk mengonfigurasinya di CoreDNS, administrator klaster membuat bait (_stanza_) berikut dalam ConfigMap CoreDNS. + +``` +consul.local:53 { + errors + cache 30 + forward . 10.150.0.1 + } +``` + +Untuk memaksa secara eksplisit semua pencarian DNS _non-cluster_ melalui _nameserver_ khusus pada 172.16.0.1, arahkan `forward` ke _nameserver_ bukan ke `/etc/resolv.conf` + +``` +forward . 172.16.0.1 +``` + +ConfigMap terakhir bersama dengan konfigurasi `Corefile` bawaan terlihat seperti berikut: + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: coredns + namespace: kube-system +data: + Corefile: | + .:53 { + errors + health + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + prometheus :9153 + forward . 172.16.0.1 + cache 30 + loop + reload + loadbalance + } + consul.local:53 { + errors + cache 30 + forward . 10.150.0.1 + } +``` + +Perangkat `kubeadm` mendukung terjemahan otomatis dari ConfigMap kube-dns +ke ConfigMap CoreDNS yang setara. + +{{< note >}} +Sementara ini kube-dns dapat menerima FQDN untuk _stubdomain_ dan _nameserver_ (mis: ns.foo.com), namun CoreDNS belum mendukung fitur ini. +Selama penerjemahan, semua _nameserver_ FQDN akan dihilangkan dari konfigurasi CoreDNS. +{{< /note >}} + +## Konfigurasi CoreDNS yang setara dengan kube-dns + +CoreDNS mendukung fitur kube-dns dan banyak lagi lainnya. +ConfigMap dibuat agar kube-dns mendukung `StubDomains` dan `upstreamNameservers` untuk diterjemahkan ke _plugin_ `forward` dalam CoreDNS. +Begitu pula dengan _plugin_ `Federations` dalam kube-dns melakukan translasi untuk _plugin_ `federation` dalam CoreDNS. + +### Contoh + +Contoh ConfigMap ini untuk kube-dns menentukan federasi, _stub domain_ dan server _upstream nameserver_: + +```yaml +apiVersion: v1 +data: + federations: | + {"foo" : "foo.feddomain.com"} + stubDomains: | + {"abc.com" : ["1.2.3.4"], "my.cluster.local" : ["2.3.4.5"]} + upstreamNameservers: | + ["8.8.8.8", "8.8.4.4"] +kind: ConfigMap +``` + +Untuk konfigurasi yang setara dengan CoreDNS buat Corefile berikut: + +* Untuk federasi: +``` +federation cluster.local { + foo foo.feddomain.com +} +``` + +* Untuk stubDomain: +```yaml +abc.com:53 { + errors + cache 30 + forward . 1.2.3.4 +} +my.cluster.local:53 { + errors + cache 30 + forward . 2.3.4.5 +} +``` + +Corefile lengkap dengan _plugin_ bawaan: + +``` +.:53 { + errors + health + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + federation cluster.local { + foo foo.feddomain.com + } + prometheus :9153 + forward . 8.8.8.8 8.8.4.4 + cache 30 +} +abc.com:53 { + errors + cache 30 + forward . 1.2.3.4 +} +my.cluster.local:53 { + errors + cache 30 + forward . 
2.3.4.5 +} +``` + +## Migrasi ke CoreDNS + +Untuk bermigrasi dari kube-dns ke CoreDNS, +[artikel blog](https://coredns.io/2018/05/21/migration-from-kube-dns-to-coredns/) yang detail +tersedia untuk membantu pengguna mengadaptasi CoreDNS sebagai pengganti dari kube-dns. + +Kamu juga dapat bermigrasi dengan menggunakan +[skrip _deploy_](https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh) CoreDNS yang resmi. + + +## {{% heading "whatsnext" %}} + +- Baca [_Debugging_ Resolusi DNS](/docs/tasks/administer-cluster/dns-debugging-resolution/) diff --git a/content/id/docs/tasks/administer-cluster/dns-custom-nameservers/dns.png b/content/id/docs/tasks/administer-cluster/dns-custom-nameservers/dns.png new file mode 100644 index 0000000000..b048875f53 Binary files /dev/null and b/content/id/docs/tasks/administer-cluster/dns-custom-nameservers/dns.png differ diff --git a/content/id/docs/tasks/debug-application-cluster/_index.md b/content/id/docs/tasks/debug-application-cluster/_index.md new file mode 100755 index 0000000000..a99e0b6580 --- /dev/null +++ b/content/id/docs/tasks/debug-application-cluster/_index.md @@ -0,0 +1,6 @@ +--- +title: "Pemantauan, Pencatatan, and Debugging" +description: Mengatur pemantauan dan pencatatan untuk memecahkan masalah klaster, atau men-_debug_ aplikasi yang terkontainerisasi. +weight: 80 +--- + diff --git a/content/id/docs/tasks/debug-application-cluster/resource-usage-monitoring.md b/content/id/docs/tasks/debug-application-cluster/resource-usage-monitoring.md new file mode 100644 index 0000000000..eeb16411d2 --- /dev/null +++ b/content/id/docs/tasks/debug-application-cluster/resource-usage-monitoring.md @@ -0,0 +1,57 @@ +--- +content_type: concept +title: Perangkat untuk Memantau Sumber Daya +--- + +<!-- overview --> + +Untuk melukan penyekalaan aplikasi dan memberikan Service yang handal, kamu perlu +memahami bagaimana aplikasi berperilaku ketika aplikasi tersebut digelar (_deploy_). Kamu bisa memeriksa +kinerja aplikasi dalam klaster Kubernetes dengan memeriksa Container, +[Pod](/docs/user-guide/pods), [Service](/docs/user-guide/services), dan +karakteristik klaster secara keseluruhan. Kubernetes memberikan detail +informasi tentang penggunaan sumber daya dari aplikasi pada setiap level ini. +Informasi ini memungkinkan kamu untuk mengevaluasi kinerja aplikasi kamu dan +mengevaluasi di mana kemacetan dapat dihilangkan untuk meningkatkan kinerja secara keseluruhan. + + + +<!-- body --> + +Di Kubernetes, pemantauan aplikasi tidak bergantung pada satu solusi pemantauan saja. Pada klaster baru, kamu bisa menggunakan _pipeline_ [metrik sumber daya](#pipeline-metrik-sumber-daya) atau _pipeline_ [metrik penuh](#pipeline-metrik-penuh) untuk mengumpulkan statistik pemantauan. + +## _Pipeline_ Metrik Sumber Daya + +_Pipeline_ metrik sumber daya menyediakan sekumpulan metrik terbatas yang terkait dengan +komponen-komponen klaster seperti _controller_ [HorizontalPodAutoscaler](/id/docs/tasks/run-application/horizontal-pod-autoscaler), begitu juga dengan utilitas `kubectl top`. +Metrik ini dikumpulkan oleh memori yang ringan, jangka pendek, dalam +[_metrics-server_](https://github.com/kubernetes-incubator/metrics-server) dan +diekspos ke API `metrics.k8s.io`. + +_Metrics-server_ menemukan semua Node dalam klaster dan +bertanya ke setiap +[kubelet](/docs/reference/command-line-tools-reference/kubelet) dari Node tentang penggunaan CPU dan +memori. 
Kubelet bertindak sebagai jembatan antara _control plane_ Kubernetes dan +Node, mengelola Pod dan Container yang berjalan pada sebuah mesin. Kubelet +menerjemahkan setiap Pod ke Container yang menyusunnya dan mengambil masing-masing +statistik penggunaan untuk setiap Container dari _runtime_ Container melalui +antarmuka _runtime_ Container. Kubelet mengambil informasi ini dari cAdvisor yang terintegrasi +untuk pengintegrasian Docker yang lama. Hal ini yang kemudian memperlihatkan +statistik penggunaan sumber daya dari kumpulan Pod melalui API sumber daya _metrics-server_. +API ini disediakan pada `/metrics/resource/v1beta1` pada kubelet yang terautentikasi dan +porta _read-only_. + +## _Pipeline_ Metrik Penuh + +_Pipeline_ metrik penuh memberi kamu akses ke metrik yang lebih banyak. Kubernetes bisa +menanggapi metrik ini secara otomatis dengan mengubah skala atau mengadaptasi klaster +berdasarkan kondisi saat ini, dengan menggunakan mekanisme seperti HorizontalPodAutoscaler. +_Pipeline_ pemantauan mengambil metrik dari kubelet dan +kemudian memgekspos ke Kubernetes melalui adaptor dengan mengimplementasikan salah satu dari API +`custom.metrics.k8s.io` atau API `external.metrics.k8s.io`. + + +[Prometheus](https://prometheus.io), sebuah proyek CNCF, yang dapat secara alami memonitor Kubernetes, Node, dan Prometheus itu sendiri. +Proyek _pipeline_ metrik penuh yang bukan merupakan bagian dari CNCF berada di luar ruang lingkup dari dokumentasi Kubernetes. + + diff --git a/content/id/docs/tasks/manage-daemon/_index.md b/content/id/docs/tasks/manage-daemon/_index.md new file mode 100644 index 0000000000..c3044a9265 --- /dev/null +++ b/content/id/docs/tasks/manage-daemon/_index.md @@ -0,0 +1,6 @@ +--- +title: "Mengelola Daemon Klaster" +description: Melakukan tugas-tugas umum untuk mengelola sebuah DaemonSet, misalnya _rolling update_. +weight: 130 +--- + diff --git a/content/id/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/id/docs/tasks/manage-daemon/rollback-daemon-set.md new file mode 100644 index 0000000000..dcc030289d --- /dev/null +++ b/content/id/docs/tasks/manage-daemon/rollback-daemon-set.md @@ -0,0 +1,140 @@ +--- +title: Melakukan Rollback pada DaemonSet +content_type: task +weight: 20 +min-kubernetes-server-version: 1.7 +--- + +<!-- overview --> + +Laman ini memperlihatkan bagaimana caranya untuk melakukan _rollback_ pada sebuah {{< glossary_tooltip term_id="daemonset" >}}. + + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +Sebelum lanjut, alangkah baiknya jika kamu telah mengetahui cara +untuk [melakukan _rolling update_ pada sebuah DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/). + +<!-- steps --> + +## Melakukan _rollback_ pada DaemonSet + +### Langkah 1: Dapatkan nomor revisi DaemonSet yang ingin dikembalikan + +Lompati langkah ini jika kamu hanya ingin kembali (_rollback_) ke revisi terakhir. + +Perintah di bawah ini akan memperlihatkan daftar semua revisi dari DaemonSet: + +```shell +kubectl rollout history daemonset <nama-daemonset> +``` + +Perintah tersebut akan menampilkan daftar revisi seperti di bawah: + +``` +daemonsets "<nama-daemonset>" +REVISION CHANGE-CAUSE +1 ... +2 ... +... +``` + +* Alasan perubahan (_change cause_) kolom di atas merupakan salinan dari anotasi `kubernetes.io/change-cause` yang berkaitan dengan revisi pada DaemonSet. Kamu boleh menyetel _flag_ `--record=true` melalui `kubectl` untuk merekam perintah yang dijalankan akibat dari anotasi alasan perubahan. 
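+
+Sebagai sketsa singkat (nama berkas `daemonset.yaml` di bawah ini hanyalah contoh), beginilah cara anotasi tersebut terisi ketika kamu menerapkan perubahan dengan _flag_ `--record`:
+
+```shell
+# terapkan perubahan pada DaemonSet sambil merekam perintah yang dijalankan
+kubectl apply -f daemonset.yaml --record=true
+
+# kolom CHANGE-CAUSE sekarang menampilkan perintah di atas
+kubectl rollout history daemonset <nama-daemonset>
+```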
+ +Untuk melihat detail dari revisi tertentu, jalankan perintah di bawah ini: + +```shell +kubectl rollout history daemonset <daemonset-name> --revision=1 +``` + +Perintah tersebut memberikan detail soal nomor revisi tertentu: + +``` +daemonsets "<nama-daemonset>" with revision #1 +Pod Template: +Labels: foo=bar +Containers: +app: + Image: ... + Port: ... + Environment: ... + Mounts: ... +Volumes: ... +``` + +### Langkah 2: _Rollback_ ke revisi tertentu + +```shell +# Tentukan nomor revisi yang kamu dapatkan dari Langkah 1 melalui --to-revision +kubectl rollout undo daemonset <nama-daemonset> --to-revision=<nomor-revisi> +``` + +Jika telah berhasil, perintah tersebut akan memberikan keluaran berikut: + +``` +daemonset "<nama-daemonset>" rolled back +``` + +{{< note >}} +Jika _flag_ `--to-revision` tidak diberikan, maka kubectl akan memilihkan revisi yang terakhir. +{{< /note >}} + +### Langkah 3: Lihat progres pada saat _rollback_ DaemonSet + +Perintah `kubectl rollout undo daemonset` memberitahu server untuk memulai _rollback_ DaemonSet. +_Rollback_ sebenarnya terjadi secara _asynchronous_ di dalam klaster {{< glossary_tooltip term_id="control-plane" text="_control plane_" >}}. + +Perintah di bawah ini dilakukan untuk melihat progres dari _rollback_: + +```shell +kubectl rollout status ds/<nama-daemonset> +``` + +Ketika _rollback_ telah selesai dilakukan, keluaran di bawah akan ditampilkan: + +``` +daemonset "<nama-daemonset>" successfully rolled out +``` + + +<!-- discussion --> + +## Memahami revisi DaemonSet + +Pada langkah `kubectl rollout history` sebelumnya, kamu telah mendapatkan +daftar revisi DaemonSet. Setiap revisi disimpan di dalam sumber daya bernama ControllerRevision. + +Untuk melihat apa yang disimpan pada setiap revisi, dapatkan sumber daya mentah (_raw_) dari +revisi DaemonSet: + +```shell +kubectl get controllerrevision -l <kunci-selektor-daemonset>=<nilai-selektor-daemonset> +``` + +Perintah di atas akan mengembalikan daftar ControllerRevision: + +``` +NAME CONTROLLER REVISION AGE +<nama-daemonset>-<hash-revisi> DaemonSet/<nama-daemonset> 1 1h +<nama-daemonset>-<hash-revisi> DaemonSet/<nama-daemonset> 2 1h +``` + +Setiap ControllerRevision menyimpan anotasi dan templat dari sebuah revisi DaemonSet. + +Perintah `kubectl rollout undo` mengambil ControllerRevision yang spesifik dan mengganti templat +DaemonSet dengan templat yang tersimpan pada ControllerRevision. +Perintah `kubectl rollout undo` sama seperti untuk memperbarui templat +DaemonSet ke revisi sebelumnya dengan menggunakan perintah lainnya, seperti `kubectl edit` atau `kubectl apply`. + +{{< note >}} +Revisi DaemonSet hanya bisa _roll_ ke depan. Artinya, setelah _rollback_ selesai dilakukan, +nomor revisi dari ControllerRevision (_field_ `.revision`) yang sedang di-_rollback_ akan maju ke depan. +Misalnya, jika kamu memiliki revisi 1 dan 2 pada sistem, lalu _rollback_ dari revisi 2 ke revisi 1, +ControllerRevision dengan `.revision: 1` akan menjadi `.revision: 3`. +{{< /note >}} + +## _Troubleshoot_ + +* Lihat cara untuk melakukan [_troubleshoot rolling update_ pada DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/#troubleshooting). 
diff --git a/content/ja/docs/concepts/configuration/configmap.md b/content/ja/docs/concepts/configuration/configmap.md new file mode 100644 index 0000000000..54147a7a90 --- /dev/null +++ b/content/ja/docs/concepts/configuration/configmap.md @@ -0,0 +1,191 @@ +--- +title: ConfigMap +content_type: concept +weight: 20 +--- + +<!-- overview --> + +{{< glossary_definition term_id="configmap" prepend="ConfigMapは、" length="all" >}} + +{{< caution >}} +ConfigMapは機密性や暗号化を提供しません。保存したいデータが機密情報である場合は、ConfigMapの代わりに{{< glossary_tooltip text="Secret" term_id="secret" >}}を使用するか、追加の(サードパーティー)ツールを使用してデータが非公開になるようにしてください。 +{{< /caution >}} + +<!-- body --> + +## 動機 + +アプリケーションのコードとは別に設定データを設定するには、ConfigMapを使用します。 + +たとえば、アプリケーションを開発していて、(開発用時には)自分のコンピューター上と、(実際のトラフィックをハンドルするときは)クラウド上とで実行することを想像してみてください。あなたは、`DATABASE_HOST`という名前の環境変数を使用するコードを書きます。ローカルでは、この変数を`localhost`に設定します。クラウド上では、データベースコンポーネントをクラスター内に公開するKubernetesの{{< glossary_tooltip text="Service" term_id="service" >}}を指すように設定します。 + +こうすることで、必要であればクラウド上で実行しているコンテナイメージを取得することで、ローカルでも完全に同じコードを使ってデバッグができるようになります。 + +## ConfigMapオブジェクト + +ConfigMapは、他のオブジェクトが使うための設定を保存できるAPI[オブジェクト](/ja/docs/concepts/overview/working-with-objects/kubernetes-objects/)です。ほとんどのKubernetesオブジェクトに`spec`セクションがあるのとは違い、ConfigMapにはアイテム(キー)と値を保存するための`data`セクションがあります。 + +ConfigMapの名前は、有効な[DNSのサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)でなければなりません。 + +## ConfigMapとPod + +ConfigMapを参照して、ConfigMap内のデータを元にしてPod内のコンテナの設定をするPodの`spec`を書くことができます。このとき、PodとConfigMapは同じ{{< glossary_tooltip text="名前空間" term_id="namespace" >}}内に存在する必要があります。 + +以下に、ConfigMapの例を示します。単一の値を持つキーと、Configuration形式のデータ片のような値を持つキーがあります。 + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: game-demo +data: + # プロパティーに似たキー。各キーは単純な値にマッピングされている + player_initial_lives: "3" + ui_properties_file_name: "user-interface.properties" + # + # ファイルに似たキー + game.properties: | + enemy.types=aliens,monsters + player.maximum-lives=5 + user-interface.properties: | + color.good=purple + color.bad=yellow + allow.textmode=true +``` + +ConfigMapを利用してPod内のコンテナを設定する方法には、次の4種類があります。 + +1. コマンドライン引数をコンテナのエントリーポイントに渡す +1. 環境変数をコンテナに渡す +1. 読み取り専用のボリューム内にファイルを追加し、アプリケーションがそのファイルを読み取る +1. 
Kubernetes APIを使用してConfigMapを読み込むコードを書き、そのコードをPod内で実行する + +これらのさまざまな方法は、利用するデータをモデル化するのに役立ちます。最初の3つの方法では、{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}がPodのコンテナを起動する時にConfigMapのデータを使用します。 + +4番目の方法では、ConfigMapとそのデータを読み込むためのコードを自分自身で書く必要があります。しかし、Kubernetes APIを直接使用するため、アプリケーションはConfigMapがいつ変更されても更新イベントを受信でき、変更が発生したときにすぐに反応できます。この手法では、Kubernetes APIに直接アクセスすることで、別の名前空間にあるConfigMapにもアクセスできます。 + +以下に、Podを設定するために`game-demo`から値を使用するPodの例を示します。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: configmap-demo-pod +spec: + containers: + - name: demo + image: game.example/demo-game + env: + # 環境変数を定義します。 + - name: PLAYER_INITIAL_LIVES # ここではConfigMap内のキーの名前とは違い + # 大文字が使われていることに着目してください。 + valueFrom: + configMapKeyRef: + name: game-demo # この値を取得するConfigMap。 + key: player_initial_lives # 取得するキー。 + - name: UI_PROPERTIES_FILE_NAME + valueFrom: + configMapKeyRef: + name: game-demo + key: ui_properties_file_name + volumeMounts: + - name: config + mountPath: "/config" + readOnly: true + volumes: + # Podレベルでボリュームを設定し、Pod内のコンテナにマウントします。 + - name: config + configMap: + # マウントしたいConfigMapの名前を指定します。 + name: game-demo + # ファイルとして作成するConfigMapのキーの配列 + items: + - key: "game.properties" + path: "game.properties" + - key: "user-interface.properties" + path: "user-interface.properties" +``` + +ConfigMapは1行のプロパティの値と複数行のファイルに似た形式の値を区別しません。問題となるのは、Podや他のオブジェクトによる値の使用方法です。 + +この例では、ボリュームを定義して、`demo`コンテナの内部で`/config`にマウントしています。これにより、ConfigMap内には4つのキーがあるにもかかわらず、2つのファイル`/config/game.properties`および`/config/user-interface.properties`だけが作成されます。 + +これは、Podの定義が`volumes`セクションで`items`という配列を指定しているためです。もし`items`の配列を完全に省略すれば、ConfigMap内の各キーがキーと同じ名前のファイルになり、4つのファイルが作成されます。 + +## ConfigMapを使う + +ConfigMapは、データボリュームとしてマウントできます。ConfigMapは、Podへ直接公開せずにシステムの他の部品として使うこともできます。たとえば、ConfigMapには、システムの他の一部が設定のために使用するデータを保存できます。 + +{{< note >}} +ConfigMapの最も一般的な使い方では、同じ名前空間にあるPod内で実行されているコンテナに設定を構成します。ConfigMapを独立して使用することもできます。 + +たとえば、ConfigMapに基づいて動作を調整する{{< glossary_tooltip text="アドオン" term_id="addons" >}}や{{< glossary_tooltip text="オペレーター" term_id="operator-pattern" >}}を見かけることがあるかもしれません。 +{{< /note >}} + +### ConfigMapをPodからファイルとして使う + +ConfigMapをPod内のボリュームで使用するには、次のようにします。 + +1. ConfigMapを作成するか、既存のConfigMapを使用します。複数のPodから同じConfigMapを参照することもできます。 +1. Podの定義を修正して、`.spec.volumes[]`以下にボリュームを追加します。ボリュームに任意の名前を付け、`.spec.volumes[].configMap.name`フィールドにConfigMapオブジェクトへの参照を設定します。 +1. ConfigMapが必要な各コンテナに`.spec.containers[].volumeMounts[]`を追加します。`.spec.containers[].volumeMounts[].readOnly = true`を指定して、`.spec.containers[].volumeMounts[].mountPath`には、ConfigMapのデータを表示したい未使用のディレクトリ名を指定します。 +1. 
イメージまたはコマンドラインを修正して、プログラムがそのディレクトリ内のファイルを読み込むように設定します。ConfigMapの`data`マップ内の各キーが、`mountPath`以下のファイル名になります。 + +以下は、ボリューム内にConfigMapをマウントするPodの例です。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + configMap: + name: myconfigmap +``` + +使用したいそれぞれのConfigMapごとに、`.spec.volumes`内で参照する必要があります。 + +Pod内に複数のコンテナが存在する場合、各コンテナにそれぞれ別の`volumeMounts`のブロックが必要ですが、`.spec.volumes`はConfigMapごとに1つしか必要ありません。 + +#### マウントしたConfigMapの自動的な更新 + +ボリューム内で現在使用中のConfigMapが更新されると、射影されたキーも最終的に(eventually)更新されます。kubeletは定期的な同期のたびにマウントされたConfigMapが新しいかどうか確認します。しかし、kubeletが現在のConfigMapの値を取得するときにはローカルキャッシュを使用します。キャッシュの種類は、[KubeletConfiguration構造体](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)の中の`ConfigMapAndSecretChangeDetectionStrategy`フィールドで設定可能です。ConfigMapは、監視(デフォルト)、ttlベース、またはすべてのリクエストを直接APIサーバーへ単純にリダイレクトする方法のいずれかによって伝搬されます。その結果、ConfigMapが更新された瞬間から、新しいキーがPodに射影されるまでの遅延の合計は、最長でkubeletの同期期間+キャッシュの伝搬遅延になります。ここで、キャッシュの伝搬遅延は選択したキャッシュの種類に依存します(監視の伝搬遅延、キャッシュのttl、または0に等しくなります)。 + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +Kubernetesのアルファ版の機能である _イミュータブルなSecretおよびConfigMap_ は、個別のSecretやConfigMapをイミュータブルに設定するオプションを提供します。ConfigMapを広範に使用している(少なくとも数万のConfigMapがPodにマウントされている)クラスターでは、データの変更を防ぐことにより、以下のような利点が得られます。 + +- アプリケーションの停止を引き起こす可能性のある予想外の(または望まない)変更を防ぐことができる +- ConfigMapをイミュータブルにマークして監視を停止することにより、kube-apiserverへの負荷を大幅に削減し、クラスターの性能が向上する + +この機能を使用するには、`ImmutableEmphemeralVolumes`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にして、SecretやConfigMapの`immutable`フィールドを`true`に設定してください。次に例を示します。 + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + ... +data: + ... 
+immutable: true +``` + +{{< note >}} +一度ConfigMapやSecretがイミュータブルに設定すると、この変更を元に戻したり、`data`フィールドのコンテンツを変更することは*できません*。既存のPodは削除されたConfigMapのマウントポイントを保持するため、こうしたPodは再作成することをおすすめします。 +{{< /note >}} + +## {{% heading "whatsnext" %}} + +* [Secret](/docs/concepts/configuration/secret/)について読む。 +* [Podを構成してConfigMapを使用する](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)を読む。 +* コードを設定から分離する動機を理解するために[The Twelve-Factor App](https://12factor.net/ja/)を読む。 diff --git a/content/ja/docs/concepts/configuration/secret.md b/content/ja/docs/concepts/configuration/secret.md new file mode 100644 index 0000000000..26ce98ab50 --- /dev/null +++ b/content/ja/docs/concepts/configuration/secret.md @@ -0,0 +1,1157 @@ +--- +title: Secrets +content_type: concept +feature: + title: Secretと構成管理 + description: > + Secretやアプリケーションの構成情報を、イメージの再ビルドや機密情報を晒すことなくデプロイ、更新します +weight: 30 +--- + +<!-- overview --> + +KubernetesのSecretはパスワード、OAuthトークン、SSHキーのような機密情報を保存し、管理できるようにします。 +Secretに機密情報を保存することは、それらを{{< glossary_tooltip text="Pod" term_id="pod" >}}の定義や{{< glossary_tooltip text="コンテナイメージ" term_id="image" >}}に直接記載するより、安全で柔軟です。詳しくは[Secretの設計文書](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md)を参照してください。 + + + +<!-- body --> + +## Secretの概要 + +Secretはパスワード、トークン、キーのような小容量の機密データを含むオブジェクトです。 +他の方法としては、そのような情報はPodの定義やイメージに含めることができます。 +ユーザーはSecretを作ることができ、またシステムが作るSecretもあります。 + +Secretを使うには、PodはSecretを参照することが必要です。 +PodがSecretを使う方法は3種類あります。 + +- {{< glossary_tooltip text="ボリューム" term_id="volume" >}}内の[ファイル](#using-secrets-as-files-from-a-pod)として、Podの単一または複数のコンテナにマウントする +- [コンテナの環境変数](#using-secrets-as-environment-variables)として利用する +- Podを生成するために[kubeletがイメージをpullする](#using-imagepullsecrets)ときに使用する + +### 内蔵のSecret + +#### 自動的にサービスアカウントがAPIの認証情報のSecretを生成し、アタッチする + +KubernetesはAPIにアクセスするための認証情報を含むSecretを自動的に生成し、この種のSecretを使うように自動的にPodを改変します。 + +必要であれば、APIの認証情報が自動生成され利用される機能は無効化したり、上書きしたりすることができます。しかし、安全にAPIサーバーでアクセスすることのみが必要なのであれば、これは推奨されるワークフローです。 + +サービスアカウントがどのように機能するのかについては、[サービスアカウント](/docs/tasks/configure-pod-container/configure-service-account/) +のドキュメントを参照してください。 + +### Secretを作成する + +#### `kubectl`を利用してSecretを作成する + +SecretにはPodがデータベースにアクセスするために必要な認証情報を含むことができます。 +例えば、ユーザー名とパスワードからなる接続文字列です。 +ローカルマシンのファイル`./username.txt`にユーザー名を、ファイル`./password.txt`にパスワードを保存することができます。 + +```shell +# この後の例で使用するファイルを作成します +echo -n 'admin' > ./username.txt +echo -n '1f2d1e2e67df' > ./password.txt +``` + +`kubectl create secret`コマンドはそれらのファイルをSecretに格納して、APIサーバー上でオブジェクトを作成します。 +Secretオブジェクトの名称は正当な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要があります。 + +```shell +kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +``` + +次のように出力されます: + +``` +secret "db-user-pass" created +``` + +デフォルトのキー名はファイル名です。`[--from-file=[key=]source]`を使って任意でキーを指定することができます。 + +```shell +kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt +``` + +{{< note >}} +`$`、`\`、`*`、`=`、`!`のような特殊文字は[シェル](https://ja.wikipedia.org/wiki/%E3%82%B7%E3%82%A7%E3%83%AB)に解釈されるので、エスケープする必要があります。 +ほとんどのシェルではパスワードをエスケープする最も簡単な方法はシングルクォート(`'`)で囲むことです。 +例えば、実際のパスワードが`S!B\*d$zDsb=`だとすると、実行すべきコマンドは下記のようになります。 + +```shell +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' +``` + +`--from-file`を使ってファイルからパスワードを読み込む場合、ファイルに含まれるパスワードの特殊文字をエスケープする必要はありません。 +{{< /note >}} + +Secretが作成されたことを確認できます。 + +```shell +kubectl get secrets +``` + +出力は次のようになります。 + 
+``` +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s +``` + +Secretの説明を参照することができます。 + +```shell +kubectl describe secrets/db-user-pass +``` + +出力は次のようになります。 + +``` +Name: db-user-pass +Namespace: default +Labels: <none> +Annotations: <none> + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +{{< note >}} +`kubectl get`や`kubectl describe`コマンドはデフォルトではSecretの内容の表示を避けます。 +これはSecretを誤って盗み見られたり、ターミナルのログへ記録されてしまったりすることがないよう保護するためです。 +{{< /note >}} + +Secretの内容を参照する方法は[Secretのデコード](#decoding-a-secret)を参照してください。 + +#### 手動でSecretを作成する + +SecretをJSONまたはYAMLフォーマットのファイルで作成し、その後オブジェクトを作成することができます。 +Secretオブジェクトの名称は正当な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要があります。 +[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)は、`data`と`stringData`の2つの連想配列を持ちます。 +`data`フィールドは任意のデータの保存に使われ、Base64でエンコードされています。 +`stringData`は利便性のために存在するもので、機密データをエンコードされない文字列で扱えます。 + +例えば、`data`フィールドを使って1つのSecretに2つの文字列を保存するには、次のように文字列をBase64エンコードします。 + +```shell +echo -n 'admin' | base64 +``` + +出力は次のようになります。 + +``` +YWRtaW4= +``` + +```shell +echo -n '1f2d1e2e67df' | base64 +``` + +出力は次のようになります。 + +``` +MWYyZDFlMmU2N2Rm +``` + +このようなSecretを書きます。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +これでSecretを[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply)コマンドで作成できるようになりました。 + +```shell +kubectl apply -f ./secret.yaml +``` + +出力は次のようになります。 + +``` +secret "mysecret" created +``` + +状況によっては、代わりに`stringData`フィールドを使いたいときもあるでしょう。 +このフィールドを使えばBase64でエンコードされていない文字列を直接Secretに書くことができて、その文字列はSecretが作られたり更新されたりするときにエンコードされます。 + +実用的な例として、設定ファイルの格納にSecretを使うアプリケーションをデプロイすることを考えます。 +デプロイプロセスの途中で、この設定ファイルの一部のデータを投入したいとしましょう。 + +例えば、アプリケーションは次のような設定ファイルを使用するとします。 + +```yaml +apiUrl: "https://my.api.com/api/v1" +username: "user" +password: "password" +``` + +次のような定義を使用して、この設定ファイルをSecretに保存することができます。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +stringData: + config.yaml: |- + apiUrl: "https://my.api.com/api/v1" + username: {{username}} + password: {{password}} +``` + +デプロイツールは`kubectl apply`を実行する前に`{{username}}`と`{{password}}`のテンプレート変数を置換することができます。 + +`stringData`フィールドは利便性のための書き込み専用フィールドです。 +Secretを取得するときに出力されることは決してありません。 +例えば、次のコマンドを実行すると、 + +```shell +kubectl get secret mysecret -o yaml +``` + +出力は次のようになります。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:40:59Z + name: mysecret + namespace: default + resourceVersion: "7225" + uid: c280ad2e-e916-11e8-98f2-025000000001 +type: Opaque +data: + config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 +``` + +`username`のようなフィールドを`data`と`stringData`の両方で指定すると、`stringData`の値が使用されます。 +例えば、次のSecret定義からは + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= +stringData: + username: administrator +``` + +次のようなSecretが生成されます。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:46:46Z + name: mysecret + namespace: default + resourceVersion: "7579" + uid: 91460ecb-e917-11e8-98f2-025000000001 +type: Opaque +data: + username: YWRtaW5pc3RyYXRvcg== +``` + +`YWRtaW5pc3RyYXRvcg==`をデコードすると`administrator`になります。 + +`data`や`stringData`のキーは英数字または'-'、'_'、'.'からなる必要があります。 + +{{< note >}} +シリアライズされたJSONやYAMLの機密データはBase64エンコードされています。 
+文字列の中の改行は不正で、含まれていてはなりません。 +Darwin/macOSの`base64`ユーティリティーを使うときは、長い行を分割する`-b`オプションを指定するのは避けるべきです。 +反対に、Linuxユーザーは`base64`コマンドに`-w 0`オプションを指定するか、`-w`オプションが使えない場合は`base64 | tr -d '\n'`のようにパイプ*すべき*です。 +{{< /note >}} + +#### ジェネレーターからSecretを作成する + +Kubernetes v1.14から、`kubectl`は[Kustomizeを使ったオブジェクトの管理](/docs/tasks/manage-kubernetes-objects/kustomization/)に対応しています。 +KustomizeはSecretやConfigMapを生成するリソースジェネレーターを提供します。 +Kustomizeのジェネレーターはディレクトリの中の`kustomization.yaml`ファイルにて指定されるべきです。 +Secretが生成された後には、`kubectl apply`コマンドを使用してAPIサーバー上にSecretを作成することができます。 + +#### ファイルからのSecretの生成 + +./username.txtと./password.txtのファイルから生成するように`secretGenerator`を定義することで、Secretを生成することができます。 + +```shell +cat <<EOF >./kustomization.yaml +secretGenerator: +- name: db-user-pass + files: + - username.txt + - password.txt +EOF +``` + +Secretを生成するには、`kustomization.yaml`を含むディレクトリをapplyします。 + +```shell +kubectl apply -k . +``` + +出力は次のようになります。 + +``` +secret/db-user-pass-96mffmfh4k created +``` + +Secretが生成されたことを確認できます。 + +```shell +kubectl get secrets +``` + +出力は次のようになります。 + +``` +NAME TYPE DATA AGE +db-user-pass-96mffmfh4k Opaque 2 51s +``` + +```shell +kubectl describe secrets/db-user-pass-96mffmfh4k +``` + +出力は次のようになります。 + +``` +Name: db-user-pass +Namespace: default +Labels: <none> +Annotations: <none> + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +#### 文字列リテラルからのSecretの生成 + +リテラル`username=admin`と`password=secret`から生成するように`secretGenerator`を定義して、Secretを生成することができます。 + +```shell +cat <<EOF >./kustomization.yaml +secretGenerator: +- name: db-user-pass + literals: + - username=admin + - password=secret +EOF +``` + +Secretを生成するには、`kustomization.yaml`を含むディレクトリをapplyします。 + +```shell +kubectl apply -k . +``` + +出力は次のようになります。 + +``` +secret/db-user-pass-dddghtt9b5 created +``` + +{{< note >}} +Secretが生成されるとき、Secretのデータからハッシュ値が算出され、Secretの名称にハッシュ値が加えられます。 +これはデータが更新されたときに毎回新しいSecretが生成されることを保証します。 +{{< /note >}} + +#### Secretのデコード + +Secretは`kubectl get secret`を実行することで取得可能です。 +例えば、前のセクションで作成したSecretは次のコマンドを実行することで参照できます。 + +```shell +kubectl get secret mysecret -o yaml +``` + +出力は次のようになります。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +`password`フィールドをデコードします。 + +```shell +echo 'MWYyZDFlMmU2N2Rm' | base64 --decode +``` + +出力は次のようになります。 + +``` +1f2d1e2e67df +``` + +#### Secretの編集 + +既存のSecretは次のコマンドで編集することができます。 + +```shell +kubectl edit secrets mysecret +``` + +デフォルトに設定されたエディターが開かれ、`data`フィールドのBase64でエンコードされたSecretの値を編集することができます。 + +```yaml +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file will be +# reopened with the relevant failures. +# +apiVersion: v1 +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +kind: Secret +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: { ... 
} + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +``` + +## Secretの使用 + +Podの中のコンテナがSecretを使うために、データボリュームとしてマウントしたり、{{< glossary_tooltip text="環境変数" term_id="container-env-variables" >}}として値を参照できるようにできます。 +Secretは直接Podが参照できるようにはされず、システムの別の部分に使われることもあります。 +例えば、Secretはあなたに代わってシステムの他の部分が外部のシステムとやりとりするために使う機密情報を保持することもあります。 + +### SecretをファイルとしてPodから利用する + +PodのボリュームとしてSecretを使うには、 + +1. Secretを作成するか既存のものを使用します。複数のPodが同一のSecretを参照することができます。 +1. ボリュームを追加するため、Podの定義の`.spec.volumes[]`以下をを書き換えます。ボリュームに命名し、`.spec.volumes[].secret.secretName`フィールドはSecretオブジェクトの名称と同一にします。 +1. Secretを必要とするそれぞれのコンテナに`.spec.containers[].volumeMounts[]`を追加します。`.spec.containers[].volumeMounts[].readOnly = true`を指定して`.spec.containers[].volumeMounts[].mountPath`をSecretをマウントする未使用のディレクトリ名にします。 +1. イメージやコマンドラインを変更し、プログラムがそのディレクトリを参照するようにします。連想配列`data`のキーは`mountPath`以下のファイル名になります。 + +これはSecretをボリュームとしてマウントするPodの例です。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret +``` + +使用したいSecretはそれぞれ`.spec.volumes`の中で参照されている必要があります。 + +Podに複数のコンテナがある場合、それぞれのコンテナが`volumeMounts`ブロックを必要としますが、`.spec.volumes`はSecret1つあたり1つで十分です。 + +多くのファイルを一つのSecretにまとめることも、多くのSecretを使うことも、便利な方を採ることができます。 + +#### Secretのキーの特定のパスへの割り当て + +Secretのキーが割り当てられるパスを制御することができます。 +それぞれのキーがターゲットとするパスは`.spec.volumes[].secret.items`フィールドによって指定てきます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username +``` + +次のような挙動をします。 + +* `username`は`/etc/foo/username`の代わりに`/etc/foo/my-group/my-username`の元に格納されます。 +* `password`は現れません。 + +`.spec.volumes[].secret.items`が使われるときは、`items`の中で指定されたキーのみが現れます。 +Secretの中の全てのキーを使用したい場合は、`items`フィールドに全て列挙する必要があります。 +列挙されたキーは対応するSecretに存在する必要があり、そうでなければボリュームは生成されません。 + +#### Secretファイルのパーミッション + +単一のSecretキーに対して、ファイルアクセスパーミッションビットを指定することができます。 +パーミッションを指定しない場合、デフォルトで`0644`が使われます。 +Secretボリューム全体のデフォルトモードを指定し、必要に応じてキー単位で上書きすることもできます。 + +例えば、次のようにしてデフォルトモードを指定できます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + defaultMode: 0400 +``` + +Secretは`/etc/foo`にマウントされ、Secretボリュームが生成する全てのファイルはパーミッション`0400`に設定されます。 + +JSONの仕様は8進数の記述に対応していないため、パーミッション0400を示す値として256を使用することに注意が必要です。 +Podの定義にJSONではなくYAMLを使う場合は、パーミッションを指定するためにより自然な8進表記を使うことができます。 + +`kubectl exec`を使ってPodに入るときは、期待したファイルモードを知るためにシンボリックリンクを辿る必要があることに注意してください。 + +例として、PodのSecretのファイルモードを確認します。 +``` +kubectl exec mypod -it sh + +cd /etc/foo +ls -l +``` + +出力は次のようになります。 +``` +total 0 +lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password +lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username +``` + +正しいファイルモードを知るためにシンボリックリンクを辿ります。 + +``` +cd /etc/foo/..data +ls -l +``` + +出力は次のようになります。 +``` +total 8 +-r-------- 1 root root 12 May 18 00:18 password +-r-------- 1 root root 5 May 18 00:18 username +``` + +前の例のようにマッピングを使い、ファイルごとに異なるパーミッションを指定することができます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + 
image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username + mode: 0777 +``` + +この例では、ファイル`/etc/foo/my-group/my-username`のパーミッションは`0777`になります。 +JSONを使う場合は、JSONの制約により10進表記の`511`と記述する必要があります。 + +後で参照する場合、このパーミッションの値は10進表記で表示されることがあることに注意してください。 + +#### Secretの値のボリュームによる利用 + +Secretのボリュームがマウントされたコンテナからは、Secretのキーはファイル名として、Secretの値はBase64デコードされ、それらのファイルに格納されます。 +上記の例のコンテナの中でコマンドを実行した結果を示します。 + +```shell +ls /etc/foo/ +``` + +出力は次のようになります。 + +``` +username +password +``` + +```shell +cat /etc/foo/username +``` + +出力は次のようになります。 + +``` +admin +``` + +```shell +cat /etc/foo/password +``` + +出力は次のようになります。 + +``` +1f2d1e2e67df +``` + +コンテナ内のプログラムはファイルからSecretの内容を読み取る責務を持ちます。 + +#### マウントされたSecretの自動更新 + +ボリュームとして使用されているSecretが更新されると、やがて割り当てられたキーも同様に更新されます。 +kubeletは定期的な同期のたびにマウントされたSecretが新しいかどうかを確認します。 +しかしながら、kubeletはSecretの現在の値の取得にローカルキャッシュを使用します。 +このキャッシュは[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)内の`ConfigMapAndSecretChangeDetectionStrategy`フィールドによって設定可能です。 +Secretはwatch(デフォルト)、TTLベース、単に全てのリクエストをAPIサーバーへリダイレクトすることのいずれかによって伝搬します。 +結果として、Secretが更新された時点からPodに新しいキーが反映されるまでの遅延時間の合計は、kubeletの同期間隔 + キャッシュの伝搬遅延となります。 +キャッシュの遅延は、キャッシュの種別により、それぞれwatchの伝搬遅延、キャッシュのTTL、0になります。 + +{{< note >}} +Secretを[subPath](/docs/concepts/storage/volumes#using-subpath)を指定してボリュームにマウントしているコンテナには、Secretの更新が反映されません。 +{{< /note >}} + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +Kubernetesのアルファ機能である _Immutable Secrets and ConfigMaps_ は各SecretやConfigMapが不変であると設定できるようにします。 +Secretを広範に利用しているクラスター(PodにマウントされているSecretが1万以上)においては、データが変更されないようにすることで次のような利点が得られます。 + +- 意図しない(または望まない)変更によってアプリケーションの停止を引き起こすことを防ぎます +- 不変であると設定されたSecretの監視を停止することにより、kube-apiserverの負荷が著しく軽減され、クラスターのパフォーマンスが改善されます + +この機能を利用するには、`ImmutableEphemeralVolumes`[feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にして、SecretまたはConfigMapの`immutable`フィールドに`true`を指定します。例えば、次のようにします。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + ... +data: + ... +immutable: true +``` + +{{< note >}} +一度SecretやConfigMapを不変であると設定すると、この変更を戻すことや`data`フィールドの内容を書き換えることは _できません_ 。 +Secretを削除して、再生成することだけができます。 +既存のPodは削除されたSecretへのマウントポイントを持ち続けるため、Podを再生成することが推奨されます。 +{{< /note >}} + +### Secretを環境変数として使用する {#using-secrets-as-environment-variables} + +SecretをPodの{{< glossary_tooltip text="環境変数" term_id="container-env-variables" >}}として使用するには、 + +1. Secretを作成するか既存のものを使います。複数のPodが同一のSecretを参照することができます。 +1. Podの定義を変更し、Secretを使用したいコンテナごとにSecretのキーと割り当てたい環境変数を指定します。Secretキーを利用する環境変数は`env[].valueFrom.secretKeyRef`にSecretの名前とキーを指定すべきです。 +1. 
イメージまたはコマンドライン(もしくはその両方)を変更し、プログラムが指定した環境変数を参照するようにします。 + +Secretを環境変数で参照するPodの例を示します。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-env-pod +spec: + containers: + - name: mycontainer + image: redis + env: + - name: SECRET_USERNAME + valueFrom: + secretKeyRef: + name: mysecret + key: username + - name: SECRET_PASSWORD + valueFrom: + secretKeyRef: + name: mysecret + key: password + restartPolicy: Never +``` + +#### 環境変数からのSecretの値の利用 + +Secretを環境変数として利用するコンテナの内部では、Secretのキーは一般の環境変数名として現れ、値はBase64デコードされた状態で保持されます。 + +上記の例のコンテナの内部でコマンドを実行した結果の例を示します。 + +```shell +echo $SECRET_USERNAME +``` + +出力は次のようになります。 + +``` +admin +``` + +```shell +echo $SECRET_PASSWORD +``` + +出力は次のようになります。 + +``` +1f2d1e2e67df +``` + +### imagePullSecretsを使用する {#using-imagepullsecrets} + +`imagePullSecrets`フィールドは同一のネームスペース内のSecretの参照のリストです。 +kubeletにDockerやその他のイメージレジストリのパスワードを渡すために、`imagePullSecrets`にそれを含むSecretを指定することができます。 +kubeletはこの情報をPodのためにプライベートイメージをpullするために使います。 +`imagePullSecrets`の詳細は[PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core)を参照してください。 + +#### imagePullSecretを手動で指定する + +`ImagePullSecrets`の指定の方法は[コンテナイメージのドキュメント](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)に記載されています。 + +### imagePullSecretsが自動的にアタッチされるようにする + +`imagePullSecrets`を手動で作成し、サービスアカウントから参照することができます。 +サービスアカウントが指定されるまたはデフォルトでサービスアカウントが設定されたPodは、サービスアカウントが持つ`imagePullSecrets`フィールドを得ます。 +詳細な手順の説明は[サービスアカウントへのImagePullSecretsの追加](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)を参照してください。 + +### 手動で作成されたSecretの自動的なマウント + +手動で作成されたSecret(例えばGitHubアカウントへのアクセスに使うトークンを含む)はサービスアカウントを基に自動的にアタッチすることができます。 +詳細な説明は[PodPresetを使ったPodへの情報の注入](/docs/tasks/inject-data-application/podpreset/)を参照してください。 + +## 詳細 + +### 制限事項 + +Secretボリュームは指定されたオブジェクト参照が実際に存在するSecretオブジェクトを指していることを保証するため検証されます。 +そのため、Secretはそれを必要とするPodよりも先に作成する必要があります。 + +Secretリソースは{{< glossary_tooltip text="namespace" term_id="namespace" >}}に属します。 +Secretは同一のnamespaceに属するPodからのみ参照することができます。 + +各Secretは1MiBの容量制限があります。 +これはAPIサーバーやkubeletのメモリーを枯渇するような非常に大きなSecretを作成することを避けるためです。 +しかしながら、小さなSecretを多数作成することも同様にメモリーを枯渇させます。 +Secretに起因するメモリー使用量をより網羅的に制限することは、将来計画されています。 + +kubeletがPodに対してSecretを使用するとき、APIサーバーから取得されたSecretのみをサポートします。 +これには`kubectl`を利用して、またはレプリケーションコントローラーによって間接的に作成されたPodが含まれます。 +kubeletの`--manifest-url`フラグ、`--config`フラグ、またはREST APIにより生成されたPodは含まれません +(これらはPodを生成するための一般的な方法ではありません)。 + +環境変数として使われるSecretは任意と指定されていない限り、それを使用するPodよりも先に作成される必要があります。 +存在しないSecretへの参照はPodの起動を妨げます。 + +Secretに存在しないキーへの参照(`secretKeyRef`フィールド)はPodの起動を妨げます。 + +Secretを`envFrom`フィールドによって環境変数へ設定する場合、環境変数の名称として不適切なキーは飛ばされます。 +Podは起動することを認められます。 +このとき、reasonが`InvalidVariableNames`であるイベントが発生し、メッセージに飛ばされたキーのリストが含まれます。 +この例では、Podは2つの不適切なキー`1badkey`と`2alsobad`を含むdefault/mysecretを参照しています。 + +```shell +kubectl get events +``` + +出力は次のようになります。 + +``` +LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON +0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. 
+``` + +### SecretとPodの相互作用 + +Kubernetes APIがコールされてPodが生成されるとき、参照するSecretの存在は確認されません。 +Podがスケジューリングされると、kubeletはSecretの値を取得しようとします。 +Secretが存在しない、または一時的にAPIサーバーへの接続が途絶えたことにより取得できない場合、kubeletは定期的にリトライします。 +kubeletはPodがまだ起動できない理由に関するイベントを報告します。 +Secretが取得されると、kubeletはそのボリュームを作成しマウントします。 +Podのボリュームが全てマウントされるまでは、Podのコンテナは起動することはありません。 + +## ユースケース + +### ユースケース: コンテナの環境変数として + +Secretの作成 +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + USER_NAME: YWRtaW4= + PASSWORD: MWYyZDFlMmU2N2Rm +``` + +```shell +kubectl apply -f mysecret.yaml +``` + +`envFrom`を使ってSecretの全てのデータをコンテナの環境変数として定義します。 +SecretのキーはPod内の環境変数の名称になります。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + envFrom: + - secretRef: + name: mysecret + restartPolicy: Never +``` + +### ユースケース: SSH鍵を持つPod + +SSH鍵を含むSecretを作成します。 + +```shell +kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +``` + +出力は次のようになります。 + +``` +secret "ssh-key-secret" created +``` + +SSH鍵を含む`secretGenerator`フィールドを持つ`kustomization.yaml`を作成することもできます。 + +{{< caution >}} +自身のSSH鍵を送る前に慎重に検討してください。クラスターの他のユーザーがSecretにアクセスできる可能性があります。 +Kubernetesクラスターを共有しているユーザー全員がアクセスできるようにサービスアカウントを使用し、ユーザーが安全でない状態になったらアカウントを無効化することができます。 +{{< /caution >}} + +SSH鍵のSecretを参照し、ボリュームとして使用するPodを作成することができます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod + labels: + name: secret-test +spec: + volumes: + - name: secret-volume + secret: + secretName: ssh-key-secret + containers: + - name: ssh-test-container + image: mySshImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + +コンテナのコマンドを実行するときは、下記のパスにて鍵が利用可能です。 + +``` +/etc/secret-volume/ssh-publickey +/etc/secret-volume/ssh-privatekey +``` + +コンテナーはSecretのデータをSSH接続を確立するために使用することができます。 + +### ユースケース: 本番、テスト用の認証情報を持つPod + +あるPodは本番の認証情報のSecretを使用し、別のPodはテスト環境の認証情報のSecretを使用する例を示します。 + +`secretGenerator`フィールドを持つ`kustomization.yaml`を作成するか、`kubectl create secret`を実行します。 + +```shell +kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 +``` + +出力は次のようになります。 + +``` +secret "prod-db-secret" created +``` + +```shell +kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests +``` + +出力は次のようになります。 + +``` +secret "test-db-secret" created +``` + +{{< note >}} +`$`、`\`、`*`、`=`、`!`のような特殊文字は[シェル](https://ja.wikipedia.org/wiki/%E3%82%B7%E3%82%A7%E3%83%AB)に解釈されるので、エスケープする必要があります。 +ほとんどのシェルではパスワードをエスケープする最も簡単な方法はシングルクォート(`'`)で囲むことです。 +例えば、実際のパスワードが`S!B\*d$zDsb=`だとすると、実行すべきコマンドは下記のようになります。 + +```shell +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' +``` + +`--from-file`によってファイルを指定する場合は、そのパスワードに含まれる特殊文字をエスケープする必要はありません。 +{{< /note >}} + +Podを作成します。 + +```shell +cat <<EOF > pod.yaml +apiVersion: v1 +kind: List +items: +- kind: Pod + apiVersion: v1 + metadata: + name: prod-db-client-pod + labels: + name: prod-db-client + spec: + volumes: + - name: secret-volume + secret: + secretName: prod-db-secret + containers: + - name: db-client-container + image: myClientImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +- kind: Pod + apiVersion: v1 + metadata: + name: test-db-client-pod + labels: + name: 
test-db-client + spec: + volumes: + - name: secret-volume + secret: + secretName: test-db-secret + containers: + - name: db-client-container + image: myClientImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +EOF +``` + +同じkustomization.yamlにPodを追記します。 + +```shell +cat <<EOF >> kustomization.yaml +resources: +- pod.yaml +EOF +``` + +下記のコマンドを実行して、APIサーバーにこれらのオブジェクト群を適用します。 + +```shell +kubectl apply -k . +``` + +両方のコンテナはそれぞれのファイルシステムに下記に示すファイルを持ちます。ファイルの値はそれぞれのコンテナの環境ごとに異なります。 + +``` +/etc/secret-volume/username +/etc/secret-volume/password +``` + +2つのPodの仕様の差分は1つのフィールドのみである点に留意してください。 +これは共通のPodテンプレートから異なる能力を持つPodを作成することを容易にします。 + +2つのサービスアカウントを使用すると、ベースのPod仕様をさらに単純にすることができます。 + +1. `prod-user` と `prod-db-secret` +1. `test-user` と `test-db-secret` + +簡略化されたPod仕様は次のようになります。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: prod-db-client-pod + labels: + name: prod-db-client +spec: + serviceAccount: prod-db-client + containers: + - name: db-client-container + image: myClientImage +``` + +### ユースケース: Secretボリューム内のdotfile + +キーをドットから始めることで、データを「隠す」ことができます。 +このキーはdotfileまたは「隠し」ファイルを示します。例えば、次のSecretは`secret-volume`ボリュームにマウントされます。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: k8s.gcr.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + +このボリュームは`.secret-file`という単一のファイルを含み、`dotfile-test-container`はこのファイルを`/etc/secret-volume/.secret-file`のパスに持ちます。 + +{{< note >}} +ドットから始まるファイルは`ls -l`の出力では隠されるため、ディレクトリの内容を参照するときには`ls -la`を使わなければなりません。 +{{< /note >}} + +### ユースケース: Podの中の単一コンテナのみが参照できるSecret + +HTTPリクエストを扱い、複雑なビジネスロジックを処理し、メッセージにHMACによる認証コードを付与する必要のあるプログラムを考えます。 +複雑なアプリケーションロジックを持つため、サーバーにリモートのファイルを読み出せる未知の脆弱性がある可能性があり、この脆弱性は攻撃者に秘密鍵を晒してしまいます。 + +このプログラムは2つのコンテナに含まれる2つのプロセスへと分割することができます。 +フロントエンドのコンテナはユーザーとのやりとりやビジネスロジックを扱い、秘密鍵を参照することはできません。 +署名コンテナは秘密鍵を参照することができて、単にフロントエンドからの署名リクエストに応答します。例えば、localhostの通信によって行います。 + +この分割する手法によって、攻撃者はアプリケーションサーバーを騙して任意の処理を実行させる必要があるため、ファイルの内容を読み出すより困難になります。 + +<!-- TODO: explain how to do this while still using automation. 
--> + +## ベストプラクティス + +### Secret APIを使用するクライアント + +Secret APIとやりとりするアプリケーションをデプロイするときには、[RBAC]( +/docs/reference/access-authn-authz/rbac/)のような[認可ポリシー]( +/docs/reference/access-authn-authz/authorization/)を使用して、アクセスを制限すべきです。 +Secretは様々な種類の重要な値を保持することが多く、サービスアカウントのトークンのようにKubernetes内部や、外部のシステムで昇格できるものも多くあります。個々のアプリケーションが、Secretの能力について推論することができたとしても、同じネームスペースの別のアプリケーションがその推定を覆すこともあります。 + +これらの理由により、ネームスペース内のSecretに対する`watch`や`list`リクエストはかなり強力な能力であり、避けるべきです。Secretのリストを取得することはクライアントにネームスペース内の全てのSecretの値を調べさせることを認めるからです。クラスター内の全てのSecretに対する`watch`、`list`権限は最も特権的な、システムレベルのコンポーネントに限って認めるべきです。 + +Secret APIへのアクセスが必要なアプリケーションは、必要なSecretに対する`get`リクエストを発行すべきです。管理者は全てのSecretに対するアクセスは制限しつつ、アプリケーションが必要とする[個々のインスタンスに対するアクセス許可](/docs/reference/access-authn-authz/rbac/#referring-to-resources)を与えることができます。 + +`get`リクエストの繰り返しに対するパフォーマンスを向上するために、クライアントはSecretを参照するリソースを設計し、それを`watch`して、参照が変更されたときにSecretを再度リクエストすることができます。加えて、個々のリソースを`watch`することのできる["bulk watch" API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md)が提案されており、将来のKubernetesリリースにて利用可能になる可能性があります。 + +## セキュリティ特性 + +### 保護 + +Secretはそれを使用するPodとは独立に作成されるので、Podを作ったり、参照したり、編集したりするワークフローにおいてSecretが晒されるリスクは軽減されています。 +システムは、可能であればSecretの内容をディスクに書き込まないような、Secretについて追加の考慮も行っています。 + +Secretはノード上のPodが必要とした場合のみ送られます。 +kubeletはSecretがディスクストレージに書き込まれないよう、`tmpfs`に保存します。 +Secretを必要とするPodが削除されると、kubeletはSecretのローカルコピーも同様に削除します。 + +同一のノードにいくつかのPodに対する複数のSecretが存在することもあります。 +しかし、コンテナから参照できるのはPodが要求したSecretのみです。 +そのため、あるPodが他のPodのためのSecretにアクセスすることはできません。 + +Podに複数のコンテナが含まれることもあります。しかし、Podの各コンテナはコンテナ内からSecretを参照するために`volumeMounts`によってSecretボリュームを要求する必要があります。 +これは[Podレベルでのセキュリティ分離](#use-case-secret-visible-to-one-container-in-a-pod)を実装するのに便利です。 + +ほとんどのKubernetesディストリビューションにおいては、ユーザーとAPIサーバー間やAPIサーバーからkubelet間の通信はSSL/TLSで保護されています。 +そのような経路で伝送される場合、Secretは保護されています。 + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + + +[保存データの暗号化](/docs/tasks/administer-cluster/encrypt-data/)を有効にして、Secretが{{< glossary_tooltip term_id="etcd" >}}に平文で保存されないようにすることができます。 + +### リスク + + - APIサーバーでは、機密情報は{{< glossary_tooltip term_id="etcd" >}}に保存されます。 + そのため、 + - 管理者はクラスターデータの保存データの暗号化を有効にすべきです(v1.13以降が必要)。 + - 管理者はetcdへのアクセスを管理ユーザに限定すべきです。 + - 管理者はetcdで使用していたディスクを使用しなくなったときにはそれをワイプするか完全消去したくなるでしょう。 + - クラスターの中でetcdが動いている場合、管理者はetcdのピアツーピア通信がSSL/TLSを利用していることを確認すべきです。 + - Secretをマニフェストファイル(JSONまたはYAML)を介して設定する場合、それはBase64エンコードされた機密情報を含んでいるので、ファイルを共有したりソースリポジトリに入れることは秘密が侵害されることを意味します。Base64エンコーディングは暗号化手段では _なく_ 、平文と同様であると判断すべきです。 + - アプリケーションはボリュームからSecretの値を読み取った後も、その値を保護する必要があります。例えば意図せずログに出力する、信用できない相手に送信するようなことがないようにです。 + - Secretを利用するPodを作成できるユーザーはSecretの値を見ることができます。たとえAPIサーバーのポリシーがユーザーにSecretの読み取りを許可していなくても、ユーザーはSecretを晒すPodを実行することができます。 + - 現在、任意のノードでルート権限を持つ人は誰でも、kubeletに偽装することで _任意の_ SecretをAPIサーバーから読み取ることができます。 + 単一のノードのルート権限を不正に取得された場合の影響を抑えるため、実際に必要としているノードに対してのみSecretを送る機能が計画されています。 diff --git a/content/ja/docs/concepts/services-networking/service-topology.md b/content/ja/docs/concepts/services-networking/service-topology.md new file mode 100644 index 0000000000..6af79ca7b3 --- /dev/null +++ b/content/ja/docs/concepts/services-networking/service-topology.md @@ -0,0 +1,144 @@ +--- +title: Serviceトポロジー +feature: + title: Serviceトポロジー + description: > + Serviceのトラフィックをクラスタートポロジーに基づいてルーティングします。 +content_type: concept +weight: 10 +--- + + +<!-- overview --> + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + 
+ +*Serviceトポロジー*を利用すると、Serviceのトラフィックをクラスターのノードトポロジーに基づいてルーティングできるようになります。たとえば、あるServiceのトラフィックに対して、できるだけ同じノードや同じアベイラビリティゾーン上にあるエンドポイントを優先してルーティングするように指定できます。 + +<!-- body --> + +## はじめに + +デフォルトでは、`ClusterIP`や`NodePort`Serviceに送信されたトラフィックは、Serviceに対応する任意のバックエンドのアドレスにルーティングされる可能性があります。しかし、Kubernetes 1.7以降では、「外部の」トラフィックをそのトラフィックを受信したノード上のPodにルーティングすることが可能になりました。しかし、この機能は`ClusterIP`Serviceでは対応しておらず、ゾーン内ルーティングなどのより複雑なトポロジーは実現不可能でした。*Serviceトポロジー*の機能を利用すれば、Serviceの作者が送信元ノードと送信先ノードのNodeのラベルに基づいてトラフィックをルーティングするためのポリシーを定義できるようになるため、この問題を解決できます。 + +送信元と送信先の間のNodeラベルのマッチングを使用することにより、オペレーターは、そのオペレーターの要件に適したメトリクスを使用して、お互いに「より近い」または「より遠い」ノードのグループを指定できます。たとえば、パブリッククラウド上のさまざまなオペレーターでは、Serviceのトラフィックを同一ゾーン内に留めようとする傾向があります。パブリッククラウドでは、ゾーンをまたぐトラフィックでは関連するコストがかかる一方、ゾーン内のトラフィックにはコストがかからない場合があるからです。その他のニーズとしては、DaemonSetが管理するローカルのPodにトラフィックをルーティングできるようにしたり、レイテンシーを低く抑えるために同じラック上のスイッチに接続されたノードにトラフィックを限定したいというものがあります。 + +## Serviceトポロジーを利用する + +クラスターのServiceトポロジーが有効になっていれば、Serviceのspecに`topologyKeys`フィールドを指定することで、Serviceのトラフィックのルーティングを制御できます。このフィールドは、Nodeラベルの優先順位リストで、このServiceにアクセスするときにエンドポイントをソートするために使われます。トラフィックは、最初のラベルの値が送信元Nodeのものと一致するNodeに送信されます。一致したノード上にServiceに対応するバックエンドが存在しなかった場合は、2つ目のラベルについて検討が行われ、同様に、残っているラベルが順番に検討されます。 + +一致するキーが1つも見つからなかった場合、トラフィックは、Serviceに対応するバックエンドが存在しなかったかのように拒否されます。言い換えると、エンドポイントは、利用可能なバックエンドが存在する最初のトポロジーキーに基づいて選択されます。このフィールドが指定され、すべてのエントリーでクライアントのトポロジーに一致するバックエンドが存在しない場合、そのクライアントに対するバックエンドが存在しないものとしてコネクションが失敗します。「任意のトポロジー」を意味する特別な値`"*"`を指定することもできます。任意の値にマッチするこの値に意味があるのは、リストの最後の値として使った場合だけです。 + +`topologyKeys`が未指定または空の場合、トポロジーの制約は適用されません。 + +ホスト名、ゾーン名、リージョン名のラベルが付いたNodeを持つクラスターについて考えてみましょう。このとき、Serviceの`topologyKeys`の値を設定することで、トラフィックの向きを以下のように制御できます。 + +* トラフィックを同じノード上のエンドポイントのみに向け、同じノード上にエンドポイントが1つも存在しない場合には失敗するようにする: `["kubernetes.io/hostname"]`。 +* 同一ノード上のエンドポイントを優先し、失敗した場合には同一ゾーン上のエンドポイント、同一リージョンのエンドポイントへとフォールバックし、それ以外の場合には失敗する: `["kubernetes.io/hostname", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`。これは、たとえばデータのローカリティが非常に重要である場合などに役に立ちます。 +* 同一ゾーンを優先しますが、ゾーン内に利用可能なノードが存在しない場合は、利用可能な任意のエンドポイントにフォールバックする: `["topology.kubernetes.io/zone", "*"]`。 + +## 制約 + +* Serviceトポロジーは`externalTrafficPolicy=Local`と互換性がないため、Serviceは2つの機能を同時に利用できません。2つの機能を同じクラスター上の異なるServiceでそれぞれ利用することは可能ですが、同一のService上では利用できません。 + +* 有効なトポロジーキーは、現在は`kubernetes.io/hostname`、`topology.kubernetes.io/zone`、および`topology.kubernetes.io/region`に限定されています。しかし、将来は一般化され、他のノードラベルも使用できるようになる予定です。 + +* トポロジーキーは有効なラベルのキーでなければならず、最大で16個のキーまで指定できます。 + +* すべての値をキャッチする`"*"`を使用する場合は、トポロジーキーの最後の値として指定しなければなりません。 + +## 例 + +以下では、Serviceトポロジーの機能を利用したよくある例を紹介します。 + +### ノードローカルのエンドポイントだけを使用する + +ノードローカルのエンドポイントのみにルーティングするServiceの例です。もし同一ノード上にエンドポイントが存在しない場合、トラフィックは損失します。 + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" +``` + +### ノードローカルのエンドポイントを優先して使用する + +ノードローカルのエンドポイントを優先して使用しますが、ノードローカルのエンドポイントが存在しない場合にはクラスター全体のエンドポイントにフォールバックするServiceの例です。 + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "*" +``` + + +### 同一ゾーンや同一リージョンのエンドポイントだけを使用する + +同一リージョンのエンドポイントより同一ゾーンのエンドポイントを優先するServiceの例です。もしいずれのエンドポイントも存在しない場合、トラフィックは損失します。 + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +
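+  # 同一ゾーンのエンドポイントを優先し、存在しない場合は同一リージョンのエンドポイントにフォールバックします(いずれも存在しない場合、トラフィックは損失します)。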
topologyKeys: + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" +``` + +### ノードローカル、同一ゾーン、同一リージョンのエンドポイントを優先して使用する + +ノードローカル、同一ゾーン、同一リージョンのエンドポイントを順番に優先し、クラスター全体のエンドポイントにフォールバックするServiceの例です。 + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" + - "*" +``` + +## {{% heading "whatsnext" %}} + +* [Serviceトポロジーを有効にする](/docs/tasks/administer-cluster/enabling-service-topology)を読む。 +* [サービスとアプリケーションの接続](/ja/docs/concepts/services-networking/connect-applications-service/)を読む。 + diff --git a/content/ja/docs/reference/glossary/configmap.md b/content/ja/docs/reference/glossary/configmap.md new file mode 100755 index 0000000000..434c64b7a7 --- /dev/null +++ b/content/ja/docs/reference/glossary/configmap.md @@ -0,0 +1,17 @@ +--- +title: ConfigMap +id: configmap +date: 2018-04-12 +full_link: /ja/docs/concepts/configuration/configmap/ +short_description: > + 機密性のないデータをキーと値のペアで保存するために使用されるAPIオブジェクトです。環境変数、コマンドライン引数、またはボリューム内の設定ファイルとして使用できます。 +aka: +tags: +- core-object +--- + + 機密性のないデータをキーと値のペアで保存するために使用されるAPIオブジェクトです。{{< glossary_tooltip text="Pod" term_id="pod" >}}は、環境変数、コマンドライン引数、または{{< glossary_tooltip text="ボリューム" term_id="volume" >}}内の設定ファイルとしてConfigMapを使用できます。 + +<!--more--> + +ConfigMapを使用すると、環境固有の設定を{{< glossary_tooltip text="コンテナイメージ" term_id="image" >}}から分離できるため、アプリケーションを簡単に移植できるようになります。 diff --git a/content/ja/docs/setup/learning-environment/kind.md b/content/ja/docs/setup/learning-environment/kind.md new file mode 100644 index 0000000000..8c17327144 --- /dev/null +++ b/content/ja/docs/setup/learning-environment/kind.md @@ -0,0 +1,15 @@ +--- +title: Kindを使用してKubernetesをインストールする +weight: 40 +content_type: concept +--- + +<!-- overview --> + +Kindは、Dockerコンテナをノードとして使用して、ローカルのKubernetesクラスターを実行するためのツールです。 + +<!-- body --> + +## インストール + +[Kindをインストールする](https://kind.sigs.k8s.io/docs/user/quick-start/)を参照してください。 diff --git a/content/ja/docs/tasks/administer-cluster/enabling-service-topology.md b/content/ja/docs/tasks/administer-cluster/enabling-service-topology.md new file mode 100644 index 0000000000..304e0937a6 --- /dev/null +++ b/content/ja/docs/tasks/administer-cluster/enabling-service-topology.md @@ -0,0 +1,44 @@ +--- +title: Serviceトポロジーを有効にする +content_type: task +--- + +<!-- overview --> +このページでは、Kubernetes上でServiceトポロジーを有効にする方法の概要について説明します。 + + +## {{% heading "prerequisites" %}} + + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + + +<!-- steps --> + +## はじめに + +*Serviceトポロジー*は、クラスターのノードのトポロジーに基づいてトラフィックをルーティングできるようにする機能です。たとえば、あるServiceのトラフィックに対して、できるだけ同じノードや同じアベイラビリティゾーン上にあるエンドポイントを優先してルーティングするように指定できます。 + +## 前提 + +トポロジーを考慮したServiceのルーティングを有効にするには、以下の前提を満たしている必要があります。 + + * Kubernetesバージョン1.17以降である + * {{< glossary_tooltip text="Kube-proxy" term_id="kube-proxy" >}}がiptablesモードまたはIPVSモードで稼働している + * [Endpoint Slice](/docs/concepts/services-networking/endpoint-slices/)を有効にしている + +## Serviceトポロジーを有効にする + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +Serviceトポロジーを有効にするには、すべてのKubernetesコンポーネントで`ServiceTopology`と`EndpointSlice`フィーチャーゲートを有効にする必要があります。 + +``` +--feature-gates="ServiceTopology=true,EndpointSlice=true" +``` + +## {{% heading "whatsnext" %}} + +* [Serviceトポロジー](/ja/docs/concepts/services-networking/service-topology)のコンセプトについて読む +* [Endpoint 
Slice](/docs/concepts/services-networking/endpoint-slices)について読む +* [サービスとアプリケーションの接続](/ja/docs/concepts/services-networking/connect-applications-service/)を読む + diff --git a/content/ja/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/ja/docs/tasks/manage-hugepages/scheduling-hugepages.md new file mode 100644 index 0000000000..ed87cf4e3f --- /dev/null +++ b/content/ja/docs/tasks/manage-hugepages/scheduling-hugepages.md @@ -0,0 +1,93 @@ +--- +title: huge pageを管理する +content_type: task +description: クラスター内のスケジュール可能なリソースとしてhuge pageの設定と管理を行います。 +--- + +<!-- overview --> +{{< feature-state state="stable" >}} + +Kubernetesでは、事前割り当てされたhuge pageをPod内のアプリケーションに割り当てたり利用したりすることをサポートしています。このページでは、ユーザーがhuge pageを利用できるようにする方法について説明します。 + +## {{% heading "prerequisites" %}} + +1. Kubernetesのノードがhuge pageのキャパシティを報告するためには、ノード上でhuge pageを事前割り当てしておく必要があります。1つのノードでは複数のサイズのhuge pageが事前割り当てできます。 + +ノードは、すべてのhuge pageリソースを、スケジュール可能なリソースとして自動的に探索・報告してくれます。 + +<!-- steps --> + +## API + +huge pageはコンテナレベルのリソース要求で`hugepages-<size>`という名前のリソースを指定することで利用できます。ここで、`<size>`は、特定のノード上でサポートされている整数値を使った最も小さなバイナリ表記です。たとえば、ノードが2048KiBと1048576KiBのページサイズをサポートしている場合、ノードはスケジュール可能なリソースとして、`hugepages-2Mi`と`hugepages-1Gi`の2つのリソースを公開します。CPUやメモリとは違い、huge pageはオーバーコミットをサポートしません。huge pageリソースをリクエストするときには、メモリやCPUリソースを同時にリクエストしなければならないことに注意してください。 + +1つのPodのspec内に書くことで、Podから複数のサイズのhuge pageを利用することもできます。その場合、すべてのボリュームマウントで`medium: HugePages-<hugepagesize>`という表記を使う必要があります。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: huge-pages-example +spec: + containers: + - name: example + image: fedora:latest + command: + - sleep + - inf + volumeMounts: + - mountPath: /hugepages-2Mi + name: hugepage-2mi + - mountPath: /hugepages-1Gi + name: hugepage-1gi + resources: + limits: + hugepages-2Mi: 100Mi + hugepages-1Gi: 2Gi + memory: 100Mi + requests: + memory: 100Mi + volumes: + - name: hugepage-2mi + emptyDir: + medium: HugePages-2Mi + - name: hugepage-1gi + emptyDir: + medium: HugePages-1Gi +``` + +Podで1種類のサイズのhuge pageをリクエストするときだけは、`medium: HugePages`という表記を使うこともできます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: huge-pages-example +spec: + containers: + - name: example + image: fedora:latest + command: + - sleep + - inf + volumeMounts: + - mountPath: /hugepages + name: hugepage + resources: + limits: + hugepages-2Mi: 100Mi + memory: 100Mi + requests: + memory: 100Mi + volumes: + - name: hugepage + emptyDir: + medium: HugePages +``` + +- huge pageのrequestsはlimitsと等しくなければなりません。limitsを指定した場合にはこれがデフォルトですが、requestsを指定しなかった場合にはデフォルトではありません。 +- huge pageはコンテナのスコープで隔離されるため、各コンテナにはそれぞれのcgroupサンドボックスの中でcontainer specでリクエストされた通りのlimitが設定されます。 +- huge pageベースのEmptyDirボリュームは、Podがリクエストしたよりも大きなサイズのページメモリーを使用できません。 +- `shmget()`に`SHM_HUGETLB`を指定して取得したhuge pageを使用するアプリケーションは、`/proc/sys/vm/hugetlb_shm_group`に一致する補助グループ(supplemental group)を使用して実行する必要があります。 +- namespace内のhuge pageの使用量は、ResourceQuotaに対して`cpu`や`memory`のような他の計算リソースと同じように`hugepages-<size>`というトークンを使用することで制御できます。 +- 複数のサイズのhuge pageのサポートはフィーチャーゲートによる設定が必要です。{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}と{{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}上で、`HugePageStorageMediumSize`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を使用すると有効にできます(`--feature-gates=HugePageStorageMediumSize=true`)。 diff --git a/content/ja/docs/tasks/tls/_index.md b/content/ja/docs/tasks/tls/_index.md new file mode 100755 index 0000000000..42234c709e --- /dev/null +++ b/content/ja/docs/tasks/tls/_index.md @@ -0,0 +1,6 @@ +--- +title: "TLS" 
+weight: 100 +description: Transport Layer Security(TLS)を使用して、クラスター内のトラフィックを保護する方法について理解します。 +--- + diff --git a/content/ja/docs/tasks/tls/certificate-rotation.md b/content/ja/docs/tasks/tls/certificate-rotation.md new file mode 100644 index 0000000000..1d2026962c --- /dev/null +++ b/content/ja/docs/tasks/tls/certificate-rotation.md @@ -0,0 +1,41 @@ +--- +title: Kubeletの証明書のローテーションを設定する +content_type: task +--- + +<!-- overview --> +このページでは、kubeletの証明書のローテーションを設定する方法を説明します。 + +{{< feature-state for_k8s_version="v1.8" state="beta" >}} + +## {{% heading "prerequisites" %}} + +* Kubernetesはバージョン1.8.0以降である必要があります。 + +<!-- steps --> + +## 概要 + +kubeletは、Kubernetes APIへの認証のために証明書を使用します。デフォルトでは、証明書は1年間の有効期限付きで発行されるため、頻繁に更新する必要はありません。 + +Kubernetes 1.8にはベータ機能の[kubelet certificate rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)が含まれているため、現在の証明書の有効期限が近づいたときに自動的に新しい鍵を生成して、Kubernetes APIに新しい証明書をリクエストできます。新しい証明書が利用できるようになると、Kubernetes APIへの接続の認証に利用されます。 + +## クライアント証明書のローテーションを有効にする + +`kubelet`プロセスは`--rotate-certificates`という引数を受け付けます。この引数によって、現在使用している証明書の有効期限が近づいたときに、kubeletが自動的に新しい証明書をリクエストするかどうかを制御できます。証明書のローテーションはベータ機能であるため、`--feature-gates=RotateKubeletClientCertificate=true`を使用してフィーチャーフラグを有効にする必要もあります。 + +`kube-controller-manager`プロセスは、`--experimental-cluster-signing-duration`という引数を受け付け、この引数で証明書が発行される期間を制御できます。 + +## 証明書のローテーションの設定を理解する + +kubeletが起動すると、ブートストラップが設定されている場合(`--bootstrap-kubeconfig`フラグを使用した場合)、初期証明書を使用してKubernetes APIに接続して、証明書署名リクエスト(certificate signing request、CSR)を発行します。証明書署名リクエストのステータスは、次のコマンドで表示できます。 + +```sh +kubectl get csr +``` + +ノード上のkubeletから発行された証明書署名リクエストは、初めは`Pending`状態です。証明書署名リクエストが特定の条件を満たすと、コントローラーマネージャーに自動的に承認され、`Approved`状態になります。次に、コントローラーマネージャーは`--experimental-cluster-signing-duration`パラメーターで指定された有効期限で発行された証明書に署名を行い、署名された証明書が証明書署名リクエストに添付されます。 + +kubeletは署名された証明書をKubernetes APIから取得し、ディスク上の`--cert-dir`で指定された場所に書き込みます。その後、kubeletは新しい証明書を使用してKubernetes APIに接続するようになります。 + +署名された証明書の有効期限が近づくと、kubeletはKubernetes APIを使用して新しい証明書署名リクエストを自動的に発行します。再び、コントローラーマネージャーは証明書のリクエストを自動的に承認し、署名された証明書を証明書署名リクエストに添付します。kubeletは新しい署名された証明書をKubernetes APIから取得してディスクに書き込みます。その後、kubeletは既存のコネクションを更新して、新しい証明書でKubernetes APIに再接続します。 diff --git a/content/ja/docs/tutorials/stateful-application/cassandra.md b/content/ja/docs/tutorials/stateful-application/cassandra.md new file mode 100644 index 0000000000..32b215e62d --- /dev/null +++ b/content/ja/docs/tutorials/stateful-application/cassandra.md @@ -0,0 +1,261 @@ +--- +title: "例: StatefulSetを使用したCassandraのデプロイ" +content_type: tutorial +weight: 30 +--- + +<!-- overview --> +このチュートリアルでは、[Apache Cassandra](http://cassandra.apache.org/)をKubernetes上で実行する方法を紹介します。データベースの一種であるCassandraには、データの耐久性(アプリケーションの*状態*)を提供するために永続ストレージが必要です。この例では、カスタムのCassandraのseed providerにより、Cassandraクラスターに参加した新しいCassandraインスタンスを検出できるようにします。 + +*StatefulSet*を利用すると、ステートフルなアプリケーションをKubernetesクラスターにデプロイするのが簡単になります。このチュートリアルで使われている機能のより詳しい情報は、[StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/)を参照してください。 + +{{< note >}} +CassandraとKubernetesは、ともにクラスターのメンバーを表すために*ノード*という用語を使用しています。このチュートリアルでは、StatefulSetに属するPodはCassandraのノードであり、Cassandraクラスター(*ring*と呼ばれます)のメンバーでもあります。これらのPodがKubernetesクラスター内で実行されるとき、Kubernetesのコントロールプレーンは、PodをKubernetesの{{< glossary_tooltip text="Node" term_id="node" >}}上にスケジュールします。 + +Cassandraノードが開始すると、*シードリスト*を使ってring上の他のノードの検出が始まります。このチュートリアルでは、Kubernetesクラスター内に現れた新しいCassandra Podを検出するカスタムのCassandraのseed providerをデプロイします。 +{{< /note >}} + + +## {{% heading "objectives" %}} + +* Cassandraのheadless {{< 
glossary_tooltip text="Service" term_id="service" >}}を作成して検証する。 +* {{< glossary_tooltip term_id="StatefulSet" >}}を使用してCassandra ringを作成する。 +* StatefulSetを検証する。 +* StatefulSetを編集する。 +* StatefulSetと{{< glossary_tooltip text="Pod" term_id="pod" >}}を削除する。 + + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + +このチュートリアルを完了するには、{{< glossary_tooltip text="Pod" term_id="pod" >}}、{{< glossary_tooltip text="Service" term_id="service" >}}、{{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}}の基本についてすでに知っている必要があります。 + +### Minikubeのセットアップに関する追加の設定手順 + +{{< caution >}} +[Minikube](/ja/docs/getting-started-guides/minikube/)は、デフォルトでは1024MiBのメモリと1CPUに設定されます。デフォルトのリソース設定で起動したMinikubeでは、このチュートリアルの実行中にリソース不足のエラーが発生してしまいます。このエラーを回避するためにはMinikubeを次の設定で起動してください。 + +```shell +minikube start --memory 5120 --cpus=4 +``` +{{< /caution >}} + + + +<!-- lessoncontent --> +## Cassandraのheadless Serviceを作成する {#creating-a-cassandra-headless-service} + +Kubernetesでは、{{< glossary_tooltip text="Service" term_id="service" >}}は同じタスクを実行する{{< glossary_tooltip text="Pod" term_id="pod" >}}の集合を表します。 + +以下のServiceは、Cassandra Podとクラスター内のクライアント間のDNSルックアップに使われます。 + +{{< codenew file="application/cassandra/cassandra-service.yaml" >}} + +`cassandra-service.yaml`ファイルから、Cassandra StatefulSetのすべてのメンバーをトラッキングするServiceを作成します。 + +```shell +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml +``` + + +### 検証 (オプション) {#validating} + +Cassandra Serviceを取得します。 + +```shell +kubectl get svc cassandra +``` + +結果は次のようになります。 + +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +cassandra ClusterIP None <none> 9042/TCP 45s +``` + + +`cassandra`という名前のServiceが表示されない場合、作成に失敗しています。よくある問題のトラブルシューティングについては、[Serviceのデバッグ](/ja/docs/tasks/debug-application-cluster/debug-service/)を読んでください。 + +## StatefulSetを使ってCassandra ringを作成する + +以下に示すStatefulSetマニフェストは、3つのPodからなるCassandra ringを作成します。 + +{{< note >}} +この例ではMinikubeのデフォルトのプロビジョナーを使用しています。クラウドを使用している場合、StatefulSetを更新してください。 +{{< /note >}} + +{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}} + +`cassandra-statefulset.yaml`ファイルから、CassandraのStatefulSetを作成します。 + +```shell +# cassandra-statefulset.yaml を編集せずにapplyできる場合は、このコマンドを使用してください +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml +``` + +クラスターに合わせて`cassandra-statefulset.yaml`を編集する必要がある場合、 https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml をダウンロードして、修正したバージョンを保存したフォルダからマニフェストを適用してください。 + +```shell +# cassandra-statefulset.yaml をローカルで編集する必要がある場合、このコマンドを使用してください +kubectl apply -f cassandra-statefulset.yaml +``` + + +## CassandraのStatefulSetを検証する + +1. CassandraのStatefulSetを取得します + + ```shell + kubectl get statefulset cassandra + ``` + + 結果は次のようになるはずです。 + + ``` + NAME DESIRED CURRENT AGE + cassandra 3 0 13s + ``` + + `StatefulSet`リソースがPodを順番にデプロイします。 + +1. Podを取得して順序付きの作成ステータスを確認します + + ```shell + kubectl get pods -l="app=cassandra" + ``` + + 結果は次のようになるはずです。 + + ```shell + NAME READY STATUS RESTARTS AGE + cassandra-0 1/1 Running 0 1m + cassandra-1 0/1 ContainerCreating 0 8s + ``` + + 3つすべてのPodのデプロイには数分かかる場合があります。デプロイが完了すると、同じコマンドは次のような結果を返します。 + + ``` + NAME READY STATUS RESTARTS AGE + cassandra-0 1/1 Running 0 10m + cassandra-1 1/1 Running 0 9m + cassandra-2 1/1 Running 0 8m + ``` + +3. 
1番目のPodの中でCassandraの[nodetool](https://cwiki.apache.org/confluence/display/CASSANDRA2/NodeTool)を実行して、ringのステータスを表示します。 + + ```shell + kubectl exec -it cassandra-0 -- nodetool status + ``` + + 結果は次のようになるはずです。 + + ``` + Datacenter: DC1-K8Demo + ====================== + Status=Up/Down + |/ State=Normal/Leaving/Joining/Moving + -- Address Load Tokens Owns (effective) Host ID Rack + UN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo + UN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo + UN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo + ``` + +## CassandraのStatefulSetを変更する + +`kubectl edit`を使うと、CassandraのStatefulSetのサイズを変更できます。 + +1. 次のコマンドを実行します。 + + ```shell + kubectl edit statefulset cassandra + ``` + + このコマンドを実行すると、ターミナルでエディタが起動します。変更が必要な行は`replicas`フィールドです。以下の例は、StatefulSetファイルの抜粋です。 + + ```yaml + # Please edit the object below. Lines beginning with a '#' will be ignored, + # and an empty file will abort the edit. If an error occurs while saving this file will be + # reopened with the relevant failures. + # + apiVersion: apps/v1 + kind: StatefulSet + metadata: + creationTimestamp: 2016-08-13T18:40:58Z + generation: 1 + labels: + app: cassandra + name: cassandra + namespace: default + resourceVersion: "323" + uid: 7a219483-6185-11e6-a910-42010a8a0fc0 + spec: + replicas: 3 + ``` + +1. レプリカ数を4に変更し、マニフェストを保存します。 + + これで、StatefulSetが4つのPodを実行するようにスケールされました。 + +1. CassandraのStatefulSetを取得して、変更を確かめます。 + + ```shell + kubectl get statefulset cassandra + ``` + + 結果は次のようになるはずです。 + + ``` + NAME DESIRED CURRENT AGE + cassandra 4 4 36m + ``` + + + +## {{% heading "cleanup" %}} + +StatefulSetを削除したりスケールダウンしても、StatefulSetに関係するボリュームは削除されません。StatefulSetに関連するすべてのリソースを自動的に破棄するよりも、データの方がより貴重であるため、安全のためにこのような設定になっています。 + +{{< warning >}} +ストレージクラスやreclaimポリシーによっては、*PersistentVolumeClaim*を削除すると、関連するボリュームも削除される可能性があります。PersistentVolumeClaimの削除後にもデータにアクセスできるとは決して想定しないでください。 +{{< /warning >}} + +1. 次のコマンドを実行して(単一のコマンドにまとめています)、CassandraのStatefulSetに含まれるすべてのリソースを削除します。 + + ```shell + grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ + && kubectl delete statefulset -l app=cassandra \ + && echo "Sleeping ${grace} seconds" 1>&2 \ + && sleep $grace \ + && kubectl delete persistentvolumeclaim -l app=cassandra + ``` + +1. 
次のコマンドを実行して、CassandraをセットアップしたServiceを削除します。 + + ```shell + kubectl delete service -l app=cassandra + ``` + +## Cassandraコンテナの環境変数 + +このチュートリアルのPodでは、Googleの[コンテナレジストリ](https://cloud.google.com/container-registry/docs/)の[`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile)イメージを使用しました。このDockerイメージは[debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)をベースにしており、OpenJDK 8が含まれています。 + +このイメージには、Apache Debianリポジトリの標準のCassandraインストールが含まれます。環境変数を利用すると、`cassandra.yaml`に挿入された値を変更できます。 + +| 環境変数 | デフォルト値 | +| ------------------------ |:---------------: | +| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | +| `CASSANDRA_NUM_TOKENS` | `32` | +| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | + + + +## {{% heading "whatsnext" %}} + + +* [StatefulSetのスケール](/ja/docs/tasks/run-application/scale-stateful-set/)を行う方法を学ぶ。 +* [*KubernetesSeedProvider*](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)についてもっと学ぶ。 +* カスタムの[Seed Providerの設定](https://git.k8s.io/examples/cassandra/java/README.md)についてもっと学ぶ。 + + + diff --git a/content/ko/_index.html b/content/ko/_index.html index c8d9d593c1..cde99ff2d3 100644 --- a/content/ko/_index.html +++ b/content/ko/_index.html @@ -41,7 +41,6 @@ Google이 일주일에 수십억 개의 컨테이너들을 운영하게 해준 <button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button> <br> <br> - <br> <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu20" button id="desktopKCButton">Attend KubeCon in Amsterdam on August 13-16, 2020</a> <br> <br> diff --git a/content/zh/_index.html b/content/zh/_index.html index 3dab160c77..71688a2e67 100644 --- a/content/zh/_index.html +++ b/content/zh/_index.html @@ -67,7 +67,6 @@ Kubernetes 是开源系统,可以自由地部署在企业内部,私有云、 <button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button> <br> <br> - <br> <!-- <a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">Attend KubeCon in Shanghai on Nov. 
13-15, 2018</a> --> <a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">参加11月13日到15日的上海 KubeCon</a> <br> diff --git a/content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md b/content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md new file mode 100644 index 0000000000..8941ce5a54 --- /dev/null +++ b/content/zh/blog/_posts/2016-08-00-Stateful-Applications-Using-Kubernetes-Datera.md @@ -0,0 +1,696 @@ +--- +title: " 使用 Kubernetes Pet Sets 和 Datera Elastic Data Fabric 的 FlexVolume 扩展有状态的应用程序 " +date: 2016-08-29 +slug: stateful-applications-using-kubernetes-datera +url: /zh/blog/2016/08/Stateful-Applications-Using-Kubernetes-Datera +--- +<!-- +--- +title: " Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric " +date: 2016-08-29 +slug: stateful-applications-using-kubernetes-datera +url: /blog/2016/08/Stateful-Applications-Using-Kubernetes-Datera +--- +---> + +<!-- +_Editor’s note: today’s guest post is by Shailesh Mittal, Software Architect and Ashok Rajagopalan, Sr Director Product at Datera Inc, talking about Stateful Application provisioning with Kubernetes on Datera Elastic Data Fabric._ +---> +_编者注:今天的邀请帖子来自 Datera 公司的软件架构师 Shailesh Mittal 和高级产品总监 Ashok Rajagopalan,介绍在 Datera Elastic Data Fabric 上用 Kubernetes 配置状态应用程序。_ + +<!-- +**Introduction** + +Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, the procedure to sequence the provisioning and startup, the ability to scale and associate durably by [Pet Sets](/docs/user-guide/petset/) has provided the ability to automate to scale the “Pets” (applications that require consistent handling and durable placement). +---> +**简介** + +用户从无状态工作负载转移到运行有状态应用程序,Kubernetes 中的持久卷是基础。虽然 Kubernetes 早已支持有状态的应用程序,比如 MySQL、Kafka、Cassandra 和 Couchbase,但是 Pet Sets 的引入明显改善了情况。特别是,[Pet Sets](/docs/user-guide/petset/) 具有持续扩展和关联的能力,在配置和启动的顺序过程中,可以自动缩放“Pets”(需要连续处理和持久放置的应用程序)。 + +<!-- +Datera, elastic block storage for cloud deployments, has [seamlessly integrated with Kubernetes](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13) through the [FlexVolume](/docs/user-guide/volumes/#flexvolume) framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications. +---> +Datera 是用于云部署的弹性块存储,可以通过 [FlexVolume](/docs/user-guide/volumes/#flexvolume) 框架与 [Kubernetes 无缝集成](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13)。基于容器的基本原则,Datera 允许应用程序的资源配置与底层物理基础架构分离,为有状态的应用程序提供简洁的协议(也就是说,不依赖底层物理基础结构及其相关内容)、声明式格式和最后移植的能力。 + +<!-- +While Kubernetes allows for great flexibility to define the underlying application infrastructure through yaml configurations, Datera allows for that configuration to be passed to the storage infrastructure to provide persistence. Through the notion of Datera AppTemplates, in a Kubernetes environment, stateful applications can be automated to scale. 
+---> +Kubernetes 可以通过 yaml 配置来灵活定义底层应用程序基础架构,而 Datera 可以将该配置传递给存储基础结构以提供持久性。通过 Datera AppTemplates 声明,在 Kubernetes 环境中,有状态的应用程序可以自动扩展。 + + + + +<!-- +**Deploying Persistent Storage** + + + +Persistent storage is defined using the Kubernetes [PersistentVolume](/docs/user-guide/persistent-volumes/#persistent-volumes) subsystem. PersistentVolumes are volume plugins and define volumes that live independently of the lifecycle of the pod that is using it. They are implemented as NFS, iSCSI, or by cloud provider specific storage system. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods. +---> +**部署永久性存储** + + + +永久性存储是通过 Kubernetes 的子系统 [PersistentVolume](/docs/user-guide/persistent-volumes/#persistent-volumes) 定义的。PersistentVolumes 是卷插件,它定义的卷的生命周期和使用它的 Pod 相互独立。PersistentVolumes 由 NFS、iSCSI 或云提供商的特定存储系统实现。Datera 开发了用于 PersistentVolumes 的卷插件,可以在 Datera Data Fabric 上为 Kubernetes 的 Pod 配置 iSCSI 块存储。 + + +<!-- +The Datera volume plugin gets invoked by kubelets on minion nodes and relays the calls to the Datera Data Fabric over its REST API. Below is a sample deployment of a PersistentVolume with the Datera plugin: +---> +Datera 卷插件从 minion nodes 上的 kubelet 调用,并通过 REST API 回传到 Datera Data Fabric。以下是带有 Datera 插件的 PersistentVolume 的部署示例: + + + ``` + apiVersion: v1 + + kind: PersistentVolume + + metadata: + + name: pv-datera-0 + + spec: + + capacity: + + storage: 100Gi + + accessModes: + + - ReadWriteOnce + + persistentVolumeReclaimPolicy: Retain + + flexVolume: + + driver: "datera/iscsi" + + fsType: "xfs" + + options: + + volumeID: "kube-pv-datera-0" + + size: “100" + + replica: "3" + + backstoreServer: "[tlx170.tlx.daterainc.com](http://tlx170.tlx.daterainc.com/):7717” + ``` + + +<!-- +This manifest defines a PersistentVolume of 100 GB to be provisioned in the Datera Data Fabric, should a pod request the persistent storage. +---> +为 Pod 申请 PersistentVolume,要按照以下清单在 Datera Data Fabric 中配置 100 GB 的 PersistentVolume。 + + + + ``` +[root@tlx241 /]# kubectl get pv + +NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE + +pv-datera-0 100Gi RWO Available 8s + +pv-datera-1 100Gi RWO Available 2s + +pv-datera-2 100Gi RWO Available 7s + +pv-datera-3 100Gi RWO Available 4s + ``` + + +<!-- +**Configuration** + + + +The Datera PersistenceVolume plugin is installed on all minion nodes. When a pod lands on a minion node with a valid claim bound to the persistent storage provisioned earlier, the Datera plugin forwards the request to create the volume on the Datera Data Fabric. All the options that are specified in the PersistentVolume manifest are sent to the plugin upon the provisioning request. +---> +**配置** + + + +Datera PersistenceVolume 插件安装在所有 minion node 上。minion node 的声明是绑定到之前设置的永久性存储上的,当 Pod 进入具备有效声明的 minion node 上时,Datera 插件会转发请求,从而在 Datera Data Fabric 上创建卷。根据配置请求,PersistentVolume 清单中所有指定的选项都将发送到插件。 + +<!-- +Once a volume is provisioned in the Datera Data Fabric, volumes are presented as an iSCSI block device to the minion node, and kubelet mounts this device for the containers (in the pod) to access it. +---> +在 Datera Data Fabric 中配置的卷会作为 iSCSI 块设备呈现给 minion node,并且 kubelet 将该设备安装到容器(在 Pod 中)进行访问。 + + ![](https://lh4.googleusercontent.com/ILlUm1HrWhGa8uTt97dQ786Gn20FHFZkavfucz05NHv6moZWiGDG7GlELM6o4CSzANWvZckoAVug5o4jMg17a-PbrfD1FRbDPeUCIc8fKVmVBNUsUPshWanXYkBa3gIJy5BnhLmZ) + + +<!-- +**Using Persistent Storage** + + + +Kubernetes PersistentVolumes are used along with a pod using PersistentVolume Claims. 
Once a claim is defined, it is bound to a PersistentVolume matching the claim’s specification. A typical claim for the PersistentVolume defined above would look like below: +---> +**使用永久性存储** + + + +Kubernetes PersistentVolumes 与具备 PersistentVolume Claims 的 Pod 一起使用。定义声明后,会被绑定到与声明规范匹配的 PersistentVolume 上。上面提到的定义 PersistentVolume 的典型声明如下所示: + + + + ``` +kind: PersistentVolumeClaim + +apiVersion: v1 + +metadata: + + name: pv-claim-test-petset-0 + +spec: + + accessModes: + + - ReadWriteOnce + + resources: + + requests: + + storage: 100Gi + ``` + + +<!-- +When this claim is defined and it is bound to a PersistentVolume, resources can be used with the pod specification: +---> +定义这个声明并将其绑定到 PersistentVolume 时,资源与 Pod 规范可以一起使用: + + + + ``` +[root@tlx241 /]# kubectl get pv + +NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE + +pv-datera-0 100Gi RWO Bound default/pv-claim-test-petset-0 6m + +pv-datera-1 100Gi RWO Bound default/pv-claim-test-petset-1 6m + +pv-datera-2 100Gi RWO Available 7s + +pv-datera-3 100Gi RWO Available 4s + + +[root@tlx241 /]# kubectl get pvc + +NAME STATUS VOLUME CAPACITY ACCESSMODES AGE + +pv-claim-test-petset-0 Bound pv-datera-0 0 3m + +pv-claim-test-petset-1 Bound pv-datera-1 0 3m + ``` + + +<!-- +A pod can use a PersistentVolume Claim like below: +---> +Pod 可以使用 PersistentVolume 声明,如下所示: + + + ``` +apiVersion: v1 + +kind: Pod + +metadata: + + name: kube-pv-demo + +spec: + + containers: + + - name: data-pv-demo + + image: nginx + + volumeMounts: + + - name: test-kube-pv1 + + mountPath: /data + + ports: + + - containerPort: 80 + + volumes: + + - name: test-kube-pv1 + + persistentVolumeClaim: + + claimName: pv-claim-test-petset-0 + ``` + + +<!-- +The result is a pod using a PersistentVolume Claim as a volume. It in-turn sends the request to the Datera volume plugin to provision storage in the Datera Data Fabric. 
+---> +程序的结果是 Pod 将 PersistentVolume Claim 作为卷。依次将请求发送到 Datera 卷插件,然后在 Datera Data Fabric 中配置存储。 + + + + ``` +[root@tlx241 /]# kubectl describe pods kube-pv-demo + +Name: kube-pv-demo + +Namespace: default + +Node: tlx243/172.19.1.243 + +Start Time: Sun, 14 Aug 2016 19:17:31 -0700 + +Labels: \<none\> + +Status: Running + +IP: 10.40.0.3 + +Controllers: \<none\> + +Containers: + + data-pv-demo: + + Container ID: [docker://ae2a50c25e03143d0dd721cafdcc6543fac85a301531110e938a8e0433f74447](about:blank) + + Image: nginx + + Image ID: [docker://sha256:0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d](about:blank) + + Port: 80/TCP + + State: Running + + Started: Sun, 14 Aug 2016 19:17:34 -0700 + + Ready: True + + Restart Count: 0 + + Environment Variables: \<none\> + +Conditions: + + Type Status + + Initialized True + + Ready True + + PodScheduled True + +Volumes: + + test-kube-pv1: + + Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) + + ClaimName: pv-claim-test-petset-0 + + ReadOnly: false + + default-token-q3eva: + + Type: Secret (a volume populated by a Secret) + + SecretName: default-token-q3eva + + QoS Tier: BestEffort + +Events: + + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + + --------- -------- ----- ---- ------------- -------- ------ ------- + + 43s 43s 1 {default-scheduler } Normal Scheduled Successfully assigned kube-pv-demo to tlx243 + + 42s 42s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Pulling pulling image "nginx" + + 40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Pulled Successfully pulled image "nginx" + + 40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Created Created container with docker id ae2a50c25e03 + + 40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Started Started container with docker id ae2a50c25e03 + ``` + + +<!-- +The persistent volume is presented as iSCSI device at minion node (tlx243 in this case): +---> +永久卷在 minion node(在本例中为 tlx243)中显示为 iSCSI 设备: + + + ``` +[root@tlx243 ~]# lsscsi + +[0:2:0:0] disk SMC SMC2208 3.24 /dev/sda + +[11:0:0:0] disk DATERA IBLOCK 4.0 /dev/sdb + + +[root@tlx243 datera~iscsi]# mount ``` grep sdb + +/dev/sdb on /var/lib/kubelet/pods/6b99bd2a-628e-11e6-8463-0cc47ab41442/volumes/datera~iscsi/pv-datera-0 type xfs (rw,relatime,attr2,inode64,noquota) + ``` + + +<!-- +Containers running in the pod see this device mounted at /data as specified in the manifest: +---> +在 Pod 中运行的容器按照清单中将设备安装在 /data 上: + + + ``` +[root@tlx241 /]# kubectl exec kube-pv-demo -c data-pv-demo -it bash + +root@kube-pv-demo:/# mount ``` grep data + +/dev/sdb on /data type xfs (rw,relatime,attr2,inode64,noquota) + ``` + + + +<!-- +**Using Pet Sets** + + + +Typically, pods are treated as stateless units, so if one of them is unhealthy or gets superseded, Kubernetes just disposes it. In contrast, a PetSet is a group of stateful pods that has a stronger notion of identity. The goal of a PetSet is to decouple this dependency by assigning identities to individual instances of an application that are not anchored to the underlying physical infrastructure. +---> +**使用 Pet Sets** + + + +通常,Pod 被视为无状态单元,因此,如果其中之一状态异常或被取代,Kubernetes 会将其丢弃。相反,PetSet 是一组有状态的 Pod,具有更强的身份概念。PetSet 可以将标识分配给应用程序的各个实例,这些应用程序没有与底层物理结构连接,PetSet 可以消除这种依赖性。 + + + +<!-- +A PetSet requires {0..n-1} Pets. Each Pet has a deterministic name, PetSetName-Ordinal, and a unique identity. Each Pet has at most one pod, and each PetSet has at most one Pet with a given identity. 
A PetSet ensures that a specified number of “pets” with unique identities are running at any given time. The identity of a Pet is comprised of: + +- a stable hostname, available in DNS +- an ordinal index +- stable storage: linked to the ordinal & hostname + + +A typical PetSet definition using a PersistentVolume Claim looks like below: +---> +每个 PetSet 需要{0..n-1}个 Pet。每个 Pet 都有一个确定的名字、PetSetName-Ordinal 和唯一的身份。每个 Pet 最多有一个 Pod,每个 PetSet 最多包含一个给定身份的 Pet。要确保每个 PetSet 在任何特定时间运行时,具有唯一标识的“pet”的数量都是确定的。Pet 的身份标识包括以下几点: + +- 一个稳定的主机名,可以在 DNS 中使用 +- 一个序号索引 +- 稳定的存储:链接到序号和主机名 + + +使用 PersistentVolume Claim 定义 PetSet 的典型例子如下所示: + + + ``` +# A headless service to create DNS records + +apiVersion: v1 + +kind: Service + +metadata: + + name: test-service + + labels: + + app: nginx + +spec: + + ports: + + - port: 80 + + name: web + + clusterIP: None + + selector: + + app: nginx + +--- + +apiVersion: apps/v1alpha1 + +kind: PetSet + +metadata: + + name: test-petset + +spec: + + serviceName: "test-service" + + replicas: 2 + + template: + + metadata: + + labels: + + app: nginx + + annotations: + + [pod.alpha.kubernetes.io/initialized:](http://pod.alpha.kubernetes.io/initialized:) "true" + + spec: + + terminationGracePeriodSeconds: 0 + + containers: + + - name: nginx + + image: [gcr.io/google\_containers/nginx-slim:0.8](http://gcr.io/google_containers/nginx-slim:0.8) + + ports: + + - containerPort: 80 + + name: web + + volumeMounts: + + - name: pv-claim + + mountPath: /data + + volumeClaimTemplates: + + - metadata: + + name: pv-claim + + annotations: + + [volume.alpha.kubernetes.io/storage-class:](http://volume.alpha.kubernetes.io/storage-class:) anything + + spec: + + accessModes: ["ReadWriteOnce"] + + resources: + + requests: + + storage: 100Gi + ``` + + +<!-- +We have the following PersistentVolume Claims available: +---> +我们提供以下 PersistentVolume Claim: + + + ``` +[root@tlx241 /]# kubectl get pvc + +NAME STATUS VOLUME CAPACITY ACCESSMODES AGE + +pv-claim-test-petset-0 Bound pv-datera-0 0 41m + +pv-claim-test-petset-1 Bound pv-datera-1 0 41m + +pv-claim-test-petset-2 Bound pv-datera-2 0 5s + +pv-claim-test-petset-3 Bound pv-datera-3 0 2s + ``` + + +<!-- +When this PetSet is provisioned, two pods get instantiated: +---> +配置 PetSet 时,将实例化两个 Pod: + + + ``` +[root@tlx241 /]# kubectl get pods + +NAMESPACE NAME READY STATUS RESTARTS AGE + +default test-petset-0 1/1 Running 0 7s + +default test-petset-1 1/1 Running 0 3s + ``` + + +<!-- +Here is how the PetSet test-petset instantiated earlier looks like: +---> +以下是一个 PetSet:test-petset 实例化之前的样子: + + + + ``` +[root@tlx241 /]# kubectl describe petset test-petset + +Name: test-petset + +Namespace: default + +Image(s): [gcr.io/google\_containers/nginx-slim:0.8](http://gcr.io/google_containers/nginx-slim:0.8) + +Selector: app=nginx + +Labels: app=nginx + +Replicas: 2 current / 2 desired + +Annotations: \<none\> + +CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700 + +Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed + +No volumes. + +No events. + ``` + + +<!-- +Once a PetSet is instantiated, such as test-petset below, upon increasing the number of replicas (i.e. 
the number of pods started with that PetSet), more pods get instantiated and more PersistentVolume Claims get bound to new pods: +---> +一旦实例化 PetSet(例如下面的 test-petset),随着副本数(从 PetSet 的初始 Pod 数量算起)的增加,实例化的 Pod 将变得更多,并且更多的 PersistentVolume Claim 会绑定到新的 Pod 上: + + + ``` +[root@tlx241 /]# kubectl patch petset test-petset -p'{"spec":{"replicas":"3"}}' + +"test-petset” patched + + +[root@tlx241 /]# kubectl describe petset test-petset + +Name: test-petset + +Namespace: default + +Image(s): [gcr.io/google\_containers/nginx-slim:0.8](http://gcr.io/google_containers/nginx-slim:0.8) + +Selector: app=nginx + +Labels: app=nginx + +Replicas: 3 current / 3 desired + +Annotations: \<none\> + +CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700 + +Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed + +No volumes. + +No events. + + +[root@tlx241 /]# kubectl get pods + +NAME READY STATUS RESTARTS AGE + +test-petset-0 1/1 Running 0 29m + +test-petset-1 1/1 Running 0 28m + +test-petset-2 1/1 Running 0 9s + ``` + + +<!-- +Now the PetSet is running 3 pods after patch application. +---> +现在,应用修补程序后,PetSet 正在运行3个 Pod。 + + + +<!-- +When the above PetSet definition is patched to have one more replica, it introduces one more pod in the system. This in turn results in one more volume getting provisioned on the Datera Data Fabric. So volumes get dynamically provisioned and attached to a pod upon the PetSet scaling up. +---> +当上述 PetSet 定义修补完成,会产生另一个副本,PetSet 将在系统中引入另一个 pod。反之,这会导致在 Datera Data Fabric 上配置更多的卷。因此,在 PetSet 进行扩展时,要配置动态卷并将其附加到 Pod 上。 + + +<!-- +To support the notion of durability and consistency, if a pod moves from one minion to another, volumes do get attached (mounted) to the new minion node and detached (unmounted) from the old minion to maintain persistent access to the data. +---> +为了平衡持久性和一致性的概念,如果 Pod 从一个 Minion 转移到另一个,卷确实会附加(安装)到新的 minion node 上,并与旧的 Minion 分离(卸载),从而实现对数据的持久访问。 + + + +<!-- +**Conclusion** + + + +This demonstrates Kubernetes with Pet Sets orchestrating stateful and stateless workloads. While the Kubernetes community is working on expanding the FlexVolume framework’s capabilities, we are excited that this solution makes it possible for Kubernetes to be run more widely in the datacenters. +---> +**结论** + + + +本文展示了具备 Pet Sets 的 Kubernetes 协调有状态和无状态工作负载。当 Kubernetes 社区致力于扩展 FlexVolume 框架的功能时,我们很高兴这个解决方案使 Kubernetes 能够在数据中心广泛运行。 + + +<!-- +Join and contribute: Kubernetes [Storage SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-storage). +---> +加入我们并作出贡献:Kubernetes [Storage SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-storage). 
+ + + +<!-- +- [Download Kubernetes](http://get.k8s.io/) +- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes) +- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) +- Connect with the community on the [k8s Slack](http://slack.k8s.io/) +- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates +---> +- [下载 Kubernetes](http://get.k8s.io/) +- 参与 Kubernetes 项目 [GitHub](https://github.com/kubernetes/kubernetes) +- 发布问题(或者回答问题) [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) +- 联系社区 [k8s Slack](http://slack.k8s.io/) +- 在 Twitter 上关注我们 [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates diff --git a/content/zh/blog/_posts/2019-08-06-OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes.md b/content/zh/blog/_posts/2019-08-06-OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes.md new file mode 100644 index 0000000000..58ee778b84 --- /dev/null +++ b/content/zh/blog/_posts/2019-08-06-OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes.md @@ -0,0 +1,290 @@ +--- +layout: blog +title: "OPA Gatekeeper:Kubernetes 的策略和管理" +date: 2019-08-06 +slug: OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes +--- +<!-- +--- +layout: blog +title: "OPA Gatekeeper: Policy and Governance for Kubernetes" +date: 2019-08-06 +slug: OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes +--- +---> + +<!-- +**Authors:** Rita Zhang (Microsoft), Max Smythe (Google), Craig Hooper (Commonwealth Bank AU), Tim Hinrichs (Styra), Lachie Evenson (Microsoft), Torin Sandall (Styra) +---> +**作者:** Rita Zhang (Microsoft), Max Smythe (Google), Craig Hooper (Commonwealth Bank AU), Tim Hinrichs (Styra), Lachie Evenson (Microsoft), Torin Sandall (Styra) + +<!-- +The [Open Policy Agent Gatekeeper](https://github.com/open-policy-agent/gatekeeper) project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project. +---> +可以从项目 [Open Policy Agent Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 中获得帮助,在 Kubernetes 环境下实施策略并加强治理。在本文中,我们将逐步介绍该项目的目标,历史和当前状态。 + +<!-- +The following recordings from the Kubecon EU 2019 sessions are a great starting place in working with Gatekeeper: + +* [Intro: Open Policy Agent Gatekeeper](https://youtu.be/Yup1FUc2Qn0) +* [Deep Dive: Open Policy Agent](https://youtu.be/n94_FNhuzy4) +---> +以下是 Kubecon EU 2019 会议的录音,帮助我们更好地开展与 Gatekeeper 合作: + +* [简介:开放策略代理 Gatekeeper](https://youtu.be/Yup1FUc2Qn0) +* [深入研究:开放策略代理](https://youtu.be/n94_FNhuzy4) + +<!-- +## Motivations + +If your organization has been operating Kubernetes, you probably have been looking for ways to control what end-users can do on the cluster and ways to ensure that clusters are in compliance with company policies. These policies may be there to meet governance and legal requirements or to enforce best practices and organizational conventions. With Kubernetes, how do you ensure compliance without sacrificing development agility and operational independence? +---> +## 出发点 + +如果您所在的组织一直在使用 Kubernetes,您可能一直在寻找如何控制终端用户在集群上的行为,以及如何确保集群符合公司政策。这些策略可能需要满足管理和法律要求,或者符合最佳执行方法和组织惯例。使用 Kubernetes,如何在不牺牲开发敏捷性和运营独立性的前提下确保合规性? 
+ +<!-- +For example, you can enforce policies like: + +* All images must be from approved repositories +* All ingress hostnames must be globally unique +* All pods must have resource limits +* All namespaces must have a label that lists a point-of-contact +---> +例如,您可以执行以下策略: + +* 所有镜像必须来自获得批准的存储库 +* 所有入口主机名必须是全局唯一的 +* 所有 Pod 必须有资源限制 +* 所有命名空间都必须具有列出联系的标签 + +<!-- +Kubernetes allows decoupling policy decisions from the API server by means of [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to intercept admission requests before they are persisted as objects in Kubernetes. [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) was created to enable users to customize admission control via configuration, not code and to bring awareness of the cluster’s state, not just the single object under evaluation at admission time. Gatekeeper is a customizable admission webhook for Kubernetes that enforces policies executed by the [Open Policy Agent (OPA)](https://www.openpolicyagent.org), a policy engine for Cloud Native environments hosted by CNCF. +---> +在接收请求被持久化为 Kubernetes 中的对象之前,Kubernetes 允许通过 [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) 将策略决策与 API 服务器分离,从而拦截这些请求。[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 创建的目的是使用户能够通过配置(而不是代码)自定义控制许可,并使用户了解群集的状态,而不仅仅是针对评估状态的单个对象,在这些对象准许加入的时候。Gatekeeper 是 Kubernetes 的一个可定制的许可 webhook ,它由 [Open Policy Agent (OPA)](https://www.openpolicyagent.org) 强制执行, OPA 是 Cloud Native 环境下的策略引擎,由 CNCF 主办。 + +<!-- +## Evolution + +Before we dive into the current state of Gatekeeper, let’s take a look at how the Gatekeeper project has evolved. +---> +## 发展 + +在深入了解 Gatekeeper 的当前情况之前,让我们看一下 Gatekeeper 项目是如何发展的。 + +<!-- +* Gatekeeper v1.0 - Uses OPA as the admission controller with the kube-mgmt sidecar enforcing configmap-based policies. It provides validating and mutating admission control. Donated by Styra. +* Gatekeeper v2.0 - Uses Kubernetes policy controller as the admission controller with OPA and kube-mgmt sidecars enforcing configmap-based policies. It provides validating and mutating admission control and audit functionality. Donated by Microsoft. + * Gatekeeper v3.0 - The admission controller is integrated with the [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) to enforce CRD-based policies and allow declaratively configured policies to be reliably shareable. Built with kubebuilder, it provides validating and, eventually, mutating (to be implemented) admission control and audit functionality. This enables the creation of policy templates for [Rego](https://www.openpolicyagent.org/docs/latest/how-do-i-write-policies/) policies, creation of policies as CRDs, and storage of audit results on policy CRDs. This project is a collaboration between Google, Microsoft, Red Hat, and Styra. 
+---> +* Gatekeeper v1.0 - 使用 OPA 作为带有 kube-mgmt sidecar 的许可控制器,用来强制执行基于 configmap 的策略。这种方法实现了验证和转换许可控制。贡献方:Styra +* Gatekeeper v2.0 - 使用 Kubernetes 策略控制器作为许可控制器,OPA 和 kube-mgmt sidecar 实施基于 configmap 的策略。这种方法实现了验证和转换准入控制和审核功能。贡献方:Microsoft + * Gatekeeper v3.0 - 准入控制器与 [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) 集成在一起,用来实施基于 CRD 的策略,并可以可靠地共享已完成声明配置的策略。使用 kubebuilder 进行构建,实现了验证以及最终转换(待完成)为许可控制和审核功能。这样就可以为 [Rego](https://www.openpolicyagent.org/docs/latest/how-do-i-write-policies/) 策略创建策略模板,将策略创建为 CRD 并存储审核结果到策略 CRD 上。该项目是 Google,Microsoft,Red Hat 和 Styra 合作完成的。 + +![](/images/blog/2019-08-06-opa-gatekeeper/v3.png) + +<!-- +## Gatekeeper v3.0 Features + +Now let’s take a closer look at the current state of Gatekeeper and how you can leverage all the latest features. Consider an organization that wants to ensure all objects in a cluster have departmental information provided as part of the object’s labels. How can you do this with Gatekeeper? +---> +## Gatekeeper v3.0 的功能 + +现在我们详细看一下 Gatekeeper 当前的状态,以及如何利用所有最新的功能。假设一个组织希望确保集群中的所有对象都有 department 信息,这些信息是对象标签的一部分。如何利用 Gatekeeper 完成这项需求? + +<!-- +### Validating Admission Control + +Once all the Gatekeeper components have been [installed](https://github.com/open-policy-agent/gatekeeper) in your cluster, the API server will trigger the Gatekeeper admission webhook to process the admission request whenever a resource in the cluster is created, updated, or deleted. + +During the validation process, Gatekeeper acts as a bridge between the API server and OPA. The API server will enforce all policies executed by OPA. +---> +### 验证许可控制 + +在集群中所有 Gatekeeper 组件都 [安装](https://github.com/open-policy-agent/gatekeeper) 完成之后,只要集群中的资源进行创建、更新或删除,API 服务器将触发 Gatekeeper 准入 webhook 来处理准入请求。 + +在验证过程中,Gatekeeper 充当 API 服务器和 OPA 之间的桥梁。API 服务器将强制实施 OPA 执行的所有策略。 + +<!-- +### Policies and Constraints + +With the integration of the OPA Constraint Framework, a Constraint is a declaration that its author wants a system to meet a given set of requirements. Each Constraint is written with Rego, a declarative query language used by OPA to enumerate instances of data that violate the expected state of the system. All Constraints are evaluated as a logical AND. If one Constraint is not satisfied, then the whole request is rejected. +---> +### 策略与 Constraint + +结合 OPA Constraint Framework,Constraint 是一个声明,表示作者希望系统满足给定的一系列要求。Constraint 都使用 Rego 编写,Rego 是声明性查询语言,OPA 用 Rego 来枚举违背系统预期状态的数据实例。所有 Constraint 都遵循逻辑 AND。假使有一个 Constraint 不满足,那么整个请求都将被拒绝。 + +<!-- +Before defining a Constraint, you need to create a Constraint Template that allows people to declare new Constraints. Each template describes both the Rego logic that enforces the Constraint and the schema for the Constraint, which includes the schema of the CRD and the parameters that can be passed into a Constraint, much like arguments to a function. + +For example, here is a Constraint template CRD that requires certain labels to be present on an arbitrary object. 
+---> +在定义 Constraint 之前,您需要创建一个 Constraint Template,允许大家声明新的 Constraint。每个模板都描述了强制执行 Constraint 的 Rego 逻辑和 Constraint 的模式,其中包括 CRD 的模式和传递到 enforces 中的参数,就像函数的参数一样。 + +例如,以下是一个 Constraint 模板 CRD,它的请求是在任意对象上显示某些标签。 + +```yaml +apiVersion: templates.gatekeeper.sh/v1beta1 +kind: ConstraintTemplate +metadata: + name: k8srequiredlabels +spec: + crd: + spec: + names: + kind: K8sRequiredLabels + listKind: K8sRequiredLabelsList + plural: k8srequiredlabels + singular: k8srequiredlabels + validation: + # Schema for the `parameters` field + openAPIV3Schema: + properties: + labels: + type: array + items: string + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8srequiredlabels + + deny[{"msg": msg, "details": {"missing_labels": missing}}] { + provided := {label | input.review.object.metadata.labels[label]} + required := {label | label := input.parameters.labels[_]} + missing := required - provided + count(missing) > 0 + msg := sprintf("you must provide labels: %v", [missing]) + } +``` + +<!-- +Once a Constraint template has been deployed in the cluster, an admin can now create individual Constraint CRDs as defined by the Constraint template. For example, here is a Constraint CRD that requires the label `hr` to be present on all namespaces. +---> +在集群中部署了 Constraint 模板后,管理员现在可以创建由 Constraint 模板定义的单个 Constraint CRD。例如,这里以下是一个 Constraint CRD,要求标签 `hr` 出现在所有命名空间上。 + +```yaml +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequiredLabels +metadata: + name: ns-must-have-hr +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Namespace"] + parameters: + labels: ["hr"] +``` + +<!-- +Similarly, another Constraint CRD that requires the label `finance` to be present on all namespaces can easily be created from the same Constraint template. +---> +类似地,可以从同一个 Constraint 模板轻松地创建另一个 Constraint CRD,该 Constraint CRD 要求所有命名空间上都有 `finance` 标签。 + +```yaml +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequiredLabels +metadata: + name: ns-must-have-finance +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Namespace"] + parameters: + labels: ["finance"] +``` + +<!-- +As you can see, with the Constraint framework, we can reliably share Regos via the Constraint templates, define the scope of enforcement with the match field, and provide user-defined parameters to the Constraints to create customized behavior for each Constraint. +---> +如您所见,使用 Constraint framework,我们可以通过 Constraint 模板可靠地共享 rego,使用匹配字段定义执行范围,并为 Constraint 提供用户定义的参数,从而为每个 Constraint 创建自定义行为。 + +<!-- +### Audit + +The audit functionality enables periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect pre-existing misconfigurations. Gatekeeper stores audit results as `violations` listed in the `status` field of the relevant Constraint. 
---> +### 审核 + +根据群集中强制执行的 Constraint,审核功能可定期评估复制的资源,并检测先前存在的错误配置。Gatekeeper 将审核结果存储为 `violations`,在相关 Constraint 的 `status` 字段中列出。 + +```yaml +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequiredLabels +metadata: + name: ns-must-have-hr +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Namespace"] + parameters: + labels: ["hr"] +status: + auditTimestamp: "2019-08-06T01:46:13Z" + byPod: + - enforced: true + id: gatekeeper-controller-manager-0 + violations: + - enforcementAction: deny + kind: Namespace + message: 'you must provide labels: {"hr"}' + name: default + - enforcementAction: deny + kind: Namespace + message: 'you must provide labels: {"hr"}' + name: gatekeeper-system + - enforcementAction: deny + kind: Namespace + message: 'you must provide labels: {"hr"}' + name: kube-public + - enforcementAction: deny + kind: Namespace + message: 'you must provide labels: {"hr"}' + name: kube-system +``` + +<!-- +### Data Replication + +Audit requires replication of Kubernetes resources into OPA before they can be evaluated against the enforced Constraints. Data replication is also required by Constraints that need access to objects in the cluster other than the object under evaluation. For example, a Constraint that enforces uniqueness of ingress hostname must have access to all other ingresses in the cluster. +---> +### 数据复制 + +审核要求将 Kubernetes 复制到 OPA 中,然后才能根据强制的 Constraint 对其进行评估。数据复制同样也需要 Constraint,这些 Constraint 需要访问集群中除评估对象之外的对象。例如,一个 Constraint 要强制确定入口主机名的唯一性,就必须有权访问集群中的所有其他入口。 + +<!-- +To configure Kubernetes data to be replicated, create a sync config resource with the resources to be replicated into OPA. For example, the below configuration replicates all namespace and pod resources to OPA. +---> +对 Kubernetes 数据进行复制,请使用复制到 OPA 中的资源创建 sync config 资源。例如,下面的配置将所有命名空间和 Pod 资源复制到 OPA。 + +```yaml +apiVersion: config.gatekeeper.sh/v1alpha1 +kind: Config +metadata: + name: config + namespace: "gatekeeper-system" +spec: + sync: + syncOnly: + - group: "" + version: "v1" + kind: "Namespace" + - group: "" + version: "v1" + kind: "Pod" +``` + +<!-- +## Planned for Future + +The community behind the Gatekeeper project will be focusing on providing mutating admission control to support mutation scenarios (for example: annotate objects automatically with departmental information when creating a new resource), support external data to inject context external to the cluster into the admission decisions, support dry run to see impact of a policy on existing resources in the cluster before enforcing it, and more audit functionalities. +---> +## 未来计划 + +Gatekeeper 项目背后的社区将专注于提供转换许可控制,可以用来支持转换方案(例如:在创建新资源时使用 department 信息自动注释对象),支持外部数据以将集群外部环境加入到许可决策中,支持试运行以便在执行策略之前了解策略对集群中现有资源的影响,还有更多的审核功能。 + +<!-- +If you are interested in learning more about the project, check out the [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) repo. If you are interested in helping define the direction of Gatekeeper, join the [#kubernetes-policy](https://openpolicyagent.slack.com/messages/CDTN970AX) channel on OPA Slack, and join our [weekly meetings](https://docs.google.com/document/d/1A1-Q-1OMw3QODs1wT6eqfLTagcGmgzAJAjJihiO3T48/edit) to discuss development, issues, use cases, etc. 
+---> +如果您有兴趣了解更多有关该项目的信息,请查看 [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 存储库。如果您有兴趣帮助确定 Gatekeeper 的方向,请加入 [#kubernetes-policy](https://openpolicyagent.slack.com/messages/CDTN970AX) OPA Slack 频道,并加入我们的 [周会](https://docs.google.com/document/d/1A1-Q-1OMw3QODs1wT6eqfLTagcGmgzAJAjJihiO3T48/edit) 一同讨论开发、任务、用例等。 diff --git a/content/zh/docs/concepts/_index.md b/content/zh/docs/concepts/_index.md index 90e2d6b4f1..5a83afcdf0 100644 --- a/content/zh/docs/concepts/_index.md +++ b/content/zh/docs/concepts/_index.md @@ -19,136 +19,3 @@ The Concepts section helps you learn about the parts of the Kubernetes system an --> 概念部分可以帮助你了解 Kubernetes 的各个组成部分以及 Kubernetes 用来表示集群的一些抽象概念,并帮助你更加深入的理解 Kubernetes 是如何工作的。 - - - -<!-- body --> - -<!-- -## Overview ---> - -## 概述 - -<!-- -To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state. ---> - -要使用 Kubernetes,你需要用 *Kubernetes API 对象* 来描述集群的 *预期状态(desired state)* :包括你需要运行的应用或者负载,它们使用的镜像、副本数,以及所需网络和磁盘资源等等。你可以使用命令行工具 `kubectl` 来调用 Kubernetes API 创建对象,通过所创建的这些对象来配置预期状态。你也可以直接调用 Kubernetes API 和集群进行交互,设置或者修改预期状态。 - -<!-- -Once you've set your desired state, the *Kubernetes Control Plane* makes the cluster's current state match the desired state via the Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster: ---> - -一旦你设置了你所需的目标状态,*Kubernetes 控制面(control plane)* 会通过 Pod 生命周期事件生成器([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)),促成集群的当前状态符合其预期状态。为此,Kubernetes 会自动执行各类任务,比如运行或者重启容器、调整给定应用的副本数等等。Kubernetes 控制面由一组运行在集群上的进程组成: - -<!-- -* The **Kubernetes Master** is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/). -* Each individual non-master node in your cluster runs two processes: - * **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes Master. - * **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy which reflects Kubernetes networking services on each node. 
---> - -* **Kubernetes 主控组件(Master)** 包含三个进程,都运行在集群中的某个节点上,主控组件通常这个节点被称为 master 节点。这些进程包括:[kube-apiserver](/docs/admin/kube-apiserver/)、[kube-controller-manager](/docs/admin/kube-controller-manager/) 和 [kube-scheduler](/docs/admin/kube-scheduler/)。 -* 集群中的每个非 master 节点都运行两个进程: - * **[kubelet](/docs/admin/kubelet/)**,和 master 节点进行通信。 - * **[kube-proxy](/docs/admin/kube-proxy/)**,一种网络代理,将 Kubernetes 的网络服务代理到每个节点上。 - -<!-- -## Kubernetes Objects ---> - -## Kubernetes 对象 - -<!-- -Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. See [Understanding Kubernetes Objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) for more details. ---> - -Kubernetes 包含若干用来表示系统状态的抽象层,包括:已部署的容器化应用和负载、与它们相关的网络和磁盘资源以及有关集群正在运行的其他操作的信息。这些抽象使用 Kubernetes API 对象来表示。有关更多详细信息,请参阅[了解 Kubernetes 对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/)。 - -<!-- -The basic Kubernetes objects include: ---> - -基本的 Kubernetes 对象包括: - -* [Pod](/docs/concepts/workloads/pods/pod-overview/) -* [Service](/docs/concepts/services-networking/service/) -* [Volume](/docs/concepts/storage/volumes/) -* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) - -<!-- -Kubernetes also contains higher-level abstractions that rely on [Controllers](/docs/concepts/architecture/controller/) to build upon the basic objects, and provide additional functionality and convenience features. These include: ---> - -Kubernetes 也包含大量的被称作 [Controller](/docs/concepts/architecture/controller/) 的高级抽象。控制器基于基本对象构建并提供额外的功能和方便使用的特性。具体包括: - -* [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) -* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) -* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) -* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) - -<!-- -## Kubernetes Control Plane ---> - -## Kubernetes 控制面 - -<!-- -The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state. At any given time, the Control Plane's control loops will respond to changes in the cluster and work to make the actual state of all the objects in the system match the desired state that you provided. ---> - -关于 Kubernetes 控制平面的各个部分,(如 Kubernetes 主控组件和 kubelet 进程),管理着 Kubernetes 如何与你的集群进行通信。控制平面维护着系统中所有的 Kubernetes 对象的状态记录,并且通过连续的控制循环来管理这些对象的状态。在任意的给定时间点,控制面的控制环都能响应集群中的变化,并且让系统中所有对象的实际状态与你提供的预期状态相匹配。 - -<!-- -For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes--thus making the cluster's actual state match the desired state. 
---> - -比如, 当你通过 Kubernetes API 创建一个 Deployment 对象,你就为系统增加了一个新的目标状态。Kubernetes 控制平面记录着对象的创建,并启动必要的应用然后将它们调度至集群某个节点上来执行你的指令,以此来保持集群的实际状态和目标状态的匹配。 - -<!-- -### Kubernetes Master ---> - -### Kubernetes Master 节点 - -<!-- -The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the `kubectl` command-line interface, you're communicating with your cluster's Kubernetes master. ---> - -Kubernetes master 节点负责维护集群的目标状态。当你要与 Kubernetes 通信时,使用如 `kubectl` 的命令行工具,就可以直接与 Kubernetes master 节点进行通信。 - -<!-- -> The "master" refers to a collection of processes managing the cluster state. Typically all these processes run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for availability and redundancy. ---> - -> "master" 是指管理集群状态的一组进程的集合。通常这些进程都跑在集群中一个单独的节点上,并且这个节点被称为 master 节点。master 节点也可以扩展副本数,来获取更好的可用性及冗余。 - -<!-- -### Kubernetes Nodes ---> - -### Kubernetes Node 节点 - -<!-- -The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly. ---> - -集群中的 node 节点(虚拟机、物理机等等)都是用来运行你的应用和云工作流的机器。Kubernetes master 节点控制所有 node 节点;你很少需要和 node 节点进行直接通信。 - - - - -## {{% heading "whatsnext" %}} - - -<!-- -If you would like to write a concept page, see -[Using Page Templates](/docs/home/contribute/page-templates/) -for information about the concept page type and the concept template. ---> - -如果你想编写一个概念页面,请参阅[使用页面模板](/docs/home/contribute/page-templates/)获取更多有关概念页面类型和概念模板的信息。 - - diff --git a/content/zh/docs/concepts/configuration/configmap.md b/content/zh/docs/concepts/configuration/configmap.md index 1af8566a34..d1aa8fa01a 100644 --- a/content/zh/docs/concepts/configuration/configmap.md +++ b/content/zh/docs/concepts/configuration/configmap.md @@ -19,7 +19,6 @@ ConfigMap 并不提供保密或者加密功能。如果你想存储的数据是 {{< /caution >}} - <!-- body --> <!-- ## Motivation @@ -59,9 +58,11 @@ The name of a ConfigMap must be a valid --> ## ConfigMap 对象 -ConfigMap 是一个 API [对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/),让你可以存储其他对象所需要使用的配置。和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 块来存储元素(键名)和它们的值。 +ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/), +让你可以存储其他对象所需要使用的配置。 +和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 块来存储元素(键名)和它们的值。 -ConfigMap 的名字必须是一个合法的 [DNS 子域名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +ConfigMap 的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 <!-- ## ConfigMaps and Pods @@ -216,15 +217,14 @@ ConfigMap 最常见的用法是为同一命名空间里某 Pod 中运行的容 ## {{% heading "whatsnext" %}} - <!-- * Read about [Secrets](/docs/concepts/configuration/secret/). * Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). * Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for separating code from configuration. 
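To make the `data` block and the DNS-subdomain naming rule discussed in the ConfigMap changes above concrete, here is a minimal sketch of a ConfigMap manifest; the name `demo-config` and the keys and values are placeholders chosen for illustration, not values taken from this page:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name; a ConfigMap name must be a valid DNS subdomain name.
  name: demo-config
data:
  # Plain key/value pairs live under the data block; values are strings.
  log_level: "info"
  # Keys may look like file names, which is convenient when the ConfigMap
  # is later mounted into a Pod as files.
  ui.properties: |
    color=blue
    theme=dark
```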
--> -* 阅读 [Secret](/docs/concepts/configuration/secret/)。 -* 阅读 [配置 Pod 来使用 ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)。 +* 阅读 [Secret](/zh/docs/concepts/configuration/secret/)。 +* 阅读 [配置 Pod 来使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 * 阅读 [Twelve-Factor 应用](https://12factor.net/) 来了解将代码和配置分开的动机。 diff --git a/content/zh/docs/concepts/configuration/manage-resources-containers.md b/content/zh/docs/concepts/configuration/manage-resources-containers.md index 5b7b359848..8860f5807d 100644 --- a/content/zh/docs/concepts/configuration/manage-resources-containers.md +++ b/content/zh/docs/concepts/configuration/manage-resources-containers.md @@ -9,7 +9,6 @@ feature: --- <!-- ---- title: Managing Resources for Containers content_type: concept weight: 40 @@ -17,7 +16,6 @@ feature: title: Automatic binpacking description: > Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources. ---- --> <!-- overview --> @@ -35,7 +33,7 @@ at least the _request_ amount of that system resource specifically for that cont to use. --> -当你定义 {{< glossary_tooltip term_id="pod" >}} 时可以选择性地为每个 +当你定义 {{< glossary_tooltip text="Pod" term_id="pod" >}} 时可以选择性地为每个 {{< glossary_tooltip text="容器" term_id="container" >}}设定所需要的资源数量。 最常见的可设定资源是 CPU 和内存(RAM)大小;此外还有其他类型的资源。 @@ -138,8 +136,8 @@ through the Kubernetes API server. CPU 和内存统称为*计算资源*,或简称为*资源*。 计算资源的数量是可测量的,可以被请求、被分配、被消耗。 -它们与 [API 资源](/docs/concepts/overview/kubernetes-api/) 不同。 -API 资源(如 Pod 和 [Service](/docs/concepts/services-networking/service/))是可通过 +它们与 [API 资源](/zh/docs/concepts/overview/kubernetes-api/) 不同。 +API 资源(如 Pod 和 [Service](/zh/docs/concepts/services-networking/service/))是可通过 Kubernetes API 服务器读取和修改的对象。 <!-- @@ -249,8 +247,8 @@ metadata: name: frontend spec: containers: - - name: db - image: mysql + - name: app + image: images.my-company.example/app:v4 env: - name: MYSQL_ROOT_PASSWORD value: "password" @@ -261,8 +259,8 @@ spec: limits: memory: "128Mi" cpu: "500m" - - name: wp - image: wordpress + - name: log-aggregator + image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" @@ -388,9 +386,9 @@ directly or from your monitoring tools. Pod 的资源使用情况是作为 Pod 状态的一部分来报告的。 如果为集群配置了可选的 -[监控工具](/docs/tasks/debug-application-cluster/resource-usage-monitoring/), +[监控工具](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/), 则可以直接从 -[指标 API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) +[指标 API](/zh/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) 或者监控工具获得 Pod 的资源使用情况。 <!-- @@ -417,7 +415,7 @@ mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Pods 通常可以使用临时性本地存储来实现缓冲区、保存日志等功能。 kubelet 可以为使用本地临时存储的 Pods 提供这种存储空间,允许后者使用 -[`emptyDir`](/docs/concepts/storage/volumes/#emptydir) 类型的 +[`emptyDir`](/zh/docs/concepts/storage/volumes/#emptydir) 类型的 {{< glossary_tooltip term_id="volume" text="卷" >}}将其挂载到容器中。 <!-- @@ -436,7 +434,7 @@ of ephemeral local storage a Pod can consume. 
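As a rough sketch of the scratch-space pattern described here, the Pod below mounts an `emptyDir` volume and uses the volume's `sizeLimit` field to bound how much local ephemeral storage it may consume. The Pod name and image are placeholders, and the eviction behaviour noted in the comments assumes the local ephemeral storage capacity isolation feature discussed on this page is enabled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo                 # hypothetical name
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4   # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch        # scratch space for caching and buffering
  volumes:
  - name: scratch
    emptyDir:
      # The kubelet can evict the Pod if the volume grows beyond this limit
      # (when local storage capacity isolation is enabled).
      sizeLimit: 500Mi
```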
--> kubelet 也使用此类存储来保存 -[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), +[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), 容器镜像文件、以及运行中容器的可写入层。 {{< caution >}} @@ -474,7 +472,7 @@ Kubernetes 有两种方式支持节点上配置本地临时性存储: (kubelet)来保存数据的。 kubelet 也会生成 -[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), +[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), 并按临时性本地存储的方式对待之。 <!-- @@ -525,7 +523,7 @@ as you like. 无关的其他系统日志);这个文件系统还可以是根文件系统。 kubelet 也将 -[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) +[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) 写入到第一个文件系统中,并按临时性本地存储的方式对待之。 同时你使用另一个由不同逻辑存储设备支持的文件系统。在这种配置下,你会告诉 @@ -558,7 +556,7 @@ than as local ephemeral storage. kubelet 能够度量其本地存储的用量。实现度量机制的前提是: -- `LocalStorageCapacityIsolation` [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)被启用(默认状态),并且 +- `LocalStorageCapacityIsolation` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)被启用(默认状态),并且 - 你已经对节点进行了配置,使之使用所支持的本地临时性储存配置方式之一 如果你的节点配置不同于以上预期,kubelet 就无法对临时性本地存储的资源约束实施限制。 @@ -617,18 +615,15 @@ metadata: name: frontend spec: containers: - - name: db - image: mysql - env: - - name: MYSQL_ROOT_PASSWORD - value: "password" + - name: app + image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: "2Gi" limits: ephemeral-storage: "4Gi" - - name: wp - image: wordpress + - name: log-aggregator + image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" @@ -650,7 +645,7 @@ The scheduler ensures that the sum of the resource requests of the scheduled Con 当你创建一个 Pod 时,Kubernetes 调度器会为 Pod 选择一个节点来运行之。 每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。 欲了解更多信息,可参考 -[节点可分配资源](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +[节点可分配资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) 节。 调度器会确保所调度的 Containers 的资源请求总和不会超出节点的资源容量。 @@ -838,7 +833,7 @@ If you want to use project quotas, you should: 如果你希望使用项目配额,你需要: * 在 kubelet 配置中启用 `LocalStorageCapacityIsolationFSQuotaMonitoring=true` - [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)。 + [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 * 确保根文件系统(或者可选的运行时文件系统)启用了项目配额。所有 XFS 文件系统都支持项目配额。 @@ -896,7 +891,8 @@ for how to advertise device plugin managed resources on each node. ##### 设备插件管理的资源 -有关如何颁布在各节点上由设备插件所管理的资源,请参阅[设备插件](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。 +有关如何颁布在各节点上由设备插件所管理的资源,请参阅 +[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。 <!-- ##### Other resources @@ -1198,11 +1194,11 @@ with namespaces, it can prevent one team from hogging all the resources. 
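As a minimal sketch of that idea, a cluster operator could create a ResourceQuota such as the one below in a team's namespace; the namespace name and all of the figures are hypothetical:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-compute     # hypothetical name
  namespace: team-a        # hypothetical namespace owned by one team
spec:
  hard:
    # Aggregate requests and limits across all Pods in the namespace.
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```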
通过查看 `Pods` 部分,您将看到哪些 Pod 占用了节点上的资源。 可供 Pod 使用的资源量小于节点容量,因为系统守护程序也会使用一部分可用资源。 -[NodeStatus](/docs/resources-reference/{{< param "version" >}}/#nodestatus-v1-core) +[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) 的 `allocatable` 字段给出了可用于 Pod 的资源量。 有关更多信息,请参阅 [节点可分配资源](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md)。 -可以配置 [资源配额](/docs/concepts/policy/resource-quotas/) 功能特性以限制可以使用的资源总量。 +可以配置 [资源配额](/zh/docs/concepts/policy/resource-quotas/) 功能特性以限制可以使用的资源总量。 如果与名字空间配合一起使用,就可以防止一个团队占用所有资源。 <!-- @@ -1301,10 +1297,10 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh * Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS --> -* 获取将 [分配内存资源给容器和 Pod ](/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验 -* 获取将 [分配 CPU 资源给容器和 Pod ](/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验 +* 获取将 [分配内存资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验 +* 获取将 [分配 CPU 资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验 * 关于请求和约束之间的区别,细节信息可参见[资源服务质量](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md) * 阅读 API 参考文档中 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) 部分。 * 阅读 API 参考文档中 [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) 部分。 -* 阅读 XFS 中关于 [项目配额](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。 +* 阅读 XFS 中关于 [项目配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。 diff --git a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index c44ad088a9..db4176391a 100644 --- a/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/zh/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -4,12 +4,10 @@ content_type: concept weight: 60 --- <!-- ---- title: Organizing Cluster Access Using kubeconfig Files content_type: concept weight: 60 ---- ----> +--> <!-- overview --> @@ -18,7 +16,7 @@ Use kubeconfig files to organize information about clusters, users, namespaces, authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server of a cluster. ----> +--> 使用 kubeconfig 文件来组织有关集群、用户、命名空间和身份认证机制的信息。`kubectl` 命令行工具使用 kubeconfig 文件来查找选择集群所需的信息,并与集群的 API 服务器进行通信。 <!-- @@ -27,7 +25,7 @@ A file that is used to configure access to clusters is called a *kubeconfig file*. This is a generic way of referring to configuration files. It does not mean that there is a file named `kubeconfig`. {{< /note >}} ----> +--> {{< note >}} 用于配置集群访问的文件称为 *kubeconfig 文件*。这是引用配置文件的通用方法。这并不意味着有一个名为 `kubeconfig` 的文件 {{< /note >}} @@ -36,37 +34,37 @@ It does not mean that there is a file named `kubeconfig`. By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory. You can specify other kubeconfig files by setting the `KUBECONFIG` environment variable or by setting the -[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag. 
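For reference, a kubeconfig file is itself a YAML document with `clusters`, `users`, and `contexts` sections. The minimal sketch below shows that shape; every name, the server address, and the credential file paths are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: development                        # hypothetical cluster name
  cluster:
    server: https://dev.example.com:6443   # placeholder API server address
    certificate-authority: ca.pem          # resolved relative to this file
users:
- name: developer                          # hypothetical user
  user:
    client-certificate: dev-cert.pem
    client-key: dev-key.pem
contexts:
- name: dev-frontend
  context:
    cluster: development
    user: developer
    namespace: frontend                    # default namespace for this context
current-context: dev-frontend
```

Relative paths such as `ca.pem` are interpreted relative to the location of the kubeconfig file, as noted in the file references section later on this page.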
----> -默认情况下,`kubectl` 在 `$HOME/.kube` 目录下查找名为 `config` 的文件。您可以通过设置 `KUBECONFIG` 环境变量或者设置[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/)参数来指定其他 kubeconfig 文件。 +[`-kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag. +--> +默认情况下,`kubectl` 在 `$HOME/.kube` 目录下查找名为 `config` 的文件。 +您可以通过设置 `KUBECONFIG` 环境变量或者设置 +[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/)参数来指定其他 kubeconfig 文件。 <!-- For step-by-step instructions on creating and specifying kubeconfig files, see [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters). ----> -有关创建和指定 kubeconfig 文件的分步说明,请参阅[配置对多集群的访问](/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。 - - - +--> +有关创建和指定 kubeconfig 文件的分步说明,请参阅 +[配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。 <!-- body --> <!-- ## Supporting multiple clusters, users, and authentication mechanisms ----> +--> ## 支持多集群、用户和身份认证机制 <!-- Suppose you have several clusters, and your users and components authenticate in a variety of ways. For example: ----> +--> 假设您有多个集群,并且您的用户和组件以多种方式进行身份认证。比如: <!-- - A running kubelet might authenticate using certificates. - A user might authenticate using tokens. - Administrators might have sets of certificates that they provide to individual users. ----> +--> - 正在运行的 kubelet 可能使用证书在进行认证。 - 用户可能通过令牌进行认证。 - 管理员可能拥有多个证书集合提供给各用户。 @@ -75,12 +73,12 @@ in a variety of ways. For example: With kubeconfig files, you can organize your clusters, users, and namespaces. You can also define contexts to quickly and easily switch between clusters and namespaces. ----> +--> 使用 kubeconfig 文件,您可以组织集群、用户和命名空间。您还可以定义上下文,以便在集群和命名空间之间快速轻松地切换。 <!-- ## Context ----> +--> ## 上下文(Context) <!-- @@ -88,12 +86,12 @@ A *context* element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the `kubectl` command-line tool uses parameters from the *current context* to communicate with the cluster. ----> +--> 通过 kubeconfig 文件中的 *context* 元素,使用简便的名称来对访问参数进行分组。每个上下文都有三个参数:cluster、namespace 和 user。默认情况下,`kubectl` 命令行工具使用 *当前上下文* 中的参数与集群进行通信。 <!-- To choose the current context: ----> +--> 选择当前上下文 ``` kubectl config use-context @@ -101,7 +99,7 @@ kubectl config use-context <!-- ## The KUBECONFIG environment variable ----> +--> ## KUBECONFIG 环境变量 <!-- @@ -110,25 +108,29 @@ For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The `KUBECONFIG` environment variable is not required. If the `KUBECONFIG` environment variable doesn't exist, `kubectl` uses the default kubeconfig file, `$HOME/.kube/config`. ----> -`KUBECONFIG` 环境变量包含一个 kubeconfig 文件列表。对于 Linux 和 Mac,列表以冒号分隔。对于 Windows,列表以分号分隔。`KUBECONFIG` 环境变量不是必要的。如果 `KUBECONFIG` 环境变量不存在,`kubectl` 使用默认的 kubeconfig 文件,`$HOME/.kube/config`。 +--> +`KUBECONFIG` 环境变量包含一个 kubeconfig 文件列表。 +对于 Linux 和 Mac,列表以冒号分隔。对于 Windows,列表以分号分隔。 +`KUBECONFIG` 环境变量不是必要的。 +如果 `KUBECONFIG` 环境变量不存在,`kubectl` 使用默认的 kubeconfig 文件,`$HOME/.kube/config`。 <!-- If the `KUBECONFIG` environment variable does exist, `kubectl` uses an effective configuration that is the result of merging the files listed in the `KUBECONFIG` environment variable. 
----> +--> 如果 `KUBECONFIG` 环境变量存在,`kubectl` 使用 `KUBECONFIG` 环境变量中列举的文件合并后的有效配置。 <!-- ## Merging kubeconfig files ----> +--> ## 合并 kubeconfig 文件 <!-- To see your configuration, enter this command: ----> +--> 要查看配置,输入以下命令: + ```shell kubectl config view ``` @@ -136,16 +138,16 @@ kubectl config view <!-- As described previously, the output might be from a single kubeconfig file, or it might be the result of merging several kubeconfig files. ----> +--> 如前所述,输出可能来自 kubeconfig 文件,也可能是合并多个 kubeconfig 文件的结果。 <!-- Here are the rules that `kubectl` uses when it merges kubeconfig files: ----> +--> 以下是 `kubectl` 在合并 kubeconfig 文件时使用的规则。 <!-- -1. If the `--kubeconfig` flag is set, use only the specified file. Do not merge. +1. If the `-kubeconfig` flag is set, use only the specified file. Do not merge. Only one instance of this flag is allowed. Otherwise, if the `KUBECONFIG` environment variable is set, use it as a @@ -160,7 +162,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: Example: Preserve the context of the first file to set `current-context`. Example: If two files specify a `red-user`, use only values from the first file's `red-user`. Even if the second file has non-conflicting entries under `red-user`, discard them. ----> +--> 1. 如果设置了 `--kubeconfig` 参数,则仅使用指定的文件。不进行合并。此参数只能使用一次。 否则,如果设置了 `KUBECONFIG` 环境变量,将它用作应合并的文件列表。根据以下规则合并 `KUBECONFIG` 环境变量中列出的文件: @@ -173,20 +175,21 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: <!-- For an example of setting the `KUBECONFIG` environment variable, see [Setting the KUBECONFIG environment variable](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable). ----> - 有关设置 `KUBECONFIG` 环境变量的示例,请参阅[设置 KUBECONFIG 环境变量](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。 +--> + 有关设置 `KUBECONFIG` 环境变量的示例,请参阅 + [设置 KUBECONFIG 环境变量](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。 <!-- Otherwise, use the default kubeconfig file, `$HOME/.kube/config`, with no merging. ----> +--> 否则,使用默认的 kubeconfig 文件, `$HOME/.kube/config`,不进行合并。 <!-- 1. Determine the context to use based on the first hit in this chain: - 1. Use the `--context` command-line flag if it exists. + 1. Use the `-context` command-line flag if it exists. 2. Use the `current-context` from the merged kubeconfig files. ----> +--> 1. 根据此链中的第一个匹配确定要使用的上下文。 1. 如果存在,使用 `--context` 命令行参数。 @@ -194,7 +197,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: <!-- An empty context is allowed at this point. ----> +--> 这种场景下允许空上下文。 <!-- @@ -204,7 +207,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: 1. Use a command-line flag if it exists: `--user` or `--cluster`. 2. If the context is non-empty, take the user or cluster from the context. ----> +--> 1. 确定集群和用户。此时,可能有也可能没有上下文。根据此链中的第一个匹配确定集群和用户,这将运行两次:一次用于用户,一次用于集群。 1. 如果存在,使用命令行参数:`--user` 或者 `--cluster`。 @@ -212,7 +215,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: <!-- The user and cluster can be empty at this point. ----> +--> 这种场景下用户和集群可以为空。 <!-- @@ -223,7 +226,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: 1. Use command line flags if they exist: `--server`, `--certificate-authority`, `--insecure-skip-tls-verify`. 2. If any cluster information attributes exist from the merged kubeconfig files, use them. 3. 
If there is no server location, fail. ----> +--> 1. 确定要使用的实际集群信息。此时,可能有也可能没有集群信息。基于此链构建每个集群信息;第一个匹配项会被采用: 1. 如果存在:`--server`、`--certificate-authority` 和 `--insecure-skip-tls-verify`,使用命令行参数。 @@ -238,7 +241,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: 1. Use command line flags if they exist: `--client-certificate`, `--client-key`, `--username`, `--password`, `--token`. 2. Use the `user` fields from the merged kubeconfig files. 3. If there are two conflicting techniques, fail. ----> +--> 2. 确定要使用的实际用户信息。使用与集群信息相同的规则构建用户信息,但每个用户只允许一种身份认证技术: 1. 如果存在:`--client-certificate`、`--client-key`、`--username`、`--password` 和 `--token`,使用命令行参数。 @@ -248,12 +251,12 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files: <!-- 3. For any information still missing, use default values and potentially prompt for authentication information. ----> +--> 3. 对于仍然缺失的任何信息,使用其对应的默认值,并可能提示输入身份认证信息。 <!-- ## File references ----> +--> ## 文件引用 <!-- @@ -261,21 +264,17 @@ File and path references in a kubeconfig file are relative to the location of th File references on the command line are relative to the current working directory. In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths are stored absolutely. ----> -kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位置。命令行上的文件引用是相当对于当前工作目录的。在 `$HOME/.kube/config` 中,相对路径按相对路径存储,绝对路径按绝对路径存储。 - - - +--> +kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位置。 +命令行上的文件引用是相当对于当前工作目录的。 +在 `$HOME/.kube/config` 中,相对路径按相对路径存储,绝对路径按绝对路径存储。 ## {{% heading "whatsnext" %}} - <!-- * [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) ---> -* [配置对多集群的访问](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +* [配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) - - diff --git a/content/zh/docs/concepts/configuration/overview.md b/content/zh/docs/concepts/configuration/overview.md index da540c0a89..e3cb9ce89b 100644 --- a/content/zh/docs/concepts/configuration/overview.md +++ b/content/zh/docs/concepts/configuration/overview.md @@ -1,18 +1,12 @@ --- -reviewers: -- mikedanese title: 配置最佳实践 content_type: concept weight: 10 --- <!-- ---- -reviewers: -- mikedanese title: Configuration Best Practices content_type: concept weight: 10 ---- --> <!-- overview --> @@ -24,9 +18,8 @@ This document highlights and consolidates configuration best practices that are <!-- This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR. --> -这是一份活文件。 -如果您认为某些内容不在此列表中但可能对其他人有用,请不要犹豫,提交问题或提交 PR。 - +这是一份不断改进的文件。 +如果您认为某些内容缺失但可能对其他人有用,请不要犹豫,提交 Issue 或提交 PR。 <!-- body --> <!-- @@ -83,15 +76,19 @@ This is a living document. If you think of something that is not on this list bu <!-- - Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or [Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure. 
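As a sketch of the recommended alternative, the Deployment below wraps a Pod template in a ReplicaSet-managed workload with a rolling-update strategy, so Pods are recreated and rescheduled if a node fails; the name, labels, and image are illustrative only:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                 # hypothetical name
  labels:
    app: myapp
    tier: frontend
spec:
  replicas: 3                    # desired number of Pods
  selector:
    matchLabels:
      app: myapp
      tier: frontend
  strategy:
    type: RollingUpdate          # replace Pods gradually during updates
  template:
    metadata:
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: app
        image: images.my-company.example/app:v4   # placeholder image
        ports:
        - containerPort: 80
```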
--> -- 如果您能避免,不要使用 naked Pods(即,Pod 未绑定到[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 或[Deployment](/docs/concepts/workloads/controllers/deployment/))。 - 如果节点发生故障,将不会重新安排 Naked Pods。 +- 如果可能,不要使用独立的 Pods(即,未绑定到 +[ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) 或 +[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 的 Pod)。 + 如果节点发生故障,将不会重新调度独立的 Pods。 <!-- A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is always available, and specifies a strategy to replace Pods (such as [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), is almost always preferable to creating Pods directly, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) may also be appropriate. --> - Deployment,它创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用,并指定替换 Pod 的策略(例如 [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)),除了一些显式的[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)场景之外,几乎总是优先考虑直接创建 Pod。 -[Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 也可能是合适的。 - +Deployment 会创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用,并指定替换 Pod 的策略 +(例如 [RollingUpdate](/zh/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), +除了一些显式的[`restartPolicy: Never`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +场景之外,几乎总是优先考虑直接创建 Pod。 +[Job](/zh/docs/concepts/workloads/controllers/job/) 也可能是合适的。 <!-- ## Services @@ -101,9 +98,10 @@ This is a living document. If you think of something that is not on this list bu <!-- - Create a [Service](/docs/concepts/services-networking/service/) before its corresponding backend workloads (Deployments or ReplicaSets), and before any workloads that need to access it. When Kubernetes starts a container, it provides environment variables pointing to all the Services which were running when the container was started. For example, if a Service named `foo` exists, all containers will get the following variables in their initial environment: --> -- 在其相应的后端工作负载(Deployment 或 ReplicaSet)之前,以及在需要访问它的任何工作负载之前创建[服务](/docs/concepts/services-networking/service/)。 - 当 Kubernetes 启动容器时,它提供指向启动容器时正在运行的所有服务的环境变量。 - 例如,如果存在名为`foo`当服务,则所有容器将在其初始环境中获取以下变量。 +- 在创建相应的后端工作负载(Deployment 或 ReplicaSet),以及在需要访问它的任何工作负载之前创建 + [服务](/zh/docs/concepts/services-networking/service/)。 + 当 Kubernetes 启动容器时,它提供指向启动容器时正在运行的所有服务的环境变量。 + 例如,如果存在名为 `foo` 的服务,则所有容器将在其初始环境中获得以下变量。 ```shell FOO_SERVICE_HOST=<the host the Service is running on> @@ -113,43 +111,51 @@ This is a living document. If you think of something that is not on this list bu <!-- *This does imply an ordering requirement* - any `Service` that a `Pod` wants to access must be created before the `Pod` itself, or else the environment variables will not be populated. DNS does not have this restriction. --> - *这确实意味着订购要求* - 必须在`Pod`本身之前创建`Pod`想要访问的任何`Service`,否则将不会填充环境变量。 - DNS没有此限制。 + *这确实意味着在顺序上的要求* - 必须在 `Pod` 本身被创建之前创建 `Pod` 想要访问的任何 `Service`, + 否则将环境变量不会生效。DNS 没有此限制。 <!-- - An optional (though strongly recommended) [cluster add-on](/docs/concepts/cluster-administration/addons/) is a DNS server. The DNS server watches the Kubernetes API for new `Services` and creates a set of DNS records for each. 
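Continuing the `foo` example above, a minimal Service manifest might look like the following sketch; the selector labels and the ports are assumptions made for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo                      # the name used in the environment-variable example
spec:
  selector:
    app: myapp                   # hypothetical labels carried by the backing Pods
    tier: frontend
  ports:
  - protocol: TCP
    port: 80                     # port exposed by the Service
    targetPort: 8080             # port the Pods are assumed to listen on
```

Pods started after this Service exists receive `FOO_SERVICE_HOST` and `FOO_SERVICE_PORT` environment variables for it, and the DNS add-on, if installed, publishes a record for the name `foo`.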
If DNS has been enabled throughout the cluster then all `Pods` should be able to do name resolution of `Services` automatically. --> -- 一个可选(尽管强烈推荐)[cluster add-on](/docs/concepts/cluster-administration/addons/)是 DNS 服务器。DNS 服务器为新的`Services`监视 Kubernetes API,并为每个创建一组 DNS 记录。 - 如果在整个集群中启用了 DNS,则所有`Pods`应该能够自动对`Services`进行名称解析。 +- 一个可选(尽管强烈推荐)的[集群插件](/zh/docs/concepts/cluster-administration/addons/) + 是 DNS 服务器。DNS 服务器为新的 `Services` 监视 Kubernetes API,并为每个创建一组 DNS 记录。 + 如果在整个集群中启用了 DNS,则所有 `Pods` 应该能够自动对 `Services` 进行名称解析。 <!-- - Don't specify a `hostPort` for a Pod unless it is absolutely necessary. When you bind a Pod to a `hostPort`, it limits the number of places the Pod can be scheduled, because each <`hostIP`, `hostPort`, `protocol`> combination must be unique. If you don't specify the `hostIP` and `protocol` explicitly, Kubernetes will use `0.0.0.0` as the default `hostIP` and `TCP` as the default `protocol`. --> -- 除非绝对必要,否则不要为 Pod 指定`hostPort`。 - 将 Pod 绑定到`hostPort`时,它会限制 Pod 可以调度的位置数,因为每个<`hostIP`, `hostPort`, `protocol`>组合必须是唯一的。如果您没有明确指定`hostIP`和`protocol`,Kubernetes将使用`0.0.0.0`作为默认`hostIP`和`TCP`作为默认`protocol`。 +- 除非绝对必要,否则不要为 Pod 指定 `hostPort`。 + 将 Pod 绑定到`hostPort`时,它会限制 Pod 可以调度的位置数,因为每个 + `<hostIP, hostPort, protocol>`组合必须是唯一的。 + 如果您没有明确指定 `hostIP` 和 `protocol`,Kubernetes 将使用 `0.0.0.0` 作为默认 + `hostIP` 和 `TCP` 作为默认 `protocol`。 <!-- If you only need access to the port for debugging purposes, you can use the [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) or [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). --> - 如果您只需要访问端口以进行调试,则可以使用[apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)或[`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。 + 如果您只需要访问端口以进行调试,则可以使用 + [apiserver proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)或 + [`kubectl port-forward`](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。 <!-- If you explicitly need to expose a Pod's port on the node, consider using a [NodePort](/docs/concepts/services-networking/service/#nodeport) Service before resorting to `hostPort`. --> - 如果您明确需要在节点上公开 Pod 的端口,请在使用`hostPort`之前考虑使用[NodePort](/docs/concepts/services-networking/service/#nodeport) 服务。 + 如果您明确需要在节点上公开 Pod 的端口,请在使用 `hostPort` 之前考虑使用 + [NodePort](/zh/docs/concepts/services-networking/service/#nodeport) 服务。 <!-- - Avoid using `hostNetwork`, for the same reasons as `hostPort`. --> -- 避免使用`hostNetwork`,原因与`hostPort`相同。 +- 避免使用 `hostNetwork`,原因与 `hostPort` 相同。 <!-- - Use [headless Services](/docs/concepts/services-networking/service/#headless- services) (which have a `ClusterIP` of `None`) for easy service discovery when you don't need `kube-proxy` load balancing. --> -- 当您不需要`kube-proxy`负载平衡时,使用 [无头服务](/docs/concepts/services-networking/service/#headless- -services) (具有`None`的`ClusterIP`)以便于服务发现。 +- 当您不需要 `kube-proxy` 负载均衡时,使用 + [无头服务](/zh/docs/concepts/services-networking/service/#headless-services) + (`ClusterIP` 被设置为 `None`)以便于服务发现。 <!-- ## Using Labels @@ -159,29 +165,33 @@ services) (具有`None`的`ClusterIP`)以便于服务发现。 <!-- - Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. 
You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach. --> -- 定义并使用[标签](/docs/concepts/overview/working-with-objects/labels/)来识别应用程序或部署的__semantic attributes__,例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。 - 您可以使用这些标签为其他资源选择合适的 Pod;例如,一个选择所有`tier: frontend` Pod 的服务,或者`app: myapp`的所有`phase: test`组件。 - 有关此方法的示例,请参阅[留言板](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 。 +- 定义并使用[标签](/zh/docs/concepts/overview/working-with-objects/labels/)来识别应用程序 + 或 Deployment 的 __语义属性__,例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。 + 你可以使用这些标签为其他资源选择合适的 Pod; + 例如,一个选择所有 `tier: frontend` Pod 的服务,或者 `app: myapp` 的所有 `phase: test` 组件。 + 有关此方法的示例,请参阅[guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 。 <!-- A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime. --> -通过从选择器中省略特定发行版的标签,可以使服务跨越多个部署。 -[部署](/docs/concepts/workloads/controllers/deployment/)可以在不停机的情况下轻松更新正在运行的服务。 +通过从选择器中省略特定发行版的标签,可以使服务跨越多个 Deployment。 +[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 可以在不停机的情况下轻松更新正在运行的服务。 <!-- A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate. --> -部署描述了对象的期望状态,并且如果对该规范的更改是_applied_,则部署控制器以受控速率将实际状态改变为期望状态。 +Deployment 描述了对象的期望状态,并且如果对该规范的更改被成功应用, +则 Deployment 控制器以受控速率将实际状态改变为期望状态。 <!-- - You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label). --> - 您可以操纵标签进行调试。 -- 由于 Kubernetes 控制器(例如 ReplicaSet)和服务使用选择器标签与 Pod 匹配,因此从 Pod 中删除相关标签将阻止其被控制器考虑或由服务提供服务流量。 + 由于 Kubernetes 控制器(例如 ReplicaSet)和服务使用选择器标签来匹配 Pod, + 从 Pod 中删除相关标签将阻止其被控制器考虑或由服务提供服务流量。 如果删除现有 Pod 的标签,其控制器将创建一个新的 Pod 来取代它。 - 这是在"隔离"环境中调试先前"实时"Pod 的有用方法。 - 要以交互方式删除或添加标签,请使用[`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label)。 + 这是在"隔离"环境中调试先前"活跃"的 Pod 的有用方法。 + 要以交互方式删除或添加标签,请使用 [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label)。 <!-- ## Container Images @@ -191,38 +201,29 @@ A desired state of an object is described by a Deployment, and if changes to tha <!-- The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image. 
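As an illustration of making pull behaviour explicit, the sketch below sets `imagePullPolicy` and references the image by the digest value used as an example elsewhere on this page; the Pod name and repository are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-image-demo        # hypothetical name
spec:
  containers:
  - name: app
    # Pinning by digest means the exact same image version is used every time;
    # the repository and digest here are illustrative only.
    image: images.my-company.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
    imagePullPolicy: IfNotPresent   # pull only if the image is not already on the node
```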
--> -当 [kubelet](/docs/admin/kubelet/)尝试拉取指定的镜像时,[imagePullPolicy](/docs/concepts/containers/images/#升级镜像)和镜像标签会生效。 +[imagePullPolicy](/zh/docs/concepts/containers/images/#updating-images)和镜像标签会影响 +[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 何时尝试拉取指定的镜像。 <!-- - `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally. ---> -- `imagePullPolicy: IfNotPresent`:仅当镜像在本地不存在时镜像才被拉取。 - -<!-- - `imagePullPolicy: Always`: the image is pulled every time the pod is started. ---> -- `imagePullPolicy: Always`:每次启动 pod 的时候都会拉取镜像。 - -<!-- - `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied. ---> -- `imagePullPolicy` 省略时,镜像标签为 `:latest` 或不存在,使用 `Always` 值。 - -<!-- - `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `IfNotPresent` is applied. ---> -- `imagePullPolicy` 省略时,指定镜像标签并且不是 `:latest`,使用 `IfNotPresent` 值。 - -<!-- - `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image. --> +- `imagePullPolicy: IfNotPresent`:仅当镜像在本地不存在时才被拉取。 +- `imagePullPolicy: Always`:每次启动 Pod 的时候都会拉取镜像。 +- `imagePullPolicy` 省略时,镜像标签为 `:latest` 或不存在,使用 `Always` 值。 +- `imagePullPolicy` 省略时,指定镜像标签并且不是 `:latest`,使用 `IfNotPresent` 值。 - `imagePullPolicy: Never`:假设镜像已经存在本地,不会尝试拉取镜像。 <!-- To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), for example `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`. The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value. --> {{< note >}} -要确保容器始终使用相同版本的镜像,你可以指定其 [摘要](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), 例如`sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`。 +要确保容器始终使用相同版本的镜像,你可以指定其 +[摘要](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), +例如 `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`。 摘要唯一地标识出镜像的指定版本,因此除非您更改摘要值,否则 Kubernetes 永远不会更新它。 {{< /note >}} @@ -230,15 +231,15 @@ To make sure the container always uses the same version of the image, you can sp You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly. --> {{< note >}} -在生产中部署容器时应避免使用 `:latest` 标记,因为更难跟踪正在运行的镜像版本,并且更难以正确回滚。 +在生产中部署容器时应避免使用 `:latest` 标记,因为这样更难跟踪正在运行的镜像版本,并且更难以正确回滚。 {{< /note >}} <!-- The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed. --> {{< note >}} -底层镜像提供程序的缓存语义甚至使 `imagePullPolicy: Always`变得高效。 -例如,对于 Docker,如果镜像已经存在,则拉取尝试很快,因为镜像层都被缓存并且不需要镜像下载。 +底层镜像驱动程序的缓存语义能够使即便 `imagePullPolicy: Always` 的配置也很高效。 +例如,对于 Docker,如果镜像已经存在,则拉取尝试很快,因为镜像层都被缓存并且不需要下载。 {{< /note >}} <!-- @@ -249,19 +250,20 @@ The caching semantics of the underlying image provider make even `imagePullPolic <!-- - Use `kubectl apply -f <directory>`. This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes it to `apply`. 
--> -- 使用`kubectl apply -f <directory>`。 - 它在`<directory>`中的所有`.yaml`,`.yml`和`.json`文件中查找 Kubernetes 配置,并将其传递给`apply`。 +- 使用 `kubectl apply -f <directory>`。 + 它在 `<directory>` 中的所有` .yaml`、`.yml` 和 `.json` 文件中查找 Kubernetes 配置,并将其传递给 `apply`。 <!-- - Use label selectors for `get` and `delete` operations instead of specific object names. See the sections on [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). --> -- 使用标签选择器进行`get`和`delete`操作,而不是特定的对象名称。 -- 请参阅[标签选择器](/docs/concepts/overview/working-with-objects/labels/#label-selectors)和[有效使用标签](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 +- 使用标签选择器进行 `get` 和 `delete` 操作,而不是特定的对象名称。 +- 请参阅[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)和 + [有效使用标签](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 <!-- - Use `kubectl run` and `kubectl expose` to quickly create single-container Deployments and Services. See [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) for an example. --> - 使用`kubectl run`和`kubectl expose`来快速创建单容器部署和服务。 - 有关示例,请参阅[使用服务访问集群中的应用程序](/docs/tasks/access-application-cluster/service-access-application-cluster/)。 + 有关示例,请参阅[使用服务访问集群中的应用程序](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/)。 diff --git a/content/zh/docs/concepts/configuration/pod-overhead.md b/content/zh/docs/concepts/configuration/pod-overhead.md index 3c81b3607a..4f785b152e 100644 --- a/content/zh/docs/concepts/configuration/pod-overhead.md +++ b/content/zh/docs/concepts/configuration/pod-overhead.md @@ -18,9 +18,6 @@ on top of the container requests & limits. 在节点上运行 Pod 时,Pod 本身占用大量系统资源。这些资源是运行 Pod 内容器所需资源的附加资源。 _POD 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。 - - - <!-- body --> <!-- @@ -36,8 +33,8 @@ time according to the overhead associated with the Pod's [RuntimeClass](/docs/concepts/containers/runtime-class/). --> -在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/docs/concepts/containers/runtime-class/) 相关联的开销在 -[准入](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。 +在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 相关联的开销在 +[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。 <!-- When Pod Overhead is enabled, the overhead is considered in addition to the sum of container @@ -56,7 +53,8 @@ You need to make sure that the `PodOverhead` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18) across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field. 
--> -您需要确保在集群中启用了 `PodOverhead` [特性门](/docs/reference/command-line-tools-reference/feature-gates/)(在 1.18 默认是开启的),以及一个用于定义 `overhead` 字段的 `RuntimeClass`。 +您需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) +(在 1.18 默认是开启的),以及一个用于定义 `overhead` 字段的 `RuntimeClass`。 <!-- ## Usage example @@ -68,7 +66,9 @@ To use the PodOverhead feature, you need a RuntimeClass that defines the `overhe an example, you could use the following RuntimeClass definition with a virtualizing container runtime that uses around 120MiB per Pod for the virtual machine and the guest OS: --> -要使用 PodOverhead 特性,需要一个定义 `overhead` 字段的 RuntimeClass. 作为例子,可以在虚拟机和来宾操作系统中通过一个虚拟化容器运行时来定义 RuntimeClass 如下,其中每个 Pod 大约使用 120MiB: +要使用 PodOverhead 特性,需要一个定义 `overhead` 字段的 RuntimeClass。 +作为例子,可以在虚拟机和寄宿操作系统中通过一个虚拟化容器运行时来定义 +RuntimeClass 如下,其中每个 Pod 大约使用 120MiB: ```yaml --- @@ -123,8 +123,9 @@ updates the workload's PodSpec to include the `overhead` as described in the Run the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod to include an `overhead`. --> -在准入阶段 RuntimeClass [准入控制器](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含 - RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义,该 Pod 将会被拒绝。在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod, 包含了一个 `overhead`. +在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含 + RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义,该 Pod 将会被拒绝。 +在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod, 包含了一个 `overhead`. <!-- After the RuntimeClass admission controller, you can check the updated PodSpec: @@ -298,12 +299,8 @@ from source in the meantime. 在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过 `kube_pod_overhead` 指标来协助确定何时使用 PodOverhead 以及协助观察以一个既定开销运行的工作负载的稳定性。 该特性在 kube-state-metrics 的 1.9 发行版本中不可用,不过预计将在后续版本中发布。在此之前,用户需要从源代码构建 kube-state-metrics. - - ## {{% heading "whatsnext" %}} - -* [RuntimeClass](/docs/concepts/containers/runtime-class/) +* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) * [PodOverhead 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) - diff --git a/content/zh/docs/concepts/configuration/secret.md b/content/zh/docs/concepts/configuration/secret.md index be3bf12cee..47b0e983ff 100644 --- a/content/zh/docs/concepts/configuration/secret.md +++ b/content/zh/docs/concepts/configuration/secret.md @@ -1,9 +1,24 @@ --- title: Secret content_type: concept -weight: 50 +feature: + title: Secret 和配置管理 + description: > + 部署和更新 Secrets 和应用程序的配置而不必重新构建容器镜像,且 + 不必将软件堆栈配置中的秘密信息暴露出来。 +weight: 30 --- - +<!-- +reviewers: +- mikedanese +title: Secrets +content_type: concept +feature: + title: Secret and configuration management + description: > + Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration. +weight: 30 +--> <!-- overview --> @@ -14,12 +29,10 @@ is safer and more flexible than putting it verbatim in a {{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. 
--> -`Secret` 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 ssh key。 +`Secret` 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 SSH 密钥。 将这些信息放在 `secret` 中比放在 {{< glossary_tooltip term_id="pod" >}} 的定义或者 {{< glossary_tooltip text="容器镜像" term_id="image" >}} 中来说更加安全和灵活。 参阅 [Secret 设计文档](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) 获取更多详细信息。 - - <!-- body --> <!-- @@ -27,26 +40,33 @@ is safer and more flexible than putting it verbatim in a A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a -Pod specification or in an image; putting it in a Secret object allows for -more control over how it is used, and reduces the risk of accidental exposure. ---> +Pod specification or in an image. Users can create secrets and the system +also creates some secrets. +--> ## Secret 概览 -Secret 是一种包含少量敏感信息例如密码、token 或 key 的对象。这样的信息可能会被放在 Pod spec 中或者镜像中;将其放在一个 secret 对象中可以更好地控制它的用途,并降低意外暴露的风险。 +Secret 是一种包含少量敏感信息例如密码、令牌或密钥的对象。 +这样的信息可能会被放在 Pod 规约中或者镜像中。 +用户可以创建 Secret,同时系统也创建了一些 Secret。 <!-- -Users can create secrets, and the system also creates some secrets. +To use a secret, a Pod needs to reference the secret. +A secret can be used with a Pod in three ways: -To use a secret, a pod needs to reference the secret. -A secret can be used with a pod in two ways: as files in a +- As [files](#using-secrets-as-files-from-a-pod) in a {{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of -its containers, or used by kubelet when pulling images for the pod. +its containers. +- As [container environment variable](#using-secrets-as-environment-variables). +- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod. --> +要使用 Secret,Pod 需要引用 Secret。 +Pod 可以用三种方式之一来使用 Secret: -用户可以创建 secret,同时系统也创建了一些 secret。 - -要使用 secret,pod 需要引用 secret。Pod 可以用两种方式使用 secret:作为 {{< glossary_tooltip text="volume" term_id="volume" >}} 中的文件被挂载到 pod 中的一个或者多个容器里,或者当 kubelet 为 pod 拉取镜像时使用。 +- 作为挂载到一个或多个容器上的 {{< glossary_tooltip text="卷" term_id="volume" >}} + 中的[文件](#using-secrets-as-files-from-a-pod)。 +- 作为[容器的环境变量](#using-secrets-as-environment-variables) +- 由 [kubelet 在为 Pod 拉取镜像时使用](#using-imagepullsecrets) <!-- ### Built-in Secrets @@ -57,93 +77,136 @@ Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret. --> +### 内置 Secret -### 内置 secret +#### 服务账号使用 API 凭证自动创建和附加 Secret -#### Service Account 使用 API 凭证自动创建和附加 secret - -Kubernetes 自动创建包含访问 API 凭据的 secret,并自动修改您的 pod 以使用此类型的 secret。 +Kubernetes 自动创建包含访问 API 凭据的 Secret,并自动修改你的 Pod 以使用此类型的 Secret。 <!-- The automatic creation and use of API credentials can be disabled or overridden -if desired. However, if all you need to do is securely access the apiserver, +if desired. However, if all you need to do is securely access the API server, this is the recommended workflow. -See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more -information on how Service Accounts work. +See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) +documentation for more information on how Service Accounts work. 
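As a sketch of the first two consumption methods listed above, the Pod below mounts a Secret as files and also exposes one of its keys as an environment variable. It assumes the `mysecret` object with `username` and `password` keys that is created later on this page; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4   # placeholder image
    env:
    - name: SECRET_USERNAME      # value of the "username" key from the Secret
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret     # each key becomes a file under this path
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
```

If the referenced Secret or key does not exist yet, the Pod will not start until it does.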
--> +如果需要,可以禁用或覆盖自动创建和使用 API 凭据。 +但是,如果您需要的只是安全地访问 API 服务器,我们推荐这样的工作流程。 -如果需要,可以禁用或覆盖自动创建和使用 API 凭据。但是,如果您需要的只是安全地访问 apiserver,我们推荐这样的工作流程。 - -参阅 [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) 文档获取关于 Service Account 如何工作的更多信息。 +参阅[服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/) +文档了解关于服务账号如何工作的更多信息。 <!-- ### Creating your own Secrets #### Creating a Secret Using kubectl create secret -Say that some pods need to access a database. The -username and password that the pods should use is in the files -`./username.txt` and `./password.txt` on your local machine. +Secrets can contain user credentials required by Pods to access a database. +For example, a database connection string +consists of a username and password. You can store the username in a file `./username.txt` +and the password in a file `./password.txt` on your local machine. --> ### 创建您自己的 Secret -#### 使用 kubectl 创建 Secret +#### 使用 `kubectl` 创建 Secret -假设有些 pod 需要访问数据库。这些 pod 需要使用的用户名和密码在您本地机器的 `./username.txt` 和 `./password.txt` 文件里。 +Secret 中可以包含 Pod 访问数据库时需要的用户凭证信息。 +例如,某个数据库连接字符串可能包含用户名和密码。 +你可以将用户名和密码保存在本地机器的 `./username.txt` 和 `./password.txt` 文件里。 ```shell -# Create files needed for rest of example. +# 创建本例中要使用的文件 echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt ``` <!-- -The `kubectl create secret` command -packages these files into a Secret and creates -the object on the Apiserver. +The `kubectl create secret` command packages these files into a Secret and creates +the object on the API Server. +The name of a Secret object must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). --> `kubectl create secret` 命令将这些文件打包到一个 Secret 中并在 API server 中创建了一个对象。 - +Secret 对象的名称必须是合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 ```shell kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt ``` + +输出类似于: + ``` secret "db-user-pass" created ``` -{{< note >}} <!-- -Special characters such as `$`, `\*`, and `!` require escaping. -If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: - kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\\*d\\$zDsb - You do not need to escape special characters in passwords from files (`--from-file`). +Default key name is the filename. You may optionally set the key name using `[--from-file=[key=]source]`. +--> +默认的键名是文件名。你也可以使用 `[--from-file=[key=]source]` 参数来设置键名。 + +```shell +kubectl create secret generic db-user-pass \ + --from-file=username=./username.txt \ + --from-file=password=./password.txt +``` + +<!-- +Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). +For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way: + +``` +kubectl create secret generic dev-db-secret \ + --from-literal=username=devuser \ + --from-literal=password='S!B\*d$zDsb=' +``` + +You do not need to escape special characters in passwords from files (`--from-file`). 
--> +{{< note >}} +特殊字符(例如 `$`、`*`、`*`、`=` 和 `!`)可能会被你的 +[Shell](https://en.wikipedia.org/wiki/Shell_(computing)) 解析,因此需要转义。 +在大多数 Shell 中,对密码进行转义的最简单方式是使用单引号(`'`)将其扩起来。 +例如,如果您的实际密码是 `S!B\*d$zDsb=` ,则应通过以下方式执行命令: +``` +kubectl create secret generic dev-db-secret \ + --from-literal=username=devuser \ + --from-literal=password='S!B\*d$zDsb=' +``` -特殊字符(例如 `$`, `\*` 和 `!` )需要转义。 -如果您使用的密码具有特殊字符,则需要使用 `\\` 字符对其进行转义。 例如,如果您的实际密码是 `S!B\*d$zDsb` ,则应通过以下方式执行命令: - kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\\*d\\$zDsb -您无需从文件中转义密码中的特殊字符( `--from-file` )。 +您无需对文件中保存(`--from-file`)的密码中的特殊字符执行转义操作。 {{< /note >}} <!-- You can check that the secret was created like this: --> -您可以这样检查刚创建的 secret: +您可以这样检查刚创建的 Secret: ```shell kubectl get secrets ``` + +其输出类似于: + ``` NAME TYPE DATA AGE db-user-pass Opaque 2 51s ``` + +<!-- +You can view a description of the secret: +--> +你可以查看 Secret 的描述: + ```shell kubectl describe secrets/db-user-pass ``` + +其输出类似于: + ``` Name: db-user-pass Namespace: default @@ -158,53 +221,69 @@ password.txt: 12 bytes username.txt: 5 bytes ``` -{{< note >}} - <!-- `kubectl get` and `kubectl describe` avoid showing the contents of a secret by default. This is to protect the secret from being exposed accidentally to an onlooker, or from being stored in a terminal log. --> - -默认情况下,`kubectl get` 和 `kubectl describe` 避免显示密码的内容。 这是为了防止机密被意外地暴露给旁观者或存储在终端日志中。 - +{{< note >}} +默认情况下,`kubectl get` 和 `kubectl describe` 避免显示密码的内容。 +这是为了防止机密被意外地暴露给旁观者或存储在终端日志中。 {{< /note >}} <!-- -See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. +See [decoding secret](#decoding-secret) for how to see the contents of a secret. --> - -请参阅 [解码 secret](#解码-secret) 了解如何查看它们的内容。 +请参阅[解码 Secret](#decoding-secret) 了解如何查看 Secret 的内容。 <!-- #### Creating a Secret Manually -You can also create a Secret in a file first, in json or yaml format, -and then create that object. The -[Secret](/docs/reference/generated/kubernetes-api/v1.12/#secret-v1-core) contains two maps: -data and stringData. The data field is used to store arbitrary data, encoded using +You can also create a Secret in a file first, in JSON or YAML format, +and then create that object. +The name of a Secret object must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +contains two maps: +`data` and `stringData`. The `data` field is used to store arbitrary data, encoded using base64. The stringData field is provided for convenience, and allows you to provide secret data as unencoded strings. 
--> - #### 手动创建 Secret -您也可以先以 json 或 yaml 格式在文件中创建一个 secret 对象,然后创建该对象。 -[密码](/docs/reference/generated/kubernetes-api/v1.12/#secret-v1-core)包含两种类型,数据和字符串数据。 -数据字段用于存储使用 base64 编码的任意数据。 提供 stringData 字段是为了方便起见,它允许您将机密数据作为未编码的字符串提供。 +您也可以先以 JSON 或 YAML 格式文件创建一个 Secret,然后创建该对象。 +Secret 对象的名称必须是合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 +[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +包含两个映射:`data` 和 `stringData`。 +`data` 字段用于存储使用 base64 编码的任意数据。 +提供 `stringData` 字段是为了方便,允许您用未编码的字符串提供机密数据。 <!-- For example, to store two strings in a Secret using the data field, convert them to base64 as follows: --> - -例如,要使用数据字段将两个字符串存储在 Secret 中,请按如下所示将它们转换为 base64: +例如,要使用 `data` 字段将两个字符串存储在 Secret 中,请按如下所示将它们转换为 base64: ```shell echo -n 'admin' | base64 +``` + +<!-- The output is similar to: --> +输出类似于: + +``` YWRtaW4= +``` + +```shell echo -n '1f2d1e2e67df' | base64 +``` + +<!-- The output is similar to: --> +输出类似于: + +``` MWYyZDFlMmU2N2Rm ``` @@ -212,7 +291,7 @@ MWYyZDFlMmU2N2Rm Write a Secret that looks like this: --> -现在可以像这样写一个 secret 对象: +现在可以像这样写一个 Secret 对象: ```yaml apiVersion: v1 @@ -228,12 +307,15 @@ data: <!-- Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): --> - -使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 创建 secret: +使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 创建 Secret 对象: ```shell kubectl apply -f ./secret.yaml ``` + +<!--The output is similar to: --> +输出类似于: + ``` secret "mysecret" created ``` @@ -249,11 +331,12 @@ parts of that configuration file during your deployment process. If your application uses the following configuration file: --> - -对于某些情况,您可能希望改用 stringData 字段。此字段允许您将非 base64 编码的字符串直接放入 Secret 中, +在某些情况下,你可能希望改用 stringData 字段。 +此字段允许您将非 base64 编码的字符串直接放入 Secret 中, 并且在创建或更新 Secret 时将为您编码该字符串。 -下面的一个实践示例提供了一个参考,您正在部署使用密钥存储配置文件的应用程序,并希望在部署过程中填补齐配置文件的部分内容。 +下面的一个实践示例提供了一个参考。 +你正在部署使用 Secret 存储配置文件的应用程序,并希望在部署过程中填齐配置文件的部分内容。 如果您的应用程序使用以下配置文件: @@ -263,9 +346,7 @@ username: "user" password: "password" ``` -<!-- -You could store this in a Secret using the following: ---> +<!-- You could store this in a Secret using the following: --> 您可以使用以下方法将其存储在Secret中: @@ -293,6 +374,7 @@ retrieving Secrets. For example, if you run the following command: 然后,您的部署工具可以在执行 `kubectl apply` 之前替换模板的 `{{username}}` 和 `{{password}}` 变量。 stringData 是只写的便利字段。检索 Secrets 时永远不会被输出。例如,如果您运行以下命令: + ```shell kubectl get secret mysecret -o yaml ``` @@ -300,8 +382,7 @@ kubectl get secret mysecret -o yaml <!-- The output will be similar to: --> - -输出将类似于: +输出类似于: ```yaml apiVersion: v1 @@ -322,7 +403,8 @@ If a field is specified in both data and stringData, the value from stringData is used. For example, the following Secret definition: --> -如果在 data 和 stringData 中都指定了字段,则使用 stringData 中的值。例如,以下是 Secret 定义: +如果在 data 和 stringData 中都指定了某一字段,则使用 stringData 中的值。 +例如,以下是 Secret 定义: ```yaml apiVersion: v1 @@ -339,7 +421,6 @@ stringData: <!-- Results in the following secret: --> - secret 中的生成结果: ```yaml @@ -359,45 +440,56 @@ data: <!-- Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. --> - -`YWRtaW5pc3RyYXRvcg==` 转换成了 `administrator`。 +其中的 `YWRtaW5pc3RyYXRvcg==` 解码后即是 `administrator`。 <!-- The keys of data and stringData must consist of alphanumeric characters, '-', '_' or '.'. 
-**Encoding Note:** The serialized JSON and YAML values of secret data are +The serialized JSON and YAML values of secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS users should avoid using the `-b` option to split long lines. Conversely Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if `-w` option is not available. --> +data 和 stringData 的键必须由字母数字字符 '-', '\_' 或者 '.' 组成。 -data 和 stringData 的键必须由字母数字字符 '-', '_' 或者 '.' 组成。 - -** 编码注意:** 秘密数据的序列化 JSON 和 YAML 值被编码为 base64 字符串。换行符在这些字符串中无效,因此必须省略。在 Darwin/macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来分隔长行。相反,Linux用户 *应该* 在 `base64` 命令中添加选项 `-w 0` ,或者,如果 `-w` 选项不可用的情况下,执行 `base64 | tr -d '\n'`。 +{{< note >}} +Secret 数据在序列化为 JSON 和 YAML 时,其值被编码为 base64 字符串。 +换行符在这些字符串中是非法的,因此必须省略。 +在 Darwin/macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来分隔长行。 +相反,Linux用户 *应该* 在 `base64` 命令中添加选项 `-w 0` , +或者,如果 `-w` 选项不可用的情况下,执行 `base64 | tr -d '\n'`。 +{{< /note >}} <!-- #### Creating a Secret from Generator -Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) -since 1.14. With this new feature, -you can also create a Secret from generators and then apply it to create the object on -the Apiserver. The generators -should be specified in a `kustomization.yaml` inside a directory. -For example, to generate a Secret from files `./username.txt` and `./password.txt` +Since Kubernetes v1.14, `kubectl` supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize provides resource Generators to +create Secrets and ConfigMaps. The Kustomize generators should be specified in a +`kustomization.yaml` file inside a directory. After generating the Secret, +you can create the Secret on the API server with `kubectl apply`. --> - #### 从生成器创建 Secret -Kubectl 从 1.14 版本开始支持 [使用 Kustomize 管理对象](/docs/tasks/manage-kubernetes-objects/kustomization/) -使用此新功能,您还可以从生成器创建一个 Secret,然后将其应用于在 Apiserver 上创建对象。 -生成器应在目录内的 `kustomization.yaml` 中指定。 -例如,从文件 `./username.txt` 和 `./password.txt` 生成一个 Secret。 +Kubectl 从 1.14 版本开始支持[使用 Kustomize 管理对象](/zh/docs/tasks/manage-kubernetes-objects/kustomization/)。 +Kustomize 提供资源生成器创建 Secret 和 ConfigMaps。 +Kustomize 生成器要在当前目录内的 `kustomization.yaml` 中指定。 +生成 Secret 之后,使用 `kubectl apply` 在 API 服务器上创建对象。 + +<!-- +#### Generating a Secret from files + +You can generate a Secret by defining a `secretGenerator` from the +files ./username.txt and ./password.txt: +--> +#### 从文件生成 Secret {#generating-a-secret-from-files} + +你可以通过定义基于文件 `./username.txt` 和 `./password.txt` 的 +`secretGenerator` 来生成一个 Secret。 ```shell -# Create a kustomization.yaml file with SecretGenerator cat <<EOF >./kustomization.yaml secretGenerator: - name: db-user-pass @@ -408,24 +500,34 @@ EOF ``` <!-- -Apply the kustomization directory to create the Secret object. +Apply the directory, containing the `kustomization.yaml`, to create the Secret. --> - -应用 kustomization 目录创建 Secret 对象。 +应用包含 `kustomization.yaml` 目录以创建 Secret 对象。 ```shell -$ kubectl apply -k . +kubectl apply -k . 
+``` + +<!-- The output is similar to: --> +输出类似于: + +``` secret/db-user-pass-96mffmfh4k created ``` <!-- You can check that the secret was created like this: --> - -您可以检查 secret 是否是这样创建的: +您可以检查 Secret 是否创建成功: ```shell -$ kubectl get secrets +kubectl get secrets +``` + +<!-- The output is similar to: --> +输出类似于: + +``` NAME TYPE DATA AGE db-user-pass-96mffmfh4k Opaque 2 51s @@ -444,15 +546,18 @@ username.txt: 5 bytes ``` <!-- -For example, to generate a Secret from literals `username=admin` and `password=secret`, -you can specify the secret generator in `kustomization.yaml` as ---> +#### Generating a Secret from string literals -例如,要从文字 `username=admin` 和 `password=secret` 生成秘密,可以在 `kustomization.yaml` 中将秘密生成器指定为 +You can create a Secret by defining a `secretGenerator` +from literals `username=admin` and `password=secret`: +--> +#### 基于字符串值来创建 Secret {#generating-a-secret-from-string-literals} + +你可以通过定义使用字符串值 `username=admin` 和 `password=secret` +的 `secretGenerator` 来创建 Secret。 ```shell -# Create a kustomization.yaml file with SecretGenerator -$ cat <<EOF >./kustomization.yaml +cat <<EOF >./kustomization.yaml secretGenerator: - name: db-user-pass literals: @@ -460,36 +565,57 @@ secretGenerator: - password=secret EOF ``` -Apply the kustomization directory to create the Secret object. +<!-- +Apply the directory, containing the `kustomization.yaml`, to create the Secret. +--> +应用包含 `kustomization.yaml` 目录以创建 Secret 对象。 + ```shell -$ kubectl apply -k . -secret/db-user-pass-dddghtt9b5 created +kubectl apply -k . ``` -{{< note >}} <!-- -The generated Secrets name has a suffix appended by hashing the contents. This ensures that a new -Secret is generated each time the contents is modified. +The output is similar to: +--> +输出类似于: + +``` +secret/db-user-pass-dddghtt9b5 created +``` + +<!-- +When a Secret is generated, the Secret name is created by hashing +the Secret data and appending this value to the name. This ensures that +a new Secret is generated each time the data is modified. --> -通过对内容进行序列化后,生成一个后缀作为 Secrets 的名称。这样可以确保每次修改内容时都会生成一个新的 Secret。 - +{{< note >}} +Secret 被创建时,Secret 的名称是通过为 Secret 数据计算哈希值得到一个字符串, +并将该字符串添加到名称之后得到的。这会确保数据被修改后,会有新的 Secret +对象被生成。 {{< /note >}} <!-- #### Decoding a Secret -Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section: +Secrets can be retrieved via the `kubectl get secret`. 
+For example, to retrieve the secret created in the previous section: --> -#### 解码 Secret +#### 解码 Secret {#decoding-secret} -可以使用 `kubectl get secret` 命令获取 secret。例如,获取在上一节中创建的 secret: +可以使用 `kubectl get secret` 命令获取 Secret。例如,获取在上一节中创建的 secret: ```shell kubectl get secret mysecret -o yaml ``` -``` + +<!-- +The output is similar to: +--> +输出类似于: + +```yaml apiVersion: v1 kind: Secret metadata: @@ -508,11 +634,15 @@ data: Decode the password field: --> -解码密码字段: +解码 `password` 字段: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode ``` + +<!-- The output is similar to:--> +输出类似于: + ``` 1f2d1e2e67df ``` @@ -522,10 +652,9 @@ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode An existing secret may be edited with the following command: --> - #### 编辑 Secret -可以通过下面的命令编辑一个已经存在的 secret 。 +可以通过下面的命令可以编辑一个已经存在的 secret 。 ```shell kubectl edit secrets mysecret @@ -534,8 +663,7 @@ kubectl edit secrets mysecret <!-- This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field: --> - -这将打开默认配置的编辑器,并允许更新 `data` 字段中的 base64 编码的 secret: +这将打开默认配置的编辑器,并允许更新 `data` 字段中的 base64 编码的 Secret 值: ``` # Please edit the object below. Lines beginning with a '#' will be ignored, @@ -572,8 +700,8 @@ systems on your behalf. ## 使用 Secret Secret 可以作为数据卷被挂载,或作为{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}} -暴露出来以供 pod 中的容器使用。它们也可以被系统的其他部分使用,而不直接暴露在 pod 内。 -例如,它们可以保存凭据,系统的其他部分应该用它来代表您与外部系统进行交互。 +暴露出来以供 Pod 中的容器使用。它们也可以被系统的其他部分使用,而不直接暴露在 Pod 内。 +例如,它们可以保存凭据,系统的其他部分将用它来代表你与外部系统进行交互。 <!-- ### Using Secrets as Files from a Pod @@ -588,16 +716,20 @@ To consume a Secret in a volume in a Pod: This is an example of a pod that mounts a secret in a volume: --> -### 在 Pod 中使用 Secret 文件 +### 在 Pod 中使用 Secret 文件 {#using-secrets-as-files-from-a-pod} -在 Pod 中的 volume 里使用 Secret: +在 Pod 中使用存放在卷中的 Secret: -1. 创建一个 secret 或者使用已有的 secret。多个 pod 可以引用同一个 secret。 -1. 修改您的 pod 的定义在 `spec.volumes[]` 下增加一个 volume。可以给这个 volume 随意命名,它的 `spec.volumes[].secret.secretName` 必须等于 secret 对象的名字。 -1. 将 `spec.containers[].volumeMounts[]` 加到需要用到该 secret 的容器中。指定 `spec.containers[].volumeMounts[].readOnly = true` 和 `spec.containers[].volumeMounts[].mountPath` 为您想要该 secret 出现的尚未使用的目录。 -1. 修改您的镜像并且/或者命令行让程序从该目录下寻找文件。Secret 的 `data` 映射中的每一个键都成为了 `mountPath` 下的一个文件名。 +1. 创建一个 Secret 或者使用已有的 Secret。多个 Pod 可以引用同一个 Secret。 +1. 修改你的 Pod 定义,在 `spec.volumes[]` 下增加一个卷。可以给这个卷随意命名, + 它的 `spec.volumes[].secret.secretName` 必须是 Secret 对象的名字。 +1. 将 `spec.containers[].volumeMounts[]` 加到需要用到该 Secret 的容器中。 + 指定 `spec.containers[].volumeMounts[].readOnly = true` 和 + `spec.containers[].volumeMounts[].mountPath` 为你想要该 Secret 出现的尚未使用的目录。 +1. 修改你的镜像并且/或者命令行,让程序从该目录下寻找文件。 + Secret 的 `data` 映射中的每一个键都对应 `mountPath` 下的一个文件名。 -这是一个在 pod 中使用 volume 挂在 secret 的例子: +这是一个在 Pod 中使用存放在挂载卷中 Secret 的例子: ```yaml apiVersion: v1 @@ -626,21 +758,22 @@ own `volumeMounts` block, but only one `.spec.volumes` is needed per secret. You can package many files into one secret, or use many secrets, whichever is convenient. -**Projection of secret keys to specific paths** +#### Projection of Secret keys to specific paths We can also control the paths within the volume where Secret keys are projected. 
You can use `.spec.volumes[].secret.items` field to change target path of each key: --> +您想要用的每个 Secret 都需要在 `spec.volumes` 中引用。 -您想要用的每个 secret 都需要在 `spec.volumes` 中指明。 +如果 Pod 中有多个容器,每个容器都需要自己的 `volumeMounts` 配置块, +但是每个 Secret 只需要一个 `spec.volumes`。 -如果 pod 中有多个容器,每个容器都需要自己的 `volumeMounts` 配置块,但是每个 secret 只需要一个 `spec.volumes`。 +您可以打包多个文件到一个 Secret 中,或者使用的多个 Secret,怎样方便就怎样来。 -您可以打包多个文件到一个 secret 中,或者使用的多个 secret,怎样方便就怎样来。 +#### 将 Secret 键名映射到特定路径 -**向特性路径映射 secret 密钥** - -我们还可以控制 Secret key 映射在 volume 中的路径。您可以使用 `spec.volumes[].secret.items` 字段修改每个 key 的目标路径: +我们还可以控制 Secret 键名在存储卷中映射的的路径。 +你可以使用 `spec.volumes[].secret.items` 字段修改每个键对应的目标路径: ```yaml apiVersion: v1 @@ -674,7 +807,7 @@ If `.spec.volumes[].secret.items` is used, only keys specified in `items` are pr To consume all keys from the secret, all of them must be listed in the `items` field. All listed keys must exist in the corresponding secret. Otherwise, the volume is not created. -**Secret files permissions** +#### Secret files permissions You can also specify the permission mode bits files part of a secret will have. If you don't specify any, `0644` is used by default. You can specify a default @@ -682,17 +815,19 @@ mode for the whole secret volume and override per key if needed. For example, you can specify a default mode like this: --> - 将会发生什么呢: -- `username` secret 存储在 `/etc/foo/my-group/my-username` 文件中而不是 `/etc/foo/username` 中。 -- `password` secret 没有被映射 +- `username` Secret 存储在 `/etc/foo/my-group/my-username` 文件中而不是 `/etc/foo/username` 中。 +- `password` Secret 没有被映射 -如果使用了 `spec.volumes[].secret.items`,只有在 `items` 中指定的 key 被映射。要使用 secret 中所有的 key,所有这些都必须列在 `items` 字段中。所有列出的密钥必须存在于相应的 secret 中。否则,不会创建卷。 +如果使用了 `spec.volumes[].secret.items`,只有在 `items` 中指定的键会被映射。 +要使用 Secret 中所有键,就必须将它们都列在 `items` 字段中。 +所有列出的键名必须存在于相应的 Secret 中。否则,不会创建卷。 -**Secret 文件权限** +#### Secret 文件权限 -您还可以指定 secret 将拥有的权限模式位文件。如果不指定,默认使用 `0644`。您可以为整个保密卷指定默认模式,如果需要,可以覆盖每个密钥。 +你还可以指定 Secret 将拥有的权限模式位。如果不指定,默认使用 `0644`。 +你可以为整个 Secret 卷指定默认模式;如果需要,可以为每个密钥设定重载值。 例如,您可以指定如下默认模式: @@ -722,16 +857,67 @@ secret volume mount will have permission `0400`. Note that the JSON spec doesn't support octal notation, so use the value 256 for 0400 permissions. If you use yaml instead of json for the pod, you can use octal notation to specify permissions in a more natural way. +--> +之后,Secret 将被挂载到 `/etc/foo` 目录,而所有通过该 Secret 卷挂载 +所创建的文件的权限都是 `0400`。 +请注意,JSON 规范不支持八进制符号,因此使用 256 值作为 0400 权限。 +如果你使用 YAML 而不是 JSON,则可以使用八进制符号以更自然的方式指定权限。 + +<!-- +Note if you `kubectl exec` into the Pod, you need to follow the symlink to find +the expected file mode. For example, + +Check the secrets file mode on the pod. +--> +注意,如果你通过 `kubectl exec` 进入到 Pod 中,你需要沿着符号链接来找到 +所期望的文件模式。例如,下面命令检查 Secret 文件的访问模式: + +```shell +kubectl exec mypod -it sh + +cd /etc/foo +ls -l +``` + +<!-- +The output is similar to this: +--> +输出类似于: + +``` +total 0 +lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password +lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username +``` + +<!-- +Follow the symlink to find the correct file mode. 
+--> +沿着符号链接,可以查看文件的访问模式: + +```shell +cd /etc/foo/..data +ls -l +``` + +<!-- +The output is similar to this: +--> +输出类似于: + +``` +total 8 +-r-------- 1 root root 12 May 18 00:18 password +-r-------- 1 root root 5 May 18 00:18 username +``` + +<!-- You can also use mapping, as in the previous example, and specify different permission for different files like this: --> -然后,secret 将被挂载到 `/etc/foo` 目录,所有通过该 secret volume 挂载创建的文件的权限都是 `0400`。 - -请注意,JSON 规范不支持八进制符号,因此使用 256 值作为 0400 权限。如果您使用 yaml 而不是 json 作为 pod,则可以使用八进制符号以更自然的方式指定权限。 - -您还可以使用映射,如上一个示例,并为不同的文件指定不同的权限,如下所示: +你还可以使用映射,如上一个示例,并为不同的文件指定不同的权限,如下所示: ```yaml apiVersion: v1 @@ -758,30 +944,36 @@ spec: <!-- In this case, the file resulting in `/etc/foo/my-group/my-username` will have permission value of `0777`. Owing to JSON limitations, you must specify the mode -in decimal notation. +in decimal notation, `511`. Note that this permission value might be displayed in decimal notation if you read it later. -**Consuming Secret Values from Volumes** +#### Consuming Secret Values from Volumes Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files. This is the result of commands executed inside the container from the example above: --> +在这里,位于 `/etc/foo/my-group/my-username` 的文件的权限值为 `0777`。 +由于 JSON 限制,必须以十进制格式指定模式,即 `511`。 -在这种情况下,导致 `/etc/foo/my-group/my-username` 的文件的权限值为 `0777`。由于 JSON 限制,必须以十进制格式指定模式。 +请注意,如果稍后读取此权限值,可能会以十进制格式显示。 -请注意,如果稍后阅读此权限值可能会以十进制格式显示。 +#### 使用来自卷中的 Secret 值 {#consuming-secret-values-from-volumes} -**从 Volume 中消费 secret 值** - -在挂载的 secret volume 的容器内,secret key 将作为文件,并且 secret 的值使用 base-64 解码并存储在这些文件中。这是在上面的示例容器内执行的命令的结果: +在挂载了 Secret 卷的容器内,Secret 键名显示为文件名,并且 Secret 的值 +使用 base-64 解码后存储在这些文件中。 +这是在上面的示例容器内执行的命令的结果: ```shell ls /etc/foo/ ``` + +<!-- The output is similar to: --> +输出类似于: + ``` username password @@ -790,14 +982,21 @@ password ```shell cat /etc/foo/username ``` + +<!-- The output is similar to: --> +输出类似于: + ``` admin ``` - ```shell cat /etc/foo/password ``` + +<!-- The output is similar to: --> +输出类似于: + ``` 1f2d1e2e67df ``` @@ -805,15 +1004,12 @@ cat /etc/foo/password <!-- The program in a container is responsible for reading the secrets from the files. - -**Mounted Secrets are updated automatically** --> - 容器中的程序负责从文件中读取 secret。 -**挂载的 secret 被自动更新** - <!-- +#### Mounted Secrets are updated automatically + When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted secret is fresh on every periodic sync. However, it is using its local cache for getting the current value of the Secret. @@ -827,23 +1023,29 @@ when new keys are projected to the Pod can be as long as kubelet sync period + c propagation delay, where cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero corespondingly). --> -当已经在 volume 中被消费的 secret 被更新时,被映射的 key 也将被更新。Kubelet 在周期性同步时检查被挂载的 secret 是不是最新的。但是,它正在使用其本地缓存来获取 Secret 的当前值。 +#### 挂载的 Secret 会被自动更新 -缓存的类型可以使用 (`ConfigMapAndSecretChangeDetectionStrategy` 中的 [KubeletConfiguration 结构](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). 
-它可以通过基于 ttl 的 watch(默认)传播,也可以将所有请求直接重定向到直接kube-apiserver。 -结果,从更新密钥到将新密钥投射到 Pod 的那一刻的总延迟可能与 kubelet 同步周期 + 缓存传播延迟一样长,其中缓存传播延迟取决于所选的缓存类型。 -(它等于观察传播延迟,缓存的 ttl 或相应为 0) +当已经存储于卷中被使用的 Secret 被更新时,被映射的键也将终将被更新。 +组件 kubelet 在周期性同步时检查被挂载的 Secret 是不是最新的。 +但是,它会使用其本地缓存的数值作为 Secret 的当前值。 -{{< note >}} +缓存的类型可以使用 [KubeletConfiguration 结构](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go) +中的 `ConfigMapAndSecretChangeDetectionStrategy` 字段来配置。 +它可以通过 watch 操作来传播(默认),基于 TTL 来刷新,也可以 +将所有请求直接重定向到 API 服务器。 +因此,从 Secret 被更新到将新 Secret 被投射到 Pod 的那一刻的总延迟可能与 +kubelet 同步周期 + 缓存传播延迟一样长,其中缓存传播延迟取决于所选的缓存类型。 +对应于不同的缓存类型,该延迟或者等于 watch 传播延迟,或者等于缓存的 TTL, +或者为 0。 <!-- A container using a Secret as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive Secret updates. --> - -使用 Secret 作为[子路径](/docs/concepts/storage/volumes#using-subpath)卷安装的容器将不会收到 Secret 更新。 - +{{< note >}} +使用 Secret 作为[子路径](/zh/docs/concepts/storage/volumes#using-subpath)卷挂载的容器 +不会收到 Secret 更新。 {{< /note >}} {{< feature-state for_k8s_version="v1.18" state="alpha" >}} @@ -854,8 +1056,10 @@ individual Secrets and ConfigMaps as immutable. For clusters that extensively us (at least tens of thousands of unique Secret to Pod mounts), preventing changes to their data has the following advantages: --> -Kubernetes 的 alpha 特性 _不可变的 Secret 和 ConfigMap_ 提供了一个设置各个 Secret 和 ConfigMap 为不可变的选项。 -对于大量使用 Secret 的集群(至少有成千上万各不相同的 Secret 供 Pod 挂载),禁止变更它们的数据有下列好处: +Kubernetes 的 alpha 特性 _不可变的 Secret 和 ConfigMap_ 提供了一种可选配置, +可以设置各个 Secret 和 ConfigMap 为不可变的。 +对于大量使用 Secret 的集群(至少有成千上万各不相同的 Secret 供 Pod 挂载), +禁止变更它们的数据有下列好处: <!-- - protects you from accidental (or unwanted) updates that could cause applications outages @@ -863,14 +1067,17 @@ Kubernetes 的 alpha 特性 _不可变的 Secret 和 ConfigMap_ 提供了一个 closing watches for secrets marked as immutable. --> - 防止意外(或非预期的)更新导致应用程序中断 -- 通过将 Secret 标记为不可变来关闭 kube-apiserver 对其的监视,以显著地降低 kube-apiserver 的负载来提升集群性能。 +- 通过将 Secret 标记为不可变来关闭 kube-apiserver 对其的监视,从而显著降低 + kube-apiserver 的负载,提升集群性能。 <!-- To use this feature, enable the `ImmutableEmphemeralVolumes` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set your Secret or ConfigMap `immutable` field to `true`. For example: --> -使用这个特性需要启用 `ImmutableEmphemeralVolumes` [特性开关](/docs/reference/command-line-tools-reference/feature-gates/) 并将 Secret 或 ConfigMap 的 `immutable` 字段设置为 `true`. 例如: +使用这个特性需要启用 `ImmutableEmphemeralVolumes` +[特性开关](/zh/docs/reference/command-line-tools-reference/feature-gates/) +并将 Secret 或 ConfigMap 的 `immutable` 字段设置为 `true`. 例如: ```yaml apiVersion: v1 @@ -890,7 +1097,7 @@ these pods. --> {{< note >}} 一旦一个 Secret 或 ConfigMap 被标记为不可变,撤销此操作或者更改 `data` 字段的内容都是 _不_ 可能的。 -只能删除并重新创建这个 Secret. 现有的 Pod 将维持对已删除 Secret 的挂载点 - 建议重新创建这些 pod. +只能删除并重新创建这个 Secret。现有的 Pod 将维持对已删除 Secret 的挂载点 - 建议重新创建这些 Pod。 {{< /note >}} <!-- @@ -906,15 +1113,17 @@ in a pod: This is an example of a pod that uses secrets from environment variables: --> -#### Secret 作为环境变量 +#### 以环境变量的形式使用 Secrets {#using-secrets-as-environment-variables} -将 secret 作为 pod 中的{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}}使用: +将 Secret 作为 Pod 中的{{< glossary_tooltip text="环境变量" term_id="container-env-variables" >}}使用: -1. 创建一个 secret 或者使用一个已存在的 secret。多个 pod 可以引用同一个 secret。 -1. 修改 Pod 定义,为每个要使用 secret 的容器添加对应 secret key 的环境变量。消费secret key 的环境变量应填充 secret 的名称,并键入 `env[x].valueFrom.secretKeyRef`。 -1. 修改镜像并/或者命令行,以便程序在指定的环境变量中查找值。 +1. 
创建一个 Secret 或者使用一个已存在的 Secret。多个 Pod 可以引用同一个 Secret。 +1. 修改 Pod 定义,为每个要使用 Secret 的容器添加对应 Secret 键的环境变量。 + 使用 Secret 键的环境变量应在 `env[x].valueFrom.secretKeyRef` 中指定 + 要包含的 Secret 名称和键名。 +1. 更改镜像并/或者命令行,以便程序在指定的环境变量中查找值。 -这是一个使用 Secret 作为环境变量的示例: +这是一个使用来自环境变量中的 Secret 值的 Pod 示例: ```yaml apiVersion: v1 @@ -946,19 +1155,29 @@ Inside a container that consumes a secret in an environment variables, the secre normal environment variables containing the base-64 decoded values of the secret data. This is the result of commands executed inside the container from the example above: --> -**消费环境变量里的 Secret 值** +#### 使用来自环境变量的 Secret 值 {#consuming-secret-values-from-environment-variables} -在一个消耗环境变量 secret 的容器中,secret key 作为包含 secret 数据的 base-64 解码值的常规环境变量。这是从上面的示例在容器内执行的命令的结果: +在一个以环境变量形式使用 Secret 的容器中,Secret 键表现为常规的环境变量,其中 +包含 Secret 数据的 base-64 解码值。这是从上面的示例在容器内执行的命令的结果: ```shell echo $SECRET_USERNAME ``` + +<!-- The output is similar to: --> +输出类似于: + ``` admin ``` + ```shell echo $SECRET_PASSWORD ``` + +<!-- The output is similar to: --> +输出类似于: + ``` 1f2d1e2e67df ``` @@ -966,22 +1185,26 @@ echo $SECRET_PASSWORD <!-- ### Using imagePullSecrets -An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry -password to the Kubelet so it can pull a private image on behalf of your Pod. +The `imagePullSecrets` field is a list of references to secrets in the same namespace. +You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry +password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. +See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. -**Manually specifying an imagePullSecret** - -Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) +#### Manually specifying an imagePullSecret +You can learn how to specify `ImagePullSecrets` from the [container images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). --> +#### 使用 imagePullSecret {#using-imagepullsecrets} -#### 使用 imagePullSecret +`imagePullSecrets` 字段中包含一个列表,列举对同一名字空间中的 Secret 的引用。 +你可以使用 `imagePullSecrets` 将包含 Docker(或其他)镜像仓库密码的 Secret 传递给 +kubelet。kubelet 使用此信息来替你的 Pod 拉取私有镜像。 +关于 `imagePullSecrets` 字段的更多信息,请参考 [PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core) 文档。 -imagePullSecret 是将包含 Docker(或其他)镜像注册表密码的 secret 传递给 Kubelet 的一种方式,因此可以代表您的 pod 拉取私有镜像。 +#### 手动指定 imagePullSecret -**手动指定 imagePullSecret** - -imagePullSecret 的使用在 [镜像文档](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) 中说明。 +你可以阅读[容器镜像文档](/zh/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) +以了解如何设置 `imagePullSecrets`。 <!-- ### Arranging for imagePullSecrets to be Automatically Attached @@ -993,10 +1216,13 @@ field set to that of the service account. See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for a detailed explanation of that process. 
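-->

<!--
For illustration only, the following sketch shows a Pod that pulls a private
image using a hypothetical imagePullSecret named `myregistrykey`; the image
reference is also a placeholder:
-->
下面是一个仅作示意的 Pod 清单,通过一个假设名为 `myregistrykey` 的
imagePullSecret 来拉取私有镜像(镜像地址也是假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    # 假设性的私有镜像地址
    image: registry.example.com/my-app:1.0
  imagePullSecrets:
  # 引用同一名字空间中包含镜像仓库凭据的 Secret(假设其已存在)
  - name: myregistrykey
```

<!--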
--> +#### 设置自动附加 imagePullSecrets -### 安排 imagePullSecrets 自动附加 - -您可以手动创建 imagePullSecret,并从 serviceAccount 引用它。使用该 serviceAccount 创建的任何 pod 和默认使用该 serviceAccount 的 pod 将会将其的 imagePullSecret 字段设置为服务帐户的 imagePullSecret 字段。有关该过程的详细说明,请参阅 [将 ImagePullSecrets 添加到服务帐户](/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account)。 +您可以手动创建 `imagePullSecret`,并在 ServiceAccount 中引用它。 +使用该 ServiceAccount 创建的任何 Pod 和默认使用该 ServiceAccount 的 +Pod 将会将其的 imagePullSecret 字段设置为服务帐户的 imagePullSecret 值。 +有关该过程的详细说明,请参阅 +[将 ImagePullSecrets 添加到服务帐户](/zh/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account)。 <!-- ### Automatic Mounting of Manually Created Secrets @@ -1008,7 +1234,10 @@ See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data- #### 自动挂载手动创建的 Secret -手动创建的 secret(例如包含用于访问 github 帐户的令牌)可以根据其服务帐户自动附加到 pod。请参阅 [使用 PodPreset 向 Pod 中注入信息](/docs/tasks/run-application/podpreset/) 以获取该进程的详细说明。 +手动创建的 Secret(例如包含用于访问 GitHub 帐户令牌的 Secret)可以 +根据其服务帐户自动附加到 Pod。 +请参阅[使用 PodPreset 向 Pod 中注入信息](/zh/docs/tasks/inject-data-application/podpreset/) +以获取该过程的详细说明。 <!-- ## Details @@ -1016,20 +1245,21 @@ See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data- ### Restrictions Secret volume sources are validated to ensure that the specified object -reference actually points to an object of type `Secret`. Therefore, a secret +reference actually points to an object of type Secret. Therefore, a secret needs to be created before any pods that depend on it. Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. They can only be referenced by pods in that same namespace. --> +## 详细说明 {#details} -## 详细 +### 限制 {#restrictions} -### 限制 +Kubernetes 会验证 Secret 作为卷来源时所给的对象引用确实指向一个类型为 +Secret 的对象。因此,Secret 需要先于任何依赖于它的 Pod 创建。 -验证 secret volume 来源确保指定的对象引用实际上指向一个类型为 Secret 的对象。因此,需要在依赖于它的任何 pod 之前创建一个 secret。 - -Secret API 对象驻留在命名空间中。它们只能由同一命名空间中的 pod 引用。 +Secret API 对象处于某{{< glossary_tooltip text="名字空间" term_id="namespace" >}} +中。它们只能由同一命名空间中的 Pod 引用。 <!-- Individual secrets are limited to 1MiB in size. This is to discourage creation @@ -1043,10 +1273,14 @@ controller. It does not include pods created via the kubelets `--manifest-url` flag, its `--config` flag, or its REST API (these are not common ways to create pods.) --> +每个 Secret 的大小限制为 1MB。这是为了防止创建非常大的 Secret 导致 API 服务器 +和 kubelet 的内存耗尽。然而,创建过多较小的 Secret 也可能耗尽内存。 +更全面得限制 Secret 内存用量的功能还在计划中。 -每个 secret 的大小限制为 1MB。这是为了防止创建非常大的 secret 会耗尽 apiserver 和 kubelet 的内存。然而,创建许多较小的 secret 也可能耗尽内存。更全面得限制 secret 对内存使用的功能还在计划中。 - -Kubelet 仅支持从 API server 获取的 Pod 使用 secret。这包括使用 kubectl 创建的任何 pod,或间接通过 replication controller 创建的 pod。它不包括通过 kubelet `--manifest-url` 标志,其 `--config` 标志或其 REST API 创建的 pod(这些不是创建 pod 的常用方法)。 +kubelet 仅支持从 API 服务器获得的 Pod 使用 Secret。 +这包括使用 `kubectl` 创建的所有 Pod,以及间接通过副本控制器创建的 Pod。 +它不包括通过 kubelet `--manifest-url` 标志,`--config` 标志或其 REST API +创建的 Pod(这些不是创建 Pod 的常用方法)。 <!-- Secrets must be created before they are consumed in pods as environment @@ -1063,16 +1297,24 @@ reason is `InvalidVariableNames` and the message will contain the list of invalid keys that were skipped. The example shows a pod which refers to the default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad. 
--> +以环境变量形式在 Pod 中使用 Secret 之前必须先创建 +Secret,除非该环境变量被标记为可选的。 +Pod 中引用不存在的 Secret 时将无法启动。 -必须先创建 secret,除非将它们标记为可选项,否则必须在将其作为环境变量在 pod 中使用之前创建 secret。对不存在的 secret 的引用将阻止其启动。 +使用 `secretKeyRef` 时,如果引用了指定 Secret 不存在的键,对应的 Pod 也无法启动。 -使用 `secretKeyRef` ,引用指定的 secret 中的不存在的 key ,这会阻止 pod 的启动。 - -对于通过 `envFrom` 填充环境变量的 secret,这些环境变量具有被认为是无效环境变量名称的 key 将跳过这些键。该 pod 将被允许启动。将会有一个事件,其原因是 `InvalidVariableNames`,该消息将包含被跳过的无效键的列表。该示例显示一个 pod,它指的是包含2个无效键,1badkey 和 2alsobad 的默认/mysecret ConfigMap。 +对于通过 `envFrom` 填充环境变量的 Secret,如果 Secret 中包含的键名无法作为 +合法的环境变量名称,对应的键会被跳过,该 Pod 将被允许启动。 +不过这时会产生一个事件,其原因为 `InvalidVariableNames`,其消息中包含被跳过的无效键的列表。 +下面的示例显示一个 Pod,它引用了包含 2 个无效键 1badkey 和 2alsobad。 ```shell kubectl get events ``` + +<!--The output is similar to:--> +输出类似于: + ``` LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. @@ -1090,46 +1332,110 @@ reason it is not started yet. Once the secret is fetched, the kubelet will create and mount a volume containing it. None of the pod's containers will start until all the pod's volumes are mounted. --> -### Secret 与 Pod 生命周期的联系 +### Secret 与 Pod 生命周期的关系 -通过 API 创建 Pod 时,不会检查应用的 secret 是否存在。一旦 Pod 被调度,kubelet 就会尝试获取该 secret 的值。如果获取不到该 secret,或者暂时无法与 API server 建立连接,kubelet 将会定期重试。Kubelet 将会报告关于 pod 的事件,并解释它无法启动的原因。一旦获取到 secret,kubelet 将创建并装载一个包含它的卷。在所有 pod 的卷被挂载之前,都不会启动 pod 的容器。 +通过 API 创建 Pod 时,不会检查引用的 Secret 是否存在。一旦 Pod 被调度,kubelet +就会尝试获取该 Secret 的值。如果获取不到该 Secret,或者暂时无法与 API 服务器建立连接, +kubelet 将会定期重试。kubelet 将会报告关于 Pod 的事件,并解释它无法启动的原因。 +一旦获取到 Secret,kubelet 将创建并挂载一个包含它的卷。在 Pod 的所有卷被挂载之前, +Pod 中的容器不会启动。 <!-- ## Use cases -### Use-Case: Pod with ssh keys +### Use-Case: As container environment variables -Create a kustomization.yaml with SecretGenerator containing some ssh keys: --> - ## 使用案例 -### 使用案例:包含 ssh 密钥的 pod -创建一个包含 ssh key 的 secret: +### 案例:以环境变量的形式使用 Secret + +<!-- Create a secret --> +创建一个 Secret 定义: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + USER_NAME: YWRtaW4= + PASSWORD: MWYyZDFlMmU2N2Rm +``` + +<!-- Create the Secret: --> +生成 Secret 对象: ```shell -kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +kubectl apply -f mysecret.yaml ``` +<!-- +Use `envFrom` to define all of the Secret’s data as container environment variables. The key from the Secret becomes the environment variable name in the Pod. +--> +使用 `envFrom` 将 Secret 的所有数据定义为容器的环境变量。 +Secret 中的键名称为 Pod 中的环境变量名称: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + envFrom: + - secretRef: + name: mysecret + restartPolicy: Never +``` + +<!-- +### Use-Case: Pod with ssh keys + +Create a secret containing some ssh keys: +--> +### 案例:包含 SSH 密钥的 Pod + +创建一个包含 SSH 密钥的 Secret: + +```shell +kubectl create secret generic ssh-key-secret \ + --from-file=ssh-privatekey=/path/to/.ssh/id_rsa \ + --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +``` + +<!-- The output is similar to: --> +输出类似于: + ``` secret "ssh-key-secret" created ``` -{{< caution >}} +<!-- +You can also create a `kustomization.yaml` with a `secretGenerator` field containing ssh keys. 
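A minimal sketch, assuming the key files live under `/path/to/.ssh/`, might
look like this:
-->
下面是一个仅作示意的 `kustomization.yaml` 草稿(假设密钥文件位于
`/path/to/.ssh/` 目录下):

```yaml
secretGenerator:
- name: ssh-key-secret
  files:
  # 形式为 键名=文件路径,路径为假设值
  - ssh-privatekey=/path/to/.ssh/id_rsa
  - ssh-publickey=/path/to/.ssh/id_rsa.pub
```

<!--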
+--> +你也可以创建一个带有包含 SSH 密钥的 `secretGenerator` 字段的 +`kustomization.yaml` 文件。 + <!-- Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and can revoke if they are compromised. --> - -发送自己的 ssh 密钥之前要仔细思考:集群的其他用户可能有权访问该密钥。使用您想要共享 Kubernetes 群集的所有用户可以访问的服务帐户,如果它们遭到入侵,可以撤销。 +{{< caution >}} +发送自己的 SSH 密钥之前要仔细思考:集群的其他用户可能有权访问该密钥。 +你可以使用一个服务帐户,分享给 Kubernetes 集群中合适的用户,这些用户是你要分享的。 +如果服务账号遭到侵犯,可以将其收回。 {{< /caution >}} - <!-- Now we can create a pod which references the secret with the ssh key and consumes it in a volume: --> -现在我们可以创建一个使用 ssh 密钥引用 secret 的 pod,并在一个卷中使用它: +现在我们可以创建一个 Pod,令其引用包含 SSH 密钥的 Secret,并通过存储卷来使用它: ```yaml apiVersion: v1 @@ -1155,10 +1461,9 @@ spec: <!-- When the container's command runs, the pieces of the key will be available in: --> +容器中的命令运行时,密钥的片段可以在以下目录找到: -当容器中的命令运行时,密钥的片段将可在以下目录: - -```shell +``` /etc/secret-volume/ssh-publickey /etc/secret-volume/ssh-privatekey ``` @@ -1166,7 +1471,7 @@ When the container's command runs, the pieces of the key will be available in: <!-- The container is then free to use the secret data to establish an ssh connection. --> -然后容器可以自由使用密钥数据建立一个 ssh 连接。 +然后容器可以自由使用 Secret 数据建立一个 SSH 连接。 <!-- ### Use-Case: Pods with prod / test credentials @@ -1175,43 +1480,61 @@ This example illustrates a pod which consumes a secret containing prod credentials and another pod which consumes a secret with test environment credentials. -Make the kustomization.yaml with SecretGenerator +You can create a `kustomization.yaml` with a `secretGenerator` field or run +`kubectl create secret`. + --> -### 使用案例:包含 prod/test 凭据的 pod +### 案例:包含生产/测试凭据的 Pod -下面的例子说明一个 pod 消费一个包含 prod 凭据的 secret,另一个 pod 使用测试环境凭据消费 secret。 +下面的例子展示的是两个 Pod。 +一个 Pod 使用包含生产环境凭据的 Secret,另一个 Pod 使用包含测试环境凭据的 Secret。 -通过秘钥生成器制作 kustomization.yaml +你可以创建一个带有 `secretGenerator` 字段的 `kustomization.yaml` +文件,或者执行 `kubectl create secret`: ```shell -kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 +kubectl create secret generic prod-db-secret \ + --from-literal=username=produser \ + --from-literal=password=Y4nys7f11 ``` + +<!--The output is similar to:--> +输出类似于: + ``` secret "prod-db-secret" created ``` ```shell -kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests +kubectl create secret generic test-db-secret \ + --from-literal=username=testuser \ + --from-literal=password=iluvtests ``` + +<!--The output is similar to:--> +输出类似于: + ``` secret "test-db-secret" created ``` -{{< note >}} -<!-- -Special characters such as `$`, `\*`, and `!` require escaping. -If the password you are using has special characters, you need to escape them using the `\\` character. For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: ---> - -特殊字符(例如 `$`, `\*`, 和 `!`)需要转义。 如果您使用的密码具有特殊字符,则需要使用 `\\` 字符对其进行转义。 例如,如果您的实际密码是 `S!B\*d$zDsb`,则应通过以下方式执行命令: - -```shell -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\*d\\$zDsb -``` <!-- +Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). 
+For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way: You do not need to escape special characters in passwords from files (`--from-file`). --> -您无需从文件中转义密码中的特殊字符( `--from-file` )。 +{{< note >}} +特殊字符(例如 `$`、`\`、`*`、`=` 和 `!`)会被你的 +[Shell](https://en.wikipedia.org/wiki/Shell_(computing))解释,因此需要转义。 +在大多数 Shell 中,对密码进行转义的最简单方式是用单引号(`'`)将其括起来。 +例如,如果您的实际密码是 `S!B\*d$zDsb`,则应通过以下方式执行命令: + +```shell +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' +``` + +您无需对文件中的密码(`--from-file`)中的特殊字符进行转义。 {{< /note >}} <!-- @@ -1266,8 +1589,7 @@ EOF <!-- Add the pods to the same kustomization.yaml --> - -加入 Pod 到同样的 kustomization.yaml 文件 +将 Pod 添加到同一个 kustomization.yaml 文件 ```shell $ cat <<EOF >> kustomization.yaml @@ -1279,7 +1601,7 @@ EOF <!-- Apply all those objects on the Apiserver by --> -部署所有的对象通过下面的命令 +通过下面的命令应用所有对象 ```shell kubectl apply -k . @@ -1288,9 +1610,9 @@ kubectl apply -k . <!-- Both containers will have the following files present on their filesystems with the values for each container's environment: --> -这两个容器将在其文件系统上显示以下文件,其中包含每个容器环境的值: +两个容器都会在其文件系统上存在以下文件,其中包含容器对应的环境的值: -```shell +``` /etc/secret-volume/username /etc/secret-volume/password ``` @@ -1300,13 +1622,21 @@ Note how the specs for the two pods differ only in one field; this facilitates creating pods with different capabilities from a common pod config template. You could further simplify the base pod specification by using two Service Accounts: -one called, say, `prod-user` with the `prod-db-secret`, and one called, say, -`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example: + +1. `prod-user` with the `prod-db-secret` +1. `test-user` with the `test-db-secret` + +The Pod specification is shortened to: --> +请注意,两个 Pod 的规约配置中仅有一个字段不同;这有助于使用共同的 Pod 配置模板创建 +具有不同能力的 Pod。 -请注意,两个 pod 的 spec 配置中仅有一个字段有所不同;这有助于使用普通的 pod 配置模板创建具有不同功能的 pod。 +您可以使用两个服务账号进一步简化基本的 Pod 规约: -您可以使用两个 service account 进一步简化基本 pod spec:一个名为 `prod-user` 拥有 `prod-db-secret` ,另一个称为 `test-user` 拥有 `test-db-secret` 。然后,pod spec 可以缩短为,例如: +1. 名为 `prod-user` 的服务账号拥有 `prod-db-secret` +1. 名为 `test-user` 的服务账号拥有 `test-db-secret` + +然后,Pod 规约可以缩短为: ```yaml apiVersion: v1 @@ -1325,12 +1655,15 @@ spec: <!-- ### Use-case: Dotfiles in secret volume -In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply -make that key begin with a dot. For example, when the following secret is mounted into a volume: ---> -### 使用案例:Secret 卷中以点号开头的文件 +You can make your data "hidden" by defining a key that begins with a dot. +This key represents a dotfile or "hidden" file. For example, when the following secret +is mounted into a volume, `secret-volume`: -为了将数据“隐藏”起来(即文件名以点号开头的文件),简单地说让该键以一个点开始。例如,当如下 secret 被挂载到卷中: +--> +### 案例:Secret 卷中以句点号开头的文件 + +你可以通过定义以句点开头的键名,将数据“隐藏”起来。 +例如,当如下 Secret 被挂载到 `secret-volume` 卷中: ```yaml apiVersion: v1 @@ -1364,20 +1697,20 @@ spec: <!-- -The `secret-volume` will contain a single file, called `.secret-file`, and +The volume will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`. 
--> -`Secret-volume` 将包含一个单独的文件,叫做 `.secret-file`,`dotfile-test-container` 的 `/etc/secret-volume/.secret-file` 路径下将有该文件。 - -{{< note >}} +卷中将包含唯一的叫做 `.secret-file` 的文件。 +容器 `dotfile-test-container` 中,该文件处于 `/etc/secret-volume/.secret-file` 路径下。 <!-- Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents. --> - -以点号开头的文件在 `ls -l` 的输出中被隐藏起来了;列出目录内容时,必须使用 `ls -la` 才能查看它们。 +{{< note >}} +以点号开头的文件在 `ls -l` 的输出中会被隐藏起来; +列出目录内容时,必须使用 `ls -la` 才能看到它们。 {{< /note >}} <!-- @@ -1388,10 +1721,10 @@ logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker. --> - -### 使用案例:Secret 仅对 pod 中的一个容器可见 - -考虑以下一个需要处理 HTTP 请求的程序,执行一些复杂的业务逻辑,然后使用 HMAC 签署一些消息。因为它具有复杂的应用程序逻辑,所以在服务器中可能会出现一个未被注意的远程文件读取漏洞,这可能会将私钥暴露给攻击者。 +### 案例:Secret 仅对 Pod 中的一个容器可见 {#secret-visible-to-only-one-container} +考虑一个需要处理 HTTP 请求、执行一些复杂的业务逻辑,然后使用 HMAC 签署一些消息的应用。 +因为应用程序逻辑复杂,服务器中可能会存在一个未被注意的远程文件读取漏洞, +可能会将私钥暴露给攻击者。 <!-- This could be divided into two processes in two containers: a frontend container @@ -1403,10 +1736,12 @@ With this partitioned approach, an attacker now has to trick the application server into doing something rather arbitrary, which may be harder than getting it to read a file. --> +解决的办法可以是将应用分为两个进程,分别运行在两个容器中: +前端容器,用于处理用户交互和业务逻辑,但无法看到私钥; +签名容器,可以看到私钥,响应来自前端(例如通过本地主机网络)的简单签名请求。 -这可以在两个容器中分为两个进程:前端容器,用于处理用户交互和业务逻辑,但无法看到私钥;以及可以看到私钥的签名者容器,并且响应来自前端的简单签名请求(例如通过本地主机网络)。 - -使用这种分割方法,攻击者现在必须欺骗应用程序服务器才能进行任意的操作,这可能比使其读取文件更难。 +使用这种分割方法,攻击者现在必须欺骗应用程序服务器才能进行任意的操作, +这可能比使其读取文件更难。 <!-- TODO: explain how to do this while still using automation. --> @@ -1420,12 +1755,13 @@ limited using [authorization policies]( /docs/reference/access-authn-authz/authorization/) such as [RBAC]( /docs/reference/access-authn-authz/rbac/). --> - -## 最佳实践 +## 最佳实践 {#best-practices} ### 客户端使用 Secret API -当部署与 secret API 交互的应用程序时,应使用 [授权策略](/docs/reference/access-authn-authz/authorization/), 例如 [RBAC](/docs/reference/access-authn-authz/rbac/) 来限制访问。 +当部署与 Secret API 交互的应用程序时,应使用 +[鉴权策略](/zh/docs/reference/access-authn-authz/authorization/), +例如 [RBAC](/zh/docs/reference/access-authn-authz/rbac/),来限制访问。 <!-- Secrets often hold values that span a spectrum of importance, many of which can @@ -1441,9 +1777,15 @@ the clients to inspect the values of all secrets that are in that namespace. The privileged, system-level components. --> -Secret 中的值对于不同的环境来说重要性可能不同,例如对于 Kubernetes 集群内部(例如 service account 令牌)和集群外部来说就不一样。即使一个应用程序可以理解其期望的与之交互的 secret 有多大的能力,但是同一命名空间中的其他应用程序却可能不这样认为。 +Secret 中的值对于不同的环境来说重要性可能不同。 +很多 Secret 都可能导致 Kubernetes 集群内部的权限越界(例如服务账号令牌) +甚至逃逸到集群外部。 +即使某一个应用程序可以就所交互的 Secret 的能力作出正确抉择,但是同一命名空间中 +的其他应用程序却可能不这样做。 -由于这些原因,在命名空间中 `watch` 和 `list` secret 的请求是非常强大的功能,应该避免这样的行为,因为列出 secret 可以让客户端检查所有 secret 是否在该命名空间中。在群集中 `watch` 和 `list` 所有 secret 的能力应该只保留给最有特权的系统级组件。 +由于这些原因,在命名空间中 `watch` 和 `list` Secret 的请求是非常强大的能力, +是应该避免的行为。列出 Secret 的操作可以让客户端检查该命名空间中存在的所有 Secret。 +在群集中 `watch` 和 `list` 所有 Secret 的能力应该只保留给特权最高的系统级组件。 <!-- Applications that need to access the secrets API should perform `get` requests on @@ -1459,17 +1801,18 @@ https://github.com/kubernetes/community/blob/master/contributors/design-proposal to let clients `watch` individual resources has also been proposed, and will likely be available in future releases of Kubernetes. 
--> +需要访问 Secret API 的应用程序应该针对所需要的 Secret 执行 `get` 请求。 +这样,管理员就能限制对所有 Secret 的访问,同时为应用所需要的 +[实例设置访问允许清单](/zh/docs/reference/access-authn-authz/rbac/#referring-to-resources) 。 -需要访问 secrets API 的应用程序应该根据他们需要的 secret 执行 `get` 请求。这允许管理员限制对所有 secret 的访问, -同时设置 [白名单访问](/docs/reference/access-authn-authz/rbac/#referring-to-resources) 应用程序需要的各个实例。 - -为了提高循环获取的性能,客户端可以设计引用 secret 的资源,然后 `watch` 资源,在引用更改时重新请求 secret。 -此外,还提出了一种 [”批量监控“ API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bulk_watch.md) 来让客户端 `watch` 每个资源,该功能可能会在将来的 Kubernetes 版本中提供。 +为了获得高于轮询操作的性能,客户端设计资源时,可以引用 Secret,然后对资源执行 `watch` +操作,在引用更改时重新检索 Secret。 +此外,社区还存在一种 [“批量监控” API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/bulk_watch.md) +的提案,允许客户端 `watch` 独立的资源,该功能可能会在将来的 Kubernetes 版本中提供。 <!-- ## Security Properties - ### Protections Because `secret` objects can be created independently of the `pods` that use @@ -1478,12 +1821,13 @@ creating, viewing, and editing pods. The system can also take additional precautions with `secret` objects, such as avoiding writing them to disk where possible. --> +## 安全属性 {#security-properties} -## 安全属性 +### 保护 {#protections} -### 保护 - -因为 `secret` 对象可以独立于使用它们的 `pod` 而创建,所以在创建、查看和编辑 pod 的流程中 secret 被暴露的风险较小。系统还可以对 `secret` 对象采取额外的预防措施,例如避免将其写入到磁盘中可能的位置。 +因为 Secret 对象可以独立于使用它们的 Pod 而创建,所以在创建、查看和编辑 Pod 的流程中 +Secret 被暴露的风险较小。系统还可以对 Secret 对象采取额外的预防性保护措施, +例如,在可能的情况下避免将其写到磁盘。 <!-- A secret is only sent to a node if a pod on that node requires it. @@ -1495,10 +1839,13 @@ There may be secrets for several pods on the same node. However, only the secrets that a pod requests are potentially visible within its containers. Therefore, one Pod does not have access to the secrets of another Pod. --> +只有当某节点上的 Pod 需要用到某 Secret 时,该 Secret 才会被发送到该节点上。 +Secret 不会被写入磁盘,而是被 kubelet 存储在 tmpfs 中。 +一旦依赖于它的 Pod 被删除,Secret 数据的本地副本就被删除。 -只有当节点上的 pod 需要用到该 secret 时,该 secret 才会被发送到该节点上。它不会被写入磁盘,而是存储在 tmpfs 中。一旦依赖于它的 pod 被删除,它就被删除。 - -同一节点上的很多个 pod 可能拥有多个 secret。但是,只有 pod 请求的 secret 在其容器中才是可见的。因此,一个 pod 不能访问另一个 Pod 的 secret。 +同一节点上的很多个 Pod 可能拥有多个 Secret。 +但是,只有 Pod 所请求的 Secret 在其容器中才是可见的。 +因此,一个 Pod 不能访问另一个 Pod 的 Secret。 <!-- There may be several containers in a pod. However, each container in a pod has @@ -1506,14 +1853,17 @@ to request the secret volume in its `volumeMounts` for it to be visible within the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod). -On most Kubernetes-project-maintained distributions, communication between user +On most Kubernetes distributions, communication between users to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. Secrets are protected when transmitted over these channels. 
--> -Pod 中有多个容器。但是,pod 中的每个容器必须请求其挂载卷中的 secret 卷才能在容器内可见。 -这可以用于 [在 Pod 级别构建安全分区](#使用案例secret-仅对-pod-中的一个容器可见)。 +同一个 Pod 中可能有多个容器。但是,Pod 中的每个容器必须通过 `volumeeMounts` +请求挂载 Secret 卷才能使卷中的 Secret 对容器可见。 +这一实现可以用于在 Pod 级别[构建安全分区](#secret-visible-to-only-one-container)。 -在大多数 Kubernetes 项目维护的发行版中,用户与 API server 之间的通信以及从 API server 到 kubelet 的通信都受到 SSL/TLS 的保护。通过这些通道传输时,secret 受到保护。 +在大多数 Kubernetes 发行版中,用户与 API 服务器之间的通信以及 +从 API 服务器到 kubelet 的通信都受到 SSL/TLS 的保护。 +通过这些通道传输时,Secret 受到保护。 {{< feature-state for_k8s_version="v1.13" state="beta" >}} @@ -1521,7 +1871,8 @@ Pod 中有多个容器。但是,pod 中的每个容器必须请求其挂载卷 You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}. --> -你可以为 secret 数据开启[静态加密](/docs/tasks/administer-cluster/encrypt-data/),这样秘密信息就不会以明文形式存储到{{< glossary_tooltip term_id="etcd" >}}。 +你可以为 Secret 数据开启[静态加密](/zh/docs/tasks/administer-cluster/encrypt-data/), +这样 Secret 数据就不会以明文形式存储到{{< glossary_tooltip term_id="etcd" >}} 中。 <!-- ### Risks @@ -1547,21 +1898,17 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa nodes that actually require them, to restrict the impact of a root exploit on a single node. --> - - ### 风险 -- API server 的 secret 数据以纯文本的方式存储在 etcd 中,因此: - - 管理员应该为集群数据开启静态加密(需求 v1.13 或者更新)。 - - 管理员应该限制 admin 用户访问 etcd; - - API server 中的 secret 数据位于 etcd 使用的磁盘上;管理员可能希望在不再使用时擦除/粉碎 etcd 使用的磁盘 +- API 服务器上的 Secret 数据以纯文本的方式存储在 etcd 中,因此: + - 管理员应该为集群数据开启静态加密(要求 v1.13 或者更高版本)。 + - 管理员应该限制只有 admin 用户能访问 etcd; + - API 服务器中的 Secret 数据位于 etcd 使用的磁盘上;管理员可能希望在不再使用时擦除/粉碎 etcd 使用的磁盘 - 如果 etcd 运行在集群内,管理员应该确保 etcd 之间的通信使用 SSL/TLS 进行加密。 -- 如果您将 secret 数据编码为 base64 的清单(JSON 或 YAML)文件,共享该文件或将其检入代码库,这样的话该密码将会被泄露。 Base64 编码不是一种加密方式,一样也是纯文本。 -- 应用程序在从卷中读取 secret 后仍然需要保护 secret 的值,例如不会意外记录或发送给不信任方。 -- 可以创建和使用 secret 的 pod 的用户也可以看到该 secret 的值。即使 API server 策略不允许用户读取 secret 对象,用户也可以运行暴露 secret 的 pod。 -- 目前,任何节点的 root 用户都可以通过模拟 kubelet 来读取 API server 中的任何 secret。只有向实际需要它们的节点发送 secret 才能限制单个节点的根漏洞的影响,该功能还在计划中。 - -## {{% heading "whatsnext" %}} - - +- 如果您将 Secret 数据编码为 base64 的清单(JSON 或 YAML)文件,共享该文件或将其检入代码库,该密码将会被泄露。 Base64 编码不是一种加密方式,应该视同纯文本。 +- 应用程序在从卷中读取 Secret 后仍然需要保护 Secret 的值,例如不会意外将其写入日志或发送给不信任方。 +- 可以创建使用 Secret 的 Pod 的用户也可以看到该 Secret 的值。即使 API 服务器策略不允许用户读取 Secret 对象,用户也可以运行 Pod 导致 Secret 暴露。 +- 目前,任何节点的 root 用户都可以通过模拟 kubelet 来读取 API 服务器中的任何 Secret。 + 仅向实际需要 Secret 的节点发送 Secret 数据才能限制节点的 root 账号漏洞的影响, + 该功能还在计划中。 diff --git a/content/zh/docs/concepts/containers/images.md b/content/zh/docs/concepts/containers/images.md index cde48800a3..8622ff532e 100644 --- a/content/zh/docs/concepts/containers/images.md +++ b/content/zh/docs/concepts/containers/images.md @@ -33,7 +33,7 @@ The `image` property of a container supports the same syntax as the `docker` com <!-- ## Updating Images --> -## 升级镜像 +## 更新镜像 {#updating-images} <!-- The default pull policy is `IfNotPresent` which causes the Kubelet to skip diff --git a/content/zh/docs/concepts/example-concept-template.md b/content/zh/docs/concepts/example-concept-template.md deleted file mode 100644 index c88d280c9c..0000000000 --- a/content/zh/docs/concepts/example-concept-template.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: 概念模板示例 -content_type: concept -toc_hide: true ---- - -<!-- ---- -title: Example Concept Template -content_type: concept -toc_hide: true ---- ---> - -<!-- overview --> - -<!-- -Be sure to also [create an entry in the table of 
contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document. ---> - -{{< note >}} -确保您的新文档也可以[在目录中创建一个条目](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents)。 -{{< /note >}} - -<!-- -This page explains ... ---> - -本页解释了 ... - - - -<!-- body --> - -<!-- -## Understanding ... ---> -## 了解 ... - -<!-- -Kubernetes provides ... ---> -Kubernetes 提供 ... - -<!-- -## Using ... ---> -## 使用 ... - -<!-- -To use ... ---> -使用 ... - - - -## {{% heading "whatsnext" %}} - - - -<!-- -**[Optional Section]** ---> -**[可选章节]** - - -<!-- -* Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/). -* See [Using Page Templates - Concept template](/docs/home/contribute/page-templates/#concept_template) for how to use this template. ---> -* 了解有关[撰写新主题](/docs/home/contribute/write-new-topic/)的更多信息。 -* 有关如何使用此模板的信息,请参阅[使用页面模板 - 概念模板](/docs/home/contribute/page-templates/#concept_template)。 - - - - diff --git a/content/zh/docs/concepts/extend-kubernetes/_index.md b/content/zh/docs/concepts/extend-kubernetes/_index.md index c52f4624e6..3d091a51cd 100644 --- a/content/zh/docs/concepts/extend-kubernetes/_index.md +++ b/content/zh/docs/concepts/extend-kubernetes/_index.md @@ -1,4 +1,454 @@ --- title: 扩展 Kubernetes -weight: 40 +weight: 110 +description: 改变你的 Kubernetes 集群的行为的若干方法。 +content_type: concept +no_list: true --- +<!-- +title: Extending Kubernetes +weight: 110 +description: Different ways to change the behavior of your Kubernetes cluster. +reviewers: +- erictune +- lavalamp +- cheftako +- chenopis +content_type: concept +no_list: true +--> + +<!-- overview --> + +<!-- +Kubernetes is highly configurable and extensible. As a result, +there is rarely a need to fork or submit patches to the Kubernetes +project code. + +This guide describes the options for customizing a Kubernetes +cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to +understand how to adapt their Kubernetes cluster to the needs of +their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it +useful as an introduction to what extension points and patterns +exist, and their trade-offs and limitations. +--> +Kubernetes 是高度可配置且可扩展的。因此,大多数情况下,你不需要 +派生自己的 Kubernetes 副本或者向项目代码提交补丁。 + +本指南描述定制 Kubernetes 的可选方式。主要针对的读者是希望了解如何针对自身工作环境 +需要来调整 Kubernetes 的{{< glossary_tooltip text="集群管理者" term_id="cluster-operator" >}}。 +对于那些充当{{< glossary_tooltip text="平台开发人员" term_id="platform-developer" >}} +的开发人员或 Kubernetes 项目的{{< glossary_tooltip text="贡献者" term_id="contributor" >}} +而言,他们也会在本指南中找到有用的介绍信息,了解系统中存在哪些扩展点和扩展模式, +以及它们所附带的各种权衡和约束等等。 + +<!-- body --> + +<!-- +## Overview + +Customization approaches can be broadly divided into *configuration*, which only involves changing flags, local configuration files, or API resources; and *extensions*, which involve running additional programs or services. This document is primarily about extensions. 
+--> +## 概述 {#overview} + +定制化的方法主要可分为 *配置(Configuration)* 和 *扩展(Extensions)* 两种。 +前者主要涉及改变参数标志、本地配置文件或者 API 资源; +后者则需要额外运行一些程序或服务。 +本文主要关注扩展。 + +<!-- +## Configuration + +*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary: + +* [kubelet](/docs/admin/kubelet/) +* [kube-apiserver](/docs/admin/kube-apiserver/) +* [kube-controller-manager](/docs/admin/kube-controller-manager/) +* [kube-scheduler](/docs/admin/kube-scheduler/). +--> + +## Configuration + +*配置文件*和*参数标志*的说明位于在线文档的参考章节,按可执行文件组织: + +* [kubelet](/docs/admin/kubelet/) +* [kube-apiserver](/docs/admin/kube-apiserver/) +* [kube-controller-manager](/docs/admin/kube-controller-manager/) +* [kube-scheduler](/docs/admin/kube-scheduler/). + +<!-- +Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options. +--> +在托管的 Kubernetes 服务中或者受控安装的发行版本中,参数标志和配置文件不总是可以 +修改的。即使它们是可修改的,通常其修改权限也仅限于集群管理员。 +此外,这些内容在将来的 Kubernetes 版本中很可能发生变化,设置新参数或配置文件可能 +需要重启进程。 +有鉴于此,通常应该在没有其他替代方案时才应考虑更改参数标志和配置文件。 + +<!-- +*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable. +--> +*内置的策略 API*,例如[ResourceQuota](/zh/docs/concepts/policy/resource-quotas/)、 +[PodSecurityPolicies](/zh/docs/concepts/policy/pod-security-policy/)、 +[NetworkPolicy](/zh/docs/concepts/services-networking/network-policies/) +和基于角色的访问控制([RBAC](/zh/docs/reference/access-authn-authz/rbac/))等等 +都是内置的 Kubernetes API。 +API 通常用于托管的 Kubernetes 服务和受控的 Kubernetes 安装环境中。 +这些 API 是声明式的,与 Pod 这类其他 Kubernetes 资源遵从相同的约定,所以 +新的集群配置是可复用的,并且可以当作应用程序来管理。 +此外,对于稳定版本的 API 而言,它们与其他 Kubernetes API 一样,采纳的是 +一种[预定义的支持策略](/zh/docs/reference/deprecation-policy/)。 +出于以上原因,在条件允许的情况下,基于 API 的方案应该优先于*配置文件*和*参数标志*。 + +<!-- +## Extensions + +Extensions are software components that extend and deeply integrate with Kubernetes. +They adapt it to support new types and new kinds of hardware. + +Most cluster administrators will use a hosted or distribution +instance of Kubernetes. As a result, most Kubernetes users will not need to +install extensions and fewer will need to author new ones. +--> +## 扩展 {#extensions} + +扩展(Extensions)是一些扩充 Kubernetes 能力并与之深度集成的软件组件。 +它们调整 Kubernetes 的工作方式使之支持新的类型和新的硬件种类。 + +大多数集群管理员会使用一种托管的 Kubernetes 服务或者其某种发行版本。 +因此,大多数 Kubernetes 用户不需要安装扩展, +至于需要自己编写新的扩展的情况就更少了。 + +<!-- +## Extension Patterns + +Kubernetes is designed to be automated by writing client programs. 
Any +program that reads and/or writes to the Kubernetes API can provide useful +automation. *Automation* can run on the cluster or off it. By following +the guidance in this doc you can write highly available and robust automation. +Automation generally works with any Kubernetes cluster, including hosted +clusters and managed installations. +--> +## 扩展模式 {#extension-patterns} + +Kubernetes 从设计上即支持通过编写客户端程序来将其操作自动化。 +任何能够对 Kubernetes API 发出读写指令的程序都可以提供有用的自动化能力。 +*自动化组件*可以运行在集群上,也可以运行在集群之外。 +通过遵从本文中的指南,你可以编写高度可用的、运行稳定的自动化组件。 +自动化组件通常可以用于所有 Kubernetes 集群,包括托管的集群和受控的安装环境。 + +<!-- +There is a specific pattern for writing client programs that work well with +Kubernetes called the *Controller* pattern. Controllers typically read an +object's `.spec`, possibly do things, and then update the object's `.status`. + +A controller is a client of Kubernetes. When Kubernetes is the client and +calls out to a remote service, it is called a *Webhook*. The remote service +is called a *Webhook Backend*. Like Controllers, Webhooks do add a point of +failure. +--> +编写客户端程序有一种特殊的*Controller(控制器)*模式,能够与 Kubernetes 很好地 +协同工作。控制器通常会读取某个对象的 `.spec`,或许还会执行一些操作,之后更新 +对象的 `.status`。 + +控制器是 Kubernetes 的客户端。当 Kubernetes 充当客户端,调用某远程服务时,对应 +的远程组件称作*Webhook*。 远程服务称作*Webhook 后端*。 +与控制器模式相似,Webhook 也会在整个架构中引入新的失效点(Point of Failure)。 + +<!-- +In the webhook model, Kubernetes makes a network request to a remote service. +In the *Binary Plugin* model, Kubernetes executes a binary (program). +Binary plugins are used by the kubelet (e.g. [Flex Volume +Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) +and [Network +Plugins](/docs/concepts/cluster-administration/network-plugins/)) +and by kubectl. + +Below is a diagram showing how the extension points interact with the +Kubernetes control plane. +--> +在 Webhook 模式中,Kubernetes 向远程服务发起网络请求。 +在*可执行文件插件(Binary Plugin)*模式中,Kubernetes 执行某个可执行文件(程序)。 +可执行文件插件在 kubelet (例如, +[FlexVolume 插件](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) +和[网络插件](/zh/docs/concepts/cluster-administration/network-plugins/)) +和 kubectl 中使用。 + +下面的示意图中展示了这些扩展点如何与 Kubernetes 控制面交互。 + +<img src="https://docs.google.com/drawings/d/e/2PACX-1vQBRWyXLVUlQPlp7BvxvV9S1mxyXSM6rAc_cbLANvKlu6kCCf-kGTporTMIeG5GZtUdxXz1xowN7RmL/pub?w=960&h=720"> + +<!-- image source drawing https://docs.google.com/drawings/d/1muJ7Oxuj_7Gtv7HV9-2zJbOnkQJnjxq-v1ym_kZfB-4/edit?ts=5a01e054 --> + +<!-- +## Extension Points + +This diagram shows the extension points in a Kubernetes system. +--> +## 扩展点 {#extension-points} + +此示意图显示的是 Kubernetes 系统中的扩展点。 + +<img src="https://docs.google.com/drawings/d/e/2PACX-1vSH5ZWUO2jH9f34YHenhnCd14baEb4vT-pzfxeFC7NzdNqRDgdz4DDAVqArtH4onOGqh0bhwMX0zGBb/pub?w=425&h=809"> + +<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view --> + +<!-- +1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies. +2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section. +3. 
The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions. +4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section. +5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources. +6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking. +7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins). + +If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions. +--> +1. 用户通常使用 `kubectl` 与 Kubernetes API 交互。 + [kubectl 插件](/zh/docs/tasks/extend-kubectl/kubectl-plugins/)能够扩展 kubectl 程序的行为。 + 这些插件只会影响到每个用户的本地环境,因此无法用来强制实施整个站点范围的策略。 + +2. API 服务器处理所有请求。API 服务器中的几种扩展点能够使用户对请求执行身份认证、 + 基于其内容阻止请求、编辑请求内容、处理删除操作等等。 + 这些扩展点在 [API 访问扩展](/zh/docs/concepts/overview/extending#api-access-extensions) + 节详述。 + +3. API 服务器向外提供不同类型的*资源(resources)*。 + *内置的资源类型*,如 `pods`,是由 Kubernetes 项目所定义的,无法改变。 + 你也可以添加自己定义的或者其他项目所定义的称作*自定义资源(Custom Resources)* + 的资源,正如[自定义资源](/zh/docs/concepts/overview/extending#user-defined-types)节所描述的那样。 + 自定义资源通常与 API 访问扩展点结合使用。 + +4. Kubernetes 调度器负责决定 Pod 要放置到哪些节点上执行。 + 有几种方式来扩展调度行为。这些方法将在 + [调度器扩展](/zh/docs/concepts/overview/extending#scheduler-extensions)节中展开。 + +5. Kubernetes 中的很多行为都是通过称为控制器(Controllers)的程序来实现的,这些程序也都是 API 服务器 + 的客户端。控制器常常与自定义资源结合使用。 + +6. 组件 kubelet 运行在各个节点上,帮助 Pod 展现为虚拟的服务器并在集群网络中拥有自己的 IP。 + [网络插件](/zh/docs/concepts/overview/extending#network-plugins)使得 Kubernetes 能够采用 + 不同实现技术来连接 Pod 网络。 + +7. 组件 kubelet 也会为容器增加或解除存储卷的挂载。 + 通过[存储插件](/zh/docs/concepts/overview/extending#storage-plugins),可以支持新的存储类型。 + +如果你无法确定从何处入手,下面的流程图可能对你有些帮助。 +注意,某些方案可能需要同时采用几种类型的扩展。 + +<img src="https://docs.google.com/drawings/d/e/2PACX-1vRWXNNIVWFDqzDY0CsKZJY3AR8sDeFDXItdc5awYxVH8s0OLherMlEPVUpxPIB1CSUu7GPk7B2fEnzM/pub?w=1440&h=1080"> + +<!-- image source drawing: https://docs.google.com/drawings/d/1sdviU6lDz4BpnzJNHfNpQrqI9F19QZ07KnhnxVrp2yg/edit --> + +<!-- +## API Extensions +### User-Defined Types + +Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as `kubectl`. + +Do not use a Custom Resource as data storage for application, user, or monitoring data. + +For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/). 
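As a minimal sketch of what registering a custom resource can look like (the API group, kind, and schema below are illustrative assumptions, not defined by this page), a CustomResourceDefinition adds a new resource type to the apiserver:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
```

Once such a definition is established, objects of kind `CronTab` can be created, listed, and watched with `kubectl` like any built-in resource.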
+--> +## API 扩展 {#api-extensions} + +### 用户定义的类型 {#user-defined-types} + +如果你想要定义新的控制器、应用配置对象或者其他声明式 API,并且使用 Kubernetes +工具(如 `kubectl`)来管理它们,可以考虑向 Kubernetes 添加自定义资源。 + +不要使用自定义资源来充当应用、用户或者监控数据的数据存储。 + +关于自定义资源的更多信息,可参见[自定义资源概念指南](/zh/docs/concepts/api-extension/custom-resources/)。 + +<!-- +### Combining New APIs with Automation + +The combination of a custom resource API and a control loop is called the [Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage specific, usually stateful, applications. These custom APIs and control loops can also be used to control other resources, such as storage or policies. +--> +### 结合使用新 API 与自动化组件 {#combinding-new-apis-with-automation} + +自定义资源 API 与控制回路的组合称作 +[Operator 模式](/zh/docs/concepts/extend-kubernetes/operator/)。 +Operator 模式用来管理特定的、通常是有状态的应用。 +这些自定义 API 和控制回路也可用来控制其他资源,如存储或策略。 + +<!-- +### Changing Built-in Resources + +When you extend the Kubernetes API by adding custom resources, the added resources always fall into a new API Groups. You cannot replace or change existing API groups. +Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API Access Extensions do. +--> +### 更改内置资源 {#changing-built-in-resources} + +当你通过添加自定义资源来扩展 Kubernetes 时,所添加的资源通常会被放在一个新的 +API 组中。你不可以替换或更改现有的 API 组。 +添加新的 API 不会直接让你影响现有 API (如 Pods)的行为,不过 API +访问扩展能够实现这点。 + +<!-- +### API Access Extensions + +When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) for more on this flow. + +Each of these steps offers extension points. + +Kubernetes has several built-in authentication methods that it supports. It can also sit behind an authenticating proxy, and it can send a token from an Authorization header to a remote service for verification (a webhook). All of these methods are covered in the [Authentication documentation](/docs/reference/access-authn-authz/authentication/). +--> +### API 访问扩展 {#api-access-extensions} + +当请求到达 Kubernetes API 服务器时,首先要经过身份认证,之后是鉴权操作, +再之后要经过若干类型的准入控制器的检查。 +参见[控制 Kubernetes API 访问](/zh/docs/reference/access-authn-authz/controlling-access/) +以了解此流程的细节。 + +这些步骤中都存在扩展点。 + +Kubernetes 提供若干内置的身份认证方法。 +它也可以运行在某中身份认证代理的后面,并且可以将来自鉴权头部的令牌发送到 +某个远程服务(Webhook)来执行验证操作。 +所有这些方法都在[身份认证文档](/zh/docs/reference/access-authn-authz/authentication/) +中详细论述。 + +<!-- +### Authentication + +[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates in all requests to a username for the client making the request. + +Kubernetes provides several built-in authentication methods, and an [Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) method if those don't meet your needs. +--> +### 身份认证 {#authentication} + +[身份认证](/zh/docs/reference/access-authn-authz/authentication/)负责将所有请求中 +的头部或证书映射到发出该请求的客户端的用户名。 + +Kubernetes 提供若干种内置的认证方法,以及 +[认证 Webhook](/zh/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) +方法以备内置方法无法满足你的要求。 + +<!-- +### Authorization + +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources - it doesn't discriminate based on arbitrary object fields. 
If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +--> +### 鉴权 {#authorization} + +[鉴权](/docs/reference/access-authn-authz/webhook/)操作负责确定特定的用户 +是否可以读、写 API 资源或对其执行其他操作。 +此操作仅在整个资源集合的层面进行。 +换言之,它不会基于对象的特定字段作出不同的判决。 +如果内置的鉴权选项无法满足你的需要,你可以使用 +[鉴权 Webhook](/zh/docs/reference/access-authn-authz/webhook/)来调用用户提供 +的代码,执行定制的鉴权操作。 + +<!-- +### Dynamic Admission Control + +After a request is authorized, if it is a write operation, it also goes through [Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps. In addition to the built-in steps, there are several extensions: + +* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) restricts what images can be run in containers. +* To make arbitrary admission control decisions, a general [Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) can be used. Admission Webhooks can reject creations or updates. +--> +### 动态准入控制 {#dynamic-admission-control} + +请求的鉴权操作结束之后,如果请求的是写操作,还会经过 +[准入控制](/zh/docs/reference/access-authn-authz/admission-controllers/)处理步骤。 +除了内置的处理步骤,还存在一些扩展点: + +* [Image Policy webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) + 能够限制容器中可以运行哪些镜像。 +* 为了执行任意的准入控制,可以使用一种通用的 + [Admission webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) + 机制。这类 Webhook 可以拒绝对象创建或更新请求。 + +<!-- +## Infrastructure Extensions + +### Storage Plugins + +[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md +) allow users to mount volume types without built-in support by having the +Kubelet call a Binary Plugin to mount the volume. +--> +## 基础设施扩展 {#infrastructure-extensions} + +### 存储插件 {#storage-plugins} + +[FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md +) +卷可以让用户挂载无需内建支持的卷类型,kubelet 会调用可执行文件插件 +来挂载对应的存储卷。 + +<!-- +### Device Plugins + +Device plugins allow a node to discover new Node resources (in addition to the +builtin ones like cpu and memory) via a [Device +Plugin](/docs/concepts/cluster-administration/device-plugins/). + +### Network Plugins + +Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/). +--> +### 设备插件 {#device-plugins} + +使用[设备插件](/zh/docs/concepts/cluster-administration/device-plugins/), +节点能够发现新的节点资源(除了内置的类似 CPU 和内存这类资源)。 + +### 网络插件 {#network-plugins} + +通过节点层面的[网络插件](/docs/admin/network-plugins/),可以支持 +不同的网络设施。 + +<!-- +### Scheduler Extensions + +The scheduler is a special type of controller that watches pods, and assigns +pods to nodes. The default scheduler can be replaced entirely, while +continuing to use other Kubernetes components, or [multiple +schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) +can run at the same time. + +This is a significant undertaking, and almost all Kubernetes users find they +do not need to modify the scheduler. + +The scheduler also supports a +[webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md) +that permits a webhook backend (scheduler extension) to filter and prioritize +the nodes chosen for a pod. 
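As a rough sketch of the scheduler extender mechanism described above (the backend URL and verb names are illustrative assumptions), a legacy kube-scheduler policy file can register a webhook backend that is called to filter and prioritize candidate nodes:

```yaml
kind: Policy
apiVersion: v1
extenders:
- urlPrefix: "http://scheduler-extender.example.com"  # hypothetical backend service
  filterVerb: "filter"          # POSTed to <urlPrefix>/filter with the candidate nodes
  prioritizeVerb: "prioritize"  # POSTed to <urlPrefix>/prioritize to score nodes
  weight: 1                     # how much the extender's scores count
```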
+--> +### 调度器扩展 {#scheduler-extensions} + +调度器是一种特殊的控制器,负责监视 Pod 变化并将 Pod 分派给节点。 +默认的调度器可以被整体替换掉,同时继续使用其他 Kubernetes 组件。 +或者也可以在同一时刻使用 +[多个调度器](/zh/docs/tasks/administer-cluster/configure-multiple-schedulers/)。 + +这是一项非同小可的任务,几乎绝大多数 Kubernetes +用户都会发现其实他们不需要修改调度器。 + +调度器也支持一种 [webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md), +允许使用某种 Webhook 后端(调度器扩展)来为 Pod +可选的节点执行过滤和优先排序操作。 + + +## {{% heading "whatsnext" %}} + +<!-- +* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/) +* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/) +* Learn more about Infrastructure extensions + * [Network Plugins](/docs/concepts/cluster-administration/network-plugins/) + * [Device Plugins](/docs/concepts/cluster-administration/device-plugins/) +* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) +* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/) +--> +* 进一步了解[自定义资源](/zh/docs/concepts/api-extension/custom-resources/) +* 了解[动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/) +* 进一步了解基础设施扩展 + * [网络插件](/zh/docs/concepts/cluster-administration/network-plugins/) + * [设备插件](/zh/docs/concepts/cluster-administration/device-plugins/) +* 了解 [kubectl 插件](/zh/docs/tasks/extend-kubectl/kubectl-plugins/) +* 了解 [Operator 模式](/zh/docs/concepts/extend-kubernetes/operator/) + + diff --git a/content/zh/docs/concepts/overview/kubernetes-api.md b/content/zh/docs/concepts/overview/kubernetes-api.md index 9a40a3b74b..8594bdcfb4 100644 --- a/content/zh/docs/concepts/overview/kubernetes-api.md +++ b/content/zh/docs/concepts/overview/kubernetes-api.md @@ -2,6 +2,8 @@ title: Kubernetes API content_type: concept weight: 30 +description: > + Kubernetes API 使您可以查询和操纵 Kubernetes 中对象的状态。Kubernetes 控制平面的核心是 API 服务器和它暴露的 HTTP API。 用户、集群的不同部分以及外部组件都通过 API 服务器相互通信。 card: name: concepts weight: 30 @@ -270,5 +272,3 @@ Individual resource enablement/disablement is only supported in the `extensions/ 出于遗留原因,仅在 `extensions / v1beta1` API 组中支持各个资源的启用/禁用。 {{< /note >}} - - diff --git a/content/zh/docs/concepts/overview/working-with-objects/common-labels.md b/content/zh/docs/concepts/overview/working-with-objects/common-labels.md index be60e6f10f..4132291df2 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/common-labels.md +++ b/content/zh/docs/concepts/overview/working-with-objects/common-labels.md @@ -64,7 +64,7 @@ on every resource object. | Key | Description | Example | Type | | ----------------------------------- | --------------------- | -------- | ---- | | `app.kubernetes.io/name` | The name of the application | `mysql` | string | -| `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `wordpress-abcxzy` | string | +| `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `mysql-abcxzy` | string | | `app.kubernetes.io/version` | The current version of the application (e.g., a semantic version, revision hash, etc.) | `5.7.21` | string | | `app.kubernetes.io/component` | The component within the architecture | `database` | string | | `app.kubernetes.io/part-of` | The name of a higher level application this one is part of | `wordpress` | string | @@ -73,7 +73,7 @@ on every resource object. 
| 键 | 描述 | 示例 | 类型 | | ----------------------------------- | --------------------- | -------- | ---- | | `app.kubernetes.io/name` | 应用程序的名称 | `mysql` | 字符串 | -| `app.kubernetes.io/instance` | 用于唯一确定应用实例的名称 | `wordpress-abcxzy` | 字符串 | +| `app.kubernetes.io/instance` | 用于唯一确定应用实例的名称 | `mysql-abcxzy` | 字符串 | | `app.kubernetes.io/version` | 应用程序的当前版本(例如,语义版本,修订版哈希等) | `5.7.21` | 字符串 | | `app.kubernetes.io/component` | 架构中的组件 | `database` | 字符串 | | `app.kubernetes.io/part-of` | 此级别的更高级别应用程序的名称 | `wordpress` | 字符串 | @@ -89,7 +89,7 @@ kind: StatefulSet metadata: labels: app.kubernetes.io/name: mysql - app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/instance: mysql-abcxzy app.kubernetes.io/version: "5.7.21" app.kubernetes.io/component: database app.kubernetes.io/part-of: wordpress diff --git a/content/zh/docs/concepts/configuration/assign-pod-node.md b/content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md similarity index 87% rename from content/zh/docs/concepts/configuration/assign-pod-node.md rename to content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md index 92d26f57cf..29dec9c016 100644 --- a/content/zh/docs/concepts/configuration/assign-pod-node.md +++ b/content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -5,18 +5,11 @@ weight: 50 --- <!-- ---- -reviewers: -- davidopp -- kevin-wangzefeng -- bsalamat title: Assigning Pods to Nodes content_type: concept weight: 50 ---- --> - <!-- overview --> <!-- @@ -26,14 +19,12 @@ There are several ways to do this, and the recommended approaches all use [label selectors](/docs/concepts/overview/working-with-objects/labels/) to make the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.) -but there are some circumstances where you may want more control on a node where a pod lands, e.g. to ensure +but there are some circumstances where you may want more control on a node where a pod lands, for example to ensure that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone. --> -你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。 - - +你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。 <!-- body --> @@ -46,7 +37,8 @@ to run on a node, the node must have each of the indicated key-value pairs as la additional labels as well). The most common usage is one key-value pair. 
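A minimal sketch of the idea (the `disktype: ssd` label matches the walkthrough that follows; the image is arbitrary): the Pod below is only eligible for nodes that carry that label.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # every listed key/value must be present as a node label
```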
--> -`nodeSelector` 是节点选择约束的最简单推荐形式。`nodeSelector` 是 PodSpec 的一个字段。它指定键值对的映射。为了使 pod 可以在节点上运行,节点必须具有每个指定的键值对作为标签(它也可以具有其他标签)。最常用的是一对键值对。 +`nodeSelector` 是节点选择约束的最简单推荐形式。`nodeSelector` 是 PodSpec 的一个字段。 +它包含键值对的映射。为了使 pod 可以在某个节点上运行,该节点的标签中必须包含这里的每个键值对(它也可以具有其他标签)。最常见的用法的是一对键值对。 <!-- Let's walk through an example of how to use `nodeSelector`. @@ -63,38 +55,40 @@ Let's walk through an example of how to use `nodeSelector`. <!-- This example assumes that you have a basic understanding of Kubernetes pods and that you have [set up a Kubernetes cluster](/docs/setup/). --> - -本示例假设你已基本了解 Kubernetes 的 pod 并且已经[建立一个 Kubernetes 集群](/docs/setup/)。 +本示例假设你已基本了解 Kubernetes 的 Pod 并且已经[建立一个 Kubernetes 集群](/zh/docs/setup/)。 <!-- ### Step One: Attach label to the node --> - -### 步骤一:添加标签到节点 +### 步骤一:添加标签到节点 {#attach-labels-to-node} <!-- Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`. --> -执行 `kubectl get nodes` 命令获取集群的节点名称。选择一个你要增加标签的节点,然后执行 `kubectl label nodes <node-name> <label-key>=<label-value>` 命令将标签添加到你所选择的节点上。例如,如果你的节点名称为 'kubernetes-foo-node-1.c.a-robinson.internal' 并且想要的标签是 'disktype=ssd',则可以执行 `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd` 命令。 +执行 `kubectl get nodes` 命令获取集群的节点名称。 +选择一个你要增加标签的节点,然后执行 `kubectl label nodes <node-name> <label-key>=<label-value>` +命令将标签添加到你所选择的节点上。 +例如,如果你的节点名称为 'kubernetes-foo-node-1.c.a-robinson.internal' +并且想要的标签是 'disktype=ssd',则可以执行 +`kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd` 命令。 <!-- -You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label. You can also use `kubectl describe node "nodename"` to see the full list of labels of the given node. +You can verify that it worked by re-running `kubectl get nodes -show-labels` and checking that the node now has a label. You can also use `kubectl describe node "nodename"` to see the full list of labels of the given node. --> - -你可以通过重新运行 `kubectl get nodes --show-labels` 并且查看节点当前具有了一个标签来验证它是否有效。你也可以使用 `kubectl describe node "nodename"` 命令查看指定节点的标签完整列表。 +你可以通过重新运行 `kubectl get nodes --show-labels`,查看节点当前具有了所指定的标签来验证它是否有效。 +你也可以使用 `kubectl describe node "nodename"` 命令查看指定节点的标签完整列表。 <!-- ### Step Two: Add a nodeSelector field to your pod configuration --> - -### 步骤二:添加 nodeSelector 字段到 pod 配置中 +### 步骤二:添加 nodeSelector 字段到 Pod 配置中 <!-- Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config: --> - -拿任意一个你想运行的 pod 的配置文件,并且在其中添加一个 nodeSelector 部分。例如,如果下面是我的 pod 配置: +选择任何一个你想运行的 Pod 的配置文件,并且在其中添加一个 nodeSelector 部分。 +例如,如果下面是我的 pod 配置: ```yaml apiVersion: v1 @@ -123,32 +117,30 @@ the Pod will get scheduled on the node that you attached the label to. You can verify that it worked by running `kubectl get pods -o wide` and looking at the "NODE" that the Pod was assigned to. 
--> - -当你之后运行 `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml` 命令,pod 将会调度到将标签添加到的节点上。你可以通过运行 `kubectl get pods -o wide` 并查看分配给 pod 的 “NODE” 来验证其是否有效。 +当你之后运行 `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml` 命令, +Pod 将会调度到将标签添加到的节点上。你可以通过运行 `kubectl get pods -o wide` 并查看分配给 pod 的 “NODE” 来验证其是否有效。 <!-- ## Interlude: built-in node labels {#built-in-node-labels} --> -## 插曲:内置的节点标签 {#内置的节点标签} +## 插曲:内置的节点标签 {#built-in-node-labels} <!-- In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated with a standard set of labels. These labels are --> +除了你[附加](#attach-labels-to-node)的标签外,节点还预先填充了一组标准标签。这些标签是 -除了你[附加](#添加标签到节点)的标签外,节点还预先填充了一组标准标签。这些标签是 - -* [`kubernetes.io/hostname`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname) -* [`failure-domain.beta.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone) -* [`failure-domain.beta.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion) -* [`topology.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) -* [`topology.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) -* [`beta.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type) -* [`node.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type) -* [`kubernetes.io/os`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os) -* [`kubernetes.io/arch`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch) - +* [`kubernetes.io/hostname`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname) +* [`failure-domain.beta.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone) +* [`failure-domain.beta.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion) +* [`topology.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) +* [`topology.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) +* [`beta.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type) +* [`node.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type) +* [`kubernetes.io/os`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os) +* [`kubernetes.io/arch`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch) {{< note >}} <!-- @@ -190,7 +182,8 @@ For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example. --> 1. 检查是否在使用 Kubernetes v1.11+,以便 NodeRestriction 功能可用。 -2. 确保你在使用[节点授权](/docs/reference/access-authn-authz/node/)并且已经_启用_ [NodeRestriction 准入插件](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)。 +2. 确保你在使用[节点授权](/zh/docs/reference/access-authn-authz/node/)并且已经_启用_ + [NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)。 3. 
将 `node-restriction.kubernetes.io/` 前缀下的标签添加到 Node 对象,然后在节点选择器中使用这些标签。例如,`example.com.node-restriction.kubernetes.io/fips=true` 或 `example.com.node-restriction.kubernetes.io/pci-dss=true`。 <!-- @@ -289,10 +282,13 @@ value is `another-node-label-value` should be preferred. <!-- You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`. You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use -[node taints](/docs/concepts/configuration/taint-and-toleration/) to repel pods from specific nodes. +[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes. --> -你可以在上面的例子中看到 `In` 操作符的使用。新的节点亲和语法支持下面的操作符: `In`,`NotIn`,`Exists`,`DoesNotExist`,`Gt`,`Lt`。你可以使用 `NotIn` 和 `DoesNotExist` 来实现节点反亲和行为,或者使用[节点污点](/docs/concepts/configuration/taint-and-toleration/)将 pod 从特定节点中驱逐。 +你可以在上面的例子中看到 `In` 操作符的使用。新的节点亲和语法支持下面的操作符: +`In`,`NotIn`,`Exists`,`DoesNotExist`,`Gt`,`Lt`。 +你可以使用 `NotIn` 和 `DoesNotExist` 来实现节点反亲和行为,或者使用 +[节点污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)将 pod 从特定节点中驱逐。 <!-- If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod @@ -343,7 +339,7 @@ key for the node label that the system uses to denote such a topology domain, e. in the section [Interlude: built-in node labels](#built-in-node-labels). --> -pod 间亲和与反亲和使你可以*基于已经在节点上运行的 pod 的标签*来约束 pod 可以调度到的节点,而不是基于节点上的标签。规则的格式为“如果 X 节点上已经运行了一个或多个 满足规则 Y 的pod,则这个 pod 应该(或者在非亲和的情况下不应该)运行在 X 节点”。Y 表示一个具有可选的关联命令空间列表的 LabelSelector;与节点不同,因为 pod 是命名空间限定的(因此 pod 上的标签也是命名空间限定的),因此作用于 pod 标签的标签选择器必须指定选择器应用在哪个命名空间。从概念上讲,X 是一个拓扑域,如节点,机架,云供应商地区,云供应商区域等。你可以使用 `topologyKey` 来表示它,`topologyKey` 是节点标签的键以便系统用来表示这样的拓扑域。请参阅上面[插曲:内置的节点标签](#内置的节点标签)部分中列出的标签键。 +pod 间亲和与反亲和使你可以*基于已经在节点上运行的 pod 的标签*来约束 pod 可以调度到的节点,而不是基于节点上的标签。规则的格式为“如果 X 节点上已经运行了一个或多个 满足规则 Y 的pod,则这个 pod 应该(或者在非亲和的情况下不应该)运行在 X 节点”。Y 表示一个具有可选的关联命令空间列表的 LabelSelector;与节点不同,因为 pod 是命名空间限定的(因此 pod 上的标签也是命名空间限定的),因此作用于 pod 标签的标签选择器必须指定选择器应用在哪个命名空间。从概念上讲,X 是一个拓扑域,如节点,机架,云供应商地区,云供应商区域等。你可以使用 `topologyKey` 来表示它,`topologyKey` 是节点标签的键以便系统用来表示这样的拓扑域。请参阅上面[插曲:内置的节点标签](#built-in-node-labels)部分中列出的标签键。 {{< note >}} <!-- @@ -578,10 +574,12 @@ As you can see, all the 3 replicas of the `web-server` are automatically co-loca ``` kubectl get pods -o wide ``` + <!-- The output is similar to this: --> 输出类似于如下内容: + ``` NAME READY STATUS RESTARTS AGE IP NODE redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3 @@ -604,8 +602,10 @@ no two instances are located on the same host. See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique. --> - -上面的例子使用 `PodAntiAffinity` 规则和 `topologyKey: "kubernetes.io/hostname"` 来部署 redis 集群以便在同一主机上没有两个实例。参阅 [ZooKeeper 教程](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure),以获取配置反亲和来达到高可用性的 StatefulSet 的样例(使用了相同的技巧)。 +上面的例子使用 `PodAntiAffinity` 规则和 `topologyKey: "kubernetes.io/hostname"` +来部署 redis 集群以便在同一主机上没有两个实例。 +参阅 [ZooKeeper 教程](/zh/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure), +以获取配置反亲和来达到高可用性的 StatefulSet 的样例(使用了相同的技巧)。 ## nodeName @@ -617,13 +617,11 @@ kubelet running on the named node tries to run the pod. Thus, if `nodeName` is provided in the PodSpec, it takes precedence over the above methods for node selection. 
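A minimal sketch of a Pod that sets `nodeName` directly (the node name `kube-01` follows the example discussed below; the image is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01   # bypasses the scheduler; the kubelet on kube-01 runs the Pod
```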
--> - `nodeName` 是节点选择约束的最简单方法,但是由于其自身限制,通常不使用它。`nodeName` 是 PodSpec 的一个字段。如果它不为空,调度器将忽略 pod,并且运行在它指定节点上的 kubelet 进程尝试运行该 pod。因此,如果 `nodeName` 在 PodSpec 中指定了,则它优先于上面的节点选择方法。 <!-- Some of the limitations of using `nodeName` to select nodes are: --> - 使用 `nodeName` 来选择节点的一些限制: <!-- @@ -635,7 +633,6 @@ Some of the limitations of using `nodeName` to select nodes are: - Node names in cloud environments are not always predictable or stable. --> - - 如果指定的节点不存在, - 如果指定的节点没有资源来容纳 pod,pod 将会调度失败并且其原因将显示为,比如 OutOfmemory 或 OutOfcpu。 - 云环境中的节点名称并非总是可预测或稳定的。 @@ -643,7 +640,6 @@ Some of the limitations of using `nodeName` to select nodes are: <!-- Here is an example of a pod config file using the `nodeName` field: --> - 下面的是使用 `nodeName` 字段的 pod 配置文件的例子: ```yaml @@ -664,16 +660,13 @@ The above pod will run on the node kube-01. 上面的 pod 将运行在 kube-01 节点上。 - - ## {{% heading "whatsnext" %}} - <!-- -[Taints](/docs/concepts/configuration/taint-and-toleration/) allow a Node to *repel* a set of Pods. +[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods. --> -[污点](/docs/concepts/configuration/taint-and-toleration/)允许节点*排斥*一组 pod。 +[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)允许节点*排斥*一组 pod。 <!-- The design documents for @@ -681,7 +674,8 @@ The design documents for and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) contain extra background information about these features. --> -[节点亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)与 [pod 间亲和/反亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)的设计文档包含这些功能的其他背景信息。 +[节点亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)与 +[pod 间亲和/反亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)的设计文档包含这些功能的其他背景信息。 <!-- Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources. @@ -689,6 +683,7 @@ The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can tak resource allocation decisions. --> -一旦 pod 分配给 节点,kubelet 应用将运行该 pod 并且分配节点本地资源。[拓扑管理](/docs/tasks/administer-cluster/topology-manager/) - +一旦 Pod 分配给 节点,kubelet 应用将运行该 pod 并且分配节点本地资源。 +[拓扑管理器](/zh/docs/tasks/administer-cluster/topology-manager/) +可以参与到节点级别的资源分配决定中。 diff --git a/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index b38314d989..0efa384a27 100644 --- a/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -4,13 +4,11 @@ content_type: concept weight: 70 --- <!-- ---- reviewers: - bsalamat title: Scheduler Performance Tuning content_type: concept weight: 70 ---- --> <!-- overview --> @@ -22,7 +20,9 @@ weight: 70 is the Kubernetes default scheduler. It is responsible for placement of Pods on Nodes in a cluster. --> -作为 kubernetes 集群的默认调度器,[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) 主要负责将 Pod 调度到集群的 Node 上。 +作为 kubernetes 集群的默认调度器, +[kube-scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) +主要负责将 Pod 调度到集群的 Node 上。 <!-- Nodes in a cluster that meet the scheduling requirements of a Pod are @@ -32,7 +32,10 @@ picking a Node with the highest score among the feasible ones to run the Pod. 
The scheduler then notifies the API server about this decision in a process called _Binding_. --> -在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度_ Node。调度器先在集群中找到一个 Pod 的可调度 Node,然后根据一系列函数对这些可调度 Node打分,之后选出其中得分最高的 Node 来运行 Pod。最后,调度器将这个调度决定告知 kube-apiserver,这个过程叫做 _绑定_。 +在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度_ Node。 +调度器先在集群中找到一个 Pod 的可调度 Node,然后根据一系列函数对这些可调度 Node 打分, +之后选出其中得分最高的 Node 来运行 Pod。 +最后,调度器将这个调度决定告知 kube-apiserver,这个过程叫做 _绑定(Binding)_。 <!-- This page explains performance tuning optimizations that are relevant for @@ -40,8 +43,6 @@ large Kubernetes clusters. --> 这篇文章将会介绍一些在大规模 Kubernetes 集群下调度器性能优化的方式。 - - <!-- body --> <!-- @@ -55,7 +56,8 @@ a threshold for scheduling nodes in your cluster. --> 在大规模集群中,你可以调节调度器的表现来平衡调度的延迟(新 Pod 快速就位)和精度(调度器很少做出糟糕的放置决策)。 -你可以通过设置 kube-scheduler 的 `percentageOfNodesToScore` 来配置这个调优设置。这个 KubeSchedulerConfiguration 设置决定了调度集群中节点的阈值。 +你可以通过设置 kube-scheduler 的 `percentageOfNodesToScore` 来配置这个调优设置。 +这个 KubeSchedulerConfiguration 设置决定了调度集群中节点的阈值。 <!-- ### Setting the threshold @@ -117,8 +119,11 @@ enough feasible nodes to exceed the configured percentage, the kube-scheduler stops searching for more feasible nodes and moves on to the [scoring phase](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation). --> -你可以使用整个集群节点总数的百分比作为阈值来指定需要多少节点就足够。 kube-scheduler 会将它转换为节点数的整数值。在调度期间,如果 -kube-scheduler 已确认的可调度节点数足以超过了配置的百分比数量,kube-scheduler 将停止继续查找可调度节点并继续进行 [打分阶段](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 +你可以使用整个集群节点总数的百分比作为阈值来指定需要多少节点就足够。 +kube-scheduler 会将它转换为节点数的整数值。在调度期间,如果 +kube-scheduler 已确认的可调度节点数足以超过了配置的百分比数量, +kube-scheduler 将停止继续查找可调度节点并继续进行 +[打分阶段](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 <!-- [How the scheduler iterates over Nodes](#how-the-scheduler-iterates-over-nodes) @@ -228,7 +233,7 @@ prefer to run the Pod on any Node as long as it is feasible. <!-- ### How the scheduler iterates over Nodes --> -### 调度器做调度选择的时候如何覆盖所有的 Node +### 调度器做调度选择的时候如何覆盖所有的 Node {#how-the-scheduler-iterates-over-nodes} <!-- This section is intended for those who want to understand the internal details diff --git a/content/zh/docs/concepts/configuration/taint-and-toleration.md b/content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md similarity index 52% rename from content/zh/docs/concepts/configuration/taint-and-toleration.md rename to content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md index 26cbc3d364..1635c410d2 100755 --- a/content/zh/docs/concepts/configuration/taint-and-toleration.md +++ b/content/zh/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -1,45 +1,48 @@ --- -title: Taint 和 Toleration +title: 污点和容忍度 content_type: concept weight: 40 --- - <!-- overview --> <!-- Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature), -is a property of *pods* that *attracts* them to a set of nodes (either as a -preference or a hard requirement). Taints are the opposite -- they allow a -*node* to *repel* a set of pods. +is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to +a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a +hard requirement). Taints are the opposite -they allow a node to repel a set of pods. 
--> -节点亲和性(详见[这里](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature)),是 *pod* 的一种属性(偏好或硬性要求),它使 *pod* 被吸引到一类特定的节点。Taint 则相反,它使 *节点* 能够 *排斥* 一类特定的 pod。 +节点亲和性(详见[这里](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)) +是 {{< glossary_tooltip text="Pod" term_id="pod" >}} 的一种属性,它使 Pod +被吸引到一类特定的{{< glossary_tooltip text="节点" term_id="node" >}}。 +这可能出于一种偏好,也可能是硬性要求。 +Taint(污点)则相反,它使节点能够排斥一类特定的 Pod。 <!-- +_Tolerations_ are applied to pods, and allow (but do not require) the pods to schedule +onto nodes with matching taints. + Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. -Tolerations are applied to pods, and allow (but do not require) the pods to schedule -onto nodes with matching taints. --> -Taint 和 toleration 相互配合,可以用来避免 pod 被分配到不合适的节点上。每个节点上都可以应用一个或多个 taint ,这表示对于那些不能容忍这些 taint 的 pod,是不会被该节点接受的。如果将 toleration 应用于 pod 上,则表示这些 pod 可以(但不要求)被调度到具有匹配 taint 的节点上。 - +容忍度(Tolerations)是应用于 Pod 上的,允许(但并不要求)Pod +调度到带有与之匹配的污点的节点上。 +污点和容忍度(Toleration)相互配合,可以用来避免 Pod 被分配到不合适的节点上。 +每个节点上都可以应用一个或多个污点,这表示对于那些不能容忍这些污点的 Pod,是不会被该节点接受的。 <!-- body --> <!-- - ## Concepts - --> - ## 概念 <!-- You add a taint to a node using [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint). For example, --> -您可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个 taint。比如, +您可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个污点。比如, ```shell kubectl taint nodes node1 key=value:NoSchedule @@ -48,16 +51,25 @@ kubectl taint nodes node1 key=value:NoSchedule <!-- places a taint on node `node1`. The taint has key `key`, value `value`, and taint effect `NoSchedule`. This means that no pod will be able to schedule onto `node1` unless it has a matching toleration. -You specify a toleration for a pod in the PodSpec. Both of the following tolerations "match" the -taint created by the `kubectl taint` line above, and thus a pod with either toleration would be able -to schedule onto `node1`: ---> -给节点 `node1` 增加一个 taint,它的 key 是 `key`,value 是 `value`,effect 是 `NoSchedule`。这表示只有拥有和这个 taint 相匹配的 toleration 的 pod 才能够被分配到 `node1` 这个节点。您可以在 PodSpec 中定义 pod 的 toleration。下面两个 toleration 均与上面例子中使用 `kubectl taint` 命令创建的 taint 相匹配,因此如果一个 pod 拥有其中的任何一个 toleration 都能够被分配到 `node1` : -<!-- +```shell +kubectl taint nodes node1 key:NoSchedule +``` + To remove the taint added by the command above, you can run: +```shell +kubectl taint nodes node1 key:NoSchedule- +``` --> -想删除上述命令添加的 taint ,您可以运行: +给节点 `node1` 增加一个污点,它的键名是 `key`,键值是 `value`,效果是 `NoSchedule`。 +这表示只有拥有和这个污点相匹配的容忍度的 Pod 才能够被分配到 `node1` 这个节点。 + +```shell +kubectl taint nodes node1 key:NoSchedule- +``` + +若要移除上述命令所添加的污点,你可以执行: + ```shell kubectl taint nodes node1 key:NoSchedule- ``` @@ -67,9 +79,9 @@ You specify a toleration for a pod in the PodSpec. 
Both of the following tolerat taint created by the `kubectl taint` line above, and thus a pod with either toleration would be able to schedule onto `node1`: --> - -您可以在 PodSpec 中为容器设定容忍标签。以下两个容忍标签都与上面的 `kubectl taint` 创建的污点“匹配”, -因此具有任一容忍标签的Pod都可以将其调度到 `node1` 上: +您可以在 PodSpec 中定义 Pod 的容忍度。 +下面两个容忍度均与上面例子中使用 `kubectl taint` 命令创建的污点相匹配, +因此如果一个 Pod 拥有其中的任何一个容忍度都能够被分配到 `node1` : ```yaml tolerations: @@ -86,55 +98,55 @@ tolerations: effect: "NoSchedule" ``` +<!-- +Here’s an example of a pod that uses tolerations: +--> +这里是一个使用了容忍度的 Pod: + +{{< codenew file="pods/pod-with-toleration.yaml" >}} + +<!-- +The default value for `operator` is `Equal`. +--> +`operator` 的默认值是 `Equal`。 + <!-- A toleration "matches" a taint if the keys are the same and the effects are the same, and: * the `operator` is `Exists` (in which case no `value` should be specified), or * the `operator` is `Equal` and the `value`s are equal - -`Operator` defaults to `Equal` if not specified. --> -一个 toleration 和一个 taint 相“匹配”是指它们有一样的 key 和 effect ,并且: +一个容忍度和一个污点相“匹配”是指它们有一样的键名和效果,并且: -* 如果 `operator` 是 `Exists` (此时 toleration 不能指定 `value`),或者 +* 如果 `operator` 是 `Exists` (此时容忍度不能指定 `value`),或者 * 如果 `operator` 是 `Equal` ,则它们的 `value` 应该相等 -{{< note >}} <!-- There are two special cases: -* An empty `key` with operator `Exists` matches all keys, values and effects which means this +An empty `key` with operator `Exists` matches all keys, values and effects which means this will tolerate everything. ---> +An empty `effect` matches all effects with key `key`. +--> +{{< note >}} 存在两种特殊情况: -* 如果一个 toleration 的 `key` 为空且 operator 为 `Exists`,表示这个 toleration 与任意的 key 、value 和 effect 都匹配,即这个 toleration 能容忍任意 taint。 +如果一个容忍度的 `key` 为空且 operator 为 `Exists`, +表示这个容忍度与任意的 key 、value 和 effect 都匹配,即这个容忍度能容忍任意 taint。 -```yaml -tolerations: -- operator: "Exists" -``` - -<!-- -* An empty `effect` matches all effects with key `key`. ---> -* 如果一个 toleration 的 `effect` 为空,则 `key` 值与之相同的相匹配 taint 的 `effect` 可以是任意值。 - -```yaml -tolerations: -- key: "key" - operator: "Exists" -``` +如果 `effect` 为空,则可以与所有键名 `key` 的效果相匹配。 {{< /note >}} <!-- The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`. -This is a "preference" or "soft" version of `NoSchedule` -- the system will *try* to avoid placing a +This is a "preference" or "soft" version of `NoSchedule` - the system will *try* to avoid placing a pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is `NoExecute`, described later. --> -上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。这是“优化”或“软”版本的 `NoSchedule` ——系统会 *尽量* 避免将 pod 调度到存在其不能容忍 taint 的节点上,但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 +上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。 +这是“优化”或“软”版本的 `NoSchedule` —— 系统会 *尽量* 避免将 Pod 调度到存在其不能容忍污点的节点上, +但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 <!-- You can put multiple taints on the same node and multiple tolerations on the same pod. @@ -142,7 +154,10 @@ The way Kubernetes processes multiple taints and tolerations is like a filter: s with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. 
In particular, --> -您可以给一个节点添加多个 taint ,也可以给一个 pod 添加多个 toleration。Kubernetes 处理多个 taint 和 toleration 的过程就像一个过滤器:从一个节点的所有 taint 开始遍历,过滤掉那些 pod 中存在与之相匹配的 toleration 的 taint。余下未被过滤的 taint 的 effect 值决定了 pod 是否会被分配到该节点,特别是以下情况: +您可以给一个节点添加多个污点,也可以给一个 Pod 添加多个容忍度设置。 +Kubernetes 处理多个污点和容忍度的过程就像一个过滤器:从一个节点的所有污点开始遍历, +过滤掉那些 Pod 中存在与之相匹配的容忍度的污点。余下未被过滤的污点的 effect 值决定了 +Pod 是否会被分配到该节点,特别是以下情况: <!-- @@ -154,14 +169,19 @@ effect `PreferNoSchedule` then Kubernetes will *try* to not schedule the pod ont the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node). --> -* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoSchedule` 的 taint,则 Kubernetes 不会将 pod 分配到该节点。 -* 如果未被过滤的 taint 中不存在 effect 值为 `NoSchedule` 的 taint,但是存在 effect 值为 `PreferNoSchedule` 的 taint,则 Kubernetes 会 *尝试* 将 pod 分配到该节点。 -* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoExecute` 的 taint,则 Kubernetes 不会将 pod 分配到该节点(如果 pod 还未在节点上运行),或者将 pod 从该节点驱逐(如果 pod 已经在节点上运行)。 +* 如果未被过滤的污点中存在至少一个 effect 值为 `NoSchedule` 的污点, + 则 Kubernetes 不会将 Pod 分配到该节点。 +* 如果未被过滤的污点中不存在 effect 值为 `NoSchedule` 的污点, + 但是存在 effect 值为 `PreferNoSchedule` 的污点, + 则 Kubernetes 会 *尝试* 将 Pod 分配到该节点。 +* 如果未被过滤的污点中存在至少一个 effect 值为 `NoExecute` 的污点, + 则 Kubernetes 不会将 Pod 分配到该节点(如果 Pod 还未在节点上运行), + 或者将 Pod 从该节点驱逐(如果 Pod 已经在节点上运行)。 <!-- For example, imagine you taint a node like this --> -例如,假设您给一个节点添加了如下的 taint +例如,假设您给一个节点添加了如下污点 ```shell kubectl taint nodes node1 key1=value1:NoSchedule @@ -172,7 +192,7 @@ kubectl taint nodes node1 key2=value2:NoSchedule <!-- And a pod has two tolerations: --> -然后存在一个 pod,它有两个 toleration: +假定有一个 Pod,它有两个容忍度: ```yaml tolerations: @@ -192,7 +212,9 @@ toleration matching the third taint. But it will be able to continue running if already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. --> -在这个例子中,上述 pod 不会被分配到上述节点,因为其没有 toleration 和第三个 taint 相匹配。但是如果在给节点添加上述 taint 之前,该 pod 已经在上述节点运行,那么它还可以继续运行在该节点上,因为第三个 taint 是三个 taint 中唯一不能被这个 pod 容忍的。 +在这种情况下,上述 Pod 不会被分配到上述节点,因为其没有容忍度和第三个污点相匹配。 +但是如果在给节点添加上述污点之前,该 Pod 已经在上述节点运行, +那么它还可以继续运行在该节点上,因为第三个污点是三个污点中唯一不能被这个 Pod 容忍的。 <!-- Normally, if a taint with effect `NoExecute` is added to a node, then any pods that do @@ -201,7 +223,12 @@ taint will never be evicted. However, a toleration with `NoExecute` effect can s an optional `tolerationSeconds` field that dictates how long the pod will stay bound to the node after the taint is added. For example, --> -通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的 taint,则任何不能忍受这个 taint 的 pod 都会马上被驱逐,任何可以忍受这个 taint 的 pod 都不会被驱逐。但是,如果 pod 存在一个 effect 值为 `NoExecute` 的 toleration 指定了可选属性 `tolerationSeconds` 的值,则表示在给节点添加了上述 taint 之后,pod 还能继续在节点上运行的时间。例如, +通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的污点, +则任何不能忍受这个污点的 Pod 都会马上被驱逐, +任何可以忍受这个污点的 Pod 都不会被驱逐。 +但是,如果 Pod 存在一个 effect 值为 `NoExecute` 的容忍度指定了可选属性 +`tolerationSeconds` 的值,则表示在给节点添加了上述污点之后, +Pod 还能继续在节点上运行的时间。例如, ```yaml tolerations: @@ -217,24 +244,22 @@ means that if this pod is running and a matching taint is added to the node, the the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted. 
--> -这表示如果这个 pod 正在运行,然后一个匹配的 taint 被添加到其所在的节点,那么 pod 还将继续在节点上运行 3600 秒,然后被驱逐。如果在此之前上述 taint 被删除了,则 pod 不会被驱逐。 +这表示如果这个 Pod 正在运行,同时一个匹配的污点被添加到其所在的节点, +那么 Pod 还将继续在节点上运行 3600 秒,然后被驱逐。 +如果在此之前上述污点被删除了,则 Pod 不会被驱逐。 <!-- - ## Example Use Cases - --> - ## 使用例子 <!-- Taints and tolerations are a flexible way to steer pods *away* from nodes or evict pods that shouldn't be running. A few of the use cases are --> -通过 taint 和 toleration,可以灵活地让 pod *避开* 某些节点或者将 pod 从某些节点驱逐。下面是几个使用例子: +通过污点和容忍度,可以灵活地让 Pod *避开* 某些节点或者将 Pod 从某些节点驱逐。下面是几个使用例子: <!-- - * **Dedicated Nodes**: If you want to dedicate a set of nodes for exclusive use by a particular set of users, you can add a taint to those nodes (say, `kubectl taint nodes nodename dedicated=groupName:NoSchedule`) and then add a corresponding @@ -247,11 +272,16 @@ to the taint to the same set of nodes (e.g. `dedicated=groupName`), and the admi controller should additionally add a node affinity to require that the pods can only schedule onto nodes labeled with `dedicated=groupName`. --> -* **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个 taint(即, - `kubectl taint nodes nodename dedicated=groupName:NoSchedule`),然后给这组用户的 pod 添加一个相对应的 toleration(通过编写一个自定义的 [admission controller](/docs/admin/admission-controllers/),很容易就能做到)。拥有上述 toleration 的 pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。如果您希望这些 pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 taint 类似的 label (例如:`dedicated=groupName`),同时 还要在上述 admission controller 中给 pod 增加节点亲和性要求上述 pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上。 +* **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个污点(即, + `kubectl taint nodes nodename dedicated=groupName:NoSchedule`), + 然后给这组用户的 Pod 添加一个相对应的 toleration(通过编写一个自定义的 + [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/),很容易就能做到)。 + 拥有上述容忍度的 Pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。 + 如果您希望这些 Pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 + 污点类似的 label (例如:`dedicated=groupName`),同时 还要在上述准入控制器中给 Pod + 增加节点亲和性要求上述 Pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上。 <!-- - * **Nodes with Special Hardware**: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those nodes, thus leaving room for later-arriving pods that do need the @@ -274,27 +304,39 @@ on the special hardware nodes. This will make sure that these special hardware nodes are dedicated for pods requesting such hardware and you don't have to manually add tolerations to your pods. 
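As a hedged sketch of the special-hardware flow described above (the extended resource name `example.com/gpu` and the image are illustrative assumptions): the Pod only requests the extended resource, and the ExtendedResourceToleration admission controller adds a toleration similar to the one shown, so it is normally not written by hand.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
  - name: app
    image: registry.example.com/cuda-app:1.0   # hypothetical image
    resources:
      limits:
        example.com/gpu: 1                     # hypothetical extended resource
  # Normally injected automatically by the ExtendedResourceToleration admission
  # controller; shown here only to illustrate the resulting toleration.
  tolerations:
  - key: "example.com/gpu"
    operator: "Exists"
    effect: "NoSchedule"
```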
--> -* **配备了特殊硬件的节点**:在部分节点配备了特殊硬件(比如 GPU)的集群中,我们希望不需要这类硬件的 pod 不要被分配到这些特殊节点,以便为后继需要这类硬件的 pod 保留资源。要达到这个目的,可以先给配备了特殊硬件的节点添加 taint(例如 `kubectl taint nodes nodename special=true:NoSchedule` or `kubectl taint nodes nodename special=true:PreferNoSchedule`),然后给使用了这类特殊硬件的 pod 添加一个相匹配的 toleration。和专用节点的例子类似,添加这个 toleration 的最简单的方法是使用自定义 [admission controller](/docs/reference/access-authn-authz/admission-controllers/)。比如,我们推荐使用 [Extended Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) 来表示特殊硬件,给配置了特殊硬件的节点添加 taint 时包含 extended resource 名称,然后运行一个 [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) admission controller。此时,因为节点已经被 taint 了,没有对应 toleration 的 Pod 会被调度到这些节点。但当你创建一个使用了 extended resource 的 Pod 时,`ExtendedResourceToleration` admission controller 会自动给 Pod 加上正确的 toleration ,这样 Pod 就会被自动调度到这些配置了特殊硬件件的节点上。这样就能够确保这些配置了特殊硬件的节点专门用于运行 需要使用这些硬件的 Pod,并且您无需手动给这些 Pod 添加 toleration。 +* **配备了特殊硬件的节点**:在部分节点配备了特殊硬件(比如 GPU)的集群中, + 我们希望不需要这类硬件的 Pod 不要被分配到这些特殊节点,以便为后继需要这类硬件的 Pod 保留资源。 + 要达到这个目的,可以先给配备了特殊硬件的节点添加 taint + (例如 `kubectl taint nodes nodename special=true:NoSchedule` 或 + `kubectl taint nodes nodename special=true:PreferNoSchedule`), + 然后给使用了这类特殊硬件的 Pod 添加一个相匹配的 toleration。 + 和专用节点的例子类似,添加这个容忍度的最简单的方法是使用自定义 + [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)。 + 比如,我们推荐使用[扩展资源](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources) + 来表示特殊硬件,给配置了特殊硬件的节点添加污点时包含扩展资源名称, + 然后运行一个 [ExtendedResourceToleration](/zh/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) + 准入控制器。此时,因为节点已经被设置污点了,没有对应容忍度的 Pod + 会被调度到这些节点。但当你创建一个使用了扩展资源的 Pod 时, + `ExtendedResourceToleration` 准入控制器会自动给 Pod 加上正确的容忍度, + 这样 Pod 就会被自动调度到这些配置了特殊硬件件的节点上。 + 这样就能够确保这些配置了特殊硬件的节点专门用于运行需要使用这些硬件的 Pod, + 并且您无需手动给这些 Pod 添加容忍度。 <!-- - * **Taint based Evictions**: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section. --> -* **基于 taint 的驱逐**: 这是在每个 pod 中配置的在节点出现问题时的驱逐行为,接下来的章节会描述这个特性 +* **基于污点的驱逐**: 这是在每个 Pod 中配置的在节点出现问题时的驱逐行为,接下来的章节会描述这个特性。 <!-- - ## Taint based Evictions - --> - -## 基于 taint 的驱逐 +## 基于污点的驱逐 {#taint-based-evictions} {{< feature-state for_k8s_version="v1.18" state="stable" >}} <!-- -Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already +The `NoExecute` taint effect, which affects pods that are already running on the node as follows * pods that do not tolerate the taint are evicted immediately @@ -303,16 +345,17 @@ running on the node as follows * pods that tolerate the taint with a specified `tolerationSeconds` remain bound for the specified amount of time --> - 前文我们提到过 taint 的 effect 值 `NoExecute` ,它会影响已经在节点上运行的 pod +前文提到过污点的 effect 值 `NoExecute`会影响已经在节点上运行的 Pod - * 如果 pod 不能忍受 effect 值为 `NoExecute` 的 taint,那么 pod 将马上被驱逐 - * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,但是在 toleration 定义中没有指定 `tolerationSeconds`,则 pod 还会一直在这个节点上运行。 - * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,而且指定了 `tolerationSeconds`,则 pod 还能在这个节点上继续运行这个指定的时间长度。 + * 如果 Pod 不能忍受 effect 值为 `NoExecute` 的污点,那么 Pod 将马上被驱逐 + * 如果 Pod 能够忍受 effect 值为 `NoExecute` 的污点,但是在容忍度定义中没有指定 + `tolerationSeconds`,则 Pod 还会一直在这个节点上运行。 + * 如果 Pod 能够忍受 effect 值为 `NoExecute` 的污点,而且指定了 `tolerationSeconds`, + 则 Pod 还能在这个节点上继续运行这个指定的时间长度。 <!-- -In addition, Kubernetes 1.6 introduced alpha support for representing node -problems. 
In other words, the node controller automatically taints a node when -certain condition is true. The following taints are built in: +The node controller automatically taints a node when certain conditions are +true. The following taints are built in: * `node.kubernetes.io/not-ready`: Node is not ready. This corresponds to the NodeCondition `Ready` being "`False`". @@ -328,40 +371,46 @@ certain condition is true. The following taints are built in: as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. --> - 此外,Kubernetes 1.6 已经支持(alpha阶段)节点问题的表示。换句话说,当某种条件为真时,node controller会自动给节点添加一个 taint。当前内置的 taint 包括: +当某种条件为真时,节点控制器会自动给节点添加一个污点。当前内置的污点包括: * `node.kubernetes.io/not-ready`:节点未准备好。这相当于节点状态 `Ready` 的值为 "`False`"。 - * `node.kubernetes.io/unreachable`:node controller 访问不到节点. 这相当于节点状态 `Ready` 的值为 "`Unknown`"。 + * `node.kubernetes.io/unreachable`:节点控制器访问不到节点. 这相当于节点状态 `Ready` 的值为 "`Unknown`"。 * `node.kubernetes.io/out-of-disk`:节点磁盘耗尽。 * `node.kubernetes.io/memory-pressure`:节点存在内存压力。 * `node.kubernetes.io/disk-pressure`:节点存在磁盘压力。 * `node.kubernetes.io/network-unavailable`:节点网络不可用。 * `node.kubernetes.io/unschedulable`: 节点不可调度。 - * `node.cloudprovider.kubernetes.io/uninitialized`:如果 kubelet 启动时指定了一个 "外部" cloud provider,它将给当前节点添加一个 taint 将其标志为不可用。在 cloud-controller-manager 的一个 controller 初始化这个节点后,kubelet 将删除这个 taint。 + * `node.cloudprovider.kubernetes.io/uninitialized`:如果 kubelet 启动时指定了一个 "外部" 云平台驱动, + 它将给当前节点添加一个污点将其标志为不可用。在 cloud-controller-manager + 的一个控制器初始化这个节点后,kubelet 将删除这个污点。 <!-- In case a node is to be evicted, the node controller or the kubelet adds relevant taints with `NoExecute` effect. If the fault condition returns to normal the kubelet or node controller can remove the relevant taint(s). --> -在节点被驱逐时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效应的相关污点。如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。 +在节点被驱逐时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效应的相关污点。 +如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。 - -{{< note >}} <!-- To maintain the existing [rate limiting](/docs/concepts/architecture/nodes/) behavior of pod evictions due to node problems, the system actually adds the taints in a rate-limited way. This prevents massive pod evictions in scenarios such as the master becoming partitioned from the nodes. --> -为了保证由于节点问题引起的 pod 驱逐[rate limiting](/docs/concepts/architecture/nodes/)行为正常,系统实际上会以 rate-limited 的方式添加 taint。在像 master 和 node 通讯中断等场景下,这避免了 pod 被大量驱逐。 +{{< note >}} +为了保证由于节点问题引起的 Pod 驱逐 +[速率限制](/zh/docs/concepts/architecture/nodes/)行为正常, +系统实际上会以限定速率的方式添加污点。在像主控节点与工作节点间通信中断等场景下, +这样做可以避免 Pod 被大量驱逐。 {{< /note >}} <!-- This feature, in combination with `tolerationSeconds`, allows a pod to specify how long it should stay bound to a node that has one or both of these problems. --> -使用这个功能特性,结合 `tolerationSeconds`,pod 就可以指定当节点出现一个或全部上述问题时还将在这个节点上运行多长的时间。 +使用这个功能特性,结合 `tolerationSeconds`,Pod 就可以指定当节点出现一个 +或全部上述问题时还将在这个节点上运行多长的时间。 <!-- For example, an application with a lot of local state might want to stay @@ -369,7 +418,8 @@ bound to node for a long time in the event of network partition, in the hope that the partition will recover and thus the pod eviction can be avoided. 
The toleration the pod would use in that case would look like --> -比如,一个使用了很多本地状态的应用程序在网络断开时,仍然希望停留在当前节点上运行一段较长的时间,愿意等待网络恢复以避免被驱逐。在这种情况下,pod 的 toleration 可能是下面这样的: +比如,一个使用了很多本地状态的应用程序在网络断开时,仍然希望停留在当前节点上运行一段较长的时间, +愿意等待网络恢复以避免被驱逐。在这种情况下,Pod 的容忍度可能是下面这样的: ```yaml tolerations: @@ -389,17 +439,23 @@ Likewise it adds a toleration for unless the pod configuration provided by the user already has a toleration for `node.kubernetes.io/unreachable`. --> -注意,Kubernetes 会自动给 pod 添加一个 key 为 `node.kubernetes.io/not-ready` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经已存在了 key 为 `node.kubernetes.io/not-ready` 的 toleration。同样,Kubernetes 会给 pod 添加一个 key 为 `node.kubernetes.io/unreachable` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经已存在了 key 为 `node.kubernetes.io/unreachable` 的 toleration。 + +{{< note >}} +Kubernetes 会自动给 Pod 添加一个 key 为 `node.kubernetes.io/not-ready` 的容忍度 +并配置 `tolerationSeconds=300`,除非用户提供的 Pod 配置中已经已存在了 key 为 +`node.kubernetes.io/not-ready` 的容忍度。 + +同样,Kubernetes 会给 Pod 添加一个 key 为 `node.kubernetes.io/unreachable` 的容忍度 +并配置 `tolerationSeconds=300`,除非用户提供的 Pod 配置中已经已存在了 key 为 +`node.kubernetes.io/unreachable` 的容忍度。 +{{< /note >}} <!-- -These automatically-added tolerations ensure that -the default pod behavior of remaining bound for 5 minutes after one of these -problems is detected is maintained. -The two default tolerations are added by the [DefaultTolerationSeconds -admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds). +These automatically-added tolerations mean that Pods remain bound to +Nodes for 5 minutes after one of these problems is detected. --> -这种自动添加 toleration 机制保证了在其中一种问题被检测到时 pod 默认能够继续停留在当前节点运行 5 分钟。这两个默认 toleration 是由 [DefaultTolerationSeconds -admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds)添加的。 +这种自动添加的容忍度意味着在其中一种问题被检测到时 Pod +默认能够继续停留在当前节点运行 5 分钟。 <!-- [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pods are created with @@ -410,24 +466,25 @@ admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/default This ensures that DaemonSet pods are never evicted due to these problems. --> -[DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 中的 pod 被创建时,针对以下 taint 自动添加的 `NoExecute` 的 toleration 将不会指定 `tolerationSeconds`: +[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/) 中的 Pod 被创建时, +针对以下污点自动添加的 `NoExecute` 的容忍度将不会指定 `tolerationSeconds`: * `node.kubernetes.io/unreachable` * `node.kubernetes.io/not-ready` -这保证了出现上述问题时 DaemonSet 中的 pod 永远不会被驱逐。 +这保证了出现上述问题时 DaemonSet 中的 Pod 永远不会被驱逐。 <!-- ## Taint Nodes by Condition --> - -## 基于节点状态添加 taint +## 基于节点状态添加污点 <!-- -The node lifecycle controller automatically creates taints corresponding to Node conditions with `NoSchedule` effect. +The node lifecycle controller automatically creates taints corresponding to +Node conditions with `NoSchedule` effect. Similarly the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations. -Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the +The DaemonSet controller automatically adds the following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from breaking. @@ -438,19 +495,31 @@ breaking. 
* `node.kubernetes.io/network-unavailable` (*host network only*) --> Node 生命周期控制器会自动创建与 Node 条件相对应的带有 `NoSchedule` 效应的污点。 -同样,调度器不检查节点条件,而是检查节点污点。这确保了节点条件不会影响调度到节点上的内容。用户可以通过添加适当的 Pod 容忍度来选择忽略某些 Node 的问题(表示为 Node 的调度条件)。 +同样,调度器不检查节点条件,而是检查节点污点。这确保了节点条件不会影响调度到节点上的内容。 +用户可以通过添加适当的 Pod 容忍度来选择忽略某些 Node 的问题(表示为 Node 的调度条件)。 -自 Kubernetes 1.8 起, DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` toleration 以防 DaemonSet 崩溃: +DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` 容忍度以防 DaemonSet 崩溃: * `node.kubernetes.io/memory-pressure` * `node.kubernetes.io/disk-pressure` - * `node.kubernetes.io/out-of-disk` (*只适合 critical pod*) + * `node.kubernetes.io/out-of-disk` (*只适合关键 Pod*) * `node.kubernetes.io/unschedulable` (1.10 或更高版本) - * `node.kubernetes.io/network-unavailable` (*只适合 host network*) + * `node.kubernetes.io/network-unavailable` (*只适合主机网络配置*) <!-- Adding these tolerations ensures backward compatibility. You can also add arbitrary tolerations to DaemonSets. --> -添加上述 toleration 确保了向后兼容,您也可以选择自由的向 DaemonSet 添加 toleration。 +添加上述容忍度确保了向后兼容,您也可以选择自由向 DaemonSet 添加容忍度。 + +## {{% heading "whatsnext" %}} + +<!-- +* Read about [out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) and how you can configure it +* Read about [pod priority](/docs/concepts/configuration/pod-priority-preemption/) +--> +* 阅读[资源耗尽的处理](/zh/docs/tasks/administer-cluster/out-of-resource/),以及如何配置其行为 +* 阅读 [Pod 优先级](/zh/docs/concepts/configuration/pod-priority-preemption/) + + diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md index a4985549c1..25bebe4d01 100644 --- a/content/zh/docs/concepts/services-networking/connect-applications-service.md +++ b/content/zh/docs/concepts/services-networking/connect-applications-service.md @@ -1,10 +1,9 @@ --- -title: 应用连接到 Service +title: 使用 Service 连接到应用 content_type: concept weight: 30 --- - <!-- overview --> <!-- @@ -24,8 +23,6 @@ This guide uses a simple nginx server to demonstrate proof of concept. The same 既然有了一个持续运行、可复制的应用,我们就能够将它暴露到网络上。 在讨论 Kubernetes 网络连接的方式之前,非常值得与 Docker 中 “正常” 方式的网络进行对比。 - - 默认情况下,Docker 使用私有主机网络连接,只能与同在一台机器上的容器进行通信。 为了实现容器的跨节点通信,必须在机器自己的 IP 上为这些容器分配端口,为容器进行端口转发或者代理。 @@ -35,10 +32,8 @@ Kubernetes 假设 Pod 可与其它 Pod 通信,不管它们在哪个主机上 这表明了在 Pod 内的容器都能够连接到本地的每个端口,集群中的所有 Pod 不需要通过 NAT 转换就能够互相看到。 文档的剩余部分将详述如何在一个网络模型之上运行可靠的服务。 -该指南使用一个简单的 Nginx server 来演示并证明谈到的概念。同样的原则也体现在一个更加完整的 [Jenkins CI 应用](http://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes.html) 中。 - - - +该指南使用一个简单的 Nginx server 来演示并证明谈到的概念。同样的原则也体现在一个更加完整的 +[Jenkins CI 应用](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes.html) 中。 <!-- body --> @@ -48,7 +43,6 @@ Kubernetes 假设 Pod 可与其它 Pod 通信,不管它们在哪个主机上 We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx Pod, and note that it has a container port specification: --> - ## 在集群中暴露 Pod 我们在之前的示例中已经做过,然而再让我重试一次,这次聚焦在网络连接的视角。 @@ -75,7 +69,6 @@ my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 <!-- Check your pods' IPs: --> - 检查 Pod 的 IP 地址: ```shell @@ -89,14 +82,12 @@ You should be able to ssh into any node in your cluster and curl both IPs. Note You can read more about [how we achieve this](/docs/concepts/cluster-administration/networking/#how-to-achieve-this) if you're curious. 
--> - 应该能够通过 ssh 登录到集群中的任何一个节点上,使用 curl 也能调通所有 IP 地址。 需要注意的是,容器不会使用该节点上的 80 端口,也不会使用任何特定的 NAT 规则去路由流量到 Pod 上。 这意味着可以在同一个节点上运行多个 Pod,使用相同的容器端口,并且可以从集群中任何其他的 Pod 或节点上使用 IP 的方式访问到它们。 像 Docker 一样,端口能够被发布到主机节点的接口上,但是出于网络模型的原因应该从根本上减少这种用法。 -如果对此好奇,可以获取更多关于 [如何实现网络模型](/docs/concepts/cluster-administration/networking/#how-to-achieve-this) 的内容。 - +如果对此好奇,可以获取更多关于 [如何实现网络模型](/zh/docs/concepts/cluster-administration/networking/#how-to-achieve-this) 的内容。 <!-- ## Creating a Service @@ -107,7 +98,6 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni You can create a Service for your 2 nginx replicas with `kubectl expose`: --> - ## 创建 Service 我们有 Pod 在一个扁平的、集群范围的地址空间中运行 Nginx 服务,可以直接连接到这些 Pod,但如果某个节点死掉了会发生什么呢? @@ -145,9 +135,11 @@ View [Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/ API object to see the list of supported fields in service definition. Check your Service: --> - -上述规约将创建一个 Service,对应具有标签 `run: my-nginx` 的 Pod,目标 TCP 端口 80,并且在一个抽象的 Service 端口(`targetPort`:容器接收流量的端口;`port`:抽象的 Service 端口,可以使任何其它 Pod 访问该 Service 的端口)上暴露。 -查看 [Service API 对象](/docs/api-reference/{{< param "version" >}}/#service-v1-core) 了解 Service 定义支持的字段列表。 +上述规约将创建一个 Service,对应具有标签 `run: my-nginx` 的 Pod,目标 TCP 端口 80, +并且在一个抽象的 Service 端口(`targetPort`:容器接收流量的端口;`port`:抽象的 Service +端口,可以使任何其它 Pod 访问该 Service 的端口)上暴露。 +查看 [Service API 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) +了解 Service 定义支持的字段列表。 查看你的 Service 资源: ```shell @@ -167,7 +159,6 @@ matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the Pods created in the first step: --> - 正如前面所提到的,一个 Service 由一组 backend Pod 组成。这些 Pod 通过 `endpoints` 暴露出来。 Service Selector 将持续评估,结果被 POST 到一个名称为 `my-nginx` 的 Endpoint 对象上。 当 Pod 终止后,它会自动从 Endpoint 中移除,新的能够匹配上 Service Selector 的 Pod 将自动地被添加到 Endpoint 中。 @@ -206,7 +197,8 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip 现在,能够从集群中任意节点上使用 curl 命令请求 Nginx Service `<CLUSTER-IP>:<PORT>` 。 注意 Service IP 完全是虚拟的,它从来没有走过网络,如果对它如何工作的原理感到好奇, -可以阅读更多关于 [服务代理](/docs/user-guide/services/#virtual-ips-and-service-proxies) 的内容。 +可以进一步阅读[服务代理](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) +的内容。 <!-- ## Accessing the Service @@ -219,22 +211,19 @@ and DNS. The former works out of the box while the latter requires the ## 访问 Service Kubernetes支持两种查找服务的主要模式: 环境变量和DNS。 前者开箱即用,而后者则需要[CoreDNS集群插件] -[CoreDNS 集群插件](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). - -{{< note >}} +[CoreDNS 集群插件](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). <!-- If the service environment variables are not desired (because possible clashing with expected program ones, too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks` flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). 
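As a minimal sketch of the `enableServiceLinks` setting described above (the Pod name and image are placeholders), the flag sits directly in the Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-service-env          # hypothetical name
spec:
  enableServiceLinks: false     # do not inject Service environment variables into this Pod
  containers:
  - name: app
    image: nginx                # placeholder image
```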
--> - +{{< note >}} 如果不需要服务环境变量(因为可能与预期的程序冲突,可能要处理的变量太多,或者仅使用DNS等),则可以通过在 -[pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)上将 `enableServiceLinks` 标志设置为 `false` 来禁用此模式。 - +[pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) +上将 `enableServiceLinks` 标志设置为 `false` 来禁用此模式。 {{< /note >}} - <!-- ### Environment Variables @@ -324,9 +313,10 @@ The rest of this section will assume you have a Service with a long lived IP cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's run another curl application to test this: --> - -如果没有在运行,可以 [启用它](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md#how-do-i-configure-it)。 -本段剩余的内容,将假设已经有一个 Service,它具有一个长久存在的 IP(my-nginx),一个为该 IP 指派名称的 DNS 服务器(kube-dns 集群插件),所以可以通过标准做法,使在集群中的任何 Pod 都能与该 Service 通信(例如:gethostbyname)。 +如果没有在运行,可以[启用它](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md#how-do-i-configure-it)。 +本段剩余的内容,将假设已经有一个 Service,它具有一个长久存在的 IP(my-nginx), +一个为该 IP 指派名称的 DNS 服务器(kube-dns 集群插件),所以可以通过标准做法, +使在集群中的任何 Pod 都能与该 Service 通信(例如:gethostbyname)。 让我们运行另一个 curl 应用来进行测试: ```shell @@ -370,9 +360,10 @@ You can acquire all these from the [nginx https example](https://github.com/kube * https 自签名证书(除非已经有了一个识别身份的证书) * 使用证书配置的 Nginx server -* 使证书可以访问 Pod 的[秘钥](/docs/user-guide/secrets) +* 使证书可以访问 Pod 的 [Secret](/zh/docs/concepts/configuration/secret/) -可以从 [Nginx https 示例](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/examples/https-nginx/) 获取所有上述内容,简明示例如下: +可以从 [Nginx https 示例](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/examples/https-nginx/) +获取所有上述内容,简明示例如下: ```shell make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt @@ -456,7 +447,8 @@ Noteworthy points about the nginx-secure-app manifest: 关于 nginx-secure-app manifest 值得注意的点如下: - 它在相同的文件中包含了 Deployment 和 Service 的规格 -- [Nginx server](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/examples/https-nginx/default.conf) 处理 80 端口上的 http 流量,以及 443 端口上的 https 流量,Nginx Service 暴露了这两个端口。 +- [Nginx 服务器](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/examples/https-nginx/default.conf) + 处理 80 端口上的 http 流量,以及 443 端口上的 https 流量,Nginx Service 暴露了这两个端口。 - 每个容器访问挂载在 /etc/nginx/ssl 卷上的秘钥。这需要在 Nginx server 启动之前安装好。 ```shell @@ -483,7 +475,8 @@ so we have to tell curl to ignore the CName mismatch. By creating a Service we l Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service): --> -注意最后一步我们是如何提供 `-k` 参数执行 curl命令的,这是因为在证书生成时,我们不知道任何关于运行 Nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 +注意最后一步我们是如何提供 `-k` 参数执行 curl命令的,这是因为在证书生成时, +我们不知道任何关于运行 Nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 通过创建 Service,我们连接了在证书中的 CName 与在 Service 查询时被 Pod使用的实际 DNS 名字。 让我们从一个 Pod 来测试(为了简化使用同一个秘钥,Pod 仅需要使用 nginx.crt 去访问 Service): @@ -513,15 +506,18 @@ LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx HTTPS replica is ready to serve traffic on the internet if your node has a public IP. 
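For reference, a `NodePort` Service for the nginx example in this guide could be sketched as below, assuming the `run: my-nginx` selector and the 80/443 ports used throughout the surrounding text; the node ports themselves are assigned by Kubernetes unless you set `nodePort` explicitly.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort                # expose the Service on a port of every node
  selector:
    run: my-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```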
--> - ## 暴露 Service 对我们应用的某些部分,可能希望将 Service 暴露在一个外部 IP 地址上。 Kubernetes 支持两种实现方式:NodePort 和 LoadBalancer。 -在上一段创建的 Service 使用了 `NodePort`,因此 Nginx https 副本已经就绪,如果使用一个公网 IP,能够处理 Internet 上的流量。 +在上一段创建的 Service 使用了 `NodePort`,因此 Nginx https 副本已经就绪, +如果使用一个公网 IP,能够处理 Internet 上的流量。 ```shell kubectl get svc my-nginx -o yaml | grep nodePort -C 5 +``` + +``` uid: 07191fb3-f61a-11e5-8ae5-42010af00002 spec: clusterIP: 10.0.162.149 @@ -539,8 +535,12 @@ spec: selector: run: my-nginx ``` + ```shell kubectl get nodes -o yaml | grep ExternalIP -C 1 +``` + +``` - address: 104.197.41.11 type: ExternalIP allocatable: @@ -549,8 +549,14 @@ kubectl get nodes -o yaml | grep ExternalIP -C 1 type: ExternalIP allocatable: ... +``` -$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k +```shell +curl https://<EXTERNAL-IP>:<NODE-PORT> -k +``` + +输出类似于: +``` ... <h1>Welcome to nginx!</h1> ``` @@ -593,13 +599,14 @@ see it. You'll see something like this: ```shell kubectl describe service my-nginx +``` + +``` ... LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com ... ``` - - ## {{% heading "whatsnext" %}} <!-- @@ -610,3 +617,4 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el * 进一步了解如何[使用 Service 访问集群中的应用](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/) * 进一步了解如何[使用 Service 将前端连接到后端](/zh/docs/tasks/access-application-cluster/connecting-frontend-backend/) * 进一步了解如何[创建外部负载均衡器](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/) + diff --git a/content/zh/docs/concepts/services-networking/dns-pod-service.md b/content/zh/docs/concepts/services-networking/dns-pod-service.md index bbdda7e18a..48453424da 100644 --- a/content/zh/docs/concepts/services-networking/dns-pod-service.md +++ b/content/zh/docs/concepts/services-networking/dns-pod-service.md @@ -8,8 +8,7 @@ weight: 20 <!-- This page provides an overview of DNS support by Kubernetes. --> -该页面概述了Kubernetes对DNS的支持。 - +本页面提供 Kubernetes 对 DNS 的支持的概述。 <!-- body --> @@ -47,28 +46,25 @@ For more up-to-date specification, see ## 怎样获取 DNS 名字? 在集群中定义的每个 Service(包括 DNS 服务器自身)都会被指派一个 DNS 名称。 -默认,一个客户端 Pod 的 DNS 搜索列表将包含该 Pod 自己的 Namespace 和集群默认域。 -通过如下示例可以很好地说明: +默认,一个客户端 Pod 的 DNS 搜索列表将包含该 Pod 自己的名字空间和集群默认域。 +如下示例是一个很好的说明: -假设在 Kubernetes 集群的 Namespace `bar` 中,定义了一个Service `foo`。 -运行在Namespace `bar` 中的一个 Pod,可以简单地通过 DNS 查询 `foo` 来找到该 Service。 -运行在 Namespace `quux` 中的一个 Pod 可以通过 DNS 查询 `foo.bar` 找到该 Service。 +假设在 Kubernetes 集群的名字空间 `bar` 中,定义了一个服务 `foo`。 +运行在名字空间 `bar` 中的 Pod 可以简单地通过 DNS 查询 `foo` 来找到该服务。 +运行在名字空间 `quux` 中的 Pod 可以通过 DNS 查询 `foo.bar` 找到该服务。 -以下各节详细介绍了受支持的记录类型和支持的布局。 其中代码部分的布局,名称或查询命令均被视为实现细节,如有更改,恕不另行通知。 +以下各节详细介绍了受支持的记录类型和支持的布局。 +其它布局、名称或者查询即使碰巧可以工作,也应视为实现细节, +将来很可能被更改而且不会因此出现警告。 有关最新规范请查看 -[Kubernetes 基于 DNS 的服务发现](https://github.com/kubernetes/dns/blob/master/docs/specification.md). - -## 支持的 DNS 模式 - -下面各段详细说明支持的记录类型和布局。 -如果任何其它的布局、名称或查询,碰巧也能够使用,这就需要研究下它们的实现细节,以免后续修改它们又不能使用了。 +[Kubernetes 基于 DNS 的服务发现](https://github.com/kubernetes/dns/blob/master/docs/specification.md)。 <!-- ## Services -### A records +### A/AAAA records -"Normal" (not headless) Services are assigned a DNS A record for a name of the +"Normal" (not headless) Services are assigned a DNS A or AAAA record for a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP of the Service. @@ -79,16 +75,19 @@ Clients are expected to consume the set or else use standard round-robin selection from the set. 
--> -### Service +### 服务 {#services} -#### A 记录 +#### A/AAAA 记录 -“正常” Service(除了 Headless Service)会以 `my-svc.my-namespace.svc.cluster-domain.example` 这种名字的形式被指派一个 DNS A 记录。 -这会解析成该 Service 的 Cluster IP。 +“普通” 服务(除了无头服务)会以 `my-svc.my-namespace.svc.cluster-domain.example` +这种名字的形式被分配一个 DNS A 或 AAAA 记录,取决于服务的 IP 协议族。 +该名称会解析成对应服务的集群 IP。 -“Headless” Service(没有Cluster IP)也会以 `my-svc.my-namespace.svc.cluster-domain.example` 这种名字的形式被指派一个 DNS A 记录。 -不像正常 Service,它会解析成该 Service 选择的一组 Pod 的 IP。 -希望客户端能够使用这一组 IP,否则就使用标准的 round-robin 策略从这一组 IP 中进行选择。 +“无头(Headless)” 服务(没有集群 IP)也会以 +`my-svc.my-namespace.svc.cluster-domain.example` 这种名字的形式被指派一个 DNS A 或 AAAA 记录, +具体取决于服务的 IP 协议族。 +与普通服务不同,这一记录会被解析成对应服务所选择的 Pod 集合的 IP。 +客户端要能够使用这组 IP,或者使用标准的轮转策略从这组 IP 中进行选择。 <!-- ### SRV records @@ -103,19 +102,33 @@ For a headless service, this resolves to multiple answers, one for each pod that is backing the service, and contains the port number and the domain name of the pod of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`. --> +#### SRV 记录 {#srv-records} -#### SRV 记录 - -命名端口需要创建 SRV 记录,这些端口是正常 Service或 [Headless -Services](/docs/concepts/services-networking/service/#headless-services) 的一部分。 +Kubernetes 会为命名端口创建 SRV 记录,这些端口是普通服务或 +[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)的一部分。 对每个命名端口,SRV 记录具有 `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example` 这种形式。 -对普通 Service,这会被解析成端口号和 CNAME:`my-svc.my-namespace.svc.cluster-domain.example`。 -对 Headless Service,这会被解析成多个结果,Service 对应的每个 backend Pod 各一个, -包含 `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` 这种形式 Pod 的端口号和 CNAME。 - +对普通服务,该记录会被解析成端口号和域名:`my-svc.my-namespace.svc.cluster-domain.example`。 +对无头服务,该记录会被解析成多个结果,服务对应的每个后端 Pod 各一个; +其中包含 Pod 端口号和形为 `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example` +的域名。 ## Pods +<!-- +### A/AAAA records + +Any pods created by a Deployment or DaemonSet have the following +DNS resolution available: + +`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.` +--> +### A/AAAA 记录 + +经由 Deployment 或者 DaemonSet 所创建的所有 Pods 都会有如下 DNS +解析项与之对应: + +`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.` + <!-- ### Pod's hostname and subdomain fields @@ -134,15 +147,22 @@ domain name (FQDN) "`foo.bar.my-namespace.svc.cluster-domain.example`". Example: --> -### Pod的 hostname 和 subdomain 字段 +### Pod 的 hostname 和 subdomain 字段 -当前,创建 Pod 后,它的主机名是该 Pod 的 `metadata.name` 值。 +当前,创建 Pod 时其主机名取自 Pod 的 `metadata.name` 值。 -PodSpec 有一个可选的 `hostname` 字段,可以用来指定 Pod 的主机名。当这个字段被设置时,它将优先于 Pod 的名字成为该 Pod 的主机名。举个例子,给定一个 `hostname` 设置为 "`my-host`" 的 Pod,该 Pod 的主机名将被设置为 "`my-host`"。 +Pod 规约中包含一个可选的 `hostname` 字段,可以用来指定 Pod 的主机名。 +当这个字段被设置时,它将优先于 Pod 的名字成为该 Pod 的主机名。 +举个例子,给定一个 `hostname` 设置为 "`my-host`" 的 Pod, +该 Pod 的主机名将被设置为 "`my-host`"。 -PodSpec 还有一个可选的 `subdomain` 字段,可以用来指定 Pod 的子域名。举个例子,一个 Pod 的 `hostname` 设置为 “`foo`”,`subdomain` 设置为 “`bar`”,在 namespace “`my-namespace`” 中对应的完全限定域名(FQDN)为 “`foo.bar.my-namespace.svc.cluster-domain.example`”。 +Pod 规约还有一个可选的 `subdomain` 字段,可以用来指定 Pod 的子域名。 +举个例子,某 Pod 的 `hostname` 设置为 “`foo`”,`subdomain` 设置为 “`bar`”, +在名字空间 “`my-namespace`” 中对应的完全限定域名(FQDN)为 +“`foo.bar.my-namespace.svc.cluster-domain.example`”。 + +示例: -实例: ```yaml apiVersion: v1 kind: Service @@ -153,7 +173,7 @@ spec: name: busybox clusterIP: None ports: - - name: foo # Actually, no port is needed. 
+ - name: foo # 实际上不需要指定端口号 port: 1234 targetPort: 1234 --- @@ -192,30 +212,29 @@ spec: <!-- If there exists a headless service in the same namespace as the pod and with -the same name as the subdomain, the cluster's KubeDNS Server also returns an A +the same name as the subdomain, the cluster's DNS Server also returns an A or AAAA record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to "`default-subdomain`", and a headless Service named "`default-subdomain`" in the same namespace, the pod will see its own FQDN as "`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`". DNS serves an -A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and -"`busybox2`" can have their distinct A records. +A or AAAA record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and +"`busybox2`" can have their distinct A or AAAA records. --> - -如果 Headless Service 与 Pod 在同一个 Namespace 中,它们具有相同的子域名,集群的 KubeDNS 服务器也会为该 Pod 的完整合法主机名返回 A 记录。 -例如,在同一个 Namespace 中,给定一个主机名为 “busybox-1” 的 Pod,子域名设置为 “default-subdomain”,名称为 “default-subdomain” 的 Headless Service ,Pod 将看到自己的 FQDN 为 “busybox-1.default-subdomain.my-namespace.svc.cluster.local”。 -DNS 会为那个名字提供一个 A 记录,指向该 Pod 的 IP。 -“busybox1” 和 “busybox2” 这两个 Pod 分别具有它们自己的 A 记录。 - +如果某无头服务与某 Pod 在同一个名字空间中,且它们具有相同的子域名, +集群的 DNS 服务器也会为该 Pod 的全限定主机名返回 A 记录或 AAAA 记录。 +例如,在同一个名字空间中,给定一个主机名为 “busybox-1”、 +子域名设置为 “default-subdomain” 的 Pod,和一个名称为 “`default-subdomain`” +的无头服务,Pod 将看到自己的 FQDN 为 +"`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`"。 +DNS 会为此名字提供一个 A 记录或 AAAA 记录,指向该 Pod 的 IP。 +“`busybox1`” 和 “`busybox2`” 这两个 Pod 分别具有它们自己的 A 或 AAAA 记录。 <!-- The Endpoints object can specify the `hostname` for any endpoint addresses, along with its IP. --> - -端点对象可以为任何端点地址及其 IP 指定 `hostname`。 - -{{< note >}} +Endpoints 对象可以为任何端点地址及其 IP 指定 `hostname`。 <!-- Because A records are not created for Pod names, `hostname` is required for the Pod's A @@ -224,11 +243,15 @@ A record for the headless service (`default-subdomain.my-namespace.svc.cluster-d pointing to the Pod's IP address. Also, Pod needs to become ready in order to have a record unless `publishNotReadyAddresses=True` is set on the Service. --> +{{< note >}} +因为没有为 Pod 名称创建 A 记录或 AAAA 记录,所以要创建 Pod 的 A 记录 +或 AAAA 记录需要 `hostname`。 -因为没有为 Pod 名称创建A记录,所以要创建 Pod 的 A 记录需要 `hostname` 。 - -没有 `hostname` 但带有 `subdomain` 的 Pod 只会为指向Pod的IP地址的 headless 服务创建 A 记录(`default-subdomain.my-namespace.svc.cluster-domain.example`)。 -另外,除非在服务上设置了 `publishNotReadyAddresses=True`,否则 Pod 需要准备好 A 记录。 +没有设置 `hostname` 但设置了 `subdomain` 的 Pod 只会为 +无头服务创建 A 或 AAAA 记录(`default-subdomain.my-namespace.svc.cluster-domain.example`) +指向 Pod 的 IP 地址。 +另外,除非在服务上设置了 `publishNotReadyAddresses=True`,否则只有 Pod 进入就绪状态 +才会有与之对应的记录。 {{< /note >}} <!-- @@ -256,30 +279,32 @@ following pod-specific DNS policies. These policies are specified in the See [Pod's DNS config](#pod-s-dns-config) subsection below. 
--> -- "`Default`": Pod从运行所在的节点继承名称解析配置。 - 参考 [相关讨论](/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node) 获取更多信息。 -- "`ClusterFirst`": 与配置的群集域后缀不匹配的任何DNS查询(例如 “www.kubernetes.io” )都将转发到从节点继承的上游名称服务器。 群集管理员可能配置了额外的存根域和上游DNS服务器。 - See [相关讨论](/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods) 获取如何 DNS 的查询和处理信息的相关资料。 -- "`ClusterFirstWithHostNet`": 对于与 hostNetwork 一起运行的 Pod,应显式设置其DNS策略 "`ClusterFirstWithHostNet`"。 -- "`None`": 它允许 Pod 忽略 Kubernetes 环境中的 DN S设置。 应该使用 Pod Spec 中的 `dnsConfig` 字段提供所有 DNS 设置。 - -{{< note >}} +- "`Default`": Pod 从运行所在的节点继承名称解析配置。 + 参考[相关讨论](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node) 获取更多信息。 +- "`ClusterFirst`": 与配置的集群域后缀不匹配的任何 DNS 查询(例如 “www.kubernetes.io”) + 都将转发到从节点继承的上游名称服务器。集群管理员可能配置了额外的存根域和上游 DNS 服务器。 + 参阅[相关讨论](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods) + 了解在这些场景中如何处理 DNS 查询的信息。 +- "`ClusterFirstWithHostNet`":对于以 hostNetwork 方式运行的 Pod,应显式设置其 DNS 策略 + "`ClusterFirstWithHostNet`"。 +- "`None`": 此设置允许 Pod 忽略 Kubernetes 环境中的 DNS 设置。Pod 会使用其 `dnsConfig` 字段 + 所提供的 DNS 设置。 + 参见 [Pod 的 DNS 配置](#pod-dns-config)节。 <!-- "Default" is not the default DNS policy. If `dnsPolicy` is not explicitly specified, then “ClusterFirst” is used. --> - -"Default" 不是默认的 DNS 策略。 如果未明确指定 `dnsPolicy`,则使用 “ClusterFirst”。 +{{< note >}} +"`Default`" 不是默认的 DNS 策略。如果未明确指定 `dnsPolicy`,则使用 "`ClusterFirst`"。 {{< /note >}} - <!-- The example below shows a Pod with its DNS policy set to "`ClusterFirstWithHostNet`" because it has `hostNetwork` set to `true`. --> - -下面的示例显示了一个Pod,其DNS策略设置为 "`ClusterFirstWithHostNet`",因为它已将 `hostNetwork` 设置为 `true`。 +下面的示例显示了一个 Pod,其 DNS 策略设置为 "`ClusterFirstWithHostNet`", +因为它已将 `hostNetwork` 设置为 `true`。 ```yaml apiVersion: v1 @@ -311,8 +336,7 @@ to be specified. Below are the properties a user can specify in the `dnsConfig` field: --> - -### Pod 的 DNS 设定 +### Pod 的 DNS 配置 {#pod-dns-config} Pod 的 DNS 配置可让用户对 Pod 的 DNS 设置进行更多控制。 @@ -339,18 +363,23 @@ Pod 的 DNS 配置可让用户对 Pod 的 DNS 设置进行更多控制。 Duplicate entries are removed. 
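As a sketch of how the three `dnsConfig` properties above fit together in a Pod spec: the Pod name `dns-example` and container name `test` follow the surrounding text, the nameserver, search domains, and options mirror the `/etc/resolv.conf` output shown further below, and only the container image is an assumption (the real example lives in `service/networking/custom-dns.yaml`).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx                # placeholder image
  dnsPolicy: "None"             # ignore cluster DNS settings and use dnsConfig only
  dnsConfig:
    nameservers:
    - 1.2.3.4                   # at most 3 entries; at least 1 is required with dnsPolicy "None"
    searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0               # an option may omit its value
```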
--> -- `nameservers`: 将用作于 Pod 的 DNS 服务器的 IP 地址列表。最多可以指定3个 IP 地址。 当 Pod 的 `dnsPolicy` 设置为 "`None`" 时,列表必须至少包含一个IP地址,否则此属性是可选的。列出的服务器将合并到从指定的 DNS 策略生成的基本名称服务器,并删除重复的地址。 -- `searches`: 用于在 Pod 中查找主机名的 DNS 搜索域的列表。此属性是可选的。指定后,提供的列表将合并到根据所选 DNS 策略生成的基本搜索域名中。 - 重复的域名将被删除。 -   Kubernetes最多允许6个搜索域。 -- `options`: 对象的可选列表,其中每个对象可能具有 `name` 属性(必需)和 `value` 属性(可选)。 此属性中的内容将合并到从指定的 DNS 策略生成的选项。 - 重复的条目将被删除。 +- `nameservers`:将用作于 Pod 的 DNS 服务器的 IP 地址列表。 + 最多可以指定 3 个 IP 地址。当 Pod 的 `dnsPolicy` 设置为 "`None`" 时, + 列表必须至少包含一个 IP 地址,否则此属性是可选的。 + 所列出的服务器将合并到从指定的 DNS 策略生成的基本名称服务器,并删除重复的地址。 + +- `searches`:用于在 Pod 中查找主机名的 DNS 搜索域的列表。此属性是可选的。 + 指定此属性时,所提供的列表将合并到根据所选 DNS 策略生成的基本搜索域名中。 + 重复的域名将被删除。Kubernetes 最多允许 6 个搜索域。 + +- `options`:可选的对象列表,其中每个对象可能具有 `name` 属性(必需)和 `value` 属性(可选)。 + 此属性中的内容将合并到从指定的 DNS 策略生成的选项。 + 重复的条目将被删除。 <!-- The following is an example Pod with custom DNS settings: --> - -以下是具有自定义DNS设置的Pod示例: +以下是具有自定义 DNS 设置的 Pod 示例: {{< codenew file="service/networking/custom-dns.yaml" >}} @@ -358,8 +387,7 @@ The following is an example Pod with custom DNS settings: When the Pod above is created, the container `test` gets the following contents in its `/etc/resolv.conf` file: --> - -创建上面的Pod后,容器 `test` 会在其 `/etc/resolv.conf` 文件中获取以下内容: +创建上面的 Pod 后,容器 `test` 会在其 `/etc/resolv.conf` 文件中获取以下内容: ``` nameserver 1.2.3.4 @@ -367,12 +395,10 @@ search ns1.svc.cluster-domain.example my.dns.search.suffix options ndots:2 edns0 ``` - - <!-- For IPv6 setup, search path and name server should be setup like this: --> -对于IPv6设置,搜索路径和名称服务器应按以下方式设置: +对于 IPv6 设置,搜索路径和名称服务器应按以下方式设置: ```shell kubectl exec -it dns-example -- cat /etc/resolv.conf @@ -381,8 +407,9 @@ kubectl exec -it dns-example -- cat /etc/resolv.conf <!-- The output is similar to this: --> -有以下输出: -```shell +输出类似于 + +``` nameserver fd00:79:30::a search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example options ndots:5 @@ -393,29 +420,22 @@ options ndots:5 The availability of Pod DNS Config and DNS Policy "`None`" is shown as below. 
--> +### 功能的可用性 -### 可用功能 +Pod DNS 配置和 DNS 策略 "`None`" 的可用版本对应如下所示。 -Pod DNS 配置和 DNS 策略 "`None`" 的版本对应如下所示。 - -| k8s version | Feature support | +| k8s 版本 | 特性支持 | | :---------: |:-----------:| -| 1.14 | Stable | -| 1.10 | Beta (on by default)| +| 1.14 | 稳定 | +| 1.10 | Beta(默认启用) | | 1.9 | Alpha | - - ## {{% heading "whatsnext" %}} - <!-- For guidance on administering DNS configurations, check [Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/) --> - 有关管理 DNS 配置的指导,请查看 -[配置 DNS 服务](/docs/tasks/administer-cluster/dns-custom-nameservers/) - - +[配置 DNS 服务](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/) diff --git a/content/zh/docs/concepts/services-networking/dual-stack.md b/content/zh/docs/concepts/services-networking/dual-stack.md index 4fe0dad18c..f1bfba435f 100644 --- a/content/zh/docs/concepts/services-networking/dual-stack.md +++ b/content/zh/docs/concepts/services-networking/dual-stack.md @@ -3,26 +3,19 @@ title: IPv4/IPv6 双协议栈 feature: title: IPv4/IPv6 双协议栈 description: > - Allocation of IPv4 and IPv6 addresses to Pods and Services - + 为 Pod 和 Service 分配 IPv4 和 IPv6 地址 content_type: concept weight: 70 --- + <!-- ---- -reviewers: -- lachie83 -- khenidak -- aramase title: IPv4/IPv6 dual-stack feature: title: IPv4/IPv6 dual-stack description: > Allocation of IPv4 and IPv6 addresses to Pods and Services - content_type: concept weight: 70 ---- --> <!-- overview --> @@ -32,14 +25,15 @@ weight: 70 <!-- IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}. --> -IPv4/IPv6 双协议栈能够将 IPv4 和 IPv6 地址分配给 {{< glossary_tooltip text="Pods" term_id="pod" >}} 和 {{< glossary_tooltip text="Services" term_id="service" >}}。 +IPv4/IPv6 双协议栈能够将 IPv4 和 IPv6 地址分配给 +{{< glossary_tooltip text="Pod" term_id="pod" >}} 和 +{{< glossary_tooltip text="Service" term_id="service" >}}。 <!-- If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses. --> -如果你为 Kubernetes 集群启用了 IPv4/IPv6 双协议栈网络,则该集群将支持同时分配 IPv4 和 IPv6 地址。 - - +如果你为 Kubernetes 集群启用了 IPv4/IPv6 双协议栈网络, +则该集群将支持同时分配 IPv4 和 IPv6 地址。 <!-- body --> @@ -89,7 +83,9 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack <!-- To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments: --> -要启用 IPv4/IPv6 双协议栈,为集群的相关组件启用 `IPv6DualStack` [特性门控](/docs/reference/command-line-tools-reference/feature-gates/),并且设置双协议栈的集群网络分配: +要启用 IPv4/IPv6 双协议栈,为集群的相关组件启用 `IPv6DualStack` +[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), +并且设置双协议栈的集群网络分配: * kube-apiserver: * `--feature-gates="IPv6DualStack=true"` @@ -113,13 +109,20 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/ If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service. You can only set this field when creating a new Service. 
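As a hedged sketch of the `.spec.ipFamily` field (the Service name, selector, and port are placeholders), a Service that requests an IPv6 cluster IP on a dual-stack cluster might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical name
spec:
  ipFamily: IPv6                # request an IPv6 cluster IP; omit the field to use the cluster default family
  selector:
    app: MyApp                  # placeholder selector
  ports:
  - protocol: TCP
    port: 80
```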
Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field not a requirement for [egress](#egress-traffic) traffic. --> -如果你的集群启用了 IPv4/IPv6 双协议栈网络,则可以使用 IPv4 或 IPv6 地址来创建 {{< glossary_tooltip text="Services" term_id="service" >}}。你可以通过设置服务的 `.spec.ipFamily` 字段来选择服务的集群 IP 的地址族。你只能在创建新服务时设置该字段。`.spec.ipFamily` 字段的设置是可选的,并且仅当你计划在集群上启用 IPv4 和 IPv6 的 {{< glossary_tooltip text="Services" term_id="service" >}} 和 {{< glossary_tooltip text="Ingresses" term_id="ingress" >}}。对于[出口](#出口流量)流量,该字段的配置不是必须的。 +如果你的集群启用了 IPv4/IPv6 双协议栈网络,则可以使用 IPv4 或 IPv6 地址来创建 +{{< glossary_tooltip text="Service" term_id="service" >}}。 +你可以通过设置服务的 `.spec.ipFamily` 字段来选择服务的集群 IP 的地址族。 +你只能在创建新服务时设置该字段。`.spec.ipFamily` 字段的设置是可选的, +并且仅当你计划在集群上启用 IPv4 和 IPv6 的 {{< glossary_tooltip text="Service" term_id="service" >}} +和 {{< glossary_tooltip text="Ingress" term_id="ingress" >}}。 +对于[出口](#出口流量)流量,该字段的配置不是必须的。 <!-- -The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager. +The default address family for your cluster is the address family of the first service cluster IP range configured via the `-service-cluster-ip-range` flag to the kube-controller-manager. --> {{< note >}} -集群的默认地址族是第一个服务集群 IP 范围的地址族,该地址范围通过 kube-controller-manager 上的 `--service-cluster-ip-range` 标志设置。 +集群的默认地址族是第一个服务集群 IP 范围的地址族,该地址范围通过 +`kube-controller-manager` 上的 `--service-cluster-ip-range` 标志设置。 {{< /note >}} <!-- @@ -158,12 +161,13 @@ For comparison, the following Service specification will be assigned an IPV4 add <!-- ### Type LoadBalancer --> -### 负载均衡器类型 +### LoadBalancer 类型 <!-- On cloud providers which support IPv6 enabled external load balancers, setting the `type` field to `LoadBalancer` in additional to setting `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service. --> -在支持启用了 IPv6 的外部服务均衡器的云驱动上,除了将 `ipFamily` 字段设置为 `IPv6`,将 `type` 字段设置为 `LoadBalancer`,为你的服务提供云负载均衡。 +在支持启用了 IPv6 的外部服务均衡器的云驱动上,除了将 `ipFamily` 字段设置为 `IPv6`, +将 `type` 字段设置为 `LoadBalancer`,为你的服务提供云负载均衡。 <!-- ## Egress Traffic @@ -173,7 +177,12 @@ On cloud providers which support IPv6 enabled external load balancers, setting t <!-- The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters. 
--> -公共路由和非公共路由的 IPv6 地址块的使用是可以的。提供底层 {{< glossary_tooltip text="CNI" term_id="cni" >}} 的提供程序可以实现这种传输。如果你拥有使用非公共路由 IPv6 地址的 Pod,并且希望该 Pod 到达集群外目的(比如,公共网络),你必须为出口流量和任何响应消息设置 IP 伪装。[ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) 可以感知双栈,所以你可以在双栈集群中使用 ip-masq-agent 来进行 IP 伪装。 +公共路由和非公共路由的 IPv6 地址块的使用是可以的。提供底层 +{{< glossary_tooltip text="CNI" term_id="cni" >}} 的提供程序可以实现这种传输。 +如果你拥有使用非公共路由 IPv6 地址的 Pod,并且希望该 Pod 到达集群外目的 +(比如,公共网络),你必须为出口流量和任何响应消息设置 IP 伪装。 +[ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) 可以感知双栈, +所以你可以在双栈集群中使用 ip-masq-agent 来进行 IP 伪装。 <!-- ## Known Issues @@ -181,19 +190,15 @@ The use of publicly routable and non-publicly routable IPv6 address blocks is ac ## 已知问题 <!-- - * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr) + * Kubenet forces IPv4,IPv6 positional reporting of IPs (-cluster-cidr) --> - * Kubenet 强制 IPv4,IPv6 的 IPs 位置报告 (--cluster-cidr) - - + * Kubenet 强制 IPv4,IPv6 的 IPs 位置报告 (`--cluster-cidr`) ## {{% heading "whatsnext" %}} - <!-- * [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking --> -* [验证 IPv4/IPv6 双协议栈](/docs/tasks/network/validate-dual-stack)网络 - +* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络 diff --git a/content/zh/docs/concepts/services-networking/endpoint-slices.md b/content/zh/docs/concepts/services-networking/endpoint-slices.md index 8a71b19b3c..d788da3d7d 100644 --- a/content/zh/docs/concepts/services-networking/endpoint-slices.md +++ b/content/zh/docs/concepts/services-networking/endpoint-slices.md @@ -1,9 +1,7 @@ --- -reviewers: -- freehan -title: Endpoint Slices +title: 端点切片(Endpoint Slices) feature: - title: Endpoint Slices + title: 端点切片 description: > Kubernetes 集群中网络端点的可扩展跟踪。 @@ -12,9 +10,6 @@ weight: 10 --- <!-- ---- -reviewers: -- freehan title: Endpoint Slices feature: title: Endpoint Slices @@ -23,7 +18,6 @@ feature: content_type: concept weight: 10 ---- --> <!-- overview --> @@ -35,9 +29,8 @@ _Endpoint Slices_ provide a simple way to track network endpoints within a Kubernetes cluster. They offer a more scalable and extensible alternative to Endpoints. --> -_Endpoint Slices_ 提供了一种简单的方法来跟踪 Kubernetes 集群中的网络端点(network endpoints)。它们为 Endpoints 提供了一种可伸缩和可拓展的替代方案。 - - +_端点切片(Endpoint Slices)_ 提供了一种简单的方法来跟踪 Kubernetes 集群中的网络端点 +(network endpoints)。它们为 Endpoints 提供了一种可伸缩和可拓展的替代方案。 <!-- body --> @@ -55,7 +48,9 @@ Kubernetes Service. --> ## Endpoint Slice 资源 {#endpointslice-resource} -在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。指定选择器后,EndpointSlice 控制器会自动为 Kubernetes 服务创建 EndpointSlice。这些 EndpointSlice 将包含对与服务选择器匹配的所有 Pod 的引用。EndpointSlice 通过唯一的服务和端口组合将网络端点组织在一起。 +在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。 +指定选择器后,EndpointSlice 控制器会自动为 Kubernetes 服务创建 EndpointSlice。 +这些 EndpointSlice 将包含对与服务选择器匹配的所有 Pod 的引用。EndpointSlice 通过唯一的服务和端口组合将网络端点组织在一起。 例如,这里是 Kubernetes服务 `example` 的示例 EndpointSlice 资源。 @@ -90,7 +85,14 @@ with Endpoints and Services and have similar performance. Endpoint Slices can act as the source of truth for kube-proxy when it comes to how to route internal traffic. When enabled, they should provide a performance improvement for services with large numbers of endpoints. 
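The example EndpointSlice for the `example` Service mentioned above is elided by the diff context. As an illustrative sketch only (the object name, Pod IP, hostname, and topology values are placeholders), such a resource might look like:

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc             # hypothetical name
  labels:
    kubernetes.io/service-name: example   # associates the slice with the "example" Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"                  # placeholder Pod IP
  conditions:
    ready: true
  hostname: pod-1               # placeholder
  topology:
    kubernetes.io/hostname: node-1
    topology.kubernetes.io/zone: us-west2-a
```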
+--> +默认情况下,由 EndpointSlice 控制器管理的 Endpoint Slice 将有不超过 100 个端点。 +低于此比例时,Endpoint Slices 应与 Endpoints 和服务进行 1:1 映射,并具有相似的性能。 +当涉及如何路由内部流量时,Endpoint Slices 可以充当 kube-proxy 的真实来源。 +启用该功能后,在服务的 endpoints 规模庞大时会有可观的性能提升。 + +<!-- ## Address Types EndpointSlices support three address types: @@ -98,7 +100,16 @@ EndpointSlices support three address types: * IPv4 * IPv6 * FQDN (Fully Qualified Domain Name) +--> +## 地址类型 +EndpointSlice 支持三种地址类型: + +* IPv4 +* IPv6 +* FQDN (完全合格的域名) + +<!-- ## Motivation The Endpoints API has provided a simple and straightforward way of @@ -114,38 +125,25 @@ significant amounts of network traffic and processing when Endpoints changed. Endpoint Slices help you mitigate those issues as well as provide an extensible platform for additional features such as topological routing. --> - -默认情况下,由 EndpointSlice 控制器管理的 Endpoint Slice 将有不超过 100 个 endpoints。低于此比例时,Endpoint Slices 应与 Endpoints 和服务进行 1:1 映射,并具有相似的性能。 - -当涉及如何路由内部流量时,Endpoint Slices 可以充当 kube-proxy 的真实来源。启用该功能后,在服务的 endpoints 规模庞大时会有可观的性能提升。 - -<!-- -## Address Types ---> -## 地址类型 - -EndpointSlice 支持三种地址类型: - -* IPv4 -* IPv6 -* FQDN (完全合格的域名) - ## 动机 -Endpoints API 提供了一种简单明了的方法在 Kubernetes 中跟踪网络端点。不幸的是,随着 Kubernetes 集群与服务的增长,该 API 的局限性变得更加明显。最值得注意的是,这包含了扩展到更多网络端点的挑战。 +Endpoints API 提供了一种简单明了的方法在 Kubernetes 中跟踪网络端点。 +不幸的是,随着 Kubernetes 集群与服务的增长,该 API 的局限性变得更加明显。 +最值得注意的是,这包含了扩展到更多网络端点的挑战。 -由于服务的所有网络端点都存储在单个 Endpoints 资源中,因此这些资源可能会变得很大。这影响了 Kubernetes 组件(尤其是主控制平面)的性能,并在 Endpoints 发生更改时导致大量网络流量和处理。Endpoint Slices 可帮助您缓解这些问题并提供可扩展的 +由于服务的所有网络端点都存储在单个 Endpoints 资源中, +因此这些资源可能会变得很大。 +这影响了 Kubernetes 组件(尤其是主控制平面)的性能,并在 Endpoints +发生更改时导致大量网络流量和处理。 +Endpoint Slices 可帮助您缓解这些问题并提供可扩展的 附加特性(例如拓扑路由)平台。 - - ## {{% heading "whatsnext" %}} - <!-- * [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices) * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) --> -* [启用 Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices) -* 阅读 [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) +* [启用端点切片](/zh/docs/tasks/administer-cluster/enabling-endpointslices) +* 阅读[使用服务链接应用](/zh/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/zh/docs/concepts/services-networking/ingress-controllers.md b/content/zh/docs/concepts/services-networking/ingress-controllers.md index 46d166d9d9..4071ea6189 100644 --- a/content/zh/docs/concepts/services-networking/ingress-controllers.md +++ b/content/zh/docs/concepts/services-networking/ingress-controllers.md @@ -1,134 +1,155 @@ ---- -title: Ingress 控制器 -content_type: concept -weight: 40 ---- - -<!-- ---- -title: Ingress Controllers -reviewers: -content_type: concept -weight: 40 ---- ---> - -<!-- overview --> - -<!-- -In order for the Ingress resource to work, the cluster must have an ingress controller running. - -Unlike other types of controllers which run as part of the `kube-controller-manager` binary, Ingress controllers -are not started automatically with a cluster. Use this page to choose the ingress controller implementation -that best fits your cluster. - -Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and - [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. 
- ---> - -为了让 Ingress 资源工作,集群必须有一个正在运行的 Ingress 控制器。 - -与作为 `kube-controller-manager` 可执行文件的一部分运行的其他类型的控制器不同,Ingress 控制器不是随集群自动启动的。 -基于此页面,您可选择最适合您的集群的 ingress 控制器实现。 - -Kubernetes 作为一个项目,目前支持和维护 [GCE](https://git.k8s.io/ingress-gce/README.md) -和 [nginx](https://git.k8s.io/ingress-nginx/README.md) 控制器。 - - - -<!-- body --> - -<!-- -## Additional controllers ---> -## 其他控制器 - -<!-- -* [AKS Application Gateway Ingress Controller](https://github.com/Azure/application-gateway-kubernetes-ingress) is an ingress controller that enables ingress to [AKS clusters](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal) using the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). -* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress - controller with [community](https://www.getambassador.io/docs) or - [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/). -* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). -* [AWS ALB Ingress Controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) enables ingress using the [AWS Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/). -* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller - provided and supported by VMware. -* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. -* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) - for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). -* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). -* [HAProxy Ingress](https://haproxy-ingress.github.io) is a highly customizable community-driven ingress controller for HAProxy. -* [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for the [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress). See the [official documentation](https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/). -* [Istio](https://istio.io/) based ingress controller - [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). -* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or - [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the - [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). -* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the - [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). 
-* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy -* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller - ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial - support by [Containous](https://containo.us/services). ---> -* [AKS 应用程序网关 Ingress 控制器]使用 [Azure 应用程序网关](https://docs.microsoft.com/azure/application-gateway/overview)启用[AKS 集群](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal) ingress。 -* [Ambassador](https://www.getambassador.io/) API 网关, 一个基于 [Envoy](https://www.envoyproxy.io) 的 ingress - 控制器,有着来自[社区](https://www.getambassador.io/docs) 的支持和来自 [Datawire](https://www.datawire.io/) 的[商业](https://www.getambassador.io/pro/) 支持。 -* [AppsCode Inc.](https://appscode.com) 为最广泛使用的基于 [HAProxy](http://www.haproxy.org/) 的 ingress 控制器 [Voyager](https://appscode.com/products/voyager) 提供支持和维护。 -* [AWS ALB Ingress 控制器](https://github.com/kubernetes-sigs/aws-alb-ingress-controller)通过 [AWS 应用 Load Balancer](https://aws.amazon.com/elasticloadbalancing/) 启用 ingress。 -* [Contour](https://projectcontour.io/) 是一个基于 [Envoy](https://www.envoyproxy.io/) 的 ingress 控制器,它由 VMware 提供和支持。 -* Citrix 为其硬件(MPX),虚拟化(VPX)和 [免费容器化 (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) 提供了一个 [Ingress 控制器](https://github.com/citrix/citrix-k8s-ingress-controller),用于[裸金属](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal)和[云](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment)部署。 -* F5 Networks 为 [用于 Kubernetes 的 F5 BIG-IP 控制器](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)提供[支持和维护](https://support.f5.com/csp/article/K86859508)。 -* [Gloo](https://gloo.solo.io) 是一个开源的基于 [Envoy](https://www.envoyproxy.io) 的 ingress 控制器,它提供了 API 网关功能,有着来自 [solo.io](https://www.solo.io) 的企业级支持。 -* [HAProxy Ingress](https://haproxy-ingress.github.io) 是 HAProxy 高度可定制的、由社区驱动的 Ingress 控制器。 -* [HAProxy Technologies](https://www.haproxy.com/) 为[用于 Kubernetes 的 HAProxy Ingress 控制器](https://github.com/haproxytech/kubernetes-ingress) 提供支持和维护。具体信息请参考[官方文档](https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/)。 -* 基于 [Istio](https://istio.io/) 的 ingress 控制器[控制 Ingress 流量](https://istio.io/docs/tasks/traffic-management/ingress/)。 -* [Kong](https://konghq.com/) 为[用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller) 提供[社区](https://discuss.konghq.com/c/kubernetes)或[商业](https://konghq.com/kong-enterprise/)支持和维护。 -* [NGINX, Inc.](https://www.nginx.com/) 为[用于 Kubernetes 的 NGINX Ingress 控制器](https://www.nginx.com/products/nginx/kubernetes-ingress-controller)提供支持和维护。 -* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP 路由器和反向代理,用于服务组合,包括诸如 Kubernetes Ingress 之类的用例,被设计为用于构建自定义代理的库。 -* [Traefik](https://github.com/containous/traefik) 是一个全功能的 ingress 控制器 - ([Let's Encrypt](https://letsencrypt.org),secrets,http2,websocket),并且它也有来自 [Containous](https://containo.us/services) 的商业支持。 - -<!-- -## Using multiple Ingress controllers ---> -## 使用多个 Ingress 控制器 - -<!-- -You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) -within a cluster. 
When you create an ingress, you should annotate each ingress with the appropriate -[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) -to indicate which ingress controller should be used if more than one exists within your cluster. - -If you do not define a class, your cloud provider may use a default ingress controller. - -Ideally, all ingress controllers should fulfill this specification, but the various ingress -controllers operate slightly differently. ---> - -你可以在集群中部署[任意数量的 ingress 控制器](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)。 -创建 ingress 时,应该使用适当的 [`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) 注解每个 ingress -以表明在集群中如果有多个 ingress 控制器时,应该使用哪个 ingress 控制器。 - -如果不定义 `ingress.class`,云提供商可能使用默认的 ingress 控制器。 - -理想情况下,所有 ingress 控制器都应满足此规范,但各种 ingress 控制器的操作略有不同。 - -<!-- -Make sure you review your ingress controller's documentation to understand the caveats of choosing it. ---> -{{< note >}} -确保您查看了 ingress 控制器的文档,以了解选择它的注意事项。 -{{< /note >}} - - - +--- +title: Ingress 控制器 +content_type: concept +weight: 40 +--- + +<!-- +title: Ingress Controllers +content_type: concept +weight: 40 +--> + +<!-- overview --> + +<!-- +In order for the Ingress resource to work, the cluster must have an ingress controller running. + +Unlike other types of controllers which run as part of the `kube-controller-manager` binary, Ingress controllers +are not started automatically with a cluster. Use this page to choose the ingress controller implementation +that best fits your cluster. + +Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and + [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. +--> +为了让 Ingress 资源工作,集群必须有一个正在运行的 Ingress 控制器。 + +与作为 `kube-controller-manager` 可执行文件的一部分运行的其他类型的控制器不同,Ingress 控制器不是随集群自动启动的。 +基于此页面,您可选择最适合您的集群的 ingress 控制器实现。 + +Kubernetes 作为一个项目,目前支持和维护 [GCE](https://git.k8s.io/ingress-gce/README.md) +和 [nginx](https://git.k8s.io/ingress-nginx/README.md) 控制器。 + +<!-- body --> + +<!-- +## Additional controllers +--> +## 其他控制器 + +<!-- +* [AKS Application Gateway Ingress Controller](https://github.com/Azure/application-gateway-kubernetes-ingress) is an ingress controller that enables ingress to [AKS clusters](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal) using the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). +* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress + controller with [community](https://www.getambassador.io/docs) or + [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/). +* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). +* [AWS ALB Ingress Controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) enables ingress using the [AWS Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/). +* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller + provided and supported by VMware. 
+--> +* [AKS 应用程序网关 Ingress 控制器]使用 + [Azure 应用程序网关](https://docs.microsoft.com/azure/application-gateway/overview)启用 + [AKS 集群](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough-portal) ingress。 +* [Ambassador](https://www.getambassador.io/) API 网关,一个基于 [Envoy](https://www.envoyproxy.io) 的 Ingress + 控制器,有着来自[社区](https://www.getambassador.io/docs) 的支持和来自 + [Datawire](https://www.datawire.io/) 的[商业](https://www.getambassador.io/pro/) 支持。 +* [AppsCode Inc.](https://appscode.com) 为最广泛使用的基于 + [HAProxy](https://www.haproxy.org/) 的 Ingress 控制器 + [Voyager](https://appscode.com/products/voyager) 提供支持和维护。 +* [AWS ALB Ingress 控制器](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) + 通过 [AWS 应用 Load Balancer](https://aws.amazon.com/elasticloadbalancing/) 启用 Ingress。 +* [Contour](https://projectcontour.io/) 是一个基于 [Envoy](https://www.envoyproxy.io/) + 的 Ingress 控制器,它由 VMware 提供和支持。 +<!-- +* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. +* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) + for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). +* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). +* [HAProxy Ingress](https://haproxy-ingress.github.io) is a highly customizable community-driven ingress controller for HAProxy. +* [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for the [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress). See the [official documentation](https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/). +* [Istio](https://istio.io/) based ingress controller + [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). 
+--> +* Citrix 为其硬件(MPX),虚拟化(VPX)和 + [免费容器化 (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) + 提供了一个 [Ingress 控制器](https://github.com/citrix/citrix-k8s-ingress-controller), + 用于[裸金属](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal)和 + [云](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment)部署。 +* F5 Networks 为 + [用于 Kubernetes 的 F5 BIG-IP 控制器](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)提供 + [支持和维护](https://support.f5.com/csp/article/K86859508)。 +* [Gloo](https://gloo.solo.io) 是一个开源的基于 + [Envoy](https://www.envoyproxy.io) 的 Ingress 控制器,它提供了 API 网关功能, + 有着来自 [solo.io](https://www.solo.io) 的企业级支持。 +* [HAProxy Ingress](https://haproxy-ingress.github.io) 是 HAProxy 高度可定制的、 + 由社区驱动的 Ingress 控制器。 +* [HAProxy Technologies](https://www.haproxy.com/) 为 + [用于 Kubernetes 的 HAProxy Ingress 控制器](https://github.com/haproxytech/kubernetes-ingress) + 提供支持和维护。具体信息请参考[官方文档](https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/)。 +* 基于 [Istio](https://istio.io/) 的 ingress 控制器 + [控制 Ingress 流量](https://istio.io/docs/tasks/traffic-management/ingress/)。 +<!-- +* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or + [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the + [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). +* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the + [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). +* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy +* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller + ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial + support by [Containous](https://containo.us/services). +--> +* [Kong](https://konghq.com/) 为 + [用于 Kubernetes 的 Kong Ingress 控制器](https://github.com/Kong/kubernetes-ingress-controller) + 提供[社区](https://discuss.konghq.com/c/kubernetes)或 + [商业](https://konghq.com/kong-enterprise/)支持和维护。 +* [NGINX, Inc.](https://www.nginx.com/) 为 + [用于 Kubernetes 的 NGINX Ingress 控制器](https://www.nginx.com/products/nginx/kubernetes-ingress-controller) + 提供支持和维护。 +* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP 路由器和反向代理,用于服务组合,包括诸如 Kubernetes Ingress 之类的用例,被设计为用于构建自定义代理的库。 +* [Traefik](https://github.com/containous/traefik) 是一个全功能的 ingress 控制器 + ([Let's Encrypt](https://letsencrypt.org),secrets,http2,websocket), + 并且它也有来自 [Containous](https://containo.us/services) 的商业支持。 + +<!-- +## Using multiple Ingress controllers +--> +## 使用多个 Ingress 控制器 + +<!-- +You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) +within a cluster. When you create an ingress, you should annotate each ingress with the appropriate +[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) +to indicate which ingress controller should be used if more than one exists within your cluster. 
+ +If you do not define a class, your cloud provider may use a default ingress controller. + +Ideally, all ingress controllers should fulfill this specification, but the various ingress +controllers operate slightly differently. +--> + +你可以在集群中部署[任意数量的 ingress 控制器](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)。 +创建 ingress 时,应该使用适当的 +[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) +注解每个 Ingress 以表明在集群中如果有多个 Ingress 控制器时,应该使用哪个 Ingress 控制器。 + +如果不定义 `ingress.class`,云提供商可能使用默认的 Ingress 控制器。 + +理想情况下,所有 Ingress 控制器都应满足此规范,但各种 Ingress 控制器的操作略有不同。 + +<!-- +Make sure you review your ingress controller's documentation to understand the caveats of choosing it. +--> +{{< note >}} +确保您查看了 ingress 控制器的文档,以了解选择它的注意事项。 +{{< /note >}} + ## {{% heading "whatsnext" %}} - -<!-- -* Learn more about [Ingress](/docs/concepts/services-networking/ingress/). -* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). ---> -* 进一步了解 [Ingress](/docs/concepts/services-networking/ingress/)。 -* [在 Minikube 上使用 NGINX 控制器安装 Ingress](/docs/tasks/access-application-cluster/ingress-minikube)。 - + +<!-- +* Learn more about [Ingress](/docs/concepts/services-networking/ingress/). +* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). +--> +* 进一步了解 [Ingress](/zh/docs/concepts/services-networking/ingress/)。 +* [在 Minikube 上使用 NGINX 控制器安装 Ingress](/zh/docs/tasks/access-application-cluster/ingress-minikube)。 + diff --git a/content/zh/docs/concepts/services-networking/ingress.md b/content/zh/docs/concepts/services-networking/ingress.md index 502abfcbc4..254f04cba2 100644 --- a/content/zh/docs/concepts/services-networking/ingress.md +++ b/content/zh/docs/concepts/services-networking/ingress.md @@ -4,8 +4,6 @@ content_type: concept weight: 40 --- <!-- -reviewers: -- bprashanth title: Ingress content_type: concept weight: 40 @@ -21,7 +19,7 @@ weight: 40 <!-- ## Terminology --> -## 专用术语 +## 术语 <!-- For clarity, this guide defines the following terms: @@ -36,10 +34,14 @@ For clarity, this guide defines the following terms: * Service: A Kubernetes {{< glossary_tooltip term_id="service" >}} that identifies a set of Pods using {{< glossary_tooltip text="label" term_id="label" >}} selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network. 
--> * 节点(Node): Kubernetes 集群中其中一台工作机器,是集群的一部分。 -* 集群(Cluster): 一组运行程序(这些程序是容器化的,被 Kubernetes 管理的)的节点。 在此示例中,和在大多数常见的Kubernetes部署方案,集群中的节点都不会是公共网络。 -* 边缘路由器(Edge router): 在集群中强制性执行防火墙策略的路由器(router)。可以是由云提供商管理的网关,也可以是物理硬件。 -* 集群网络(Cluster network): 一组逻辑或物理的链接,根据 Kubernetes [网络模型](/docs/concepts/cluster-administration/networking/) 在集群内实现通信。 -* 服务(Service):Kubernetes {{< glossary_tooltip term_id="service" >}} 使用 {{< glossary_tooltip text="标签" term_id="label" >}} 选择器(selectors)标识的一组 Pod。除非另有说明,否则假定服务只具有在集群网络中可路由的虚拟 IP。 +* 集群(Cluster): 一组运行由 Kubernetes 管理的容器化应用程序的节点。 + 在此示例和在大多数常见的 Kubernetes 部署环境中,集群中的节点都不在公共网络中。 +* 边缘路由器(Edge router): 在集群中强制执行防火墙策略的路由器(router)。可以是由云提供商管理的网关,也可以是物理硬件。 +* 集群网络(Cluster network): 一组逻辑的或物理的连接,根据 Kubernetes + [网络模型](/zh/docs/concepts/cluster-administration/networking/) 在集群内实现通信。 +* 服务(Service):Kubernetes {{< glossary_tooltip text="服务" term_id="service" >}}使用 + {{< glossary_tooltip text="标签" term_id="label" >}} 选择算符(selectors)标识的一组 Pod。 + 除非另有说明,否则假定服务只具有在集群网络中可路由的虚拟 IP。 <!-- ## What is Ingress? @@ -52,7 +54,8 @@ For clarity, this guide defines the following terms: Traffic routing is controlled by rules defined on the Ingress resource. --> -[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) 公开了从集群外部到集群内 {{< link text="services" url="/docs/concepts/services-networking/service/" >}} 的 HTTP 和 HTTPS 路由。 +[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) +公开了从集群外部到集群内[服务](/zh/docs/concepts/services-networking/service/)的 HTTP 和 HTTPS 路由。 流量路由由 Ingress 资源上定义的规则控制。 ```none @@ -66,7 +69,9 @@ Traffic routing is controlled by rules defined on the Ingress resource. <!-- An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. --> -可以将 Ingress 配置为提供服务外部可访问的 URL、负载均衡流量、终止 SSL / TLS,以及提供基于名称的虚拟主机。[Ingress 控制器](/docs/concepts/services-networking/ingress-controllers) 通常负责通过负载均衡器来实现 Ingress,尽管它也可以配置边缘路由器或其他前端来帮助处理流量。 +可以将 Ingress 配置为服务提供外部可访问的 URL、负载均衡流量、终止 SSL/TLS,以及提供基于名称的虚拟主机等能力。 +[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) +通常负责通过负载均衡器来实现 Ingress,尽管它也可以配置边缘路由器或其他前端来帮助处理流量。 <!-- An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically @@ -74,23 +79,27 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer). --> Ingress 不会公开任意端口或协议。 -将 HTTP 和 HTTPS 以外的服务公开到 Internet 时,通常使用 [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) 或者 [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) 类型的服务。 +将 HTTP 和 HTTPS 以外的服务公开到 Internet 时,通常使用 +[Service.Type=NodePort](/zh/docs/concepts/services-networking/service/#nodeport) +或 [Service.Type=LoadBalancer](/zh/docs/concepts/services-networking/service/#loadbalancer) +类型的服务。 <!-- ## Prerequisites + +You must have an [ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect. 
--> ## 环境准备 -<!-- -You must have an [ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect. ---> -您必须具有 [ingress 控制器](/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。仅创建 Ingress 资源无效。 +你必须具有 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 才能满足 Ingress 的要求。 +仅创建 Ingress 资源本身没有任何效果。 <!-- You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/). You can choose from a number of [Ingress controllers](/docs/concepts/services-networking/ingress-controllers). --> -您可能需要部署 Ingress 控制器,例如 [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/)。您可以从许多[Ingress 控制器](/docs/concepts/services-networking/ingress-controllers) 中进行选择。 +你可能需要部署 Ingress 控制器,例如 [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/)。 +你可以从许多 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 中进行选择。 <!-- Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress @@ -107,12 +116,11 @@ Make sure you review your Ingress controller's documentation to understand the c <!-- ## The Ingress Resource + +A minimal Ingress resource example: --> ## Ingress 资源 -<!-- -A minimal Ingress resource example: ---> 一个最小的 Ingress 资源示例: ```yaml @@ -144,10 +152,14 @@ Different [Ingress controller](/docs/concepts/services-networking/ingress-contro your choice of Ingress controller to learn which annotations are supported. --> 与所有其他 Kubernetes 资源一样,Ingress 需要使用 `apiVersion`、`kind` 和 `metadata` 字段。 - Ingress 对象的命名必须是合法的 [DNS 子域名名称](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 - 有关使用配置文件的一般信息,请参见[部署应用](/docs/tasks/run-application/run-stateless-application-deployment/)、 [配置容器](/docs/tasks/configure-pod-container/configure-pod-configmap/)、[管理资源](/docs/concepts/cluster-administration/manage-deployment/)。 - Ingress 经常使用注解(annotations)来配置一些选项,具体取决于 Ingress 控制器,例如 [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)。 - 不同的 [Ingress 控制器](/docs/concepts/services-networking/ingress-controllers) 支持不同的注解(annotations)。查看文档以供您选择 Ingress 控制器,以了解支持哪些注解(annotations)。 + Ingress 对象的命名必须是合法的 [DNS 子域名名称](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + 有关使用配置文件的一般信息,请参见[部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)、 +[配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)、 +[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/)。 + Ingress 经常使用注解(annotations)来配置一些选项,具体取决于 Ingress 控制器,例如 +[重写目标注解](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)。 + 不同的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) +支持不同的注解。查看文档以供您选择 Ingress 控制器,以了解支持哪些注解。 <!-- The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) @@ -155,12 +167,15 @@ has all the information needed to configure a load balancer or proxy server. Mos contains a list of rules matched against all incoming requests. Ingress resource only supports rules for directing HTTP traffic. 
--> -Ingress [规范](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 具有配置负载均衡器或者代理服务器所需的所有信息。最重要的是,它包含与所有传入请求匹配的规则列表。Ingress 资源仅支持用于定向 HTTP 流量的规则。 +Ingress [规约](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) +提供了配置负载均衡器或者代理服务器所需的所有信息。 +最重要的是,其中包含与所有传入请求匹配的规则列表。 +Ingress 资源仅支持用于转发 HTTP 流量的规则。 <!-- ### Ingress rules --> -### Ingress 规则 +### Ingress 规则 {#ingress-rules} <!-- Each HTTP rule contains the following information: @@ -178,43 +193,46 @@ Each HTTP rule contains the following information: [Service doc](/docs/concepts/services-networking/service/). HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend. --> -* 可选主机。在此示例中,未指定主机,因此该规则适用于通过指定 IP 地址的所有入站 HTTP 通信。如果提供了主机(例如 foo.bar.com),则规则适用于该主机。 -* 路径列表(例如,`/testpath`),每个路径都有一个由 `serviceName` 和 `servicePort` 定义的关联后端。在负载均衡器将流量定向到引用的服务之前,主机和路径都必须匹配传入请求的内容。 -* 后端是 [Service 文档](/docs/concepts/services-networking/service/)中所述的服务和端口名称的组合。与规则的主机和路径匹配的对 Ingress 的 HTTP(和 HTTPS )请求将发送到列出的后端。 +* 可选主机。在此示例中,未指定主机,因此该规则适用于通过指定 IP 地址的所有入站 HTTP 通信。 + 如果提供了主机(例如 foo.bar.com),则规则适用于该主机。 +* 路径列表(例如,`/testpath`),每个路径都有一个由 `serviceName` 和 `servicePort` 定义的关联后端。 + 在负载均衡器将流量定向到引用的服务之前,主机和路径都必须匹配传入请求的内容。 +* 后端是 [Service 文档](/zh/docs/concepts/services-networking/service/)中所述的服务和端口名称的组合。 + 与规则的主机和路径匹配的对 Ingress 的 HTTP(和 HTTPS )请求将发送到列出的后端。 <!-- A default backend is often configured in an Ingress controller to service any requests that do not match a path in the spec. --> -通常在 Ingress 控制器中配置默认后端,以服务任何不符合规范中路径的请求。 +通常在 Ingress 控制器中会配置默认后端,以服务任何不符合规范中路径的请求。 <!-- ### Default Backend ---> -### 默认后端 -<!-- An Ingress with no rules sends all traffic to a single default backend. The default backend is typically a configuration option of the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) and is not specified in your Ingress resources. --> -没有规则的 Ingress 将所有流量发送到单个默认后端。默认后端通常是 [Ingress 控制器](/docs/concepts/services-networking/ingress-controllers)的配置选项,并且未在 Ingress 资源中指定。 +### 默认后端 + +没有规则的 Ingress 将所有流量发送到同一个默认后端。 +默认后端通常是 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) +的配置选项,并且未在 Ingress 资源中指定。 <!-- If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend. --> -如果没有主机或路径与 Ingress 对象中的 HTTP 请求匹配,则流量将路由到您的默认后端。 +如果主机或路径都没有与 Ingress 对象中的 HTTP 请求匹配,则流量将路由到默认后端。 <!-- ### Path Types - --> -### 路径类型 -<!-- Each path in an Ingress has a corresponding path type. There are three supported path types: - --> -Ingress 中的每个路径都有对应的路径类型。支持三种类型: +--> +### 路径类型 {#path-types} + +Ingress 中的每个路径都有对应的路径类型。当前支持的路径类型有三种: <!-- * _`ImplementationSpecific`_ (default): With this path type, matching is up to @@ -232,14 +250,19 @@ Ingress 中的每个路径都有对应的路径类型。支持三种类型: last element in request path, it is not a match (for example: `/foo/bar` matches`/foo/bar/baz`, but does not match `/foo/barbaz`). --> -* _`ImplementationSpecific`_ (默认):对于这种类型,匹配取决于 IngressClass. 
具体实现可以将其作为单独的 `pathType` 处理或者与 `Prefix` 或 `Exact` 类型作相同处理。 +* _`ImplementationSpecific`_ (默认):对于这种类型,匹配取决于 IngressClass。 + 具体实现可以将其作为单独的 `pathType` 处理或者与 `Prefix` 或 `Exact` 类型作相同处理。 -* _`Exact`_:精确匹配 URL 路径且对大小写敏感。 +* _`Exact`_:精确匹配 URL 路径,且对大小写敏感。 -* _`Prefix`_:基于以 `/` 分割的 URL 路径前缀匹配。匹配对大小写敏感,并且对路径中的元素逐个完成。路径元素指的是由 `/` 分隔符分割的路径中的标签列表。如果每个 _p_ 都是请求路径 _p_ 的元素前缀,则请求与路径 _p_ 匹配。 - {{< note >}} - 如果路径的最后一个元素是请求路径中最后一个元素的子字符串,则不会匹配(例如:`/foo/bar` 匹配 `/foo/bar/baz`, 但不匹配 `/foo/barbaz`)。 - {{< /note >}} +* _`Prefix`_:基于以 `/` 分隔的 URL 路径前缀匹配。匹配对大小写敏感,并且对路径中的元素逐个完成。 + 路径元素指的是由 `/` 分隔符分隔的路径中的标签列表。 + 如果每个 _p_ 都是请求路径 _p_ 的元素前缀,则请求与路径 _p_ 匹配。 + + {{< note >}} + 如果路径的最后一个元素是请求路径中最后一个元素的子字符串,则不会匹配 + (例如:`/foo/bar` 匹配 `/foo/bar/baz`, 但不匹配 `/foo/barbaz`)。 + {{< /note >}} <!-- #### Multiple Matches @@ -248,22 +271,25 @@ cases precedence will be given first to the longest matching path. If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type. --> -#### 多重匹配 +#### 多重匹配 {#multiple-matches} -在某些情况下,Ingress 中的多条路径会匹配同一个请求。这种情况下最长的匹配路径优先。如果仍然有两条同等的匹配路径,则精确路径类型优先于前缀路径类型。 +在某些情况下,Ingress 中的多条路径会匹配同一个请求。 +这种情况下最长的匹配路径优先。 +如果仍然有两条同等的匹配路径,则精确路径类型优先于前缀路径类型。 <!-- ## Ingress Class - --> -## Ingress 类 -<!-- Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class. - --> -Ingress 可以由不同的控制器实现,通常使用不同的配置。每个 Ingress 应当指定一个类,一个对 IngressClass 资源的引用,该资源包含额外的配置,其中包括应当实现该类的控制器名称。 +--> +## Ingress 类 {#ingress-class} + +Ingress 可以由不同的控制器实现,通常使用不同的配置。 +每个 Ingress 应当指定一个类,也就是一个对 IngressClass 资源的引用。 +IngressClass 资源包含额外的配置,其中包括应当实现该类的控制器名称。 ```yaml apiVersion: networking.k8s.io/v1beta1 @@ -282,21 +308,21 @@ spec: IngressClass resources contain an optional parameters field. This can be used to reference additional configuration for this class. --> -IngressClass 资源包含一个可选的参数字段。可用于引用该类的额外配置。 +IngressClass 资源包含一个可选的参数字段,可用于为该类引用额外配置。 <!-- ### Deprecated Annotation - --> -### 废弃的注解 -<!-- Before the IngressClass resource and `ingressClassName` field were added in Kubernetes 1.18, Ingress classes were specified with a `kubernetes.io/ingress.class` annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers. - --> -在 IngressClass 资源和 `ingressClassName` 字段被引入 Kubernetes 1.18 之前,Ingress 类是通过 Ingress 中的一个 -`kubernetes.io/ingress.class` 注解来指定的。这个注解从未被正式定义过,但是得到了 Ingress 控制器的广泛支持。 +--> +### 废弃的注解 + +在 Kubernetes 1.18 版本引入 IngressClass 资源和 `ingressClassName` 字段之前, +Ingress 类是通过 Ingress 中的一个 `kubernetes.io/ingress.class` 注解来指定的。 +这个注解从未被正式定义过,但是得到了 Ingress 控制器的广泛支持。 <!-- The newer `ingressClassName` field on Ingresses is a replacement for that @@ -304,22 +330,26 @@ annotation, but is not a direct equivalent. While the annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, the field is a reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller. 
- --> -Ingress 中新的 `ingressClassName` 字段是该注解的替代品,但并非完全等价。该注解通常用于引用实现该 Ingress 的控制器的名称, -而这个新的字段则是对一个包含额外 Ingress 配置的 IngressClass 资源的引用,包括 Ingress 控制器的名称。 +--> +Ingress 中新的 `ingressClassName` 字段是该注解的替代品,但并非完全等价。 +该注解通常用于引用实现该 Ingress 的控制器的名称, +而这个新的字段则是对一个包含额外 Ingress 配置的 IngressClass 资源的引用, +包括 Ingress 控制器的名称。 <!-- ### Default Ingress Class - --> -### 默认 Ingress 类 -<!-- You can mark a particular IngressClass as default for your cluster. Setting the `ingressclass.kubernetes.io/is-default-class` annotation to `true` on an IngressClass resource will ensure that new Ingresses without an `ingressClassName` field specified will be assigned this default IngressClass. - --> -您可以将一个特定的 IngressClass 标记为集群默认项。将一个 IngressClass 资源的 `ingressclass.kubernetes.io/is-default-class` 注解设置为 `true` 将确保新的未指定 `ingressClassName` 字段的 Ingress 能够分配为这个默认的 IngressClass. +--> +### 默认 Ingress 类 {#default-ingress-class} + +您可以将一个特定的 IngressClass 标记为集群默认选项。 +将一个 IngressClass 资源的 `ingressclass.kubernetes.io/is-default-class` 注解设置为 +`true` 将确保新的未指定 `ingressClassName` 字段的 Ingress 能够分配为这个默认的 +IngressClass. <!-- If you have more than one IngressClass marked as the default for your cluster, @@ -328,26 +358,26 @@ an `ingressClassName` specified. You can resolve this by ensuring that at most 1 IngressClasess are marked as default in your cluster. --> {{< caution >}} -如果集群中有多个 IngressClass 被标记为默认,准入控制器将阻止创建新的未指定 `ingressClassName` 字段的 Ingress 对象。 +如果集群中有多个 IngressClass 被标记为默认,准入控制器将阻止创建新的未指定 `ingressClassName` +的 Ingress 对象。 解决这个问题只需确保集群中最多只能有一个 IngressClass 被标记为默认。 {{< /caution >}} <!-- ## Types of Ingress ---> -## Ingress 类型 -<!-- ### Single Service Ingress ---> -### 单服务 Ingress -<!-- There are existing Kubernetes concepts that allow you to expose a single Service (see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a *default backend* with no rules. --> -现有的 Kubernetes 概念允许您暴露单个 Service (查看[替代方案](#alternatives)),您也可以通过指定无规则的 *默认后端* 来对 Ingress 进行此操作。 +## Ingress 类型 {#types-of-ingress} + +### 单服务 Ingress {#single-service-ingress} + +现有的 Kubernetes 概念允许您暴露单个 Service (查看[替代方案](#alternatives))。 +你也可以通过指定无规则的 *默认后端* 来对 Ingress 进行此操作。 {{< codenew file="service/networking/ingress.yaml" >}} @@ -377,19 +407,19 @@ Ingress controllers and load balancers may take a minute or two to allocate an I Until that time, you often see the address listed as `<pending>`. --> {{< note >}} -入口控制器和负载平衡器可能需要一两分钟才能分配IP地址。 在此之前,您通常会看到地址字段的值被设定为 `<pending>`。 +入口控制器和负载平衡器可能需要一两分钟才能分配 IP 地址。在此之前,您通常会看到地址字段的值被设定为 +`<pending>`。 {{< /note >}} <!-- ### Simple fanout ---> -### 简单分列 -<!-- A fanout configuration routes traffic from a single IP address to more than one Service, based on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers down to a minimum. For example, a setup like: --> +### 简单分列 + 一个分列配置根据请求的 HTTP URI 将流量从单个 IP 地址路由到多个服务。 Ingress 允许您将负载均衡器的数量降至最低。例如,这样的设置: @@ -401,7 +431,7 @@ foo.bar.com -> 178.91.123.132 -> / foo service1:4200 <!-- would require an Ingress such as: --> -将需要一个 Ingress,例如: +将需要一个如下所示的 Ingress: ```yaml apiVersion: networking.k8s.io/v1beta1 @@ -428,7 +458,7 @@ spec: <!-- When you create the Ingress with `kubectl apply -f`: --> -当您使用 `kubectl apply -f` 创建 Ingress 时: +当你使用 `kubectl apply -f` 创建 Ingress 时: ```shell kubectl describe ingress simple-fanout-example @@ -459,9 +489,8 @@ that satisfies the Ingress, as long as the Services (`service1`, `service2`) exi When it has done so, you can see the address of the load balancer at the Address field. 
--> - Ingress 控制器将提供实现特定的负载均衡器来满足 Ingress,只要 Service (`service1`,`service2`) 存在。 -当它这样做了,您会在地址栏看到负载均衡器的地址。 +当它这样做了,你会在地址字段看到负载均衡器的地址。 <!-- Depending on the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) @@ -469,19 +498,18 @@ you are using, you may need to create a default-http-backend [Service](/docs/concepts/services-networking/service/). --> {{< note >}} -根据您使用的 [Ingress 控制器](/docs/concepts/services-networking/ingress-controllers),您可能需要创建默认 HTTP 后端 [Service](/docs/concepts/services-networking/service/)。 +取决于你使用的 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers), +你可能需要创建默认 HTTP 后端[服务](/zh/docs/concepts/services-networking/service/)。 {{< /note >}} - <!-- ### Name based virtual hosting + +Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. --> ### 基于名称的虚拟托管 -<!-- -Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. ---> -基于名称的虚拟主机支持将 HTTP 流量路由到同一 IP 地址上的多个主机名。 +基于名称的虚拟主机支持将针对多个主机名的 HTTP 流量路由到同一 IP 地址上。 ```none foo.bar.com --| |-> foo.bar.com service1:80 @@ -489,12 +517,12 @@ foo.bar.com --| |-> foo.bar.com service1:80 bar.foo.com --| |-> bar.foo.com service2:80 ``` - <!-- The following Ingress tells the backing load balancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4). --> -以下 Ingress 让后台负载均衡器基于[主机 header](https://tools.ietf.org/html/rfc7230#section-5.4) 路由请求。 +以下 Ingress 让后台负载均衡器基于[host 头部字段](https://tools.ietf.org/html/rfc7230#section-5.4) +来路由请求。 ```yaml apiVersion: networking.k8s.io/v1beta1 @@ -522,7 +550,8 @@ If you create an Ingress resource without any hosts defined in the rules, then a web traffic to the IP address of your Ingress controller can be matched without a name based virtual host being required. --> -如果您创建的 Ingress 资源没有规则中定义的任何主机,则可以匹配到您 Ingress 控制器 IP 地址的任何网络流量,而无需基于名称的虚拟主机。 +如果您创建的 Ingress 资源没有规则中定义的任何主机,则可以匹配指向 Ingress 控制器 IP 地址 +的任何网络流量,而无需基于名称的虚拟主机。 <!-- For example, the following Ingress resource will route traffic @@ -530,7 +559,9 @@ requested for `first.bar.com` to `service1`, `second.foo.com` to `service2`, and to the IP address without a hostname defined in request (that is, without a request header being presented) to `service3`. --> -例如,以下 Ingress 资源会将 `first.bar.com` 请求的流量路由到 `service1`,将 `second.foo.com` 请求的流量路由到 `service2`,而没有在请求中定义主机名的 IP 地址的流量路由(即,不提供请求标头)到 `service3`。 +例如,以下 Ingress 资源会将 `first.bar.com` 请求的流量路由到 `service1`, +将 `second.foo.com` 请求的流量路由到 `service2`, +而没有在请求中定义主机名的 IP 地址的流量路由(即,不提供请求标头)到 `service3`。 ```yaml apiVersion: networking.k8s.io/v1beta1 @@ -560,11 +591,7 @@ spec: <!-- ### TLS ---> -### TLS - -<!-- You can secure an Ingress by specifying a {{< glossary_tooltip term_id="secret" >}} that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS @@ -574,12 +601,16 @@ SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS. 
For example: --> +### TLS -您可以通过指定包含 TLS 私钥和证书的 secret {{< glossary_tooltip term_id="secret" >}} 来加密 Ingress。 +你可以通过设定包含 TLS 私钥和证书的{{< glossary_tooltip text="Secret" term_id="secret" >}} +来保护 Ingress。 目前,Ingress 只支持单个 TLS 端口 443,并假定 TLS 终止。 -如果 Ingress 中的 TLS 配置部分指定了不同的主机,那么它们将根据通过 SNI TLS 扩展指定的主机名(如果 Ingress 控制器支持 SNI)在同一端口上进行复用。 -TLS Secret 必须包含名为 `tls.crt` 和 `tls.key` 的密钥,这些密钥包含用于 TLS 的证书和私钥,例如: +如果 Ingress 中的 TLS 配置部分指定了不同的主机,那么它们将根据通过 SNI TLS 扩展指定的主机名 +(如果 Ingress 控制器支持 SNI)在同一端口上进行复用。 +TLS Secret 必须包含名为 `tls.crt` 和 `tls.key` 的键名。 +这些数据包含用于 TLS 的证书和私钥。例如: ```yaml apiVersion: v1 @@ -599,7 +630,9 @@ secure the channel from the client to the load balancer using TLS. You need to m sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for `sslexample.foo.com`. --> -在 Ingress 中引用此 Secret 将会告诉 Ingress 控制器使用 TLS 加密从客户端到负载均衡器的通道。您需要确保创建的 TLS secret 来自包含 `sslexample.foo.com` 的公用名称(CN)的证书,也被称为全限定域名(FQDN)。 +在 Ingress 中引用此 Secret 将会告诉 Ingress 控制器使用 TLS 加密从客户端到负载均衡器的通道。 +你需要确保创建的 TLS Secret 来自包含 `sslexample.foo.com` 的公用名称(CN)的证书。 +这里的公共名称也被称为全限定域名(FQDN)。 ```yaml apiVersion: networking.k8s.io/v1beta1 @@ -629,17 +662,15 @@ controllers. Please refer to documentation on platform specific Ingress controller to understand how TLS works in your environment. --> {{< note >}} -各种 Ingress 控制器所支持的 TLS 功能之间存在差异。请参阅有关文件 +各种 Ingress 控制器所支持的 TLS 功能之间存在差异。请参阅有关 [nginx](https://kubernetes.github.io/ingress-nginx/user-guide/tls/)、 -[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https) 或者任何其他平台特定的 Ingress 控制器,以了解 TLS 如何在您的环境中工作。 +[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https) +或者任何其他平台特定的 Ingress 控制器的文档,以了解 TLS 如何在你的环境中工作。 {{< /note >}} <!-- ### Loadbalancing ---> -### 负载均衡 -<!-- An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts @@ -647,32 +678,36 @@ weight scheme, and others. More advanced load balancing concepts Ingress. You can instead get these features through the load balancer used for a Service. --> +### 负载均衡 -Ingress 控制器使用一些适用于所有 Ingress 的负载均衡策略设置进行自举,例如负载均衡算法、后端权重方案和其他等。更高级的负载均衡概念(例如,持久会话、动态权重)尚未通过 Ingress 公开。您可以通过用于服务的负载均衡器来获取这些功能。 +Ingress 控制器启动引导时使用一些适用于所有 Ingress 的负载均衡策略设置, +例如负载均衡算法、后端权重方案和其他等。 +更高级的负载均衡概念(例如持久会话、动态权重)尚未通过 Ingress 公开。 +你可以通过用于服务的负载均衡器来获取这些功能。 <!-- It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as -[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) +[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) that allow you to achieve the same end result. Please review the controller specific documentation to see how they handle health checks ( [nginx](https://git.k8s.io/ingress-nginx/README.md), [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks)). 
--> -值得注意的是,即使健康检查不是通过 Ingress 直接暴露的,但是在 Kubernetes 中存在并行概念,比如 [就绪检查](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/),它允许您实现相同的最终结果。 -请检查控制器特殊说明文档,以了解他们是怎样处理健康检查的 ( +值得注意的是,即使健康检查不是通过 Ingress 直接暴露的,在 Kubernetes +中存在并行概念,比如[就绪检查](/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) +允许你实现相同的目的。 +请检查特定控制器的说明文档,以了解它们是怎样处理健康检查的 ( [nginx](https://git.k8s.io/ingress-nginx/README.md), [GCE](https://git.k8s.io/ingress-gce/README.md#health-checks))。 - <!-- ## Updating an Ingress + +To update an existing Ingress to add a new Host, you can update it by editing the resource: --> ## 更新 Ingress -<!-- -To update an existing Ingress to add a new Host, you can update it by editing the resource: ---> 要更新现有的 Ingress 以添加新的 Host,可以通过编辑资源来对其进行更新: ```shell @@ -705,8 +740,7 @@ kubectl edit ingress test This pops up an editor with the existing configuration in YAML format. Modify it to include the new Host: --> - -这将弹出具有 YAML 格式的现有配置的编辑器。 +这一命令将打开编辑器,允许你以 YAML 格式编辑现有配置。 修改它来增加新的主机: ```yaml @@ -768,45 +802,42 @@ Events: <!-- You can achieve the same outcome by invoking `kubectl replace -f` on a modified Ingress YAML file. --> -您可以通过 `kubectl replace -f` 命令调用修改后的 Ingress yaml 文件来获得同样的结果。 +你也可以通过 `kubectl replace -f` 命令调用修改后的 Ingress yaml 文件来获得同样的结果。 <!-- ## Failing across availability zones ---> -## 跨可用区失败 -<!-- Techniques for spreading traffic across failure domains differs between cloud providers. Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details. You can also refer to the [federation documentation](https://github.com/kubernetes-sigs/federation-v2) for details on deploying Ingress in a federated cluster. --> +## 跨可用区失败 {#failing-across-availability-zones} -用于跨故障域传播流量的技术在云提供商之间是不同的。详情请查阅相关 Ingress 控制器的文档。 -请查看相关[ Ingress 控制器](/docs/concepts/services-networking/ingress-controllers) 的文档以了解详细信息。 -您还可以参考[联邦文档](https://github.com/kubernetes-sigs/federation-v2),以获取有关在联合集群中部署 Ingress 的详细信息。 - +不同的云厂商使用不同的技术来实现跨故障域的流量分布。详情请查阅相关 Ingress 控制器的文档。 +请查看相关[ Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers) 的文档以了解详细信息。 +你还可以参考[联邦文档](https://github.com/kubernetes-sigs/federation-v2),以获取有关在联合集群中部署 Ingress 的详细信息。 <!-- ## Future Work ---> -## 未来工作 -<!-- Track [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) for more details on the evolution of Ingress and related resources. You may also track the [Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the evolution of various Ingress controllers. 
--> -跟踪 [SIG 网络](https://github.com/kubernetes/community/tree/master/sig-network)以获得有关 Ingress 和相关资源演变的更多细节。您还可以跟踪 [Ingress 仓库](https://github.com/kubernetes/ingress/tree/master)以获取有关各种 Ingress 控制器的更多细节。 +## 未来工作 +跟踪 [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) +的活动以获得有关 Ingress 和相关资源演变的更多细节。 +你还可以跟踪 [Ingress 仓库](https://github.com/kubernetes/ingress/tree/master) +以获取有关各种 Ingress 控制器的更多细节。 <!-- ## Alternatives ---> -## 替代方案 -<!-- You can expose a Service in multiple ways that don't directly involve the Ingress resource: --> +## 替代方案 {#alternatives} + 不直接使用 Ingress 资源,也有多种方法暴露 Service: <!-- @@ -816,8 +847,6 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * 使用 [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) * 使用 [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) - - ## {{% heading "whatsnext" %}} <!-- @@ -825,7 +854,7 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * Learn about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube) --> -* 了解更多 [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) -* 了解更多 [Ingress 控制器](/docs/concepts/services-networking/ingress-controllers/) -* [使用 NGINX 控制器在 Minikube 上安装 Ingress](/docs/tasks/access-application-cluster/ingress-minikube) +* 进一步了解 [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) +* 进一步了解 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/) +* [使用 NGINX 控制器在 Minikube 上安装 Ingress](/zh/docs/tasks/access-application-cluster/ingress-minikube) diff --git a/content/zh/docs/concepts/services-networking/network-policies.md b/content/zh/docs/concepts/services-networking/network-policies.md index 2ad44866c6..784455922a 100644 --- a/content/zh/docs/concepts/services-networking/network-policies.md +++ b/content/zh/docs/concepts/services-networking/network-policies.md @@ -5,16 +5,10 @@ weight: 50 --- <!-- ---- -reviewers: -- thockin -- caseydavenport -- danwinship title: Network Policies content_type: concept weight: 50 ---- - --> +--> {{< toc >}} @@ -30,8 +24,6 @@ NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">} NetworkPolicy 资源使用 {{< glossary_tooltip text="标签" term_id="label">}} 选择 Pod,并定义选定 Pod 所允许的通信规则。 - - <!-- body --> <!-- @@ -42,7 +34,9 @@ Network policies are implemented by the [network plugin](/docs/concepts/extend-k ## 前提 -网络策略通过[网络插件](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)来实现。要使用网络策略,用户必须使用支持 NetworkPolicy 的网络解决方案。创建一个资源对象,而没有控制器来使它生效的话,是没有任何作用的。 +网络策略通过[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +来实现。要使用网络策略,用户必须使用支持 NetworkPolicy 的网络解决方案。 +创建一个资源对象,而没有控制器来使它生效的话,是没有任何作用的。 <!-- ## Isolated and Non-isolated Pods @@ -53,14 +47,17 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result. 
--> - ## 隔离和非隔离的 Pod 默认情况下,Pod 是非隔离的,它们接受任何来源的流量。 -Pod 可以通过相关的网络策略进行隔离。一旦命名空间中有网络策略选择了特定的 Pod,该 Pod 会拒绝网络策略所不允许的连接。 (命名空间下其他未被网络策略所选择的 Pod 会继续接收所有的流量) +Pod 可以通过相关的网络策略进行隔离。一旦命名空间中有网络策略选择了特定的 Pod, +该 Pod 会拒绝网络策略所不允许的连接。 +(命名空间下其他未被网络策略所选择的 Pod 会继续接收所有的流量) -网络策略不会冲突,它们是附加的。如果任何一个或多个策略选择了一个 Pod, 则该 Pod 受限于这些策略的 ingress/egress 规则的并集。因此评估的顺序并不会影响策略的结果。 +网络策略不会冲突,它们是累积的。 +如果任何一个或多个策略选择了一个 Pod, 则该 Pod 受限于这些策略的 +ingress/egress 规则的并集。因此评估的顺序并不会影响策略的结果。 <!-- ## The NetworkPolicy resource {#networkpolicy-resource} @@ -72,7 +69,7 @@ An example NetworkPolicy might look like this: ## NetworkPolicy 资源 {#networkpolicy-resource} -查看 [网络策略](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) 来了解完整的资源定义。 +查看 [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) 来了解完整的资源定义。 下面是一个 NetworkPolicy 的示例: @@ -138,8 +135,9 @@ __ingress__: Each NetworkPolicy may include a list of whitelist `ingress` rules. __egress__: Each NetworkPolicy may include a list of whitelist `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. --> -__必填字段__: 与所有其他的 Kubernetes 配置一样,NetworkPolicy 需要 `apiVersion`、 `kind` 和 `metadata` 字段。 关于配置文件操作的一般信息,请参考 [使用 ConfigMap 配置容器](/docs/tasks/configure-pod-container/configure-pod-configmap/), -和 [对象管理](/docs/concepts/overview/working-with-objects/object-management)。 +__必填字段__: 与所有其他的 Kubernetes 配置一样,NetworkPolicy 需要 `apiVersion`、`kind` 和 `metadata` 字段。 + 关于配置文件操作的一般信息,请参考 [使用 ConfigMap 配置容器](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/), + 和[对象管理](/zh/docs/concepts/overview/working-with-objects/object-management)。 __spec__: NetworkPolicy [规约](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 中包含了在一个命名空间中定义特定网络策略所需的所有信息。 @@ -175,7 +173,7 @@ See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network- * IP 地址范围为 172.17.0.0–172.17.0.255 和 172.17.2.0–172.17.255.255(即,除了 172.17.1.0/24 之外的所有 172.17.0.0/16) 3. (Egress 规则)允许从带有 "role=db" 标签的命名空间下的任何 Pod 到 CIDR 10.0.0.0/24 下 5978 TCP 端口的连接。 -查看 [声明网络策略](/docs/getting-started-guides/network-policy/walkthrough) 来进行更多的示例演练。 +查看[声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) 来进行更多的示例演练。 <!-- ## Behavior of `to` and `from` selectors @@ -362,7 +360,9 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will To use this feature, you (or your cluster administrator) will need to enable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=true,…`. When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`. --> -要启用此特性,你(或你的集群管理员)需要通过为 API server 指定 `--feature-gates=SCTPSupport=true,…` 来启用 `SCTPSupport` [特性开关](/docs/reference/command-line-tools-reference/feature-gates/)。启用该特性开关后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`。 +要启用此特性,你(或你的集群管理员)需要通过为 API server 指定 `--feature-gates=SCTPSupport=true,…` +来启用 `SCTPSupport` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 +启用该特性开关后,用户可以将 NetworkPolicy 的 `protocol` 字段设置为 `SCTP`。 <!-- You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies. 
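
For reference, a NetworkPolicy manifest consistent with the rules described in the hunks above might look like the sketch below. The pod selector, the ingress `ipBlock` (172.17.0.0/16 except 172.17.1.0/24), and the egress rule to 10.0.0.0/24 on TCP 5978 are taken from that description; the policy name, namespace, ingress port, and the namespace- and pod-label selectors do not appear in the text above and are illustrative assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy   # illustrative name
  namespace: default          # assumed namespace
spec:
  # isolate Pods labeled role=db for both ingress and egress
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    # 172.17.0.0/16, excluding 172.17.1.0/24
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    # any Pod in a namespace labeled project=myproject (assumed label)
    - namespaceSelector:
        matchLabels:
          project: myproject
    # Pods labeled role=frontend in the policy's own namespace (assumed label)
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379   # assumed ingress port
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```

As noted earlier on the page, applying such an object with `kubectl apply -f` only has an effect when the cluster's network plugin actually implements NetworkPolicy.
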
@@ -371,20 +371,16 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha 必须使用支持 SCTP 协议网络策略的 {{< glossary_tooltip text="CNI" term_id="cni" >}} 插件。 {{< /note >}} - - - ## {{% heading "whatsnext" %}} - <!-- - See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. - See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. --> -- 查看 [声明网络策略](/docs/tasks/administer-cluster/declare-network-policy/) +- 查看 [声明网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) 来进行更多的示例演练 -- 有关 NetworkPolicy 资源启用的常见场景的更多信息,请参见 [指南](https://github.com/ahmetb/kubernetes-network-policy-recipes)。 - +- 有关 NetworkPolicy 资源启用的常见场景的更多信息,请参见 + [此指南](https://github.com/ahmetb/kubernetes-network-policy-recipes)。 diff --git a/content/zh/docs/concepts/services-networking/service.md b/content/zh/docs/concepts/services-networking/service.md index 9573261c05..294738085d 100644 --- a/content/zh/docs/concepts/services-networking/service.md +++ b/content/zh/docs/concepts/services-networking/service.md @@ -1,7 +1,5 @@ --- -reviewers: -- bprashanth -title: Services +title: 服务 feature: title: 服务发现与负载均衡 description: > @@ -12,18 +10,15 @@ weight: 10 --- <!-- ---- -reviewers: -- bprashanth title: Services feature: title: Service discovery and load balancing description: > - No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them. + No need to modify your application to use an unfamiliar service discovery mechanism. + Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them. content_type: concept weight: 10 ---- --> <!-- overview --> @@ -35,10 +30,9 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them. --> -使用Kubernetes,您无需修改应用程序即可使用不熟悉的服务发现机制。 -Kubernetes为Pods提供自己的IP地址和一组Pod的单个DNS名称,并且可以在它们之间进行负载平衡。 - - +使用 Kubernetes,您无需修改应用程序即可使用不熟悉的服务发现机制。 +Kubernetes 为 Pods 提供自己的 IP 地址,并为一组 Pod 提供相同的 DNS 名, +并且可以在它们之间进行负载平衡。 <!-- body --> @@ -64,19 +58,21 @@ Enter _Services_. ## 动机 -Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} 是有生命周期的。他们可以被创建,而且销毁不会再启动。 -如果您使用 {{<glossary_tooltip term_id ="deployment">}} 来运行您的应用程序,则它可以动态创建和销毁 Pod。 +Kubernetes {{< glossary_tooltip term_id="pod" text="Pod" >}} 是有生命周期的。 +它们可以被创建,而且销毁之后不会再启动。 +如果您使用 {{< glossary_tooltip text="Deployment" term_id="deployment">}} +来运行您的应用程序,则它可以动态创建和销毁 Pod。 每个 Pod 都有自己的 IP 地址,但是在 Deployment 中,在同一时刻运行的 Pod 集合可能与稍后运行该应用程序的 Pod 集合不同。 -这导致了一个问题: 如果一组 Pod(称为“后端”)为群集内的其他 Pod(称为“前端”)提供功能,那么前端如何找出并跟踪要连接的 IP 地址,以便前端可以使用工作量的后端部分? +这导致了一个问题: 如果一组 Pod(称为“后端”)为群集内的其他 Pod(称为“前端”)提供功能, +那么前端如何找出并跟踪要连接的 IP 地址,以便前端可以使用工作量的后端部分? 进入 _Services_。 <!-- ## Service resources {#service-resource} --> - ## Service 资源 {#service-resource} <!-- @@ -87,10 +83,9 @@ by a {{< glossary_tooltip text="selector" term_id="selector" >}} (see [below](#services-without-selectors) for why you might want a Service _without_ a selector). 
--> - -Kubernetes `Service` 定义了这样一种抽象:逻辑上的一组 `Pod`,一种可以访问它们的策略 —— 通常称为微服务。 -这一组 `Pod` 能够被 `Service` 访问到,通常是通过 {{< glossary_tooltip text="selector" term_id="selector" >}} -(查看[下面](#services-without-selectors)了解,为什么你可能需要没有 selector 的 `Service`)实现的。 +Kubernetes Service 定义了这样一种抽象:逻辑上的一组 Pod,一种可以访问它们的策略 —— 通常称为微服务。 +这一组 Pod 能够被 Service 访问到,通常是通过 {{< glossary_tooltip text="选择算符" term_id="selector" >}} +(查看[下面](#services-without-selectors)了解,为什么你可能需要没有 selector 的 Service)实现的。 <!-- For example, consider a stateless image-processing backend which is running with @@ -101,10 +96,11 @@ track of the set of backends themselves. The Service abstraction enables this decoupling. --> - -举个例子,考虑一个图片处理 backend,它运行了3个副本。这些副本是可互换的 —— frontend 不需要关心它们调用了哪个 backend 副本。 -然而组成这一组 backend 程序的 `Pod` 实际上可能会发生变化,frontend 客户端不应该也没必要知道,而且也不需要跟踪这一组 backend 的状态。 -`Service` 定义的抽象能够解耦这种关联。 +举个例子,考虑一个图片处理后端,它运行了 3 个副本。这些副本是可互换的 —— +前端不需要关心它们调用了哪个后端副本。 +然而组成这一组后端程序的 Pod 实际上可能会发生变化, +前端客户端不应该也没必要知道,而且也不需要跟踪这一组后端的状态。 +Service 定义的抽象能够解耦这种关联。 <!-- ### Cloud-native service discovery @@ -118,9 +114,11 @@ balancer in between your application and the backend Pods. --> ### 云原生服务发现 -如果您想要在应用程序中使用 Kubernetes 接口进行服务发现,则可以查询 {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} 的 endpoint 资源,只要服务中的Pod集合发生更改,端点就会更新。 +如果您想要在应用程序中使用 Kubernetes API 进行服务发现,则可以查询 +{{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} +的 Endpoints 资源,只要服务中的 Pod 集合发生更改,Endpoints 就会被更新。 -对于非本机应用程序,Kubernetes提供了在应用程序和后端Pod之间放置网络端口或负载均衡器的方法。 +对于非本机应用程序,Kubernetes 提供了在应用程序和后端 Pod 之间放置网络端口或负载均衡器的方法。 <!-- ## Defining a Service @@ -135,10 +133,10 @@ and carry a label `app=MyApp`: ## 定义 Service -一个 `Service` 在 Kubernetes 中是一个 REST 对象,和 `Pod` 类似。 -像所有的 REST 对象一样, `Service` 定义可以基于 `POST` 方式,请求 API server 创建新的实例。 +Service 在 Kubernetes 中是一个 REST 对象,和 Pod 类似。 +像所有的 REST 对象一样,Service 定义可以基于 `POST` 方式,请求 API server 创建新的实例。 -例如,假定有一组 `Pod`,它们对外暴露了 9376 端口,同时还被打上 `app=MyApp` 标签。 +例如,假定有一组 Pod,它们对外暴露了 9376 端口,同时还被打上 `app=MyApp` 标签。 ```yaml apiVersion: v1 @@ -166,20 +164,20 @@ The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoint object also named “my-service”. --> - -上述配置创建一个名称为 "my-service" 的 `Service` 对象,它会将请求代理到使用 TCP 端口 9376,并且具有标签 `"app=MyApp"` 的 `Pod` 上。 -Kubernetes 为该服务分配一个 IP 地址(有时称为 "集群IP" ),该 IP 地址由服务代理使用。 +上述配置创建一个名称为 "my-service" 的 Service 对象,它会将请求代理到使用 +TCP 端口 9376,并且具有标签 `"app=MyApp"` 的 Pod 上。 +Kubernetes 为该服务分配一个 IP 地址(有时称为 "集群IP"),该 IP 地址由服务代理使用。 (请参见下面的 [VIP 和 Service 代理](#virtual-ips-and-service-proxies)). -服务选择器的控制器不断扫描与其选择器匹配的 Pod,然后将所有更新发布到也称为 “my-service” 的Endpoint对象。 - -{{< note >}} +服务选择算符的控制器不断扫描与其选择器匹配的 Pod,然后将所有更新发布到也称为 +“my-service” 的 Endpoint 对象。 <!-- A Service can map _any_ incoming `port` to a `targetPort`. By default and for convenience, the `targetPort` is set to the same value as the `port` field. --> -需要注意的是, `Service` 能够将一个接收 `port` 映射到任意的 `targetPort`。 +{{< note >}} +需要注意的是,Service 能够将一个接收 `port` 映射到任意的 `targetPort`。 默认情况下,`targetPort` 将被设置为与 `port` 字段相同的值。 {{< /note >}} @@ -199,13 +197,12 @@ As many Services need to expose more than one port, Kubernetes supports multiple port definitions on a Service object. Each port definition can have the same `protocol`, or a different one. 
--> - -Pod中的端口定义具有名称字段,您可以在服务的 `targetTarget` 属性中引用这些名称。 +Pod 中的端口定义是有名字的,你可以在服务的 `targetTarget` 属性中引用这些名称。 即使服务中使用单个配置的名称混合使用 Pod,并且通过不同的端口号提供相同的网络协议,此功能也可以使用。 这为部署和发展服务提供了很大的灵活性。 例如,您可以更改Pods在新版本的后端软件中公开的端口号,而不会破坏客户端。 -服务的默认协议是TCP。 您还可以使用任何其他 [受支持的协议](#protocol-support)。 +服务的默认协议是TCP。 您还可以使用任何其他[受支持的协议](#protocol-support)。 由于许多服务需要公开多个端口,因此 Kubernetes 在服务对象上支持多个端口定义。 每个端口定义可以具有相同的 `protocol`,也可以具有不同的协议。 @@ -227,8 +224,7 @@ For example: In any of these scenarios you can define a Service _without_ a Pod selector. For example: --> - -### 没有 selector 的 Service +### 没有选择算符的 Service {#services-without-selectors} 服务最常见的是抽象化对 Kubernetes Pod 的访问,但是它们也可以抽象化其他种类的后端。 实例: @@ -237,7 +233,7 @@ For example: * 希望服务指向另一个 {{< glossary_tooltip term_id="namespace" >}} 中或其它集群中的服务。 * 您正在将工作负载迁移到 Kubernetes。 在评估该方法时,您仅在 Kubernetes 中运行一部分后端。 -在任何这些场景中,都能够定义没有 selector 的 `Service`。 +在任何这些场景中,都能够定义没有选择算符的 Service。 实例: ```yaml @@ -257,8 +253,8 @@ Because this Service has no selector, the corresponding Endpoint object is *not* created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoint object manually: --> - -由于此服务没有选择器,因此 *不会* 自动创建相应的 Endpoint 对象。 您可以通过手动添加 Endpoint 对象,将服务手动映射到运行该服务的网络地址和端口: +由于此服务没有选择算符,因此 *不会* 自动创建相应的 Endpoint 对象。 +您可以通过手动添加 Endpoint 对象,将服务手动映射到运行该服务的网络地址和端口: ```yaml apiVersion: v1 @@ -272,8 +268,6 @@ subsets: - port: 9376 ``` -{{< note >}} - <!-- The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). @@ -282,9 +276,11 @@ Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs as a destination. --> - -端点 IPs _必须不可以_ : 环回( IPv4 的 127.0.0.0/8 , IPv6 的 ::1/128 )或本地链接(IPv4 的 169.254.0.0/16 和 224.0.0.0/24,IPv6 的 fe80::/64)。 -端点 IP 地址不能是其他 Kubernetes Services 的群集 IP,因为 {{<glossary_tooltip term_id ="kube-proxy">}} 不支持将虚拟 IP 作为目标。 +{{< note >}} +端点 IPs _必须不可以_ 是:本地回路(IPv4 的 `127.0.0.0/8`, IPv6 的 `::1/128`)或 +本地链接(IPv4 的 `169.254.0.0/16` 和 `224.0.0.0/24`,IPv6 的 `fe80::/64`)。 +端点 IP 地址不能是其他 Kubernetes 服务的集群 IP,因为 +{{< glossary_tooltip term_id ="kube-proxy">}} 不支持将虚拟 IP 作为目标。 {{< /note >}} <!-- @@ -292,23 +288,22 @@ Accessing a Service without a selector works the same as if it had a selector. In the example above, traffic is routed to the single endpoint defined in the YAML: `192.0.2.42:9376` (TCP). --> - -访问没有 selector 的 `Service`,与有 selector 的 `Service` 的原理相同。 -请求将被路由到用户定义的 Endpoint, YAML中为: `192.0.2.42:9376` (TCP)。 +访问没有选择算符的 Service,与有选择算符的 Service 的原理相同。 +请求将被路由到用户定义的 Endpoint,YAML 中为:`192.0.2.42:9376`(TCP)。 <!-- An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead. For more information, see the [ExternalName](#externalname) section later in this document. 
--> - -ExternalName `Service` 是 `Service` 的特例,它没有 selector,也没有使用 DNS 名称代替。 +ExternalName Service 是 Service 的特例,它没有选择算符,但是使用 DNS 名称。 有关更多信息,请参阅本文档后面的[`ExternalName`](#externalname)。 <!-- ### Endpoint Slices --> ### Endpoint 切片 + {{< feature-state for_k8s_version="v1.16" state="alpha" >}} <!-- @@ -324,15 +319,15 @@ described in detail in [Endpoint Slices](/docs/concepts/services-networking/endp --> Endpoint 切片是一种 API 资源,可以为 Endpoint 提供更可扩展的替代方案。 尽管从概念上讲与 Endpoint 非常相似,但 Endpoint 切片允许跨多个资源分布网络端点。 -默认情况下,一旦到达100个 Endpoint,该 Endpoint 切片将被视为“已满”,届时将创建其他 Endpoint 切片来存储任何其他 Endpoint。 +默认情况下,一旦到达100个 Endpoint,该 Endpoint 切片将被视为“已满”, +届时将创建其他 Endpoint 切片来存储任何其他 Endpoint。 -Endpoint 切片提供了附加的属性和功能,这些属性和功能在 [Endpoint 切片](/docs/concepts/services-networking/endpoint-slices/)中进行了详细描述。 +Endpoint 切片提供了附加的属性和功能,这些属性和功能在 +[Endpoint 切片](/zh/docs/concepts/services-networking/endpoint-slices/)中有详细描述。 <!-- ### Application protocol -{{< feature-state for_k8s_version="v1.18" state="alpha" >}} - The AppProtocol field provides a way to specify an application protocol to be used for each Service port. @@ -344,10 +339,10 @@ gate](/docs/reference/command-line-tools-reference/feature-gates/). {{< feature-state for_k8s_version="v1.18" state="alpha" >}} -AppProtocol 字段提供了一种为每个 Service 端口指定应用程序协议的方式。 +`appProtocol` 字段提供了一种为每个 Service 端口指定应用程序协议的方式。 -作为一个 alpha 特性,该字段默认未启用。要使用该字段,请启用 `ServiceAppProtocol` [特性开关] -(/docs/reference/command-line-tools-reference/feature-gates/)。 +作为一个 alpha 特性,该字段默认未启用。要使用该字段,请启用 `ServiceAppProtocol` +[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。 <!-- ## Virtual IPs and service proxies @@ -359,7 +354,9 @@ than [`ExternalName`](#externalname). ## VIP 和 Service 代理 {#virtual-ips-and-service-proxies} -在 Kubernetes 集群中,每个 Node 运行一个 `kube-proxy` 进程。`kube-proxy` 负责为 `Service` 实现了一种 VIP(虚拟 IP)的形式,而不是 [`ExternalName`](#externalname) 的形式。 +在 Kubernetes 集群中,每个 Node 运行一个 `kube-proxy` 进程。 +`kube-proxy` 负责为 Service 实现了一种 VIP(虚拟 IP)的形式,而不是 +[`ExternalName`](#externalname) 的形式。 <!-- ### Why not use round-robin DNS? @@ -403,9 +400,9 @@ Kubernetes v1.8 added ipvs proxy mode. 
### 版本兼容性 -从Kubernetes v1.0开始,您已经可以使用 [用户空间代理模式](#proxy-mode-userspace)。 -Kubernetes v1.1添加了 iptables 模式代理,在 Kubernetes v1.2 中,kube-proxy 的 iptables 模式成为默认设置。 -Kubernetes v1.8添加了 ipvs 代理模式。 +从 Kubernetes v1.0 开始,您已经可以使用 [用户空间代理模式](#proxy-mode-userspace)。 +Kubernetes v1.1 添加了 iptables 模式代理,在 Kubernetes v1.2 中,kube-proxy 的 iptables 模式成为默认设置。 +Kubernetes v1.8 添加了 ipvs 代理模式。 <!-- ### User space proxy mode {#proxy-mode-userspace} @@ -425,19 +422,17 @@ By default, kube-proxy in userspace mode chooses a backend via a round-robin alg ![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg) --> - ### userspace 代理模式 {#proxy-mode-userspace} -这种模式,kube-proxy 会监视 Kubernetes master 对 `Service` 对象和 `Endpoints` 对象的添加和移除。 -对每个 `Service`,它会在本地 Node 上打开一个端口(随机选择)。 -任何连接到“代理端口”的请求,都会被代理到 `Service` 的backend `Pods` 中的某个上面(如 `Endpoints` 所报告的一样)。 -使用哪个 backend `Pod`,是 kube-proxy 基于 `SessionAffinity` 来确定的。 +这种模式,kube-proxy 会监视 Kubernetes 主控节点对 Service 对象和 Endpoints 对象的添加和移除操作。 +对每个 Service,它会在本地 Node 上打开一个端口(随机选择)。 +任何连接到“代理端口”的请求,都会被代理到 Service 的后端 `Pods` 中的某个上面(如 `Endpoints` 所报告的一样)。 +使用哪个后端 Pod,是 kube-proxy 基于 `SessionAffinity` 来确定的。 -最后,它配置 iptables 规则,捕获到达该 `Service` 的 `clusterIP`(是虚拟 IP)和 `Port` 的请求,并重定向到代理端口,代理端口再代理请求到 backend `Pod`。 +最后,它配置 iptables 规则,捕获到达该 Service 的 `clusterIP`(是虚拟 IP) +和 `Port` 的请求,并重定向到代理端口,代理端口再代理请求到后端Pod。 -默认情况下,用户空间模式下的kube-proxy通过循环算法选择后端。 - -默认的策略是,通过 round-robin 算法来选择 backend `Pod`。 +默认情况下,用户空间模式下的 kube-proxy 通过轮转算法选择后端。 ![userspace代理模式下Service概览图](/images/docs/services-userspace-overview.svg) @@ -471,18 +466,24 @@ having traffic sent via kube-proxy to a Pod that's known to have failed. --> ### iptables 代理模式 {#proxy-mode-iptables} -这种模式,kube-proxy 会监视 Kubernetes 控制节点对 `Service` 对象和 `Endpoints` 对象的添加和移除。 -对每个 `Service`,它会配置 iptables 规则,从而捕获到达该 `Service` 的 `clusterIP` 和端口的请求,进而将请求重定向到 `Service` 的一组 backend 中的某个上面。 -对于每个 `Endpoints` 对象,它也会配置 iptables 规则,这个规则会选择一个 backend 组合。 +这种模式,`kube-proxy` 会监视 Kubernetes 控制节点对 Service 对象和 Endpoints 对象的添加和移除。 +对每个 Service,它会配置 iptables 规则,从而捕获到达该 Service 的 `clusterIP` +和端口的请求,进而将请求重定向到 Service 的一组后端中的某个 Pod 上面。 +对于每个 Endpoints 对象,它也会配置 iptables 规则,这个规则会选择一个后端组合。 -默认的策略是,kube-proxy 在 iptables 模式下随机选择一个 backend。 +默认的策略是,kube-proxy 在 iptables 模式下随机选择一个后端。 -使用 iptables 处理流量具有较低的系统开销,因为流量由 Linux netfilter 处理,而无需在用户空间和内核空间之间切换。 这种方法也可能更可靠。 +使用 iptables 处理流量具有较低的系统开销,因为流量由 Linux netfilter 处理, +而无需在用户空间和内核空间之间切换。 这种方法也可能更可靠。 -如果 kube-proxy 在 iptables 模式下运行,并且所选的第一个 Pod 没有响应,则连接失败。 这与用户空间模式不同:在这种情况下,kube-proxy 将检测到与第一个 Pod 的连接已失败,并会自动使用其他后端 Pod 重试。 +如果 kube-proxy 在 iptables 模式下运行,并且所选的第一个 Pod 没有响应, +则连接失败。 +这与用户空间模式不同:在这种情况下,kube-proxy 将检测到与第一个 Pod 的连接已失败, +并会自动使用其他后端 Pod 重试。 -您可以使用 Pod [ readiness 探测器](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) -验证后端 Pod 可以正常工作,以便 iptables 模式下的 kube-proxy 仅看到测试正常的后端。 这样做意味着您避免将流量通过 kube-proxy 发送到已知已失败的Pod。 +您可以使用 Pod [就绪探测器](/zh/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) +验证后端 Pod 可以正常工作,以便 iptables 模式下的 kube-proxy 仅看到测试正常的后端。 +这样做意味着您避免将流量通过 kube-proxy 发送到已知已失败的Pod。 ![iptables代理模式下Service概览图](/images/docs/services-iptables-overview.svg) @@ -514,10 +515,14 @@ these are: --> 在 `ipvs` 模式下,kube-proxy监视Kubernetes服务和端点,调用 `netlink` 接口相应地创建 IPVS 规则, -并定期将 IPVS 规则与 Kubernetes 服务和端点同步。 该控制循环可确保 IPVS 状态与所需状态匹配。 访问服务时,IPVS 将流量定向到后端Pod之一。 +并定期将 IPVS 规则与 Kubernetes 服务和端点同步。 该控制循环可确保IPVS +状态与所需状态匹配。访问服务时,IPVS 将流量定向到后端Pod之一。 -IPVS代理模式基于类似于 iptables 模式的 netfilter 挂钩函数,但是使用哈希表作为基础数据结构,并且在内核空间中工作。 -这意味着,与 iptables 模式下的 kube-proxy 相比,IPVS 模式下的 kube-proxy 
重定向通信的延迟要短,并且在同步代理规则时具有更好的性能。与其他代理模式相比,IPVS 模式还支持更高的网络流量吞吐量。 +IPVS代理模式基于类似于 iptables 模式的 netfilter 挂钩函数, +但是使用哈希表作为基础数据结构,并且在内核空间中工作。 +这意味着,与 iptables 模式下的 kube-proxy 相比,IPVS 模式下的 kube-proxy +重定向通信的延迟要短,并且在同步代理规则时具有更好的性能。 +与其他代理模式相比,IPVS 模式还支持更高的网络流量吞吐量。 IPVS提供了更多选项来平衡后端Pod的流量。 这些是: @@ -528,8 +533,6 @@ IPVS提供了更多选项来平衡后端Pod的流量。 这些是: - `sed`: shortest expected delay - `nq`: never queue -{{< note >}} - <!-- To run kube-proxy in IPVS mode, you must make the IPVS Linux available on the node before you starting kube-proxy. @@ -538,12 +541,11 @@ When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy falls back to running in iptables proxy mode. --> - +{{< note >}} 要在 IPVS 模式下运行 kube-proxy,必须在启动 kube-proxy 之前使 IPVS Linux 在节点上可用。 当 kube-proxy 以 IPVS 代理模式启动时,它将验证 IPVS 内核模块是否可用。 如果未检测到 IPVS 内核模块,则 kube-proxy 将退回到以 iptables 代理模式运行。 - {{< /note >}} <!-- @@ -564,10 +566,15 @@ You can also set the maximum session sticky time by setting ![IPVS代理的 Services 概述图](/images/docs/services-ipvs-overview.svg) -在这些代理模型中,绑定到服务IP的流量:在客户端不了解Kubernetes或服务或Pod的任何信息的情况下,将Port代理到适当的后端。 -如果要确保每次都将来自特定客户端的连接传递到同一Pod,则可以通过将 `service.spec.sessionAffinity` 设置为 "ClientIP" (默认值是 "None"),来基于客户端的IP地址选择会话关联。 +在这些代理模型中,绑定到服务IP的流量: +在客户端不了解Kubernetes或服务或Pod的任何信息的情况下,将Port代理到适当的后端。 +如果要确保每次都将来自特定客户端的连接传递到同一 Pod, +则可以通过将 `service.spec.sessionAffinity` 设置为 "ClientIP" +(默认值是 "None"),来基于客户端的 IP 地址选择会话关联。 -您还可以通过适当设置 `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` 来设置最大会话停留时间。 (默认值为 10800 秒,即 3 小时)。 +您还可以通过适当设置 `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` +来设置最大会话停留时间。 +(默认值为 10800 秒,即 3 小时)。 <!-- ## Multi-Port Services @@ -578,12 +585,12 @@ When using multiple ports for a Service, you must give all of your ports names so that these are unambiguous. For example: --> - ## 多端口 Service 对于某些服务,您需要公开多个端口。 -Kubernetes允许您在Service对象上配置多个端口定义。 +Kubernetes 允许您在 Service 对象上配置多个端口定义。 为服务使用多个端口时,必须提供所有端口名称,以使它们无歧义。 + 例如: ```yaml @@ -605,8 +612,6 @@ spec: targetPort: 9377 ``` -{{< note >}} - <!-- As with Kubernetes {{< glossary_tooltip term_id="name" text="names">}} in general, names for ports must only contain lowercase alphanumeric characters and `-`. Port names must @@ -614,8 +619,9 @@ also start and end with an alphanumeric character. For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not. --> - -与一般的Kubernetes名称一样,端口名称只能包含 小写字母数字字符 和 `-`。 端口名称还必须以字母数字字符开头和结尾。 +{{< note >}} +与一般的Kubernetes名称一样,端口名称只能包含小写字母数字字符 和 `-`。 +端口名称还必须以字母数字字符开头和结尾。 例如,名称 `123-abc` 和 `web` 有效,但是 `123_abc` 和 `-web` 无效。 {{< /note >}} @@ -639,8 +645,9 @@ server will return a 422 HTTP status code to indicate that there's a problem. 在 `Service` 创建的请求中,可以通过设置 `spec.clusterIP` 字段来指定自己的集群 IP 地址。 比如,希望替换一个已经已存在的 DNS 条目,或者遗留系统已经配置了一个固定的 IP 且很难重新配置。 -用户选择的 IP 地址必须合法,并且这个 IP 地址在 `service-cluster-ip-range` CIDR 范围内,这对 API Server 来说是通过一个标识来指定的。 -如果 IP 地址不合法,API Server 会返回 HTTP 状态码 422,表示值不合法。 +用户选择的 IP 地址必须合法,并且这个 IP 地址在 `service-cluster-ip-range` CIDR 范围内, +这对 API 服务器来说是通过一个标识来指定的。 +如果 IP 地址不合法,API 服务器会返回 HTTP 状态码 422,表示值不合法。 <!-- ## Discovering services @@ -648,10 +655,9 @@ server will return a 422 HTTP status code to indicate that there's a problem. Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. 
--> +## 服务发现 {#discovering-services} -## 服务发现 - -Kubernetes 支持2种基本的服务发现模式 —— 环境变量和 DNS。 +Kubernetes 支持两种基本的服务发现模式 —— 环境变量和 DNS。 <!-- ### Environment variables @@ -667,13 +673,16 @@ For example, the Service `"redis-master"` which exposes TCP port 6379 and has be allocated cluster IP address 10.0.0.11, produces the following environment variables: --> - ### 环境变量 -当 `Pod` 运行在 `Node` 上,kubelet 会为每个活跃的 `Service` 添加一组环境变量。 -它同时支持 [Docker links兼容](https://docs.docker.com/userguide/dockerlinks/) 变量(查看 [makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))、简单的 `{SVCNAME}_SERVICE_HOST` 和 `{SVCNAME}_SERVICE_PORT` 变量,这里 `Service` 的名称需大写,横线被转换成下划线。 +当 Pod 运行在 `Node` 上,kubelet 会为每个活跃的 Service 添加一组环境变量。 +它同时支持 [Docker links兼容](https://docs.docker.com/userguide/dockerlinks/) 变量 +(查看 [makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))、 +简单的 `{SVCNAME}_SERVICE_HOST` 和 `{SVCNAME}_SERVICE_PORT` 变量。 +这里 Service 的名称需大写,横线被转换成下划线。 -举个例子,一个名称为 `"redis-master"` 的 Service 暴露了 TCP 端口 6379,同时给它分配了 Cluster IP 地址 10.0.0.11,这个 Service 生成了如下环境变量: +举个例子,一个名称为 `"redis-master"` 的 Service 暴露了 TCP 端口 6379, +同时给它分配了 Cluster IP 地址 10.0.0.11,这个 Service 生成了如下环境变量: ```shell REDIS_MASTER_SERVICE_HOST=10.0.0.11 @@ -685,8 +694,6 @@ REDIS_MASTER_PORT_6379_TCP_PORT=6379 REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11 ``` -{{< note >}} - <!-- When you have a Pod that needs to access a Service, and you are using the environment variable method to publish the port and cluster IP to the client @@ -696,12 +703,12 @@ Otherwise, those client Pods won't have their environment variables populated. If you only use DNS to discover the cluster IP for a Service, you don't need to worry about this ordering issue. --> +{{< note >}} +当您具有需要访问服务的Pod时,并且您正在使用环境变量方法将端口和群集 IP 发布到客户端 +Pod 时,必须在客户端 Pod 出现 *之前* 创建服务。 +否则,这些客户端 Pod 将不会设定其环境变量。 -当您具有需要访问服务的Pod时,并且您正在使用环境变量方法将端口和群集IP发布到客户端Pod时,必须在客户端Pod出现 *之前* 创建服务。 -否则,这些客户端Pod将不会设定其环境变量。 - -如果仅使用DNS查找服务的群集IP,则无需担心此设定问题。 - +如果仅使用 DNS 查找服务的群集 IP,则无需担心此设定问题。 {{< /note >}} ### DNS @@ -733,29 +740,31 @@ The Kubernetes DNS server is the only way to access `ExternalName` Services. You can find more information about `ExternalName` resolution in [DNS Pods and Services](/docs/concepts/services-networking/dns-pod-service/). 
--> +您可以(几乎总是应该)使用[附加组件](/zh/docs/concepts/cluster-administration/addons/) +为 Kubernetes 集群设置 DNS 服务。 -您可以(几乎总是应该)使用[附加组件](/docs/concepts/cluster-administration/addons/)为Kubernetes集群设置DNS服务。 - -支持群集的DNS服务器(例如CoreDNS)监视 Kubernetes API 中的新服务,并为每个服务创建一组 DNS 记录。 +支持群集的 DNS 服务器(例如 CoreDNS)监视 Kubernetes API 中的新服务,并为每个服务创建一组 DNS 记录。 如果在整个群集中都启用了 DNS,则所有 Pod 都应该能够通过其 DNS 名称自动解析服务。 例如,如果您在 Kubernetes 命名空间 `"my-ns"` 中有一个名为 `"my-service"` 的服务, 则控制平面和DNS服务共同为 `"my-service.my-ns"` 创建 DNS 记录。 -`"my-ns"` 命名空间中的Pod应该能够通过简单地对 `my-service` 进行名称查找来找到它( `"my-service.my-ns"` 也可以工作)。 +`"my-ns"` 命名空间中的 Pod 应该能够通过简单地对 `my-service` 进行名称查找来找到它 +(`"my-service.my-ns"` 也可以工作)。 -其他命名空间中的Pod必须将名称限定为 `my-service.my-ns` 。 这些名称将解析为为服务分配的群集IP。 +其他命名空间中的Pod必须将名称限定为 `my-service.my-ns`。这些名称将解析为为服务分配的群集 IP。 Kubernetes 还支持命名端口的 DNS SRV(服务)记录。 -如果 `"my-service.my-ns"` 服务具有名为 `"http"` 的端口,且协议设置为`TCP`, -则可以对 `_http._tcp.my-service.my-ns` 执行DNS SRV查询查询以发现该端口号, `"http"`以及IP地址。 +如果 `"my-service.my-ns"` 服务具有名为 `"http"` 的端口,且协议设置为 TCP, +则可以对 `_http._tcp.my-service.my-ns` 执行 DNS SRV 查询查询以发现该端口号, +`"http"` 以及 IP 地址。 Kubernetes DNS 服务器是唯一的一种能够访问 `ExternalName` 类型的 Service 的方式。 -更多关于 `ExternalName` 信息可以查看[DNS Pod 和 Service](/docs/concepts/services-networking/dns-pod-service/)。 +更多关于 `ExternalName` 信息可以查看 +[DNS Pod 和 Service](/zh/docs/concepts/services-networking/dns-pod-service/)。 -## Headless Services +## Headless Services {#headless-services} <!-- - Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying `"None"` for the cluster IP (`.spec.clusterIP`). @@ -768,26 +777,28 @@ these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined: --> - 有时不需要或不想要负载均衡,以及单独的 Service IP。 -遇到这种情况,可以通过指定 Cluster IP(`spec.clusterIP`)的值为 `"None"` 来创建 `Headless` Service。 +遇到这种情况,可以通过指定 Cluster IP(`spec.clusterIP`)的值为 `"None"` +来创建 `Headless` Service。 -您可以使用 headless Service 与其他服务发现机制进行接口,而不必与 Kubernetes 的实现捆绑在一起。 +您可以使用无头 Service 与其他服务发现机制进行接口,而不必与 Kubernetes 的实现捆绑在一起。 -对这 headless `Service` 并不会分配 Cluster IP,kube-proxy 不会处理它们,而且平台也不会为它们进行负载均衡和路由。 -DNS 如何实现自动配置,依赖于 `Service` 是否定义了 selector。 +对这无头 Service 并不会分配 Cluster IP,kube-proxy 不会处理它们, +而且平台也不会为它们进行负载均衡和路由。 +DNS 如何实现自动配置,依赖于 Service 是否定义了选择算符。 <!-- ### With selectors For headless Services that define selectors, the endpoints controller creates `Endpoints` records in the API, and modifies the DNS configuration to return -records (addresses) that point directly to the `Pods` backing the `Service`. +records (addresses) that point directly to the `Pods` backing the Service. --> -### 配置 Selector +### 带选择算符的服务 -对定义了 selector 的 Headless Service,Endpoint 控制器在 API 中创建了 `Endpoints` 记录,并且修改 DNS 配置返回 A 记录(地址),通过这个地址直接到达 `Service` 的后端 `Pod` 上。 +对定义了选择算符的无头服务,Endpoint 控制器在 API 中创建了 Endpoints 记录, +并且修改 DNS 配置返回 A 记录(地址),通过这个地址直接到达 Service 的后端 Pod 上。 <!-- ### Without selectors @@ -801,9 +812,9 @@ either: other types. --> -### 不配置 Selector +### 无选择算符的服务 -对没有定义 selector 的 Headless Service,Endpoint 控制器不会创建 `Endpoints` 记录。 +对没有定义选择算符的无头服务,Endpoint 控制器不会创建 `Endpoints` 记录。 然而 DNS 系统会查找和配置,无论是: * `ExternalName` 类型 Service 的 CNAME 记录 @@ -841,26 +852,31 @@ The default is `ClusterIP`. You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. 
It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address. --> - ## 发布服务 —— 服务类型 {#publishing-services-service-types} -对一些应用(如 Frontend)的某些部分,可能希望通过外部Kubernetes 集群外部IP 地址暴露 Service。 +对一些应用(如前端)的某些部分,可能希望通过外部 Kubernetes 集群外部 IP 地址暴露 Service。 Kubernetes `ServiceTypes` 允许指定一个需要的类型的 Service,默认是 `ClusterIP` 类型。 `Type` 的取值以及行为如下: * `ClusterIP`:通过集群的内部 IP 暴露服务,选择该值,服务只能够在集群内部可以访问,这也是默认的 `ServiceType`。 - * [`NodePort`](#nodeport):通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。`NodePort` 服务会路由到 `ClusterIP` 服务,这个 `ClusterIP` 服务会自动创建。通过请求 `<NodeIP>:<NodePort>`,可以从集群的外部访问一个 `NodePort` 服务。 - * [`LoadBalancer`](#loadbalancer):使用云提供商的负载局衡器,可以向外部暴露服务。外部的负载均衡器可以路由到 `NodePort` 服务和 `ClusterIP` 服务。 - * [`ExternalName`](#externalname):通过返回 `CNAME` 和它的值,可以将服务映射到 `externalName` 字段的内容(例如, `foo.bar.example.com`)。 + * [`NodePort`](#nodeport):通过每个 Node 上的 IP 和静态端口(`NodePort`)暴露服务。 + `NodePort` 服务会路由到 `ClusterIP` 服务,这个 `ClusterIP` 服务会自动创建。 + 通过请求 `<NodeIP>:<NodePort>`,可以从集群的外部访问一个 `NodePort` 服务。 + * [`LoadBalancer`](#loadbalancer):使用云提供商的负载局衡器,可以向外部暴露服务。 + 外部的负载均衡器可以路由到 `NodePort` 服务和 `ClusterIP` 服务。 + * [`ExternalName`](#externalname):通过返回 `CNAME` 和它的值,可以将服务映射到 `externalName` + 字段的内容(例如, `foo.bar.example.com`)。 没有任何类型代理被创建。 + {{< note >}} 您需要 CoreDNS 1.7 或更高版本才能使用 `ExternalName` 类型。 {{< /note >}} -您也可以使用 [Ingress](/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 -Ingress 不是服务类型,但它充当集群的入口点。 它可以将路由规则整合到一个资源中,因为它可以在同一IP地址下公开多个服务。 +您也可以使用 [Ingress](/zh/docs/concepts/services-networking/ingress/) 来暴露自己的服务。 +Ingress 不是服务类型,但它充当集群的入口点。 +它可以将路由规则整合到一个资源中,因为它可以在同一IP地址下公开多个服务。 <!-- ### Type NodePort {#nodeport} @@ -870,10 +886,23 @@ allocates a port from a range specified by `--service-node-port-range` flag (def Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its `.spec.ports[*].nodePort` field. - If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10. This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node. +--> +### NodePort 类型 {#nodeport} + +如果将 `type` 字段设置为 `NodePort`,则 Kubernetes 控制平面将在 `--service-node-port-range` 标志指定的范围内分配端口(默认值:30000-32767)。 +每个节点将那个端口(每个节点上的相同端口号)代理到您的服务中。 +您的服务在其 `.spec.ports[*].nodePort` 字段中要求分配的端口。 + +如果您想指定特定的 IP 代理端口,则可以将 kube-proxy 中的 `--nodeport-addresses` +标志设置为特定的 IP 块。从 Kubernetes v1.10 开始支持此功能。 + +该标志采用逗号分隔的 IP 块列表(例如,`10.0.0.0/8`、`192.0.2.0/25`)来指定 +kube-proxy 应该认为是此节点本地的 IP 地址范围。 + +<!-- For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases). If you want a specific port number, you can specify a value in the `nodePort` @@ -882,56 +911,32 @@ the API transaction failed. This means that you need to take care about possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use. 
+--> +例如,如果您使用 `--nodeport-addresses=127.0.0.0/8` 标志启动 kube-proxy,则 kube-proxy 仅选择 NodePort Services 的环回接口。 +`--nodeport-addresses` 的默认值是一个空列表。 +这意味着 kube-proxy 应该考虑 NodePort 的所有可用网络接口。 +(这也与早期的 Kubernetes 版本兼容)。 +如果需要特定的端口号,则可以在 `nodePort` 字段中指定一个值。控制平面将为您分配该端口或向API报告事务失败。 +这意味着您需要自己注意可能发生的端口冲突。您还必须使用有效的端口号,该端口号在配置用于NodePort的范围内。 + +<!-- Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to just expose one or more nodes' IPs directly. Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort` -and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).) +and `.spec.clusterIP:spec.ports[*].port`. (If the `-nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).) For example: -```yaml -apiVersion: v1 -kind: Service -metadata: - name: my-service -spec: - type: NodePort - selector: - app: MyApp - ports: - # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - - port: 80 - targetPort: 80 - # Optional field - # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) - nodePort: 30007 -``` --> - -### NodePort 类型 - -如果将 `type` 字段设置为 `NodePort`,则 Kubernetes 控制平面将在 `--service-node-port-range` 标志指定的范围内分配端口(默认值:30000-32767)。 -每个节点将那个端口(每个节点上的相同端口号)代理到您的服务中。 -您的服务在其 `.spec.ports[*].nodePort` 字段中要求分配的端口。 - -如果您想指定特定的IP代理端口,则可以将 kube-proxy 中的 `--nodeport-addresses` 标志设置为特定的IP块。从Kubernetes v1.10开始支持此功能。 - -该标志采用逗号分隔的IP块列表(例如10.0.0.0/8、192.0.2.0/25)来指定 kube-proxy 应该认为是此节点本地的IP地址范围。 - -例如,如果您使用 `--nodeport-addresses=127.0.0.0/8` 标志启动 kube-proxy,则 kube-proxy 仅选择 NodePort Services 的环回接口。 -`--nodeport-addresses` 的默认值是一个空列表。 -这意味着 kube-proxy 应该考虑 NodePort 的所有可用网络接口。 (这也与早期的Kubernetes版本兼容)。 - -如果需要特定的端口号,则可以在 `nodePort` 字段中指定一个值。 控制平面将为您分配该端口或向API报告事务失败。 -这意味着您需要自己注意可能发生的端口冲突。 您还必须使用有效的端口号,该端口号在配置用于NodePort的范围内。 - -使用 NodePort 可以让您自由设置自己的负载平衡解决方案,配置 Kubernetes 不完全支持的环境,甚至直接暴露一个或多个节点的IP。 +使用 NodePort 可以让您自由设置自己的负载平衡解决方案,配置 Kubernetes 不完全支持的环境, +甚至直接暴露一个或多个节点的 IP。 需要注意的是,Service 能够通过 `<NodeIP>:spec.ports[*].nodePort` 和 `spec.clusterIp:spec.ports[*].port` 而对外可见。 例如: + ```yaml apiVersion: v1 kind: Service @@ -960,10 +965,13 @@ information about the provisioned balancer is published in the Service's `.status.loadBalancer` field. For example: --> -### LoadBalancer 类型 +### LoadBalancer 类型 {#loadbalancer} + +在使用支持外部负载均衡器的云提供商的服务时,设置 `type` 的值为 `"LoadBalancer"`, +将为 Service 提供负载均衡器。 +负载均衡器是异步创建的,关于被提供的负载均衡器的信息将会通过 Service 的 +`status.loadBalancer` 字段发布出去。 -使用支持外部负载均衡器的云提供商的服务,设置 `type` 的值为 `"LoadBalancer"`,将为 `Service` 提供负载均衡器。 -负载均衡器是异步创建的,关于被提供的负载均衡器的信息将会通过 `Service` 的 `status.loadBalancer` 字段被发布出去。 实例: ```yaml @@ -996,23 +1004,21 @@ the loadBalancer is set up with an ephemeral IP address. If you specify a `loadB but your cloud provider does not support the feature, the `loadbalancerIP` field that you set is ignored. --> - -来自外部负载均衡器的流量将直接打到 backend `Pod` 上,不过实际它们是如何工作的,这要依赖于云提供商。 +来自外部负载均衡器的流量将直接重定向到后端 Pod 上,不过实际它们是如何工作的,这要依赖于云提供商。 在这些情况下,将根据用户设置的 `loadBalancerIP` 来创建负载均衡器。 某些云提供商允许设置 `loadBalancerIP`。如果没有设置 `loadBalancerIP`,将会给负载均衡器指派一个临时 IP。 如果设置了 `loadBalancerIP`,但云提供商并不支持这种特性,那么设置的 `loadBalancerIP` 值将会被忽略掉。 -{{< note >}} - <!-- If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the `LoadBalancer` Service type. 
--> -如果您使用的是 SCTP,请参阅下面有关 `LoadBalancer` 服务类型的 [caveat](#caveat-sctp-loadbalancer-service-type)。 +{{< note >}} +如果您使用的是 SCTP,请参阅下面有关 `LoadBalancer` 服务类型的 +[注意事项](#caveat-sctp-loadbalancer-service-type)。 {{< /note >}} -{{< note >}} <!-- On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need to create a static type public IP address resource. This public IP address resource should @@ -1021,16 +1027,19 @@ For example, `MC_myResourceGroup_myAKSCluster_eastus`. Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357). --> +{{< note >}} 在 **Azure** 上,如果要使用用户指定的公共类型 `loadBalancerIP` ,则首先需要创建静态类型的公共IP地址资源。 此公共IP地址资源应与群集中其他自动创建的资源位于同一资源组中。 例如,`MC_myResourceGroup_myAKSCluster_eastus`。 -将分配的IP地址指定为loadBalancerIP。 确保您已更新云提供程序配置文件中的securityGroupName。 +将分配的IP地址指定为 loadBalancerIP。 确保您已更新云提供程序配置文件中的 securityGroupName。 有关对 `CreatingLoadBalancerFailed` 权限问题进行故障排除的信息, -请参阅 [与Azure Kubernetes服务(AKS)负载平衡器一起使用静态IP地址](https://docs.microsoft.com/en-us/azure/aks/static-ip)或[通过高级网络在AKS群集上创建LoadBalancerFailed](https://github.com/Azure/AKS/issues/357)。 +请参阅 [与Azure Kubernetes服务(AKS)负载平衡器一起使用静态IP地址](https://docs.microsoft.com/en-us/azure/aks/static-ip) +或[通过高级网络在AKS群集上创建LoadBalancerFailed](https://github.com/Azure/AKS/issues/357)。 {{< /note >}} <!-- #### Internal load balancer + In a mixed environment it is sometimes necessary to route traffic from Services inside the same (virtual) network address block. @@ -1039,12 +1048,11 @@ In a split-horizon DNS environment you would need two Services to be able to rou You can achieve this by adding one the following annotations to a Service. The annotation to add depends on the cloud Service provider you're using. --> - #### 内部负载均衡器 在混合环境中,有时有必要在同一(虚拟)网络地址块内路由来自服务的流量。 -在水平分割 DNS 环境中,您需要两个服务才能将内部和外部流量都路由到您的 endpoints。 +在水平分割 DNS 环境中,您需要两个服务才能将内部和外部流量都路由到您的端点(Endpoints)。 您可以通过向服务添加以下注释之一来实现此目的。 要添加的注释取决于您使用的云服务提供商。 @@ -1069,7 +1077,7 @@ Use `cloud.google.com/load-balancer-type: "internal"` for masters with version 1 For more information, see the [docs](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing). --> 将 `cloud.google.com/load-balancer-type: "internal"` 节点用于版本1.7.0至1.7.3的主服务器。 -有关更多信息,请参见 [文档](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing). +有关更多信息,请参见[文档](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing)。 {{% /tab %}} {{% tab name="AWS" %}} ```yaml @@ -1171,10 +1179,10 @@ modifying the headers. 
In a mixed-use environment where some ports are secured and others are left unencrypted, you can use the following annotations: --> - 第二个注释指定 Pod 使用哪种协议。 对于 HTTPS 和 SSL,ELB 希望 Pod 使用证书通过加密连接对自己进行身份验证。 -HTTP 和 HTTPS 选择第7层代理:ELB 终止与用户的连接,解析标头,并在转发请求时向 `X-Forwarded-For` 标头注入用户的 IP 地址(Pod 仅在连接的另一端看到 ELB 的 IP 地址)。 +HTTP 和 HTTPS 选择第7层代理:ELB 终止与用户的连接,解析标头,并在转发请求时向 +`X-Forwarded-For` 标头注入用户的 IP 地址(Pod 仅在连接的另一端看到 ELB 的 IP 地址)。 TCP 和 SSL 选择第4层代理:ELB 转发流量而不修改报头。 @@ -1197,8 +1205,11 @@ From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://do To see which policies are available for use, you can use the `aws` command line tool: --> -从Kubernetes v1.9起可以使用 [预定义的 AWS SSL 策略](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) 为您的服务使用HTTPS或SSL侦听器。 +从 Kubernetes v1.9 起可以使用 +[预定义的 AWS SSL 策略](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) +为您的服务使用 HTTPS 或 SSL 侦听器。 要查看可以使用哪些策略,可以使用 `aws` 命令行工具: + ```bash aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName' ``` @@ -1208,10 +1219,8 @@ You can then specify any one of those policies using the "`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`" annotation; for example: --> - -然后,您可以使用 -"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`" -注解; 例如: +然后,您可以使用 "`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`" 注解; +例如: ```yaml metadata: @@ -1227,10 +1236,9 @@ To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protoc support for clusters running on AWS, you can use the following service annotation: --> +#### AWS 上的 PROXY 协议支持 -#### AWS上的PROXY协议支持 - -为了支持在AWS上运行的集群,启用 [PROXY协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt), +为了支持在 AWS 上运行的集群,启用 [PROXY 协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。 您可以使用以下服务注释: ```yaml @@ -1244,17 +1252,28 @@ annotation: Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB and cannot be configured otherwise. --> - -从1.3.0版开始,此注释的使用适用于 ELB 代理的所有端口,并且不能进行其他配置。 +从 1.3.0 版开始,此注释的使用适用于 ELB 代理的所有端口,并且不能进行其他配置。 <!-- ### External IPs + If there are external IPs that route to one or more cluster nodes, Kubernetes services can be exposed on those `externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the service port, will be routed to one of the service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility of the cluster administrator. In the `ServiceSpec`, `externalIPs` can be specified along with any of the `ServiceTypes`. In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`"" (`externalIP:port`) +--> +### 外部 IP + +如果有一些外部 IP 地址能够路由到一个或多个集群节点,Kubernetes 服务可以在这些 +`externalIPs` 上暴露出来。 +通过外部 IP 进入集群的入站请求,如果指向的是服务的端口,会被路由到服务的末端之一。 +`externalIPs` 不受 Kubernets 管理;它们由集群管理员管理。 +在服务规约中,`externalIPs` 可以和 `ServiceTypes` 一起指定。 +在上面的例子中,客户端可以通过 "`80.11.12.10:80`" (`externalIP:port`) 访问 "`my-service`" +服务。 + ```yaml kind: Service apiVersion: v1 @@ -1271,7 +1290,6 @@ spec: externalIPs: - 80.11.12.10 ``` ---> <!-- #### ELB Access Logs on AWS @@ -1292,18 +1310,20 @@ stored. The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` specifies the logical hierarchy you created for your Amazon S3 bucket. 
--> +#### AWS 上的 ELB 访问日志 -#### AWS上的ELB访问日志 - -有几个注释可用于管理AWS上ELB服务的访问日志。 +有几个注释可用于管理 AWS 上 ELB 服务的访问日志。 注释 `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled` 控制是否启用访问日志。 -注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` 控制发布访问日志的时间间隔(以分钟为单位)。 您可以指定5分钟或60分钟的间隔。 +注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval` +控制发布访问日志的时间间隔(以分钟为单位)。您可以指定 5 分钟或 60 分钟的间隔。 -注释 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` 控制存储负载均衡器访问日志的Amazon S3存储桶的名称。 +注释 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name` +控制存储负载均衡器访问日志的 Amazon S3 存储桶的名称。 -注释 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` 指定为Amazon S3存储桶创建的逻辑层次结构。 +注释 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix` +指定为 Amazon S3 存储桶创建的逻辑层次结构。 ```yaml metadata: @@ -1328,11 +1348,12 @@ to the value of `"true"`. The annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. --> +#### AWS 上的连接排空 -#### AWS上的连接排空 - -可以将注释 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` 设置为 `"true"` 的值来管理 ELB 的连接消耗。 -注释 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` 也可以用于设置最大时间(以秒为单位),以保持现有连接在注销实例之前保持打开状态。 +可以将注解 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` +设置为 `"true"` 来管理 ELB 的连接排空。 +注释 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` +也可以用于设置最大时间(以秒为单位),以保持现有连接在注销实例之前保持打开状态。 ```yaml metadata: @@ -1347,53 +1368,57 @@ also be used to set maximum time, in seconds, to keep the existing connections o There are other annotations to manage Classic Elastic Load Balancers that are described below. --> - -#### 其他ELB注释 +#### 其他 ELB 注解 还有其他一些注释,用于管理经典弹性负载均衡器,如下所述。 + ```yaml metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60" - # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer + # 按秒计的时间,表示负载均衡器关闭连接之前连接可以保持空闲 + # (连接上无数据传输)的时间长度 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" - # Specifies whether cross-zone load balancing is enabled for the load balancer + # 指定该负载均衡器上是否启用跨区的负载均衡能力 service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops" - # A comma-separated list of key-value pairs which will be recorded as - # additional tags in the ELB. + # 逗号分隔列表值,每一项都是一个键-值耦对,会作为额外的标签记录于 ELB 中 service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "" - # The number of successive successful health checks required for a backend to - # be considered healthy for traffic. Defaults to 2, must be between 2 and 10 + # 将某后端视为健康、可接收请求之前需要达到的连续成功健康检查次数。 + # 默认为 2,必须介于 2 和 10 之间 service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" - # The number of unsuccessful health checks required for a backend to be - # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10 + # 将某后端视为不健康、不可接收请求之前需要达到的连续不成功健康检查次数。 + # 默认为 6,必须介于 2 和 10 之间 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20" - # The approximate interval, in seconds, between health checks of an - # individual instance. 
Defaults to 10, must be between 5 and 300 + # 对每个实例进行健康检查时,连续两次检查之间的大致间隔秒数 + # 默认为 10,必须介于 5 和 300 之间 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5" - # The amount of time, in seconds, during which no response means a failed - # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval - # value. Defaults to 5, must be between 2 and 60 + # 时长秒数,在此期间没有响应意味着健康检查失败 + # 此值必须小于 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval + # 默认值为 5,必须介于 2 和 60 之间 service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" - # A list of additional security groups to be added to the ELB + # 要添加到 ELB 上的额外安全组列表 ``` <!-- #### Network Load Balancer support on AWS {#aws-nlb-support} --> +#### AWS 上负载均衡器支持 {#aws-nlb-support} {{< feature-state for_k8s_version="v1.15" state="beta" >}} <!-- To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`. --> +要在 AWS 上使用网络负载均衡器,可以使用注解 +`service.beta.kubernetes.io/aws-load-balancer-type`,将其取值设为 `nlb`。 ```yaml metadata: @@ -1402,14 +1427,16 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet service.beta.kubernetes.io/aws-load-balancer-type: "nlb" ``` -{{< note >}} - <!-- NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) on Elastic Load Balancing for a list of supported instance types. --> -NLB 仅适用于某些实例类。 有关受支持的实例类型的列表,请参见 Elastic Load Balancing 上的 [AWS文档](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)。 +{{< note >}} +NLB 仅适用于某些实例类。有关受支持的实例类型的列表, +请参见 +[AWS文档](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) +中关于所支持的实例类型的 Elastic Load Balancing 说明。 {{< /note >}} <!-- @@ -1423,9 +1450,18 @@ propagated to the end Pods, but this could result in uneven distribution of traffic. Nodes without any Pods for a particular LoadBalancer Service will fail the NLB Target Group's health check on the auto-assigned `.spec.healthCheckNodePort` and not receive any traffic. +--> +与经典弹性负载平衡器不同,网络负载平衡器(NLB)将客户端的 IP 地址转发到该节点。 +如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的IP地址不会传达到最终的 Pod。 +通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端IP地址将传播到最终的 Pod, +但这可能导致流量分配不均。 +没有针对特定 LoadBalancer 服务的任何 Pod 的节点将无法通过自动分配的 +`.spec.healthCheckNodePort` 进行 NLB 目标组的运行状况检查,并且不会收到任何流量。 + +<!-- In order to achieve even traffic, either use a DaemonSet, or specify a -[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) +[pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to not locate on the same node. 
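To make the trade-off described above concrete, the following is a minimal sketch (the Service name and selector are illustrative, not taken from this page) of an NLB-backed Service that preserves client source IPs with `externalTrafficPolicy: Local`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service                 # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local         # preserves the client IP; traffic may be unevenly distributed
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
```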
You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer) @@ -1435,20 +1471,18 @@ In order for client traffic to reach instances behind an NLB, the Node security groups are modified with the following IP rules: --> -与经典弹性负载平衡器不同,网络负载平衡器(NLB)将客户端的 IP 地址转发到该节点。 如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的IP地址不会传达到终端 Pod。 +为了获得均衡流量,请使用 DaemonSet 或指定 +[Pod 反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) +使其不在同一节点上。 -通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端IP地址将传播到终端 Pod,但这可能导致流量分配不均。 -没有针对特定 LoadBalancer 服务的任何 Pod 的节点将无法通过自动分配的 `.spec.healthCheckNodePort` 进行 NLB 目标组的运行状况检查,并且不会收到任何流量。 - -为了获得平均流量,请使用DaemonSet或指定 [pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)使其不在同一节点上。 - -您还可以将NLB服务与 [内部负载平衡器](/docs/concepts/services-networking/service/#internal-load-balancer)批注一起使用。 +你还可以将 NLB 服务与[内部负载平衡器](/zh/docs/concepts/services-networking/service/#internal-load-balancer) +注解一起使用。 为了使客户端流量能够到达 NLB 后面的实例,使用以下 IP 规则修改了节点安全组: | Rule | Protocol | Port(s) | IpRange(s) | IpRange Description | |------|----------|---------|------------|---------------------| -| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> | +| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy=Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> | | Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> | | MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> | @@ -1456,7 +1490,6 @@ groups are modified with the following IP rules: In order to limit which client IP's can access the Network Load Balancer, specify `loadBalancerSourceRanges`. --> - 为了限制哪些客户端IP可以访问网络负载平衡器,请指定 `loadBalancerSourceRanges`。 ```yaml @@ -1465,15 +1498,13 @@ spec: - "143.231.0.0/16" ``` -{{< note >}} - <!-- If `.spec.loadBalancerSourceRanges` is not set, Kubernetes allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have public IP addresses, be aware that non-NLB traffic can also reach all instances in those modified security groups. --> - +{{< note >}} 如果未设置 `.spec.loadBalancerSourceRanges` ,则 Kubernetes 允许从 `0.0.0.0/0` 到节点安全组的流量。 如果节点具有公共 IP 地址,请注意,非 NLB 流量也可以到达那些修改后的安全组中的所有实例。 {{< /note >}} @@ -1488,7 +1519,7 @@ This Service definition, for example, maps the `my-service` Service in the `prod` namespace to `my.database.example.com`: --> -### 类型ExternalName {#externalname} +### ExternalName 类型 {#externalname} 类型为 ExternalName 的服务将服务映射到 DNS 名称,而不是典型的选择器,例如 `my-service` 或者 `cassandra`。 您可以使用 `spec.externalName` 参数指定这些服务。 @@ -1505,15 +1536,15 @@ spec: type: ExternalName externalName: my.database.example.com ``` -{{< note >}} <!-- ExternalName accepts an IPv4 address string, but as a DNS names comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode an IP address, consider using [headless Services](#headless-services). 
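As a rough sketch of that alternative (the names, port, and example IP are illustrative), a headless Service without a selector can be paired with a manually created Endpoints object to point clients at a fixed address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db                    # illustrative name
spec:
  clusterIP: None                      # headless: DNS resolves directly to the endpoint addresses
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db                    # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.42                 # hard-coded address (documentation example range)
    ports:
      - port: 5432
```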
--> - -ExternalName 接受 IPv4 地址字符串,但作为包含数字的 DNS 名称,而不是 IP 地址。 类似于 IPv4 地址的外部名称不能由 CoreDNS 或 ingress-nginx 解析,因为外部名称旨在指定规范的 DNS 名称。 +{{< note >}} +ExternalName 服务接受 IPv4 地址字符串,但作为包含数字的 DNS 名称,而不是 IP 地址。 +类似于 IPv4 地址的外部名称不能由 CoreDNS 或 ingress-nginx 解析,因为外部名称旨在指定规范的 DNS 名称。 要对 IP 地址进行硬编码,请考虑使用 [headless Services](#headless-services)。 {{< /note >}} @@ -1526,17 +1557,18 @@ forwarding. Should you later decide to move your database into your cluster, you can start its Pods, add appropriate selectors or endpoints, and change the Service's `type`. --> - -当查找主机 `my-service.prod.svc.cluster.local` 时,群集DNS服务返回 `CNAME` 记录,其值为 `my.database.example.com`。 +当查找主机 `my-service.prod.svc.cluster.local` 时,集群 DNS 服务返回 `CNAME` 记录, +其值为 `my.database.example.com`。 访问 `my-service` 的方式与其他服务的方式相同,但主要区别在于重定向发生在 DNS 级别,而不是通过代理或转发。 -如果以后您决定将数据库移到群集中,则可以启动其 Pod,添加适当的选择器或端点以及更改服务的`类型`。 +如果以后您决定将数据库移到群集中,则可以启动其 Pod,添加适当的选择器或端点以及更改服务的 `type`。 -{{< note >}} <!-- This section is indebted to the [Kubernetes Tips - Part 1](https://akomljen.com/kubernetes-tips-part-1/) blog post from [Alen Komljen](https://akomljen.com/). --> -本部分感谢 [Alen Komljen](https://akomljen.com/)的 [Kubernetes Tips - Part1](https://akomljen.com/kubernetes-tips-part-1/) 博客文章。 +{{< note >}} +本部分感谢 [Alen Komljen](https://akomljen.com/)的 +[Kubernetes Tips - Part1](https://akomljen.com/kubernetes-tips-part-1/) 博客文章。 {{< /note >}} <!-- @@ -1550,14 +1582,13 @@ of the cluster administrator. In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`. In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`) --> +### 外部 IP {#external-ips} -### 外部 IP - -如果外部的 IP 路由到集群中一个或多个 Node 上,Kubernetes `Service` 会被暴露给这些 `externalIPs`。 -通过外部 IP(作为目的 IP 地址)进入到集群,打到 `Service` 的端口上的流量,将会被路由到 `Service` 的 Endpoint 上。 +如果外部的 IP 路由到集群中一个或多个 Node 上,Kubernetes Service 会被暴露给这些 externalIPs。 +通过外部 IP(作为目的 IP 地址)进入到集群,打到 Service 的端口上的流量,将会被路由到 Service 的 Endpoint 上。 `externalIPs` 不会被 Kubernetes 管理,它属于集群管理员的职责范畴。 -根据 `Service` 的规定,`externalIPs` 可以同任意的 `ServiceType` 来一起指定。 +根据 Service 的规定,`externalIPs` 可以同任意的 `ServiceType` 来一起指定。 在上面的例子中,`my-service` 可以在 "`80.11.12.10:80`"(`externalIP:port`) 上被客户端访问。 ```yaml @@ -1597,19 +1628,18 @@ previous. This is not strictly required on all cloud providers (e.g. Google Com not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does) but the current API requires it. --> - ## 不足之处 -为 VIP 使用 userspace 代理,将只适合小型到中型规模的集群,不能够扩展到上千 `Service` 的大型集群。 -查看 [最初设计方案](http://issue.k8s.io/1107) 获取更多细节。 +为 VIP 使用用户空间代理,将只适合小型到中型规模的集群,不能够扩展到上千 Service 的大型集群。 +查看[最初设计方案](https://issue.k8s.io/1107) 获取更多细节。 -使用 userspace 代理,隐藏了访问 `Service` 的数据包的源 IP 地址。 +使用用户空间代理,隐藏了访问 Service 的数据包的源 IP 地址。 这使得一些类型的防火墙无法起作用。 iptables 代理不会隐藏 Kubernetes 集群内部的 IP 地址,但却要求客户端请求必须通过一个负载均衡器或 Node 端口。 `Type` 字段支持嵌套功能 —— 每一层需要添加到上一层里面。 -不会严格要求所有云提供商(例如,GCE 就没必要为了使一个 `LoadBalancer` 能工作而分配一个 `NodePort`,但是 AWS 需要 ),但当前 API 是强制要求的。 - +不会严格要求所有云提供商(例如,GCE 就没必要为了使一个 `LoadBalancer` +能工作而分配一个 `NodePort`,但是 AWS 需要 ),但当前 API 是强制要求的。 <!-- ## Virtual IP implementation {#the-gory-details-of-virtual-ips} @@ -1618,10 +1648,9 @@ The previous information should be sufficient for many people who just want to use Services. However, there is a lot going on behind the scenes that may be worth understanding. 
--> - ## 虚拟IP实施 {#the-gory-details-of-virtual-ips} -对很多想使用 `Service` 的人来说,前面的信息应该足够了。 +对很多想使用 Service 的人来说,前面的信息应该足够了。 然而,有很多内部原理性的内容,还是值去理解的。 <!-- @@ -1648,9 +1677,7 @@ map (needed to support migrating from older versions of Kubernetes that used in-memory locking). Kubernetes also uses controllers to check for invalid assignments (eg due to administrator intervention) and for cleaning up allocated IP addresses that are no longer used by any Services. - --> - ### 避免冲突 Kubernetes 最主要的哲学之一,是用户不应该暴露那些能够导致他们操作失败、但又不是他们的过错的场景。 @@ -1660,10 +1687,16 @@ Kubernetes 最主要的哲学之一,是用户不应该暴露那些能够导致 为了使用户能够为他们的 Service 选择一个端口号,我们必须确保不能有2个 Service 发生冲突。 Kubernetes 通过为每个 Service 分配它们自己的 IP 地址来实现。 -为了保证每个 Service 被分配到一个唯一的 IP,需要一个内部的分配器能够原子地更新 {{< glossary_tooltip term_id="etcd" >}} 中的一个全局分配映射表,这个更新操作要先于创建每一个 Service。 -为了使 Service 能够获取到 IP,这个映射表对象必须在注册中心存在,否则创建 Service 将会失败,指示一个 IP 不能被分配。 +为了保证每个 Service 被分配到一个唯一的 IP,需要一个内部的分配器能够原子地更新 +{{< glossary_tooltip term_id="etcd" >}} 中的一个全局分配映射表, +这个更新操作要先于创建每一个 Service。 +为了使 Service 能够获取到 IP,这个映射表对象必须在注册中心存在, +否则创建 Service 将会失败,指示一个 IP 不能被分配。 -在控制平面中,一个后台 Controller 的职责是创建映射表(需要支持从使用了内存锁的 Kubernetes 的旧版本迁移过来)。同时 Kubernetes 会通过控制器检查不合理的分配(如管理员干预导致的)以及清理已被分配但不再被任何 Service 使用的 IP 地址。 +在控制平面中,一个后台 Controller 的职责是创建映射表 +(需要支持从使用了内存锁的 Kubernetes 的旧版本迁移过来)。 +同时 Kubernetes 会通过控制器检查不合理的分配(如管理员干预导致的) +以及清理已被分配但不再被任何 Service 使用的 IP 地址。 <!-- ### Service IP addresses {#ips-and-vips} @@ -1679,17 +1712,16 @@ terms of the Service's virtual IP address (and port). kube-proxy supports three proxy modes—userspace, iptables and IPVS—which each operate slightly differently. --> - ### Service IP 地址 {#ips-and-vips} -不像 `Pod` 的 IP 地址,它实际路由到一个固定的目的地,`Service` 的 IP 实际上不能通过单个主机来进行应答。 +不像 Pod 的 IP 地址,它实际路由到一个固定的目的地,Service 的 IP 实际上不能通过单个主机来进行应答。 相反,我们使用 `iptables`(Linux 中的数据包处理逻辑)来定义一个虚拟IP地址(VIP),它可以根据需要透明地进行重定向。 当客户端连接到 VIP 时,它们的流量会自动地传输到一个合适的 Endpoint。 -环境变量和 DNS,实际上会根据 `Service` 的 VIP 和端口来进行填充。 +环境变量和 DNS,实际上会根据 Service 的 VIP 和端口来进行填充。 kube-proxy支持三种代理模式: 用户空间,iptables和IPVS;它们各自的操作略有不同。 -#### Userspace +#### Userspace {#userspace} <!-- As an example, consider the image processing application described above. @@ -1710,18 +1742,15 @@ of which Pods they are actually accessing. --> 作为一个例子,考虑前面提到的图片处理应用程序。 -当创建 backend `Service` 时,Kubernetes master 会给它指派一个虚拟 IP 地址,比如 10.0.0.1。 -假设 `Service` 的端口是 1234,该 `Service` 会被集群中所有的 `kube-proxy` 实例观察到。 -当代理看到一个新的 `Service`, 它会打开一个新的端口,建立一个从该 VIP 重定向到新端口的 iptables,并开始接收请求连接。 +当创建后端 Service 时,Kubernetes master 会给它指派一个虚拟 IP 地址,比如 10.0.0.1。 +假设 Service 的端口是 1234,该 Service 会被集群中所有的 `kube-proxy` 实例观察到。 +当代理看到一个新的 Service, 它会打开一个新的端口,建立一个从该 VIP 重定向到新端口的 iptables,并开始接收请求连接。 +当一个客户端连接到一个 VIP,iptables 规则开始起作用,它会重定向该数据包到 "服务代理" 的端口。 +"服务代理" 选择一个后端,并将客户端的流量代理到后端上。 - -当一个客户端连接到一个 VIP,iptables 规则开始起作用,它会重定向该数据包到 `Service代理` 的端口。 -`Service代理` 选择一个 backend,并将客户端的流量代理到 backend 上。 - -这意味着 `Service` 的所有者能够选择任何他们想使用的端口,而不存在冲突的风险。 -客户端可以简单地连接到一个 IP 和端口,而不需要知道实际访问了哪些 `Pod`。 - +这意味着 Service 的所有者能够选择任何他们想使用的端口,而不存在冲突的风险。 +客户端可以简单地连接到一个 IP 和端口,而不需要知道实际访问了哪些 Pod。 #### iptables @@ -1745,16 +1774,18 @@ address. This same basic flow executes when traffic comes in through a node-port or through a load-balancer, though in those cases the client IP does get altered. 
--> - 再次考虑前面提到的图片处理应用程序。 -当创建 backend `Service` 时,Kubernetes 控制面板会给它指派一个虚拟 IP 地址,比如 10.0.0.1。 -假设 `Service` 的端口是 1234,该 `Service` 会被集群中所有的 `kube-proxy` 实例观察到。 -当代理看到一个新的 `Service`, 它会配置一系列的 iptables 规则,从 VIP 重定向到 per-`Service` 规则。 -该 per-`Service` 规则连接到 per-`Endpoint` 规则,该 per-`Endpoint` 规则会重定向(目标 NAT)到 backend。 +当创建后端 Service 时,Kubernetes 控制面板会给它指派一个虚拟 IP 地址,比如 10.0.0.1。 +假设 Service 的端口是 1234,该 Service 会被集群中所有的 `kube-proxy` 实例观察到。 +当代理看到一个新的 Service, 它会配置一系列的 iptables 规则,从 VIP 重定向到每个 Service 规则。 +该特定于服务的规则连接到特定于 Endpoint 的规则,而后者会重定向(目标地址转译)到后端。 -当一个客户端连接到一个 VIP,iptables 规则开始起作用。一个 backend 会被选择(或者根据会话亲和性,或者随机),数据包被重定向到这个 backend。 -不像 userspace 代理,数据包从来不拷贝到用户空间,kube-proxy 不是必须为该 VIP 工作而运行,并且客户端 IP 是不可更改的。 -当流量打到 Node 的端口上,或通过负载均衡器,会执行相同的基本流程,但是在那些案例中客户端 IP 是可以更改的。 +当客户端连接到一个 VIP,iptables 规则开始起作用。一个后端会被选择(或者根据会话亲和性,或者随机), +数据包被重定向到这个后端。 +不像用户空间代理,数据包从来不拷贝到用户空间,kube-proxy 不是必须为该 VIP 工作而运行, +并且客户端 IP 是不可更改的。 +当流量打到 Node 的端口上,或通过负载均衡器,会执行相同的基本流程, +但是在那些案例中客户端 IP 是可以更改的。 #### IPVS @@ -1762,21 +1793,20 @@ through a load-balancer, though in those cases the client IP does get altered. iptables operations slow down dramatically in large scale cluster e.g 10,000 Services. IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence). --> - -在大规模集群(例如10,000个服务)中,iptables 操作会显着降低速度。 IPVS 专为负载平衡而设计,并基于内核内哈希表。 +在大规模集群(例如 10000 个服务)中,iptables 操作会显着降低速度。 IPVS 专为负载平衡而设计,并基于内核内哈希表。 因此,您可以通过基于 IPVS 的 kube-proxy 在大量服务中实现性能一致性。 同时,基于 IPVS 的 kube-proxy 具有更复杂的负载平衡算法(最小连接,局部性,加权,持久性)。 -## API Object +## API 对象 <!-- Service is a top-level resource in the Kubernetes REST API. You can find more details about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). --> -Service 是Kubernetes REST API中的顶级资源。 您可以在以下位置找到有关API对象的更多详细信息: +Service 是 Kubernetes REST API 中的顶级资源。您可以在以下位置找到有关A PI 对象的更多详细信息: [Service 对象 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). -## Supported protocols {#protocol-support} +## 受支持的协议 {#protocol-support} ### TCP @@ -1785,7 +1815,7 @@ Service 是Kubernetes REST API中的顶级资源。 您可以在以下位置找 <!-- You can use TCP for any kind of Service, and it's the default network protocol. --> -您可以将TCP用于任何类型的服务,这是默认的网络协议。 +您可以将 TCP 用于任何类型的服务,这是默认的网络协议。 ### UDP @@ -1795,7 +1825,7 @@ You can use TCP for any kind of Service, and it's the default network protocol. You can use UDP for most Services. For type=LoadBalancer Services, UDP support depends on the cloud provider offering this facility. --> -您可以将UDP用于大多数服务。 对于 type=LoadBalancer 服务,对 UDP 的支持取决于提供此功能的云提供商。 +您可以将 UDP 用于大多数服务。 对于 type=LoadBalancer 服务,对 UDP 的支持取决于提供此功能的云提供商。 ### HTTP @@ -1808,13 +1838,12 @@ of the Service. --> 如果您的云提供商支持它,则可以在 LoadBalancer 模式下使用服务来设置外部 HTTP/HTTPS 反向代理,并将其转发到该服务的 Endpoints。 -{{< note >}} - <!-- You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service to expose HTTP / HTTPS Services. 
--> -您还可以使用 {{< glossary_tooltip term_id="ingress" >}} 代替 Service 来公开HTTP / HTTPS服务。 +{{< note >}} +您还可以使用 {{< glossary_tooltip text="Ingres" term_id="ingress" >}} 代替 Service 来公开 HTTP/HTTPS 服务。 {{< /note >}} <!-- @@ -1834,12 +1863,13 @@ The load balancer will send an initial series of octets describing the incoming connection, similar to this example --> -如果您的云提供商支持它(例如, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)), -则可以在 LoadBalancer 模式下使用 Service 在 Kubernetes 本身之外配置负载均衡器,该负载均衡器将转发前缀为 [PROXY协议][PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) 的连接。 +如果您的云提供商支持它(例如, [AWS](/zh/docs/concepts/cluster-administration/cloud-providers/#aws)), +则可以在 LoadBalancer 模式下使用 Service 在 Kubernetes 本身之外配置负载均衡器, +该负载均衡器将转发前缀为 [PROXY协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) +的连接。 负载平衡器将发送一系列初始字节,描述传入的连接,类似于此示例 - ``` PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n ``` @@ -1854,16 +1884,17 @@ followed by the data from the client. {{< feature-state for_k8s_version="v1.12" state="alpha" >}} <!-- -Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `--feature-gates=SCTPSupport=true,…`. +Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `-feature-gates=SCTPSupport=true,…`. When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoint, NetworkPolicy or Pod to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. --> -Kubernetes 支持 SCTP 作为 Service,Endpoint,NetworkPolicy 和 Pod 定义中的 `协议` 值作为alpha功能。 -要启用此功能,集群管理员需要在apiserver上启用 `SCTPSupport` 功能门,例如 `--feature-gates = SCTPSupport = true,…`。 +作为一种 alpha 功能,Kubernetes 支持 SCTP 作为 Service、Endpoint、NetworkPolicy 和 Pod 定义中的 `protocol` 值。 +要启用此功能,集群管理员需要在 API 服务器上启用 `SCTPSupport` 特性门控, +例如 `--feature-gates=SCTPSupport=true,...`。 -启用功能门后,您可以将服务,端点,NetworkPolicy或Pod的 `protocol` 字段设置为 `SCTP`。 -Kubernetes相应地为 SCTP 关联设置网络,就像为 TCP 连接一样。 +启用特性门控后,你可以将 Service、Endpoints、NetworkPolicy 或 Pod 的 `protocol` 字段设置为 `SCTP`。 +Kubernetes 相应地为 SCTP 关联设置网络,就像为 TCP 连接所做的一样。 <!-- #### Warnings {#caveat-sctp-overview} @@ -1875,14 +1906,13 @@ Kubernetes相应地为 SCTP 关联设置网络,就像为 TCP 连接一样。 ##### 支持多宿主SCTP关联 {#caveat-sctp-multihomed} -{{< warning >}} - <!-- The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. --> -对多宿主 SCTP 关联的支持要求CNI插件可以支持将多个接口和 IP 地址分配给 Pod。 +{{< warning >}} +对多宿主 SCTP 关联的支持要求 CNI 插件可以支持将多个接口和 IP 地址分配给 Pod。 用于多宿主 SCTP 关联的 NAT 在相应的内核模块中需要特殊的逻辑。 {{< /warning >}} @@ -1891,31 +1921,32 @@ NAT for multihomed SCTP associations requires special logic in the corresponding --> ##### Service 类型为 LoadBalancer 的服务 {#caveat-sctp-loadbalancer-service-type} -{{< warning >}} <!-- You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. 
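For reference, a minimal sketch of a ClusterIP Service that uses SCTP, assuming the `SCTPSupport` feature gate described above is enabled (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sctp-service                # illustrative name
spec:
  type: ClusterIP                      # note the caveat here about type LoadBalancer with SCTP
  selector:
    app: MySctpApp
  ports:
    - protocol: SCTP
      port: 9999
      targetPort: 9999
```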
The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP. --> -如果云提供商的负载平衡器实现支持将 SCTP 作为协议,则只能使用 `类型` LoadBalancer 加上 `协议` SCTP 创建服务。 否则,服务创建请求将被拒绝。 当前的云负载平衡器提供商(Azure,AWS,CloudStack,GCE,OpenStack)都缺乏对 SCTP 的支持。 + +{{< warning >}} +如果云提供商的负载平衡器实现支持将 SCTP 作为协议,则只能使用 `type` LoadBalancer 加上 +`protocol` SCTP 创建服务。否则,服务创建请求将被拒绝。 +当前的云负载平衡器提供商(Azure、AWS、CloudStack、GCE、OpenStack)都缺乏对 SCTP 的支持。 {{< /warning >}} ##### Windows {#caveat-sctp-windows-os} -{{< warning >}} - <!-- SCTP is not supported on Windows based nodes. --> -基于Windows的节点不支持SCTP。 +{{< warning >}} +基于 Windows 的节点不支持 SCTP。 {{< /warning >}} -##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} - -{{< warning >}} +##### 用户空间 kube-proxy {#caveat-sctp-kube-proxy-userspace} <!-- The kube-proxy does not support the management of SCTP associations when it is in userspace mode. --> +{{< warning >}} 当 kube-proxy 处于用户空间模式时,它不支持 SCTP 关联的管理。 {{< /warning >}} @@ -1934,27 +1965,23 @@ which encompass the current ClusterIP, NodePort, and LoadBalancer modes and more --> ## 未来工作 -未来我们能预见到,代理策略可能会变得比简单的 round-robin 均衡策略有更多细微的差别,比如 master 选举或分片。 -我们也能想到,某些 `Service` 将具有 “真正” 的负载均衡器,这种情况下 VIP 将简化数据包的传输。 - -Kubernetes 项目打算为 L7(HTTP)`Service` 改进我们对它的支持。 - -Kubernetes 项目打算为 `Service` 实现更加灵活的请求进入模式,这些 `Service` 包含当前 `ClusterIP`、`NodePort` 和 `LoadBalancer` 模式,或者更多。 - +未来我们能预见到,代理策略可能会变得比简单的轮转均衡策略有更多细微的差别,比如主控节点选举或分片。 +我们也能想到,某些 Service 将具有 “真正” 的负载均衡器,这种情况下 VIP 将简化数据包的传输。 +Kubernetes 项目打算为 L7(HTTP)服务改进支持。 +Kubernetes 项目打算为 Service 实现更加灵活的请求进入模式, +这些模式包含当前的 `ClusterIP`、`NodePort` 和 `LoadBalancer` 模式,或者更多。 ## {{% heading "whatsnext" %}} - <!-- * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) * Read about [Ingress](/docs/concepts/services-networking/ingress/) * Read about [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) --> -* 阅读 [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) -* 阅读 [Ingress](/docs/concepts/services-networking/ingress/) -* 阅读 [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) - +* 阅读[使用服务访问应用](/zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读了解 [Ingress](/zh/docs/concepts/services-networking/ingress/) +* 阅读了解 [端点切片](/zh/docs/concepts/services-networking/endpoint-slices/) diff --git a/content/zh/docs/concepts/storage/dynamic-provisioning.md b/content/zh/docs/concepts/storage/dynamic-provisioning.md index 3388a7bd6b..d2e8cff8e5 100644 --- a/content/zh/docs/concepts/storage/dynamic-provisioning.md +++ b/content/zh/docs/concepts/storage/dynamic-provisioning.md @@ -4,16 +4,9 @@ content_type: concept weight: 40 --- <!-- ---- -reviewers: -- saad-ali -- jsafrane -- thockin -- msau42 title: Dynamic Volume Provisioning content_type: concept weight: 40 ---- --> <!-- overview --> @@ -29,11 +22,9 @@ automatically provisions storage when it is requested by users. --> 动态卷供应允许按需创建存储卷。 如果没有动态供应,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷, -然后在 Kubernetes 集群创建 [`PersistentVolume` 对象](/docs/concepts/storage/persistent-volumes/)来表示这些卷。 +然后在 Kubernetes 集群创建 [`PersistentVolume` 对象](/zh/docs/concepts/storage/persistent-volumes/)来表示这些卷。 动态供应功能消除了集群管理员预先配置存储的需要。 相反,它在用户请求时自动供应存储。 - - <!-- body --> <!-- @@ -66,12 +57,12 @@ have the ability to select from multiple storage options. More information on storage classes can be found [here](/docs/concepts/storage/storage-classes/). 
--> -点击[这里](/docs/concepts/storage/storage-classes/)查阅有关存储类的更多信息。 +点击[这里](/zh/docs/concepts/storage/storage-classes/)查阅有关存储类的更多信息。 <!-- ## Enabling Dynamic Provisioning --> -## 启用动态卷供应 +## 启用动态卷供应 {#enabling-dynamic-provisioning} <!-- To enable dynamic provisioning, a cluster administrator needs to pre-create @@ -208,6 +199,7 @@ Zones in a Region. Single-Zone storage backends should be provisioned in the Zon Pods are scheduled. This can be accomplished by setting the [Volume Binding Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode). --> -在[多区域](/docs/setup/multiple-zones)集群中,Pod 可以被分散到多个区域。 +在[多区域](/zh/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到多个区域。 单区域存储后端应该被供应到 Pod 被调度到的区域。 -这可以通过设置[卷绑定模式](/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。 +这可以通过设置[卷绑定模式](/zh/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。 + diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index 8dbff4f851..b2d98c25aa 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -1,14 +1,15 @@ --- -reviewers: -- jsafrane -- saad-ali -- thockin -- msau42 -title: Storage Classes +title: 存储类 content_type: concept weight: 30 --- +<!-- +title: Storage Classes +content_type: concept +weight: 30 +--> + <!-- overview --> <!-- @@ -16,10 +17,8 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) and [persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested. --> -本文描述了 Kubernetes 中 StorageClass 的概念。建议先熟悉 [卷](/docs/concepts/storage/volumes/) 和 -[持久卷](/docs/concepts/storage/persistent-volumes) 的概念。 - - +本文描述了 Kubernetes 中 StorageClass 的概念。建议先熟悉 [卷](/zh/docs/concepts/storage/volumes/) 和 +[持久卷](/zh/docs/concepts/storage/persistent-volumes) 的概念。 <!-- body --> @@ -67,7 +66,8 @@ request any particular class to bind to: see the for details. --> 管理员可以为没有申请绑定到特定 StorageClass 的 PVC 指定一个默认的存储类 : -更多详情请参阅 [PersistentVolumeClaim 章节](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 +更多详情请参阅 +[PersistentVolumeClaim 章节](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。 ```yaml apiVersion: storage.k8s.io/v1 @@ -134,7 +134,8 @@ the specification. Some external provisioners are listed under the repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage). --> 您不限于指定此处列出的 "内置" 分配器(其名称前缀为 "kubernetes.io" 并打包在 Kubernetes 中)。 -您还可以运行和指定外部分配器,这些独立的程序遵循由 Kubernetes 定义的 [规范](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)。 +您还可以运行和指定外部分配器,这些独立的程序遵循由 Kubernetes 定义的 +[规范](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)。 外部供应商的作者完全可以自由决定他们的代码保存于何处、打包方式、运行方式、使用的插件(包括 Flex)等。 代码仓库 [kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) 包含一个用于为外部分配器编写功能实现的类库。可以通过下面的代码仓库,查看外部分配器列表。 @@ -182,7 +183,6 @@ allows the users to resize the volume by editing the corresponding PVC object. The following types of volumes support volume expansion, when the underlying Storage Class has the field `allowVolumeExpansion` set to true. 
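A StorageClass that opts in to expansion is an ordinary class with the extra field set; a minimal sketch (the class name, provisioner, and parameter values are illustrative) looks like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable                      # illustrative name
provisioner: kubernetes.io/gce-pd      # must be one of the volume types that supports expansion
parameters:
  type: pd-ssd
allowVolumeExpansion: true             # PVCs of this class may later be edited to request more space
```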
--> - PersistentVolume 可以配置为可扩展。将此功能设置为 `true` 时,允许用户通过编辑相应的 PVC 对象来调整卷大小。 当基础存储类的 `allowVolumeExpansion` 字段设置为 true 时,以下类型的卷支持卷扩展。 @@ -207,10 +207,10 @@ Volume type | Required Kubernetes version {{< /table >}} -{{< note >}} <!-- You can only use the volume expansion feature to grow a Volume, not to shrink it. --> +{{< note >}} 此功能仅可用于扩容卷,不能用于缩小卷。 {{< /note >}} @@ -240,7 +240,7 @@ the class or PV, so mount of the PV will simply fail if one is invalid. The `volumeBindingMode` field controls when [volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning) should occur. --> -`volumeBindingMode` 字段控制了 [卷绑定和动态分配](/docs/concepts/storage/persistent-volumes/#provisioning) +`volumeBindingMode` 字段控制了[卷绑定和动态分配](/zh/docs/concepts/storage/persistent-volumes/#provisioning) 应该发生在什么时候。 <!-- @@ -266,10 +266,11 @@ and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration). --> 集群管理员可以通过指定 `WaitForFirstConsumer` 模式来解决此问题。 该模式将延迟 PersistentVolume 的绑定和分配,直到使用该 PersistentVolumeClaim 的 Pod 被创建。 -PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或分配。这些包括但不限于 [资源需求](/docs/concepts/configuration/manage-compute-resources-container), -[节点筛选器](/docs/concepts/configuration/assign-pod-node/#nodeselector), -[pod 亲和性和互斥性](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity), -以及 [污点和容忍度](/docs/concepts/configuration/taint-and-toleration). +PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或分配。这些包括但不限于 +[资源需求](/zh/docs/concepts/configuration/manage-resources-containers/)、 +[节点筛选器](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)、 +[pod 亲和性和互斥性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)、 +以及[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 <!-- The following plugins support `WaitForFirstConsumer` with dynamic provisioning: @@ -302,14 +303,13 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver to see its supported topology keys and examples. --> - -动态配置和预先创建的 PV 也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), +动态配置和预先创建的 PV 也支持 [CSI卷](/zh/docs/concepts/storage/volumes/#csi), 但是您需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。 <!-- ### Allowed Topologies --> -### 允许的拓扑结构 +### 允许的拓扑结构 {#allowed-topologies} {{< feature-state for_k8s_version="v1.12" state="beta" >}} <!-- @@ -402,14 +402,22 @@ parameters: encrypting the volume. If none is supplied but `encrypted` is true, a key is generated by AWS. See AWS docs for valid ARN value. 
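Putting several of these parameters together, an `aws-ebs` StorageClass might look like the following sketch (all values are examples only):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"                      # string value, as noted above
  fsType: ext4
  encrypted: "true"                    # string "true", not the boolean true
```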
--> -* `type`:`io1`,`gp2`,`sc1`,`st1`。详细信息参见 [AWS 文档](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。默认值:`gp2`。 -* `zone`(弃用):AWS 区域。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。 -* `zones`(弃用):以逗号分隔的 AWS 区域列表。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone`和`zones`参数不能同时使用。 -* `iopsPerGB`:只适用于 `io1` 卷。每 GiB 每秒 I/O 操作。AWS 卷插件将其与请求卷的大小相乘以计算 IOPS 的容量,并将其限制在 20 000 IOPS(AWS 支持的最高值,请参阅 [AWS 文档](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。 +* `type`:`io1`,`gp2`,`sc1`,`st1`。详细信息参见 + [AWS 文档](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。默认值:`gp2`。 +* `zone`(弃用):AWS 区域。如果没有指定 `zone` 和 `zones`, + 通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。 +* `zones`(弃用):以逗号分隔的 AWS 区域列表。 + 如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone`和`zones`参数不能同时使用。 +* `iopsPerGB`:只适用于 `io1` 卷。每 GiB 每秒 I/O 操作。 + AWS 卷插件将其与请求卷的大小相乘以计算 IOPS 的容量, + 并将其限制在 20000 IOPS(AWS 支持的最高值,请参阅 + [AWS 文档](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。 这里需要输入一个字符串,即 `"10"`,而不是 `10`。 * `fsType`:受 Kubernetes 支持的文件类型。默认值:`"ext4"`。 -* `encrypted`:指定 EBS 卷是否应该被加密。合法值为 `"true"` 或者 `"false"`。这里需要输入字符串,即 `"true"`, 而非 `true`。 -* `kmsKeyId`:可选。加密卷时使用密钥的完整 Amazon 资源名称。如果没有提供,但 `encrypted` 值为 true,AWS 生成一个密钥。关于有效的 ARN 值,请参阅 AWS 文档。 +* `encrypted`:指定 EBS 卷是否应该被加密。合法值为 `"true"` 或者 `"false"`。 + 这里需要输入字符串,即 `"true"`, 而非 `true`。 +* `kmsKeyId`:可选。加密卷时使用密钥的完整 Amazon 资源名称。 + 如果没有提供,但 `encrypted` 值为 true,AWS 生成一个密钥。关于有效的 ARN 值,请参阅 AWS 文档。 {{< note >}} <!-- @@ -445,8 +453,12 @@ parameters: * `replication-type`: `none` or `regional-pd`. Default: `none`. --> * `type`:`pd-standard` 或者 `pd-ssd`。默认:`pd-standard` -* `zone`(弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。 -* `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。`zone` 和 `zones` 参数不能同时使用。 +* `zone`(弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,通常 + 卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。 + `zone` 和 `zones` 参数不能同时使用。 +* `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`, + 通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。 + `zone` 和 `zones` 参数不能同时使用。 * `fstype`: `ext4` 或 `xfs`。 默认: `ext4`。宿主机操作系统必须支持所定义的文件系统类型。 * `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。 @@ -465,14 +477,18 @@ specified, Kubernetes will arbitrarily choose among the specified zones. If the `zones` parameter is omitted, Kubernetes will arbitrarily choose among zones managed by the cluster. 
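For the `regional-pd` case, a sketch that combines the parameters above with delayed binding and a zone restriction (the zone names are examples; adjust them to your cluster) could look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass        # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
```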
--> -如果 `replication-type` 设置为 `regional-pd`,会分配一个 [区域性持久化磁盘(Regional Persistent Disk)](https://cloud.google.com/compute/docs/disks/#repds)。在这种情况下,用户必须使用 `zones` 而非 `zone` 来指定期望的复制区域(zone)。如果指定来两个特定的区域,区域性持久化磁盘会在这两个区域里分配。如果指定了多于两个的区域,Kubernetes 会选择其中任意两个区域。如果省略了 `zones` 参数,Kubernetes 会在集群管理的区域中任意选择。 - -{{< note >}} +如果 `replication-type` 设置为 `regional-pd`,会分配一个 +[区域性持久化磁盘(Regional Persistent Disk)](https://cloud.google.com/compute/docs/disks/#repds)。 +在这种情况下,用户必须使用 `zones` 而非 `zone` 来指定期望的复制区域(zone)。 +如果指定来两个特定的区域,区域性持久化磁盘会在这两个区域里分配。 +如果指定了多于两个的区域,Kubernetes 会选择其中任意两个区域。 +如果省略了 `zones` 参数,Kubernetes 会在集群管理的区域中任意选择。 <!-- `zone` and `zones` parameters are deprecated and replaced with [allowedTopologies](#allowed-topologies) --> +{{< note >}} `zone` 和 `zones` 已被弃用并被 [allowedTopologies](#allowed-topologies) 取代。 {{< /note >}} @@ -516,11 +532,14 @@ parameters: --> * `resturl`:分配 gluster 卷的需求的 Gluster REST 服务/Heketi 服务 url。 通用格式应该是 `IPaddress:Port`,这是 GlusterFS 动态分配器的必需参数。 - 如果 Heketi 服务在 openshift/kubernetes 中安装并暴露为可路由服务,则可以使用类似于 + 如果 Heketi 服务在 OpenShift/kubernetes 中安装并暴露为可路由服务,则可以使用类似于 `http://heketi-storage-project.cloudapps.mystorage.com` 的格式,其中 fqdn 是可解析的 heketi 服务网址。 -* `restauthenabled`:Gluster REST 服务身份验证布尔值,用于启用对 REST 服务器的身份验证。如果此值为 'true',则必须填写 `restuser` 和 `restuserkey` 或 `secretNamespace` + `secretName`。此选项已弃用,当在指定 `restuser`,`restuserkey`,`secretName` 或 `secretNamespace` 时,身份验证被启用。 +* `restauthenabled`:Gluster REST 服务身份验证布尔值,用于启用对 REST 服务器的身份验证。 + 如果此值为 'true',则必须填写 `restuser` 和 `restuserkey` 或 `secretNamespace` + `secretName`。 + 此选项已弃用,当在指定 `restuser`、`restuserkey`、`secretName` 或 `secretNamespace` 时,身份验证被启用。 * `restuser`:在 Gluster 可信池中有权创建卷的 Gluster REST服务/Heketi 用户。 -* `restuserkey`:Gluster REST 服务/Heketi 用户的密码将被用于对 REST 服务器进行身份验证。此参数已弃用,取而代之的是 `secretNamespace` + `secretName`。 +* `restuserkey`:Gluster REST 服务/Heketi 用户的密码将被用于对 REST 服务器进行身份验证。 + 此参数已弃用,取而代之的是 `secretNamespace` + `secretName`。 <!-- * `secretNamespace`, `secretName` : Identification of Secret instance that @@ -539,7 +558,8 @@ parameters: [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml). --> * `secretNamespace`,`secretName`:Secret 实例的标识,包含与 Gluster REST 服务交互时使用的用户密码。 - 这些参数是可选的,`secretNamespace` 和 `secretName` 都省略时使用空密码。所提供的 Secret 必须将类型设置为 "kubernetes.io/glusterfs",例如以这种方式创建: + 这些参数是可选的,`secretNamespace` 和 `secretName` 都省略时使用空密码。 + 所提供的 Secret 必须将类型设置为 "kubernetes.io/glusterfs",例如以这种方式创建: ``` kubectl create secret generic heketi-secret \ @@ -547,7 +567,7 @@ parameters: --namespace=default ``` - secret 的例子可以在 [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml) 中找到。 + Secret 的例子可以在 [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml) 中找到。 <!-- * `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster @@ -561,9 +581,12 @@ parameters: specified, the volume will be provisioned with a value between 2000-2147483647 which are defaults for gidMin and gidMax respectively. 
--> -* `clusterid`:`630372ccdc720a92c681fb928f27b53f` 是集群的 ID,当分配卷时,Heketi 将会使用这个文件。它也可以是一个 clusterid 列表,例如: +* `clusterid`:`630372ccdc720a92c681fb928f27b53f` 是集群的 ID,当分配卷时, + Heketi 将会使用这个文件。它也可以是一个 clusterid 列表,例如: `"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397"`。这个是可选参数。 -* `gidMin`,`gidMax`:storage class GID 范围的最小值和最大值。在此范围(gidMin-gidMax)内的唯一值(GID)将用于动态分配卷。这些是可选的值。如果不指定,卷将被分配一个 2000-2147483647 之间的值,这是 gidMin 和 gidMax 的默认值。 +* `gidMin`,`gidMax`:storage class GID 范围的最小值和最大值。 + 在此范围(gidMin-gidMax)内的唯一值(GID)将用于动态分配卷。这些是可选的值。 + 如果不指定,卷将被分配一个 2000-2147483647 之间的值,这是 gidMin 和 gidMax 的默认值。 <!-- * `volumetype` : The volume type and its parameters can be configured with this @@ -587,17 +610,17 @@ parameters: deleted when the persistent volume claim is deleted. --> * `volumetype`:卷的类型及其参数可以用这个可选值进行配置。如果未声明卷类型,则由分配器决定卷的类型。 + 例如: - 例如: - 'Replica volume': `volumetype: replicate:3` 其中 '3' 是 replica 数量. - 'Disperse/EC volume': `volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量. - 'Distribute volume': `volumetype: none` + * 'Replica volume': `volumetype: replicate:3` 其中 '3' 是 replica 数量. + * 'Disperse/EC volume': `volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量. + * 'Distribute volume': `volumetype: none` - 有关可用的卷类型和管理选项,请参阅 [管理指南](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html)。 + 有关可用的卷类型和管理选项,请参阅 [管理指南](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html)。 - 更多相关的参考信息,请参阅 [如何配置 Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)。 + 更多相关的参考信息,请参阅 [如何配置 Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)。 - 当动态分配持久卷时,Gluster 插件自动创建名为 `gluster-dynamic-<claimname>` 的端点和 headless service。在 PVC 被删除时动态端点和 headless service 会自动被删除。 + 当动态分配持久卷时,Gluster 插件自动创建名为 `gluster-dynamic-<claimname>` 的端点和 headless service。在 PVC 被删除时动态端点和 headless service 会自动被删除。 ### OpenStack Cinder @@ -674,7 +697,11 @@ OpenStack 的内部驱动程序已经被弃用。请使用 [OpenStack 的外部 specified in the vSphere config file used to initialize the vSphere Cloud Provider. --> - `datastore`:用户也可以在 StorageClass 中指定数据存储。卷将在 storage class 中指定的数据存储上创建,在这种情况下是 `VSANDatastore`。该字段是可选的。如果未指定数据存储,则将在用于初始化 vSphere Cloud Provider 的 vSphere 配置文件中指定的数据存储上创建该卷。 + `datastore`:用户也可以在 StorageClass 中指定数据存储。 + 卷将在 storage class 中指定的数据存储上创建,在这种情况下是 `VSANDatastore`。 + 该字段是可选的。 + 如果未指定数据存储,则将在用于初始化 vSphere Cloud Provider 的 vSphere + 配置文件中指定的数据存储上创建该卷。 <!-- 3. 
Storage Policy Management inside kubernetes @@ -697,7 +724,10 @@ OpenStack 的内部驱动程序已经被弃用。请使用 [OpenStack 的外部 --> * 使用现有的 vCenter SPBM 策略 - vSphere 用于存储管理的最重要特性之一是基于策略的管理。基于存储策略的管理(SPBM)是一个存储策略框架,提供单一的统一控制平面的跨越广泛的数据服务和存储解决方案。 SPBM 使能 vSphere 管理员克服先期的存储配置挑战,如容量规划,差异化服务等级和管理容量空间。 + vSphere 用于存储管理的最重要特性之一是基于策略的管理。 + 基于存储策略的管理(SPBM)是一个存储策略框架,提供单一的统一控制平面的 + 跨越广泛的数据服务和存储解决方案。 + SPBM 使能 vSphere 管理员克服先期的存储配置挑战,如容量规划,差异化服务等级和管理容量空间。 SPBM 策略可以在 StorageClass 中使用 `storagePolicyName` 参数声明。 @@ -719,7 +749,10 @@ OpenStack 的内部驱动程序已经被弃用。请使用 [OpenStack 的外部 --> * Kubernetes 内的 Virtual SAN 策略支持 - Vsphere Infrastructure(VI)管理员将能够在动态卷配置期间指定自定义 Virtual SAN 存储功能。您现在可以定义存储需求,例如性能和可用性,当动态卷供分配时会以存储功能的形式提供。存储功能需求会转换为 Virtual SAN 策略,然后当 persistent volume(虚拟磁盘)在创建时,会将其推送到 Virtual SAN 层。虚拟磁盘分布在 Virtual SAN 数据存储中以满足要求。 + Vsphere Infrastructure(VI)管理员将能够在动态卷配置期间指定自定义 Virtual SAN + 存储功能。您现在可以定义存储需求,例如性能和可用性,当动态卷供分配时会以存储功能的形式提供。 + 存储功能需求会转换为 Virtual SAN 策略,然后当持久卷(虚拟磁盘)在创建时, + 会将其推送到 Virtual SAN 层。虚拟磁盘分布在 Virtual SAN 数据存储中以满足要求。 更多有关 persistent volume 管理的存储策略的详细信息, 您可以参考 [基于存储策略的动态分配卷管理](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html)。 @@ -955,7 +988,8 @@ parameters: * `kind`:可能的值是 `shared`(默认)、`dedicated` 和 `managed`。 当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。 当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。 -* `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。 +* `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。 + 若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。 <!-- - Premium VM can attach both Standard_LRS and Premium_LRS disks, while Standard VM can only attach Standard_LRS disks. @@ -1015,7 +1049,8 @@ mounting credentials. If the cluster has enabled both add the `create` permission of resource `secret` for clusterrole `system:controller:persistent-volume-binder`. 
--> -在存储分配期间,为挂载凭证创建一个名为 `secretName` 的 secret。如果集群同时启用了 [RBAC](/docs/admin/authorization/rbac/) 和 [Controller Roles](/docs/admin/authorization/rbac/#controller-roles), +在存储分配期间,为挂载凭证创建一个名为 `secretName` 的 Secret。如果集群同时启用了 +[RBAC](/zh/docs/reference/access-authn-authz/rbac/) 和 [控制器角色](/zh/docs/reference/access-authn-authz/rbac/#controller-roles), 为 `system:controller:persistent-volume-binder` 的 clusterrole 添加 `secret` 资源的 `create` 权限。 <!-- @@ -1075,7 +1110,8 @@ parameters: * `aggregation_level`:指定卷分配到的块数量,0 表示一个非聚合卷(默认:`0`)。 这里需要填写字符串,即,是 `"0"` 而不是 `0`。 * `ephemeral`:指定卷在卸载后进行清理还是持久化。 `emptyDir` 的使用场景可以将这个值设置为 true , - `persistent volumes` 的使用场景可以将这个值设置为 false(例如 Cassandra 这样的数据库)`true/false`(默认为 `false`)。这里需要填写字符串,即,是 `"true"` 而不是 `true`。 + `persistent volumes` 的使用场景可以将这个值设置为 false(例如 Cassandra 这样的数据库) + `true/false`(默认为 `false`)。这里需要填写字符串,即,是 `"true"` 而不是 `true`。 ### ScaleIO @@ -1136,8 +1172,8 @@ secret 必须用 `kubernetes.io/scaleio` 类型创建,并与引用它的 PVC ```shell kubectl create secret generic sio-secret --type="kubernetes.io/scaleio" \ ---from-literal=username=sioadmin --from-literal=password=d2NABDNjMA== \ ---namespace=default + --from-literal=username=sioadmin --from-literal=password=d2NABDNjMA== \ + --namespace=default ``` ### StorageOS diff --git a/content/zh/docs/concepts/storage/volume-pvc-datasource.md b/content/zh/docs/concepts/storage/volume-pvc-datasource.md index c39fad6f42..96838d402e 100644 --- a/content/zh/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/zh/docs/concepts/storage/volume-pvc-datasource.md @@ -5,16 +5,9 @@ weight: 30 --- <!-- ---- -reviewers: -- jsafrane -- saad-ali -- thockin -- msau42 title: CSI Volume Cloning content_type: concept weight: 30 ---- --> <!-- overview --> @@ -22,42 +15,40 @@ weight: 30 <!-- This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested. --> - -本文档介绍 Kubernetes 中克隆现有 CSI 卷的概念。阅读前建议先熟悉[卷](/docs/concepts/storage/volumes)。 - - - +本文档介绍 Kubernetes 中克隆现有 CSI 卷的概念。阅读前建议先熟悉[卷](/zh/docs/concepts/storage/volumes)。 <!-- body --> <!-- ## Introduction + +The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}. --> ## 介绍 -<!-- -The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}. ---> - -{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷克隆功能增加了通过在 `dataSource` 字段中指定存在的 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s,来表示用户想要克隆的 {{< glossary_tooltip term_id="volume" >}}。 +{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷克隆功能增加了通过在 +`dataSource` 字段中指定存在的 +{{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}, +来表示用户想要克隆的 {{< glossary_tooltip term_id="volume" >}}。 <!-- A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume. 
--> -克隆,意思是为已有的 Kubernetes 卷创建副本,它可以像任何其它标准卷一样被使用。唯一的区别就是配置后,后端设备将创建指定完全相同的副本,而不是创建一个“新的”空卷。 +克隆,意思是为已有的 Kubernetes 卷创建副本,它可以像任何其它标准卷一样被使用。 +唯一的区别就是配置后,后端设备将创建指定完全相同的副本,而不是创建一个“新的”空卷。 <!-- The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use). ---> -从 Kubernetes API 的角度看,克隆的实现只是在创建新的 PVC 时,增加了指定一个现有 PVC 作为数据源的能力。源 PVC 必须是 bound 状态且可用的(不在使用中)。 - -<!-- Users need to be aware of the following when using this feature: --> +从 Kubernetes API 的角度看,克隆的实现只是在创建新的 PVC 时, +增加了指定一个现有 PVC 作为数据源的能力。源 PVC 必须是 bound +状态且可用的(不在使用中)。 + 用户在使用该功能时,需要注意以下事项: <!-- @@ -78,18 +69,15 @@ Users need to be aware of the following when using this feature: * 仅在同一存储类中支持克隆。 - 目标卷必须和源卷具有相同的存储类 - 可以使用默认的存储类并且 storageClassName 字段在规格中忽略了 -* 克隆只能在两个使用相同 VolumeMode 设置的卷中进行(如果请求克隆一个块存储模式的卷,源卷必须也是块存储模式)。 - +* 克隆只能在两个使用相同 VolumeMode 设置的卷中进行 + (如果请求克隆一个块存储模式的卷,源卷必须也是块存储模式)。 <!-- ## Provisioning ---> -## 供应 - -<!-- Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. --> +## 供应 克隆卷与其他任何 PVC 一样配置,除了需要增加 dataSource 来引用同一命名空间中现有的 PVC。 @@ -112,7 +100,8 @@ spec: ``` <!-- -You must specify a capacity value for `spec.resources.requests.storage`, and the value you specify must be the same or larger than the capacity of the source volume. +You must specify a capacity value for `spec.resources.requests.storage`, +and the value you specify must be the same or larger than the capacity of the source volume. --> {{< note >}} @@ -127,14 +116,14 @@ The result is a new PVC with the name `clone-of-pvc-1` that has the exact same c <!-- ## Usage + +Upon availability of the new PVC, the cloned PVC is consumed the same as other PVC. It's also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for it's original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone, it may also be modified or deleted without affecting the newly created clone. --> ## 用法 -<!-- -Upon availability of the new PVC, the cloned PVC is consumed the same as other PVC. It's also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for it's original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone, it may also be modified or deleted without affecting the newly created clone. 
---> - -一旦新的 PVC 可用,被克隆的 PVC 像其他 PVC 一样被使用。可以预期的是,新创建的 PVC 是一个独立的对象。可以独立使用,克隆,快照或删除它,而不需要考虑它的原始数据源 PVC。这也意味着,源没有以任何方式链接到新创建的 PVC,它也可以被修改或删除,而不会影响到新创建的克隆。 - +一旦新的 PVC 可用,被克隆的 PVC 像其他 PVC 一样被使用。 +可以预期的是,新创建的 PVC 是一个独立的对象。 +可以独立使用、克隆、快照或删除它,而不需要考虑它的原始数据源 PVC。 +这也意味着,源没有以任何方式链接到新创建的 PVC,它也可以被修改或删除,而不会影响到新创建的克隆。 diff --git a/content/zh/docs/concepts/storage/volume-snapshots.md b/content/zh/docs/concepts/storage/volume-snapshots.md index f729f4b923..aaa68a342d 100644 --- a/content/zh/docs/concepts/storage/volume-snapshots.md +++ b/content/zh/docs/concepts/storage/volume-snapshots.md @@ -5,18 +5,9 @@ weight: 20 --- <!-- ---- -reviewers: -- saad-ali -- thockin -- msau42 -- jingxu97 -- xing-yang -- yuxiangqian title: Volume Snapshots content_type: concept weight: 20 ---- --> <!-- overview --> @@ -26,9 +17,8 @@ weight: 20 <!-- In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). --> -在 Kubernetes 中,卷快照是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes 的 [持久卷](/docs/concepts/storage/persistent-volumes/)。 - - +在 Kubernetes 中,卷快照是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes +的 [持久卷](/zh/docs/concepts/storage/persistent-volumes/)。 <!-- body --> @@ -108,7 +98,9 @@ Instead of using a pre-existing snapshot, you can request that a snapshot to be --> #### 动态的 {#dynamic} -可以从 `PersistentVolumeClaim` 中动态获取快照,而不用使用已经存在的快照。在获取快照时,[卷快照类](/docs/concepts/storage/volume-snapshot-classes/)指定要用的特定于存储提供程序的参数。 +可以从 `PersistentVolumeClaim` 中动态获取快照,而不用使用已经存在的快照。 +在获取快照时,[卷快照类](/zh/docs/concepts/storage/volume-snapshot-classes/) +指定要用的特定于存储提供程序的参数。 <!-- ### Binding @@ -181,12 +173,14 @@ using the attribute `volumeSnapshotClassName`. If nothing is set, then the defau --> `persistentVolumeClaimName` 是 `PersistentVolumeClaim` 数据源对快照的名称。这个字段是动态配置快照中的必填字段。 -卷快照可以通过指定 [VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/) 使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。 +卷快照可以通过指定 [VolumeSnapshotClass](/zh/docs/concepts/storage/volume-snapshot-classes/) +使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。 <!-- For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName` as the source for the snapshot as shown in the following example. The `volumeSnapshotContentName` source field is required for pre-provisioned snapshots. --> -如下面例子所示,对于预配置的快照,需要给快照指定 `volumeSnapshotContentName` 来作为源。对于预配置的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。 +如下面例子所示,对于预配置的快照,需要给快照指定 `volumeSnapshotContentName` 来作为源。 +对于预配置的快照 `source` 中的`volumeSnapshotContentName` 字段是必填的。 ``` apiVersion: snapshot.storage.k8s.io/v1beta1 @@ -266,6 +260,5 @@ the *dataSource* field in the `PersistentVolumeClaim` object. For more details, see [Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support). 
--> -更多详细信息,请参阅 [卷快照和从快照还原卷](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 - +更多详细信息,请参阅 [卷快照和从快照还原卷](/zh/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。 diff --git a/content/zh/docs/concepts/storage/volumes.md b/content/zh/docs/concepts/storage/volumes.md index 731f73d53f..262798346f 100644 --- a/content/zh/docs/concepts/storage/volumes.md +++ b/content/zh/docs/concepts/storage/volumes.md @@ -1,14 +1,15 @@ --- -reviewers: -- jsafrane -- saad-ali -- thockin -- msau42 -title: Volumes +title: 卷 content_type: concept weight: 10 --- +<!-- +title: Volumes +content_type: concept +weight: 10 +--> + <!-- overview --> <!-- @@ -28,11 +29,7 @@ Kubernetes 抽象出 `Volume` 对象来解决这两个问题。 <!-- Familiarity with [Pods](/docs/user-guide/pods) is suggested. --> - -阅读本文前建议您熟悉一下 [Pods](/docs/user-guide/pods)。 - - - +阅读本文前建议您熟悉一下 [Pods](/zh/docs/concepts/workloads/pods)。 <!-- body --> @@ -54,7 +51,8 @@ parameters to volumes). Docker 也有 [Volume](https://docs.docker.com/storage/) 的概念,但对它只有少量且松散的管理。 在 Docker 中,Volume 是磁盘上或者另外一个容器内的一个目录。 直到最近,Docker 才支持对基于本地磁盘的 Volume 的生存期进行管理。 -虽然 Docker 现在也能提供 Volume 驱动程序,但是目前功能还非常有限(例如,截至 Docker 1.7,每个容器只允许有一个 Volume 驱动程序,并且无法将参数传递给卷)。 +虽然 Docker 现在也能提供 Volume 驱动程序,但是目前功能还非常有限 +(例如,截至 Docker 1.7,每个容器只允许有一个 Volume 驱动程序,并且无法将参数传递给卷)。 <!-- A Kubernetes volume, on the other hand, has an explicit lifetime - the same as @@ -64,10 +62,10 @@ Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously. --> - 另一方面,Kubernetes 卷具有明确的生命周期——与包裹它的 Pod 相同。 因此,卷比 Pod 中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留。 -当然,当一个 Pod 不再存在时,卷也将不再存在。也许更重要的是,Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。 +当然,当一个 Pod 不再存在时,卷也将不再存在。 +也许更重要的是,Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。 <!-- At its core, a volume is just a directory, possibly with some data in it, which @@ -75,7 +73,6 @@ is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. --> - 卷的核心是包含一些数据的目录,Pod 中的容器可以访问该目录。 特定的卷类型可以决定这个目录如何形成的,并能决定它支持何种介质,以及目录中存放什么内容。 @@ -87,7 +84,6 @@ field) and where to mount those into Containers (the `.spec.containers.volumeMounts` field). --> - 使用卷时, Pod 声明中需要提供卷的类型 (`.spec.volumes` 字段)和卷挂载的位置 (`.spec.containers.volumeMounts` 字段). <!-- @@ -99,9 +95,9 @@ the image. Volumes can not mount onto other volumes or have hard links to other volumes. Each Container in the Pod must independently specify where to mount each volume. --> - 容器中的进程能看到由它们的 Docker 镜像和卷组成的文件系统视图。 -[Docker 镜像](https://docs.docker.com/userguide/dockerimages/) 位于文件系统层次结构的根部,并且任何 Volume 都挂载在镜像内的指定路径上。 +[Docker 镜像](https://docs.docker.com/userguide/dockerimages/) +位于文件系统层次结构的根部,并且任何 Volume 都挂载在镜像内的指定路径上。 卷不能挂载到其他卷,也不能与其他卷有硬链接。 Pod 中的每个容器必须独立地指定每个卷的挂载位置。 @@ -110,7 +106,6 @@ Pod 中的每个容器必须独立地指定每个卷的挂载位置。 Kubernetes supports several types of Volumes: --> - ## Volume 的类型 Kubernetes 支持下列类型的卷: @@ -161,19 +156,15 @@ volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be "handed off" between Pods. 
--> - -`awsElasticBlockStore` 卷将 Amazon Web服务(AWS)[EBS 卷](http://aws.amazon.com/ebs/) 挂载到您的 Pod 中。 +`awsElasticBlockStore` 卷将 Amazon Web服务(AWS)[EBS 卷](https://aws.amazon.com/ebs/) 挂载到您的 Pod 中。 与 `emptyDir` 在删除 Pod 时会被删除不同,EBS 卷的内容在删除 Pod 时会被保留,卷只是被卸载掉了。 这意味着 EBS 卷可以预先填充数据,并且可以在 Pod 之间传递数据。 -{{< caution >}} - <!-- You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it. --> - +{{< caution >}} 您在使用 EBS 卷之前必须先创建它,可以使用 `aws ec2 create-volume` 命令进行创建;也可以使用 AWS API 进行创建。 - {{< /caution >}} <!-- @@ -183,7 +174,6 @@ There are some restrictions when using an `awsElasticBlockStore` volume: * those instances need to be in the same region and availability-zone as the EBS volume * EBS only supports a single EC2 instance mounting a volume --> - 使用 `awsElasticBlockStore` 卷时有一些限制: * Pod 正在运行的节点必须是 AWS EC2 实例。 @@ -195,7 +185,6 @@ There are some restrictions when using an `awsElasticBlockStore` volume: Before you can use an EBS volume with a Pod, you need to create it. --> - #### 创建 EBS 卷 在将 EBS 卷用到 Pod 上之前,您首先要创建它。 @@ -208,13 +197,11 @@ aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2 Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume type are suitable for your use!) --> - 确保该区域与您的群集所在的区域相匹配。(也要检查卷的大小和 EBS 卷类型都适合您的用途!) <!-- #### AWS EBS Example configuration --> - #### AWS EBS 配置示例 ```yaml @@ -276,7 +263,6 @@ into a Pod. More details can be found [here](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_file/README.md). --> - `azureFile` 用来在 Pod 上挂载 Microsoft Azure 文件卷(File Volume) (SMB 2.1 和 3.0)。 更多详情请参考[这里](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_file/README.md)。 @@ -295,7 +281,6 @@ Driver](https://github.com/kubernetes-sigs/azurefile-csi-driver) must be installed on the cluster and the `CSIMigration` and `CSIMigrationAzureFile` Alpha features must be enabled. --> - 启用azureFile的CSI迁移功能后,它会将所有插件操作从现有的内建插件填添加file.csi.azure.com容器存储接口(CSI)驱动程序中。 为了使用此功能,必须在群集上安装 [Azure文件CSI驱动程序](https://github.com/kubernetes-sigs/azurefile-csi-driver), 并且 `CSIMigration` 和 `CSIMigrationAzureFile` Alpha功能 必须启用。 @@ -310,37 +295,32 @@ unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be "handed off" between Pods. CephFS can be mounted by multiple writers simultaneously. --> - -`cephfs` 允许您将现存的 CephFS 卷挂载到 Pod 中。不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`cephfs` 卷的内容在删除 Pod 时会被保留,卷只是被卸载掉了。 +`cephfs` 允许您将现存的 CephFS 卷挂载到 Pod 中。 +不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`cephfs` 卷的内容在删除 Pod 时会被保留,卷只是被卸载掉了。 这意味着 CephFS 卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。CephFS 卷可同时被多个写者挂载。 - -{{< caution >}} - <!-- You must have your own Ceph server running with the share exported before you can use it. --> - +{{< caution >}} 在您使用 Ceph 卷之前,您的 Ceph 服务器必须正常运行并且要使用的 share 被导出(exported)。 {{< /caution >}} <!-- See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/cephfs/) for more details. --> - 更多信息请参考 [CephFS 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/cephfs/)。 ### cinder {#cinder} -{{< note >}} - <!-- Prerequisite: Kubernetes with OpenStack Cloud Provider configured. For cloudprovider configuration please refer [cloud provider openstack](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack). 
--> - -先决条件:配置了OpenStack Cloud Provider 的 Kubernetes。 有关 cloudprovider 配置,请参考 [cloud provider openstack](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack)。 - +{{< note >}} +先决条件:配置了OpenStack Cloud Provider 的 Kubernetes。 +有关 cloudprovider 配置,请参考 +[cloud provider openstack](/zh/docs/concepts/cluster-administration/cloud-providers/#openstack)。 {{< /note >}} <!-- @@ -348,7 +328,6 @@ configuration please refer [cloud provider openstack](https://kubernetes.io/docs #### Cinder Volume Example configuration --> - `cinder` 用于将 OpenStack Cinder 卷安装到 Pod 中。 #### Cinder Volume示例配置 @@ -402,8 +381,7 @@ provides a way to inject configuration data into Pods. The data stored in a `ConfigMap` object can be referenced in a volume of type `configMap` and then consumed by containerized applications running in a Pod. --> - -[`configMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/) 资源提供了向 Pod 注入配置数据的方法。 +[`configMap`](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 资源提供了向 Pod 注入配置数据的方法。 `ConfigMap` 对象中存储的数据可以被 `configMap` 类型的卷引用,然后被应用到 Pod 中运行的容器化应用。 <!-- @@ -445,23 +423,22 @@ its `log_level` entry are mounted into the Pod at path "`/etc/config/log_level`" Note that this path is derived from the volume's `mountPath` and the `path` keyed with `log_level`. --> - `log-config` ConfigMap 是以卷的形式挂载的, 存储在 `log_level` 条目中的所有内容都被挂载到 Pod 的 "`/etc/config/log_level`" 路径下。 请注意,这个路径来源于 Volume 的 `mountPath` 和 `log_level` 键对应的 `path`。 -{{< caution >}} <!-- You must create a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) before you can use it. --> -在使用 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) 之前您首先要创建它。 +{{< caution >}} +在使用 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 之前您首先要创建它。 {{< /caution >}} -{{< note >}} <!-- A Container using a ConfigMap as a [subPath](#using-subpath) volume mount will not receive ConfigMap updates. --> +{{< note >}} 容器以 [subPath](#using-subpath) 卷挂载方式使用 ConfigMap 时,将无法接收 ConfigMap 的更新。 {{< /note >}} @@ -475,20 +452,18 @@ It mounts a directory and writes the requested data in plain text files. `downwardAPI` 卷用于使 downward API 数据对应用程序可用。 这种卷类型挂载一个目录并在纯文本文件中写入请求的数据。 -{{< note >}} - <!-- A Container using Downward API as a [subPath](#using-subpath) volume mount will not receive Downward API updates. --> - +{{< note >}} 容器以挂载 [subPath](#using-subpath) 卷的方式使用 downwardAPI 时,将不能接收到它的更新。 {{< /note >}} <!-- See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. --> -更多详细信息请参考 [`downwardAPI` 卷示例](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)。 +更多详细信息请参考 [`downwardAPI` 卷示例](/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)。 ### emptyDir {#emptydir} @@ -506,11 +481,10 @@ any reason, the data in the `emptyDir` is deleted forever. 尽管 Pod 中的容器挂载 `emptyDir` 卷的路径可能相同也可能不同,但是这些容器都可以读写 `emptyDir` 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,`emptyDir` 卷中的数据也会永久删除。 -{{< note >}} - <!-- A Container crashing does *NOT* remove a Pod from a node, so the data in an `emptyDir` volume is safe across Container crashes. 
--> +{{< note >}} 容器崩溃并不会导致 Pod 被从节点上移除,因此容器崩溃时 `emptyDir` 卷中的数据是安全的。 {{< /note >}} @@ -522,14 +496,12 @@ Some uses for an `emptyDir` are: * holding files that a content-manager Container fetches while a webserver Container serves the data --> - `emptyDir` 的一些用途: * 缓存空间,例如基于磁盘的归并排序。 * 为耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。 * 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。 - <!-- By default, `emptyDir` volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your @@ -539,7 +511,6 @@ While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write will count against your Container's memory limit. --> - 默认情况下, `emptyDir` 卷存储在支持该节点所使用的介质上;这里的介质可以是磁盘或 SSD 或网络存储,这取决于您的环境。 但是,您可以将 `emptyDir.medium` 字段设置为 `"Memory"`,以告诉 Kubernetes 为您安装 tmpfs(基于 RAM 的文件系统)。 虽然 tmpfs 速度非常快,但是要注意它与磁盘不同。 @@ -548,7 +519,6 @@ tmpfs 在节点重启时会被清除,并且您所写入的所有文件都会 <!-- #### Example Pod --> - #### Pod 示例 ```yaml @@ -576,24 +546,22 @@ You can specify single or multiple target World Wide Names using the parameter `targetWWNs` in your volume configuration. If multiple WWNs are specified, targetWWNs expect that those WWNs are from multi-path connections. --> - ### fc (光纤通道) {#fc} `fc` 卷允许将现有的光纤通道卷挂载到 Pod 中。 可以使用卷配置中的参数 `targetWWNs` 来指定单个或多个目标 WWN。 如果指定多个 WWN,targetWWNs 期望这些 WWN 来自多路径连接。 -{{< caution >}} <!-- You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them. --> +{{< caution >}} 您必须配置 FC SAN Zoning,以便预先向目标 WWN 分配和屏蔽这些 LUN(卷),这样 Kubernetes 主机才可以访问它们。 {{< /caution >}} <!-- See the [FC example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/fibre_channel) for more details. --> - 更多详情请参考 [FC 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/fibre_channel)。 <!-- @@ -615,17 +583,15 @@ CLI or by using the Flocker API. If the dataset already exists it will be reattached by Flocker to the node that the Pod is scheduled. This means data can be "handed off" between Pods as required. --> - `flocker` 卷允许将一个 Flocker 数据集挂载到 Pod 中。 如果数据集在 Flocker 中不存在,则需要首先使用 Flocker CLI 或 Flocker API 创建数据集。 如果数据集已经存在,那么 Flocker 将把它重新附加到 Pod 被调度的节点。 这意味着数据可以根据需要在 Pod 之间 "传递"。 - -{{< caution >}} <!-- You must have your own Flocker installation running before you can use it. --> +{{< caution >}} 您在使用 Flocker 之前必须先安装运行自己的 Flocker。 {{< /caution >}} @@ -651,10 +617,10 @@ pre-populated with data, and that data can be "handed off" between Pods. 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,持久盘卷的内容在删除 Pod 时会被保留,卷只是被卸载掉了。 这意味着持久盘卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。 -{{< caution >}} <!-- You must create a PD using `gcloud` or the GCE API or UI before you can use it. --> +{{< caution >}} 您在使用 PD 前,必须使用 `gcloud` 或者 GCE API 或 UI 创建它。 {{< /caution >}} @@ -664,7 +630,6 @@ There are some restrictions when using a `gcePersistentDisk`: * the nodes on which Pods are running must be GCE VMs * those VMs need to be in the same GCE project and zone as the PD --> - 使用 `gcePersistentDisk` 时有一些限制: * 运行 Pod 的节点必须是 GCE VM @@ -677,7 +642,6 @@ and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. 
--> - PD 的一个特点是它们可以同时被多个消费者以只读方式挂载。 这意味着您可以用数据集预先填充 PD,然后根据需要并行地在尽可能多的 Pod 中提供该数据集。 不幸的是,PD 只能由单个使用者以读写模式挂载——即不允许同时写入。 @@ -686,7 +650,6 @@ PD 的一个特点是它们可以同时被多个消费者以只读方式挂载 Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1. --> - 在由 ReplicationController 所管理的 Pod 上使用 PD 将会失败,除非 PD 是只读模式或者副本的数量是 0 或 1。 <!-- @@ -694,7 +657,6 @@ the PD is read-only or the replica count is 0 or 1. Before you can use a GCE PD with a Pod, you need to create it. --> - #### 创建持久盘(PD) 在 Pod 中使用 GCE 持久盘之前,您首先要创建它。 @@ -706,7 +668,6 @@ gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk <!-- #### Example Pod --> - #### Pod 示例 ```yaml @@ -731,7 +692,6 @@ spec: <!-- #### Regional Persistent Disks --> - #### 区域持久盘(Regional Persistent Disks) {{< feature-state for_k8s_version="v1.10" state="beta" >}} @@ -739,19 +699,18 @@ spec: <!-- The [Regional Persistent Disks](https://cloud.google.com/compute/docs/disks/#repds) feature allows the creation of Persistent Disks that are available in two zones within the same region. In order to use this feature, the volume must be provisioned as a PersistentVolume; referencing the volume directly from a pod is not supported. --> - [区域持久盘](https://cloud.google.com/compute/docs/disks/#repds) 功能允许您创建能在同一区域的两个可用区中使用的持久盘。 要使用这个功能,必须以持久盘的方式提供卷;Pod 不支持直接引用这种卷。 <!-- #### Manually provisioning a Regional PD PersistentVolume + Dynamic provisioning is possible using a [StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce). Before creating a PersistentVolume, you must create the PD: --> - #### 手动供应基于区域 PD 的 PersistentVolume -使用 [为 GCE PD 定义的存储类](/docs/concepts/storage/storage-classes/#gce) 也可以动态供应。 +使用 [为 GCE PD 定义的存储类](/zh/docs/concepts/storage/storage-classes/#gce) 也可以动态供应。 在创建 PersistentVolume 之前,您首先要创建 PD。 ```shell @@ -798,7 +757,6 @@ Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-drive must be installed on the cluster and the `CSIMigration` and `CSIMigrationGCE` Alpha features must be enabled. --> - 启用 GCE PD 的 CSI 迁移功能后,它会将所有插件操作从现有的内建插件填添加 `pd.csi.storage.gke.io` 容器存储接口( CSI )驱动程序中。 为了使用此功能,必须在群集上安装 [GCE PD CSI驱动程序](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver), 并且 `CSIMigration` 和 `CSIMigrationGCE` Alpha功能 必须启用。 @@ -810,10 +768,10 @@ Alpha features must be enabled. ### gitRepo (已弃用) -{{< warning >}} <!-- The gitRepo volume type is deprecated. To provision a container with a git repo, mount an [EmptyDir](#emptydir) into an InitContainer that clones the repo using git, then mount the [EmptyDir](#emptydir) into the Pod's container. --> +{{< warning >}} gitRepo 卷类型已经被废弃。如果需要在容器中提供 git 仓库,请将一个 [EmptyDir](#emptydir) 卷挂载到 InitContainer 中,使用 git 命令完成仓库的克隆操作,然后将 [EmptyDir](#emptydir) 卷挂载到 Pod 的容器中。 {{< /warning >}} @@ -862,22 +820,20 @@ means that a glusterfs volume can be pre-populated with data, and that data can be "handed off" between Pods. GlusterFS can be mounted by multiple writers simultaneously. --> - -`glusterfs` 卷能将 [Glusterfs](http://www.gluster.org) (一个开源的网络文件系统) 挂载到您的 Pod 中。 +`glusterfs` 卷能将 [Glusterfs](https://www.gluster.org) (一个开源的网络文件系统) 挂载到您的 Pod 中。 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`glusterfs` 卷的内容在删除 Pod 时会被保存,卷只是被卸载掉了。 这意味着 `glusterfs` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。GlusterFS 可以被多个写者同时挂载。 -{{< caution >}} <!-- You must have your own GlusterFS installation running before you can use it. 
--> +{{< caution >}} 在使用前您必须先安装运行自己的 GlusterFS。 {{< /caution >}} <!-- See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/glusterfs) for more details. --> - 更多详情请参考 [GlusterFS 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/glusterfs)。 ### hostPath {#hostpath} @@ -959,12 +915,13 @@ Watch out when using this type of volume, because: * 具有相同配置(例如从 podTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。 * 当 Kubernetes 按照计划添加资源感知的调度时,这类调度机制将无法考虑由 `hostPath` 使用的资源。 -* 基础主机上创建的文件或目录只能由 root 用户写入。您需要在 [特权容器](/docs/user-guide/security-context) 中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 `hostPath` 卷。 +* 基础主机上创建的文件或目录只能由 root 用户写入。您需要在 +[特权容器](/zh/docs/tasks/configure-pod-container/security-context/) +中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 `hostPath` 卷。 <!-- #### Example Pod --> - #### Pod 示例 ```yaml @@ -988,9 +945,16 @@ spec: type: Directory ``` +<!-- +It should be noted that the `FileOrCreate` mode does not create the parent +directory of the file. If the parent directory of the mounted file does not +exist, the pod fails to start. To ensure that this mode works, you can try to +mount directories and files separately, as shown below. +--> {{< caution >}} -<!-- It should be noted that the `FileOrCreate` mode does not create the parent directory of the file. If the parent directory of the mounted file does not exist, the pod fails to start. To ensure that this mode works, you can try to mount directories and files separately, as shown below. --> -应当注意,`FileOrCreate` 类型不会负责创建文件的父目录。如果挂载挂载文件的父目录不存在,pod 启动会失败。为了确保这种 `type` 能够工作,可以尝试把文件和它对应的目录分开挂载,如下所示: +应当注意,`FileOrCreate` 类型不会负责创建文件的父目录。 +如果挂载挂载文件的父目录不存在,pod 启动会失败。 +为了确保这种 `type` 能够工作,可以尝试把文件和它对应的目录分开挂载,如下所示: {{< /caution >}} #### FileOrCreate pod 示例 @@ -1035,10 +999,10 @@ that data can be "handed off" between Pods. 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,持久盘 卷的内容在删除 Pod 时会被保存,卷只是被卸载掉了。 这意味着 `iscsi` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。 -{{< caution >}} <!-- You must have your own iSCSI server running with the volume created before you can use it. --> +{{< caution >}} 在您使用 iSCSI 卷之前,您必须拥有自己的 iSCSI 服务器,并在上面创建卷。 {{< /caution >}} @@ -1049,14 +1013,12 @@ and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. --> - iSCSI 的一个特点是它可以同时被多个用户以只读方式挂载。 这意味着您可以用数据集预先填充卷,然后根据需要在尽可能多的 Pod 上提供它。不幸的是,iSCSI 卷只能由单个使用者以读写模式挂载——不允许同时写入。 <!-- See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/iscsi) for more details. --> - 更多详情请参考 [iSCSI 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/iscsi)。 <!-- @@ -1067,11 +1029,13 @@ See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{< param "g {{< feature-state for_k8s_version="v1.14" state="stable" >}} -{{< note >}} -<!--The alpha PersistentVolume NodeAffinity annotation has been deprecated +<!-- +The alpha PersistentVolume NodeAffinity annotation has been deprecated and will be removed in a future release. Existing PersistentVolumes using this annotation must be updated by the user to use the new PersistentVolume -`NodeAffinity` field.--> +`NodeAffinity` field. 
+--> +{{< note >}} alpha 版本的 PersistentVolume NodeAffinity 注释已被取消,将在将来的版本中废弃。 用户必须更新现有的使用该注解的 PersistentVolume,以使用新的 PersistentVolume `NodeAffinity` 字段。 {{< /note >}} @@ -1094,7 +1058,8 @@ portable manner without manually scheduling Pods to nodes, as the system is awar of the volume's node constraints by looking at the node affinity on the PersistentVolume. --> -相比 `hostPath` 卷,`local` 卷可以以持久和可移植的方式使用,而无需手动将 Pod 调度到节点,因为系统通过查看 PersistentVolume 所属节点的亲和性配置,就能了解卷的节点约束。 +相比 `hostPath` 卷,`local` 卷可以以持久和可移植的方式使用,而无需手动将 Pod +调度到节点,因为系统通过查看 PersistentVolume 所属节点的亲和性配置,就能了解卷的节点约束。 <!-- However, local volumes are still subject to the availability of the underlying @@ -1107,7 +1072,6 @@ durability characteristics of the underlying disk. The following is an example of PersistentVolume spec using a `local` volume and `nodeAffinity`: --> - 然而,`local` 卷仍然取决于底层节点的可用性,并不是适合所有应用程序。 如果节点变得不健康,那么`local` 卷也将变得不可访问,并且使用它的 Pod 将不能运行。 使用 `local` 卷的应用程序必须能够容忍这种可用性的降低,以及因底层磁盘的耐用性特征而带来的潜在的数据丢失风险。 @@ -1145,7 +1109,6 @@ PersistentVolume `nodeAffinity` is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node. --> - 使用 `local` 卷时,需要使用 PersistentVolume 对象的 `nodeAffinity` 字段。 它使 Kubernetes 调度器能够将使用 `local` 卷的 Pod 正确地调度到合适的节点。 @@ -1154,8 +1117,8 @@ PersistentVolume `volumeMode` can now be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device. The `volumeMode` field requires `BlockVolume` Alpha feature gate to be enabled. --> - -现在,可以将 PersistentVolume 对象的 `volumeMode` 字段设置为 "Block"(而不是默认值 "Filesystem"),以将 `local` 卷作为原始块设备暴露出来。 +现在,可以将 PersistentVolume 对象的 `volumeMode` 字段设置为 "Block" +(而不是默认值 "Filesystem"),以将 `local` 卷作为原始块设备暴露出来。 `volumeMode` 字段需要启用 Alpha 功能 `BlockVolume`。 <!-- @@ -1168,8 +1131,9 @@ selectors, Pod affinity, and Pod anti-affinity. --> 当使用 `local` 卷时,建议创建一个 StorageClass,将 `volumeBindingMode` 设置为 `WaitForFirstConsumer`。 -请参考 [示例](/docs/concepts/storage/storage-classes/#local)。 -延迟卷绑定操作可以确保 Kubernetes 在为 PersistentVolumeClaim 作出绑定决策时,会评估 Pod 可能具有的其他节点约束,例如:如节点资源需求、节点选择器、Pod 亲和性和 Pod 反亲和性。 +请参考[示例](/zh/docs/concepts/storage/storage-classes/#local)。 +延迟卷绑定操作可以确保 Kubernetes 在为 PersistentVolumeClaim 作出绑定决策时, +会评估 Pod 可能具有的其他节点约束,例如:如节点资源需求、节点选择器、Pod 亲和性和 Pod 反亲和性。 <!-- An external static provisioner can be run separately for improved management of @@ -1178,18 +1142,17 @@ provisioning yet. For an example on how to run an external local provisioner, see the [local volume provisioner user guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner). --> - 您可以在 Kubernetes 之外单独运行静态驱动以改进对 local 卷的生命周期管理。 请注意,此驱动不支持动态配置。 有关如何运行外部 `local` 卷驱动的示例,请参考 [local 卷驱动用户指南](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)。 -{{< note >}} <!-- The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle. --> +{{< note >}} 如果不使用外部静态驱动来管理卷的生命周期,则用户需要手动清理和删除 local 类型的持久卷。 {{< /note >}} @@ -1203,22 +1166,20 @@ unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between Pods. NFS can be mounted by multiple writers simultaneously. --> - `nfs` 卷能将 NFS (网络文件系统) 挂载到您的 Pod 中。 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`nfs` 卷的内容在删除 Pod 时会被保存,卷只是被卸载掉了。 这意味着 `nfs` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。 -{{< caution >}} <!-- You must have your own NFS server running with the share exported before you can use it. 
--> +{{< caution >}} 在您使用 NFS 卷之前,必须运行自己的 NFS 服务器并将目标 share 导出备用。 {{< /caution >}} <!-- See the [NFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/nfs) for more details. --> - 要了解更多详情请参考 [NFS 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/nfs)。 ### persistentVolumeClaim {#persistentvolumeclaim} @@ -1229,7 +1190,7 @@ A `persistentVolumeClaim` volume is used to mount a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. --> -`persistentVolumeClaim` 卷用来将[持久卷](/docs/concepts/storage/persistent-volumes/)(PersistentVolume)挂载到 Pod 中。 +`persistentVolumeClaim` 卷用来将[持久卷](/zh/docs/concepts/storage/persistent-volumes/)(PersistentVolume)挂载到 Pod 中。 持久卷是用户在不知道特定云环境细节的情况下"申领"持久存储(例如 GCE PersistentDisk 或者 iSCSI 卷)的一种方法。 <!-- @@ -1237,7 +1198,7 @@ See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) details. --> -更多详情请参考[持久卷示例](/docs/concepts/storage/persistent-volumes/) +更多详情请参考[持久卷示例](/zh/docs/concepts/storage/persistent-volumes/) ### projected {#projected} @@ -1273,7 +1234,8 @@ True. --> 服务帐户令牌的映射是 Kubernetes 1.11 版本中引入的一个功能,并在 1.12 版本中被提升为 Beta 功能。 -若要在 1.11 版本中启用此特性,需要显式设置 `TokenRequestProjection` [功能开关](/docs/reference/command-line-tools-reference/feature-gates/) 为 True。 +若要在 1.11 版本中启用此特性,需要显式设置 `TokenRequestProjection` +[功能开关](/zh/docs/reference/command-line-tools-reference/feature-gates/) 为 True。 <!-- #### Example Pod with a secret, a downward API, and a configmap. @@ -1500,16 +1462,14 @@ More details and examples can be found [here](https://github.com/kubernetes/exam A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to be mounted into your Pod. --> +`quobyte` 卷允许将现有的 [Quobyte](https://www.quobyte.com) 卷挂载到您的 Pod 中。 -`quobyte` 卷允许将现有的 [Quobyte](http://www.quobyte.com) 卷挂载到您的 Pod 中。 - -{{< caution >}} <!-- You must have your own Quobyte setup running with the volumes created before you can use it. --> +{{< caution >}} 在使用 Quobyte 卷之前,您首先要进行安装并创建好卷。 - {{< /caution >}} <!-- @@ -1517,7 +1477,6 @@ Quobyte supports the {{< glossary_tooltip text="Container Storage Interface" ter CSI is the recommended plugin to use Quobyte volumes inside Kubernetes. Quobyte's GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte-csi) for deploying Quobyte using CSI, along with examples. --> - Quobyte 支持{{< glossary_tooltip text="容器存储接口" term_id="csi" >}}。 推荐使用 CSI 插件以在 Kubernetes 中使用 Quobyte 卷。 Quobyte 的 GitHub 项目具有[说明](https://github.com/quobyte/quobyte-csi#quobyte-csi)以及使用示例来部署 CSI 的 Quobyte。 @@ -1532,16 +1491,14 @@ a `rbd` volume are preserved and the volume is merely unmounted. This means that a RBD volume can be pre-populated with data, and that data can be "handed off" between Pods. --> - -`rbd` 卷允许将 [Rados 块设备](http://ceph.com/docs/master/rbd/rbd/) 卷挂载到您的 Pod 中. +`rbd` 卷允许将 [Rados 块设备](https://ceph.com/docs/master/rbd/rbd/) 卷挂载到您的 Pod 中. 不像 `emptyDir` 那样会在删除 Pod 的同时也会被删除,`rbd` 卷的内容在删除 Pod 时会被保存,卷只是被卸载掉了。 这意味着 `rbd` 卷可以被预先填充数据,并且这些数据可以在 Pod 之间"传递"。 - -{{< caution >}} <!-- You must have your own Ceph installation running before you can use RBD. 
--> +{{< caution >}} 在使用 RBD 之前,您必须安装运行 Ceph。 {{< /caution >}} @@ -1574,18 +1531,17 @@ volumes (or it can dynamically provision new volumes for persistent volume claim ScaleIO 是基于软件的存储平台,可以使用现有硬件来创建可伸缩的、共享的而且是网络化的块存储集群。 `scaleIO` 卷插件允许部署的 Pod 访问现有的 ScaleIO 卷(或者它可以动态地为持久卷申领提供新的卷,参见[ScaleIO 持久卷](/docs/concepts/storage/persistent-volumes/#scaleio))。 -{{< caution >}} <!-- You must have an existing ScaleIO cluster already setup and running with the volumes created before you can use them. --> +{{< caution >}} 在使用前,您必须有个安装完毕且运行正常的 ScaleIO 集群,并且创建好了存储卷。 {{< /caution >}} <!-- The following is an example of Pod configuration with ScaleIO: --> - 下面是配置了 ScaleIO 的 Pod 示例: ```yaml @@ -1632,26 +1588,25 @@ non-volatile storage. `secret` 卷用来给 Pod 传递敏感信息,例如密码。您可以将 secret 存储在 Kubernetes API 服务器上,然后以文件的形式挂在到 Pod 中,无需直接与 Kubernetes 耦合。 `secret` 卷由 tmpfs(基于 RAM 的文件系统)提供存储,因此它们永远不会被写入非易失性(持久化的)存储器。 -{{< caution >}} <!-- You must create a secret in the Kubernetes API before you can use it. --> +{{< caution >}} 使用前您必须在 Kubernetes API 中创建 secret。 {{< /caution >}} -{{< note >}} <!-- A Container using a Secret as a [subPath](#using-subpath) volume mount will not receive Secret updates. --> +{{< note >}} 容器以 [subPath](#using-subpath) 卷的方式挂载 Secret 时,它将感知不到 Secret 的更新。 {{< /note >}} <!-- Secrets are described in more detail [here](/docs/user-guide/secrets). --> - -Secret 的更多详情请参考[这里](/docs/user-guide/secrets)。 +Secret 的更多详情请参考[这里](/zh/docs/concepts/configuration/secret/)。 ### storageOS {#storageos} @@ -1659,7 +1614,6 @@ Secret 的更多详情请参考[这里](/docs/user-guide/secrets)。 A `storageos` volume allows an existing [StorageOS](https://www.storageos.com) volume to be mounted into your Pod. --> - `storageos` 卷允许将现有的 [StorageOS](https://www.storageos.com) 卷挂载到您的 Pod 中。 <!-- @@ -1668,7 +1622,6 @@ or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost. --> - StorageOS 在 Kubernetes 环境中以容器的形式运行,这使得应用能够从 Kubernetes 集群中的任何节点访问本地或关联的存储。 为应对节点失效状况,可以复制数据。 若需提高利用率和降低成本,可以考虑瘦配置(Thin Provisioning)和数据压缩。 @@ -1679,23 +1632,20 @@ At its core, StorageOS provides block storage to Containers, accessible via a fi The StorageOS Container requires 64-bit Linux and has no additional dependencies. A free developer license is available. --> - 作为其核心能力之一,StorageOS 为容器提供了可以通过文件系统访问的块存储。 StorageOS 容器需要 64 位的 Linux,并且没有其他的依赖关系。 StorageOS 提供免费的开发者授权许可。 -{{< caution >}} <!-- You must run the StorageOS Container on each node that wants to access StorageOS volumes or that will contribute storage capacity to the pool. For installation instructions, consult the [StorageOS documentation](https://docs.storageos.com). --> - +{{< caution >}} 您必须在每个希望访问 StorageOS 卷的或者将向存储资源池贡献存储容量的节点上运行 StorageOS 容器。 有关安装说明,请参阅 [StorageOS 文档](https://docs.storageos.com)。 - {{< /caution >}} ```yaml @@ -1735,27 +1685,27 @@ For more information including Dynamic Provisioning and Persistent Volume Claims ### vsphereVolume {#vspherevolume} -{{< note >}} <!-- Prerequisite: Kubernetes with vSphere Cloud Provider configured. For cloudprovider configuration please refer [vSphere getting started guide](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/). 
--> -前提条件:配备了 vSphere 云驱动的 Kubernetes。云驱动的配置方法请参考 [vSphere 使用指南](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)。 +{{< note >}} +前提条件:配备了 vSphere 云驱动的 Kubernetes。云驱动的配置方法请参考 +[vSphere 使用指南](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)。 {{< /note >}} <!-- A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastore. --> - `vsphereVolume` 用来将 vSphere VMDK 卷挂载到您的 Pod 中。 在卸载卷时,卷的内容会被保留。 vSphereVolume 卷类型支持 VMFS 和 VSAN 数据仓库。 -{{< caution >}} <!-- You must create VMDK using one of the following methods before using with Pod. --> +{{< caution >}} 在挂载到 Pod 之前,您必须用下列方式之一创建 VMDK。 {{< /caution >}} @@ -1764,7 +1714,6 @@ You must create VMDK using one of the following methods before using with Pod. Choose one of the following methods to create a VMDK. --> - #### 创建 VMDK 卷 选择下列方式之一创建 VMDK。 @@ -1793,7 +1742,6 @@ vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk <!-- #### vSphere VMDK Example configuration --> - #### vSphere VMDK 配置示例 ```yaml @@ -1819,7 +1767,6 @@ spec: <!-- More examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere). --> - 更多示例可以在[这里](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere)找到。 <!-- @@ -1828,8 +1775,7 @@ More examples can be found [here](https://github.com/kubernetes/examples/tree/ma Sometimes, it is useful to share one volume for multiple uses in a single Pod. The `volumeMounts.subPath` property can be used to specify a sub-path inside the referenced volume instead of its root. --> - -## 使用 subPath +## 使用 subPath {#using-path} 有时,在单个 Pod 中共享卷以供多方使用是很有用的。 `volumeMounts.subPath` 属性可用于指定所引用的卷内的子路径,而不是其根路径。 @@ -1838,7 +1784,6 @@ property can be used to specify a sub-path inside the referenced volume instead Here is an example of a Pod with a LAMP stack (Linux Apache Mysql PHP) using a single, shared volume. The HTML contents are mapped to its `html` folder, and the databases will be stored in its `mysql` folder: --> - 下面是一个使用同一共享卷的、内含 LAMP 栈(Linux Apache Mysql PHP)的 Pod 的示例。 HTML 内容被映射到卷的 `html` 文件夹,数据库将被存储在卷的 `mysql` 文件夹中: @@ -1878,13 +1823,11 @@ spec: {{< feature-state for_k8s_version="v1.15" state="beta" >}} - <!-- Use the `subPathExpr` field to construct `subPath` directory names from Downward API environment variables. Before you use this feature, you must enable the `VolumeSubpathEnvExpansion` feature gate. The `subPath` and `subPathExpr` properties are mutually exclusive. --> - 使用 `subPathExpr` 字段从 Downward API 环境变量构造 `subPath` 目录名。 在使用此特性之前,必须启用 `VolumeSubpathEnvExpansion` 功能开关。 `subPath` 和 `subPathExpr` 属性是互斥的。 @@ -1892,8 +1835,8 @@ The `subPath` and `subPathExpr` properties are mutually exclusive. <!-- In this example, a Pod uses `subPathExpr` to create a directory `pod1` within the hostPath volume `/var/log/pods`, using the pod name from the Downward API. The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container. --> - -在这个示例中,Pod 基于 Downward API 中的 Pod 名称,使用 `subPathExpr` 在 hostPath 卷 `/var/log/pods` 中创建目录 `pod1`。 +在这个示例中,Pod 基于 Downward API 中的 Pod 名称,使用 `subPathExpr` +在 hostPath 卷 `/var/log/pods` 中创建目录 `pod1`。 主机目录 `/var/log/pods/pod1` 挂载到了容器的 `/logs` 中。 ```yaml @@ -1932,7 +1875,6 @@ medium of the filesystem holding the kubelet root dir (typically `hostPath` volume can consume, and no isolation between Containers or between Pods. 
--> - ## 资源 `emptyDir` 卷的存储介质(磁盘、SSD 等)是由保存 kubelet 根目录(通常是 `/var/lib/kubelet`)的文件系统的介质确定。 @@ -1944,8 +1886,10 @@ request a certain amount of space using a [resource](/docs/user-guide/compute-re specification, and to select the type of media to use, for clusters that have several media types. --> - -将来,我们希望 `emptyDir` 卷和 `hostPath` 卷能够使用 [resource](/docs/user-guide/computeresources) 规范来请求一定量的空间,并且能够为具有多种介质类型的集群选择要使用的介质类型。 +将来,我们希望 `emptyDir` 卷和 `hostPath` 卷能够使用 +[resource](/zh/docs/concepts/configuration/manage-compute-resources-containers/) +规约来请求一定量的空间, +并且能够为具有多种介质类型的集群选择要使用的介质类型。 <!-- ## Out-of-Tree Volume Plugins @@ -1967,7 +1911,8 @@ Kubernetes API. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository. --> -在引入 CSI 和 FlexVolume 之前,所有卷插件(如上面列出的卷类型)都是 "in-tree" 的,这意味着它们是与 Kubernetes 的核心组件一同构建、链接、编译和交付的,并且这些插件都扩展了 Kubernetes 的核心 API。 +在引入 CSI 和 FlexVolume 之前,所有卷插件(如上面列出的卷类型)都是 "in-tree" 的, +这意味着它们是与 Kubernetes 的核心组件一同构建、链接、编译和交付的,并且这些插件都扩展了 Kubernetes 的核心 API。 这意味着向 Kubernetes 添加新的存储系统(卷插件)需要将代码合并到 Kubernetes 核心代码库中。 <!-- @@ -2007,25 +1952,22 @@ Kubernetes v1.10, and is GA in Kubernetes v1.13. CSI 的支持在 Kubernetes v1.9 中作为 alpha 特性引入,在 Kubernetes v1.10 中转为 beta 特性,并在 Kubernetes v1.13 正式 GA。 -{{< note >}} <!-- Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes v1.13 and will be removed in a future release. --> +{{< note >}} +Kubernetes v1.13中不支持 CSI 规范版本0.2和0.3,并将在以后的版本中删除。 {{< /note >}} -Kubernetes v1.13中不支持 CSI 规范版本0.2和0.3,并将在以后的版本中删除。 - -{{< note >}} <!-- CSI drivers may not be compatible across all Kubernetes releases. Please check the specific CSI driver's documentation for supported deployments steps for each Kubernetes release and a compatibility matrix. --> - +{{< note >}} CSI驱动程序可能并非在所有Kubernetes版本中都兼容。 请查看特定CSI驱动程序的文档,以获取每个 Kubernetes 版本所支持的部署步骤以及兼容性列表。 - {{< /note >}} <!-- @@ -2065,7 +2007,6 @@ persistent volume: The value is passed as `volume_id` on all calls to the CSI volume driver when referencing the volume. --> - - `volumeHandle`:唯一标识卷的字符串值。 该值必须与CSI 驱动程序在 `CreateVolumeResponse` 的 `volume_id` 字段中返回的值相对应;接口定义在 [CSI spec](https://github.com/container-storageinterface/spec/blob/master/spec.md#createvolume) 中。 在所有对 CSI 卷驱动程序的调用中,引用该 CSI 卷时都使用此值作为 `volume_id` 参数。 @@ -2076,7 +2017,6 @@ persistent volume: passed to the CSI driver via the `readonly` field in the `ControllerPublishVolumeRequest`. --> - - `readOnly`:一个可选的布尔值,指示通过 `ControllerPublished` 关联该卷时是否设置该卷为只读。 默认值是 false。 该值通过 `ControllerPublishVolumeRequest` 中的 `readonly` 字段传递给 CSI 驱动程序。 @@ -2090,7 +2030,6 @@ persistent volume: `ControllerPublishVolumeRequest`, `NodeStageVolumeRequest`, and `NodePublishVolumeRequest`. --> - - `fsType`:如果 PV 的 `VolumeMode` 为 `Filesystem`,那么此字段指定挂载卷时应该使用的文件系统。 如果卷尚未格式化,并且支持格式化,此值将用于格式化卷。 此值可以通过 `ControllerPublishVolumeRequest`、`NodeStageVolumeRequest` 和 @@ -2117,7 +2056,6 @@ persistent volume: optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed. --> - - `controllerPublishSecretRef`:对包含敏感信息的 secret 对象的引用;该敏感信息会被传递给 CSI 驱动来完成 CSI `ControllerPublishVolume` 和 `ControllerUnpublishVolume` 调用。 此字段是可选的;在不需要 secret 时可以是空的。 如果 secret 对象包含多个 secret,则所有的 secret 都会被传递。 @@ -2129,7 +2067,6 @@ persistent volume: is required. If the secret object contains more than one secret, all secrets are passed. 
--> - - `nodeStageSecretRef`:对包含敏感信息的 secret 对象的引用,以传递给 CSI 驱动来完成 CSI `NodeStageVolume` 调用。 此字段是可选的,如果不需要 secret,则可能是空的。 如果 secret 对象包含多个 secret,则传递所有 secret。 @@ -2141,7 +2078,6 @@ persistent volume: secret is required. If the secret object contains more than one secret, all secrets are passed. --> - - `nodePublishSecretRef`:对包含敏感信息的 secret 对象的引用,以传递给 CSI 驱动来完成 CSI ``NodePublishVolume` 调用。 此字段是可选的,如果不需要 secret,则可能是空的。 如果 secret 对象包含多个 secret,则传递所有 secret。 @@ -2149,6 +2085,7 @@ persistent volume: <!-- #### CSI raw block volume support --> + #### CSI 原始块卷支持 {{< feature-state for_k8s_version="v1.14" state="beta" >}} @@ -2178,8 +2115,7 @@ CSI块卷支持功能已启用,但默认情况下启用。必须为此功能 Learn how to [setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support). --> - -学习怎样[安装您的带有块卷支持的 PV/PVC](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)。 +学习怎样[安装您的带有块卷支持的 PV/PVC](/zh/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)。 <!-- #### CSI ephemeral volumes @@ -2239,7 +2175,7 @@ documentation](https://kubernetes-csi.github.io/docs/) #### Migrating to CSI drivers from in-tree plugins --> -#开发人员资源 +# 开发人员资源 有关如何开发 CSI 驱动程序的更多信息,请参考[kubernetes-csi文档](https://kubernetes-csi.github.io/docs/) #### 从 in-tree 插件迁移到 CSI 驱动程序 @@ -2277,7 +2213,6 @@ plugin path on each node (and in some cases master). Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin. More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md). --> - FlexVolume 是一个自 1.2 版本(在 CSI 之前)以来在 Kubernetes 中一直存在的 out-of-tree 插件接口。 它使用基于 exec 的模型来与驱动程序对接。 用户必须在每个节点(在某些情况下是主节点)上的预定义卷插件路径中安装 FlexVolume 驱动程序可执行文件。 @@ -2356,8 +2291,6 @@ Its values are: 该模式等同于 [Linux 内核文档](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) 中描述的 `rshared` 挂载传播选项。 - -{{< caution >}} <!-- `Bidirectional` mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged @@ -2365,12 +2298,11 @@ Containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by Containers in Pods must be destroyed (unmounted) by the Containers on termination. --> - +{{< caution >}} `Bidirectional` 形式的挂载传播可能比较危险。 它可以破坏主机操作系统,因此它只被允许在特权容器中使用。 强烈建议您熟悉 Linux 内核行为。 此外,由 Pod 中的容器创建的任何卷挂载必须在终止时由容器销毁(卸载)。 - {{< /caution >}} <!-- @@ -2412,4 +2344,3 @@ sudo systemctl restart docker * 参考[使用持久卷部署 WordPress 和 MySQL](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) 示例。 - diff --git a/content/zh/docs/concepts/workloads/controllers/job.md b/content/zh/docs/concepts/workloads/controllers/job.md new file mode 100644 index 0000000000..0a5dfd8291 --- /dev/null +++ b/content/zh/docs/concepts/workloads/controllers/job.md @@ -0,0 +1,861 @@ +--- +title: Jobs +content_type: concept +feature: + title: 批量执行 + description: > + 除了服务之外,Kubernetes 还可以管理你的批处理和 CI 工作负载,在期望时替换掉失效的容器。 +weight: 60 +--- +<!-- +reviewers: +- erictune +- soltysh +title: Jobs +content_type: concept +feature: + title: Batch execution + description: > + In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired. +weight: 60 +--> + +<!-- overview --> +<!-- +A Job creates one or more Pods and ensures that a specified number of them successfully terminate. +As pods successfully complete, the Job tracks the successful completions. 
When a specified number +of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up +the Pods it created. + +A simple case is to create one Job object in order to reliably run one Pod to completion. +The Job object will start a new Pod if the first Pod fails or is deleted (for example +due to a node hardware failure or a node reboot). + +You can also use a Job to run multiple Pods in parallel. +--> +Job 会创建一个或者多个 Pods,并确保指定数量的 Pods 成功终止。 +随着 Pods 成功结束,Job 跟踪记录成功完成的 Pods 个数。 +当数量达到指定的成功个数阈值时,任务(即 Job)结束。 +删除 Job 的操作会清除所创建的全部 Pods。 + +一种简单的使用场景下,你会创建一个 Job 对象以便以一种可靠的方式运行某 Pod 直到完成。 +当第一个 Pod 失败或者被删除(比如因为节点硬件失效或者重启)时,Job +对象会启动一个新的 Pod。 + +你也可以使用 Job 以并行的方式运行多个 Pod。 + +<!-- body --> +<!-- +## Running an example Job + +Here is an example Job config. It computes π to 2000 places and prints it out. +It takes around 10s to complete. +--> +## 运行示例 Job {#running-an-example-job} + +下面是一个 Job 配置示例。它负责计算 π 到小数点后 2000 位,并将结果打印出来。 +此计算大约需要 10 秒钟完成。 + +{{< codenew file="controllers/job.yaml" >}} + +<!--You can run the example with this command:--> +你可以使用下面的命令来运行此示例: + +```shell +kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml +``` + +输出类似于: + +``` +job.batch/pi created +``` + +<!-- Check on the status of the Job with `kubectl`: --> +使用 `kubectl` 来检查 Job 的状态: + +```shell +kubectl describe jobs/pi +``` + +输出类似于: + +``` +Name: pi +Namespace: default +Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c +Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + job-name=pi +Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":... +Parallelism: 1 +Completions: 1 +Start Time: Mon, 02 Dec 2019 15:20:11 +0200 +Completed At: Mon, 02 Dec 2019 15:21:16 +0200 +Duration: 65s +Pods Statuses: 0 Running / 1 Succeeded / 0 Failed +Pod Template: + Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c + job-name=pi + Containers: + pi: + Image: perl + Port: <none> + Host Port: <none> + Command: + perl + -Mbignum=bpi + -wle + print bpi(2000) + Environment: <none> + Mounts: <none> + Volumes: <none> +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7 +``` + +<!-- +To view completed Pods of a Job, use `kubectl get pods`. + +To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: +--> +要查看 Job 对应的已完成的 Pods,可以执行 `kubectl get pods`。 + +要以机器可读的方式列举隶属于某 Job 的全部 Pods,你可以使用类似下面这条命令: + +```shell +pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}') +echo $pods +``` + +输出类似于: + +``` +pi-5rwd7 +``` + +<!-- +Here, the selector is the same as the selector for the Job. The `-output=jsonpath` option specifies an expression +that just gets the name from each Pod in the returned list. 
+ +View the standard output of one of the pods: +--> +这里,选择算符与 Job 的选择算符相同。`--output=jsonpath` 选项给出了一个表达式, +用来从返回的列表中提取每个 Pod 的 name 字段。 + +查看其中一个 Pod 的标准输出: + +```shell +kubectl logs $pods +``` + +<!--The output is similar to this:--> +输出类似于: + +``` +3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 +``` +<!-- +## Writing a Job spec + +As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields. +Its name must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). +--> +## 编写 Job 规约 + +与 Kubernetes 中其他资源的配置类似,Job 也需要 `apiVersion`、`kind` 和 `metadata` 字段。 +Job 的名字必须时合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +Job 配置还需要一个[`.spec` 节](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 + +<!-- +### Pod Template + +The `.spec.template` is the only required field of the `.spec`. + +The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`. + +In addition to required fields for a Pod, a pod template in a Job must specify appropriate +labels (see [pod selector](#pod-selector)) and an appropriate restart policy. 
+ +Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed. +--> +### Pod 模版 + +Job 的 `.spec` 中只有 `.spec.template` 是必需的字段。 + +字段 `.spec.template` 的值是一个 [Pod 模版](/zh/docs/concepts/workloads/pods/#pod-templates)。 +其定义规范与 {{< glossary_tooltip text="Pod" term_id="pod" >}} +完全相同,只是其中不再需要 `apiVersion` 或 `kind` 字段。 + +除了作为 Pod 所必需的字段之外,Job 中的 Pod 模版必需设置合适的标签 +(参见[Pod 选择算符](#pod-selector))和合适的重启策略。 + +Job 中 Pod 的 [`RestartPolicy`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +只能设置为 `Never` 或 `OnFailure` 之一。 + +<!-- +### Pod selector + +The `.spec.selector` field is optional. In almost all cases you should not specify it. +See section [specifying your own pod selector](#specifying-your-own-pod-selector). +--> +### Pod 选择算符 {#pod-selector} + +字段 `.spec.selector` 是可选的。在绝大多数场合,你都不需要为其赋值。 +参阅[设置自己的 Pod 选择算符](#specifying-your-own-pod-selector). + +<!-- +### Parallel execution for Jobs {#parallel-jobs} + +There are three main types of task suitable to run as a Job: +--> +### Job 的并行执行 {#parallel-jobs} + +适合以 Job 形式来运行的任务主要有三种: +<!-- +1. Non-parallel Jobs + - normally, only one Pod is started, unless the Pod fails. + - the Job is complete as soon as its Pod terminates successfully. +1. Parallel Jobs with a *fixed completion count*: + - specify a non-zero positive value for `.spec.completions`. + - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`. + - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`. +1. Parallel Jobs with a *work queue*: + - do not specify `.spec.completions`, default to `.spec.parallelism`. + - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue. + - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done. + - when _any_ Pod from the Job terminates with success, no new Pods are created. + - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success. + - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting. +--> +1. 非并行 Job + - 通常只启动一个 Pod,除非该 Pod 失败 + - 当 Pod 成功终止时,立即视 Job 为完成状态 +1. 具有 *确定完成计数* 的并行 Job + - `.spec.completions` 字段设置为非 0 的正数值 + - Job 用来代表整个任务,当对应于 1 和 `.spec.completions` 之间的每个整数都存在 + 一个成功的 Pod 时,Job 被视为完成 + - **尚未实现**:每个 Pod 收到一个介于 1 和 `spec.completions` 之间的不同索引值 +1. 带 *工作队列* 的并行 Job + - 不设置 `spec.completions`,默认值为 `.spec.parallelism` + - 多个 Pod 之间必须相互协调,或者借助外部服务确定每个 Pod 要处理哪个工作条目。 + 例如,任一 Pod 都可以从工作队列中取走最多 N 个工作条目。 + - 每个 Pod 都可以独立确定是否其它 Pod 都已完成,进而确定 Job 是否完成 + - 当 Job 中 _任何_ Pod 成功终止,不再创建新 Pod + - 一旦至少 1 个 Pod 成功完成,并且所有 Pod 都已终止,即可宣告 Job 成功完成 + - 一旦任何 Pod 成功退出,任何其它 Pod 都不应再对此任务执行任何操作或生成任何输出。 + 所有 Pod 都应启动退出过程。 + +<!-- +For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are +unset, both are defaulted to 1. + +For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed. +You can set `.spec.parallelism`, or leave it unset and it will default to 1. + +For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to +a non-negative integer. 
+ +For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section. +--> +对于 _非并行_ 的 Job,你可以不设置 `spec.completions` 和 `spec.parallelism`。 +这两个属性都不设置时,均取默认值 1。 + +对于 _确定完成计数_ 类型的 Job,你应该设置 `.spec.completions` 为所需要的完成个数。 +你可以设置 `.spec.parallelism`,也可以不设置。其默认值为 1。 + +对于一个 _工作队列_ Job,你不可以设置 `.spec.completions`,但要将`.spec.parallelism` +设置为一个非负整数。 + +关于如何利用不同类型的 Job 的更多信息,请参见 [Job 模式](#job-patterns)一节。 + +<!-- +#### Controlling parallelism + +The requested parallelism (`.spec.parallelism`) can be set to any non-negative value. +If it is unspecified, it defaults to 1. +If it is specified as 0, then the Job is effectively paused until it is increased. + +Actual parallelism (number of pods running at any instant) may be more or less than requested +parallelism, for a variety of reasons: +--> +#### 控制并行性 {#controlling-parallelism} + +并行性请求(`.spec.parallelism`)可以设置为任何非负整数。 +如果未设置,则默认为 1。 +如果设置为 0,则 Job 相当于启动之后便被暂停,直到此值被增加。 + +实际并行性(在任意时刻运行状态的 Pods 个数)可能比并行性请求略大或略小, +原因如下: + +<!-- +- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of + remaining completions. Higher values of `.spec.parallelism` are effectively ignored. +- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded - remaining Pods are allowed to complete, however. +- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react. +- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.), + then there may be fewer pods than requested. +- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job. +- When a Pod is gracefully shut down, it takes time to stop. +--> +- 对于 _确定完成计数_ Job,实际上并行执行的 Pods 个数不会超出剩余的完成数。 + 如果 `.spec.parallelism` 值较高,会被忽略。 +- 对于 _工作队列_ Job,有任何 Job 成功结束之后,不会有新的 Pod 启动。 + 不过,剩下的 Pods 允许执行完毕。 +- 如果 Job {{< glossary_tooltip text="控制器" term_id="controller" >}} 没有来得及作出响应,或者 +- 如果 Job 控制器因为任何原因(例如,缺少 `ResourceQuota` 或者没有权限)无法创建 Pods。 + Pods 个数可能比请求的数目小。 +- Job 控制器可能会因为之前同一 Job 中 Pod 失效次数过多而压制新 Pod 的创建。 +- 当 Pod 处于体面终止进程中,需要一定时间才能停止。 + +<!-- +## Handling Pod and container failures + +A container in a Pod may fail for a number of reasons, such as because the process in it exited with +a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this +happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays +on the node, but the container is re-run. Therefore, your program needs to handle the case when it is +restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. +See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. +--> + +## 处理 Pod 和容器失效 + +Pod 中的容器可能因为多种不同原因失效,例如因为其中的进程退出时返回值非零, +或者容器因为超出内存约束而被杀死等等。 +如果发生这类事件,并且 `.spec.template.spec.restartPolicy = "OnFailure"`, +Pod 则继续留在当前节点,但容器会被重新运行。 +因此,你的程序需要能够处理在本地被重启的情况,或者要设置 +`.spec.template.spec.restartPolicy = "Never"`。 +关于 `restartPolicy` 的更多信息,可参阅 +[Pod 生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/#example-states)。 + +<!-- +An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node +(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the +`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller +starts a new Pod. 
This means that your application needs to handle the case when it is restarted in a new +pod. In particular, it needs to handle temporary files, locks, incomplete output and the like +caused by previous runs. +--> +整个 Pod 也可能会失败,且原因各不相同。 +例如,当 Pod 启动时,节点失效(被升级、被重启、被删除等)或者其中的容器失败而 +`.spec.template.spec.restartPolicy = "Never"`。 +当 Pod 失败时,Job 控制器会启动一个新的 Pod。 +这意味着,你的应用需要处理在一个新 Pod 中被重启的情况。 +尤其是应用需要处理之前运行所触碰或产生的临时文件、锁、不完整的输出等问题。 + +<!-- +Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and +`.spec.template.spec.restartPolicy = "Never"`, the same program may +sometimes be started twice. + +If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be +multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. +--> +注意,即使你将 `.spec.parallelism` 设置为 1,且将 `.spec.completions` 设置为 +1,并且 `.spec.template.spec.restartPolicy` 设置为 "Never",同一程序仍然有可能被启动两次。 + +如果你确实将 `.spec.parallelism` 和 `.spec.completions` 都设置为比 1 大的值, +那就有可能同时出现多个 Pod 运行的情况。 +为此,你的 Pod 也必须能够处理并发性问题。 + +<!-- +### Pod backoff failure policy + +There are situations where you want to fail a Job after some amount of retries +due to a logical error in configuration etc. +To do so, set `.spec.backoffLimit` to specify the number of retries before +considering a Job as failed. The back-off limit is set by default to 6. Failed +Pods associated with the Job are recreated by the Job controller with an +exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The +back-off count is reset when a Job's Pod is deleted or successful without any +other Pods for the Job failing around that time. +--> +### Pod 回退失效策略 + +在有些情形下,你可能希望 Job 在经历若干次重试之后直接进入失败状态,因为这很 +可能意味着遇到了配置错误。 +为了实现这点,可以将 `.spec.backoffLimit` 设置为视 Job 为失败之前的重试次数。 +失效回退的限制值默认为 6。 +与 Job 相关的失效的 Pod 会被 Job 控制器重建,并且以指数型回退计算重试延迟 +(从 10 秒、20 秒到 40 秒,最多 6 分钟)。 +当 Job 的 Pod 被删除时,或者 Pod 成功时没有其它 Pod 处于失败状态,失效回退的次数也会被重置(为 0)。 + +<!-- +If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job +will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting +`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output +from failed Jobs is not lost inadvertently. +--> +{{< note >}} +如果你的 Job 的 `restartPolicy` 被设置为 "OnFailure",就要注意运行该 Job 的容器 +会在 Job 到达失效回退次数上限时自动被终止。 +这会使得调试 Job 中可执行文件的工作变得非常棘手。 +我们建议在调试 Job 时将 `restartPolicy` 设置为 "Never", +或者使用日志系统来确保失效 Jobs 的输出不会意外遗失。 +{{< /note >}} + +<!-- +## Job termination and cleanup + +When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around +allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. +The job object also remains after it is completed so that you can view its status. It is up to the user to delete +old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too. 
+--> +## Job 终止与清理 + +Job 完成时不会再创建新的 Pod,不过已有的 Pod 也不会被删除。 +保留这些 Pod 使得你可以查看已完成的 Pod 的日志输出,以便检查错误、警告 +或者其它诊断性输出。 +Job 完成时 Job 对象也一样被保留下来,这样你就可以查看它的状态。 +在查看了 Job 状态之后删除老的 Job 的操作留给了用户自己。 +你可以使用 `kubectl` 来删除 Job(例如,`kubectl delete jobs/pi` +或者 `kubectl delete -f ./job.yaml`)。 +当使用 `kubectl` 来删除 Job 时,该 Job 所创建的 Pods 也会被删除。 + +<!-- +By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the +`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated. + +Another way to terminate a Job is by setting an active deadline. +Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds. +The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created. +Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`. +--> +默认情况下,Job 会持续运行,除非某个 Pod 失败(`restartPolicy=Never`) +或者某个容器出错退出(`restartPolicy=OnFailure`)。 +这时,Job 基于前述的 `spec.backoffLimit` 来决定是否以及如何重试。 +一旦重试次数到达 `.spec.backoffLimit` 所设的上限,Job 会被标记为失败, +其中运行的 Pods 都会被终止。 + +终止 Job 的另一种方式是设置一个活跃期限。 +你可以为 Job 的 `.spec.activeDeadlineSeconds` 设置一个秒数值。 +该值适用于 Job 的整个生命期,无论 Job 创建了多少个 Pod。 +一旦 Job 运行时间达到 `activeDeadlineSeconds` 秒,其所有运行中的 Pod +都会被终止,并且 Job 的状态更新为 `type: Failed` +及 `reason: DeadlineExceeded`。 + +<!-- +Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached. + +Example: +--> +注意 Job 的 `.spec.activeDeadlineSeconds` 优先级高于其 `.spec.backoffLimit` 设置。 +因此,如果一个 Job 正在重试一个或多个失效的 Pod,该 Job 一旦到达 +`activeDeadlineSeconds` 所设的时限即不再部署额外的 Pod,即使其重试次数还未 +达到 `backoffLimit` 所设的限制。 + +例如: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: pi-with-timeout +spec: + backoffLimit: 5 + activeDeadlineSeconds: 100 + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never +``` +<!-- +Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. + +Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`. +That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve. +--> +注意 Job 规约和 Job 中的 +[Pod 模版规约](/zh/docs/concepts/workloads/pods/init-containers/#detailed-behavior) +都有 `activeDeadlineSeconds` 字段。 +请确保你在合适的层次设置正确的字段。 + +还要注意的是,`restartPolicy` 对应的是 Pod,而不是 Job 本身: +一旦 Job 状态变为 `type: Failed`,就不会再发生 Job 重启的动作。 +换言之,由 `.spec.activeDeadlineSeconds` 和 `.spec.backoffLimit` 所触发的 Job 终结机制 +都会导致 Job 永久性的失败,而这类状态都需要手工干预才能解决。 + +<!-- +## Clean up finished jobs automatically + +Finished Jobs are usually no longer needed in the system. Keeping them around in +the system will put pressure on the API server. 
If the Jobs are managed directly +by a higher level controller, such as +[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be +cleaned up by CronJobs based on the specified capacity-based cleanup policy. + +### TTL mechanism for finished Jobs +--> +## 自动清理完成的 Job + +完成的 Job 通常不需要留存在系统中。在系统中一直保留它们会给 API +服务器带来额外的压力。 +如果 Job 由某种更高级别的控制器来管理,例如 +[CronJobs](/zh/docs/concepts/workloads/controllers/cron-jobs/), +则 Job 可以被 CronJob 基于特定的根据容量裁定的清理策略清理掉。 + +### 已完成 Job 的 TTL 机制 {#ttl-mechanisms-for-finished-jobs} + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +<!-- +Another way to clean up finished Jobs (either `Complete` or `Failed`) +automatically is to use a TTL mechanism provided by a +[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for +finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of +the Job. + +When the TTL controller cleans up the Job, it will delete the Job cascadingly, +i.e. delete its dependent objects, such as Pods, together with the Job. Note +that when the Job is deleted, its lifecycle guarantees, such as finalizers, will +be honored. + +For example: +--> +自动清理已完成 Job (状态为 `Complete` 或 `Failed`)的另一种方式是使用由 +[TTL 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)所提供 +的 TTL 机制。 +通过设置 Job 的 `.spec.ttlSecondsAfterFinished` 字段,可以让该控制器清理掉 +已结束的资源。 + +TTL 控制器清理 Job 时,会级联式地删除 Job 对象。 +换言之,它会删除所有依赖的对象,包括 Pod 及 Job 本身。 +注意,当 Job 被删除时,系统会考虑其生命周期保障,例如其 Finalizers。 + +例如: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: pi-with-ttl +spec: + ttlSecondsAfterFinished: 100 + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never +``` + +<!-- +The Job `pi-with-ttl` will be eligible to be automatically deleted, `100` +seconds after it finishes. + +If the field is set to `0`, the Job will be eligible to be automatically deleted +immediately after it finishes. If the field is unset, this Job won't be cleaned +up by the TTL controller after it finishes. + +Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For +more information, see the documentation for +[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for +finished resources. +--> +Job `pi-with-ttl` 在结束 100 秒之后,可以成为被自动删除的标的。 + +如果该字段设置为 `0`,Job 在结束之后立即成为可被自动删除的对象。 +如果该字段没有设置,Job 不会在结束之后被 TTL 控制器自动清除。 + +注意这种 TTL 机制仍然是一种 Alpha 状态的功能特性,需要配合 `TTLAfterFinished` +特性门控使用。有关详细信息,可参考 +[TTL 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)的文档。 + +<!-- +## Job patterns + +The Job object can be used to support reliable parallel execution of Pods. The Job object is not +designed to support closely-communicating parallel processes, as commonly found in scientific +computing. It does support parallel processing of a set of independent but related *work items*. +These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a +NoSQL database to scan, and so on. +--> +## Job 模式 {#job-patterns} + +Job 对象可以用来支持多个 Pod 的可靠的并发执行。 +Job 对象不是设计用来支持相互通信的并行进程的,后者一般在科学计算中应用较多。 +Job 的确能够支持对一组相互独立而又有所关联的 *工作条目* 的并行处理。 +这类工作条目可能是要发送的电子邮件、要渲染的视频帧、要编解码的文件、NoSQL +数据库中要扫描的主键范围等等。 + +<!-- +In a complex system, there may be multiple different sets of work items. Here we are just +considering one set of work items that the user wants to manage together — a *batch job*. + +There are several different patterns for parallel computation, each with strengths and weaknesses. 
+The tradeoffs are: +--> +在一个复杂系统中,可能存在多个不同的工作条目集合。这里我们仅考虑用户希望一起管理的 +工作条目集合之一 — *批处理作业*。 + +并行计算的模式有好多种,每种都有自己的强项和弱点。这里要权衡的因素有: + +<!-- +- One Job object for each work item, vs. a single Job object for all work items. The latter is + better for large numbers of work items. The former creates some overhead for the user and for the + system to manage large numbers of Job objects. +- Number of pods created equals number of work items, vs. each Pod can process multiple work items. + The former typically requires less modification to existing code and containers. The latter + is better for large numbers of work items, for similar reasons to the previous bullet. +- Several approaches use a work queue. This requires running a queue service, + and modifications to the existing program or container to make it use the work queue. + Other approaches are easier to adapt to an existing containerised application. +--> +- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。 + 后者更适合处理大量工作条目的场景; + 前者会给用户带来一些额外的负担,而且需要系统管理大量的 Job 对象。 +- 创建与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。 + 前者通常不需要对现有代码和容器做较大改动; + 后者则更适合工作条目数量较大的场合,原因同上。 +- 有几种技术都会用到工作队列。这意味着需要运行一个队列服务,并修改现有程序或容器 + 使之能够利用该工作队列。 + 与之比较,其他方案在修改现有容器化应用以适应需求方面可能更容易一些。 + +<!-- +The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. +The pattern names are also links to examples and more detailed description. +--> +下面是对这些权衡的汇总,列 2 到 4 对应上面的权衡比较。 +模式的名称对应了相关示例和更详细描述的链接。 + +| 模式 | 单个 Job 对象 | Pods 数少于工作条目数? | 直接使用应用无需修改? | 在 Kube 1.1 上可用?| +| ----- |:-------------:|:-----------------------:|:---------------------:|:-------------------:| +| [Job 模版扩展](/zh/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ | +| [每工作条目一 Pod 的队列](/zh/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 | ✓ | +| [Pod 数量可变的队列](/zh/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ | +| 静态工作分派的单个 Job | ✓ | | ✓ | | + +<!-- +When you specify completions with `.spec.completions`, each Pod created by the Job controller +has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that +all pods for a task will have the same command line and the same +image, the same volumes, and (almost) the same environment variables. These patterns +are different ways to arrange for pods to work on different things. + +This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns. +Here, `W` is the number of work items. +--> +当你使用 `.spec.completions` 来设置完成数时,Job 控制器所创建的每个 Pod +使用完全相同的 [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)。 +这意味着任务的所有 Pod 都有相同的命令行,都使用相同的镜像和数据卷,甚至连 +环境变量都(几乎)相同。 +这些模式是让每个 Pod 执行不同工作的几种不同形式。 + +下表显示的是每种模式下 `.spec.parallelism` 和 `.spec.completions` 所需要的设置。 +其中,`W` 表示的是工作条目的个数。 + +| 模式 | `.spec.completions` | `.spec.parallelism` | +| ----- |:-------------------:|:--------------------:| +| [Job 模版扩展](/zh/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 | +| [每工作条目一 Pod 的队列](/zh/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | 任意值 | +| [Pod 个数可变的队列](/zh/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | 任意值 | +| 基于静态工作分派的单一 Job | W | 任意值 | + +<!-- +## Advanced usage + +### Specifying your own Pod selector {#specifying-your-own-pod-selector} + +Normally, when you create a Job object, you do not specify `.spec.selector`. +The system defaulting logic adds this field when the Job is created. 
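+-->
+
+作为上表的补充,下面给出一个 _确定完成计数_ 模式的示意清单,
+其中同时设置了 `.spec.completions` 和 `.spec.parallelism`;
+名称与命令均为演示用的假设取值:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: parallel-demo        # 示例名称,可任意选取
+spec:
+  completions: 5             # 总共需要 5 次成功完成
+  parallelism: 2             # 任一时刻最多并行运行 2 个 Pod
+  template:
+    spec:
+      containers:
+      - name: worker
+        image: busybox
+        command: ["sh", "-c", "echo processing one work item && sleep 5"]
+      restartPolicy: Never
+```
+
+<!--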
+It picks a selector value that will not overlap with any other jobs. + +However, in some cases, you might need to override this automatically set selector. +To do this, you can specify the `.spec.selector` of the Job. +--> +## 高级用法 {#advanced-usage} + +### 指定你自己的 Pod 选择算符 {#specifying-your-own-pod-selector} + +通常,当你创建一个 Job 对象时,你不会设置 `.spec.selector`。 +系统的默认值填充逻辑会在创建 Job 时添加此字段。 +它会选择一个不会与任何其他 Job 重叠的选择算符设置。 + +不过,有些场合下,你可能需要重载这个自动设置的选择算符。 +为了实现这点,你可以手动设置 Job 的 `spec.selector` 字段。 + +<!-- +Be very careful when doing this. If you specify a label selector which is not +unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated +job may be deleted, or this Job may count other Pods as completing it, or one or both +Jobs may refuse to create Pods or run to completion. If a non-unique selector is +chosen, then other controllers (e.g. ReplicationController) and their Pods may behave +in unpredictable ways too. Kubernetes will not stop you from making a mistake when +specifying `.spec.selector`. +--> +做这个操作时请务必小心。 +如果你所设定的标签选择算符并不唯一针对 Job 对应的 Pod 集合,甚或该算符还能匹配 +其他无关的 Pod,这些无关的 Job 的 Pod 可能会被删除。 +或者当前 Job 会将另外一些 Pod 当作是完成自身工作的 Pods, +又或者两个 Job 之一或者二者同时都拒绝创建 Pod,无法运行至完成状态。 +如果所设置的算符不具有唯一性,其他控制器(如 RC 副本控制器)及其所管理的 Pod +集合可能会变得行为不可预测。 +Kubernetes 不会在你设置 `.spec.selector` 时尝试阻止你犯这类错误。 + +<!-- +Here is an example of a case when you might want to use this feature. + +Say Job `old` is already running. You want existing Pods +to keep running, but you want the rest of the Pods it creates +to use a different pod template and for the Job to have a new name. +You cannot update the Job because these fields are not updatable. +Therefore, you delete Job `old` but _leave its pods +running_, using `kubectl delete jobs/old -cascade=false`. +Before deleting it, you make a note of what selector it uses: +--> +下面是一个示例场景,在这种场景下你可能会使用刚刚讲述的特性。 + +假定名为 `old` 的 Job 已经处于运行状态。 +你希望已有的 Pod 继续运行,但你希望 Job 接下来要创建的其他 Pod +使用一个不同的 Pod 模版,甚至希望 Job 的名字也发生变化。 +你无法更新现有的 Job,因为这些字段都是不可更新的。 +因此,你会删除 `old` Job,但 _允许该 Job 的 Pod 集合继续运行_。 +这是通过 `kubectl delete jobs/old --cascade=false` 实现的。 +在删除之前,我们先记下该 Job 所使用的选择算符。 + +```shell +kubectl get job old -o yaml +``` + +输出类似于: + +``` +kind: Job +metadata: + name: old + ... +spec: + selector: + matchLabels: + controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 + ... +``` + +<!-- +Then you create a new Job with name `new` and you explicitly specify the same selector. +Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, +they are controlled by Job `new` as well. + +You need to specify `manualSelector: true` in the new Job since you are not using +the selector that the system normally generates for you automatically. +--> +接下来你会创建名为 `new` 的新 Job,并显式地为其设置相同的选择算符。 +由于现有 Pod 都具有标签 `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, +它们也会被名为 `new` 的 Job 所控制。 + +你需要在新 Job 中设置 `manualSelector: true`,因为你并未使用系统通常自动为你 +生成的选择算符。 + +``` +kind: Job +metadata: + name: new + ... +spec: + manualSelector: true + selector: + matchLabels: + controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 + ... +``` + +<!-- +The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting +`manualSelector: true` tells the system to that you know what you are doing and to allow this +mismatch. 
+--> +新的 Job 自身会有一个不同于 `a8f3d00d-c6d2-11e5-9f87-42010af00002` 的唯一 ID。 +设置 `manualSelector: true` 是在告诉系统你知道自己在干什么并要求系统允许这种不匹配 +的存在。 + +<!-- +## Alternatives + +### Bare Pods + +When the node that a Pod is running on reboots or fails, the pod is terminated +and will not be restarted. However, a Job will create new Pods to replace terminated ones. +For this reason, we recommend that you use a Job rather than a bare Pod, even if your application +requires only a single Pod. +--> +## 替代方案 {#alternatives} + +### 裸 Pod {#bare-pods} + +当 Pod 运行所在的节点重启或者失败,Pod 会被终止并且不会被重启。 +Job 会重新创建新的 Pod 来替代已终止的 Pod。 +因为这个原因,我们建议你使用 Job 而不是独立的裸 Pod, +即使你的应用仅需要一个 Pod。 + +<!-- +### Replication Controller + +Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). +A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job +manages Pods that are expected to terminate (e.g. batch tasks). + +As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate +for pods with `RestartPolicy` equal to `OnFailure` or `Never`. +(Note: If `RestartPolicy` is not set, the default value is `Always`.) +--> +### 副本控制器 {#replication-controller} + +Job 与[副本控制器](/docs/user-guide/replication-controller)是彼此互补的。 +副本控制器管理的是那些不希望被终止的 Pod (例如,Web 服务器), +Job 管理的是那些希望被终止的 Pod(例如,批处理作业)。 + +正如在 [Pod 生命期](/zh/docs/concepts/workloads/pods/pod-lifecycle/) 中讨论的, +`Job` 仅适合于 `restartPolicy` 设置为 `OnFailure` 或 `Never` 的 Pod。 +注意:如果 `restartPolicy` 未设置,其默认值是 `Always`。 + +<!-- +### Single Job starts controller Pod + +Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort +of custom controller for those Pods. This allows the most flexibility, but may be somewhat +complicated to get started with and offers less integration with Kubernetes. +--> +### 单个 Job 启动控制器 Pod + +另一种模式是用唯一的 Job 来创建 Pod,而该 Pod 负责启动其他 Pod,因此扮演了一种 +后启动 Pod 的控制器的角色。 +这种模式的灵活性更高,但是有时候可能会把事情搞得很复杂,很难入门, +并且与 Kubernetes 的集成度很低。 + +<!-- +One example of this pattern would be a Job which starts a Pod which runs a script that in turn +starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a spark +driver, and then cleans up. + +An advantage of this approach is that the overall process gets the completion guarantee of a Job +object, but maintains complete control over what Pods are created and how work is assigned to them. +--> +这种模式的实例之一是用 Job 来启动一个运行脚本的 Pod,脚本负责启动 Spark +主控制器(参见 [Spark 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), +运行 Spark 驱动,之后完成清理工作。 + +这种方法的优点之一是整个过程得到了 Job 对象的完成保障, +同时维持了对创建哪些 Pod、如何向其分派工作的完全控制能力, + +<!-- +## Cron Jobs {#cron-jobs} + +You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. 
+--> +## Cron Jobs {#cron-jobs} + +你可以使用 [`CronJob`](/zh/docs/concepts/workloads/controllers/cron-jobs/) +创建一个在指定时间/日期运行的 Job,类似于 UNIX 系统上的 `cron` 工具。 + diff --git a/content/zh/docs/concepts/workloads/pods/_index.md b/content/zh/docs/concepts/workloads/pods/_index.md index db1df2566a..ac5806e3f0 100644 --- a/content/zh/docs/concepts/workloads/pods/_index.md +++ b/content/zh/docs/concepts/workloads/pods/_index.md @@ -1,4 +1,524 @@ --- title: "Pods" +content_type: concept weight: 10 +no_list: true +card: + name: concepts + weight: 60 --- +<!-- +reviewers: +- erictune +title: Pods +content_type: concept +weight: 10 +no_list: true +card: + name: concepts + weight: 60 +--> + +<!-- overview --> + +<!-- +_Pods_ are the smallest deployable units of computing that you can create and manage in Kubernetes. + +A _Pod_ (as in a pod of whales or pea pod) is a group of one or more +{{< glossary_tooltip text="containers" term_id="container" >}} +with shared storage/network resources, and a specification +for how to run the containers. A Pod's contents are always co-located and +co-scheduled, and run in a shared context. A Pod models an +application-specific "logical host": it contains one or more application +containers which are relatively tightly coupled. +In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host. +--> +_Pod_ 是可以在 Kubernetes 中创建和管理的、最小的可部署的计算单元。 + +_Pod_ (就像在鲸鱼荚或者豌豆荚中)是一组(一个或多个) +{{< glossary_tooltip text="容器" term_id="container" >}}; +这些容器共享存储、网络、以及怎样运行这些容器的声明。 +Pod 中的内容总是并置(colocated)的并且一同调度,在共享的上下文中运行。 +Pod 所建模的是特定于应用的“逻辑主机”,其中包含一个或多个应用容器, +这些容器是相对紧密的耦合在一起的。 +在非云环境中,在相同的物理机或虚拟机上运行的应用类似于 +在同一逻辑主机上运行的云应用。 + +<!-- +As well as application containers, a Pod can contain +[init containers](/docs/concepts/workloads/pods/init-containers/) that run +during Pod startup. You can also inject +[ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/) +for debugging if your cluster offers this. +--> +除了应用容器,Pod 还可以包含在 Pod 启动期间运行的 +[Init 容器](/zh/docs/concepts/workloads/pods/init-containers/)。 +你也可以在集群中支持[临时性容器](/zh/docs/concepts/workloads/pods/ephemeral-containers/) +的情况外,为调试的目的注入临时性容器。 + +<!-- body --> + +## 什么是 Pod? {#what-is-a-pod} + +<!-- +While Kubernetes supports more +{{< glossary_tooltip text="container runtimes" term_id="container-runtime" >}} +than just Docker, [Docker](https://www.docker.com/) is the most commonly known +runtime, and it helps to describe Pods using some terminology from Docker. +--> +{{< note >}} +除了 Docker 之外,Kubernetes 支持 +很多其他{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}, +[Docker](https://www.docker.com/) 是最有名的运行时, +使用 Docker 的术语来描述 Pod 会很有帮助。 +{{< /note >}} + +<!-- +The shared context of a Pod is a set of Linux namespaces, cgroups, and +potentially other facets of isolation - the same things that isolate a Docker +container. Within a Pod's context, the individual applications may have +further sub-isolations applied. + +In terms of Docker concepts, a Pod is similar to a group of Docker containers +with shared namespaces and shared filesystem volumes. +--> +Pod 的共享上下文包括一组 Linux 名字空间、控制组(cgroup)和可能一些其他的隔离 +方面,即用来隔离 Docker 容器的技术。 +在 Pod 的上下文中,每个独立的应用可能会进一步实施隔离。 + +就 Docker 概念的术语而言,Pod 类似于共享名字空间和文件系统卷的一组 Docker +容器。 + +<!-- +## Using Pods + +Usually you don't need to create Pods directly, even singleton Pods. 
+Instead, create them using workload resources such as {{< glossary_tooltip text="Deployment" +term_id="deployment" >}} or {{< glossary_tooltip text="Job" term_id="job" >}}. +If your Pods need to track state, consider the +{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource. + +Pods in a Kubernetes cluster are used in two main ways: +--> +## 使用 Pod {#using-pods} + +通常你不需要直接创建 Pod,甚至单实例 Pod。 +相反,你会使用诸如 +{{< glossary_tooltip text="Deployment" term_id="deployment" >}} 或 +{{< glossary_tooltip text="Job" term_id="job" >}} 这类工作负载资源 +来创建 Pod。如果 Pod 需要跟踪状态, +可以考虑 {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} +资源。 + +Kubernetes 集群中的 Pod 主要有两种用法: + +<!-- +* **Pods that run a single container**. The "one-container-per-Pod" model is the + most common Kubernetes use case; in this case, you can think of a Pod as a + wrapper around a single container; Kubernetes manages Pods rather than managing + the containers directly. +* **Pods that run multiple containers that need to work together**. A Pod can + encapsulate an application composed of multiple co-located containers that are + tightly coupled and need to share resources. These co-located containers + form a single cohesive unit of service—for example, one container serving data + stored in a shared volume to the public, while a separate _sidecar_ container + refreshes or updates those files. + The Pod wraps these containers, storage resources, and an ephemeral network + identity together as a single unit. + + Grouping multiple co-located and co-managed containers in a single Pod is a + relatively advanced use case. You should use this pattern only in specific + instances in which your containers are tightly coupled. +--> +* **运行单个容器的 Pod**。"每个 Pod 一个容器"模型是最常见的 Kubernetes 用例; + 在这种情况下,可以将 Pod 看作单个容器的包装器,并且 Kubernetes 直接管理 Pod,而不是容器。 +* **运行多个协同工作的容器的 Pod**。 + Pod 可能封装由多个紧密耦合且需要共享资源的共处容器组成的应用程序。 + 这些位于同一位置的容器可能形成单个内聚的服务单元 —— 一个容器将文件从共享卷提供给公众, + 而另一个单独的“挂斗”(sidecar)容器则刷新或更新这些文件。 + Pod 将这些容器和存储资源打包为一个可管理的实体。 + + {{< note >}} + 将多个并置、同管的容器组织到一个 Pod 中是一种相对高级的使用场景。 + 只有在一些场景中,容器之间紧密关联时你才应该使用这种模式。 + {{< /note >}} + +<!-- +Each Pod is meant to run a single instance of a given application. If you want to +scale your application horizontally (to provide more overall resources by running +more instances), you should use multiple Pods, one for each instance. In +Kubernetes, this is typically referred to as _replication_. +Replicated Pods are usually created and managed as a group by a workload resource +and its {{< glossary_tooltip text="controller" term_id="controller" >}}. + +See [Pods and controllers](#pods-and-controllers) for more information on how +Kubernetes uses workload resources, and their controllers, to implement application +scaling and auto-healing. +--> + +每个 Pod 都旨在运行给定应用程序的单个实例。如果希望横向扩展应用程序(例如,运行多个实例 +以提供更多的资源),则应该使用多个 Pod,每个实例使用一个 Pod。 +在 Kubernetes 中,这通常被称为 _副本(Replication)_。 +通常使用一种工作负载资源及其{{< glossary_tooltip text="控制器" term_id="controller" >}} +来创建和管理一组 Pod 副本。 + +参见 [Pod 和控制器](#pods-and-controllers)以了解 Kubernetes +如何使用工作负载资源及其控制器以实现应用的扩缩和自动修复。 + +<!-- +### How Pods manage multiple containers + +Pods are designed to support multiple cooperating processes (as containers) that form +a cohesive unit of service. The containers in a Pod are automatically co-located and +co-scheduled on the same physical or virtual machine in the cluster. The containers +can share resources and dependencies, communicate with one another, and coordinate +when and how they are terminated. 
+--> +### Pod 怎样管理多个容器 + +Pod 被设计成支持形成内聚服务单元的多个协作过程(形式为容器)。 +Pod 中的容器被自动安排到集群中的同一物理机或虚拟机上,并可以一起进行调度。 +容器之间可以共享资源和依赖、彼此通信、协调何时以及何种方式终止自身。 + +<!-- +For example, you might have a container that +acts as a web server for files in a shared volume, and a separate "sidecar" container +that updates those files from a remote source, as in the following diagram: +--> + +例如,你可能有一个容器,为共享卷中的文件提供 Web 服务器支持,以及一个单独的 +“sidecar(挂斗)”容器负责从远端更新这些文件,如下图所示: + +{{< figure src="/images/docs/pod.svg" alt="example pod diagram" width="50%" >}} + +<!-- +Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}} +as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}. +Init containers run and complete before the app containers are started. + +Pods natively provide two kinds of shared resources for their constituent containers: +[networking](#pod-networking) and [storage](#pod-storage). +--> +有些 Pod 具有 {{< glossary_tooltip text="Init 容器" term_id="init-container" >}} 和 +{{< glossary_tooltip text="应用容器" term_id="app-container" >}}。 +Init 容器会在启动应用容器之前运行并完成。 + +Pod 天生地为其成员容器提供了两种共享资源:[网络](#pod-networking)和 +[存储](#pod-storage)。 + +<!-- +## Working with Pods + +You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This +is because Pods are designed as relatively ephemeral, disposable entities. When +a Pod gets created (directly by you, or indirectly by a +{{< glossary_tooltip text="controller" term_id="controller" >}}), the new Pod is +scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster. +The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, +the Pod is *evicted* for lack of resources, or the node fails. +--> +## 使用 Pod {#working-with-pods} + +你很少在 Kubernetes 中直接创建一个个的 Pod,甚至是单实例(Singleton)的 Pod。 +这是因为 Pod 被设计成了相对临时性的、用后即抛的一次性实体。 +当 Pod 由你或者间接地由 {{< glossary_tooltip text="控制器" term_id="controller" >}} +创建时,它被调度在集群中的{{< glossary_tooltip text="节点" term_id="node" >}}上运行。 +Pod 会保持在该节点上运行,直到 Pod 结束执行、Pod 对象被删除、Pod 因资源不足而被 +*驱逐* 或者节点失效为止。 + +<!-- +Restarting a container in a Pod should not be confused with restarting a Pod. A Pod +is not a process, but an environment for running container(s). A Pod persists until +it is deleted. +--> +{{< note >}} +重启 Pod 中的容器不应与重启 Pod 混淆。 +Pod 不是进程,而是容器运行的环境。 +在被删除之前,Pod 会一直存在。 +{{< /note >}} + +<!-- +When you create the manifest for a Pod object, make sure the name specified is a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +--> +当你为 Pod 对象创建清单时,要确保所指定的 Pod 名称是合法的 +[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + +<!-- +### Pods and controllers + +You can use workload resources to create and manage multiple Pods for you. A controller +for the resource handles replication and rollout and automatic healing in case of +Pod failure. For example, if a Node fails, a controller notices that Pods on that +Node have stopped working and creates a replacement Pod. The scheduler places the +replacement Pod onto a healthy Node. 
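+-->
+
+下面是一个仅包含单个容器的 Pod 的示意清单;
+其中的名称 `nginx-demo` 和镜像仅为演示用的假设取值:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-demo           # 示例名称,可任意选取
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.14.2
+    ports:
+    - containerPort: 80      # 容器对外提供服务的端口
+```
+
+<!--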
+ +Here are some examples of workload resources that manage one or more Pods: +--> +### Pod 和控制器 {#pods-and-controllers} + +你可以使用工作负载资源来创建和管理多个 Pod。 +资源的控制器能够处理副本的管理、上线,并在 Pod 失效时提供自愈能力。 +例如,如果一个节点失败,控制器注意到该节点上的 Pod 已经停止工作, +就可以创建替换性的 Pod。调度器会将替身 Pod 调度到一个健康的节点执行。 + +下面是一些管理一个或者多个 Pod 的工作负载资源的示例: + +* {{< glossary_tooltip text="Deployment" term_id="deployment" >}} +* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} +* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}} + +<!-- +### Pod templates + +Controllers for {{< glossary_tooltip text="workload" term_id="workload" >}} resources create Pods +from a _pod template_ and manage those Pods on your behalf. + +PodTemplates are specifications for creating Pods, and are included in workload resources such as +[Deployments](/docs/concepts/workloads/controllers/deployment/), +[Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and +[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). +--> +### Pod 模版 {#pod-templates} + +{{< glossary_tooltip text="负载" term_id="workload" >}}资源的控制器通常使用 _Pod 模板(Pod Template)_ +来替你创建 Pod 并管理它们。 + +Pod 模板是包含在工作负载对象中的规范,用来创建 Pod。这类负载资源包括 +[Deployment](/zh/docs/concepts/workloads/controllers/deployment/)、 +[Job](/zh/docs/concepts/workloads/containers/job/) 和 +[DaemonSets](/zh/docs/concepts/workloads/controllers/daemonset/)等。 + +<!-- +Each controller for a workload resource uses the `PodTemplate` inside the workload +object to make actual Pods. The `PodTemplate` is part of the desired state of whatever +workload resource you used to run your app. + +The sample below is a manifest for a simple Job with a `template` that starts one +container. The container in that Pod prints a message then pauses. +--> +工作负载的控制器会使用负载对象中的 `PodTemplate` 来生成实际的 Pod。 +`PodTemplate` 是你用来运行应用时指定的负载资源的目标状态的一部分。 + +下面的示例是一个简单的 Job 的清单,其中的 `template` 指示启动一个容器。 +该 Pod 中的容器会打印一条消息之后暂停。 + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: hello +spec: + template: + # 这里是 Pod 模版 + spec: + containers: + - name: hello + image: busybox + command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600'] + restartPolicy: OnFailure + # 以上为 Pod 模版 +``` + +<!-- +Modifying the pod template or switching to a new pod template has no effect on the +Pods that already exist. Pods do not receive template updates directly. Instead, +a new Pod is created to match the revised pod template. + +For example, the deployment controller ensures that the running Pods match the current +pod template for each Deployment object. If the template is updated, the Deployment has +to remove the existing Pods and create new Pods based on the updated template. Each workload +resource implements its own rules for handling changes to the Pod template. +--> +修改 Pod 模版或者切换到新的 Pod 模版都不会对已经存在的 Pod 起作用。 +Pod 不会直接收到模版的更新。相反, +新的 Pod 会被创建出来,与更改后的 Pod 模版匹配。 + +例如,Deployment 控制器针对每个 Deployment 对象确保运行中的 Pod 与当前的 Pod +模版匹配。如果模版被更新,则 Deployment 必须删除现有的 Pod,基于更新后的模版 +创建新的 Pod。每个工作负载资源都实现了自己的规则,用来处理对 Pod 模版的更新。 + +<!-- +On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not +directly observe or manage any of the details around pod templates and updates; those +details are abstracted away. That abstraction and separation of concerns simplifies +system semantics, and makes it feasible to extend the cluster's behavior without +changing existing code. 
+--> +在节点上,{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}并不直接监测 +或管理与 Pod 模版相关的细节或模版的更新,这些细节都被抽象出来。 +这种抽象和关注点分离简化了整个系统的语义,并且使得用户可以在不改变现有代码的 +前提下就能扩展集群的行为。 + +<!-- +## Resource sharing and communication + +Pods enable data sharing and communication among their constituent +containters. +--> +### 资源共享和通信 {#resource-sharing-and-communication} + +Pod 使它的成员容器间能够进行数据共享和通信。 + +<!-- +### Storage in Pods {#pod-storage} + +A Pod can specify a set of shared storage +{{< glossary_tooltip text="volumes" term_id="volume" >}}. All containers +in the Pod can access the shared volumes, allowing those containers to +share data. Volumes also allow persistent data in a Pod to survive +in case one of the containers within needs to be restarted. See +[Storage](/docs/concepts/storage/) for more information on how +Kubernetes implements shared storage and makes it available to Pods. +--> +### Pod 中的存储 {#pod-storage} + +一个 Pod 可以设置一组共享的存储{{< glossary_tooltip text="卷" term_id="volume" >}}。 +Pod 中的所有容器都可以访问该共享卷,从而允许这些容器共享数据。 +卷还允许 Pod 中的持久数据保留下来,即使其中的容器需要重新启动。 +有关 Kubernetes 如何在 Pod 中实现共享存储并将其提供给 Pod 的更多信息, +请参考[卷](/zh/docs/concepts/storage/)。 + +<!-- +### Pod networking + +Each Pod is assigned a unique IP address for each address family. Every +container in a Pod shares the network namespace, including the IP address and +network ports. Inside a Pod (and **only** then), the containers that belong to the Pod +can communicate with one another using `localhost`. When containers in a Pod communicate +with entities *outside the Pod*, +they must coordinate how they use the shared network resources (such as ports). +--> +### Pod 联网 {#pod-networking} + +每个 Pod 都在每个地址族中获得一个唯一的 IP 地址。 +Pod 中的每个容器共享网络名字空间,包括 IP 地址和网络端口。 +*Pod 内* 的容器可以使用 `localhost` 互相通信。 +当 Pod 中的容器与 *Pod 之外* 的实体通信时,它们必须协调如何使用共享的网络资源 +(例如端口)。 + +<!-- +Within a Pod, containers share an IP address and port space, and +can find each other via `localhost`. The containers in a Pod can also communicate +with each other using standard inter-process communications like SystemV semaphores +or POSIX shared memory. Containers in different Pods have distinct IP addresses +and can not communicate by IPC without +[special configuration](/docs/concepts/policy/pod-security-policy/). +Containers that want to interact with a container running in a different Pod can +use IP networking to comunicate. +--> +在同一个 Pod 内,所有容器共享一个 IP 地址和端口空间,并且可以通过 `localhost` 发现对方。 +他们也能通过如 SystemV 信号量或 POSIX 共享内存这类标准的进程间通信方式互相通信。 +不同 Pod 中的容器的 IP 地址互不相同,没有 +[特殊配置](/zh/docs/concepts/policy/pod-security-policy/) 就不能使用 IPC 进行通信。 +如果某容器希望与运行于其他 Pod 中的容器通信,可以通过 IP 联网的方式实现。 + +<!-- +Containers within the Pod see the system hostname as being the same as the configured +`name` for the Pod. There's more about this in the [networking](/docs/concepts/cluster-administration/networking/) +section. +--> +Pod 中的容器所看到的系统主机名与为 Pod 配置的 `name` 属性值相同。 +[网络](/zh/docs/concepts/cluster-administration/networking/)部分提供了更多有关此内容的信息。 + +<!-- +## Privileged mode for containers + +Any container in a Pod can enable privileged mode, using the `privileged` flag on +the [security context](/docs/tasks/configure-pod-container/security-context/) of the container spec. This is useful for containers that want to use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices. +Processes within a privileged container get almost the same privileges that are available to processes outside a container. 
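+-->
+
+下面的示意清单展示了同一 Pod 中的两个容器如何通过 `localhost` 通信;
+其中的名称与镜像均为演示用的假设取值:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: shared-network-demo  # 示例名称,可任意选取
+spec:
+  containers:
+  - name: web
+    image: nginx:1.14.2      # 在 80 端口提供 HTTP 服务
+    ports:
+    - containerPort: 80
+  - name: poller
+    image: busybox
+    # 两个容器共享同一网络名字空间,因此可以直接通过 localhost 访问 web 容器
+    command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80 > /dev/null; sleep 5; done"]
+```
+
+<!--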
+-->
+## 容器的特权模式 {#privileged-mode-for-containers}
+
+Pod 中的任何容器都可以使用容器规约中的
+[安全性上下文](/zh/docs/tasks/configure-pod-container/security-context/)中的
+`privileged` 参数启用特权模式。
+这对于想要使用操作系统管理权能(Capabilities,如操纵网络堆栈和访问设备)
+的容器很有用。
+容器内的进程几乎可以获得与容器外的进程相同的特权。
+
+<!--
+Your {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} must support the concept of a privileged container for this setting to be relevant.
+-->
+{{< note >}}
+你的{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}必须支持
+特权容器的概念才能使用这一配置。
+{{< /note >}}
+
+<!--
+## Static Pods
+
+_Static Pods_ are managed directly by the kubelet daemon on a specific node,
+without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
+observing them.
+Whereas most Pods are managed by the control plane (for example, a
+{{< glossary_tooltip text="Deployment" term_id="deployment" >}}), for static
+Pods, the kubelet directly supervises each static Pod (and restarts it if it fails).
+-->
+## 静态 Pod {#static-pods}
+
+_静态 Pod(Static Pod)_ 直接由特定节点上的 `kubelet` 守护进程管理,
+不需要{{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}看到它们。
+尽管大多数 Pod 都是通过控制面(例如,{{< glossary_tooltip text="Deployment" term_id="deployment" >}})
+来管理的,对于静态 Pod 而言,`kubelet` 直接监控每个 Pod,并在其失效时重启之。
+
+<!--
+Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}} on a specific node.
+The main use for static Pods is to run a self-hosted control plane: in other words,
+using the kubelet to supervise the individual [control plane components](/docs/concepts/overview/components/#control-plane-components).
+
+The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}}
+on the Kubernetes API server for each static Pod.
+This means that the Pods running on a node are visible on the API server,
+but cannot be controlled from there.
+-->
+静态 Pod 总是绑定到特定节点上的某个 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}。
+其主要用途是运行自托管的控制面。
+在自托管场景中,使用 `kubelet` 来管理各个独立的
+[控制面组件](/zh/docs/concepts/overview/components/#control-plane-components)。
+
+`kubelet` 自动尝试为每个静态 Pod 在 Kubernetes API 服务器上创建一个
+{{< glossary_tooltip text="镜像 Pod" term_id="mirror-pod" >}}。
+这意味着在节点上运行的 Pod 在 API 服务器上是可见的,但不可以通过 API
+服务器来控制。
+
+## {{% heading "whatsnext" %}}
+
+<!--
+* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
+* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
+* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
+  configure different Pods with different container runtime configurations.
+* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
+* Read about [PodDisruptionBudget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
+* Pod is a top-level resource in the Kubernetes REST API.
+  The [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
+  object definition describes the object in detail.
+* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container. 
+-- +* 了解 [Pod 生命周期](/zh/docs/concepts/workloads/pods/pod-lifecycle/) +* 了解 [PodPresets](/zh/docs/concepts/workloads/pods/podpreset/) +* 了解 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/),以及如何使用它 + 来配置不同的 Pod 使用不同的容器运行时配置 +* 了解 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* 了解 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/),以及你 + 如何可以利用它在出现干扰因素时管理应用的可用性 +* Pod 在 Kubernetes REST API 中是一个顶层资源; + [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) + 对象的定义中包含了更多的细节信息。 +* 博客 [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) 中解释了在同一 Pod 中包含多个容器时的几种常见布局。 + +<!-- +To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}, you can read about the prior art, including: +--> +要了解为什么 Kubernetes 会在其他资源 +(如 {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} +或 {{< glossary_tooltip text="Deployment" term_id="deployment" >}}) +封装通用的 Pod API,相关的背景信息可以在前人的研究中找到。具体包括: + + * [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema) + * [Borg](https://research.google.com/pubs/pub43438.html) + * [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html) + * [Omega](https://research.google/pubs/pub41684/) + * [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/). + diff --git a/content/zh/docs/concepts/workloads/pods/pod-overview.md b/content/zh/docs/concepts/workloads/pods/pod-overview.md deleted file mode 100644 index e3959d62cb..0000000000 --- a/content/zh/docs/concepts/workloads/pods/pod-overview.md +++ /dev/null @@ -1,265 +0,0 @@ ---- -title: Pod 概览 -content_type: concept -weight: 10 -card: - name: concepts - weight: 60 ---- - -<!-- ---- -reviewers: -- erictune -title: Pod Overview -content_type: concept -weight: 10 -card: - name: concepts - weight: 60 ---- ---> - -<!-- -This page provides an overview of `Pod`, the smallest deployable object in the Kubernetes object model. ---> -<!-- overview --> -本节提供了 `Pod` 的概览信息,`Pod` 是最小可部署的 Kubernetes 对象模型。 - - - -<!-- body --> - -<!-- -## Understanding Pods ---> -## 理解 Pod - -<!-- -A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" >}}. ---> -*Pod* 是 Kubernetes 应用程序的基本执行单元,即它是 Kubernetes 对象模型中创建或部署的最小和最简单的单元。Pod 表示在 {{< glossary_tooltip term_id="cluster" >}} 上运行的进程。 - -<!-- -A Pod encapsulates an application's container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single instance of an application in Kubernetes*, which might consist of either a single {{< glossary_tooltip text="container" term_id="container" >}} or a small number of containers that are tightly coupled and that share resources. 
---> -Pod 封装了应用程序容器(或者在某些情况下封装多个容器)、存储资源、唯一网络 IP 以及控制容器应该如何运行的选项。 -Pod 表示部署单元:*Kubernetes 中应用程序的单个实例*,它可能由单个 {{< glossary_tooltip text="容器" term_id="container" >}} 或少量紧密耦合并共享资源的容器组成。 - -<!-- -[Docker](https://www.docker.com) is the most common container runtime used in a Kubernetes Pod, but Pods support other [container runtimes](/docs/setup/production-environment/container-runtimes/) as well. ---> -[Docker](https://www.docker.com) 是 Kubernetes Pod 中最常用的容器运行时,但 Pod 也能支持其他的[容器运行时](/docs/setup/production-environment/container-runtimes/)。 - - -<!-- -Pods in a Kubernetes cluster can be used in two main ways: ---> -Kubernetes 集群中的 Pod 可被用于以下两个主要用途: - -<!-- -* **Pods that run a single container**. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly. -* **Pods that run multiple containers that need to work together**. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service--one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity. -The [Kubernetes Blog](https://kubernetes.io/blog) has some additional information on Pod use cases. For more information, see: ---> - -* **运行单个容器的 Pod**。"每个 Pod 一个容器"模型是最常见的 Kubernetes 用例;在这种情况下,可以将 Pod 看作单个容器的包装器,并且 Kubernetes 直接管理 Pod,而不是容器。 -* **运行多个协同工作的容器的 Pod**。 -Pod 可能封装由多个紧密耦合且需要共享资源的共处容器组成的应用程序。 -这些位于同一位置的容器可能形成单个内聚的服务单元 —— 一个容器将文件从共享卷提供给公众,而另一个单独的“挂斗”(sidecar)容器则刷新或更新这些文件。 -Pod 将这些容器和存储资源打包为一个可管理的实体。 -[Kubernetes 博客](https://kubernetes.io/blog) 上有一些其他的 Pod 用例信息。更多信息请参考: - -<!-- - * [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) - * [Container Design Patterns](https://kubernetes.io/blog/2016/06/container-design-patterns) ---> - * [分布式系统工具包:容器组合的模式](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) - * [容器设计模式](https://kubernetes.io/blog/2016/06/container-design-patterns) - -<!-- -Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information. ---> - -每个 Pod 表示运行给定应用程序的单个实例。如果希望横向扩展应用程序(例如,运行多个实例),则应该使用多个 Pod,每个应用实例使用一个 Pod 。在 Kubernetes 中,这通常被称为 _副本_。通常使用一个称为控制器的抽象来创建和管理一组副本 Pod。更多信息请参见 [Pod 和控制器](#pods-and-controllers)。 - -<!-- -### How Pods manage multiple Containers ---> -### Pod 怎样管理多个容器 - -<!-- -Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated. 
---> -Pod 被设计成支持形成内聚服务单元的多个协作过程(作为容器)。 -Pod 中的容器被自动的安排到集群中的同一物理或虚拟机上,并可以一起进行调度。 -容器可以共享资源和依赖、彼此通信、协调何时以及何种方式终止它们。 - -<!-- -Note that grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. For example, you might have a container that acts as a web server for files in a shared volume, and a separate "sidecar" container that updates those files from a remote source, as in the following diagram: ---> - -注意,在单个 Pod 中将多个并置和共同管理的容器分组是一个相对高级的使用方式。 -只在容器紧密耦合的特定实例中使用此模式。 -例如,您可能有一个充当共享卷中文件的 Web 服务器的容器,以及一个单独的 sidecar 容器,该容器从远端更新这些文件,如下图所示: - - -{{< figure src="/images/docs/pod.svg" alt="Pod 图例" width="50%" >}} - - -<!-- -Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}} as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}. Init containers run and complete before the app containers are started. ---> -有些 Pod 具有 {{< glossary_tooltip text="初始容器" term_id="init-container" >}} 和 {{< glossary_tooltip text="应用容器" term_id="app-container" >}}。初始容器会在启动应用容器之前运行并完成。 - -<!-- -Pods provide two kinds of shared resources for their constituent containers: *networking* and *storage*. ---> - -Pod 为其组成容器提供了两种共享资源:*网络* 和 *存储*。 - -<!-- -#### Networking ---> -#### 网络 - -<!-- -Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers *inside a Pod* can communicate with one another using `localhost`. When containers in a Pod communicate with entities *outside the Pod*, they must coordinate how they use the shared network resources (such as ports). ---> -每个 Pod 分配一个唯一的 IP 地址。 -Pod 中的每个容器共享网络命名空间,包括 IP 地址和网络端口。 -*Pod 内的容器* 可以使用 `localhost` 互相通信。 -当 Pod 中的容器与 *Pod 之外* 的实体通信时,它们必须协调如何使用共享的网络资源(例如端口)。 - -<!-- -#### Storage ---> -#### 存储 - -<!-- -A Pod can specify a set of shared storage {{< glossary_tooltip text="Volumes" term_id="volume" >}}. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See [Volumes](/docs/concepts/storage/volumes/) for more information on how Kubernetes implements shared storage in a Pod. ---> -一个 Pod 可以指定一组共享存储{{< glossary_tooltip text="卷" term_id="volume" >}}。 -Pod 中的所有容器都可以访问共享卷,允许这些容器共享数据。 -卷还允许 Pod 中的持久数据保留下来,以防其中的容器需要重新启动。 -有关 Kubernetes 如何在 Pod 中实现共享存储的更多信息,请参考[卷](/docs/concepts/storage/volumes/)。 - -<!-- -## Working with Pods ---> -## 使用 Pod - -<!-- -You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster. The Pod remains on that Node until the process is terminated, the pod object is deleted, the Pod is *evicted* for lack of resources, or the Node fails. ---> -你很少在 Kubernetes 中直接创建单独的 Pod,甚至是单个存在的 Pod。 -这是因为 Pod 被设计成了相对短暂的一次性的实体。 -当 Pod 由您创建或者间接地由控制器创建时,它被调度在集群中的 {{< glossary_tooltip term_id="node" >}} 上运行。 -Pod 会保持在该节点上运行,直到进程被终止、Pod 对象被删除、Pod 因资源不足而被 *驱逐* 或者节点失效为止。 - -<!-- -Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run, but is an environment the containers run in and persists until it is deleted. 
---> -{{< note >}} -重启 Pod 中的容器不应与重启 Pod 混淆。Pod 本身不运行,而是作为容器运行的环境,并且一直保持到被删除为止。 -{{< /note >}} - -<!-- -Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a *Controller*, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use Pod directly, it's far more common in Kubernetes to manage your pods using a Controller. See [Pods and Controllers](#pods-and-controllers) for more information on how Kubernetes uses Controllers to implement Pod scaling and healing. ---> - -Pod 本身并不能自愈。 -如果 Pod 被调度到失败的节点,或者如果调度操作本身失败,则删除该 Pod;同样,由于缺乏资源或进行节点维护,Pod 在被驱逐后将不再生存。 -Kubernetes 使用了一个更高级的称为 *控制器* 的抽象,由它处理相对可丢弃的 Pod 实例的管理工作。 -因此,虽然可以直接使用 Pod,但在 Kubernetes 中,更为常见的是使用控制器管理 Pod。 -有关 Kubernetes 如何使用控制器实现 Pod 伸缩和愈合的更多信息,请参考 [Pod 和控制器](#pods-and-controllers)。 - -<!-- -### Pods and Controllers ---> -### Pod 和控制器 {#pods-and-controllers} - -<!-- -A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node. ---> -控制器可以为您创建和管理多个 Pod,管理副本和上线,并在集群范围内提供自修复能力。 -例如,如果一个节点失败,控制器可以在不同的节点上调度一样的替身来自动替换 Pod。 - -<!-- -Some examples of Controllers that contain one or more pods include: ---> -包含一个或多个 Pod 的控制器一些示例包括: - -<!-- -* [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) -* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) ---> -* [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) -* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) - -<!-- -In general, Controllers use a Pod Template that you provide to create the Pods for which it is responsible. ---> -控制器通常使用您提供的 Pod 模板来创建它所负责的 Pod。 - -<!-- -## Pod Templates ---> -## Pod 模板 - -<!-- -Pod templates are pod specifications which are included in other objects, such as -[Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and -[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). Controllers use Pod Templates to make actual pods. -The sample below is a simple manifest for a Pod which contains a container that prints -a message. ---> -Pod 模板是包含在其他对象中的 Pod 规范,例如 -[Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/)、 [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/) 和 -[DaemonSets](/docs/concepts/workloads/controllers/daemonset/)。 -控制器使用 Pod 模板来制作实际使用的 Pod。 -下面的示例是一个简单的 Pod 清单,它包含一个打印消息的容器。 - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: myapp-pod - labels: - app: myapp -spec: - containers: - - name: myapp-container - image: busybox - command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] -``` - -<!-- -Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no "quantum entanglement". 
Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive. ---> - -Pod 模板就像饼干切割器,而不是指定所有副本的当前期望状态。 -一旦饼干被切掉,饼干就与切割器没有关系。 -没有“量子纠缠”。 -随后对模板的更改或甚至切换到新的模板对已经创建的 Pod 没有直接影响。 -类似地,由副本控制器创建的 Pod 随后可以被直接更新。 -这与 Pod 形成有意的对比,Pod 指定了属于 Pod 的所有容器的当前期望状态。 -这种方法从根本上简化了系统语义,增加了原语的灵活性。 - - - -<!-- -* Learn more about [Pods](/docs/concepts/workloads/pods/pod/) -* Learn more about Pod behavior: - * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods) - * [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) ---> -## {{% heading "whatsnext" %}} - -* 详细了解 [Pod](/docs/concepts/workloads/pods/pod/) -* 了解有关 Pod 行为的更多信息: - * [Pod 的终止](/docs/concepts/workloads/pods/pod/#termination-of-pods) - * [Pod 的生命周期](/docs/concepts/workloads/pods/pod-lifecycle/) - diff --git a/content/zh/docs/concepts/workloads/pods/pod.md b/content/zh/docs/concepts/workloads/pods/pod.md deleted file mode 100644 index b0a0486280..0000000000 --- a/content/zh/docs/concepts/workloads/pods/pod.md +++ /dev/null @@ -1,387 +0,0 @@ ---- -title: Pods -content_type: concept -weight: 20 ---- - -<!-- -reviewers: -title: Pods -content_type: concept -weight: 20 ---> - -<!-- overview --> - -<!-- -_Pods_ are the smallest deployable units of computing that can be created and -managed in Kubernetes. ---> - -_Pod_ 是可以在 Kubernetes 中创建和管理的、最小的可部署的计算单元。 - - - - -<!-- body --> - -<!-- -## What is a Pod? ---> -## Pod 是什么? - -<!-- -A _Pod_ (as in a pod of whales or pea pod) is a group of one or more -{{< glossary_tooltip text="containers" term_id="container" >}} (such as -Docker containers), with shared storage/network, and a specification -for how to run the containers. A Pod's contents are always co-located and -co-scheduled, and run in a shared context. A Pod models an -application-specific "logical host" - it contains one or more application -containers which are relatively tightly coupled — in a pre-container -world, being executed on the same physical or virtual machine would mean being -executed on the same logical host. ---> -_Pod_ (就像在鲸鱼荚或者豌豆荚中)是一组(一个或多个){{< glossary_tooltip text="容器" term_id="container" >}}(例如 Docker 容器),这些容器共享存储、网络、以及怎样运行这些容器的声明。Pod 中的内容总是并置(colocated)的并且一同调度,在共享的上下文中运行。 -Pod 所建模的是特定于应用的“逻辑主机”,其中包含一个或多个应用容器,这些容器是相对紧密的耦合在一起 — 在容器出现之前,在相同的物理机或虚拟机上运行意味着在相同的逻辑主机上运行。 - -<!-- -While Kubernetes supports more container runtimes than just Docker, Docker is -the most commonly known runtime, and it helps to describe Pods in Docker terms. ---> -虽然 Kubernetes 支持多种容器运行时,但 Docker 是最常见的一种运行时,它有助于使用 Docker 术语来描述 Pod。 - -<!-- -The shared context of a Pod is a set of Linux namespaces, cgroups, and -potentially other facets of isolation - the same things that isolate a Docker -container. Within a Pod's context, the individual applications may have -further sub-isolations applied. ---> -Pod 的共享上下文是一组 Linux 命名空间、cgroups、以及其他潜在的资源隔离相关的因素,这些相同的东西也隔离了 Docker 容器。在 Pod 的上下文中,单个应用程序可能还会应用进一步的子隔离。 - -<!-- -Containers within a Pod share an IP address and port space, and -can find each other via `localhost`. They can also communicate with each -other using standard inter-process communications like SystemV semaphores or -POSIX shared memory. 
Containers in different Pods have distinct IP addresses -and can not communicate by IPC without -[special configuration](/docs/concepts/policy/pod-security-policy/). -These containers usually communicate with each other via Pod IP addresses. - -Applications within a Pod also have access to shared {{< glossary_tooltip text="volumes" term_id="volume" >}}, which are defined -as part of a Pod and are made available to be mounted into each application's -filesystem. ---> -Pod 中的所有容器共享一个 IP 地址和端口空间,并且可以通过 `localhost` 互相发现。他们也能通过标准的进程间通信(如 SystemV 信号量或 POSIX 共享内存)方式进行互相通信。不同 Pod 中的容器的 IP 地址互不相同,没有 [特殊配置](/docs/concepts/policy/pod-security-policy/) 就不能使用 IPC 进行通信。这些容器之间经常通过 Pod IP 地址进行通信。 - -Pod 中的应用也能访问共享 {{< glossary_tooltip text="卷" term_id="volume" >}},共享卷是 Pod 定义的一部分,可被用来挂载到每个应用的文件系统上。 - -<!-- -In terms of [Docker](https://www.docker.com/) constructs, a Pod is modelled as -a group of Docker containers with shared namespaces and shared filesystem -volumes. ---> -在 [Docker](https://www.docker.com/) 体系的术语中,Pod 被建模为一组具有共享命名空间和共享文件系统[卷](/docs/concepts/storage/volumes/) 的 Docker 容器。 - -<!-- -Like individual application containers, Pods are considered to be relatively -ephemeral (rather than durable) entities. As discussed in -[pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), Pods are created, assigned a unique ID (UID), and -scheduled to nodes where they remain until termination (according to restart -policy) or deletion. If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node are -scheduled for deletion, after a timeout period. A given Pod (as defined by a UID) is not -"rescheduled" to a new node; instead, it can be replaced by an identical Pod, -with even the same name if desired, but with a new UID (see [replication -controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details). ---> -与单个应用程序容器一样,Pod 被认为是相对短暂的(而不是持久的)实体。如 [Pod 的生命周期](/docs/concepts/workloads/pods/pod-lifecycle/) 所讨论的那样:Pod 被创建、给它指定一个唯一 ID (UID)、被调度到节点、在节点上存续直到终止(取决于重启策略)或被删除。如果 {{< glossary_tooltip term_id="node" >}} 宕机,调度到该节点上的 Pod 会在一个超时周期后被安排删除。给定 Pod (由 UID 定义)不会重新调度到新节点;相反,它会被一个完全相同的 Pod 替换掉,如果需要甚至连 Pod 名称都可以一样,除了 UID 是新的(更多信息请查阅 [副本控制器(replication -controller)](/docs/concepts/workloads/controllers/replicationcontroller/)。 - -<!-- -When something is said to have the same lifetime as a Pod, such as a volume, -that means that it exists as long as that Pod (with that UID) exists. If that -Pod is deleted for any reason, even if an identical replacement is created, the -related thing (e.g. volume) is also destroyed and created anew. ---> -当某些东西被说成与 Pod(如卷)具有相同的生命周期时,这表明只要 Pod(具有该 UID)存在,它就存在。如果出于任何原因删除了该 Pod,即使创建了相同的 Pod,相关的内容(例如卷)也会被销毁并重新创建。 - -{{< figure src="/images/docs/pod.svg" title="Pod diagram" width="50%" >}} - -<!-- -*A multi-container Pod that contains a file puller and a -web server that uses a persistent volume for shared storage between the containers.* ---> - -*一个多容器 Pod,其中包含一个文件拉取器和一个 Web 服务器,该 Web 服务器使用持久卷在容器之间共享存储* - -<!-- -## Motivation for pods ---> -## 设计 Pod 的目的 - -<!-- -### Management ---> -### 管理 - -<!-- -Pods are a model of the pattern of multiple cooperating processes which form a -cohesive unit of service. They simplify application deployment and management -by providing a higher-level abstraction than the set of their constituent -applications. Pods serve as unit of deployment, horizontal scaling, and -replication. Colocation (co-scheduling), shared fate (e.g. 
termination), -coordinated replication, resource sharing, and dependency management are -handled automatically for containers in a Pod. ---> -Pod 是形成内聚服务单元的多个协作过程模式的模型。它们提供了一个比它们的应用组成集合更高级的抽象,从而简化了应用的部署和管理。Pod 可以用作部署、水平扩展和制作副本的最小单元。在 Pod 中,系统自动处理多个容器的在并置运行(协同调度)、生命期共享(例如,终止),协同复制、资源共享和依赖项管理。 - -<!-- -### Resource sharing and communication ---> -### 资源共享和通信 - -<!-- -Pods enable data sharing and communication among their constituents. ---> -Pod 使它的组成容器间能够进行数据共享和通信。 - -<!-- -The applications in a Pod all use the same network namespace (same IP and port -space), and can thus "find" each other and communicate using `localhost`. -Because of this, applications in a Pod must coordinate their usage of ports. -Each Pod has an IP address in a flat shared networking space that has full -communication with other physical computers and Pods across the network. ---> -Pod 中的应用都使用相同的网络命名空间(相同 IP 和 端口空间),而且能够互相“发现”并使用 `localhost` 进行通信。因此,在 Pod 中的应用必须协调它们的端口使用情况。每个 Pod 在扁平的共享网络空间中具有一个 IP 地址,该空间通过网络与其他物理计算机和 Pod 进行全面通信。 - -<!-- -Containers within the Pod see the system hostname as being the same as the configured -`name` for the Pod. There's more about this in the [networking](/docs/concepts/cluster-administration/networking/) -section. ---> -Pod 中的容器获取的系统主机名与为 Pod 配置的 `name` 相同。[网络](/docs/concepts/cluster-administration/networking/) 部分提供了更多有关此内容的信息。 - -<!-- -In addition to defining the application containers that run in the Pod, the Pod -specifies a set of shared storage volumes. Volumes enable data to survive -container restarts and to be shared among the applications within the Pod. ---> -Pod 除了定义了 Pod 中运行的应用程序容器之外,Pod 还指定了一组共享存储卷。该共享存储卷能使数据在容器重新启动后继续保留,并能在 Pod 内的应用程序之间共享。 - -<!-- -## Uses of pods ---> -## 使用 Pod - -<!-- -Pods can be used to host vertically integrated application stacks (e.g. LAMP), -but their primary motivation is to support co-located, co-managed helper -programs, such as: - -* content management systems, file and data loaders, local cache managers, etc. -* log and checkpoint backup, compression, rotation, snapshotting, etc. -* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc. -* proxies, bridges, and adapters -* controllers, managers, configurators, and updaters ---> -Pod 可以用于托管垂直集成的应用程序栈(例如,LAMP),但最主要的目的是支持位于同一位置的、共同管理的工具程序,例如: - -* 内容管理系统、文件和数据加载器、本地缓存管理器等。 -* 日志和检查点备份、压缩、旋转、快照等。 -* 数据更改监视器、日志跟踪器、日志和监视适配器、事件发布器等。 -* 代理、桥接器和适配器 -* 控制器、管理器、配置器和更新器 - - -<!-- -Individual Pods are not intended to run multiple instances of the same -application, in general. ---> -通常,不会用单个 Pod 来运行同一应用程序的多个实例。 - -<!-- -For a longer explanation, see [The Distributed System ToolKit: Patterns for -Composite -Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). ---> -有关详细说明,请参考 [分布式系统工具包:组合容器的模式](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)。 - -<!-- -## Alternatives considered ---> -## 可考虑的备选方案 - -<!-- -_Why not just run multiple programs in a single (Docker) container?_ - -1. Transparency. Making the containers within the Pod visible to the - infrastructure enables the infrastructure to provide services to those - containers, such as process management and resource monitoring. This - facilitates a number of conveniences for users. -1. Decoupling software dependencies. The individual containers may be - versioned, rebuilt and redeployed independently. Kubernetes may even support - live updates of individual containers someday. -1. Ease of use. 
Users don't need to run their own process managers, worry about - signal and exit-code propagation, etc. -1. Efficiency. Because the infrastructure takes on more responsibility, - containers can be lighter weight. ---> -_为什么不在单个(Docker)容器中运行多个程序?_ - -1. 透明度。Pod 内的容器对基础设施可见,使得基础设施能够向这些容器提供服务,例如流程管理和资源监控。这为用户提供了许多便利。 -1. 解耦软件依赖关系。可以独立地对单个容器进行版本控制、重新构建和重新部署。Kubernetes 有一天甚至可能支持单个容器的实时更新。 -1. 易用性。用户不需要运行他们自己的进程管理器、也不用担心信号和退出代码传播等。 -1. 效率。因为基础结构承担了更多的责任,所以容器可以变得更加轻量化。 - -<!-- -_Why not support affinity-based co-scheduling of containers?_ ---> -_为什么不支持基于亲和性的容器协同调度?_ - -<!-- -That approach would provide co-location, but would not provide most of the -benefits of Pods, such as resource sharing, IPC, guaranteed fate sharing, and -simplified management. ---> -这种处理方法尽管可以提供同址,但不能提供 Pod 的大部分好处,如资源共享、IPC、有保证的命运共享和简化的管理。 - -<!-- -## Durability of pods (or lack thereof) ---> -## Pod 的持久性(或稀缺性) - -<!-- -Pods aren't intended to be treated as durable entities. They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance. ---> -不得将 Pod 视为持久实体。它们无法在调度失败、节点故障或其他驱逐策略(例如由于缺乏资源或在节点维护的情况下)中生存。 - -<!-- -In general, users shouldn't need to create Pods directly. They should almost -always use controllers even for singletons, for example, -[Deployments](/docs/concepts/workloads/controllers/deployment/). -Controllers provide self-healing with a cluster scope, as well as replication -and rollout management. -Controllers like [StatefulSet](/docs/concepts/workloads/controllers/statefulset.md) -can also provide support to stateful Pods. ---> -一般来说,用户不需要直接创建 Pod。他们几乎都是使用控制器进行创建,即使对于单例的 Pod 创建也一样使用控制器,例如 [Deployments](/docs/concepts/workloads/controllers/deployment/)。 -控制器提供集群范围的自修复以及副本数和滚动管理。 -像 [StatefulSet](/docs/concepts/workloads/controllers/statefulset.md) 这样的控制器还可以提供支持有状态的 Pod。 - -<!-- -The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema), and [Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997). ---> - -在集群调度系统中,使用 API 合集作为面向用户的主要原语是比较常见的,包括 [Borg](https://research.google.com/pubs/pub43438.html)、[Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)、[Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)、和 [Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997)。 - -<!-- -Pod is exposed as a primitive in order to facilitate: ---> -Pod 暴露为原语是为了便于: - -<!-- -* scheduler and controller pluggability -* support for pod-level operations without the need to "proxy" them via controller APIs -* decoupling of Pod lifetime from controller lifetime, such as for bootstrapping -* decoupling of controllers and services — the endpoint controller just watches Pods -* clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller" -* high-availability applications, which will expect Pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions or image prefetching. 
---> -* 调度器和控制器可插拔性 -* 支持 Pod 级别的操作,而不需要通过控制器 API "代理" 它们 -* Pod 生命与控制器生命的解耦,如自举 -* 控制器和服务的解耦 — 端点控制器只监视 Pod -* kubelet 级别的功能与集群级别功能的清晰组合 — kubelet 实际上是 "Pod 控制器" -* 高可用性应用程序期望在 Pod 终止之前并且肯定要在 Pod 被删除之前替换 Pod,例如在计划驱逐或镜像预先拉取的情况下。 - -<!-- -## Termination of Pods ---> -## Pod 的终止 - -<!-- -Because Pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period. ---> -因为 Pod 代表在集群中的节点上运行的进程,所以当不再需要这些进程时(与被 KILL 信号粗暴地杀死并且没有机会清理相比),允许这些进程优雅地终止是非常重要的。 -用户应该能够请求删除并且知道进程何时终止,但是也能够确保删除最终完成。当用户请求删除 Pod 时,系统会记录在允许强制删除 Pod 之前所期望的宽限期,并向每个容器中的主进程发送 TERM 信号。一旦过了宽限期,KILL 信号就发送到这些进程,然后就从 API 服务器上删除 Pod。如果 Kubelet 或容器管理器在等待进程终止时发生重启,则终止操作将以完整的宽限期进行重试。 - - -<!-- -An example flow: ---> -流程示例: - -<!-- -1. User sends command to delete Pod, with default grace period (30s) -1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period. -1. Pod shows up as "Terminating" when listed in client commands -1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the Pod shutdown process. - 1. If one of the Pod's containers has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the container. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period. - 1. The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a `preStop` hook if the order in which they shut down matters. -1. (simultaneous with 3) Pod is removed from endpoints list for service, and are no longer considered part of the set of running Pods for replication controllers. Pods that shutdown slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations. -1. When the grace period expires, any processes still running in the Pod are killed with SIGKILL. -1. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client. ---> -1. 用户发送命令删除 Pod,使用的是默认的宽限期(30秒) -1. API 服务器中的 Pod 会随着宽限期规定的时间进行更新,过了这个时间 Pod 就会被认为已 "死亡"。 -1. 当使用客户端命令查询 Pod 状态时,Pod 显示为 "Terminating"。 -1. (和第 3 步同步进行)当 Kubelet 看到 Pod 由于步骤 2 中设置的时间而被标记为 terminating 状态时,它就开始执行关闭 Pod 流程。 - 1. 如果 Pod 定义了 [preStop 钩子](/docs/concepts/containers/container-lifecycle-hooks/#hook-details),就在 Pod 内部调用它。如果宽限期结束了,但是 `preStop` 钩子还在运行,那么就用小的(2 秒)扩展宽限期调用步骤 2。 - 1. 给 Pod 内的进程发送 TERM 信号。请注意,并不是所有 Pod 中的容器都会同时收到 TERM 信号,如果它们关闭的顺序很重要,则每个容器可能都需要一个 `preStop` 钩子。 -1. 
(和第 3 步同步进行)从服务的端点列表中删除 Pod,Pod 也不再被视为副本控制器的运行状态的 Pod 集的一部分。因为负载均衡器(如服务代理)会将其从轮换中删除,所以缓慢关闭的 Pod 无法继续为流量提供服务。 -1. 当宽限期到期时,仍在 Pod 中运行的所有进程都会被 SIGKILL 信号杀死。 -1. kubelet 将通过设置宽限期为 0 (立即删除)来完成在 API 服务器上删除 Pod 的操作。该 Pod 从 API 服务器中消失,并且在客户端中不再可见。 - - -<!-- -By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=<seconds>` option which allows a user to override the default and specify their own value. The value `0` [force deletes](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) the Pod. -You must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions. ---> -默认情况下,所有删除操作宽限期是 30 秒。`kubectl delete` 命令支持 `--grace-period=<seconds>` 选项,允许用户覆盖默认值并声明他们自己的宽限期。设置为 `0` 会[强制删除](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) Pod。您必须指定一个附加标志 `--force` 和 `--grace-period=0` 才能执行强制删除操作。 - -<!-- -### Force deletion of pods ---> -### Pod 的强制删除 - -<!-- -Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately. When a force deletion is performed, the API server does not wait for confirmation from the kubelet that the Pod has been terminated on the node it was running on. It removes the Pod in the API immediately so a new Pod can be created with the same name. On the node, Pods that are set to terminate immediately will still be given a small grace period before being force killed. ---> -强制删除 Pod 被定义为从集群状态与 etcd 中立即删除 Pod。当执行强制删除时,API 服务器并不会等待 kubelet 的确认信息,该 Pod 已在所运行的节点上被终止了。强制执行删除操作会从 API 服务器中立即清除 Pod, 因此可以用相同的名称创建一个新的 Pod。在节点上,设置为立即终止的 Pod 还是会在被强制删除前设置一个小的宽限期。 - -<!-- -Force deletions can be potentially dangerous for some Pods and should be performed with caution. In case of StatefulSet Pods, please refer to the task documentation for [deleting Pods from a StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/). ---> -强制删除对某些 Pod 可能具有潜在危险,因此应该谨慎地执行。对于 StatefulSet 管理的 Pod,请参考 [从 StatefulSet 中删除 Pod](/docs/tasks/run-application/force-delete-stateful-set-pod/) 的任务文档。 - -<!-- -## Privileged mode for pod containers ---> -## Pod 容器的特权模式 - -<!-- -Any container in a Pod can enable privileged mode, using the `privileged` flag on the [security context](/docs/tasks/configure-pod-container/security-context/) of the container spec. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate Pods that don't need to be compiled into the kubelet. ---> -Pod 中的任何容器都可以使用容器规范 [security context](/docs/tasks/configure-pod-container/security-context/) 上的 `privileged` 参数启用特权模式。这对于想要使用 Linux 功能(如操纵网络堆栈和访问设备)的容器很有用。容器内的进程几乎可以获得与容器外的进程相同的特权。使用特权模式,将网络和卷插件编写为不需要编译到 kubelet 中的独立的 Pod 应该更容易。 - -<!-- -Your container runtime must support the concept of a privileged container for this setting to be relevant. ---> - -{{< note >}} -您的容器运行时必须支持特权容器模式才能使用此设置。 -{{< /note >}} - -<!-- -## API Object ---> -## API 对象 - -<!-- -Pod is a top-level resource in the Kubernetes REST API. -The [Pod API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) definition -describes the object in detail. 
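For example, the difference between a graceful delete with a longer grace period and a force delete looks like this (a minimal sketch; the pod name `my-app-pod` is illustrative):

```shell
# Graceful deletion: wait up to 60 seconds for the containers to shut down cleanly
kubectl delete pod my-app-pod --grace-period=60

# Force deletion: removes the Pod from the API immediately; use with caution
kubectl delete pod my-app-pod --grace-period=0 --force
```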
---> -Pod 是 Kubernetes REST API 中的顶级资源。 -[Pod API 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)定义详细描述了该 Pod 对象。 - - diff --git a/content/zh/docs/contribute/advanced.md b/content/zh/docs/contribute/advanced.md index d7a4624438..f3b3aab9af 100644 --- a/content/zh/docs/contribute/advanced.md +++ b/content/zh/docs/contribute/advanced.md @@ -30,11 +30,11 @@ client and other tools for some of these tasks. <!-- ## Propose improvements -SIG Docs [members](/docs/contribute/participating/#members) can propose improvements. +SIG Docs [members](/docs/contribute/participate/roles-and-responsibilities/#members) can propose improvements. --> ## 提出改进建议 -SIG Docs 的 [成员](/zh/docs/contribute/participating/#members) 可以提出改进建议。 +SIG Docs 的 [成员](/zh/docs/contribute/participate/roles-and-responsibilities/#members) 可以提出改进建议。 <!-- After you've been contributing to the Kubernetes documentation for a while, you @@ -82,13 +82,13 @@ Kubernetes 版本发布协调文档工作。 <!-- Each Kubernetes release is coordinated by a team of people participating in the sig-release Special Interest Group (SIG). Others on the release team for a given -release include an overall release lead, as well as representatives from sig-pm, -sig-testing, and others. To find out more about Kubernetes release processes, +release include an overall release lead, as well as representatives from +sig-testing and others. To find out more about Kubernetes release processes, refer to [https://github.com/kubernetes/sig-release](https://github.com/kubernetes/sig-release). --> 每一个 Kubernetes 版本都是由参与 sig-release 的 SIG(特别兴趣小组)的一个团队协调的。 -指定版本的发布团队中还包括总体发布牵头人,以及来自 sig-pm、sig-testing 的代表等。 +指定版本的发布团队中还包括总体发布牵头人,以及来自 sig-testing 的代表等。 要了解更多关于 Kubernetes 版本发布的流程,请参考 [https://github.com/kubernetes/sig-release](https://github.com/kubernetes/sig-release)。 @@ -185,7 +185,7 @@ organization. The contributor's membership needs to be backed by two sponsors who are already reviewers. 
--> 新的贡献者针对一个或多个 Kubernetes 项目仓库成功提交了 5 个实质性 PR 之后, -就有资格申请 Kubernetes 组织的[成员身份](/zh/docs/contribute/participating#members)。 +就有资格申请 Kubernetes 组织的[成员身份](/zh/docs/contribute/participate/roles-and-responsibilities/#members)。 贡献者的成员资格需要同时得到两位评审人的保荐。 <!-- @@ -211,7 +211,7 @@ SIG Docs [approvers](/docs/contribute/participating/#approvers) can serve a term --> ## 担任 SIG 联合主席 -SIG Docs [批准人(Approvers)](/zh/docs/contribute/participating/#approvers) +SIG Docs [批准人(Approvers)](/zh/docs/contribute/participate/roles-and-responsibilities/#approvers) 可以担任 SIG Docs 的联合主席。 ### 前提条件 diff --git a/content/zh/docs/contribute/generate-ref-docs/_index.md b/content/zh/docs/contribute/generate-ref-docs/_index.md index 7afd3743f5..fc73d84b29 100644 --- a/content/zh/docs/contribute/generate-ref-docs/_index.md +++ b/content/zh/docs/contribute/generate-ref-docs/_index.md @@ -22,5 +22,5 @@ To build the reference documentation, see the following guide: 本节的主题是描述如何生成 Kubernetes 参考指南。 要生成参考文档,请参考下面的指南: -* [生成参考文档快速入门](/docs/contribute/generate-ref-docs/quickstart/) +* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) diff --git a/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md index 4bfa40f0a8..cf7b51cbaf 100644 --- a/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md +++ b/content/zh/docs/contribute/generate-ref-docs/contribute-upstream.md @@ -29,8 +29,8 @@ API or the `kube-*` components from the upstream code, see the following instruc --> 如果您仅想从上游代码重新生成 Kubernetes API 或 `kube-*` 组件的参考文档。请参考以下说明: -- [生成 Kubernetes API 的参考文档](/docs/contribute/generate-ref-docs/kubernetes-api/) -- [生成 Kubernetes 组件和工具的参考文档](/docs/contribute/generate-ref-docs/kubernetes-components/) +- [生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) +- [生成 Kubernetes 组件和工具的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) ## {{% heading "prerequisites" %}} @@ -395,7 +395,7 @@ You are now ready to follow the [Generating Reference Documentation for the Kube [published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). 
--> 现在,您可以按照 -[生成 Kubernetes API 的参考文档](/docs/contribute/generate-ref-docs/kubernetes-api/) +[生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) 指南来生成 [已发布的 Kubernetes API 参考文档](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)。 @@ -406,7 +406,7 @@ You are now ready to follow the [Generating Reference Documentation for the Kube * [Generating Reference Docs for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/) --> -* [生成 Kubernetes API 的参考文档](/docs/contribute/generate-ref-docs/kubernetes-api/) -* [为 Kubernetes 组件和工具生成参考文档](/docs/home/contribute/generated-reference/kubernetes-components/) -* [生成 kubectl 命令的参考文档](/docs/home/contribute/generated-reference/kubectl/) +* [生成 Kubernetes API 的参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) +* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) +* [生成 kubectl 命令的参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubectl.md b/content/zh/docs/contribute/generate-ref-docs/kubectl.md index bc0076a4dd..c25f07582d 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubectl.md +++ b/content/zh/docs/contribute/generate-ref-docs/kubectl.md @@ -35,8 +35,8 @@ reference page, see 本主题描述了如何为 [kubectl 命令](/docs/reference/generated/kubectl/kubectl-commands) 生成参考文档,如 [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) 和 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint)。 -本主题没有讨论如何生成 [kubectl](/docs/reference/generated/kubectl/kubectl/) 组件选项的参考页面。 -相关说明请参见[为 Kubernetes 组件和工具生成参考页面](/docs/home/contribute/generated-reference/kubernetes-components/)。 +本主题没有讨论如何生成 [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) 组件选项的参考页面。 +相关说明请参见[为 Kubernetes 组件和工具生成参考页面](/zh/docs/contribute/generate-ref-docs/kubernetes-components/)。 {{< /note >}} ## {{% heading "prerequisites" %}} @@ -412,7 +412,7 @@ topics will be visible in the 对 `kubernetes/website` 仓库创建 PR。跟踪你的 PR,并根据需要回应评审人的评论。 继续跟踪你的 PR,直到它被合入。 -在 PR 合入的几分钟后,你更新的参考主题将出现在[已发布文档](/docs/home/)中。 +在 PR 合入的几分钟后,你更新的参考主题将出现在[已发布文档](/zh/docs/home/)中。 ## {{% heading "whatsnext" %}} @@ -421,7 +421,7 @@ topics will be visible in the * [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) --> -* [生成参考文档快速入门](/docs/home/contribute/generate-ref-docs/quickstart/) -* [为 Kubernetes 组件和工具生成参考文档](/docs/home/contribute/generate-ref-docs/kubernetes-components/) -* [为 Kubernetes API 生成参考文档](/docs/home/contribute/generate-ref-docs/kubernetes-api/) +* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) +* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) +* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md index 135f8843ef..c000bff384 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md +++ b/content/zh/docs/contribute/generate-ref-docs/kubernetes-api.md @@ -31,7 +31,7 @@ Kubernetes API 参考文档是从 构建的,而工具是从 
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) 构建的。 -如果您在生成的文档中发现错误,则需要[在上游修复](/docs/contribute/generate-ref-docs/contribute-upstream/)。 +如果您在生成的文档中发现错误,则需要[在上游修复](/zh/docs/contribute/generate-ref-docs/contribute-upstream/)。 如果您只需要从 [OpenAPI](https://github.com/OAI/OpenAPI-Specification) 规范中重新生成参考文档,请继续阅读此页。 @@ -280,12 +280,12 @@ In `<web-base>` run `git add` and `git commit` to commit the change. <!-- Submit your changes as a -[pull request](/docs/contribute/start/) to the +[pull request](/docs/contribute/new-content/open-a-pr/) to the [kubernetes/website](https://github.com/kubernetes/website) repository. Monitor your pull request, and respond to reviewer comments as needed. Continue to monitor your pull request until it has been merged. --> -基于你所生成的更改[创建 PR](/docs/contribute/start/), +基于你所生成的更改[创建 PR](/zh/docs/contribute/new-content/open-a-pr/), 提交到 [kubernetes/website](https://github.com/kubernetes/website) 仓库。 监视您提交的 PR,并根据需要回复 reviewer 的评论。继续监视您的 PR,直到合并为止。 @@ -296,7 +296,7 @@ to monitor your pull request until it has been merged. * [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) --> -* [生成参考文档快速入门](/docs/home/contribute/generate-ref-docs/quickstart/) -* [为 Kubernetes 组件和工具生成参考文档](/docs/home/contribute/generate-ref-docs/kubernetes-components/) -* [为 kubectl 命令集生成参考文档](/docs/home/contribute/generate-ref-docs/kubectl/) +* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) +* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/) +* [为 kubectl 命令集生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/) diff --git a/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md b/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md index 95b2794047..0ca9d48df6 100644 --- a/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md +++ b/content/zh/docs/contribute/generate-ref-docs/kubernetes-components.md @@ -23,7 +23,7 @@ This page shows how to build the Kubernetes component and tool reference pages. Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin) in the Reference Documentation Quickstart guide. --> -阅读参考文档快速入门指南中的[准备工作](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)节。 +阅读参考文档快速入门指南中的[准备工作](/zh/docs/contribute/generate-ref-docs/quickstart/#before-you-begin)节。 <!-- steps --> @@ -31,7 +31,7 @@ in the Reference Documentation Quickstart guide. Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) to generate the Kubernetes component and tool reference pages. --> -按照[参考文档快速入门](/docs/contribute/generate-ref-docs/quickstart/) +按照[参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/) 指引,生成 Kubernetes 组件和工具的参考文档。 ## {{% heading "whatsnext" %}} @@ -43,8 +43,8 @@ to generate the Kubernetes component and tool reference pages. 
* [Contributing to the Upstream Kubernetes Project for Documentation](/docs/contribute/generate-ref-docs/contribute-upstream/)
-->
-* [生成参考文档快速入门](/docs/home/contribute/generate-ref-docs/quickstart/)
-* [为 kubectll 命令生成参考文档](/docs/home/contribute/generate-ref-docs/kubectl/)
-* [为 Kubernetes API 生成参考文档](/docs/home/contribute/generate-ref-docs/kubernetes-api/)
-* [为上游 Kubernetes 项目做贡献以改进文档](/docs/contribute/generate-ref-docs/contribute-upstream/)
+* [生成参考文档快速入门](/zh/docs/contribute/generate-ref-docs/quickstart/)
+* [为 kubectl 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/)
+* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/)
+* [为上游 Kubernetes 项目做贡献以改进文档](/zh/docs/contribute/generate-ref-docs/contribute-upstream/)
diff --git a/content/zh/docs/contribute/generate-ref-docs/quickstart.md b/content/zh/docs/contribute/generate-ref-docs/quickstart.md
index 4011b5f9b8..0c97a99163 100644
--- a/content/zh/docs/contribute/generate-ref-docs/quickstart.md
+++ b/content/zh/docs/contribute/generate-ref-docs/quickstart.md
@@ -57,7 +57,7 @@ see the [contributing upstream guide](/docs/contribute/generate-ref-docs/contrib
{{< note>}}
如果你希望更改构建工具和 API 参考资料,可以阅读
-[上游贡献指南](/docs/contribute/generate-ref-docs/contribute-upstream).
+[上游贡献指南](/zh/docs/contribute/generate-ref-docs/contribute-upstream).
{{< /note >}}
<!--
@@ -185,7 +185,7 @@ Single page Markdown documents, imported by the tool, must adhere to the
[Documentation Style Guide](/docs/contribute/style/style-guide/).
-->
通过工具导入的单页面的 Markdown 文档必须遵从
-[文档样式指南](/docs/contribute/style/style-guide/)。
+[文档样式指南](/zh/docs/contribute/style/style-guide/)。
<!--
## Customizing reference.yml
@@ -385,7 +385,7 @@ topics will be visible in the
继续监视该 PR 直到其被合并为止。
当你的 PR 被合并几分钟之后,你所做的对参考文档的变更就会出现
-[发布的文档](/docs/home/)上。
+[发布的文档](/zh/docs/home/)上。
## {{% heading "whatsnext" %}}
@@ -399,7 +399,7 @@ running the build targets, see the following guides:
-->
要手动设置所需的构造仓库,执行构建目标,以生成各个参考文档,可参考下面的指南:
-* [为 Kubernetes 组件和工具生成参考文档](/docs/contribute/generate-ref-docs/kubernetes-components/)
-* [为 kubeclt 命令生成参考文档](/docs/contribute/generate-ref-docs/kubectl/)
-* [为 Kubernetes API 生成参考文档](/docs/contribute/generate-ref-docs/kubernetes-api/)
+* [为 Kubernetes 组件和工具生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-components/)
+* [为 kubectl 命令生成参考文档](/zh/docs/contribute/generate-ref-docs/kubectl/)
+* [为 Kubernetes API 生成参考文档](/zh/docs/contribute/generate-ref-docs/kubernetes-api/)
diff --git a/content/zh/docs/contribute/localization.md b/content/zh/docs/contribute/localization.md
index efd57dbdbf..2241265ec2 100644
--- a/content/zh/docs/contribute/localization.md
+++ b/content/zh/docs/contribute/localization.md
@@ -62,7 +62,7 @@ First, [create your own fork](/docs/contribute/start/#improve-existing-content)
### 派生(fork)并且克隆仓库 {#fork-and-clone-the-repo}
首先,为 [kubernetes/website](https://github.com/kubernetes/website) 仓库
-[创建你自己的副本](/zh/docs/contribute/new-content/new-content/#fork-the-repo)。
+[创建你自己的副本](/zh/docs/contribute/new-content/open-a-pr/#fork-the-repo)。
<!--
Then, clone your fork and `cd` into it:
@@ -359,7 +359,7 @@ Site strings | [All site strings in a new localized TOML file](https://github.co
-----|-----
主页 | [所有标题和副标题网址](/zh/docs/home/)
安装 | [所有标题和副标题网址](/zh/docs/setup/)
-教程 | [Kubernetes 基础](/zh/docs/tutorials/kubernetes-basics/), [Hello Minikube](/zh/docs/tutorials/stateless-application/hello-minikube/)
+教程 | [Kubernetes 基础](/zh/docs/tutorials/kubernetes-basics/), [Hello Minikube](/zh/docs/tutorials/hello-minikube/)
网站字符串 | [新的本地化
TOML 文件中的所有网站字符串](https://github.com/kubernetes/website/tree/master/i18n) <!-- @@ -545,11 +545,11 @@ For more information about working from forks or directly from the repository, s <!-- ## Upstream contributions -SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/intermediate#localize-content) to the English source. +SIG Docs welcomes upstream contributions and corrections to the English source. --> ### 上游贡献 {#upstream-contributions} -Sig Docs 欢迎对英文原文的[上游贡献和修正](/zh/docs/contribute/intermediate#localize-content)。 +Sig Docs 欢迎对英文原文的上游贡献和修正。 <!-- ## Help an existing localization diff --git a/content/zh/docs/contribute/new-content/open-a-pr.md b/content/zh/docs/contribute/new-content/open-a-pr.md index c20b36d560..ed3476ca9b 100644 --- a/content/zh/docs/contribute/new-content/open-a-pr.md +++ b/content/zh/docs/contribute/new-content/open-a-pr.md @@ -1,6 +1,5 @@ --- title: 发起拉取请求(PR) -slug: new-content content_type: concept weight: 10 card: @@ -9,7 +8,6 @@ card: --- <!-- title: Opening a pull request -slug: new-content content_type: concept weight: 10 card: @@ -879,5 +877,5 @@ PR,也可以添加对它们的链接。你可以多少了解该团队的流程 <!-- - Read [Reviewing](/docs/contribute/reviewing/revewing-prs) to learn more about the review process. --> -- 阅读[评阅](/zh/docs/contribute/review/revewing-prs)节,学习评阅过程。 +- 阅读[评阅](/zh/docs/contribute/review/reviewing-prs)节,学习评阅过程。 diff --git a/content/zh/docs/contribute/participate/_index.md b/content/zh/docs/contribute/participate/_index.md index 0c263dd8a8..1fcab79b2a 100644 --- a/content/zh/docs/contribute/participate/_index.md +++ b/content/zh/docs/contribute/participate/_index.md @@ -206,7 +206,7 @@ SIG Docs 批准人。下面是合并的工作机制: - 所有 Kubernetes 成员可以通过 `/lgtm` 评论添加 `lgtm` 标签。 - 只有 SIG Docs 批准人可以通过评论 `/approve` 合并 PR。 某些批准人还会执行一些其他角色,例如 - [PR 管理者](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) 或 + [PR 管理者](/zh/docs/contribute/participate/pr-wranglers/) 或 [SIG Docs 主席](#sig-docs-chairperson)等。 ## {{% heading "whatsnext" %}} @@ -220,6 +220,6 @@ For more information about contributing to the Kubernetes documentation, see: --> 关于贡献 Kubernetes 文档的更多信息,请参考: -- [贡献新内容](/docs/contribute/overview/) -- [评阅内容](/docs/contribute/review/reviewing-prs) -- [文档样式指南](/docs/contribute/style/) +- [贡献新内容](/zh/docs/contribute/new-content/overview/) +- [评阅内容](/zh/docs/contribute/review/reviewing-prs) +- [文档样式指南](/zh/docs/contribute/style/) diff --git a/content/zh/docs/contribute/participate/pr-wranglers.md b/content/zh/docs/contribute/participate/pr-wranglers.md index 442dcb48e6..1199c5e865 100644 --- a/content/zh/docs/contribute/participate/pr-wranglers.md +++ b/content/zh/docs/contribute/participate/pr-wranglers.md @@ -81,31 +81,34 @@ These queries exclude localization PRs. All queries are against the main branch 这些查询都不包含本地化的 PR,并仅包含主分支上的 PR(除了最后一个查询)。 <!-- -- [No CLA, not eligible to merge](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): +- [No CLA, not eligible to merge](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3Alanguage%2Fen): Remind the contributor to sign the CLA. If both the bot and a human have reminded them, close the PR and remind them that they can open it after signing the CLA. 
**Do not review PRs whose authors have not signed the CLA!** -- [Needs LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): +- [Needs LGTM](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+-label%3Algtm): Lists PRs that need an LGTM from a member. If the PR needs technical review, loop in one of the reviewers suggested by the bot. If the content needs work, add suggestions and feedback in-line. -- [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): +- [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+label%3Algtm+): Lists PRs that need an `/approve` comment to merge. -- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]). -- [Not against the main branch](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch. +- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]). +- [Not against the main branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch. 
--> -- [未签署 CLA,不可合并的 PR](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): +- [未签署 CLA,不可合并的 PR](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3Alanguage%2Fen): 提醒贡献者签署 CLA。如果机器人和审阅者都已经提醒他们,请关闭 PR,并提醒他们在签署 CLA 后可以重新提交。 **在作者没有签署 CLA 之前,不要审阅他们的 PR!** -- [需要 LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): +- [需要 LGTM](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+-label%3Algtm): 列举需要来自成员的 LGTM 评论的 PR。 如果需要技术审查,请告知机器人所建议的审阅者。 如果 PR 继续改进,就地提供更改建议或反馈。 -- [已有 LGTM标签,需要 Docs 团队批准](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): + +- [已有 LGTM标签,需要 Docs 团队批准](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+label%3Algtm+): 列举需要 `/approve` 评论来合并的 PR。 -- [快速批阅](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): + +- [快速批阅](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): 列举针对主分支的、没有明确合并障碍的 PR。 在浏览 PR 时,可以将 "XS" 尺寸标签更改为 "S"、"M"、"L"、"XL"、"XXL"。 -- [非主分支的 PR](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): + +- [非主分支的 PR](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amaster): 如果 PR 针对 `dev-` 分支,则表示它适用于即将发布的版本。 请添加带有 `/assign @<负责人的 github 账号>`,将其指派给 [发行版本负责人](https://github.com/kubernetes/sig-release/tree/master/release-team)。 diff --git a/content/zh/docs/contribute/participate/roles-and-responsibilties.md b/content/zh/docs/contribute/participate/roles-and-responsibilities.md similarity index 99% rename from content/zh/docs/contribute/participate/roles-and-responsibilties.md rename to content/zh/docs/contribute/participate/roles-and-responsibilities.md index 5fa2a43832..c1359d7171 100644 --- a/content/zh/docs/contribute/participate/roles-and-responsibilties.md +++ b/content/zh/docs/contribute/participate/roles-and-responsibilities.md @@ -58,7 +58,7 @@ For more information, see [contributing new content](/docs/contribute/new-conten [`kubernetes/website`](https://github.com/kubernetes/website) 上报告 Issue。 - 对某 PR 给出无约束力的反馈信息 - 为本地化提供帮助 -- 在 [Slack](http://slack.k8s.io/) 或 +- 在 [Slack](https://slack.k8s.io/) 或 [SIG Docs 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) 上提出改进建议。 diff --git a/content/zh/docs/contribute/review/for-approvers.md b/content/zh/docs/contribute/review/for-approvers.md index 3db894e0b4..b801e7efcf 100644 --- a/content/zh/docs/contribute/review/for-approvers.md +++ b/content/zh/docs/contribute/review/for-approvers.md @@ -15,7 +15,7 @@ weight: 20 <!-- overview --> <!-- -SIG Docs 
[Reviewers](/docs/contribute/participating/#reviewers) and [Approvers](/docs/contribute/participating/#approvers) do a few extra things when reviewing a change. +SIG Docs [Reviewers](/docs/contribute/participate/roles-and-responsibilities/#reviewers) and [Approvers](/docs/contribute/participate/roles-and-responsibilities/#approvers) do a few extra things when reviewing a change. Every week a specific docs approver volunteers to triage and review pull requests. This @@ -26,8 +26,9 @@ requests (PRs) that are not already under active review. In addition to the rotation, a bot assigns reviewers and approvers for the PR based on the owners for the affected files. --> -SIG Docs [评阅人(Reviewers)](/docs/contribute/participating/#reviewers) -和[批准人(Approvers)](/docs/contribute/participating/#approvers) +SIG Docs +[评阅人(Reviewers)](/zh/docs/contribute/participate/roles-and-responsibilities/#reviewers) +和[批准人(Approvers)](/zh/docs/contribute/participate/roles-and-responsibilities/#approvers) 在对变更进行评审时需要做一些额外的事情。 每周都有一个特定的文档批准人自愿负责对 PR 进行分类和评阅。 @@ -50,7 +51,7 @@ Everything described in [Reviewing a pull request](/docs/contribute/review/revie Kubernetes 文档遵循 [Kubernetes 代码评阅流程](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process)。 -[评阅 PR](/docs/contribute/review/reviewing-prs) 文档中所描述的所有规程都适用, +[评阅 PR](/zh/docs/contribute/review/reviewing-prs/) 文档中所描述的所有规程都适用, 不过评阅人和批准人还要做以下工作: <!-- @@ -73,7 +74,7 @@ when it comes to requesting technical review from code contributors. 你可以查看 Markdown 文件的文件头,其中的 `reviewers` 字段给出了哪些人可以为文档提供技术审核。 {{< /note >}} -- 确保 PR 遵从[内容指南](/docs/contribute/style/content-guide/)和[样式指南](/docs/contribute/style/style-guide/); +- 确保 PR 遵从[内容指南](/zh/docs/contribute/style/content-guide/)和[样式指南](/zh/docs/contribute/style/style-guide/); 如果 PR 没有达到要求,指引作者阅读指南中的相关部分。 - 适当的时候使用 GitHub **Request Changes** 选项,建议 PR 作者实施所建议的修改。 - 当你所提供的建议被采纳后,在 GitHub 中使用 `/approve` 或 `/lgtm` Prow 命令,改变评审状态。 @@ -406,9 +407,9 @@ Sample response to a request for support: This issue sounds more like a request for support and less like an issue specifically for docs. I encourage you to bring your question to the `#kubernetes-users` channel in -[Kubernetes slack](http://slack.k8s.io/). You can also search +[Kubernetes slack](https://slack.k8s.io/). You can also search resources like -[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) +[Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) for answers to similar questions. You can also open issues for Kubernetes functionality in diff --git a/content/zh/docs/contribute/review/reviewing-prs.md b/content/zh/docs/contribute/review/reviewing-prs.md index cc93511a8b..e420fb919a 100644 --- a/content/zh/docs/contribute/review/reviewing-prs.md +++ b/content/zh/docs/contribute/review/reviewing-prs.md @@ -34,9 +34,9 @@ Before reviewing, it's a good idea to: 在评阅之前,可以考虑: -- 阅读[内容指南](/docs/contribute/style/content-guide/)和 - [样式指南](/docs/contribute/style/style-guide/)以便给出有价值的评论。 -- 了解 Kubernetes 文档社区中不同的[角色和职责](/docs/contribute/participating/#roles-and-responsibilities)。 +- 阅读[内容指南](/zh/docs/contribute/style/content-guide/)和 + [样式指南](/zh/docs/contribute/style/style-guide/)以便给出有价值的评论。 +- 了解 Kubernetes 文档社区中不同的[角色和职责](/zh/docs/contribute/participate/roles-and-responsibilities/)。 <!-- body --> <!-- @@ -90,7 +90,7 @@ In general, review pull requests for content and style in English. 2. 
使用以下标签(组合)对待处理 PRs 进行过滤: - `cncf-cla: yes` (建议):由尚未签署 CLA 的贡献者所发起的 PRs 不可以合并。 - 参考[签署 CLA](/docs/contribute/new-content/overview/#sign-the-cla) 以了解更多信息。 + 参考[签署 CLA](/zh/docs/contribute/new-content/overview/#sign-the-cla) 以了解更多信息。 - `language/en` (建议):仅查看英语语言的 PRs。 - `size/<尺寸>`:过滤特定尺寸(规模)的 PRs。如果你刚入门,可以从较小的 PR 开始。 @@ -153,7 +153,7 @@ When reviewing, use the following as a starting point. - 是否存在明显的语言或语法错误?对某事的描述有更好的方式? - 是否存在一些过于复杂晦涩的用词,本可以用简单词汇来代替? - 是否有些用词、术语或短语可以用不带歧视性的表达方式代替? -- 用词和大小写方面是否遵从了[样式指南](/docs/contribute/style/style-guide/)? +- 用词和大小写方面是否遵从了[样式指南](/zh/docs/contribute/style/style-guide/)? - 是否有些句子太长,可以改得更短、更简单? - 是否某些段落过长,可以考虑使用列表或者表格来表达? @@ -188,10 +188,10 @@ For small issues with a PR, like typos or whitespace, prefix your comments with 如果是这样,PR 是否会导致出现新的失效链接? 是否有其他的办法,比如改变页面标题但不改变其 slug? - PR 是否引入新的页面?如果是: - - 该页面是否使用了正确的[页面内容类型](/docs/contribute/style/page-content-types/) + - 该页面是否使用了正确的[页面内容类型](/zh/docs/contribute/style/page-content-types/) 及相关联的 Hugo 短代码(shortcodes)? - 该页面能否在对应章节的侧面导航中显示?显示得正确么? - - 该页面是否应出现在[网站主页面](/docs/home/)的列表中? + - 该页面是否应出现在[网站主页面](/zh/docs/home/)的列表中? - 变更是否正确出现在 Netlify 预览中了? 要对列表、代码段、表格、注释和图像等元素格外留心 diff --git a/content/zh/docs/contribute/style/hugo-shortcodes/index.md b/content/zh/docs/contribute/style/hugo-shortcodes/index.md index 7e12d2da8a..dc3400c611 100644 --- a/content/zh/docs/contribute/style/hugo-shortcodes/index.md +++ b/content/zh/docs/contribute/style/hugo-shortcodes/index.md @@ -358,7 +358,7 @@ println "This is tab 2." {{< tabs name="tab_with_file_include" >}} {{< tab name="Content File #1" include="example1" />}} {{< tab name="Content File #2" include="example2" />}} -{{< tab name="JSON File" include="podtemplate" />}} +{{< tab name="JSON File" include="podtemplate.json" />}} {{< /tabs >}} ## {{% heading "whatsnext" %}} @@ -367,11 +367,13 @@ println "This is tab 2." * Learn about [Hugo](https://gohugo.io/). * Learn about [writing a new topic](/docs/home/contribute/style/write-new-topic/). * Learn about [page content types](/docs/home/contribute/style/page-content-types/). -* Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). +* Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/). +* Learn about [advanced contributing](/docs/contribute/advanced/). --> * 了解 [Hugo](https://gohugo.io/)。 -* 了解 [撰写新的话题](/zh/docs/contribute/write-new-topic/)。 -* 了解 [使用页面类型](/zh/docs/contribute/style/page-content-types/)。 -* 了解 [发起 PR](/zh/docs/contribute/new-content/create-a-pr/)。 +* 了解[撰写新的话题](/zh/docs/contribute/style/write-new-topic/)。 +* 了解[使用页面内容类型](/zh/docs/contribute/style/page-content-types/)。 +* 了解[发起 PR](/zh/docs/contribute/new-content/open-a-pr/)。 +* 了解[高级贡献](/zh/docs/contribute/advanced/)。 diff --git a/content/zh/docs/contribute/style/page-content-types.md b/content/zh/docs/contribute/style/page-content-types.md index 241f440e73..a5526383f5 100644 --- a/content/zh/docs/contribute/style/page-content-types.md +++ b/content/zh/docs/contribute/style/page-content-types.md @@ -354,7 +354,7 @@ An example of a published tutorial topic is 阅读的主题。 已发布的教程主题的一个例子是 -[使用 Deployment 运行无状态应用](/zh/docs/tutorials/stateless-application/run-stateless-application-deployment/). +[使用 Deployment 运行无状态应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/). 
<!-- ### Reference diff --git a/content/zh/docs/contribute/style/style-guide.md b/content/zh/docs/contribute/style/style-guide.md index c3c00b5be9..777a0b6ca4 100644 --- a/content/zh/docs/contribute/style/style-guide.md +++ b/content/zh/docs/contribute/style/style-guide.md @@ -38,12 +38,14 @@ discussion. <!-- body --> <!-- -Kubernetes documentation uses [Blackfriday Markdown Renderer](https://github.com/russross/blackfriday) along with a few [Hugo Shortcodes](/docs/home/contribute/includes/) to support glossary entries, tabs, +Kubernetes documentation uses [Goldmark Markdown Renderer](https://github.com/yuin/goldmark) +with some adjustments along with a few +[Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/) to support glossary entries, tabs, and representing feature state. --> {{< note >}} -Kubernetes 文档 [Blackfriday Markdown 解释器](https://github.com/russross/blackfriday) -和一些 [Hugo 短代码](/zh/docs/home/contribute/includes/) 来支持词汇表项、Tab +Kubernetes 文档使用带调整的 [Goldmark Markdown 解释器](https://github.com/yuin/goldmark/) +和一些 [Hugo 短代码](/zh/docs/contribute/style/hugo-shortcodes/) 来支持词汇表项、Tab 页以及特性门控标注。 {{< /note >}} diff --git a/content/zh/docs/contribute/style/write-new-topic.md b/content/zh/docs/contribute/style/write-new-topic.md index 4bb8f81254..bd5a2fce72 100644 --- a/content/zh/docs/contribute/style/write-new-topic.md +++ b/content/zh/docs/contribute/style/write-new-topic.md @@ -285,7 +285,7 @@ For an example of a topic that uses this technique, see [Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/). --> 有关使用此技术的主题的示例,请参见 -[运行单实例有状态的应用](/zh/docs/tutorials/stateful-application/run-stateful-application/)。 +[运行单实例有状态的应用](/zh/docs/tasks/run-application/run-single-instance-stateful-application/)。 <!-- ## Adding images to a topic diff --git a/content/zh/docs/reference/access-authn-authz/admission-controllers.md b/content/zh/docs/reference/access-authn-authz/admission-controllers.md index 339b3af53d..ac42b47703 100755 --- a/content/zh/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/zh/docs/reference/access-authn-authz/admission-controllers.md @@ -83,7 +83,7 @@ other admission controllers. 最后,除了对对象进行变更外,准入控制器还可以有其它作用:将相关资源作为请求处理的一部分进行变更。 增加使用配额就是一个典型的示例,说明了这样做的必要性。 -此类用法都需要相应的回收或回调过程,因为任一准入控制器都无法确定某个请能否通过所有其它准入控制器。 +此类用法都需要相应的回收或回调过程,因为任一准入控制器都无法确定某个请求能否通过所有其它准入控制器。 <!-- ## Why do I need them? 
diff --git a/content/zh/docs/reference/access-authn-authz/controlling-access.md b/content/zh/docs/reference/access-authn-authz/controlling-access.md index e181571a5d..edfc3aab85 100644 --- a/content/zh/docs/reference/access-authn-authz/controlling-access.md +++ b/content/zh/docs/reference/access-authn-authz/controlling-access.md @@ -5,7 +5,7 @@ approvers: title: Kubernetes API 访问控制 --- -用户通过 `kubectl`、客户端库或者通过发送 REST 请求[访问 API](/docs/user-guide/accessing-the-cluster)。 用户(自然人)和 [Kubernetes 服务账户](/docs/tasks/configure-pod-container/configure-service-account/) 都可以被授权进行 API 访问。 +用户通过 `kubectl`、客户端库或者通过发送 REST 请求[访问 API](/docs/user-guide/accessing-the-cluster)。 用户和 [Kubernetes 服务账户](/docs/tasks/configure-pod-container/configure-service-account/) 都可以被授权进行 API 访问。 请求到达 API 服务器后会经过几个阶段,具体说明如图: ![Diagram of request handling steps for Kubernetes API request](/images/docs/admin/access-control-overview.svg) diff --git a/content/zh/docs/reference/access-authn-authz/node.md b/content/zh/docs/reference/access-authn-authz/node.md index 2f0654dd00..22573d5e82 100644 --- a/content/zh/docs/reference/access-authn-authz/node.md +++ b/content/zh/docs/reference/access-authn-authz/node.md @@ -43,7 +43,7 @@ Read operations: * endpoints * nodes * pods -* secrets、configmaps、以及绑定到 kubelet 的节点的 pod 的持久卷申领和持久卷 +* secrets、configmaps、pvcs 以及绑定到 kubelet 节点的与 pod 相关的持久卷 <!-- * services diff --git a/content/zh/docs/reference/kubectl/cheatsheet.md b/content/zh/docs/reference/kubectl/cheatsheet.md index b157fefd71..0ede5fb604 100644 --- a/content/zh/docs/reference/kubectl/cheatsheet.md +++ b/content/zh/docs/reference/kubectl/cheatsheet.md @@ -780,6 +780,6 @@ Verbosity | Description --> * 进一步了解 [kubectl 概述](/docs/reference/kubectl/overview/)。 * 参阅 [kubectl](/docs/reference/kubectl/kubectl/) 选项. -* 参阅 [kubectl 使用约定](/docs/reference/kubectl/conventions/)来理解如果在可复用的脚本中使用它。 +* 参阅 [kubectl 使用约定](/docs/reference/kubectl/conventions/)来理解如何在可复用的脚本中使用它。 * 查看社区中其他的 [kubectl 备忘单](https://github.com/dennyzhang/cheatsheet-kubernetes-A4)。 diff --git a/content/zh/docs/setup/best-practices/certificates.md b/content/zh/docs/setup/best-practices/certificates.md index 40a9da84a1..67fe1c9942 100644 --- a/content/zh/docs/setup/best-practices/certificates.md +++ b/content/zh/docs/setup/best-practices/certificates.md @@ -55,7 +55,7 @@ Kubernetes 需要 PKI 才能执行以下操作: * API 服务器的客户端证书,用于和 etcd 的会话 * 控制器管理器的客户端证书/kubeconfig,用于和 API server 的会话 * 调度器的客户端证书/kubeconfig,用于和 API server 的会话 -* [前端代理][proxy] 的客户端及服务端证书 +* [前端代理](/zh/docs/tasks/extend-kubernetes/configure-aggregation-layer/) 的客户端及服务端证书 {{< note >}} <!-- @@ -280,6 +280,3 @@ These files are used as follows: [usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage [kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/ -[proxy]: /docs/tasks/access-kubernetes-api/configure-aggregation-layer/ - - diff --git a/content/zh/docs/setup/independent/create-cluster-kubeadm.md b/content/zh/docs/setup/independent/create-cluster-kubeadm.md index c00f723631..2f80c85c2d 100644 --- a/content/zh/docs/setup/independent/create-cluster-kubeadm.md +++ b/content/zh/docs/setup/independent/create-cluster-kubeadm.md @@ -388,7 +388,7 @@ support [Network Policy](/docs/concepts/services-networking/networkpolicies/). S - IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). 
- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9. --> -**网络必须在部署任何应用之前部署好。此外,在网络安装之前是 CoreDNS 不会启用的。 +**网络必须在部署任何应用之前部署好。此外,在网络安装之前 CoreDNS 是不会启用的。 kubeadm 只支持基于容器网络接口(CNI)的网络而且不支持 kubenet 。** 有一些项目为 Kubernetes 提供使用 CNI 的 Pod 网络,其中一些也支持[网络策略](/docs/concepts/services-networking/networkpolicies/). diff --git a/content/zh/docs/setup/production-environment/turnkey/clc.md b/content/zh/docs/setup/production-environment/turnkey/clc.md new file mode 100644 index 0000000000..6be7d06157 --- /dev/null +++ b/content/zh/docs/setup/production-environment/turnkey/clc.md @@ -0,0 +1,532 @@ +--- +title: 在 CenturyLink Cloud 上运行 Kubernetes +--- +<!-- +--- +title: Running Kubernetes on CenturyLink Cloud +--- +---> + + +<!-- +These scripts handle the creation, deletion and expansion of Kubernetes clusters on CenturyLink Cloud. + +You can accomplish all these tasks with a single command. We have made the Ansible playbooks used to perform these tasks available [here](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md). +---> +这些脚本适用于在 CenturyLink Cloud 上创建、删除和扩展 Kubernetes 集群。 + +您可以使用单个命令完成所有任务。我们提供了用于执行这些任务的 Ansible 手册 [点击这里](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md) + +<!-- +## Find Help + +If you run into any problems or want help with anything, we are here to help. Reach out to use via any of the following ways: + +- Submit a github issue +- Send an email to Kubernetes AT ctl DOT io +- Visit [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes) +---> +## 寻求帮助 + +如果运行出现问题或者想要寻求帮助,我们非常乐意帮忙。通过以下方式获取帮助: + +- 提交 github issue +- 给 Kubernetes AT ctl DOT io 发送邮件 +- 访问 [http://info.ctl.io/kubernetes](http://info.ctl.io/kubernetes) + +<!-- +## Clusters of VMs or Physical Servers, your choice. + +- We support Kubernetes clusters on both Virtual Machines or Physical Servers. If you want to use physical servers for the worker nodes (minions), simple use the --minion_type=bareMetal flag. +- For more information on physical servers, visit: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/) +- Physical serves are only available in the VA1 and GB3 data centers. +- VMs are available in all 13 of our public cloud locations +---> +## 基于 VM 或物理服务器的集群可供选择 + +- 我们在虚拟机或物理服务器上都支持 Kubernetes 集群。如果要将物理服务器用于辅助 Node(minions),则只需使用 --minion_type=bareMetal 标志。 +- 有关物理服务器的更多信息,请访问: [https://www.ctl.io/bare-metal/](https://www.ctl.io/bare-metal/) +- 仅在 VA1 和 GB3 数据中心提供物理服务。 +- 13个公共云位置都可以使用虚拟机。 + +<!-- +## Requirements + +The requirements to run this script are: + +- A linux administrative host (tested on ubuntu and macOS) +- python 2 (tested on 2.7.11) + - pip (installed with python as of 2.7.9) +- git +- A CenturyLink Cloud account with rights to create new hosts +- An active VPN connection to the CenturyLink Cloud from your linux host +---> +## 要求 + +运行此脚本的要求有: + +- Linux 管理主机(在 ubuntu 和 macOS 上测试) +- python 2(在 2.7.11 版本上测试) + - pip(从 2.7.9 版开始与 python 一起安装) +- git +- 具有新建主机权限的 CenturyLink Cloud 帐户 +- 从 Linux 主机到 CenturyLink Cloud 的有效 VPN 连接 + +<!-- +## Script Installation + +After you have all the requirements met, please follow these instructions to install this script. + +1) Clone this repository and cd into it. 
+---> +## 脚本安装 + +满足所有要求后,请按照以下说明安装此脚本。 + +1)克隆此存储库并通过 cd 进入。 + +```shell +git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc +``` + +<!-- +2) Install all requirements, including +---> +2)安装所有要求的部分,包括 + + * Ansible + * CenturyLink Cloud SDK + * Ansible Modules + +```shell +sudo pip install -r ansible/requirements.txt +``` + +<!-- +3) Create the credentials file from the template and use it to set your ENV variables +---> +3)从模板创建凭证文件,并使用它来设置您的 ENV 变量 + +```shell +cp ansible/credentials.sh.template ansible/credentials.sh +vi ansible/credentials.sh +source ansible/credentials.sh + +``` + +<!-- +4) Grant your machine access to the CenturyLink Cloud network by using a VM inside the network or [ configuring a VPN connection to the CenturyLink Cloud network.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/) +---> +4)使用内网的虚拟机或 [ 配置与 CenturyLink Cloud 网络的 VPN 连接.](https://www.ctl.io/knowledge-base/network/how-to-configure-client-vpn/) 授予您的计算机对 CenturyLink Cloud 网络的访问权限。 + +<!-- +#### Script Installation Example: Ubuntu 14 Walkthrough + +If you use an ubuntu 14, for your convenience we have provided a step by step guide to install the requirements and install the script. +---> +#### 脚本安装示例:Ubuntu 14 演练 + +如果您使用 Ubuntu 14,为方便起见,我们会提供分步指导帮助安装必备条件和脚本。 + +```shell +# system +apt-get update +apt-get install -y git python python-crypto +curl -O https://bootstrap.pypa.io/get-pip.py +python get-pip.py + +# installing this repository +mkdir -p ~home/k8s-on-clc +cd ~home/k8s-on-clc +git clone https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc.git +cd adm-kubernetes-on-clc/ +pip install -r requirements.txt + +# getting started +cd ansible +cp credentials.sh.template credentials.sh; vi credentials.sh +source credentials.sh +``` + + + +<!-- +## Cluster Creation + +To create a new Kubernetes cluster, simply run the ```kube-up.sh``` script. A complete +list of script options and some examples are listed below. +---> +## 创建集群 + +要创建一个新的 Kubernetes 集群,只需运行 ```kube-up.sh``` 脚本即可。以下是一套完整的脚本选项列表和一些示例。 + +```shell +CLC_CLUSTER_NAME=[name of kubernetes cluster] +cd ./adm-kubernetes-on-clc +bash kube-up.sh -c="$CLC_CLUSTER_NAME" +``` + +<!-- +It takes about 15 minutes to create the cluster. Once the script completes, it +will output some commands that will help you setup kubectl on your machine to +point to the new cluster. + +When the cluster creation is complete, the configuration files for it are stored +locally on your administrative host, in the following directory +---> +创建集群大约需要15分钟。脚本完成后,它将输出一些命令,这些命令将帮助您在计算机上设置 kubectl 以指向新集群。 + +完成集群创建后,其配置文件会存储在本地管理主机的以下目录中 + +```shell +> CLC_CLUSTER_HOME=$HOME/.clc_kube/$CLC_CLUSTER_NAME/ +``` + + +<!-- +#### Cluster Creation: Script Options +---> +#### 创建集群:脚本选项 + +```shell +Usage: kube-up.sh [OPTIONS] +Create servers in the CenturyLinkCloud environment and initialize a Kubernetes cluster +Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in +order to access the CenturyLinkCloud API + +All options (both short and long form) require arguments, and must include "=" +between option name and option value. 
+ + -h (--help) display this help and exit + -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names + -t= (--minion_type=) standard -> VM (default), bareMetal -> physical] + -d= (--datacenter=) VA1 (default) + -m= (--minion_count=) number of kubernetes minion nodes + -mem= (--vm_memory=) number of GB ram for each minion + -cpu= (--vm_cpu=) number of virtual cps for each minion node + -phyid= (--server_conf_id=) physical server configuration id, one of + physical_server_20_core_conf_id + physical_server_12_core_conf_id + physical_server_4_core_conf_id (default) + -etcd_separate_cluster=yes create a separate cluster of three etcd nodes, + otherwise run etcd on the master node +``` + +<!-- +## Cluster Expansion + +To expand an existing Kubernetes cluster, run the ```add-kube-node.sh``` +script. A complete list of script options and some examples are listed [below](#cluster-expansion-script-options). +This script must be run from the same host that created the cluster (or a host +that has the cluster artifact files stored in ```~/.clc_kube/$cluster_name```). +---> +## 扩展集群 + +要扩展现有的Kubernetes集群,请运行```add-kube-node.sh``` 脚本。脚本选项的完整列表和一些示例在 [下面](#cluster-expansion-script-options) 列出。该脚本必须运行在创建集群的同一个主机(或储存集群工件文件的 ```~/.clc_kube/$cluster_name``` 主机)。 + +```shell +cd ./adm-kubernetes-on-clc +bash add-kube-node.sh -c="name_of_kubernetes_cluster" -m=2 +``` + +<!-- +#### Cluster Expansion: Script Options +---> +#### 扩展集群:脚本选项 + +```shell +Usage: add-kube-node.sh [OPTIONS] +Create servers in the CenturyLinkCloud environment and add to an +existing CLC kubernetes cluster + +Environment variables CLC_V2_API_USERNAME and CLC_V2_API_PASSWD must be set in +order to access the CenturyLinkCloud API + + -h (--help) display this help and exit + -c= (--clc_cluster_name=) set the name of the cluster, as used in CLC group names + -m= (--minion_count=) number of kubernetes minion nodes to add +``` + +<!-- +## Cluster Deletion + +There are two ways to delete an existing cluster: + +1) Use our python script: +---> +## 删除集群 + +有两种方法可以删除集群: + +1)使用 Python 脚本: + +```shell +python delete_cluster.py --cluster=clc_cluster_name --datacenter=DC1 +``` + +<!-- +2) Use the CenturyLink Cloud UI. To delete a cluster, log into the CenturyLink +Cloud control portal and delete the parent server group that contains the +Kubernetes Cluster. We hope to add a scripted option to do this soon. 
+---> +2)使用 CenturyLink Cloud UI。要删除集群,请登录 CenturyLink Cloud 控制页面并删除包含 Kubernetes 集群的父服务器组。我们希望能够添加脚本选项以尽快完成此操作。 + +<!-- +## Examples + +Create a cluster with name of k8s_1, 1 master node and 3 worker minions (on physical machines), in VA1 +---> +## 示例 + +在 VA1 中创建一个集群,名称为 k8s_1,具备1个主 Node 和3个辅助 Minion(在物理机上) + +```shell +bash kube-up.sh --clc_cluster_name=k8s_1 --minion_type=bareMetal --minion_count=3 --datacenter=VA1 +``` + +<!-- +Create a cluster with name of k8s_2, an ha etcd cluster on 3 VMs and 6 worker minions (on VMs), in VA1 +---> +在 VA1 中创建一个 ha etcd 集群,名称为 k8s_2,运行在3个虚拟机和6个辅助 Minion(在虚拟机上) + +```shell +bash kube-up.sh --clc_cluster_name=k8s_2 --minion_type=standard --minion_count=6 --datacenter=VA1 --etcd_separate_cluster=yes +``` + +<!-- +Create a cluster with name of k8s_3, 1 master node, and 10 worker minions (on VMs) with higher mem/cpu, in UC1: +---> +在 UC1 中创建一个集群,名称为k8s_3,具备1个主 Node 和10个具有更高 mem/cpu 的辅助 Minion(在虚拟机上): + +```shell +bash kube-up.sh --clc_cluster_name=k8s_3 --minion_type=standard --minion_count=10 --datacenter=VA1 -mem=6 -cpu=4 +``` + + + +<!-- +## Cluster Features and Architecture + +We configure the Kubernetes cluster with the following features: + +* KubeDNS: DNS resolution and service discovery +* Heapster/InfluxDB: For metric collection. Needed for Grafana and auto-scaling. +* Grafana: Kubernetes/Docker metric dashboard +* KubeUI: Simple web interface to view Kubernetes state +* Kube Dashboard: New web interface to interact with your cluster +---> +## 集群功能和架构 + +我们使用以下功能配置 Kubernetes 集群: + +* KubeDNS:DNS 解析和服务发现 +* Heapster / InfluxDB:用于指标收集,是 Grafana 和 auto-scaling 需要的。 +* Grafana:Kubernetes/Docker 指标仪表板 +* KubeUI:用于查看 Kubernetes 状态的简单 Web 界面 +* Kube 仪表板:新的 Web 界面可与您的集群进行交互 + +<!-- +We use the following to create the Kubernetes cluster: +---> +使用以下工具创建 Kubernetes 集群: + +* Kubernetes 1.1.7 +* Ubuntu 14.04 +* Flannel 0.5.4 +* Docker 1.9.1-0~trusty +* Etcd 2.2.2 + +<!-- +## Optional add-ons + +* Logging: We offer an integrated centralized logging ELK platform so that all + Kubernetes and docker logs get sent to the ELK stack. To install the ELK stack + and configure Kubernetes to send logs to it, follow [the log + aggregation documentation](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md). Note: We don't install this by default as + the footprint isn't trivial. +---> +## 可选附件 +* 日志记录:我们提供了一个集成的集中式日志记录 ELK 平台,以便将所有 Kubernetes 和 docker 日志发送到 ELK 堆栈。要安装 ELK 堆栈并配置 Kubernetes 向其发送日志,请遵循 [日志聚合文档](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/log_aggregration.md)。注意:默认情况下我们不安装此程序,因为占用空间并不小。 + +<!-- +## Cluster management + +The most widely used tool for managing a Kubernetes cluster is the command-line +utility ```kubectl```. If you do not already have a copy of this binary on your +administrative machine, you may run the script ```install_kubectl.sh``` which will +download it and install it in ```/usr/bin/local```. +---> +## 管理集群 + +管理 Kubernetes 集群最常用工具是 command-line 实用程序 ```kubectl```。如果您的管理器上还没有此二进制文件的副本,可以运行脚本 ```install_kubectl.sh```,它将下载该脚本并将其安装在 ```/usr/bin/local``` 中。 + +<!-- +The script requires that the environment variable ```CLC_CLUSTER_NAME``` be defined. ```install_kubectl.sh``` also writes a configuration file which will embed the necessary +authentication certificates for the particular cluster. 
The configuration file is +written to the ```${CLC_CLUSTER_HOME}/kube``` directory +---> +该脚本要求定义环境变量 ```CLC_CLUSTER_NAME```。```install_kubectl.sh``` 还将写入一个配置文件,该文件为特定集群嵌入必要的认证证书。配置文件被写入 ```${CLC_CLUSTER_HOME}/kube``` 目录中 + + +```shell +export KUBECONFIG=${CLC_CLUSTER_HOME}/kube/config +kubectl version +kubectl cluster-info +``` + +<!-- +### Accessing the cluster programmatically + +It's possible to use the locally stored client certificates to access the apiserver. For example, you may want to use any of the [Kubernetes API client libraries](/docs/reference/using-api/client-libraries/) to program against your Kubernetes cluster in the programming language of your choice. + +To demonstrate how to use these locally stored certificates, we provide the following example of using ```curl``` to communicate to the master apiserver via https: +---> +### 以编程方式访问集群 + +可以使用本地存储的客户端证书来访问 apiserver。例如,您可以使用自己所选编程语言的 [Kubernetes API 客户端库](/docs/reference/using-api/client-libraries/) 对 Kubernetes 集群进行编程。 + +为了演示如何使用这些本地存储的证书,我们提供以下示例,使用 ```curl``` 通过 https 与主 apiserver 进行通信: + +```shell +curl \ + --cacert ${CLC_CLUSTER_HOME}/pki/ca.crt \ + --key ${CLC_CLUSTER_HOME}/pki/kubecfg.key \ + --cert ${CLC_CLUSTER_HOME}/pki/kubecfg.crt https://${MASTER_IP}:6443 +``` + +<!-- +But please note, this *does not* work out of the box with the ```curl``` binary +distributed with macOS. +---> +但是请注意,上述用法对于 macOS 随系统发行的 ```curl``` 二进制文件而言*无法*开箱即用。 + +<!-- +### Accessing the cluster with a browser + +We install [the kubernetes dashboard](/docs/tasks/web-ui-dashboard/). When you +create a cluster, the script should output URLs for these interfaces like this: + +kubernetes-dashboard is running at ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy```. +---> +### 使用浏览器访问集群 + +我们安装了 [Kubernetes 仪表板](/docs/tasks/web-ui-dashboard/)。创建集群时,脚本会为这些接口输出 URL,如下所示: + +kubernetes-dashboard 在以下位置运行 ```https://${MASTER_IP}:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy``` + +<!-- +Note on Authentication to the UIs: + +The cluster is set up to use basic authentication for the user _admin_. +Hitting the url at ```https://${MASTER_IP}:6443``` will +require accepting the self-signed certificate +from the apiserver, and then presenting the admin +password written to file at: ```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_``` +---> +对 UI 进行身份验证的注意事项: + +集群被设置为对用户 _admin_ 使用基本身份验证。访问 URL ```https://${MASTER_IP}:6443``` 时,需要先接受 apiserver 返回的自签名证书,然后提供写在文件 ```> _${CLC_CLUSTER_HOME}/kube/admin_password.txt_``` 中的管理员密码。 + + +<!-- +### Configuration files + +Various configuration files are written into the home directory *CLC_CLUSTER_HOME* under ```.clc_kube/${CLC_CLUSTER_NAME}``` in several subdirectories. You can use these files +to access the cluster from machines other than where you created the cluster from. 
+---> +### 配置文件 + +多个配置文件被写入几个子目录,子目录在 ```.clc_kube/${CLC_CLUSTER_NAME}``` 下的主目录 *CLC_CLUSTER_HOME* 中。使用这些文件可以从创建群集的计算机之外的其他计算机访问群集。 + +<!-- +* ```config/```: Ansible variable files containing parameters describing the master and minion hosts +* ```hosts/```: hosts files listing access information for the Ansible playbooks +* ```kube/```: ```kubectl``` configuration files, and the basic-authentication password for admin access to the Kubernetes API +* ```pki/```: public key infrastructure files enabling TLS communication in the cluster +* ```ssh/```: SSH keys for root access to the hosts +---> +* ```config/```:Ansible 变量文件,包含描述主机和从机的参数 +* ```hosts/```: 主机文件,列出了 Ansible 手册的访问信息 +* ```kube/```: ```kubectl``` 配置文件,包含管理员访问 Kubernetes API 所需的基本身份验证密码 +* ```pki/```: 公钥基础结构文件,用于在集群中启用 TLS 通信 +* ```ssh/```: 对主机进行根访问的 SSH 密钥 + + +<!-- +## ```kubectl``` usage examples + +There are a great many features of _kubectl_. Here are a few examples + +List existing nodes, pods, services and more, in all namespaces, or in just one: +---> +## ```kubectl``` 使用示例 + +_kubectl_ 有很多功能,例如 + +列出所有或者一个命名空间中存在的 Node,Pod,服务等。 + +```shell +kubectl get nodes +kubectl get --all-namespaces pods +kubectl get --all-namespaces services +kubectl get --namespace=kube-system replicationcontrollers +``` + +<!-- +The Kubernetes API server exposes services on web URLs, which are protected by requiring +client certificates. If you run a kubectl proxy locally, ```kubectl``` will provide +the necessary certificates and serve locally over http. +---> +Kubernetes API 服务器在 Web URL 上公开服务,这些 URL 受客户端证书的保护。如果您在本地运行 kubectl 代理,```kubectl``` 将提供必要的证书,并通过 http 在本地提供服务。 + +```shell +kubectl proxy -p 8001 +``` + +<!-- +Then, you can access urls like ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` without the need for client certificates in your browser. +---> +然后,您可以访问 ```http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/``` 之类的 URL,不再需要浏览器中的客户端证书 。 + + +<!-- +## What Kubernetes features do not work on CenturyLink Cloud + +These are the known items that don't work on CenturyLink cloud but do work on other cloud providers: + +- At this time, there is no support services of the type [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/). We are actively working on this and hope to publish the changes sometime around April 2016. + +- At this time, there is no support for persistent storage volumes provided by + CenturyLink Cloud. However, customers can bring their own persistent storage + offering. We ourselves use Gluster. +---> +## Kubernetes 的哪些功能无法在 CenturyLink Cloud 上使用 + +这些是已知的在 CenturyLink Cloud 上不能使用,但在其他云提供商中可以使用: + +- 目前,没有 [LoadBalancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)类型的支持服务。我们正在为此积极努力,并希望在2016年4月左右发布更改。 + +- 目前,不支持 CenturyLink Cloud 提供的永久存储卷。但是,客户可以自带永久性存储产品。我们自己使用 Gluster。 + + +<!-- +## Ansible Files + +If you want more information about our Ansible files, please [read this file](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md) +---> +## Ansible 文件 + +如果您想了解有关 Ansible 文件的更多信息,请 [浏览此文件](https://github.com/CenturyLinkCloud/adm-kubernetes-on-clc/blob/master/ansible/README.md) + +<!-- +## Further reading + +Please see the [Kubernetes docs](/docs/) for more details on administering +and using a Kubernetes cluster. 
+---> +## 更多 + +有关管理和使用 Kubernetes 集群的更多详细信息,请参见 [Kubernetes 文档](/docs/) + + + diff --git a/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md b/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md new file mode 100644 index 0000000000..dab03002d3 --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/ingress-minikube.md @@ -0,0 +1,421 @@ +--- +title: 在 Minikube 环境中使用 NGINX Ingress 控制器配置 Ingress +content_type: task +weight: 100 +--- +<!-- +title: Set up Ingress on Minikube with the NGINX Ingress Controller +content_type: task +weight: 100 +--> + +<!-- overview --> + +<!-- +An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access
+to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. + +This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI. +--> +[Ingress](/zh/docs/concepts/services-networking/ingress/)是一种 API 对象,其中定义了一些规则使得集群中的 +服务可以从集群外访问。 +[Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/) +负责满足 Ingress 中所设置的规则。 + +本节为你展示如何配置一个简单的 Ingress,根据 HTTP URI 将服务请求路由到 +服务 `web` 或 `web2`。 + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +<!-- steps --> + +<!-- +## Create a Minikube cluster + +1. Click **Launch Terminal** +--> +## 创建一个 Minikube 集群 + +1. 点击 **Launch Terminal** + + {{< kat-button >}} + +<!-- +1. (Optional) If you installed Minikube locally, run the following command: +--> +2. (可选操作)如果你在本地安装了 Minikube,运行下面的命令: + + ```shell + minikube start + ``` + +<!-- +## Enable the Ingress controller + +1. To enable the NGINX Ingress controller, run the following command: +--> +## 启用 Ingress 控制器 + +1. 为了启用 NGINX Ingress 控制器,可以运行下面的命令: + + + ```shell + minikube addons enable ingress + ``` + +<!-- +1. Verify that the NGINX Ingress controller is running +--> +2. 检查验证 NGINX Ingress 控制器处于运行状态: + + ```shell + kubectl get pods -n kube-system + ``` + + <!-- This can take up to a minute. --> + {{< note >}}这一操作可能需要近一分钟时间。{{< /note >}} + + 输出: + + ```shell + NAME READY STATUS RESTARTS AGE + default-http-backend-59868b7dd6-xb8tq 1/1 Running 0 1m + kube-addon-manager-minikube 1/1 Running 0 3m + kube-dns-6dcb57bcc8-n4xd4 3/3 Running 0 2m + kubernetes-dashboard-5498ccf677-b8p5h 1/1 Running 0 2m + nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m + storage-provisioner 1/1 Running 0 2m + ``` + +<!-- +## Deploy a hello, world app + +1. Create a Deployment using the following command: +--> +## 部署一个 Hello World 应用 + +1. 使用下面的命令创建一个 Deployment: + + ```shell + kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 + ``` + + <!--Output:--> + 输出: + + ``` + deployment.apps/web created + ``` + +<!-- +1. Expose the Deployment: +--> +2. 将 Deployment 暴露出来: + + ```shell + kubectl expose deployment web --type=NodePort --port=8080 + ``` + + <!-- Output: --> + 输出: + + ``` + service/web exposed + ``` + +<!-- +1. Verify the Service is created and is available on a node port: +--> +3. 验证 Service 已经创建,并且可以从节点端口访问: + + ```shell + kubectl get service web + ``` + + <!-- Output: --> + 输出: + + ```shell + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + web NodePort 10.104.133.249 <none> 8080:31637/TCP 12m + ``` + +<!-- +1. Visit the service via NodePort: +--> +4. 
使用节点端口信息访问服务: + + ```shell + minikube service web --url + ``` + + <!-- Output: --> + 输出: + + ```shell + http://172.17.0.15:31637 + ``` + + <!-- + Katacoda environment only: at the top of the terminal panel, click the plus sign, and then click **Select port to view on Host 1**. Enter the NodePort, in this case `31637`, and then click **Display Port**. + --> + {{< note >}} + 如果使用的是 Katacoda 环境,在终端面板顶端,请点击加号标志。 + 然后点击 **Select port to view on Host 1**。 + 输入节点和端口号(这里是`31637`),之后点击 **Display Port**。 + {{< /note >}} + + <!-- Output: --> + 输出: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + <!-- + You can now access the sample app via the Minikube IP address and NodePort. The next step lets you access + the app using the Ingress resource. + --> + 你现在应该可以通过 Minikube 的 IP 地址和节点端口来访问示例应用了。 + 下一步是让自己能够通过 Ingress 资源来访问应用。 + +<!-- +## Create an Ingress resource + +The following file is an Ingress resource that sends traffic to your Service via hello-world.info. + +1. Create `example-ingress.yaml` from the following file: +--> +## 创建一个 Ingress 资源 + +下面是一个 Ingress 资源的配置文件,负责通过 `hello-world.info` 将服务请求 +转发到你的服务。 + +1. 根据下面的 YAML 创建文件 `example-ingress.yaml`: + + ```yaml + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + metadata: + name: example-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: /$1 + spec: + rules: + - host: hello-world.info + http: + paths: + - path: / + backend: + serviceName: web + servicePort: 8080 + ``` + +<!-- +1. Create the Ingress resource by running the following command: +--> +2. 通过运行下面的命令创建 Ingress 资源: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + <!-- Output: --> + 输出: + + ```shell + ingress.networking.k8s.io/example-ingress created + ``` +<!-- +1. Verify the IP address is set: +--> +3. 验证 IP 地址已被设置: + + ```shell + kubectl get ingress + ``` + + <!-- This can take a couple of minutes. --> + {{< note >}}此操作可能需要几分钟时间。{{< /note >}} + + ```shell + NAME HOSTS ADDRESS PORTS AGE + example-ingress hello-world.info 172.17.0.15 80 38s + ``` + +<!-- +1. Add the following line to the bottom of the `/etc/hosts` file. +--> +4. 在 `/etc/hosts` 文件的末尾添加以下内容: + + <!-- + If you are running Minikube locally, use `minikube ip` to get the external IP. The IP address displayed within the ingress list will be the internal IP. + --> + {{< note >}} + 如果你在本地运行 Minikube 环境,需要使用 `minikube ip` 获得外部 IP 地址。 + Ingress 列表中显示的 IP 地址会是内部 IP 地址。 + {{< /note >}} + ``` + 172.17.0.15 hello-world.info + ``` + + <!-- This sends requests from hello-world.info to Minikube. --> + 此设置使得来自 `hello-world.info` 的请求被发送到 Minikube。 + +<!-- +1. Verify that the Ingress controller is directing traffic: +--> +5. 验证 Ingress 控制器能够转发请求流量: + + ```shell + curl hello-world.info + ``` + + <!-- Output: --> + 输出: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + <!-- + If you are running Minikube locally, you can visit hello-world.info from your browser. + --> + {{< note >}} + 如果你在使用本地 Minikube 环境,你可以从浏览器中访问 hellow-world.info。 + {{< /note >}} + +<!-- +## Create Second Deployment + +1. Create a v2 Deployment using the following command: +--> +## 创建第二个 Deployment + +1. 使用下面的命令创建 v2 的 Deployment: + + ```shell + kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0 + ``` + <!-- Output: --> + 输出: + + ```shell + deployment.apps/web2 created + ``` + +<!-- +1. Expose the Deployment: +--> +2. 
将 Deployment 暴露出来: + + ```shell + kubectl expose deployment web2 --port=8080 --type=NodePort + ``` + + <!-- Output: --> + 输出: + + ```shell + service/web2 exposed + ``` + +<!-- +## Edit Ingress + +1. Edit the existing `example-ingress.yaml` and add the following lines: +--> +## 编辑 Ingress + +1. 编辑现有的 `example-ingress.yaml`,添加以下行: + + + ```yaml + - path: /v2 + backend: + serviceName: web2 + servicePort: 8080 + ``` + +<!-- +1. Apply the changes: +--> +2. 应用所作变更: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + <!-- Output: --> + 输出: + + ```shell + ingress.networking/example-ingress configured + ``` + +<!-- +## Test Your Ingress + +1. Access the 1st version of the Hello World app. +--> +## 测试你的 Ingress + +1. 访问 HelloWorld 应用的第一个版本: + + ```shell + curl hello-world.info + ``` + + <!-- Output: --> + 输出: + + ``` + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + +<!-- +1. Access the 2nd version of the Hello World app. +--> +2. 访问 HelloWorld 应用的第二个版本: + + ```shell + curl hello-world.info/v2 + ``` + + <!-- Output: --> + 输出: + + ``` + Hello, world! + Version: 2.0.0 + Hostname: web2-75cd47646f-t8cjk + ``` + + <!-- + If you are running Minikube locally, you can visit hello-world.info and hello-world.info/v2 from your browser + --> + {{< note >}} + 如果你在本地运行 Minikube 环境,你可以使用浏览器来访问 + hellow-world.info 和 hello-world.info/v2。 + {{< /note >}} + +## {{% heading "whatsnext" %}} + +<!-- +* Read more about [Ingress](/docs/concepts/services-networking/ingress/) +* Read more about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) +* Read more about [Services](/docs/concepts/services-networking/service/) +--> + +* 进一步了解 [Ingress](/zh/docs/concepts/services-networking/ingress/)。 +* 进一步了解 [Ingress 控制器](/zh/docs/concepts/services-networking/ingress-controllers/) +* 进一步了解[服务](/zh/docs/concepts/services-networking/service/) + diff --git a/content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index d36686c38a..a2b6d89e0c 100644 --- a/content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -22,7 +22,7 @@ content_type: task ## 为什么要更改 PersistentVolume 的回收策略 -`PersistentVolumes` 可以有多种回收策略,包括 "Retain"、"Recycle" 和 "Delete"。对于动态配置的 `PersistentVolumes` 来说,默认回收策略为 "Delete"。这表示当用户删除对应的 `PersistentVolumeClaim` 时,动态配置的 volume 将被自动删除。如果 volume 包含重要数据时,这种自动行为可能是不合适的。那种情况下,更适合使用 "Retain" 策略。使用 "Retain" 时,如果用户删除 `PersistentVolumeClaim`,对应的 `PersistentVolume` 不会被删除。相反,它将变为 `Released` 状态,表示所有的数据可以被手动恢复。 +PersistentVolumes 可以有多种回收策略,包括 "Retain"、"Recycle" 和 "Delete"。对于动态配置的 PersistentVolumes 来说,默认回收策略为 "Delete"。这表示当用户删除对应的 PersistentVolumeClaim 时,动态配置的 volume 将被自动删除。如果 volume 包含重要数据时,这种自动行为可能是不合适的。那种情况下,更适合使用 "Retain" 策略。使用 "Retain" 时,如果用户删除 PersistentVolumeClaim,对应的 PersistentVolume 不会被删除。相反,它将变为 Released 状态,表示所有的数据可以被手动恢复。 ## 更改 PersistentVolume 的回收策略 diff --git a/content/zh/docs/tasks/administer-cluster/declare-network-policy.md b/content/zh/docs/tasks/administer-cluster/declare-network-policy.md index 11ddbff307..f54c581d5d 100644 --- a/content/zh/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/zh/docs/tasks/administer-cluster/declare-network-policy.md @@ -1,53 +1,93 @@ --- -approvers: -- caseydavenport -- danwinship title: 声明网络策略 content_type: task --- +<!-- +reviewers: +- caseydavenport +- danwinship +title: Declare Network Policy +min-kubernetes-server-version: v1.8 
+content_type: task +--> <!-- overview --> - -本文可以帮助您开始使用 Kubernetes 的 [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) 声明网络策略去管理 Pod 之间的通信 - - +<!-- +This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other. +--> +本文可以帮助您开始使用 Kubernetes 的 [NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/) 声明网络策略去管理 Pod 之间的通信 ## {{% heading "prerequisites" %}} +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +<!-- +Make sure you've configured a network provider with network policy support. There are a number of network providers that support NetworkPolicy, including: +* [Calico](/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/) +* [Cilium](/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/) +* [Kube-router](/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/) +* [Romana](/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/) +* [Weave Net](/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/) +--> 您首先需要有一个支持网络策略的 Kubernetes 集群。已经有许多支持 NetworkPolicy 的网络提供商,包括: -* [Calico](/docs/tasks/configure-pod-container/calico-network-policy/) -* [Romana](/docs/tasks/configure-pod-container/romana-network-policy/) -* [Weave 网络](/docs/tasks/configure-pod-container/weave-network-policy/) - - -**注意**:以上列表是根据产品名称按字母顺序排序,而不是按推荐或偏好排序。下面示例对于使用了上面任何提供商的 Kubernetes 集群都是有效的 - +* [Calico](/zh/docs/tasks/configure-pod-container/calico-network-policy/) +* [Cilium](/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/) +* [Kube-router](/zh/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy/) +* [Romana](/zh/docs/tasks/configure-pod-container/romana-network-policy/) +* [Weave 网络](/zh/docs/tasks/configure-pod-container/weave-network-policy/) +<!-- +The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers. +--> +{{< note >}} +以上列表是根据产品名称按字母顺序排序,而不是按推荐或偏好排序。 +下面示例对于使用了上面任何提供商的 Kubernetes 集群都是有效的 +{{< /note >}} <!-- steps --> +<!-- +## Create an `nginx` deployment and expose it via a service -## 创建一个`nginx` deployment 并且通过服务将其暴露 - +To see how Kubernetes network policy works, start off by creating an `nginx` Deployment. +--> +## 创建一个`nginx` Deployment 并且通过服务将其暴露 为了查看 Kubernetes 网络策略是怎样工作的,可以从创建一个`nginx` deployment 并且通过服务将其暴露开始 ```console -$ kubectl create deployment nginx --image=nginx +kubectl create deployment nginx --image=nginx +``` +```none deployment "nginx" created -$ kubectl expose deployment nginx --port=80 +``` + +<!-- +Expose the Deployment through a Service called `nginx`. +--> +将此 Deployment 以名为 `nginx` 的 Service 暴露出来: + +```console +kubectl expose deployment nginx --port=80 +``` +```none service "nginx" exposed ``` - -在 default 命名空间下运行了两个 `nginx` pod,而且通过一个名字为 `nginx` 的服务进行了暴露 +<!-- +The above commands create a Deployment with an nginx Pod and expose the Deployment through a Service named `nginx`. The `nginx` Pod and Deployment are found in the `default` namespace. 
+--> +上述命令创建了一个带有一个 nginx 的 Deployment,并将之通过名为 `nginx` 的 +Service 暴露出来。名为 `nginx` 的 Pod 和 Deployment 都位于 `default` +名字空间内。 ```console -$ kubectl get svc,pod +kubectl get svc,pod +``` +```none NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/kubernetes 10.100.0.1 <none> 443/TCP 46m svc/nginx 10.100.0.16 <none> 80/TCP 33s @@ -56,93 +96,128 @@ NAME READY STATUS RESTARTS AGE po/nginx-701339712-e0qfq 1/1 Running 0 35s ``` +<!-- +## Test the service by accessing it from another Pod -## 测试服务能够被其它的 pod 访问 +You should be able to access the new `nginx` service from other Pods. To access the `nginx` Service from another Pod in the `default` namespace, start a busybox container: +--> +## 通过从 Pod 访问服务对其进行测试 - -您应该可以从其它的 pod 访问这个新的 `nginx` 服务。为了验证它,从 default 命名空间下的其它 pod 来访问该服务。请您确保在该命名空间下没有执行孤立动作。 - - -启动一个 busybox 容器,然后在容器中使用 `wget` 命令去访问 `nginx` 服务: +您应该可以从其它的 Pod 访问这个新的 `nginx` 服务。 +要从 default 命名空间中的其它s Pod 来访问该服务。可以启动一个 busybox 容器: ```console -$ kubectl run busybox --rm -ti --image=busybox /bin/sh -Waiting for pod default/busybox-472357175-y0m47 to be running, status is Pending, pod ready: false +kubectl run busybox --rm -ti --image=busybox /bin/sh +``` -Hit enter for command prompt +<!-- +In your shell, run the following command: +--> +在你的 Shell 中,运行下面的命令: -/ # wget --spider --timeout=1 nginx +```shell +wget --spider --timeout=1 nginx +``` +```none Connecting to nginx (10.100.0.16:80) -/ # +remote file exists ``` +<!-- +## Limit access to the `nginx` service -## 限制访问 `nginx` 服务 +To limit the access to the `nginx` service so that only Pods with the label `access: true` can query it, create a NetworkPolicy object as follows: +--> +## 限制 `nginx` 服务的访问 +如果想限制对 `nginx` 服务的访问,只让那些拥有标签 `access: true` 的 Pod 访问它, +那么可以创建一个如下所示的 NetworkPolicy 对象: -如果说您想限制 `nginx` 服务,只让那些拥有标签 `access: true` 的 pod 访问它,那么您可以创建一个只允许从那些 pod 连接的 `NetworkPolicy`: +{{< codenew file="service/networking/nginx-policy.yaml" >}} -```yaml -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: access-nginx -spec: - podSelector: - matchLabels: - app: nginx - ingress: - - from: - - podSelector: - matchLabels: - access: "true" -``` +<!-- +The name of a NetworkPolicy object must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +--> +NetworkPolicy 对象的名称必须是一个合法的 +[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +<!-- +NetworkPolicy includes a `podSelector` which selects the grouping of Pods to which the policy applies. You can see this policy selects Pods with the label `app=nginx`. The label was automatically added to the Pod in the `nginx` Deployment. An empty `podSelector` selects all pods in the namespace. 
+--> +{{< note >}} +NetworkPolicy 中包含选择策略所适用的 Pods 集合的 `podSelector`。 +你可以看到上面的策略选择的是带有标签 `app=nginx` 的 Pods。 +此标签是被自动添加到 `nginx` Deployment 中的 Pod 上的。 +如果 `podSelector` 为空,则意味着选择的是名字空间中的所有 Pods。 +{{< /note >}} +<!-- +## Assign the policy to the service + +Use kubectl to create a NetworkPolicy from the above `nginx-policy.yaml` file: +--> ## 为服务指定策略 - -使用 kubectl 工具根据上面的 nginx-policy.yaml 文件创建一个 NetworkPolicy: +使用 kubectl 根据上面的 `nginx-policy.yaml` 文件创建一个 NetworkPolicy: ```console -$ kubectl create -f nginx-policy.yaml -networkpolicy "access-nginx" created +kubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml +``` +```none +networkpolicy.networking.k8s.io/access-nginx created ``` +<!-- +## Test access to the service when access label is not defined -## 当访问标签没有定义时测试访问服务 +When you attempt to access the `nginx` Service from a Pod without the correct labels, the request times out: +--> +## 测试没有定义访问标签时访问服务 - -如果您尝试从没有设定正确标签的 pod 中去访问 `nginx` 服务,请求将会超时: +如果你尝试从没有设定正确标签的 Pod 中去访问 `nginx` 服务,请求将会超时: ```console -$ kubectl run busybox --rm -ti --image=busybox /bin/sh -Waiting for pod default/busybox-472357175-y0m47 to be running, status is Pending, pod ready: false +kubectl run busybox --rm -ti --image=busybox -- /bin/sh +``` -Hit enter for command prompt +<!-- +In your shell, run the command: +--> +在 Shell 中运行命令: -/ # wget --spider --timeout=1 nginx +```shell +wget --spider --timeout=1 nginx +``` + +```none Connecting to nginx (10.100.0.16:80) wget: download timed out -/ # ``` +<!-- +## Define access label and test again +You can create a Pod with the correct labels to see that the request is allowed: +--> ## 定义访问标签后再次测试 - -创建一个拥有正确标签的 pod,您将看到请求是被允许的: +创建一个拥有正确标签的 Pod,你将看到请求是被允许的: ```console -$ kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh -Waiting for pod default/busybox-472357175-y0m47 to be running, status is Pending, pod ready: false +kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh +``` +<!-- +In your shell, run the command: +--> +在 Shell 中运行命令: -Hit enter for command prompt - -/ # wget --spider --timeout=1 nginx -Connecting to nginx (10.100.0.16:80) -/ # +```shell +wget --spider --timeout=1 nginx ``` - +```none +Connecting to nginx (10.100.0.16:80) +remote file exists +``` diff --git a/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index ad176a8e5f..b005e3e343 100644 --- a/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -28,10 +28,12 @@ please refer to following pages instead: 要查看 kubeadm 创建的有关旧版本集群升级的信息,请参考以下页面: <!-- +- [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/) - [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/) --> +- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [将 kubeadm 集群从 
1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/) - [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/) @@ -43,17 +45,14 @@ The upgrade workflow at high level is the following: 1. Upgrade additional control plane nodes. 1. Upgrade worker nodes. --> -高版本升级工作流如下: +升级工作的基本流程如下: 1. 升级主控制平面节点。 1. 升级其他控制平面节点。 1. 升级工作节点。 - - ## {{% heading "prerequisites" %}} - <!-- - You need to have a kubeadm Kubernetes cluster running version 1.16.0 or later. - [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). @@ -84,8 +83,6 @@ The upgrade workflow at high level is the following: - 您只能从一个次版本升级到下一个次版本,或者同样次版本的补丁版。也就是说,升级时无法跳过版本。 例如,您只能从 1.y 升级到 1.y+1,而不能从 from 1.y 升级到 1.y+2。 - - <!-- steps --> <!-- @@ -94,335 +91,362 @@ The upgrade workflow at high level is the following: ## 确定要升级到哪个版本 <!-- -1. Find the latest stable 1.17 version: +Find the latest stable 1.18 version: - {{< tabs name="k8s_install_versions" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} +{{< tabs name="k8s_install_versions" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache policy kubeadm - # find the latest 1.17 version in the list - # it should look like 1.17.x-00, where x is the latest patch - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} + # find the latest 1.18 version in the list + # it should look like 1.18.x-00, where x is the latest patch +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # find the latest 1.17 version in the list - # it should look like 1.17.x-0, where x is the latest patch - {{% /tab %}} - {{< /tabs >}} + # find the latest 1.18 version in the list + # it should look like 1.18.x-0, where x is the latest patch +{{% /tab %}} +{{< /tabs >}} --> -1. 找到最新的稳定版 1.17: +找到最新的稳定版 1.18: - {{< tabs name="k8s_install_versions" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} +{{< tabs name="k8s_install_versions" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache policy kubeadm - # 在列表中查找最新的 1.17 版本 - # 它看起来应该是 1.17.x-00 ,其中 x 是最新的补丁 - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} + # 在列表中查找最新的 1.18 版本 + # 它看起来应该是 1.18.x-00 ,其中 x 是最新的补丁 +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # 在列表中查找最新的 1.17 版本 - # 它看起来应该是 1.17.x-0 ,其中 x 是最新的补丁 - {{% /tab %}} - {{< /tabs >}} + # 在列表中查找最新的 1.18 版本 + # 它看起来应该是 1.18.x-0 ,其中 x 是最新的补丁版本 +{{% /tab %}} +{{< /tabs >}} <!-- -## Upgrade the first control plane node +## Upgrade the control plane node + +### Upgrade the first control plane node --> -## 升级第一个控制平面节点 +## 升级控制平面节点 + +### 升级第一个控制面节点 <!-- -1. 
On your first control plane node, upgrade kubeadm: +- On your first control plane node, upgrade kubeadm: - {{< tabs name="k8s_install_kubeadm_first_cp" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version +{{< tabs name="k8s_install_kubeadm_first_cp" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubeadm-1.18.x-0 -disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} --> -1. 在第一个控制平面节点上,升级 kubeadm : +- 在第一个控制平面节点上,升级 kubeadm : - {{< tabs name="k8s_install_kubeadm_first_cp" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x +{{< tabs name="k8s_install_kubeadm_first_cp" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} + # 用最新的修补程序版本替换 1.18.x-00 中的 x apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # 用最新的修补程序版本替换 1.17.x-0 中的 x - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} + # 用最新的修补程序版本替换 1.18.x-0 中的 x + yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} <!-- -1. Verify that the download works and has the expected version: +- Verify that the download works and has the expected version: - ```shell - kubeadm version - ``` + ```shell + kubeadm version + ``` --> -1. 验证 kubeadm 版本: +- 验证下载操作正常,并且 kubeadm 版本正确: - ```shell - kubeadm version - ``` + ```shell + kubeadm version + ``` <!-- -1. Drain the control plane node: +- Drain the control plane node: + ```shell + # replace <cp-node-name> with the name of your control plane node + kubectl drain $CP_NODE -ignore-daemonsets + ``` --> -1. 腾空控制平面节点: +- 腾空控制平面节点: - ```shell - kubectl drain $CP_NODE --ignore-daemonsets - ``` + ```shell + # 将 <cp-node-name> 替换为你自己的控制面节点名称 + kubectl drain <cp-node-name> --ignore-daemonsets + ``` <!-- -1. On the control plane node, run: +- On the control plane node, run: --> -1. 在主节点上,运行: +- 在控制面节点上,运行: - ```shell - sudo kubeadm upgrade plan - ``` + ```shell + sudo kubeadm upgrade plan + ``` - <!-- - You should see output similar to this: - --> - 您应该可以看到与下面类似的输出: + <!-- + You should see output similar to this: + --> + 您应该可以看到与下面类似的输出: - ```shell - [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.16.0 - [upgrade/versions] kubeadm version: v1.17.0 + ```none + [upgrade/config] Making sure the configuration is correct: + [upgrade/config] Reading configuration from the cluster... 
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' + [preflight] Running pre-flight checks. + [upgrade] Running cluster health checks + [upgrade] Fetching available versions to upgrade to + [upgrade/versions] Cluster version: v1.17.3 + [upgrade/versions] kubeadm version: v1.18.0 + [upgrade/versions] Latest stable version: v1.18.0 + [upgrade/versions] Latest version in the v1.17 series: v1.18.0 - Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': - COMPONENT CURRENT AVAILABLE - Kubelet 1 x v1.16.0 v1.17.0 + Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': + COMPONENT CURRENT AVAILABLE + Kubelet 1 x v1.17.3 v1.18.0 - Upgrade to the latest version in the v1.13 series: + Upgrade to the latest version in the v1.17 series: - COMPONENT CURRENT AVAILABLE - API Server v1.16.0 v1.17.0 - Controller Manager v1.16.0 v1.17.0 - Scheduler v1.16.0 v1.17.0 - Kube Proxy v1.16.0 v1.17.0 - CoreDNS 1.6.2 1.6.5 - Etcd 3.3.15 3.4.3-0 + COMPONENT CURRENT AVAILABLE + API Server v1.17.3 v1.18.0 + Controller Manager v1.17.3 v1.18.0 + Scheduler v1.17.3 v1.18.0 + Kube Proxy v1.17.3 v1.18.0 + CoreDNS 1.6.5 1.6.7 + Etcd 3.4.3 3.4.3-0 - You can now apply the upgrade by executing the following command: + You can now apply the upgrade by executing the following command: - kubeadm upgrade apply v1.17.0 + kubeadm upgrade apply v1.18.0 - _____________________________________________________________________ - ``` + _____________________________________________________________________ + ``` - <!-- - This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. - --> - 此命令检查您的集群是否可以升级,并可以获取到升级的版本。 + <!-- + This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. + --> + 此命令检查您的集群是否可以升级,并可以获取到升级的版本。 <!-- -1. Choose a version to upgrade to, and run the appropriate command. For example: +`kubeadm upgrade` also automatically renews the certificates that it manages on this node. +To opt-out of certificate renewal the flag `-certificate-renewal=false` can be used. +For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs). --> -1. 选择要升级到的版本,然后运行相应的命令。例如: - ```shell - sudo kubeadm upgrade apply v1.17.x - ``` - - <!-- - - Replace `x` with the patch version you picked for this ugprade. - --> - - 将 `x` 替换为您为此升级选择的修补程序版本。 - - <!-- - You should see output similar to this: - --> - 您应该可以看见与下面类似的输出: - - ```shell - [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [upgrade/version] You have chosen to change the cluster version to "v1.17.0" - [upgrade/versions] Cluster version: v1.16.0 - [upgrade/versions] kubeadm version: v1.17.0 - [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y - [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] - [upgrade/prepull] Prepulling image for component etcd. - [upgrade/prepull] Prepulling image for component kube-scheduler. - [upgrade/prepull] Prepulling image for component kube-apiserver. 
- [upgrade/prepull] Prepulling image for component kube-controller-manager. - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver - [upgrade/prepull] Prepulled image for component etcd. - [upgrade/prepull] Prepulled image for component kube-apiserver. - [upgrade/prepull] Prepulled image for component kube-scheduler. - [upgrade/prepull] Prepulled image for component kube-controller-manager. - [upgrade/prepull] Successfully prepulled the images for all the control plane components - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"... - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 - Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4 - [upgrade/etcd] Upgrading to TLS for etcd - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/etcd.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - Static pod: etcd-myhost hash: 64a28f011070816f4beb07a9c96d73b6 - [apiclient] Found 1 Pods for label selector component=etcd - [upgrade/staticpods] Component "etcd" upgraded successfully! - [upgrade/etcd] Waiting for etcd to become available - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests043818770" - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-apiserver.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: b8a6533e241a8c6dab84d32bb708b8a1 - [apiclient] Found 1 Pods for label selector component=kube-apiserver - [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
- [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-controller-manager.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 - Static pod: kube-controller-manager-myhost hash: 6f77d441d2488efd9fc2d9a9987ad30b - [apiclient] Found 1 Pods for label selector component=kube-controller-manager - [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-scheduler.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4 - Static pod: kube-scheduler-myhost hash: a24773c92bb69c3748fcce5e540b7574 - [apiclient] Found 1 Pods for label selector component=kube-scheduler - [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! - [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace - [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" - [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials - [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token - [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster - [addons] Applied essential addon: CoreDNS - [addons] Applied essential addon: kube-proxy - - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy! - - [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. - ``` +{{< note >}} +`kubeadm upgrade` 也会自动对它在此节点上管理的证书进行续约。 +如果选择不对证书进行续约,可以使用标志 `--certificate-renewal=false`。 +关于更多细节信息,可参见[证书管理指南](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)。 +{{</ note >}} <!-- -1. Manually upgrade your CNI provider plugin. ---> -1. 手动升级你的 CNI 供应商插件。 +- Choose a version to upgrade to, and run the appropriate command. For example: + + ```shell + # replace x with the patch version you picked for this upgrade + sudo kubeadm upgrade apply v1.18.x + ``` +--> +- 选择要升级到的版本,然后运行相应的命令。例如: + + ```shell + # 将 x 替换为你为此次升级所选的补丁版本号 + sudo kubeadm upgrade apply v1.18.x + ``` + + <!-- + You should see output similar to this: + --> + 您应该可以看见与下面类似的输出: + + ```none + [upgrade/config] Making sure the configuration is correct: + [upgrade/config] Reading configuration from the cluster... 
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' + [preflight] Running pre-flight checks. + [upgrade] Running cluster health checks + [upgrade/version] You have chosen to change the cluster version to "v1.18.0" + [upgrade/versions] Cluster version: v1.17.3 + [upgrade/versions] kubeadm version: v1.18.0 + [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y + [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] + [upgrade/prepull] Prepulling image for component etcd. + [upgrade/prepull] Prepulling image for component kube-apiserver. + [upgrade/prepull] Prepulling image for component kube-controller-manager. + [upgrade/prepull] Prepulling image for component kube-scheduler. + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [upgrade/prepull] Prepulled image for component etcd. + [upgrade/prepull] Prepulled image for component kube-apiserver. + [upgrade/prepull] Prepulled image for component kube-controller-manager. + [upgrade/prepull] Prepulled image for component kube-scheduler. + [upgrade/prepull] Successfully prepulled the images for all the control plane components + [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"... + Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 + Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 + Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 + [upgrade/etcd] Upgrading to TLS for etcd + [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue + [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012" + W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" + [upgrade/staticpods] Preparing for "kube-apiserver" upgrade + [upgrade/staticpods] Renewing apiserver certificate + [upgrade/staticpods] Renewing apiserver-kubelet-client certificate + [upgrade/staticpods] Renewing front-proxy-client certificate + [upgrade/staticpods] Renewing apiserver-etcd-client certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 + Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4 + [apiclient] Found 1 Pods for label selector component=kube-apiserver + [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
+ [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade + [upgrade/staticpods] Renewing controller-manager.conf certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 + Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156 + [apiclient] Found 1 Pods for label selector component=kube-controller-manager + [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! + [upgrade/staticpods] Preparing for "kube-scheduler" upgrade + [upgrade/staticpods] Renewing scheduler.conf certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) + Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 + Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d + [apiclient] Found 1 Pods for label selector component=kube-scheduler + [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! + [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster + [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace + [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" + [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials + [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token + [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster + [addons] Applied essential addon: CoreDNS + [addons] Applied essential addon: kube-proxy + + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy! + + [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. + ``` + +<!-- +- Manually upgrade your CNI provider plugin. - <!-- Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see whether additional upgrade steps are required. - --> - 您的容器网络接口(CNI)应该提供了程序自身的升级说明。 - 检查[插件](/docs/concepts/cluster-administration/addons/)页面查找您 CNI 所提供的程序,并查看是否需要其他升级步骤。 - <!-- This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet. 
- --> - 如果 CNI 提供程序作为 DaemonSet 运行,则在其他控制平面节点上不需要此步骤。 +--> +- 手动升级你的 CNI 驱动插件。 + + 您的容器网络接口(CNI)驱动应该提供了程序自身的升级说明。 + 检查[插件](/docs/concepts/cluster-administration/addons/)页面查找您 CNI 所提供的程序,并查看是否需要其他升级步骤。 + + 如果 CNI 提供程序作为 DaemonSet 运行,则在其他控制平面节点上不需要此步骤。 <!-- -1. Uncordon the control plane node ---> -1. 取消对控制面节点的保护 +- Uncordon the control plane node ```shell - kubectl uncordon $CP_NODE - ``` - -<!-- -1. Upgrade the kubelet and kubectl on the control plane node: ---> -1. 升级控制平面节点上的 kubelet 和 kubectl : - {{< tabs name="k8s_install_kubelet" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x - apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \ - apt-mark hold kubelet kubectl - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x - yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} - - -<!-- -1. Restart the kubelet ---> -1. 重启 kubelet - - ```shell - sudo systemctl restart kubelet + # replace <cp-node-name> with the name of your control plane node + kubectl uncordon <cp-node-name> ``` +--> +- 取消对控制面节点的保护 + + ```shell + # 将 <cp-node-name> 替换为你的控制面节点名称 + kubectl uncordon <cp-node-name> + ``` <!-- -## Upgrade additional control plane nodes ---> -## 升级其他控制平面节点 +### Upgrade additional control plane nodes -<!-- -1. Same as the first control plane node but use: +Same as the first control plane node but use: --> -1. 与第一个控制平面节点相同,但使用: +### 升级其他控制面节点 + +与第一个控制面节点类似,不过使用下面的命令: ``` -sudo kubeadm upgrade node experimental-control-plane +sudo kubeadm upgrade node ``` +<!-- instead of: --> 而不是: ``` sudo kubeadm upgrade apply ``` +<!-- Also `sudo kubeadm upgrade plan` is not needed. --> +同时,也不需要执行 `sudo kubeadm upgrade plan`。 + <!-- -Also `sudo kubeadm upgrade plan` is not needed. +### Upgrade kubelet and kubectl --> -也不需要 `sudo kubeadm upgrade plan` 。 +### 升级 kubelet 和 kubectl + +{{< tabs name="k8s_install_kubelet" >}} +{{% tab name="Ubuntu、Debian 或 HypriotOS" %}} + # 用最新的补丁版本替换 1.18.x-00 中的 x + apt-mark unhold kubelet kubectl && \ + apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ + apt-mark hold kubelet kubectl + - + # 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法: + apt-get update && \ + apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 +{{% /tab %}} +{{% tab name="CentOS、RHEL 或 Fedora" %}} + # 用最新的补丁版本替换 1.18.x-00 中的 x + yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} + +<!-- +Restart the kubelet +--> +重启 kubelet + +```shell +sudo systemctl daemon-reload +sudo systemctl restart kubelet +``` <!-- ## Upgrade worker nodes ---> -## 升级工作节点 -<!-- The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time, without compromising the minimum required capacity for running your workloads. --> +## 升级工作节点 + 工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,以不影响运行工作负载所需的最小容量。 <!-- @@ -431,35 +455,39 @@ without compromising the minimum required capacity for running your workloads. ### 升级 kubeadm <!-- -1. 
Upgrade kubeadm on all worker nodes: +- Upgrade kubeadm on all worker nodes: - {{< tabs name="k8s_install_kubeadm_worker_nodes" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version +{{< tabs name="k8s_install_kubeadm_worker_nodes" >}} +{{% tab name="Ubuntu, Debian or HypriotOS" %}} + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubeadm-1.18.x-0 -disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} --> -1. 在所有工作节点升级 kubeadm : +- 在所有工作节点升级 kubeadm: - {{< tabs name="k8s_install_kubeadm_worker_nodes" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x +{{< tabs name="k8s_install_kubeadm_worker_nodes" >}} +{{% tab name="Ubuntu、Debian 或 HypriotOS" %}} + # 将 1.18.x-00 中的 x 替换为最新的补丁版本 apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} + - + # 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法: + apt-get update && \ + apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00 +{{% /tab %}} +{{% tab name="CentOS、RHEL 或 Fedora" %}} + # 用最新的补丁版本替换 1.18.x-00 中的 x + yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} <!-- ### Cordon the node @@ -470,8 +498,8 @@ without compromising the minimum required capacity for running your workloads. 1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run: ```shell - kubectl drain $NODE --ignore-daemonsets - ``` + # replace <node-to-drain> with the name of your node you are draining + kubectl drain <node-to-drain> --ignore-daemonsets You should see output similar to this: @@ -481,22 +509,23 @@ without compromising the minimum required capacity for running your workloads. node/ip-172-31-85-18 drained ``` --> -1. 通过将节点标记为不可调度并逐出工作负载,为维护做好准备。运行: +- 通过将节点标记为不可调度并逐出工作负载,为维护做好准备。运行: - ```shell - kubectl drain $NODE --ignore-daemonsets - ``` + ```shell + # 将 <node-to-drain> 替换为你正在腾空的节点的名称 + kubectl drain <node-to-drain> --ignore-daemonsets + ``` - <!-- - You should see output similar to this: - --> - 您应该可以看见与下面类似的输出: + <!-- + You should see output similar to this: + --> + 你应该可以看见与下面类似的输出: - ```shell - node/ip-172-31-85-18 cordoned - WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx - node/ip-172-31-85-18 drained - ``` + ```shell + node/ip-172-31-85-18 cordoned + WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx + node/ip-172-31-85-18 drained + ``` <!-- ### Upgrade the kubelet config @@ -507,22 +536,14 @@ without compromising the minimum required capacity for running your workloads. 1. 
Upgrade the kubelet config: ```shell - sudo kubeadm upgrade node config --kubelet-version v1.14.x + sudo kubeadm upgrade node ``` - - Replace `x` with the patch version you picked for this ugprade. --> -1. 升级 kubelet 配置: - - ```shell - sudo kubeadm upgrade node config --kubelet-version v1.14.x - ``` - - <!-- - Replace `x` with the patch version you picked for this ugprade. - --> - 用最新的修补程序版本替换 1.14.x-00 中的 x +- 升级 kubelet 配置: + ```shell + sudo kubeadm upgrade node + ``` <!-- ### Upgrade kubelet and kubectl @@ -530,66 +551,60 @@ without compromising the minimum required capacity for running your workloads. ### 升级 kubelet 与 kubectl <!-- -1. Upgrade the Kubernetes package version by running the Linux package manager for your distribution: - - {{< tabs name="k8s_kubelet_and_kubectl" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version - apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \ - apt-mark hold kubelet kubectl - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} +- Upgrade the kubelet and kubectl on all worker nodes: --> -1. 通过运行适用于您的 Linux 发行版包管理器升级 Kubernetes 软件包版本: +- 在所有工作节点上升级 kubelet 和 kubectl: - {{< tabs name="k8s_kubelet_and_kubectl" >}} - {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 xs +{{< tabs name="k8s_kubelet_and_kubectl" >}} +{{% tab name="Ubuntu、Debian 或 HypriotOS" %}} + # 将 1.18.x-00 中的 x 替换为最新的补丁版本 apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \ + apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ apt-mark hold kubelet kubectl - {{% /tab %}} - {{% tab name="CentOS, RHEL or Fedora" %}} - # 用最新的修补程序版本替换 1.17.x-00 中的 x - yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes - {{% /tab %}} - {{< /tabs >}} + - + # 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法: + apt-get update && \ + apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 +{{% /tab %}} +{{% tab name="CentOS, RHEL or Fedora" %}} + # 将 1.18.x-00 中的 x 替换为最新的补丁版本 + yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes +{{% /tab %}} +{{< /tabs >}} <!-- -1. Restart the kubelet +- Restart the kubelet ```shell + sudo systemctl daemon-reload sudo systemctl restart kubelet ``` --> -1. 重启 kubelet - - ```shell - sudo systemctl restart kubelet - ``` +- 重启 kubelet + ```shell + sudo systemctl daemon-reload + sudo systemctl restart kubelet + ``` <!-- ### Uncordon the node --> ### 取消对节点的保护 <!-- -1. Bring the node back online by marking it schedulable: +- Bring the node back online by marking it schedulable: ```shell - kubectl uncordon $NODE + # replace <node-to-drain> with the name of your node + kubectl uncordon <node-to-drain> ``` --> -1. 
通过将节点标记为可调度,让节点重新上线: +- 通过将节点标记为可调度,让节点重新上线: - ```shell - kubectl uncordon $NODE - ``` + ```shell + # 将 <node-to-drain> 替换为当前节点的名称 + kubectl uncordon <node-to-drain> + ``` <!-- ## Verify the status of the cluster @@ -613,8 +628,6 @@ The `STATUS` column should show `Ready` for all your nodes, and the version numb --> `STATUS` 应显示所有节点为 `Ready` 状态,并且版本号已经被更新。 - - <!-- ## Recovering from a failure state @@ -629,6 +642,35 @@ To recover from a bad state, you can also run `kubeadm upgrade --force` without 此命令是幂等的,并最终确保实际状态是您声明的所需状态。 要从故障状态恢复,您还可以运行 `kubeadm upgrade --force` 而不去更改集群正在运行的版本。 +<!-- +During upgrade kubeadm writes the following backup folders under `/etc/kubernetes/tmp`: +- `kubeadm-backup-etcd-<date>-<time>` +- `kubeadm-backup-manifests-<date>-<time>` + +`kubeadm-backup-etcd` contains a backup of the local etcd member data for this control-plane Node. +In case of an etcd upgrade failure and if the automatic rollback does not work, the contents of this folder +can be manually restored in `/var/lib/etcd`. In case external etcd is used this backup folder will be empty. + +`kubeadm-backup-manifests` contains a backup of the static Pod manifest files for this control-plane Node. +In case of a upgrade failure and if the automatic rollback does not work, the contents of this folder can be +manually restored in `/etc/kubernetes/manifests`. If for some reason there is no difference between a pre-upgrade +and post-upgrade manifest file for a certain component, a backup file for it will not be written. +--> +在升级期间,kubeadm 向 `/etc/kubernetes/tmp` 目录下的如下备份文件夹写入数据: + +- `kubeadm-backup-etcd-<date>-<time>` +- `kubeadm-backup-manifests-<date>-<time>` + +`kubeadm-backup-etcd` 包含当前控制面节点本地 etcd 成员数据的备份。 +如果 etcd 升级失败并且自动回滚也无法修复,则可以将此文件夹中的内容复制到 +`/var/lib/etcd` 进行手工修复。如果使用的是外部的 etcd,则此备份文件夹为空。 + +`kubeadm-backup-manifests` 包含当前控制面节点的静态 Pod 清单文件的备份版本。 +如果升级失败并且无法自动回滚,则此文件夹中的内容可以复制到 +`/etc/kubernetes/manifests` 目录实现手工恢复。 +如果由于某些原因,在升级前后某个组件的清单未发生变化,则 kubeadm 也不会为之 +生成备份版本。 + <!-- ## How it works @@ -644,27 +686,42 @@ To recover from a bad state, you can also run `kubeadm upgrade --force` without - Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created. - Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days. --> -## 它是怎么工作的 +## 工作原理 `kubeadm upgrade apply` 做了以下工作: - 检查您的集群是否处于可升级状态: - API 服务器是可访问的 - 所有节点处于 `Ready` 状态 - - 控制平面是健康的 + - 控制面是健康的 - 强制执行版本 skew 策略。 -- 确保控制平面的镜像是可用的或可拉取到服务器上。 -- 升级控制平面组件或回滚(如果其中任何一个组件无法启动)。 +- 确保控制面的镜像是可用的或可拉取到服务器上。 +- 升级控制面组件或回滚(如果其中任何一个组件无法启动)。 - 应用新的 `kube-dns` 和 `kube-proxy` 清单,并强制创建所有必需的 RBAC 规则。 - 如果旧文件在 180 天后过期,将创建 API 服务器的新证书和密钥文件并备份旧文件。 <!-- -`kubeadm upgrade node experimental-control-plane` does the following on additional control plane nodes: +`kubeadm upgrade node` does the following on additional control plane nodes: - Fetches the kubeadm `ClusterConfiguration` from the cluster. - Optionally backups the kube-apiserver certificate. - Upgrades the static Pod manifests for the control plane components. +- Upgrades the kubelet configuration for this node. --> -`kubeadm upgrade node experimental-control-plane` 在其他控制平面节点上执行以下操作: +`kubeadm upgrade node` 在其他控制平节点上执行以下操作: + - 从集群中获取 kubeadm `ClusterConfiguration`。 - 可选地备份 kube-apiserver 证书。 - 升级控制平面组件的静态 Pod 清单。 +- 为本节点升级 kubelet 配置 + +<!-- +`kubeadm upgrade node` does the following on worker nodes: + +- Fetches the kubeadm `ClusterConfiguration` from the cluster. 
+- Upgrades the kubelet configuration for this node. +--> +`kubeadm upgrade node` 在工作节点上完成以下工作: + +- 从集群取回 kubeadm `ClusterConfiguration`。 +- 为本节点升级 kubelet 配置 + diff --git a/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md index 3b26ba5ba9..8e68d12845 100644 --- a/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/zh/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -5,6 +5,7 @@ reviewers: - dashpole title: 为系统守护进程预留计算资源 content_type: task +min-kubernetes-server-version: 1.8 --- <!-- --- @@ -14,6 +15,7 @@ reviewers: - dashpole title: Reserve Compute Resources for System Daemons content_type: task +min-kubernetes-server-version: 1.8 --- --> @@ -31,18 +33,23 @@ compute resources for system daemons. Kubernetes recommends cluster administrators to configure `Node Allocatable` based on their workload density on each node. --> -Kubernetes 的节点可以按照 `Capacity` 调度。默认情况下 pod 能够使用节点全部可用容量。这是个问题,因为节点自己通常运行了不少驱动 OS 和 Kubernetes 的系统守护进程。除非为这些系统守护进程留出资源,否则它们将与 pod 争夺资源并导致节点资源短缺问题。 - -`kubelet` 公开了一个名为 `Node Allocatable` 的特性,有助于为系统守护进程预留计算资源。Kubernetes 推荐集群管理员按照每个节点上的工作负载密度配置 `Node Allocatable`。 - +Kubernetes 的节点可以按照 `Capacity` 调度。默认情况下 pod 能够使用节点全部可用容量。 +这是个问题,因为节点自己通常运行了不少驱动 OS 和 Kubernetes 的系统守护进程。 +除非为这些系统守护进程留出资源,否则它们将与 pod 争夺资源并导致节点资源短缺问题。 +`kubelet` 公开了一个名为 `Node Allocatable` 的特性,有助于为系统守护进程预留计算资源。 +Kubernetes 推荐集群管理员按照每个节点上的工作负载密度配置 `Node Allocatable`。 ## {{% heading "prerequisites" %}} {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - +<!-- +Your Kubernetes server must be at or later than version 1.17 to use +the kubelet command line option `--reserved-cpus` to set an +[explicitly reserved CPU list](#explicitly-reserved-cpu-list). +--> +您的 kubernetes 服务器版本必须至少是 1.17 版本,才能使用 kubelet 命令行选项 `--reserved-cpus` 来设置 [显式 CPU 保留列表](#explicitly-reserved-cpu-list) <!-- steps --> @@ -94,7 +101,8 @@ Resources can be reserved for two categories of system daemons in the `kubelet`. --------------------------- ``` -Kubernetes 节点上的 `Allocatable` 被定义为 pod 可用计算资源量。调度器不会超额申请 `Allocatable`。目前支持 `CPU`, `memory` 和 `ephemeral-storage` 这几个参数。 +Kubernetes 节点上的 `Allocatable` 被定义为 pod 可用计算资源量。调度器不会超额申请 `Allocatable`。 +目前支持 `CPU`, `memory` 和 `ephemeral-storage` 这几个参数。 可分配的节点暴露为 API 中 `v1.Node` 对象的一部分,也是 CLI 中 `kubectl describe node` 的一部分。 @@ -110,7 +118,8 @@ under a cgroup hierarchy managed by the `kubelet`. --> ### 启用 QoS 和 Pod 级别的 cgroups -为了恰当的在节点范围实施 node allocatable,您必须通过 `--cgroups-per-qos` 标志启用新的 cgroup 层次结构。这个标志是默认启用的。启用后,`kubelet` 将在其管理的 cgroup 层次结构中创建所有终端用户的 pod。 +为了恰当的在节点范围实施 node allocatable,您必须通过 `--cgroups-per-qos` 标志启用新的 cgroup 层次结构。 +这个标志是默认启用的。启用后,`kubelet` 将在其管理的 cgroup 层次结构中创建所有终端用户的 Pod。 <!-- ### Configuring a cgroup driver @@ -141,7 +150,8 @@ be configured to use the `systemd` cgroup driver. * `cgroupfs` 是默认的驱动,在主机上直接操作 cgroup 文件系统以对 cgroup 沙箱进行管理。 * `systemd` 是可选的驱动,使用 init 系统支持的资源的瞬时切片管理 cgroup 沙箱。 -取决于相关容器运行时的配置,操作员可能需要选择一个特定的 cgroup 驱动来保证系统正常运行。例如如果操作员使用 `docker` 运行时提供的 `systemd` cgroup 驱动时,必须配置 `kubelet` 使用 `systemd` cgroup 驱动。 +取决于相关容器运行时的配置,操作员可能需要选择一个特定的 cgroup 驱动来保证系统正常运行。 +例如如果操作员使用 `docker` 运行时提供的 `systemd` cgroup 驱动时,必须配置 `kubelet` 使用 `systemd` cgroup 驱动。 <!-- ### Kube Reserved @@ -152,13 +162,7 @@ be configured to use the `systemd` cgroup driver. `kube-reserved` is meant to capture resource reservation for kubernetes system daemons like the `kubelet`, `container runtime`, `node problem detector`, etc. 
It is not meant to reserve resources for system daemons that are run as pods. -`kube-reserved` is typically a function of `pod density` on the nodes. [This -performance dashboard](http://node-perf-dash.k8s.io/#/builds) exposes `cpu` and -`memory` usage profiles of `kubelet` and `docker engine` at multiple levels of -pod density. [This blog -post](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard) -explains how the dashboard can be interpreted to come up with a suitable -`kube-reserved` reservation. +`kube-reserved` is typically a function of `pod density` on the nodes. In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be specified to reserve the specified number of process IDs for @@ -185,14 +189,14 @@ exist. Kubelet will fail if an invalid cgroup is specified. `kube-reserved` 是为了给诸如 `kubelet`、`container runtime`、`node problem detector` 等 kubernetes 系统守护进程争取资源预留。 这并不代表要给以 pod 形式运行的系统守护进程保留资源。`kube-reserved` 通常是节点上的一个 `pod 密度` 功能。 -[这个性能仪表盘](http://node-perf-dash.k8s.io/#/builds) 从 pod 密度的多个层面展示了 `kubelet` 和 `docker engine` 的 `cpu` 和 `内存` 使用情况。 -[这个博文](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard)解释了如何仪表板以提出合适的 `kube-reserved` 预留。 除了 `cpu`,`内存` 和 `ephemeral-storage` 之外,`pid` 可能是指定为 kubernetes 系统守护进程预留指定数量的进程 ID。 要选择性的在系统守护进程上执行 `kube-reserved`,需要把 kubelet 的 `--kube-reserved-cgroup` 标志的值设置为 kube 守护进程的父控制组。 -推荐将 kubernetes 系统守护进程放置于顶级控制组之下(例如 systemd 机器上的 `runtime.slice`)。理想情况下每个系统守护进程都应该在其自己的子控制组中运行。请参考[这篇文档](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup),获取更过关于推荐控制组层次结构的细节。 +推荐将 kubernetes 系统守护进程放置于顶级控制组之下(例如 systemd 机器上的 `runtime.slice`)。 +理想情况下每个系统守护进程都应该在其自己的子控制组中运行。 +请参考[这篇文档](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup),获取更过关于推荐控制组层次结构的细节。 请注意,如果 `--kube-reserved-cgroup` 不存在,Kubelet 将**不会**创建它。如果指定了一个无效的 cgroup,Kubelet 将会失败。 @@ -243,7 +247,7 @@ exist. Kubelet will fail if an invalid cgroup is specified. <!-- ### Explicitly Reserved CPU List --> -### 明确保留的 CPU 列表 +### 显式保留的 CPU 列表 {#explicitly-reserved-cpu-list} {{< feature-state for_k8s_version="v1.17" state="stable" >}} - **Kubelet Flag**: `--reserved-cpus=0-3` @@ -272,7 +276,10 @@ defined by this option, other mechanism outside Kubernetes should be used. For example: in Centos, you can do this using the tuned toolset. --> 此选项是专门为 Telco 或 NFV 用例设计的,在这些用例中不受控制的中断或计时器可能会影响其工作负载性能。 -可以使用此选项为系统或 kubernetes 守护程序以及中断或计时器定义显式的 cpuset,因此系统上的其余 CPU 可以专门用于工作负载,而不受不受控制的中断或计时器的影响较小。要将系统守护程序、kubernetes 守护程序和中断或计时器移动到此选项定义的显式 cpuset 上,应使用 Kubernetes 之外的其他机制。 +可以使用此选项为系统或 kubernetes 守护程序以及中断或计时器定义显式的 cpuset,因此系统上的 +其余 CPU 可以专门用于工作负载,而不受不受控制的中断或计时器的影响较小。 +要将系统守护程序、kubernetes 守护程序和中断或计时器移动到此选项定义的显式 cpuset 上, +应使用 Kubernetes 之外的其他机制。 例如:在 Centos 系统中,可以使用 tuned 工具集来执行此操作。 <!-- @@ -283,7 +290,7 @@ For example: in Centos, you can do this using the tuned toolset. Memory pressure at the node level leads to System OOMs which affects the entire node and all pods running on it. Nodes can go offline temporarily until memory has been reclaimed. To avoid (or reduce the probability of) system OOMs kubelet -provides [`Out of Resource`](./out-of-resource.md) management. Evictions are +provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are supported for `memory` and `ephemeral-storage` only. 
By reserving some memory via `--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory availability on the node drops below the reserved value. Hypothetically, if @@ -296,7 +303,9 @@ available for pods. - **Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]` 节点级别的内存压力将导致系统内存不足,这将影响到整个节点及其上运行的所有 pod。节点可以暂时离线直到内存已经回收为止。 -为了防止(或减少可能性)系统内存不足,kubelet 提供了[资源不足](./out-of-resource.md)管理。驱逐操作只支持 `memory` 和 `ephemeral-storage`。 +为了防止(或减少可能性)系统内存不足,kubelet 提供了 +[资源不足](/zh/docs/tasks/administer-cluster/out-of-resource/)管理。 +驱逐操作只支持 `memory` 和 `ephemeral-storage`。 通过 `--eviction-hard` 标志预留一些内存后,当节点上的可用内存降至保留值以下时,`kubelet` 将尝试`驱逐` pod。 假设,如果节点上不存在系统守护进程,pod 将不能使用超过 `capacity-eviction-hard` 的资源。因此,为驱逐而预留的资源对 pod 是不可用的。 @@ -310,7 +319,7 @@ The scheduler treats `Allocatable` as the available `capacity` for pods. `kubelet` enforce `Allocatable` across pods by default. Enforcement is performed by evicting pods whenever the overall usage across all pods exceeds `Allocatable`. More details on eviction policy can be found -[here](./out-of-resource.md#eviction-policy). This enforcement is controlled by +[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by specifying `pods` value to the kubelet flag `--enforce-node-allocatable`. @@ -326,9 +335,14 @@ respectively. 调度器将 `Allocatable` 按 pod 的可用 `capacity` 对待。 -`kubelet` 默认在 pod 中执行 `Allocatable`。无论何时,如果所有 pod 的总用量超过了 `Allocatable`,驱逐 pod 的措施将被执行。有关驱逐策略的更多细节可以在[这里](./out-of-resource.md#eviction-policy)找到。请通过设置 kubelet `--enforce-node-allocatable` 标志值为 `pods` 控制这个措施。 +`kubelet` 默认在 Pod 中执行 `Allocatable`。无论何时,如果所有 pod 的总用量超过了 `Allocatable`, +驱逐 pod 的措施将被执行。有关驱逐策略的更多细节可以在 +[这里](/zh/docs/tasks/administer-cluster/out-of-resource/#eviction-policy).找到。 +请通过设置 kubelet `--enforce-node-allocatable` 标志值为 `pods` 控制这个措施。 -可选的,通过在相同标志中同时指定 `kube-reserved` 和 `system-reserved` 值能够使 `kubelet` 执行 `kube-reserved` 和 `system-reserved`。请注意,要想执行 `kube-reserved` 或者 `system-reserved` 时,需要分别指定 `--kube-reserved-cgroup` 或者 `--system-reserved-cgroup`。 +可选的,通过在相同标志中同时指定 `kube-reserved` 和 `system-reserved` 值能够使 `kubelet` +执行 `kube-reserved` 和 `system-reserved`。请注意,要想执行 `kube-reserved` 或者 `system-reserved` 时, +需要分别指定 `--kube-reserved-cgroup` 或者 `--system-reserved-cgroup`。 <!-- ## General Guidelines @@ -359,17 +373,23 @@ So expect a drop in `Allocatable` capacity in future releases. 
--> ## 一般原则 -系统守护进程期望被按照类似 `Guaranteed` pod 一样对待。系统守护进程可以在其范围控制组中爆发式增长,您需要将这个行为作为 kubernetes 部署的一部分进行管理。 -例如,`kubelet` 应该有它自己的控制组并和容器运行时共享 `Kube-reserved` 资源。然而,如果执行了 `kube-reserved`,则 kubelet 不能突然爆发并耗尽节点的所有可用资源。 +系统守护进程期望被按照类似 `Guaranteed` pod 一样对待。系统守护进程可以在其范围控制组中爆发式增长, +您需要将这个行为作为 kubernetes 部署的一部分进行管理。 +例如,`kubelet` 应该有它自己的控制组并和容器运行时共享 `Kube-reserved` 资源。 +然而,如果执行了 `kube-reserved`,则 kubelet 不能突然爆发并耗尽节点的所有可用资源。 -在执行 `system-reserved` 预留操作时请加倍小心,因为它可能导致节点上的关键系统服务 CPU 资源短缺或因为内存不足而被终止。 -建议只有当用户详尽地描述了他们的节点以得出精确的估计时才强制执行 `system-reserved`,并且如果该组中的任何进程都是 oom_killed,则对他们恢复的能力充满信心。 +在执行 `system-reserved` 预留操作时请加倍小心,因为它可能导致节点上的关键系统服务 CPU 资源短缺 +或因为内存不足而被终止。 +建议只有当用户详尽地描述了他们的节点以得出精确的估计时才强制执行 `system-reserved`, +并且如果该组中的任何进程都是 oom_killed,则对他们恢复的能力充满信心。 * 在 `pods` 上执行 `Allocatable` 作为开始。 * 一旦足够用于追踪系统守护进程的监控和告警的机制到位,请尝试基于用量探索方式执行 `kube-reserved`。 * 随着时间推进,如果绝对必要,可以执行 `system-reserved`。 -随着时间的增长以及越来越多特性的加入,kube 系统守护进程对资源的需求可能也会增加。以后 kubernetes 项目将尝试减少对节点系统守护进程的利用,但目前那并不是优先事项。所以,请期待在将来的发布中将 `Allocatable` 容量降低。 +随着时间的增长以及越来越多特性的加入,kube 系统守护进程对资源的需求可能也会增加。 +以后 kubernetes 项目将尝试减少对节点系统守护进程的利用,但目前那并不是优先事项。 +所以,请期待在将来的发布中将 `Allocatable` 容量降低。 @@ -408,58 +428,8 @@ usage is higher than `31.5Gi` or `storage` is greater than `90Gi` 在这个场景下,`Allocatable` 将会是 `14.5 CPUs`、`28.5Gi` 内存以及 `88Gi` 本地存储。 调度器保证这个节点上的所有 pod `请求`的内存总量不超过 `28.5Gi`,存储不超过 `88Gi`。 -当 pod 的内存使用总量超过 `28.5Gi` 或者磁盘使用总量超过 `88Gi` 时,Kubelet 将会驱逐它们。如果节点上的所有进程都尽可能多的使用 CPU,则 pod 加起来不能使用超过 `14.5 CPUs` 的资源。 - -当没有执行 `kube-reserved` 和/或 `system-reserved` 且系统守护进程使用量超过其预留时,如果节点内存用量高于 `31.5Gi` 或`存储`大于 `90Gi`,kubelet 将会驱逐 pod。 - -<!-- -## Feature Availability - -As of Kubernetes version 1.2, it has been possible to **optionally** specify -`kube-reserved` and `system-reserved` reservations. The scheduler switched to -using `Allocatable` instead of `Capacity` when available in the same release. - -As of Kubernetes version 1.6, `eviction-thresholds` are being considered by -computing `Allocatable`. To revert to the old behavior set -`--experimental-allocatable-ignore-eviction` kubelet flag to `true`. - -As of Kubernetes version 1.6, `kubelet` enforces `Allocatable` on pods using -control groups. To revert to the old behavior unset `--enforce-node-allocatable` -kubelet flag. Note that unless `--kube-reserved`, or `--system-reserved` or -`--eviction-hard` flags have non-default values, `Allocatable` enforcement does -not affect existing deployments. - -As of Kubernetes version 1.6, `kubelet` launches pods in their own cgroup -sandbox in a dedicated part of the cgroup hierarchy it manages. Operators are -required to drain their nodes prior to upgrade of the `kubelet` from prior -versions in order to ensure pods and their associated containers are launched in -the proper part of the cgroup hierarchy. - -As of Kubernetes version 1.7, `kubelet` supports specifying `storage` as a resource -for `kube-reserved` and `system-reserved`. - -As of Kubernetes version 1.8, the `storage` key name was changed to `ephemeral-storage` -for the alpha release. 
---> -## 可用特性 - -截至 Kubernetes 1.2 版本,已经可以**可选**的指定 `kube-reserved` 和 `system-reserved` 预留。当在相同的发布中都可用时,调度器将转为使用 `Allocatable` 替代 `Capacity`。 - -截至 Kubernetes 1.6 版本,`eviction-thresholds` 是通过计算 `Allocatable` 进行考虑。要使用旧版本的行为,请设置 `--experimental-allocatable-ignore-eviction` kubelet 标志为 `true`。 - -截至 Kubernetes 1.6 版本,`kubelet` 使用控制组在 pod 上执行 `Allocatable`。要使用旧版本行为,请取消设置 `--enforce-node-allocatable` kubelet 标志。请注意,除非 `--kube-reserved` 或者 `--system-reserved` 或者 `--eviction-hard` 标志没有默认参数,否则 `Allocatable` 的实施不会影响已经存在的 deployment。 - -截至 Kubernetes 1.6 版本,`kubelet` 在 pod 自己的 cgroup 沙箱中启动它们,这个 cgroup 沙箱在 `kubelet` 管理的 cgroup 层次结构中的一个独占部分中。在从前一个版本升级 kubelet 之前,要求操作员 drain 节点,以保证 pod 及其关联的容器在 cgroup 层次结构中合适的部分中启动。 - -截至 Kubernetes 1.7 版本,`kubelet` 支持指定 `storage` 为 `kube-reserved` 和 `system-reserved` 的资源。 - -截至 Kubernetes 1.8 版本,对于 alpha 版本,`storage` 键值名称已更改为 `ephemeral-storage`。 - -<!-- -As of Kubernetes version 1.17, you can optionally specify -explicit cpuset by `reserved-cpus` as CPUs reserved for OS system -daemons/interrupts/timers and Kubernetes daemons. ---> -从 Kubernetes 1.17 版本开始,可以选择将 `reserved-cpus` 显式 cpuset 指定为操作系统守护程序、中断、计时器和 Kubernetes 守护程序保留的 CPU。 - +当 pod 的内存使用总量超过 `28.5Gi` 或者磁盘使用总量超过 `88Gi` 时,Kubelet 将会驱逐它们。 +如果节点上的所有进程都尽可能多的使用 CPU,则 pod 加起来不能使用超过 `14.5 CPUs` 的资源。 +当没有执行 `kube-reserved` 和/或 `system-reserved` 且系统守护进程使用量超过其预留时, +如果节点内存用量高于 `31.5Gi` 或`存储`大于 `90Gi`,kubelet 将会驱逐 pod。 diff --git a/content/zh/docs/tasks/extend-kubernetes/http-proxy-access-api.md b/content/zh/docs/tasks/extend-kubernetes/http-proxy-access-api.md index a186e78a44..32a782b5ac 100644 --- a/content/zh/docs/tasks/extend-kubernetes/http-proxy-access-api.md +++ b/content/zh/docs/tasks/extend-kubernetes/http-proxy-access-api.md @@ -31,7 +31,7 @@ This page shows how to use an HTTP proxy to access the Kubernetes API. * 如果您的集群中还没有任何应用,使用如下命令启动一个 Hello World 应用: ```shell -kubectl run node-hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080 +kubectl create deployment node-hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080 ``` diff --git a/content/zh/docs/tasks/extend-kubernetes/setup-extension-api-server.md b/content/zh/docs/tasks/extend-kubernetes/setup-extension-api-server.md index bd3c600296..dd162bdc15 100644 --- a/content/zh/docs/tasks/extend-kubernetes/setup-extension-api-server.md +++ b/content/zh/docs/tasks/extend-kubernetes/setup-extension-api-server.md @@ -78,7 +78,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server. 1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap. 1. Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. -1. Use kubectl to get your resource. It should return "No resources found." Which means that everything worked but you currently have no objects of that resource type created yet. +1. Use kubectl to get your resource. When run, kubectl should return "No resources found.". 
This message indicates that everything worked but you currently have no objects of that resource type created. --> 1. 确保启用了 APIService API(检查 `--runtime-config`)。默认应该是启用的,除非被特意关闭了。 1. 您可能需要制定一个 RBAC 规则,以允许您添加 APIService 对象,或让您的集群管理员创建一个。(由于 API 扩展会影响整个集群,因此不建议在实时集群中对 API 扩展进行测试/开发/调试) @@ -94,7 +94,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator` 集群角色,以将 auth 决策委派给 Kubernetes 核心 API 服务器。 1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `extension-apiserver-authentication-reader` 角色。这将让您的扩展 api-server 能够访问 `extension-apiserver-authentication` configmap。 1. 创建一个 Kubernetes apiservice。上述的 CA 证书应该使用 base64 编码,剥离新行并用作 apiservice 中的 spec.caBundle。这不应该是命名空间化的。如果使用了 [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/),那么只需要传入 PEM 编码的 CA 绑定,因为 base 64 编码已经完成了。 -1. 使用 kubectl 来获得您的资源。它应该返回 "找不到资源"。这意味着一切正常,但您目前还没有创建该资源类型的对象。 +1. 使用 kubectl 来获得您的资源。它应该返回 "找不到资源"。此消息表示一切正常,但您目前还没有创建该资源类型的对象。 @@ -109,8 +109,3 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu * 如果你还未配置,请 [配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 并启用 apiserver 的相关参数。 * 高级概述,请参阅 [使用聚合层扩展 Kubernetes API](/docs/concepts/api-extension/apiserver-aggregation)。 * 了解如何 [使用 Custom Resource Definition 扩展 Kubernetes API](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)。 - - - - - diff --git a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index d7db4ef10b..58ee1d72a2 100644 --- a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -74,7 +74,7 @@ Dockerfile 内容如下: ``` FROM php:5-apache -ADD index.php /var/www/html/index.php +COPY index.php /var/www/html/index.php RUN chmod a+rx index.php ``` <!-- diff --git a/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md index 37bc08ed3d..91116c46f0 100644 --- a/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/zh/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -40,7 +40,7 @@ Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes clust * 您必须启用 Kubernetes 集群的 DNS 功能。 * 如果使用基于云的 Kubernetes 集群或 {{< glossary_tooltip text="Minikube" term_id="minikube" >}},则可能已经启用了集群 DNS。 * 如果您正在使用 `hack/local-up-cluster.sh`,请确保设置了 `KUBE_ENABLE_CLUSTER_DNS` 环境变量,然后运行安装脚本。 -* [安装和设置 v1.7 或更高版本的 kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/),确保将其配置为连接到 Kubernetes 集群。 +* [安装和设置 v1.7 或更高版本的 kubectl](/zh/docs/tasks/tools/install-kubectl/),确保将其配置为连接到 Kubernetes 集群。 * 安装 v2.7.0 或更高版本的 [Helm](http://helm.sh/)。 * 遵照 [Helm 安装说明](https://github.com/kubernetes/helm/blob/master/docs/install.md)。 * 如果已经安装了适当版本的 Helm,请执行 `helm init` 来安装 Helm 的服务器端组件 Tiller。 @@ -142,7 +142,7 @@ Install Service Catalog from the root of the Helm repository using the following --> 使用以下命令从 Helm 存储库的根目录安装 Service Catalog: -{{< tabs name="helm-versions" >}} +{{< tabs name="helm-versions" >}} {{% tab name="Helm version 3" %}} ```shell helm install catalog svc-cat/catalog --namespace catalog @@ -163,5 +163,3 @@ helm install svc-cat/catalog --name catalog --namespace catalog --> * 
查看[示例服务代理](https://github.com/openservicebrokerapi/servicebroker/blob/mastergettingStarted.md#sample-service-brokers)。 * 探索 [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) 项目。 - - diff --git a/i18n/pt.toml b/i18n/pt.toml index 833e4ba627..9b715dcf8f 100644 --- a/i18n/pt.toml +++ b/i18n/pt.toml @@ -1,26 +1,54 @@ -# i18n strings for the Portuguese (main) site. - -[deprecation_warning] -other = " a documentação não é mais mantida ativamente. A versão que você está visualizando no momento é uma captura instantânea estática. Para obter documentação atualizada, consulte " - -[deprecation_file_warning] -other = "Descontinuado" - -[objectives_heading] -other = "Objetivos" + # i18n strings for the Portuguese (main) site. +[caution] +other = "Cuidado:" [cleanup_heading] other = "Limpando" -[prerequisites_heading] -other = "Antes de você começar" +[community_events_calendar] +other = "Calendário de Eventos" -[whatsnext_heading] -other = "Qual é o próximo" +[community_forum_name] +other = "Fórum" + +[community_github_name] +other = "GitHub" + +# Community links + +[community_slack_name] +other = "Slack" + +[community_stack_overflow_name] +other = "Stack Overflow" + +[community_twitter_name] +other = "Twitter" + +[deprecation_file_warning] +other = "Descontinuado" + +[deprecation_warning] +other = " a documentação não é mais mantida ativamente. A versão que você está visualizando no momento é uma captura instantânea estática. Para obter documentação atualizada, consulte " + +[docs_label_browse] +other = "Procurar documentos" + +[docs_label_contributors] +other = "Colaboradores" + +[docs_label_i_am] +other = "Eu sou..." + +[docs_label_users] +other = "Usuários" [feedback_heading] other = "Comentários" +[feedback_no] +other = "Não" + [feedback_question] other = "Esta página foi útil?" @@ -30,169 +58,138 @@ other = "Sim" [input_placeholder_email_address] other = "endereço de e-mail" -[feedback_no] -other = "Não" - [latest_version] other = "última versão." -[version_check_mustbe] -other = "Seu servidor Kubernetes deve ser versão" - -[version_check_mustbeorlater] -other = "O seu servidor Kubernetes deve estar em ou depois da versão " - -[version_check_tocheck] -other = "Para verificar a versão, digite " - -[caution] -other = "Cuidado:" - -[note] -other = "Nota:" - -[warning] -other = "Aviso:" - -[main_read_about] -other = "Ler sobre" - -[main_read_more] -other = "Consulte Mais informação" - -[main_github_invite] -other = "Interessado em mergulhar na base de código do Kubernetes?" - -[main_github_view_on] -other = "Veja no Github" - -[main_github_create_an_issue] -other = "Abra um bug" - -[main_community_explore] -other = "Explore a comunidade" - -[main_kubernetes_features] -other = "Recursos do Kubernetes" - -[main_cncf_project] -other = """Nós somos uma <a href="https://cncf.io/">CNCF</a> projeto graduado</p>""" - -[main_kubeweekly_baseline] -other = "Interessado em receber as últimas novidades sobre Kubernetes? Inscreva-se no KubeWeekly." 
- -[main_kubernetes_past_link] -other = "Veja boletins passados" - -[main_kubeweekly_signup] -other = "Se inscrever" - -[main_contribute] -other = "Contribuir" - -[main_edit_this_page] -other = "Edite essa página" - -[main_page_history] -other ="História da página" - -[main_page_last_modified_on] -other = "Última modificação da página em" - -[main_by] -other = "por" - -[main_documentation_license] -other = """Os autores do Kubernetes | Documentação Distribuída sob <a href="https://git.k8s.io/website/LICENSE" class="light-text">CC BY 4.0</a>""" - -[main_copyright_notice] -other = """A Fundação Linux ®. Todos os direitos reservados. A Linux Foundation tem marcas registradas e usa marcas registradas. Para uma lista de marcas registradas da The Linux Foundation, por favor, veja nossa <a href="https://www.linuxfoundation.org/trademark-usage" class="light-text">Página de uso de marca registrada</a>""" - -# Labels for the docs portal home page. -[docs_label_browse] -other = "Procurar documentos" - -[docs_label_contributors] -other = "Colaboradores" - -[docs_label_users] -other = "Usuários" - -[docs_label_i_am] -other = "Eu sou..." - -# layouts > blog > pager - -[layouts_blog_pager_prev] -other = "<< Anterior" - [layouts_blog_pager_next] other = "Próximo >>" -# layouts > blog > list +[layouts_blog_pager_prev] +other = "<< Anterior" [layouts_case_studies_list_tell] other = "Conte seu caso" -# layouts > docs > glossary +[layouts_docs_glossary_aka] +other = "Também conhecido como" + +[layouts_docs_glossary_click_details_after] +other = "indicadores abaixo para uma maior explicação sobre um termo em particular." + +[layouts_docs_glossary_click_details_before] +other = "Clique nos" [layouts_docs_glossary_description] other = "Este glossário pretende ser uma lista padronizada e abrangente da terminologia do Kubernetes. Inclui termos técnicos específicos dos K8s, além de termos mais gerais que fornecem um contexto útil." +[layouts_docs_glossary_deselect_all] +other = "Desmarcar tudo" + [layouts_docs_glossary_filter] other = "Filtrar termos de acordo com suas tags" [layouts_docs_glossary_select_all] other = "Selecionar tudo" -[layouts_docs_glossary_deselect_all] -other = "Desmarcar tudo" - -[layouts_docs_glossary_aka] -other = "Também conhecido como" - -[layouts_docs_glossary_click_details_before] -other = "Clique nos" - -[layouts_docs_glossary_click_details_after] -other = "indicadores abaixo para uma maior explicação sobre um termo em particular." - -# layouts > docs > search - -[layouts_docs_search_fetching] -other = "Buscando resultados.." - -# layouts > partial > feedback - -[layouts_docs_partials_feedback_thanks] -other = "Obrigado pelo feedback. 
Se você tiver uma pergunta específica sobre como utilizar o Kubernetes, faça em" +[layouts_docs_partials_feedback_improvement] +other = "sugerir uma melhoria" [layouts_docs_partials_feedback_issue] other = "Abra um bug no repositório do GitHub se você deseja " -[layouts_docs_partials_feedback_problem] -other = "reportar um problema" - [layouts_docs_partials_feedback_or] other = "ou" -[layouts_docs_partials_feedback_improvement] -other = "sugerir uma melhoria" +[layouts_docs_partials_feedback_problem] +other = "reportar um problema" -# Community links -[community_twitter_name] -other = "Twitter" -[community_github_name] -other = "GitHub" -[community_slack_name] -other = "Slack" -[community_stack_overflow_name] -other = "Stack Overflow" -[community_forum_name] -other = "Fórum" -[community_events_calendar] -other = "Calendário de Eventos" +[layouts_docs_partials_feedback_thanks] +other = "Obrigado pelo feedback. Se você tiver uma pergunta específica sobre como utilizar o Kubernetes, faça em" + +[layouts_docs_search_fetching] +other = "Buscando resultados.." + +# Main page localization + +[main_by] +other = "por" + +[main_cncf_project] +other = """Nós somos uma <a href="https://cncf.io/">CNCF</a> projeto graduado</p>""" + +[main_community_explore] +other = "Explore a comunidade" + +[main_contribute] +other = "Contribuir" + +[main_copyright_notice] +other = """A Fundação Linux ®. Todos os direitos reservados. A Linux Foundation tem marcas registradas e usa marcas registradas. Para uma lista de marcas registradas da The Linux Foundation, por favor, veja nossa <a href="https://www.linuxfoundation.org/trademark-usage" class="light-text">Página de uso de marca registrada</a>""" + +[main_documentation_license] +other = """Os autores do Kubernetes | Documentação Distribuída sob <a href="https://git.k8s.io/website/LICENSE" class="light-text">CC BY 4.0</a>""" + +[main_edit_this_page] +other = "Edite essa página" + +[main_github_create_an_issue] +other = "Abra um bug" + +[main_github_invite] +other = "Interessado em mergulhar na base de código do Kubernetes?" + +[main_github_view_on] +other = "Veja no Github" + +[main_kubernetes_features] +other = "Recursos do Kubernetes" + +[main_kubernetes_past_link] +other = "Veja boletins passados" + +[main_kubeweekly_baseline] +other = "Interessado em receber as últimas novidades sobre Kubernetes? Inscreva-se no KubeWeekly." 
+ +[main_kubeweekly_signup] +other = "Se inscrever" + +[main_page_history] +other ="História da página" + +[main_page_last_modified_on] +other = "Última modificação da página em" + +[main_read_about] +other = "Ler sobre" + +[main_read_more] +other = "Consulte Mais informação" + +# Miscellaneous + +[note] +other = "Nota:" + +[objectives_heading] +other = "Objetivos" + +[prerequisites_heading] +other = "Antes de você começar" -# UI elements [ui_search_placeholder] other = "Procurar" + +[version_check_mustbeorlater] +other = "O seu servidor Kubernetes deve estar em ou depois da versão " + +[version_check_mustbe] +other = "Seu servidor Kubernetes deve ser versão" + +[version_check_tocheck] +other = "Para verificar a versão, digite " + +[warning] +other = "Aviso:" + +[whatsnext_heading] +other = "Qual é o próximo" diff --git a/layouts/partials/git-info.html b/layouts/partials/git-info.html index d37a47dd77..2f01fd8d28 100644 --- a/layouts/partials/git-info.html +++ b/layouts/partials/git-info.html @@ -3,9 +3,11 @@ <hr/> <div class="issue-button-container"> + {{ if eq (getenv "HUGO_ENV") "production" }} <p> <a href=""><img alt="Analytics" src="https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/{{ .Path }}?pixel"/></a> </p> + {{ end }} {{ if and (ne .Kind "404") (not (strings.Contains .Path "search")) }} {{ if not .Params.no_issue }} <script type="text/javascript"> diff --git a/layouts/partials/head.html b/layouts/partials/head.html index 16fd6d321b..a31d4533df 100644 --- a/layouts/partials/head.html +++ b/layouts/partials/head.html @@ -10,8 +10,12 @@ gtag('config', 'UA-36037335-10'); </script> -<!-- Docsy head.html begins here --> +<!-- alternative translations --> +{{ range .Translations -}} +<link rel="alternate" hreflang="{{ .Language.Lang }}" href="{{ .Permalink }}"> +{{ end -}} +<!-- Docsy head.html begins here --> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> {{ hugo.Generator }} @@ -77,7 +81,7 @@ <script src="{{ "js/jquery-ui-1.12.1.min.js" | relURL }}"></script> <script src="{{ "js/sweetalert-2.1.2.min.js" | relURL }}"></script> -{{ if eq .Params.mermaid true }} +{{ if .HasShortcode "mermaid" }} <!-- Copied from https://unpkg.com/mermaid@8.5.0/dist/mermaid.min.js --> <script async src="{{ "js/mermaid.min.js" | relURL }}"></script> {{ end }} diff --git a/static/js/script.js b/static/js/script.js index 2e477e9a55..f1d33c2caf 100644 --- a/static/js/script.js +++ b/static/js/script.js @@ -1,11 +1,3 @@ -function addAnchorTags() { - anchors.options = { - visible: 'touch' - } - - anchors.add('#docsContent h2, #docsContent h3, #docsContent h4, #docsContent h5, #docsContent h6'); -} - //modal close button (function(){ //π.modalCloseButton = function(closingFunction){ @@ -517,9 +509,6 @@ var pushmenu = (function(){ })(); $(function() { - addAnchorTags(); - - // If vendor strip doesn't exist add className if ( !$('#vendorStrip').length > 0 ) { $('.header-hero').addClass('bot-bar'); diff --git a/update-imported-docs/reference.yml b/update-imported-docs/reference.yml index 1862354ce1..b25ce0a084 100644 --- a/update-imported-docs/reference.yml +++ b/update-imported-docs/reference.yml @@ -7,10 +7,10 @@ repos: generate-command: | cd $GOPATH # set the branch, ex: v1.17.0 while K8S_RELEASE=1.17 - # CAUTION: The script won't work if you set K8S_RELEASE=1.18 before 1.18 is formally released. 
- # The `v${K8S_RELEASE}.0` string must be a valid tag name from the kubernetes repo, which + # CAUTION: The script won't work if you set K8S_RELEASE=1.18.0 before 1.18 is formally released. + # The `v${K8S_RELEASE}` string must be a valid tag name from the kubernetes repo, which # is only created after the formal release. - git clone --depth=1 --single-branch --branch v${K8S_RELEASE}.0 https://github.com/kubernetes/kubernetes.git src/k8s.io/kubernetes + git clone --depth=1 --single-branch --branch v${K8S_RELEASE} https://github.com/kubernetes/kubernetes.git src/k8s.io/kubernetes cd src/k8s.io/kubernetes make generated_files cp -L -R vendor $GOPATH/src @@ -24,7 +24,7 @@ repos: cd $GOPATH go get -v github.com/kubernetes-sigs/reference-docs/gen-kubectldocs cd src/github.com/kubernetes-sigs/reference-docs/ - # create versioned dirs if needed and fetch v${K8S_RELEASE}.0:swagger.json + # create versioned dirs if needed and fetch v${K8S_RELEASE}:swagger.json make updateapispec # generate kubectl cmd reference make copycli diff --git a/update-imported-docs/update-imported-docs.py b/update-imported-docs/update-imported-docs.py index 7f7c420173..ecc88a5573 100755 --- a/update-imported-docs/update-imported-docs.py +++ b/update-imported-docs/update-imported-docs.py @@ -1,6 +1,6 @@ #!/usr/bin/env python3 ## -# This script was tested with Python 3.7.4, Go 1.13+, and PyYAML 5.1.2 +# This script was tested with Python 3.7.4, Go 1.14.4+, and PyYAML 5.1.2 # installed in a virtual environment. # This script assumes you have the Python package manager 'pip' installed. # @@ -22,7 +22,7 @@ # Config files: # reference.yml use this to update the reference docs # release.yml use this to auto-generate/import release notes -# K8S_RELEASE: provide the release version such as, 1.17 +# K8S_RELEASE: provide a valid release tag such as, 1.17.0 ## import argparse @@ -167,7 +167,7 @@ def parse_input_args(): help="reference.yml to generate reference docs; " "release.yml to generate release notes") parser.add_argument('k8s_release', type=str, - help="k8s release version, ex: 1.17" + help="k8s release version, ex: 1.17.0" ) return parser.parse_args() @@ -188,6 +188,11 @@ def main(): k8s_release = in_args.k8s_release print("k8s_release is {}".format(k8s_release)) + # if release string does not contain patch num, add zero + if len(k8s_release) == 4: + k8s_release = k8s_release + ".0" + print("k8s_release updated to {}".format(k8s_release)) + curr_dir = os.path.dirname(os.path.abspath(__file__)) print("curr_dir {}".format(curr_dir)) root_dir = os.path.realpath(os.path.join(curr_dir, '..'))