Merge branch 'master' into dev-1.15
commit 455f312775
@@ -44,6 +44,10 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.

## Infrastructure

* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) is an add-on to run virtual machines on Kubernetes. Usually run on bare-metal clusters.

## Legacy Add-ons

There are several other add-ons documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.
@@ -212,10 +212,10 @@ You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:

```shell
kubectl describe nodes e2e-test-minion-group-4lw4
kubectl describe nodes e2e-test-node-pool-4lw4
```
```
Name:            e2e-test-minion-group-4lw4
Name:            e2e-test-node-pool-4lw4
[ ... lines removed for clarity ...]
Capacity:
 cpu:             2
@@ -15,196 +15,77 @@ This page is an overview of Kubernetes.
{{% /capture %}}

{{% capture body %}}
Kubernetes is a portable, extensible open-source platform for managing
containerized workloads and services, that facilitates both
declarative configuration and automation. It has a large, rapidly
growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon
a [decade and a half of experience that Google has with running
production workloads at
scale](https://research.google.com/pubs/pub43438.html), combined with
best-of-breed ideas and practices from the community.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a [decade and a half of experience that Google has with running production workloads at scale](https://ai.google/research/pubs/pub43438), combined with best-of-breed ideas and practices from the community.

## Going back in time
Let's take a look at why Kubernetes is so useful by going back in time.



**Traditional deployment era:**
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

**Virtualized deployment era:** As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

**Container deployment era:** Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers are becoming popular because they have many benefits. Some of the container benefits are listed below:

* Agile application creation and deployment: Increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: Provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
* Dev and Ops separation of concerns: Create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
* Loosely coupled, distributed, elastic, liberated micro-services: Applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
* Resource isolation: Predictable application performance.
* Resource utilization: High efficiency and density.

## Why do I need Kubernetes and what can it do

Kubernetes has a number of features. It can be thought of as:
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?

- a container platform
- a microservices platform
- a portable cloud platform
and a lot more.
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides a **container-centric** management environment. It
orchestrates computing, networking, and storage infrastructure on
behalf of user workloads. This provides much of the simplicity of
Platform as a Service (PaaS) with the flexibility of Infrastructure as
a Service (IaaS), and enables portability across infrastructure
providers.
Kubernetes provides you with:

## How Kubernetes is a platform

Even though Kubernetes provides a lot of functionality, there are
always new scenarios that would benefit from new
features. Application-specific workflows can be streamlined to
accelerate developer velocity. Ad hoc orchestration that is acceptable
initially often requires robust automation at scale. This is why
Kubernetes was also designed to serve as a platform for building an
ecosystem of components and tools to make it easier to deploy, scale,
and manage applications.

[Labels](/docs/concepts/overview/working-with-objects/labels/) empower
users to organize their resources however they
please. [Annotations](/docs/concepts/overview/working-with-objects/annotations/)
enable users to decorate resources with custom information to
facilitate their workflows and provide an easy way for management
tools to checkpoint state.

Additionally, the [Kubernetes control
plane](/docs/concepts/overview/components/) is built upon the same
[APIs](/docs/reference/using-api/api-overview/) that are available to developers
and users. Users can write their own controllers, such as
[schedulers](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md),
with [their own
APIs](/docs/concepts/api-extension/custom-resources/)
that can be targeted by a general-purpose [command-line
tool](/docs/user-guide/kubectl-overview/).

This
[design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)
has enabled a number of other systems to build atop Kubernetes.
* **Service discovery and load balancing**
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
* **Storage orchestration**
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
* **Automated rollouts and rollbacks**
You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
* **Automatic bin packing**
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers.
* **Self-healing**
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
* **Secret and configuration management**
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
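
To ground the list above, here is a minimal Deployment sketch (the name, image, and resource figures are illustrative) that exercises the declarative desired-state model, resource requests for bin packing, and self-healing replica management:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment          # illustrative name
spec:
  replicas: 3                     # Kubernetes keeps 3 Pods running (self-healing)
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.15         # illustrative image
        resources:
          requests:
            cpu: 100m             # informs bin-packing decisions
            memory: 128Mi
```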

## What Kubernetes is not

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a
Service) system. Since Kubernetes operates at the container level
rather than at the hardware level, it provides some generally
applicable features common to PaaS offerings, such as deployment,
scaling, load balancing, logging, and monitoring. However, Kubernetes
is not monolithic, and these default solutions are optional and
pluggable. Kubernetes provides the building blocks for building developer
platforms, but preserves user choice and flexibility where it is
important.
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.

Kubernetes:

* Does not limit the types of applications supported. Kubernetes aims
  to support an extremely diverse variety of workloads, including
  stateless, stateful, and data-processing workloads. If an
  application can run in a container, it should run great on
  Kubernetes.
* Does not deploy source code and does not build your
  application. Continuous Integration, Delivery, and Deployment
  (CI/CD) workflows are determined by organization cultures and preferences
  as well as technical requirements.
* Does not provide application-level services, such as middleware
  (e.g., message buses), data-processing frameworks (for example,
  Spark), databases (e.g., mysql), caches, nor cluster storage systems (e.g.,
  Ceph) as built-in services. Such components can run on Kubernetes, and/or
  can be accessed by applications running on Kubernetes through portable
  mechanisms, such as the Open Service Broker.
* Does not dictate logging, monitoring, or alerting solutions. It provides
  some integrations as proof of concept, and mechanisms to collect and
  export metrics.
* Does not provide nor mandate a configuration language/system (e.g.,
  [jsonnet](https://github.com/google/jsonnet)). It provides a declarative
  API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration,
  maintenance, management, or self-healing systems.

Additionally, Kubernetes is not a mere *orchestration system*. In
fact, it eliminates the need for orchestration. The technical
definition of *orchestration* is execution of a defined workflow:
first do A, then B, then C. In contrast, Kubernetes is comprised of a
set of independent, composable control processes that continuously
drive the current state towards the provided desired state. It
shouldn't matter how you get from A to C. Centralized control is also
not required. This results in a system that is easier to use and more
powerful, robust, resilient, and extensible.

## Why containers

Looking for reasons why you should be using containers?



The *Old Way* to deploy applications was to install the applications
on a host using the operating-system package manager. This had the
disadvantage of entangling the applications' executables,
configuration, libraries, and lifecycles with each other and with the
host OS. One could build immutable virtual-machine images in order to
achieve predictable rollouts and rollbacks, but VMs are heavyweight
and non-portable.

The *New Way* is to deploy containers based on operating-system-level
virtualization rather than hardware virtualization. These containers
are isolated from each other and from the host: they have their own
filesystems, they can't see each others' processes, and their
computational resource usage can be bounded. They are easier to build
than VMs, and because they are decoupled from the underlying
infrastructure and from the host filesystem, they are portable across
clouds and OS distributions.

Because containers are small and fast, one application can be packed
in each container image. This one-to-one application-to-image
relationship unlocks the full benefits of containers. With containers,
immutable container images can be created at build/release time rather
than deployment time, since each application doesn't need to be
composed with the rest of the application stack, nor married to the
production infrastructure environment. Generating container images at
build/release time enables a consistent environment to be carried from
development into production. Similarly, containers are vastly more
transparent than VMs, which facilitates monitoring and
management. This is especially true when the containers' process
lifecycles are managed by the infrastructure rather than hidden by a
process supervisor inside the container. Finally, with a single
application per container, managing the containers becomes tantamount
to managing deployment of the application.

Summary of container benefits:

* **Agile application creation and deployment**:
  Increased ease and efficiency of container image creation compared to VM image use.
* **Continuous development, integration, and deployment**:
  Provides for reliable and frequent container image build and
  deployment with quick and easy rollbacks (due to image
  immutability).
* **Dev and Ops separation of concerns**:
  Create application container images at build/release time rather
  than deployment time, thereby decoupling applications from
  infrastructure.
* **Observability**:
  Not only surfaces OS-level information and metrics, but also application
  health and other signals.
* **Environmental consistency across development, testing, and production**:
  Runs the same on a laptop as it does in the cloud.
* **Cloud and OS distribution portability**:
  Runs on Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine, and anywhere else.
* **Application-centric management**:
  Raises the level of abstraction from running an OS on virtual
  hardware to running an application on an OS using logical resources.
* **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**:
  Applications are broken into smaller, independent pieces and can
  be deployed and managed dynamically -- not a monolithic stack
  running on one big single-purpose machine.
* **Resource isolation**:
  Predictable application performance.
* **Resource utilization**:
  High efficiency and density.

## What Kubernetes and K8s mean

The name **Kubernetes** originates from Greek, meaning *helmsman* or
*pilot*, and is the root of *governor* and
[cybernetic](http://www.etymonline.com/index.php?term=cybernetics). *K8s*
is an abbreviation derived by replacing the 8 letters "ubernete" with
"8".
* Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
* Does not deploy source code and does not build your application. Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, mysql), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

{{% /capture %}}

{{% capture whatsnext %}}
* Take a look at the [Kubernetes Components](/docs/concepts/overview/components/)
* Ready to [Get Started](/docs/setup/)?
* For more details, see the [Kubernetes Documentation](/docs/home/).
{{% /capture %}}
@@ -144,7 +144,7 @@ kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
@@ -160,7 +160,7 @@ kind: Service
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
@@ -26,6 +26,21 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig

By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions.

For example, here's a configuration file for a Pod named `nginx-demo` with a container named `nginx`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
```

## UIDs

{{< glossary_definition term_id="uid" length="all" >}}
@@ -485,8 +485,10 @@ spec:
  minimum value of the first range as the default. Validates against all ranges.
- *MustRunAsNonRoot* - Requires that the pod be submitted with a non-zero
  `runAsUser` or have the `USER` directive defined (using a numeric UID) in the
  image. No default provided. Setting `allowPrivilegeEscalation=false` is strongly
  recommended with this strategy.
  image. Pods which have specified neither `runAsNonRoot` nor `runAsUser` settings
  will be mutated to set `runAsNonRoot=true`, thus requiring a defined non-zero
  numeric `USER` directive in the container. No default provided. Setting
  `allowPrivilegeEscalation=false` is strongly recommended with this strategy.
- *RunAsAny* - No default provided. Allows any `runAsUser` to be specified.
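
To show where this strategy is configured, here is a minimal sketch of a PodSecurityPolicy that uses it (the policy name is hypothetical, and the remaining required `rule` fields are permissive placeholders):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: non-root-example            # hypothetical name
spec:
  privileged: false
  allowPrivilegeEscalation: false   # strongly recommended with MustRunAsNonRoot
  runAsUser:
    rule: MustRunAsNonRoot          # the strategy described above
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```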

**RunAsGroup** - Controls which primary group ID the containers are run with.
@@ -66,10 +66,8 @@ The following resource types are supported:

| Resource Name | Description |
| --------------------- | ----------------------------------------------------------- |
| `cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
| `requests.memory` | Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value. |
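
To make the table concrete, here is a sketch of a ResourceQuota object setting several of these values at once (the object name and namespace are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota     # illustrative name
  namespace: myspace      # illustrative namespace
spec:
  hard:
    requests.cpu: "1"     # sum of CPU requests across pods may not exceed 1 core
    requests.memory: 1Gi  # sum of memory requests may not exceed 1Gi
    limits.cpu: "2"       # sum of CPU limits may not exceed 2 cores
    limits.memory: 2Gi    # sum of memory limits may not exceed 2Gi
```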
@@ -41,11 +41,11 @@ For more up-to-date specification, see
### A records

"Normal" (not headless) Services are assigned a DNS A record for a name of the
form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP
form `my-svc.my-namespace.svc.cluster-domain.example`. This resolves to the cluster IP
of the Service.

"Headless" (without a cluster IP) Services are also assigned a DNS A record for
a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal
a name of the form `my-svc.my-namespace.svc.cluster-domain.example`. Unlike normal
Services, this resolves to the set of IPs of the pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin
selection from the set.
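
As a quick check that such a record resolves (a sketch: the Service and namespace names are hypothetical, and your actual cluster domain replaces `cluster-domain.example`), you can run a lookup from inside any Pod:

```shell
# Hypothetical names: Service "my-svc" in namespace "my-namespace".
nslookup my-svc.my-namespace.svc.cluster-domain.example
```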
@@ -55,12 +55,12 @@ selection from the set.
SRV Records are created for named ports that are part of normal or [Headless
Services](/docs/concepts/services-networking/service/#headless-services).
For each named port, the SRV record would have the form
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`.
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`.
For a regular service, this resolves to the port number and the domain name:
`my-svc.my-namespace.svc.cluster.local`.
`my-svc.my-namespace.svc.cluster-domain.example`.
For a headless service, this resolves to multiple answers, one for each pod
that is backing the service, and contains the port number and the domain name of the pod
of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`.
of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`.
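
Similarly, an SRV lookup can be sketched like this (assuming a named port `http` over TCP and that `dig` is available in the Pod's image; all names are hypothetical):

```shell
# Hypothetical named port "http" on Service "my-svc" in "my-namespace".
dig -t SRV _http._tcp.my-svc.my-namespace.svc.cluster-domain.example
```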

## Pods
@@ -76,7 +76,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to
The Pod spec also has an optional `subdomain` field which can be used to specify
its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain`
set to "`bar`", in namespace "`my-namespace`", will have the fully qualified
domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`".
domain name (FQDN) "`foo.bar.my-namespace.svc.cluster-domain.example`".

Example:
@@ -133,7 +133,7 @@ record for the Pod's fully qualified hostname.
For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to
"`default-subdomain`", and a headless Service named "`default-subdomain`" in
the same namespace, the pod will see its own FQDN as
"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an
"`busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`". DNS serves an
A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and
"`busybox2`" can have their distinct A records.
@@ -143,7 +143,7 @@ along with its IP.
{{< note >}}
Because A records are not created for Pod names, `hostname` is required for the Pod's A
record to be created. A Pod with no `hostname` but with `subdomain` will only create the
A record for the headless service (`default-subdomain.my-namespace.svc.cluster.local`),
A record for the headless service (`default-subdomain.my-namespace.svc.cluster-domain.example`),
pointing to the Pod's IP address. Also, the Pod needs to become ready in order to have a
record unless `publishNotReadyAddresses=True` is set on the Service.
{{< /note >}}
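
For context, the headless Service behind that example could be sketched as follows (the name matches the `default-subdomain` example above; the selector and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  clusterIP: None                 # headless: no cluster IP is allocated
  publishNotReadyAddresses: true  # publish records before Pods become ready
  selector:
    name: busybox                 # illustrative selector
  ports:
  - name: foo                     # illustrative named port
    port: 1234
```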
@@ -234,7 +234,7 @@ in its `/etc/resolv.conf` file:

```
nameserver 1.2.3.4
search ns1.svc.cluster.local my.dns.search.suffix
search ns1.svc.cluster-domain.example my.dns.search.suffix
options ndots:2 edns0
```
@@ -246,7 +246,7 @@ kubectl exec -it dns-example -- cat /etc/resolv.conf
The output is similar to this:
```shell
nameserver fd00:79:30::a
search default.svc.cluster.local svc.cluster.local cluster.local
search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example
options ndots:5
```
@@ -55,7 +55,7 @@ within a cluster. When you create an ingress, you should annotate each ingress w
[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
to indicate which ingress controller should be used if more than one exists within your cluster.

If you do not define a class, your cloud provider may use a default ingress provider.
If you do not define a class, your cloud provider may use a default ingress controller.

Ideally, all ingress controllers should fulfill this specification, but the various ingress
controllers operate slightly differently.
@@ -230,12 +230,13 @@ allows you to still view the logs of completed pods to check for errors, warning
The job object also remains after it is completed so that you can view its status. It is up to the user to delete
old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.

By default, a Job will run uninterrupted unless a Pod fails, at which point the Job defers to the
`.spec.backoffLimit` described above. Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated.

Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, all of its Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.
Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.

Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.
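
To make the interaction of these two fields concrete, here is a sketch of a Job that sets both (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout        # illustrative name
spec:
  backoffLimit: 5              # retry failed Pods up to 5 times
  activeDeadlineSeconds: 100   # terminate the whole Job after 100 seconds,
                               # even if backoffLimit has not been reached
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```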
@@ -2,7 +2,7 @@
title: Minikube
id: minikube
date: 2018-04-12
full_link: /docs/getting-started-guides/minikube/
full_link: /docs/setup/learning-environment/minikube/
short_description: >
  A tool for running Kubernetes locally.

@@ -16,4 +16,5 @@ tags:
<!--more-->

Minikube runs a single-node cluster inside a VM on your computer.

You can use Minikube to
[try Kubernetes in a learning environment](/docs/setup/learning-environment/).
@@ -33,7 +33,9 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur

- You think you discovered a potential security vulnerability in Kubernetes
- You are unsure how a vulnerability affects Kubernetes
- You think you discovered a vulnerability in another project that Kubernetes depends on (e.g. docker, rkt, etcd)
- You think you discovered a vulnerability in another project that Kubernetes depends on
  - For projects with their own vulnerability reporting and disclosure process, please report it directly there

### When Should I NOT Report a Vulnerability?
@@ -51,5 +53,5 @@ As the security issue moves from triage, to identified fix, to release planning

## Public Disclosure Timing

A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. As a basic default, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date.
A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date.
{{% /capture %}}
@@ -182,6 +182,9 @@ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name
# Also uses "jq"
for item in $( kubectl get pod --output=name); do printf "Labels for %s\n" "$item" | grep --color -E '[^/]+$' && kubectl get "$item" --output=json | jq -r -S '.metadata.labels | to_entries | .[] | " \(.key)=\(.value)"' 2>/dev/null; printf "\n"; done

# Or this command can be used as well to get all the labels associated with pods
kubectl get pods --show-labels

# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
@@ -37,19 +37,15 @@ For `kubectl run` to satisfy infrastructure as code:

You can create the following resources using `kubectl run` with the `--generator` flag:

| Resource                        | kubectl command                                    |
|---------------------------------|----------------------------------------------------|
| Pod                             | `kubectl run --generator=run-pod/v1`               |
| Replication controller          | `kubectl run --generator=run/v1`                   |
| Deployment                      | `kubectl run --generator=extensions/v1beta1`       |
| -for an endpoint (default)      | `kubectl run --generator=deployment/v1beta1`       |
| Deployment                      | `kubectl run --generator=apps/v1beta1`             |
| -for an endpoint (recommended)  | `kubectl run --generator=deployment/apps.v1beta1`  |
| Job                             | `kubectl run --generator=job/v1`                   |
| CronJob                         | `kubectl run --generator=batch/v1beta1`            |
| -for an endpoint (default)      | `kubectl run --generator=cronjob/v1beta1`          |
| CronJob                         | `kubectl run --generator=batch/v2alpha1`           |
| -for an endpoint (deprecated)   | `kubectl run --generator=cronjob/v2alpha1`         |
| Resource                        | api group          | kubectl command                                    |
|---------------------------------|--------------------|----------------------------------------------------|
| Pod                             | v1                 | `kubectl run --generator=run-pod/v1`               |
| Replication controller          | v1                 | `kubectl run --generator=run/v1`                   |
| Deployment (deprecated)         | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1`       |
| Deployment (deprecated)         | apps/v1beta1       | `kubectl run --generator=deployment/apps.v1beta1`  |
| Job (deprecated)                | batch/v1           | `kubectl run --generator=job/v1`                   |
| CronJob (default)               | batch/v1beta1      | `kubectl run --generator=cronjob/v1beta1`          |
| CronJob (deprecated)            | batch/v2alpha1     | `kubectl run --generator=cronjob/v2alpha1`         |
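
For example, using the non-deprecated Pod generator from the table above to run a single Pod (the name and image here are illustrative):

```shell
kubectl run nginx --generator=run-pod/v1 --image=nginx
```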

If you do not specify a generator flag, other flags prompt you to use a specific generator. The following table lists the flags that force you to use specific generators, depending on the version of the cluster:
@@ -183,7 +183,7 @@ add-apt-repository ppa:projectatomic/ppa
apt-get update

# Install CRI-O
apt-get install cri-o-1.11
apt-get install cri-o-1.13

{{< /tab >}}
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
@@ -178,6 +178,8 @@ The following file is an Ingress resource that sends traffic to your Service via

1. Add the following line to the bottom of the `/etc/hosts` file.

   {{< note >}}If you are running Minikube locally, use `minikube ip` to get the external IP. The IP address displayed within the ingress list will be the internal IP.{{< /note >}}

   ```
   172.17.0.15 hello-world.info
   ```
@@ -85,7 +85,7 @@ If your cluster runs short on resources you can easily add more machines to it i
If you're using GCE or Google Kubernetes Engine, you do this by resizing the Instance Group that manages your Nodes, either by modifying the number of instances on the `Compute > Compute Engine > Instance groups > your group > Edit group` [Google Cloud Console page](https://console.developers.google.com) or by using the gcloud CLI:

```shell
gcloud compute instance-groups managed resize kubernetes-minion-group --size=42 --zone=$ZONE
gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
```

The Instance Group will take care of putting the appropriate image on the new machines and starting them, while the Kubelet will register its Node with the API server to make it available for scheduling. If you scale the instance group down, the system will randomly choose Nodes to kill.
@@ -316,8 +316,9 @@ without compromising the minimum required capacity for running your workloads.
{{< tabs name="k8s_kubelet_and_kubectl" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
    # replace x in 1.14.x-00 with the latest patch version
    apt-get update
    apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00
    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
    apt-mark hold kubelet kubectl
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
    # replace x in 1.14.x-0 with the latest patch version
@@ -89,13 +89,18 @@ kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```

Then run:

```shell
```
kubectl create -f ./my-namespace.yaml
```

2. Alternatively, you can create a namespace using the command below:

```
kubectl create namespace <insert-namespace-name-here>
```

Note that the name of your namespace must be a DNS compatible label.

There's an optional field `finalizers`, which allows observables to purge resources whenever the namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will be created but will get stuck in the `Terminating` state if the user tries to delete it.
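
As a sketch of that optional field (the `kubernetes` finalizer shown is the built-in one; the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace    # illustrative name
spec:
  finalizers:
  - kubernetes          # built-in finalizer; a nonexistent value would leave the
                        # namespace stuck in Terminating on deletion
```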
@@ -582,6 +582,10 @@ When the pod runs, the command `cat /etc/config/keys` produces the output below:
very
```

{{< caution >}}
Like before, all previous files in the `/etc/config/` directory will be deleted.
{{< /caution >}}

### Project keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file
@@ -53,7 +53,7 @@ the container starts.

1. Display detailed information about the Pod:

       kubectl get pod --output=yaml
       kubectl get pod termination-demo --output=yaml

   The output includes the "Sleep expired" message:

@@ -274,6 +274,14 @@ APIs, cluster administrators must ensure that:

* The `--horizontal-pod-autoscaler-use-rest-clients` flag is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated.

For more information on these different metrics paths and how they differ please see the relevant design proposals for
[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md),
[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md)
and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md).

For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).

{{% /capture %}}

{{% capture whatsnext %}}
@@ -363,7 +363,7 @@ The Node name should show up in the last column:

```
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-2   2/2       Running   0          15m       10.244.5.27   kubernetes-minion-group-9l2t
mysql-2   2/2       Running   0          15m       10.244.5.27   kubernetes-node-9l2t
```

Then drain the Node by running the following command, which cordons it so
@@ -387,14 +387,14 @@ It should look something like this:

```
NAME      READY     STATUS            RESTARTS   AGE       IP            NODE
mysql-2   2/2       Terminating       0          15m       10.244.1.56   kubernetes-minion-group-9l2t
mysql-2   2/2       Terminating       0          15m       10.244.1.56   kubernetes-node-9l2t
[...]
mysql-2   0/2       Pending           0          0s        <none>        kubernetes-minion-group-fjlm
mysql-2   0/2       Init:0/2          0          0s        <none>        kubernetes-minion-group-fjlm
mysql-2   0/2       Init:1/2          0          20s       10.244.5.32   kubernetes-minion-group-fjlm
mysql-2   0/2       PodInitializing   0          21s       10.244.5.32   kubernetes-minion-group-fjlm
mysql-2   1/2       Running           0          22s       10.244.5.32   kubernetes-minion-group-fjlm
mysql-2   2/2       Running           0          30s       10.244.5.32   kubernetes-minion-group-fjlm
mysql-2   0/2       Pending           0          0s        <none>        kubernetes-node-fjlm
mysql-2   0/2       Init:0/2          0          0s        <none>        kubernetes-node-fjlm
mysql-2   0/2       Init:1/2          0          20s       10.244.5.32   kubernetes-node-fjlm
mysql-2   0/2       PodInitializing   0          21s       10.244.5.32   kubernetes-node-fjlm
mysql-2   1/2       Running           0          22s       10.244.5.32   kubernetes-node-fjlm
mysql-2   2/2       Running           0          30s       10.244.5.32   kubernetes-node-fjlm
```

And again, you should see server ID `102` disappear from the
@@ -159,7 +159,7 @@ To verify that the profile was applied, you can look for the AppArmor security o
kubectl get events | grep Created
```
```
22s   22s   1   hello-apparmor   Pod   spec.containers{hello}   Normal   Created   {kubelet e2e-test-stclair-minion-group-31nt}   Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
22s   22s   1   hello-apparmor   Pod   spec.containers{hello}   Normal   Created   {kubelet e2e-test-stclair-node-pool-31nt}   Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
```

You can also verify directly that the container's root process is running with the correct profile by checking its proc attr:
@@ -315,8 +315,8 @@ Tolerations:    <none>
Events:
  FirstSeen   LastSeen   Count   From   SubobjectPath   Type   Reason   Message
  ---------   --------   -----   ----   -------------   ----   ------   -------
  23s   23s   1   {default-scheduler }   Normal   Scheduled   Successfully assigned hello-apparmor-2 to e2e-test-stclair-minion-group-t1f5
  23s   23s   1   {kubelet e2e-test-stclair-minion-group-t1f5}   Warning   AppArmor   Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
  23s   23s   1   {default-scheduler }   Normal   Scheduled   Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
  23s   23s   1   {kubelet e2e-test-stclair-node-pool-t1f5}   Warning   AppArmor   Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
```

Note the pod status is Failed, with a helpful error message: `Pod Cannot enforce AppArmor: profile
@@ -67,13 +67,13 @@ kubectl get nodes
The output is similar to this:
```
NAME                           STATUS    ROLES     AGE       VERSION
kubernetes-minion-group-6jst   Ready     <none>    2h        v1.13.0
kubernetes-minion-group-cx31   Ready     <none>    2h        v1.13.0
kubernetes-minion-group-jj1t   Ready     <none>    2h        v1.13.0
kubernetes-node-6jst           Ready     <none>    2h        v1.13.0
kubernetes-node-cx31           Ready     <none>    2h        v1.13.0
kubernetes-node-jj1t           Ready     <none>    2h        v1.13.0
```
Get the proxy mode on one of the nodes
```console
kubernetes-minion-group-6jst $ curl localhost:10249/proxyMode
kubernetes-node-6jst $ curl localhost:10249/proxyMode
```
The output is:
```
@@ -326,18 +326,18 @@ kubectl get pod -o wide -l run=source-ip-app
The output is similar to this:
```
NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
source-ip-app-826191075-qehz4   1/1       Running   0          20h       10.180.1.136   kubernetes-minion-group-6jst
source-ip-app-826191075-qehz4   1/1       Running   0          20h       10.180.1.136   kubernetes-node-6jst
```
Curl the `/healthz` endpoint on different nodes.
```console
kubernetes-minion-group-6jst $ curl localhost:32122/healthz
kubernetes-node-6jst $ curl localhost:32122/healthz
```
The output is similar to this:
```
1 Service Endpoints found
```
```console
kubernetes-minion-group-jj1t $ curl localhost:32122/healthz
kubernetes-node-jj1t $ curl localhost:32122/healthz
```
The output is similar to this:
```
@@ -832,9 +832,9 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo "";
All of the Pods in the `zk` `StatefulSet` are deployed on different nodes.

```shell
kubernetes-minion-group-cxpk
kubernetes-minion-group-a5aq
kubernetes-minion-group-2g2d
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d
```

This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` specified.
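
The rule in question can be sketched as follows (assuming, as in the ZooKeeper tutorial, that the Pods carry an `app: zk` label):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: "app"
          operator: In
          values:
          - zk
      topologyKey: "kubernetes.io/hostname"   # at most one zk Pod per node
```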
@@ -906,9 +906,9 @@ In another terminal, use this command to get the nodes that the Pods are current
```shell
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done

kubernetes-minion-group-pb41
kubernetes-minion-group-ixsl
kubernetes-minion-group-i4c4
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4
```

Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) to cordon and
@@ -916,11 +916,11 @@ drain the node on which the `zk-0` Pod is scheduled.

```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
node "kubernetes-minion-group-pb41" cordoned
node "kubernetes-node-pb41" cordoned

WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-pb41, kube-proxy-kubernetes-minion-group-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
pod "zk-0" deleted
node "kubernetes-minion-group-pb41" drained
node "kubernetes-node-pb41" drained
```

As there are four nodes in your cluster, `kubectl drain` succeeds and the
@@ -947,11 +947,11 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.

```shell
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-minion-group-ixsl" cordoned
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned

WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-ixsl, kube-proxy-kubernetes-minion-group-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-minion-group-ixsl" drained
node "kubernetes-node-ixsl" drained
```

The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
@@ -986,10 +986,10 @@ Continue to watch the Pods of the stateful set, and drain the node on which

```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
node "kubernetes-minion-group-i4c4" cordoned
node "kubernetes-node-i4c4" cordoned

WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4
There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruption budget.
pod/zk-2
```
@@ -1025,9 +1025,9 @@ numChildren = 0
Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) to uncordon the first node.

```shell
kubectl uncordon kubernetes-minion-group-pb41
kubectl uncordon kubernetes-node-pb41

node "kubernetes-minion-group-pb41" uncordoned
node "kubernetes-node-pb41" uncordoned
```

`zk-1` is rescheduled on this node. Wait until `zk-1` is Running and Ready.
@@ -1070,11 +1070,11 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae
The output:

```
node "kubernetes-minion-group-i4c4" already cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-minion-group-i4c4, kube-proxy-kubernetes-minion-group-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
node "kubernetes-node-i4c4" already cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
pod "heapster-v1.2.0-2604621511-wht1r" deleted
pod "zk-2" deleted
node "kubernetes-minion-group-i4c4" drained
node "kubernetes-node-i4c4" drained
```

This time `kubectl drain` succeeds.
@@ -1082,11 +1082,11 @@ This time `kubectl drain` succeeds.
Uncordon the second node to allow `zk-2` to be rescheduled.

```shell
kubectl uncordon kubernetes-minion-group-ixsl
kubectl uncordon kubernetes-node-ixsl
```

```
node "kubernetes-minion-group-ixsl" uncordoned
node "kubernetes-node-ixsl" uncordoned
```

You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure that your services remain available during maintenance. If drain is used to cordon nodes and evict pods prior to taking the node offline for maintenance, services that express a disruption budget will have that budget respected. You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.
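
For reference, a disruption budget along the lines of the one used for the `zk` `StatefulSet` can be sketched like this (assuming `app: zk` labels; the name is illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb            # illustrative name
spec:
  maxUnavailable: 1       # allow at most one zk Pod to be down voluntarily
  selector:
    matchLabels:
      app: zk
```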
@@ -42,7 +42,12 @@ external IP address.

1. Run a Hello World application in your cluster:

       kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
   {{< codenew file="service/load-balancer-example.yaml" >}}

   ```shell
   kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
   ```

   The preceding command creates a
   [Deployment](/docs/concepts/workloads/controllers/deployment/)
@@ -86,9 +91,9 @@ external IP address.

       Name:                     my-service
       Namespace:                default
       Labels:                   run=load-balancer-example
       Labels:                   app.kubernetes.io/name=load-balancer-example
       Annotations:              <none>
       Selector:                 run=load-balancer-example
       Selector:                 app.kubernetes.io/name=load-balancer-example
       Type:                     LoadBalancer
       IP:                       10.3.245.137
       LoadBalancer Ingress:     104.198.205.71
@@ -12,3 +12,5 @@ spec:
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  hostNetwork: true
  dnsPolicy: Default
@@ -0,0 +1,21 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: hello-world
        ports:
        - containerPort: 8080
@@ -12,7 +12,7 @@ spec:
    nameservers:
      - 1.2.3.4
    searches:
      - ns1.svc.cluster.local
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
@ -0,0 +1,204 @@
---
title: Generating reference pages for Kubernetes components and tools
content_template: templates/task
---

{{% capture overview %}}

This page shows how to use the `update-imported-docs` tool to generate reference documentation for the tools and components of the [Kubernetes](https://github.com/kubernetes/kubernetes) and [Federation](https://github.com/kubernetes/federation) repositories.

{{% /capture %}}
{{% capture prerequisites %}}

* You need a machine that is running Linux or macOS.

* You must have the following software installed:

  * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
  * [Golang](https://golang.org/doc/install) version 1.9 or later
  * [make](https://www.gnu.org/software/make/)
  * [gcc compiler/linker](https://gcc.gnu.org/)

* Your `$GOPATH` environment variable must be set (see the sketch after this list).

* You need to know how to create a pull request to a GitHub repository.
  Typically, this involves creating a fork of the repository.
  For more information, see [Create a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
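
As referenced above, a minimal sketch for setting `$GOPATH`; the `$HOME/go` workspace path is an illustrative assumption, not a requirement:

```shell
# Point GOPATH at a Go workspace directory and create the src tree.
export GOPATH=$HOME/go
mkdir -p $GOPATH/src
```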
{{% /capture %}}
{{% capture steps %}}

## Getting two repositories

If you don't already have the `kubernetes/website` repository, get it now:

```shell
mkdir $GOPATH/src
cd $GOPATH/src
go get github.com/kubernetes/website
```

Determine the base directory of your clone of the [kubernetes/website](https://github.com/kubernetes/website) repository.
For example, if you followed the preceding step to get the repository, your base directory is `$GOPATH/src/github.com/kubernetes/website`.
The remaining steps refer to your base directory as `<web-base>`.

If you plan to make changes to the reference docs, and if you don't already have the `kubernetes/kubernetes` repository, get it now:

```shell
mkdir $GOPATH/src
cd $GOPATH/src
go get github.com/kubernetes/kubernetes
```

Determine the base directory of your clone of the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository.
For example, if you followed the preceding step to get the repository, your base directory is `$GOPATH/src/github.com/kubernetes/kubernetes`.
The remaining steps refer to your base directory as `<k8s-base>`.

{{< note >}}
If you only need to generate, but not change, the reference docs, you don't need to get the `kubernetes/kubernetes` repository manually.
When you run the `update-imported-docs` command, it automatically clones the `kubernetes/kubernetes` repository.
{{< /note >}}

## Editing the Kubernetes source code

The reference documentation for Kubernetes components and tools is automatically generated from the Kubernetes source code.
If you want to change the reference documentation, start by changing one or more comments in the Kubernetes source code.
Make the change in your local `kubernetes/kubernetes` repository, and then submit a pull request to the master branch of [github.com/kubernetes/kubernetes](https://github.com/kubernetes/kubernetes).

[PR 56942](https://github.com/kubernetes/kubernetes/pull/56942) is an example of a pull request that changes comments in the Kubernetes source code.

Monitor your pull request, and respond to reviewer comments.
Keep monitoring your pull request until it is merged into the master branch of the `kubernetes/kubernetes` repository.

## Cherry-picking your commits into a release branch

Your commits are on the master branch, which is used for development of the next Kubernetes release.
If you want your commits to appear in the documentation for a Kubernetes version that has already been released, you need to propose that your commits be cherry-picked into the release branch.

For example, suppose the master branch is being used to develop Kubernetes 1.10, and you want to move your commits onto the release-1.9 branch.
For instructions on how to do this, see [Propose a Cherry Pick](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Monitor your cherry-pick pull request until it is merged into the release branch.

{{< note >}}
Proposing a cherry pick requires that you have permission to set a label and a milestone on your pull request.
If you don't have those permissions, you will need to work with someone who can set the label and milestone for you.
{{< /note >}}

## Overview of update-imported-docs

The `update-imported-docs` tool is located in the `kubernetes/website/update-imported-docs/` directory.
The tool performs the following steps:

1. Clones the repositories specified in the configuration file.
   For the purpose of generating reference docs, the repositories cloned by default are `kubernetes-incubator/reference-docs` and `kubernetes/federation`.
1. Runs commands in the cloned repositories to prepare the docs generator, and then generates the Markdown files.
1. Copies the generated Markdown files into a local clone of the `kubernetes/website` repository, in the directories specified in the configuration file.

When the Markdown files are in your local clone of the `kubernetes/website` repository, you can submit them in a [pull request](/docs/home/contribute/create-pull-request/) to `kubernetes/website`.

## Customizing the configuration file

Open `<web-base>/update-imported-docs/reference.yml` for editing.
Do not change the content of the `generate-command` entry unless you understand what it does and need to change the specified release branch.

```yaml
repos:
- name: reference-docs
  remote: https://github.com/kubernetes-incubator/reference-docs.git
  # This and the generate command below need to change when the
  # reference-docs branches are set up properly
  branch: master
  generate-command: |
    cd $GOPATH
    git clone https://github.com/kubernetes/kubernetes.git src/k8s.io/kubernetes
    cd src/k8s.io/kubernetes
    git checkout release-1.11
    make generated_files
    cp -L -R vendor $GOPATH/src
    rm -r vendor
    cd $GOPATH
    go get -v github.com/kubernetes-incubator/reference-docs/gen-compdocs
    cd src/github.com/kubernetes-incubator/reference-docs/
    make comp
```

In reference.yml, the `files` attribute is a list of objects that have `src` and `dst` attributes.
The `src` attribute specifies the location of a generated Markdown file, and the `dst` attribute specifies where to copy this file in the local `kubernetes/website` repository.
For example:

```yaml
repos:
- name: reference-docs
  remote: https://github.com/kubernetes-incubator/reference-docs.git
  files:
  - src: gen-compdocs/build/kube-apiserver.md
    dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
  ...
```

Note that when there are many files to be copied from the same source directory to the same destination directory, you can use wildcards in the value given to `src`, and you can just provide the directory name as the value for `dst`.
For example:

```yaml
files:
- src: gen-compdocs/build/kubeadm*.md
  dst: content/en/docs/reference/setup-tools/kubeadm/generated/
```

## Running the update-imported-docs tool

After you have reviewed and/or customized the `reference.yml` file, you can run the `update-imported-docs` tool:

```shell
cd <web-base>/update-imported-docs
./update-imported-docs reference.yml
```

## Adding and committing changes in kubernetes/website

List the files that were generated and copied into the `kubernetes/website` repository:

```shell
cd <web-base>
git status
```

The output shows the new and modified files.
For example, the output might look like this:

```
...

    modified:   content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md
    modified:   content/en/docs/reference/command-line-tools-reference/federation-apiserver.md
    modified:   content/en/docs/reference/command-line-tools-reference/federation-controller-manager.md
    modified:   content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
    modified:   content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md
    modified:   content/en/docs/reference/command-line-tools-reference/kube-proxy.md
    modified:   content/en/docs/reference/command-line-tools-reference/kube-scheduler.md
...
```

Run `git add` and `git commit` to commit these files.
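
For example, a minimal sequence; the commit message below is illustrative, not prescribed:

```shell
cd <web-base>
git add .
# Use whatever commit message fits your change.
git commit -m "Update imported component reference docs"
```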
## Creating a pull request

Create a pull request to the `kubernetes/website` repository.
Monitor your pull request, and respond to reviewer comments as needed, until the pull request is accepted and merged.

A few minutes after your pull request is merged, your updated reference topics will be visible in the [published documentation](/docs/home/).

{{% /capture %}}

{{% capture whatsnext %}}

* [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/)
* [Generating Reference Documentation for the Kubernetes API](/fr/docs/contribute/generate-ref-docs/kubernetes-api/)
* [Generating Reference Documentation for the Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/)

{{% /capture %}}
@ -28,7 +28,7 @@
<a class="widget-link" href="http://slack.k8s.io"><i class="fab fa-slack fab-icon"> </i> #kubernetes-users </a>
<a class="widget-link" href="https://stackoverflow.com/questions/tagged/kubernetes"><i class="fab fa-stack-overflow fab-icon"></i> Stack Overflow</a>
<a class="widget-link" href="https://discuss.kubernetes.io"><i class="fab fa-discourse fab-icon"></i>Forum</a>
<a class="widget-link" href="https://kubernetes.io/docs/setup/pick-right-solution/"><i class="fa fa-download fab-icon"></i> Download Kubernetes</a>
<a class="widget-link" href="https://kubernetes.io/docs/setup"><i class="fa fa-download fab-icon"></i> Download Kubernetes</a>
</div>
{{ partialCached "blog/archive.html" . }}
</div>
File diff suppressed because one or more lines are too long